Hateful Comment Interstitial

An interstitial that creates a moment of reflection before users post reactionary comments that are potentially harmful or that violate safety protocols.

How does this mitigate hate?

Creates a moment of reflection during the process of posting a comment. Allows the user to review potentially hateful or flagged words, and gives platforms the opportunity to reintroduce safety policies, helping to ensure equitable and safer digital environments for all.


When to use it?

When the system or a keyword algorithm detects flagged terms, keywords, or hashtags. Implement on social platforms within the sharing and comment-posting user flow.

Platforms should use nudges to discourage users’ attempts to engage in abusive behavior. One way to do this is to use automation to proactively identify content as potentially abusive and nudge users with a warning that their content may violate platform policies and encourage them to revise it before they post. (1)
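As a rough illustration only, the sketch below shows one way a keyword check could scan a draft comment for flagged terms or hashtags before posting. The term lists, the checkDraftComment name, and the matching rules are assumptions made for this example, not any platform's actual detection algorithm.

```typescript
// Illustrative sketch only: the term lists and matching rules below are
// placeholders, not any platform's real detection logic.
const FLAGGED_TERMS: string[] = ["exampleSlur", "exampleThreat"];   // maintained by the policy/moderation team
const FLAGGED_HASHTAGS: string[] = ["#examplehatefultag"];          // maintained by the policy/moderation team

interface FlagResult {
  flagged: boolean;     // true if the draft should trigger the interstitial
  matches: string[];    // which flagged terms or hashtags were found
}

// Scan a draft comment for flagged terms and hashtags before it is posted.
function checkDraftComment(draft: string): FlagResult {
  const text = draft.toLowerCase();
  const matches: string[] = [];

  for (const term of FLAGGED_TERMS) {
    // Word-boundary match to avoid false positives inside longer, harmless words.
    if (new RegExp(`\\b${term.toLowerCase()}\\b`).test(text)) {
      matches.push(term);
    }
  }

  for (const tag of FLAGGED_HASHTAGS) {
    if (text.includes(tag.toLowerCase())) {
      matches.push(tag);
    }
  }

  return { flagged: matches.length > 0, matches };
}
```

In practice a platform might back this check with a machine-learning classifier rather than a static word list, but the trigger point in the user flow is the same: the draft is evaluated before it is published.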

How does it work?

Provide a visual interstitial in the UI that alerts the user they have entered flagged words, hashtags, or other potentially harmful content in the comment field before allowing the content to be posted.
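As with the detection check above, the following is only a sketch of how the posting flow might pause on a flagged draft. The platform hooks (showInterstitial, submitComment, returnToComposer, openCommunityGuidelines) are hypothetical names invented for this example; the available choices simply mirror the pattern described here: revise the comment, post anyway, or review the platform's policies.

```typescript
// Illustrative sketch only: every platform hook used here is hypothetical and
// declared at the bottom solely so the example is self-contained.

type InterstitialChoice = "edit" | "postAnyway" | "viewPolicy";

async function onPostButtonPressed(draft: string): Promise<void> {
  // Reuse the detection check from the "When to use it?" sketch.
  const result = checkDraftComment(draft);

  if (!result.flagged) {
    await submitComment(draft);                // nothing flagged: post as usual
    return;
  }

  // Pause the flow: show the interstitial before anything is published.
  const choice = await showInterstitial({
    message: "Would you like to reconsider posting this?",
    matches: result.matches,                   // lets the user review the flagged words
  });

  switch (choice) {
    case "edit":
      returnToComposer(draft);                 // moment of reflection: user revises the comment
      break;
    case "postAnyway":
      await submitComment(draft);              // many platforms still allow posting after the warning
      break;
    case "viewPolicy":
      openCommunityGuidelines();               // reintroduce the platform's safety policies
      break;
  }
}

// Hypothetical platform hooks, declared only to keep the sketch self-contained.
declare function checkDraftComment(draft: string): { flagged: boolean; matches: string[] };
declare function submitComment(text: string): Promise<void>;
declare function showInterstitial(opts: { message: string; matches: string[] }): Promise<InterstitialChoice>;
declare function returnToComposer(text: string): void;
declare function openCommunityGuidelines(): void;
```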

Advantages

Interrupts instant reactions and off-the-cuff comments. Flags hateful or problematic speech, keywords, and hashtags. Gives the platform an opportunity to remind users of its policies and safety protocols.

Disadvantages

Interstitials slow users down and can break their train of thought and flow.

Examples


TikTok’s reconsider posting interstitial.
(Screenshot taken July 2021)

YouTube’s interstitial encouraging users to keep comments respectful.
(Screenshot taken August 2021)

References

Fan, Rui, Jichang Zhao, Yan Chen, and Ke Xu. “Anger Is More Influential than Joy: Sentiment Correlation in Weibo.” Edited by Rodrigo Huerta-Quintanilla. PLoS ONE 9, no. 10 (October 15, 2014): e110184. https://doi.org/10.1371/journal.pone.0110184.

Goldberg, Ido, Guy Simon, and Kusuma Thimmaiah. “Nudge Theory Examples in Online Discussions.” OpenWeb, September 20, 2020. https://www.openweb.com/blog/openweb-improves-community-health-with-real-time-feedback-powered-by-jigsaws-perspective-api.

Katsaros, Matthew, Kathy Yang, Lauren Fratamico, and Twitter Inc. “Reconsidering Tweets: Intervening during Tweet Creation Decreases Offensive Content,” 2021. https://arxiv.org/abs/2112.00773.

“Preliminary Flagging before Posting – Prosocial Design Network.” www.prosocialdesign.org. Accessed September 17, 2021. https://www.prosocialdesign.org/library/preliminary-flagging-before-posting.

(1) Vilk, Viktorya, Elodie Vialle, and Matt Bailey. “No Excuse for Abuse: What Social Media Companies Can Do Now to Combat Online Harassment and Empower Users.” Edited by Summer Lopez and Suzanne Nossel. PEN AMERICA. PEN America, March 31, 2021. https://pen.org/report/no-excuse-for-abuse/.

Wadhwa, Tara. “New Tools to Promote Kindness on TikTok.” Newsroom | TikTok, August 16, 2019. https://newsroom.tiktok.com/en-us/new-tools-to-promote-kindness.

Collaboration


Written in collaboration with PEN America.