Hateful Post Interstitial

An interstitial that creates a moment of reflection before users post content that is potentially harmful or violates the platform's safety rules.

How does this mitigate hate?

This pattern adds friction to the posting process, prompting users to rethink their decision to post and reducing the overall volume of hateful content created on the platform.


When to use it?

Platforms where users share hateful content that breaks the platform's rules should use this pattern to slow the production of such posts.

Adopting this pattern before the issue arises could also help slow the spread of hateful content and blunt the influence of accounts that share it.

How does it work?

Users should be told when their draft includes content that violates the rules of the platform.

The interface should then ask whether they are sure they want to continue, offering the option either to go back and edit out the hateful content, or to post anyway and take responsibility for the consequences.
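As a minimal sketch, the flow above might look like the following. The `contains_hateful_language` check is a placeholder stand-in for a real moderation classifier, and the function and return values are illustrative, not any platform's actual API:

```python
# Minimal sketch of an interstitial posting flow. The
# contains_hateful_language() check below is a toy placeholder;
# real platforms use far more sophisticated moderation models.

def contains_hateful_language(text: str) -> bool:
    # Placeholder keyword check standing in for a real classifier.
    blocklist = {"slur1", "slur2"}  # illustrative tokens only
    return any(word in blocklist for word in text.lower().split())

def submit_post(text: str, confirm_anyway: bool = False) -> str:
    """Return the action the interface should take for this draft."""
    if contains_hateful_language(text) and not confirm_anyway:
        # Pause posting and show the interstitial: the user may go
        # back to edit, or confirm and take responsibility.
        return "show_interstitial"
    return "publish"
```

A clean draft publishes directly; a flagged draft triggers the interstitial unless the user explicitly confirms, which models the "edit or continue" choice described above.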

Advantages

An interstitial before posting hateful content increases friction in the posting process, decreasing the viral spread of hateful content by warning users of the consequences of their post if they continue. The added moment of friction also provides users the opportunity to reflect on the content they are sharing, and reconsider their decision to post.

Tracking users' interactions with this pattern would provide useful data for evaluating its effectiveness, such as whether the user changed a specific word, reworded the entire phrase, or tried to game the system by altering words to dodge detection.
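For instance, a minimal interaction log for this pattern might record which choice each user makes after seeing the interstitial. The event names and data below are illustrative, not any platform's real telemetry schema:

```python
from collections import Counter

# Hypothetical interaction log for the interstitial; action names
# are illustrative, not a real telemetry schema.
events = [
    {"user": "a", "action": "edited_word"},    # changed a flagged word
    {"user": "b", "action": "reworded_post"},  # rewrote the whole phrase
    {"user": "c", "action": "posted_anyway"},  # confirmed and posted
    {"user": "d", "action": "edited_word"},
]

# Aggregate outcomes to estimate how often the prompt changes behavior.
outcomes = Counter(e["action"] for e in events)
revised = outcomes["edited_word"] + outcomes["reworded_post"]
revision_rate = revised / len(events)
```

The revision rate (here 3 of 4 users, or 0.75) is one simple proxy for whether the moment of friction actually prompts reconsideration.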

Disadvantages

Users intent on posting hateful content may try to game the system, altering the spelling of flagged words to evade detection.
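One common mitigation is to normalize text before matching, so that simple character substitutions (e.g. "h4te" for "hate") are still caught. The substitution table below is a toy example, not an exhaustive defense:

```python
# Toy normalization pass to resist simple "leetspeak" evasion.
# The substitution map is illustrative, not exhaustive.
SUBSTITUTIONS = str.maketrans({"4": "a", "3": "e", "1": "i",
                               "0": "o", "$": "s"})

def normalize(text: str) -> str:
    """Lowercase the text and undo common character substitutions."""
    return text.lower().translate(SUBSTITUTIONS)
```

After normalization, `"H4te"` becomes `"hate"`, so the same blocklist or classifier check applies; determined users will still find workarounds, which is why this pattern complements rather than replaces moderation.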

 

Examples

TikTok Reconsider post/comment interstitial. (Screenshot from TikTok)

TikTok reconsider comment interstitial

OpenWeb “nudge” demo screenshot

Nextdoor offers an interstitial calling out hateful or mean language and prompts users to be kinder.
(Screenshot from Nextdoor, September 2021)

References

Fan, Rui, Jichang Zhao, Yan Chen, and Ke Xu. “Anger Is More Influential than Joy: Sentiment Correlation in Weibo.” Edited by Rodrigo Huerta-Quintanilla. PLoS ONE 9, no. 10 (October 15, 2014): e110184. https://doi.org/10.1371/journal.pone.0110184.

Goldberg, Ido, Guy Simon, and Kusuma Thimmaiah. “Nudge Theory Examples in Online Discussions.” OpenWeb, September 20, 2020. https://www.openweb.com/blog/openweb-improves-community-health-with-real-time-feedback-powered-by-jigsaws-perspective-api.

Katsaros, Matthew, Kathy Yang, Lauren Fratamico, and Twitter Inc. “Reconsidering Tweets: Intervening during Tweet Creation Decreases Offensive Content,” 2021. https://arxiv.org/abs/2112.00773.

Porter, Jon. “Nextdoor’s New Kindness Reminder Wants to Stop Neighbors from Being so Mean.” The Verge, September 18, 2019. https://www.theverge.com/2019/9/18/20871894/nextdoors-kindness-reminder-mean-comments-community-guidelines.

“Preliminary Flagging before Posting – Prosocial Design Network.” www.prosocialdesign.org. Accessed September 17, 2021. https://www.prosocialdesign.org/library/preliminary-flagging-before-posting.

Vilk, Viktorya, Elodie Vialle, and Matt Bailey. “No Excuse for Abuse: What Social Media Companies Can Do Now to Combat Online Harassment and Empower Users.” Edited by Summer Lopez and Suzanne Nossel. PEN AMERICA. PEN America, March 31, 2021. https://pen.org/report/no-excuse-for-abuse/.