Hateful Post Reshare Interstitial

Interstitial friction to reduce resharing of known hateful content.

How Does This Mitigate Hate?

An interstitial asks the user whether they really want to share when they attempt to reshare a post containing hateful content. A post's status as hateful could be determined by patterns of resharing by certain users. The nudge adds friction and a moment to pause, in an attempt to break the cycle of fast sharing without thought for the consequences.
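As a minimal sketch of how that determination might feed the reshare flow, the check below assumes a hypothetical PostModeration record carrying user flags and reshare statistics; the names and the threshold are illustrative, not any platform's real API.

```typescript
// Hypothetical shape of a post's moderation metadata (illustrative only).
interface PostModeration {
  flaggedAsHateful: boolean;        // flagged by users or classifiers, not yet acted on
  removed: boolean;                 // post has already been taken down
  resharesByKnownOffenders: number; // reshares by accounts with prior violations
}

// Decide whether a reshare attempt should trigger the interstitial.
// The threshold is a placeholder; a real platform would tune it.
function requiresReshareInterstitial(post: PostModeration): boolean {
  if (post.removed) return false;            // nothing left to reshare
  if (post.flaggedAsHateful) return true;    // explicit flag takes priority
  return post.resharesByKnownOffenders >= 3; // pattern-based signal
}
```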


When to use it?

Platforms that experience the spread of hateful content through the resharing of posts should include this pattern to slow that spread by adding friction to the reshare process.

Introducing this pattern before the problem takes hold could help slow the spread of hateful posts and mitigate the influence of accounts that create hateful content.

How does it work?

When a user attempts to reshare a post that has previously been flagged but has yet to be removed, an interstitial appears before the reshare is allowed, offering a moment of reflection and awareness of the potential harm that resharing the post to their network may cause.
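A sketch of that gating step, reusing the PostModeration type and requiresReshareInterstitial check from the sketch above; showInterstitial, performReshare, and flagPost are hypothetical stand-ins for the platform's own UI and API calls.

```typescript
// Choices the interstitial can return.
type InterstitialChoice = "cancel" | "reshareAnyway" | "flagPost";

// Stubs for platform calls; a real client would wire these to its UI and API.
declare function showInterstitial(postId: string): Promise<InterstitialChoice>;
declare function performReshare(postId: string): Promise<void>;
declare function flagPost(postId: string): Promise<void>;

// Reshare entry point: gate flagged posts behind the interstitial and only
// proceed if the user explicitly chooses to continue.
async function onReshareClicked(post: PostModeration & { id: string }): Promise<void> {
  if (!requiresReshareInterstitial(post)) {
    await performReshare(post.id);
    return;
  }

  const choice = await showInterstitial(post.id);
  switch (choice) {
    case "reshareAnyway":
      await performReshare(post.id); // users may still proceed (see Disadvantages)
      break;
    case "flagPost":
      await flagPost(post.id);       // route the post to review instead
      break;
    case "cancel":
      break;                         // drop the reshare
  }
}
```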

The platform should present the interstitial consistently and restate clear, specific safety and conduct rules to each user, while offering the opportunity to review those rules and to flag the post.
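One way the interstitial content itself could be assembled, under the same assumptions as the sketches above (the field names and copy are illustrative): every interstitial restates the rules, links to them, and offers the same set of choices.

```typescript
// What a single interstitial presents. Field names and copy are illustrative.
interface InterstitialContent {
  title: string;
  body: string;
  conductRulesUrl: string;       // link for reviewing the full safety/conduct rules
  actions: InterstitialChoice[]; // same set of choices on every interstitial
}

// Build a consistent interstitial that restates the rules and lets the user
// review them or flag the post before deciding.
function buildReshareInterstitial(conductRulesUrl: string): InterstitialContent {
  return {
    title: "Are you sure you want to reshare this post?",
    body:
      "This post has been flagged as potentially hateful. Resharing content " +
      "that attacks people based on who they are violates our conduct rules.",
    conductRulesUrl,
    actions: ["cancel", "flagPost", "reshareAnyway"],
  };
}
```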

Advantages

An interstitial shown before the reshare of hateful content adds friction to the reshare process, slowing the viral spread of hateful content and disinformation by warning users of the consequences of their reshare should they continue.

Reiterating platform rules at the moment of posting has been shown to decrease harmful behavior on social platforms and to make the overall experience safer for a wider user base (Katsaros, Yang, and Fratamico 2021).

Disadvantages

Users must still be given the opportunity to disregard the interstitial and continue, which means posts already flagged as containing hateful content can still be reshared.

Judgments about content can be subjective, and some users may game the system by tweaking the content, especially if the account has not been recognized as a repeat offender or the post has not been flagged by multiple users.

References

Fan, Rui, Jichang Zhao, Yan Chen, and Ke Xu. “Anger Is More Influential than Joy: Sentiment Correlation in Weibo.” Edited by Rodrigo Huerta-Quintanilla. PLoS ONE 9, no. 10 (October 15, 2014): e110184. https://doi.org/10.1371/journal.pone.0110184.

Goldberg, Ido, Guy Simon, and Kusuma Thimmaiah. “Nudge Theory Examples in Online Discussions.” OpenWeb, September 20, 2020. https://www.openweb.com/blog/openweb-improves-community-health-with-real-time-feedback-powered-by-jigsaws-perspective-api.

Katsaros, Matthew, Kathy Yang, and Lauren Fratamico. “Reconsidering Tweets: Intervening during Tweet Creation Decreases Offensive Content.” arXiv, 2021. https://arxiv.org/abs/2112.00773.

Vilk, Viktorya, Elodie Vialle, and Matt Bailey. “No Excuse for Abuse: What Social Media Companies Can Do Now to Combat Online Harassment and Empower Users.” Edited by Summer Lopez and Suzanne Nossel. PEN America, March 31, 2021. https://pen.org/report/no-excuse-for-abuse/.