Possible Misinformation Post Interstitial
Show an interstitial before posting content that has been detected as possible misinformation.
How does this mitigate hate?
Social platforms enable users to reach and influence a large audience, so platforms must limit how much misinformation is shared. However, posting content is often quick and easy, and users may not always know whether the content they share is potentially misinforming.
This pattern adds friction when a user tries to post content previously flagged as possible misinformation. It can educate users and prompt them to rethink their decision to post, which reduces the overall creation of misinformation on the platform.
When to use it?
Platforms where users share misinformation should use this pattern to slow the production of posts that include content flagged as possibly misinforming.
Including this pattern before the issue arises could help slow the spread of misinformation content and mitigate the influence of accounts that share this type of content.
How does it work?
The interstitial should tell users that their post includes content that may be misinforming or deceptive, ask whether they are sure they want to continue, and offer two options: go back and edit the content, or continue posting and take responsibility for the consequences.
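The decision flow just described can be sketched as a short function. This is a hypothetical illustration only: the names (`handle_post_attempt`, `Choice`, the callbacks) are assumptions for the sketch, not any platform's real API, and the classifier is stubbed with a trivial substring check.

```python
# Hypothetical sketch of the interstitial flow: flagged content triggers a
# warning, and the user chooses to edit or to post anyway. All names here
# are illustrative assumptions, not a real platform API.

from enum import Enum

class Choice(Enum):
    EDIT = "edit"   # user goes back to revise the post
    POST = "post"   # user continues and accepts responsibility

def handle_post_attempt(content, is_flagged, ask_user):
    """Run the interstitial flow for one post attempt and return the outcome."""
    if not is_flagged(content):
        return "published"
    # Content was flagged: warn the user and ask how to proceed.
    choice = ask_user(
        "This post may contain misinformation. "
        "Do you want to edit it or post it anyway?"
    )
    if choice is Choice.EDIT:
        return "returned_to_editor"
    return "published_with_flag"  # post proceeds, but remains flagged

# Example: a stub classifier that flags one phrase, and a user who posts anyway.
result = handle_post_attempt(
    "miracle cure!", lambda c: "miracle cure" in c, lambda msg: Choice.POST
)
```

The callbacks keep the flow testable: the same function can be exercised with different classifiers and simulated user responses.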
If a user does post content flagged as potential misinformation, all interactions such as likes, comments, and shares should be disabled for that post. The post should also undergo review by moderators to confirm whether it contains misinformation.
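A minimal sketch of that post-state handling follows, assuming a simple in-memory model: a flagged post has its interactions disabled and is queued for moderator review, and interactions are restored only if moderators clear it. The class and field names are illustrative assumptions, not a real platform schema.

```python
# Sketch of the described behavior: flagged posts get likes/comments/shares
# disabled and are queued for moderator review. Names are illustrative only.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    flagged: bool = False
    interactions_enabled: bool = True
    pending_review: bool = False

moderation_queue = []

def publish(post):
    if post.flagged:
        # Disable all interactions and hold the post for moderator review.
        post.interactions_enabled = False
        post.pending_review = True
        moderation_queue.append(post)
    return post

def resolve_review(post, is_misinformation):
    post.pending_review = False
    if not is_misinformation:
        # Cleared by moderators: restore normal interactions.
        post.flagged = False
        post.interactions_enabled = True

p = publish(Post("questionable claim", flagged=True))
```

Keeping the review outcome separate from publication means a cleared post can regain interactions without being re-posted.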
An interstitial before posting misinformation content increases friction in the posting process, which decreases the viral spread of misinformation content by warning users of the consequences of their post if they continue. The added moment of friction also provides users the opportunity to reflect on the content they are sharing, and reconsider their decision to post.
Tracking how users interact with this pattern would provide useful data on its effectiveness, such as whether a user changed a specific word, reworded the entire phrase, or abandoned the post.
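Such tracking could be as simple as an event log that records each user's response to the interstitial, which can then be aggregated to compare outcomes. The event names below are assumptions chosen for illustration.

```python
# Hypothetical event log for measuring the pattern's effectiveness: record
# what each user did after seeing the interstitial, then summarize.
# The action labels are illustrative assumptions.

from collections import Counter

events = []  # (user_id, action) tuples

def log_interstitial_action(user_id, action):
    """Record one user's response to the interstitial."""
    events.append((user_id, action))

# Example session data.
log_interstitial_action("u1", "edited_word")
log_interstitial_action("u2", "reworded_phrase")
log_interstitial_action("u3", "posted_anyway")
log_interstitial_action("u4", "posted_anyway")

# Aggregate responses to see how often users reconsider versus post anyway.
summary = Counter(action for _, action in events)
```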
Users who intend to post misinformation might attempt to game the system by manipulating words, for example through character substitutions, to dodge the word detection.
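One hedged countermeasure is to normalize common character substitutions before matching. The sketch below is a toy example: the substitution map and the flagged term are illustrative assumptions, and real systems would need far more robust matching.

```python
# Toy illustration of resisting word-detection evasion: undo common
# leetspeak-style substitutions before checking for flagged terms.
# The substitution map and flagged terms are illustrative assumptions.

SUBSTITUTIONS = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"}
)

def normalize(text):
    # Lowercase, then undo common character substitutions.
    return text.lower().translate(SUBSTITUTIONS)

def is_flagged(text, flagged_terms=("hoax",)):
    """Return True if any flagged term appears in the normalized text."""
    norm = normalize(text)
    return any(term in norm for term in flagged_terms)
```

Naive substring matching alone would miss a post like "H0AX", while the normalized check catches it.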
Example: Facebook alerts a user that the content they are about to post might not be factual or is false.