Automated Moderation Usage Indicator

The platform should indicate whether a reported post will be reviewed by automated moderation, by a human, or by both.

How does this mitigate hate?

Indicating who or what will review a reported post lets users know how their reports will be handled. The pattern also allows platforms to hold themselves accountable for reviewing reports as described. Together, this builds transparency and confidence in the reporting process.

When to use it?

Use this pattern to build more trust and transparency into a platform’s reporting process. The automated moderation usage indicator should be shown before a user submits the report. Consider also giving users a way to follow up on a report to check whether it has been processed, as sketched below.
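
As a rough sketch, that follow-up could be backed by a small status record that the interface turns into a plain-language line. Everything below (the `ReportStatus` shape, the stage names, and the wording) is a hypothetical illustration rather than any platform’s actual API:

```typescript
// Hypothetical shape for letting a user follow up on a submitted report.
// All names here are illustrative; a real platform would define its own API.
type ReviewStage = "automated" | "human";

interface ReportStatus {
  reportId: string;
  // Review stages the report will pass through, in order.
  plannedStages: ReviewStage[];
  // Stages that have already completed.
  completedStages: ReviewStage[];
  resolved: boolean;
}

// Render a short status line the user could see when following up.
function describeStatus(status: ReportStatus): string {
  if (status.resolved) {
    return "Your report has been reviewed.";
  }
  const done = status.completedStages.length;
  const total = status.plannedStages.length;
  if (done >= total) {
    return "Your report has passed all review stages and is being finalized.";
  }
  const next = status.plannedStages[done];
  const reviewer =
    next === "automated" ? "our automated system" : "a human moderator";
  return `Your report is at step ${done + 1} of ${total}, currently with ${reviewer}.`;
}

// Example: automated first pass done, human review pending.
console.log(
  describeStatus({
    reportId: "r-123",
    plannedStages: ["automated", "human"],
    completedStages: ["automated"],
    resolved: false,
  })
);
// -> "Your report is at step 2 of 2, currently with a human moderator."
```

Exposing only the planned and completed stages, rather than internal moderation detail, keeps the follow-up informative without revealing signals that could be gamed.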

How does it work?

While a user is filling out a report, the automated moderation usage indicator should appear in the confirmation section, before the report is submitted.

The indicator should explain who will review the report (e.g., an algorithm, a real person, or a combination of both) and in what sequence the review will happen (e.g., a first pass by a machine-learning model, then a second pass by a person).
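
One minimal way to build such an indicator, assuming the review pipeline can be described declaratively, is to derive the confirmation text from an ordered list of review steps. The types, example pipeline, and wording below are illustrative assumptions, not a real platform’s implementation:

```typescript
// Hypothetical, declarative description of who reviews a report and in
// what order. Names and wording are illustrative, not any platform's API.
interface PipelineStep {
  reviewer: "machine-learning model" | "human moderator";
  description: string; // what this reviewer does, in plain language
}

// Example pipeline: first pass automated, second pass by a person.
const reviewPipeline: PipelineStep[] = [
  {
    reviewer: "machine-learning model",
    description: "scans the post for likely policy violations",
  },
  {
    reviewer: "human moderator",
    description: "confirms the outcome before any action is taken",
  },
];

// Build the indicator text shown in the confirmation section
// before the user submits the report.
function buildIndicatorText(pipeline: PipelineStep[]): string {
  const steps = pipeline
    .map((step, i) => `${i + 1}. A ${step.reviewer} ${step.description}.`)
    .join("\n");
  return `Who will review your report:\n${steps}`;
}

console.log(buildIndicatorText(reviewPipeline));
// Who will review your report:
// 1. A machine-learning model scans the post for likely policy violations.
// 2. A human moderator confirms the outcome before any action is taken.
```

Driving the indicator from the same pipeline definition the moderation system actually uses would help keep the promise shown to users in sync with what happens to the report.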

Advantages

This pattern can give users confidence and peace of mind by telling them how their reports will be processed. It also holds platforms accountable for following the procedures they describe.

Disadvantages

Giving detailed information about how reports are processed might entice users to abuse the reporting system, for example by spamming reports or reporting non-harmful content.
