Livestream Viewer Report

The ability to report a viewer (or listener) of a livestream who is acting inappropriately or violating the rules through comments or other in-stream tools.

How Does This Mitigate Hate?

In many commenting areas, reactions to content can be “off the cuff” and carry harmful intent. When missed by moderators or algorithms, such comments can be seen by millions of people in real time. Providing a usable reporting system for streamers and viewers is crucial to stopping the spread of comments that violate safety rules and policies. Empowering hosts and participants to take ownership of safety through reporting helps maintain safer environments.


When to use it?

Within the livestream room, all listeners/viewers should have the ability to report an instance of abuse or a violation of policies, regardless of whether the offender is the streamer or another viewer/listener.

An option to report any participant should be accessible, concise, and specific enough for everyone in the room to use.
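A minimal sketch of what such a report might look like in code follows. The field names, endpoint, and submitReport helper are illustrative assumptions rather than any platform's actual API; the key design point is that the reported user can be anyone in the room, host or viewer alike.

```typescript
// Illustrative sketch: a report may target any participant in the room,
// whether they are the host/streamer or another viewer/listener.
type ParticipantRole = "host" | "viewer";

interface LivestreamReport {
  roomId: string;           // livestream room the report was filed from
  reporterId: string;       // any participant may file a report
  reportedUserId: string;   // host or viewer; no role is exempt
  reportedRole: ParticipantRole;
  commentIds: string[];     // specific comments being reported, if any
  reasonCode: string;       // e.g. "hate_speech" or "harassment" (platform-defined)
  details?: string;         // optional free-text context from the reporter
  createdAt: Date;
}

// Hypothetical submission helper; the endpoint path is an assumption.
async function submitReport(report: LivestreamReport): Promise<void> {
  await fetch("/api/livestream/reports", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(report),
  });
}
```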

How does it work?

After a successful report has been submitted, the offending viewer's, listener's, or host's comments should be hidden. Participants in the livestream room should be notified that the comments have been removed, and the offender should be told why. If multiple offenses occur, further action should be taken, with updates provided on the review and appeals processes.
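The flow above could be wired together roughly as follows. This is a hedged sketch: handleConfirmedReport, its helper stubs, and the three-offense escalation threshold are assumptions chosen for illustration, not a description of any real moderation system.

```typescript
// Sketch of the post-report flow: hide the offending comments, notify the
// room and the offender with a reason, and escalate repeat offenders.
// All helpers below are illustrative stubs, not a real platform API.

interface ReportOutcome {
  reportId: string;
  offenderId: string;
  roomId: string;
  hiddenCommentIds: string[];
  reason: string; // which policy was violated and how
}

const ESCALATION_THRESHOLD = 3; // assumed number of offenses before escalation

// Stubs standing in for real moderation infrastructure.
async function hideComments(roomId: string, commentIds: string[]): Promise<void> {
  console.log(`Hiding ${commentIds.length} comment(s) in room ${roomId}`);
}
async function notifyRoom(roomId: string, message: string): Promise<void> {
  console.log(`Notify room ${roomId}: ${message}`);
}
async function notifyUser(userId: string, message: string): Promise<void> {
  console.log(`Notify ${userId}: ${message}`);
}
async function countPriorOffenses(userId: string): Promise<number> {
  console.log(`Looking up violation history for ${userId}`);
  return 0; // a real system would query the user's violation history
}
async function escalateToModeration(outcome: ReportOutcome): Promise<void> {
  console.log(`Escalating ${outcome.offenderId} for manual review`);
}

async function handleConfirmedReport(outcome: ReportOutcome): Promise<void> {
  // 1. Hide the offending comments so they are no longer visible in the room.
  await hideComments(outcome.roomId, outcome.hiddenCommentIds);

  // 2. Tell the room the comments were removed, and tell the offender why,
  //    with a pointer to the review and appeals process.
  await notifyRoom(outcome.roomId, "Comments were removed for violating community guidelines.");
  await notifyUser(
    outcome.offenderId,
    `Your comments were removed: ${outcome.reason}. You may appeal this decision.`
  );

  // 3. Escalate repeat offenders for further action (e.g. timeout or removal).
  const priorOffenses = await countPriorOffenses(outcome.offenderId);
  if (priorOffenses + 1 >= ESCALATION_THRESHOLD) {
    await escalateToModeration(outcome);
  }
}
```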

Advantages

Reporting systems empower users to bring instances of hate to the platform's attention. Posts or comments can often be manipulated to get past interstitials and automated filters put in place to mitigate hateful content. Reporting not only removes the comments sooner but also brings specific users to the platform's attention, along with specific information about what was violated and how.

Disadvantages

Spam reporting can be used to target specific users or groups, especially when reporting systems are less robust or when platforms are under-resourced to effectively mitigate abuse of the reporting system.

Examples

YouTube Livestream: reporting a commenter/chat participant.
(Screenshots taken August 2021)

References

ADL. “The Unique Challenges of Audio Content Moderation Part Two: Static vs. Livestreaming Audio.” Anti-Defamation League, June 30, 2021. https://www.adl.org/blog/the-unique-challenges-of-audio-content-moderation-part-two-static-vs-livestreaming-audio.

Douek, Evelyn, and Quinta Jurecic. “The Lawfare Podcast: The Challenges of Audio Content Moderation.” Podcast. Lawfare, April 22, 2021. https://www.lawfareblog.com/lawfare-podcast-challenges-audio-content-moderation.

Jiang, Jialun Aaron, Charles Kiene, Skyler Middler, Jed R. Brubaker, and Casey Fiesler. “Moderation Challenges in Voice-Based Online Communities on Discord.” Proceedings of the ACM on Human-Computer Interaction 3, no. CSCW (November 7, 2019): 1–23. https://doi.org/10.1145/3359157.

Sultan, Ahmad. “Livestreaming Hate: Problem Solving through Better Design.” Anti-Defamation League, May 13, 2019. https://www.adl.org/news/article/livestreaming-hate-problem-solving-through-better-design.

Taylor, T. L. Watch Me Play: Twitch and the Rise of Game Live Streaming. Princeton; Oxford: Princeton University Press, 2018.