Flagged Livestream Comment Vote

Participant-based voting for the removal of hateful comments.

How does this mitigate hate?

Livestreams can be a hotbed for hateful content, abusive comments, and spam reporting. Offering the option to report misconduct or hateful content is a major factor in mitigating hate. In this pattern, after a user reports a hateful comment during a livestream, a prompt offers participants the option to opt out or to vote on removing the potentially dangerous or offensive comment. This gives the community control, since vocabulary that offends the reporter may not offend the majority of the community.


When to use it?

Use this pattern after a user reports hateful commenting during a livestream: prompt all participants with the option to opt out or to vote, collectively ensuring the removal of any potentially dangerous or offensive comment.

How does it work?

When a comment is flagged, either by a reporting user or by the platform’s moderation algorithm, the platform can offer participants the option to opt out or to vote to have the comment, and the offender, removed.
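The flag-then-vote flow described above can be sketched as a small state machine. This is a minimal illustration, not a prescribed implementation: the class name, the quorum of five ballots, and the 60% removal majority are all assumptions chosen for the example.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds -- real values would be tuned per community.
VOTE_QUORUM = 5          # minimum ballots before a decision is reached
REMOVAL_MAJORITY = 0.6   # share of "remove" votes needed to remove

@dataclass
class FlaggedComment:
    comment_id: str
    # participant_id -> "remove" | "keep"
    votes: dict = field(default_factory=dict)
    opted_out: set = field(default_factory=set)

    def cast_vote(self, participant_id: str, choice: str) -> None:
        # Participants who opted out are not prompted again and cannot vote.
        if participant_id in self.opted_out:
            return
        if choice == "opt_out":
            self.opted_out.add(participant_id)
            self.votes.pop(participant_id, None)
        elif choice in ("remove", "keep"):
            self.votes[participant_id] = choice

    def decision(self) -> str:
        # No decision until a quorum of participants has weighed in.
        if len(self.votes) < VOTE_QUORUM:
            return "pending"
        remove_share = sum(v == "remove" for v in self.votes.values()) / len(self.votes)
        return "remove" if remove_share >= REMOVAL_MAJORITY else "keep"
```

Requiring both a quorum and a supermajority reflects the point above about community norms: a single reporter cannot remove a comment alone, but a clear majority of participants can.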

This pattern may reassure users that moderation is effective and is enforcing the guidelines introduced at account registration. It can also show that moderation reflects community norms.

Advantages

A voting system can help build a collective understanding of which actions violate community guidelines, and it can bring contextual awareness to those who may not understand why and how specific groups or individuals are targeted.

Disadvantages

The platform will have to alter the format of the flagged comment accurately when presenting it for a vote, without exposing users to the original hateful content. The mechanism may also be abused by groups within the community banding together to have a person’s comments removed.
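One way to present a flagged comment for a vote without exposing voters to the original text is to redact the flagged terms. A minimal sketch, assuming a hypothetical list of flagged terms (real systems would draw these from the report or a classifier):

```python
import re

def redact(comment: str, flagged_terms: list[str]) -> str:
    """Replace each flagged term with block characters so voters see the
    comment's length and context without reading the hateful content itself."""
    for term in flagged_terms:
        # \b limits the match to whole words; re.escape guards special characters.
        pattern = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
        comment = pattern.sub(lambda m: "\u2588" * len(m.group()), comment)
    return comment
```

A naive word list like this is itself abusable (misspellings slip through, benign uses are masked), which echoes the accuracy concern above.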

Examples

The platform Nextdoor allows moderators, community leads, and reviewers to weigh in on whether a comment should be removed, depending on the norms of that community. (Screenshot from Nextdoor.)

References

ADL. “How Platforms Can Stem Abuses of Livestreaming after the Storming of the Capitol.” Anti-Defamation League, January 15, 2021.
https://www.adl.org/blog/how-platforms-can-stem-abuses-of-livestreaming-after-the-storming-of-the-capitol.

Pardes, Arielle. “To Clean up Comments, Let AI Tell Users Their Words Are Trash.” Wired, September 22, 2020.
https://www.wired.com/story/comments-section-clean-up-let-ai-tell-users-words-trash/.

Sultan, Ahmad. “Livestreaming Hate: Problem Solving through Better Design.” Anti-Defamation League, May 13, 2019.
https://www.adl.org/news/article/livestreaming-hate-problem-solving-through-better-design.