Voice Chat Muted User Indicator
An indication within the user interface that a user in a voice chat has been muted for violating the rules.
How does this mitigate hate?
Muting users who have violated a platform’s safety policies in a voice chat helps limit the amount of harmful speech in a live chat room. The muted user indicator allows hosts and participants to facilitate and engage in safe conversations, helping platforms prevent hate.
When to use it?
When users engage in unsafe speech in any context. If a user has been reported or flagged as an offender in a voice chat room, the platform should provide a clear notice to the reporting user that the offender has been muted and why.
How does it work?
After the offender has been reported and muted, a notification is sent in the voice chat room to the reporting user. A muted indicator is also shown to all participants, with an explanation of why the user was muted; for repeat offenders, the explanation can include examples of their violations.
Indicators should also give participants a way to revisit the platform’s rules.
Further information can be provided in the form of a “Learn more” UI element explaining how or why a user may be removed from a room.
The platform should provide access to its rules and policies consistently across the platform, and send consistent notifications for similar indicators, as sketched below.
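A minimal sketch of this flow, assuming a hypothetical voice chat backend: the event names, payload fields, and transport functions below are illustrative assumptions, not any platform’s actual API. The idea is that both notices draw from one shared payload, so the rule name and links stay consistent wherever the mute is surfaced.

```typescript
// Hypothetical shape of a mute notice; all names and fields are illustrative.
interface MuteNotice {
  roomId: string;
  mutedUserId: string;
  policyViolated: string;       // the named rule the mute is tied to
  rulesUrl: string;             // link for revisiting the platform's rules
  learnMoreUrl: string;         // "Learn more" UI on how/why users are removed
  violationExamples?: string[]; // included only for repeat offenders
}

// Stub transports; a real platform would use its own push/websocket layer.
function sendToUser(userId: string, payload: object): void {
  console.log(`to user ${userId}:`, payload);
}
function sendToRoom(roomId: string, payload: object): void {
  console.log(`to room ${roomId}:`, payload);
}

// Private confirmation to the user whose report triggered the mute.
function notifyReporter(reporterId: string, notice: MuteNotice): void {
  sendToUser(reporterId, {
    type: "report.resolved",
    message: `The user you reported was muted for violating: ${notice.policyViolated}.`,
    rulesUrl: notice.rulesUrl,
  });
}

// Room-wide indicator so every participant sees the mute and the reason.
function broadcastMuteIndicator(notice: MuteNotice): void {
  sendToRoom(notice.roomId, {
    type: "participant.muted",
    mutedUserId: notice.mutedUserId,
    reason: notice.policyViolated,
    examples: notice.violationExamples, // present for repeat offenders
    rulesUrl: notice.rulesUrl,
    learnMoreUrl: notice.learnMoreUrl,
  });
}
```

Driving both notices from the same MuteNotice object is one way to meet the consistency point above: the rule name and links the reporter sees match what the rest of the room sees.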
Advantages
Timely moderation can help platforms and users maintain safe and equitable environments, giving participants of all ages peace of mind. Indicating that a user has been muted or removed from a voice chat helps participants collectively understand what violates the rules, and what the rules and policies of the platform or specific room are.
Disadvantages
Users may be targeted and spam-reported. Malicious groups may exploit underdeveloped reporting processes, or platforms that lack consistent appeal processes for those who have been wrongfully targeted.
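One possible guard, sketched below under assumed names and an assumed threshold, is to require several distinct accounts to report the same user before a mute fires; repeat reports from a single account then carry no extra weight.

```typescript
// Illustrative guard against coordinated spam reporting: a mute is only
// triggered once a minimum number of distinct accounts have reported the
// same user in the same room. MIN_DISTINCT_REPORTERS is an assumed value.
const MIN_DISTINCT_REPORTERS = 3;

// Maps "roomId:targetId" to the set of accounts that reported that target.
const reportsByTarget = new Map<string, Set<string>>();

function recordReport(roomId: string, targetId: string, reporterId: string): boolean {
  const key = `${roomId}:${targetId}`;
  const reporters = reportsByTarget.get(key) ?? new Set<string>();
  reporters.add(reporterId); // duplicate reports from one account collapse here
  reportsByTarget.set(key, reporters);
  // True means the report volume justifies a mute, or escalation to human
  // review on platforms that also run an appeals process.
  return reporters.size >= MIN_DISTINCT_REPORTERS;
}
```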
Examples
Left: Clubhouse reporting interstitial.
Right: Spotify muted user notification indicating a specific person is muted.