Voice Chat Muted/Removed Notification

A notification informing users that they have been muted or removed because they violated the platform’s rules (or terms of service) the maximum number of times.

How does this mitigate hate?

Voice chat rooms tend to be hotbeds of hateful and harassing speech. Users who spread hate should be muted, and then removed from the chat room, as quickly as possible to curb inciting speech.

 


When to use it?

Platforms where users break the voice chat rules the maximum number of times should use this pattern to notify those users that their actions have resulted in a mute or removal.

Introducing this pattern before issues arise could also help mitigate hateful and harassing behavior in voice chats by making the consequences of repeatedly breaking the rules apparent up front.

How does it work?

Users who are muted or removed for violating community agreements should be quickly and clearly notified of why the action was taken; prompt, transparent enforcement helps maintain a safe environment for everyone.

The notification should cover the majority of the user’s screen to ensure it is seen. An auditory notification should be available alongside the visual one so the notice remains accessible to all users.

The notification must be manually dismissed by the offender, verifying both that they have been notified of their violation and that they understand why they were muted or removed.

The duration of the mute or removal should be determined by the platform and clearly communicated to the offender.
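
To make these mechanics concrete, below is a minimal TypeScript sketch of how a web client might model and display such a notice: a screen-covering overlay with an optional auditory cue that can only be dismissed by an explicit acknowledgment, and that reports the acknowledgment back to the caller. Every name here (ModerationNotice, showModerationNotice, the audio file path) is a hypothetical illustration, not any real platform’s API.

```typescript
// Hypothetical shape of a mute/removal notice; field names are
// illustrative, not any particular platform's API.
interface ModerationNotice {
  action: "muted" | "removed";   // which penalty was applied
  reason: string;                // the rule or term that was violated
  durationMs: number | null;     // penalty length; null = permanent
  playAudioCue: boolean;         // user-selectable auditory option
}

// Express the platform-determined duration in human terms so it is
// communicated along with the reason.
function formatDuration(ms: number | null): string {
  if (ms === null) return "permanently";
  const minutes = Math.round(ms / 60_000);
  return minutes >= 60
    ? `for ${Math.round(minutes / 60)} hour(s)`
    : `for ${minutes} minute(s)`;
}

// Show a screen-covering notice with no auto-close timer and no
// click-outside handler: pressing the acknowledgment button is the only
// way to dismiss it. The returned promise resolves on that click, so the
// caller can record that the user saw and acknowledged the message.
function showModerationNotice(notice: ModerationNotice): Promise<void> {
  return new Promise((resolve) => {
    const overlay = document.createElement("div");
    // Cover the majority of the screen so the notice cannot be missed.
    overlay.style.cssText =
      "position:fixed;inset:0;z-index:9999;display:flex;align-items:center;" +
      "justify-content:center;background:rgba(0,0,0,.85);color:#fff;text-align:center;";

    const heading = document.createElement("h1");
    heading.textContent =
      `You have been ${notice.action} ${formatDuration(notice.durationMs)}`;

    const detail = document.createElement("p");
    detail.textContent = `Reason: ${notice.reason}`; // textContent avoids injecting markup

    const ack = document.createElement("button");
    ack.textContent = "I understand";
    ack.addEventListener("click", () => {
      overlay.remove();
      resolve(); // e.g. POST the acknowledgment to the moderation backend
    });

    const box = document.createElement("div");
    box.style.maxWidth = "32rem";
    box.append(heading, detail, ack);
    overlay.appendChild(box);
    document.body.appendChild(overlay);

    // Optional auditory cue alongside the visual notice; the file path is
    // a placeholder. Autoplay may be blocked by the browser, in which
    // case the visual notice still stands on its own.
    if (notice.playAudioCue) {
      new Audio("/sounds/moderation-alert.mp3").play().catch(() => {});
    }

    ack.focus(); // keyboard and screen-reader users land on the dismiss control
  });
}
```

A caller might then write `showModerationNotice({ action: "muted", reason: "Hate speech (rule 3)", durationMs: 86_400_000, playAudioCue: true }).then(logAcknowledgment)`, where `logAcknowledgment` stands in for whatever record-keeping the platform uses to verify that offenders saw the message.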

 

Advantages

A notification that is clearly visible and must be manually dismissed both tells users why they were muted or removed and verifies that they have seen the message.

This pattern ensures that users know which violations led to their being muted or removed, so that they understand the consequences of their actions.

Disadvantages

Users who are targeted by hateful spam reporting may be wrongfully muted or removed from voice chat rooms. Because many audio platforms are still developing their moderation policies and the interface options that support them, mitigation of this kind of abusive targeting may be slow to arrive or less robust than on other platforms.

Therefore, notifications should be sent only after a report has been reviewed and upheld, and each enforcement action should produce its own distinct notification, in line with clearly communicated policies.
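
One way to enforce that ordering, sketched below under the assumption of a simple review pipeline: both the penalty and its notification are gated on the report being upheld, so dismissed or still-pending reports, including spam reports, trigger neither. The types and functions here (Report, muteUser, notifyUser, onReportReviewed) are hypothetical stand-ins for a platform’s own moderation services.

```typescript
// Hypothetical review states: only an upheld report may trigger
// enforcement and the accompanying notification.
type ReportStatus = "pending" | "dismissed" | "upheld";

interface Report {
  id: string;
  reportedUserId: string;
  status: ReportStatus;
  violatedRule?: string; // set by the reviewer when a report is upheld
}

// Stubs standing in for the platform's own moderation and messaging services.
function muteUser(userId: string, durationMs: number): void {
  console.log(`muting ${userId} for ${durationMs} ms`);
}
function notifyUser(userId: string, message: string): void {
  console.log(`notifying ${userId}: ${message}`);
}

// Gate both the penalty and the notice on the review outcome, so users
// targeted by spam reporting are never muted or notified over reports
// that do not survive review. Each upheld report produces one distinct
// notification citing the specific rule and report.
function onReportReviewed(report: Report): void {
  if (report.status !== "upheld") return; // dismissed/pending: no action, no notice

  const durationMs = 24 * 60 * 60 * 1000; // platform-defined duration (assumed 24h here)
  muteUser(report.reportedUserId, durationMs);
  notifyUser(
    report.reportedUserId,
    `You have been muted for 24 hours. Rule violated: ` +
      `${report.violatedRule ?? "community guidelines"} (report ${report.id}).`
  );
}
```

Routing every review decision through a single chokepoint such as onReportReviewed also makes it straightforward to audit that no notification was ever sent for a report that was not upheld.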

Examples

Clubhouse reporting confirmation
(Screenshot taken September 2021)

User muted notice
(Screenshot taken September 2021)

References

ADL. “The Unique Challenges of Audio Content Moderation Part Two: Static vs. Livestreaming Audio.” Anti-Defamation League, June 30, 2021. https://www.adl.org/blog/the-unique-challenges-of-audio-content-moderation-part-two-static-vs-livestreaming-audio.

Douek, Evelyn, and Quinta Jurecic. “The Lawfare Podcast: The Challenges of Audio Content Moderation.” Podcast. Lawfare, April 22, 2021. https://www.lawfareblog.com/lawfare-podcast-challenges-audio-content-moderation.

Vilk, Viktorya, Elodie Vialle, and Matt Bailey. “No Excuse for Abuse: What Social Media Companies Can Do Now to Combat Online Harassment and Empower Users.” Edited by Summer Lopez and Suzanne Nossel. PEN America, March 31, 2021. https://pen.org/report/no-excuse-for-abuse/.