Keyword Muting & Hiding

The ability for users to mute/hide comments by keywords, chosen from a default list or customized by the user.

How does this mitigate hate?

Muting/hiding harmful content by keywords or emojis (such as hateful slurs) is a preemptive measure that allows users to make abusive content invisible to them, reducing their exposure to hate and harassment.

When to use it?

Users who have experienced, or may experience, online hate and harassment should be able to mute/hide harmful content by keywords so that they can proactively protect themselves from exposure.

How does it work?

Users should be provided with a default list of keywords and phrases that they can choose to mute/hide.

Users should be able to create a custom list of words and phrases to mute in the comments they receive.

When these keywords or phrases are detected, the content is automatically hidden from the targeted user’s view.
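
A minimal sketch of this detection step, assuming the platform merges the default and user-defined lists into one set of muted phrases; the function name and matching rules here are illustrative, not any platform's actual implementation:

```python
import re

# Hypothetical sketch: decide whether a comment should be hidden, assuming
# default and user-defined muted phrases are merged into a single set.
def should_hide(comment: str, muted_phrases: set[str]) -> bool:
    """Return True if the comment contains any muted keyword or phrase.

    Matching is case-insensitive and respects word boundaries, so muting
    "cat" does not also hide "category".
    """
    text = comment.lower()
    for phrase in muted_phrases:
        if re.search(r"\b" + re.escape(phrase.lower()) + r"\b", text):
            return True
    return False

# A muted phrase hides the comment; an unrelated comment passes through.
muted = {"example slur", "go away"}
assert should_hide("Just GO AWAY already", muted)
assert not should_hide("Have a nice day", muted)
```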

Users should also have the option to mute/hide emojis and combinations of emojis, where the platform supports them.
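
Emoji muting can reuse the same machinery. The sketch below (again with illustrative names) treats emoji sequences as plain Unicode substrings; regex word boundaries do not fire between spaces and emoji, so simple containment is used instead:

```python
# Hypothetical sketch: emoji and emoji combinations are plain Unicode text,
# so substring containment is enough; regex \b is avoided because word
# boundaries do not apply to emoji characters.
def contains_muted_emoji(comment: str, muted_sequences: set[str]) -> bool:
    """Return True if the comment contains any muted emoji sequence."""
    return any(seq in comment for seq in muted_sequences)

# Muting a single emoji and a two-emoji combination.
muted_emoji = {"🐀", "🔥💀"}
assert contains_muted_emoji("you are a 🐀", muted_emoji)
assert contains_muted_emoji("🔥💀 incoming", muted_emoji)
assert not contains_muted_emoji("nice work 👍", muted_emoji)
```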

Users should have the option to revise muted/hidden keywords at any time and unmute specific content or comments as needed. Ideally, muted/hidden content should be quarantined in a dashboard so that users can review it as needed (to monitor for threats or escalations in abuse, for example); see “Harmful Content Filter & Dashboard.”
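
One possible shape for that quarantine, sketched as a simple in-memory store with hypothetical names; a real platform would persist this per user, but the review/unmute interface is the point:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a per-user quarantine: matched comments are
# retained out of view rather than deleted, so the user can review,
# unmute, or batch-report them later.
@dataclass
class Quarantine:
    hidden: dict[int, str] = field(default_factory=dict)

    def add(self, comment_id: int, text: str) -> None:
        """Move a matched comment out of the user's feed into the dashboard."""
        self.hidden[comment_id] = text

    def review(self) -> list[tuple[int, str]]:
        """List everything currently hidden, e.g. to check for escalation."""
        return list(self.hidden.items())

    def unmute(self, comment_id: int) -> str | None:
        """Restore a specific comment to the user's view."""
        return self.hidden.pop(comment_id, None)

# A flagged comment is quarantined, reviewed, then restored on request.
q = Quarantine()
q.add(42, "hidden example comment")
assert q.review() == [(42, "hidden example comment")]
assert q.unmute(42) == "hidden example comment"
```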

Advantages

Giving users the ability to create keyword filters allows a one-time setup that preemptively blocks toxicity, harassment, and hate, creating a safer experience for the user. Quarantined comments can also be batch-reported to the platform.

Disadvantages

Muting/hiding can make it harder for targets to assess the risk they are facing because they can no longer see if abuse is ongoing, or if it has escalated to threats of physical or sexual violence, or doxing. Platforms should therefore give users the option to review muted/hidden content and explicitly flag or label content detected as threatening. Platforms should also allow users to call on trusted allies to help monitor and report abuse (see “Delegated Access”).

Examples

Twitter allows users to add muted words and/or phrases, but not to review content that has been filtered out. (screenshots taken May 2022)

Twitter allows users to choose where muted words and phrases apply, such as their home timeline or notifications. (screenshot taken May 2022)

Instagram allows users to mute comments by keywords, including a default list of keywords that are commonly reported, but not to review content that has been filtered out.

References

Vilk, V., Vialle, E., & Bailey, M. No Excuse for Abuse: What Social Media Companies Can Do Now to Combat Online Harassment and Empower Users. PEN America (2021). Accessed at: https://pen.org/report/no-excuse-for-abuse/.

Attribution

Written by PEN America