The ability for users to customize who can comment on their posts or comments.
How does this mitigate hate?
On social platforms, the comments section is consistently among the largest sources of reported abuse. Platforms that give users fine-grained control over who is allowed to comment on their content can help limit the spread of hate speech and harassment within comment sections.
When to use it?
Comment preferences should be presented during registration and account creation, and should remain configurable throughout the account’s existence. In particular, these preferences should be easily accessible from the comments section of each of the user’s posts.
How does it work?
Comment settings should offer the following options, giving flexibility to users, and to parents of younger users, who may need more control over who can respond to their content. Platform logic can enforce these options to help protect new users and previously targeted groups or individuals.
No one can comment
Only people who are mentioned can comment
Only friends/followers can comment
Friends of friends can comment
Anyone can comment
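The audience tiers above amount to a simple permission check performed before the comment box is rendered or a comment is accepted. The sketch below is illustrative only (the names, data shapes, and `can_comment` helper are assumptions, not any platform’s actual API); it models the social graph as a plain mapping from each user to their set of friends/followers.

```python
from enum import Enum


class CommentAudience(Enum):
    """The five audience tiers described above."""
    NO_ONE = "no_one"
    MENTIONED_ONLY = "mentioned_only"
    FRIENDS = "friends"
    FRIENDS_OF_FRIENDS = "friends_of_friends"
    ANYONE = "anyone"


def can_comment(viewer, author, audience, mentioned, friends):
    """Decide whether `viewer` may comment on `author`'s post.

    `mentioned` is the set of users mentioned in the post;
    `friends` maps each user to the set of their friends/followers.
    """
    if viewer == author:
        return True  # authors can always comment on their own posts
    if audience is CommentAudience.NO_ONE:
        return False
    if audience is CommentAudience.MENTIONED_ONLY:
        return viewer in mentioned
    if audience is CommentAudience.FRIENDS:
        return viewer in friends.get(author, set())
    if audience is CommentAudience.FRIENDS_OF_FRIENDS:
        direct = friends.get(author, set())
        return viewer in direct or any(
            viewer in friends.get(f, set()) for f in direct
        )
    return True  # CommentAudience.ANYONE
```

For example, with `friends = {"alice": {"bob"}, "bob": {"carol"}}`, under the friends-of-friends setting "carol" may comment on "alice"’s post (she is a friend of "bob"), but under the friends-only setting she may not.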
Users who have permission to comment on a friend’s post should see an indication that they are allowed to comment in the conversation.
Users who do not have permission to comment on a post should see a disabled comment button or a notice that they do not have permission to add a comment to the conversation.
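The two UI states just described can be derived directly from the permission result. A minimal sketch (the function name, return shape, and notice wording are illustrative assumptions):

```python
def comment_ui_state(allowed: bool) -> dict:
    """Map a permission-check result to the comment-box state to render."""
    if allowed:
        return {
            "button_enabled": True,
            "notice": "You are allowed to reply to this conversation.",
        }
    return {
        "button_enabled": False,
        "notice": "You do not have permission to comment on this post.",
    }
```

Rendering an explicit notice in both cases, rather than silently hiding the comment box, tells excluded users why they cannot reply instead of leaving them to assume a bug.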
Comment settings give users leverage to control possible escalations of harassment or hate speech within their posts. Proactive comment filtering and exclusion can give users peace of mind and mitigate potential targeting and harassment.
Conversations may end up being too restricted if a person forgets they have limited who can participate. This may lead to an “echo chamber” experience, with only a few voices in the conversation.
Instagram allows users to turn off commenting.
LinkedIn offers three options for who can comment on a post.
On Twitter, users who have permission to comment see a notice that they are allowed to post a reply.