Default Filtering Transparency

Platforms should explain how they determine the default ordering of content and comments, and disclose when they override a user's custom ordering settings.

How does this mitigate hate?

Watching a single video can steer a user's recommendations toward hateful content and misinformation. Platforms should therefore be transparent about how content and comments are filtered, giving users context for what they see. This pattern can build trust with users and help them feel respected by the platform.


When to use it?

This pattern should be implemented on platforms prone to echo chambers of hateful content. It gives users more context about why they're shown the content they see, which can help prevent escalation into extremist content.

How does it work?

A default filtering notice should be easily accessible on any page where it's used, such as:

Post, video, image, or livestream feeds


Profile recommendations


The platform should explicitly explain how the default feed filter is chosen, such as whether content is sorted by relevance, engagement, or recency. When the platform overrides a user's filter preferences, the user should be alerted to the change and given the option to deny the override.

Users should also be able to edit the default filter at any time.
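The behavior described above can be sketched as a small data model. This is a minimal illustration, not any platform's actual implementation; all names (`FeedPreferences`, `platform_override`, the filter values) are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

DEFAULT_FILTER = "most_relevant"  # hypothetical platform default

@dataclass
class FeedPreferences:
    """Tracks a user's feed filter and any pending platform override."""
    filter: str = DEFAULT_FILTER
    pending_override: Optional[str] = None
    notices: List[str] = field(default_factory=list)

    def set_filter(self, choice: str) -> None:
        # The user can edit their default filter at any time.
        self.filter = choice
        self.pending_override = None

    def platform_override(self, new_filter: str, reason: str) -> None:
        # The platform proposes an override and records a visible notice,
        # rather than silently switching the user's feed.
        self.pending_override = new_filter
        self.notices.append(f"Feed reordered to '{new_filter}': {reason}")

    def deny_override(self) -> None:
        # The user rejects the override and keeps their own setting.
        self.pending_override = None

    def effective_filter(self) -> str:
        # The filter actually used to order the feed.
        return self.pending_override or self.filter

prefs = FeedPreferences()
prefs.set_filter("most_recent")
prefs.platform_override("most_relevant", "high-activity event")
assert prefs.effective_filter() == "most_relevant"  # user is notified
prefs.deny_override()
assert prefs.effective_filter() == "most_recent"    # user's choice wins
```

The key design choice is that an override is recorded as a notice and is reversible, so the user's own setting is never silently discarded.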


Benefits

This pattern builds more transparency and context between users and platforms. It also mitigates the tangents or rabbit holes that lead to hateful content as a person scrolls through content or comments.


Drawbacks

Users can choose to ignore the notices and continue to consume content as usual. Also, if bad actors better understand the default filtering algorithm, they might exploit it by specifically engaging with or posting certain content.


Example: Facebook's feed filter settings. Unfortunately, the platform doesn't always honor user settings and regularly overrides them in favor of the algorithmically preferred feed.


Germain, Thomas. “How to Filter Hate Speech, Hoaxes, and Violent Clips out of Your Social Feeds.” Consumer Reports, August 13, 2020.

Hao, Karen. “He Got Facebook Hooked on AI. Now He Can’t Fix Its Misinformation Addiction.” MIT Technology Review, March 11, 2021.

Sartor, Giovanni, and Andrea Loreggia. "The Impact of Algorithms for Online Content Filtering or Moderation." European Parliament's Committee on Citizens' Rights and Constitutional Affairs, September 2020.