Prevent and Protect

Incentivize users to create civil and engaging content, protect users by limiting toxic content, and disincentivize bad behavior.

How does this mitigate hate?

Although it is important to limit toxic content on social platforms, it is equally important to incentivize users to create civil content, disincentivize hateful content, and educate users along the way. This shifts the conversation on the platform in a healthier direction, mitigating the spread of hate and reducing the need to moderate toxic content on the platform.


Considerations

Incentivizing civil content and disincentivizing hateful content requires identifying high-quality content that promotes civil conversation; otherwise, users lack examples of healthy contributions to emulate. Without exposure to high-quality content, toxic content spreads on the platform and demands ever more resources to contain.
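As a rough illustration of how a platform might operationalize this, the sketch below shows a hypothetical pre-submission gate that nudges a user to revise a comment when a toxicity classifier scores it above a threshold, and surfaces low-toxicity comments as examples of civil conversation. All names here (CommentGate, score_toxicity, toy_classifier, the thresholds) are assumptions made for illustration, not the interface of any specific platform or library; a production system would plug in a real moderation model in place of the toy classifier.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Comment:
    author: str
    text: str
    toxicity: float = 0.0       # classifier score in [0, 1]; higher means more toxic
    highlighted: bool = False   # marked as an example of civil content


@dataclass
class CommentGate:
    """Hypothetical pre-submission gate: nudge on toxic drafts, highlight civil ones."""
    score_toxicity: Callable[[str], float]   # pluggable classifier, assumed to return [0, 1]
    nudge_threshold: float = 0.8             # at or above this, ask the author to revise
    highlight_threshold: float = 0.1         # at or below this, eligible to be surfaced as an example
    published: List[Comment] = field(default_factory=list)

    def submit(self, author: str, text: str) -> str:
        score = self.score_toxicity(text)
        if score >= self.nudge_threshold:
            # Disincentivize: pause posting and explain, rather than silently removing.
            return ("Your comment may come across as hostile. "
                    "Consider rephrasing before posting.")
        comment = Comment(author, text, toxicity=score,
                          highlighted=score <= self.highlight_threshold)
        self.published.append(comment)
        # Incentivize: civil comments are flagged for prominent placement.
        if comment.highlighted:
            return "Posted. Thanks for keeping the conversation civil!"
        return "Posted."

    def examples_of_civil_content(self, limit: int = 3) -> List[Comment]:
        """Surface high-quality comments so users see what healthy content looks like."""
        return sorted((c for c in self.published if c.highlighted),
                      key=lambda c: c.toxicity)[:limit]


# Toy stand-in for a real toxicity model, keyed on a few obviously hostile words.
def toy_classifier(text: str) -> float:
    hostile = {"idiot", "stupid", "hate"}
    words = (w.strip(".,!?") for w in text.lower().split())
    return min(1.0, sum(w in hostile for w in words) / 2)


if __name__ == "__main__":
    gate = CommentGate(score_toxicity=toy_classifier)
    print(gate.submit("a", "You are an idiot and I hate this."))    # nudged, not posted
    print(gate.submit("b", "Thanks for sharing, I learned a lot.")) # posted and highlighted
    print([c.text for c in gate.examples_of_civil_content()])
```

The design choice to respond with a nudge rather than a removal follows the approach described in the Pardes reference: prompting authors to reconsider before posting, while promoting civil comments, aims to shape behavior instead of only filtering it after the fact.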

References

Lang, Ben. “Humans vs Algorithms.” OpenWeb, September 19, 2020. https://www.openweb.com/blog/algorithms-vs-humans.

Sartor, Giovanni, and Andrea Loreggia. “The Impact of Algorithms for Online Content Filtering or Moderation.” European Parliament’s Committee on Citizens’ Rights and Constitutional Affairs, September 2020. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/657101/IPOL_STU(2020)657101_EN.pdf.

Pardes, Arielle. “To Clean Up Comments, Let AI Tell Users Their Words Are Trash.” Wired, September 22, 2020. https://www.wired.com/story/comments-section-clean-up-let-ai-tell-users-words-trash/.