
What is Content Moderation?

January 5, 2021

User-generated content (UGC) refers to any subject matter (text, reviews, pictures, videos, and so on) created by users rather than by a company, platform, or brand. This content is created at an incredible pace as platforms grow and offer new ways for customers to interact with products, with brands, and with one another. Content moderation is a critical function of Trust & Safety programs: moderators monitor UGC to ensure it doesn't violate community guidelines.

Community rules and guidelines differ significantly from one platform to another because each community has different thresholds for acceptable behavior. What is perfectly normal on a dating website would be completely unacceptable on a gaming platform for children.

Community standards generally prohibit behavior that is illegal, dangerous, or graphic, or that falls into the categories of radicalization, harassment, or fraud. For example, Twitter's rules1 prohibit violence, extremism, CSAM, and hate speech, and also include protections for privacy, authenticity, and copyright. Creating and enforcing these standards is the primary function of platform content moderation teams.

Related Reading: 7 Best Practices for Content Moderation

Challenges of Content Moderation

Moderating UGC on a platform seems like a straightforward task until you consider the complexities that are involved. Challenges to effective content moderation include:

Volume

The amount of user-generated content varies from one platform to the next, but the volume is growing, and it is staggering. Every minute2:

  • 147,000 photos are uploaded to Facebook
  • 347,222 stories are posted to Instagram
  • 500 hours of video are uploaded to YouTube

Whether a platform relies on a content moderation team, an automated content moderation solution, user reporting, or some combination of these, the volume of UGC is difficult to manage.
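To put the per-minute figures above in perspective, a quick back-of-the-envelope calculation scales them to daily volume (a rough sketch that assumes a constant upload rate, which real traffic certainly is not):

```python
# Scale the per-minute UGC figures cited above to a daily total.
# Assumes a constant rate purely for illustration.
MINUTES_PER_DAY = 24 * 60  # 1,440

per_minute = {
    "Facebook photos": 147_000,
    "Instagram stories": 347_222,
    "YouTube video hours": 500,
}

per_day = {name: count * MINUTES_PER_DAY for name, count in per_minute.items()}

# Facebook photos alone: 147,000 * 1,440 = 211,680,000 uploads per day,
# far more than any human team could review item by item.
```

Even at these rough numbers, per-item human review is clearly impossible, which is why triage, sampling, and automation all matter.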

Variety

User-generated content can be text copy, but it can also appear as images, videos, voice chat, and SMS. A content moderation solution must be able to address all the different types of content that the platform supports.

Additionally, many platforms appeal to a multi-lingual or global audience. Community standards should be written for all of the languages that are used on your platform, and your solution should incorporate multi-lingual (and multicultural) support.

Context

By its very nature, behavior must be interpreted in context. The full context in which UGC appears changes its meaning and its effect on the people who see it. Think of situational context for UGC like the tone of a conversation: sometimes it is less about the words that are said and more about the way they are said.

In the past, this has posed a problem for automated content moderation. A filter, such as a keyword or RegEx blocker, can help platforms keep an eye on UGC, but it cannot interpret the context in which the UGC occurs.
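A minimal sketch makes the limitation concrete. The blocklist and messages below are hypothetical, not from any real platform, but they show how a keyword/RegEx filter both over-blocks benign messages and misses trivially obfuscated ones:

```python
import re

# Hypothetical keyword blocklist; real deployments hold thousands of
# patterns and need constant maintenance.
BLOCKLIST = re.compile(r"\b(kill|hate|scam)\b", re.IGNORECASE)

def flag(message: str) -> bool:
    """Return True if the message matches any blocked keyword."""
    return bool(BLOCKLIST.search(message))

flag("How do I kill a stuck process?")  # True: flagged, but harmless tech talk
flag("I'm going to k1ll you")           # False: obfuscated threat slips through
```

The filter fires on an innocent technical question yet misses a threat with one character swapped, exactly the context-blindness described above.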

Related Reading: What is Username Moderation, and Why Is It Important?

How to Improve Content Moderation Efforts

To create an effective content moderation plan, a platform should consider:

Mental Health of Content Moderators

The people who moderate content for platforms are essentially front-line workers or emergency responders: they spend their work lives dealing with graphic images and banned behaviors. Many suffer from stress, anxiety, and even PTSD as a result of their jobs. Platforms should account for the mental health of content moderators and build support for them into their Trust & Safety programs.

Harnessing the Community

Creating a system for users to report problematic content, paired with a process of review, consequences, communication, and appeal, is a crucial step in a comprehensive content moderation program. Even the best systems are not infallible, so the user community can be an excellent supplement to any automated solutions or content moderation team that you employ.

Transparency and Undue Impact

A content moderation plan should have regular review and evaluation built into the process, to measure the effectiveness of community standards and whether they are applied fairly and equitably. Additionally, a transparency report should be standardized so that information can be tracked from one year to the next and trends identified. Google3 regularly shares three transparency reports: one that deals with security and privacy on the platform, one that covers different reasons for content removal, and one for 'other' categories, including political advertising and traffic and disruptions.

Automating with AI

Solutions like keyword and RegEx filters are difficult to maintain and largely ineffective. A solution that employs AI is different: it can read the context of UGC in real time, across languages, improving the accuracy and effectiveness of content moderation programs and unburdening your moderation team.

At Spectrum Labs, our solution evaluates user-generated content in real time, across multiple languages, deciphering context and adapting to a changing environment. Accurate, automated, and reliable: Contextual AI relieves employees of the burden of content moderation, allowing you to focus those resources on higher-level, strategic objectives.


To learn more about Spectrum Labs, download our Solution Guide to explore how we can help your business improve content moderation and protect your community.


Sources

1. https://help.twitter.com/en/rules-and-policies/twitter-rules

2. https://www.socialmediatoday.com/news/what-happens-on-the-internet-every-minute-2020-version-infographic/583340/

3. https://transparencyreport.google.com/
