
Content Moderation

The right automation tools can help platforms manage user-generated content (UGC) to create a safe, inclusive, and welcoming online environment.

Explore the concept of contextual artificial intelligence as a content moderation tool and the ways it benefits platforms.

What is content moderation?

Content moderation is the process of screening and monitoring user-generated content online. To provide a safe environment for both users and brands, platforms must moderate content to ensure that it falls within pre-established guidelines of acceptable behavior that are specific to the platform and its audience.

When a platform moderates content, acceptable user-generated content (UGC) can be created and shared with other users. Inappropriate, toxic, or banned behaviors can be prevented, blocked in real-time, or removed after the fact, depending on the content moderation tools and procedures the platform has in place.

The definition of acceptable and unacceptable behavior is unique to each platform. Platforms may fall within different industries, like dating, gaming, social networks, and marketplaces, and each has its own set of users with different needs, sensitivities, and expectations. 

Priorities will also vary between platforms. A dating platform may be more concerned with underage users or sex solicitation than a marketplace, and a marketplace may be more concerned with illegal drug and weapons sales than a gaming platform. To some degree, though, all online platforms must minimize toxic behaviors to provide users with a safe, inclusive environment.

Toxic behaviors to moderate on any platform

Hate Speech

Sixty-four percent of US teenagers report they often come across racist, sexist, or homophobic comments, coded language, images, or symbols on social media.

During sporting events like the World Cup, in-person "hate actions" have translated into increased hate speech on social media platforms. One analysis, conducted by researchers at the Center for Countering Digital Hate (CCDH) and seen by the Observer, examined 100 tweets reported to Twitter. Of those, 11 used the N-word to describe footballers, 25 used monkey or banana emojis directed at players, 13 called for players to be deported, and 25 attacked players by telling them to “go back to” other countries.

Experiencing hate speech online can lead to depression, isolation, suicidal ideation, self-harm, and an increased risk of CSAM grooming.

Extremism

For Trust & Safety and content moderation professionals, extremism is behavior that pushes people into "out groups" on the basis of race, religion, ethnic origin, sex, disability, sexual orientation, or gender identity.

Extremism may not occur often on a platform, but the severity of the behavior can cause serious damage to community health both online and offline. Failing to detect and remove extremism from a platform can also have severe consequences, such as removal from app stores. In early January 2021, the social media platform Parler was removed from Apple’s App Store and the Google Play Store, and was later dropped from Amazon’s AWS servers, due to the unchecked proliferation of online extremism.

Trust & Safety professionals struggle to detect extremism because profanity filters cannot identify complex behaviors like hate speech and extremism. Detecting these behaviors requires content moderation tools that analyze the context around keywords, not just the keywords themselves.
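As a rough illustration of that gap, here is a minimal Python sketch (with placeholder blocklist entries) showing how a plain keyword filter misses coded language, such as the emoji-based abuse and "go back to" phrasing described above:

```python
# Minimal sketch: a plain keyword blocklist only matches exact banned terms,
# so coded language, emojis, and dog whistles pass straight through.
BLOCKLIST = {"slur1", "slur2"}  # placeholder entries; real lists are much larger

def blocklist_flags(message: str) -> bool:
    return any(word in BLOCKLIST for word in message.lower().split())

# A message built from coded symbols and "go back to" phrasing contains no
# blocklisted token, so the filter reports it as safe even though the intent
# is clearly hateful in context.
print(blocklist_flags("go back to your own country 🐒🍌"))  # False
```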

Learn More: Removing Hate Speech & Extremism From Your Platform

Radicalization

The internet offers countless ways to connect. Unfortunately, extremists use gaming, dating, social media, and marketplace platforms to recruit and radicalize users to join their extremist groups and spread radicalized content. Trust & Safety teams need to be aware of radicalization tactics, such as the use of the "OK" hand gesture as a hate symbol on social media platforms, and train their content moderation tools to detect them. Governments across the world, including the United States Department of Justice, understand the importance of detecting radicalization online and have been tackling the issue of extremists using the Internet to recruit and radicalize users since 2013.

Harassment

More than 40% of U.S. adults say they’ve experienced online harassment, a figure that has remained unchanged since Pew’s 2017 online harassment survey. However, more severe forms of cyber harassment, like physical threats, doxxing, and sexual harassment, have nearly doubled over the past several years.

Similar to hate speech and radicalization, sexual harassment is a complex behavior that cannot be detected using profanity filters. For example, on a dating app, words like "sexy", "strip", or "kiss" would not get flagged for sexual harassment by a profanity or keyword filter. But a content moderation tool that analyzes the context around those keywords, such as what else is discussed in the conversation, along with metadata like the user's age, can detect sexual harassment. Learn how to tell the difference between flirting and harassment.
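The sketch below is a toy illustration of that idea, not Spectrum Labs' actual model: the same keywords score differently once conversation context and user metadata (here, a hypothetical age gap and a prior objection) are taken into account.

```python
from dataclasses import dataclass

@dataclass
class MessageContext:
    text: str
    conversation: list          # recent messages in the same conversation
    sender_age: int
    recipient_age: int
    recipient_objected: bool    # e.g. "stop" or "leave me alone" earlier in the chat

def harassment_risk(ctx: MessageContext) -> float:
    """Toy scoring: words like 'sexy' alone are not flagged, but the same words
    combined with contextual signals and metadata raise the risk score."""
    score = 0.0
    if any(w in ctx.text.lower() for w in ("sexy", "strip", "kiss")):
        score += 0.2                          # weak signal on its own
        if ctx.recipient_objected:
            score += 0.5                      # persisting after an objection
        if ctx.recipient_age < 18 <= ctx.sender_age:
            score += 0.5                      # adult targeting a minor
    return min(score, 1.0)
```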

Harassment is also prevalent on gaming platforms. In an episode of The Brief, Spectrum Labs’ Co-Founder and CEO Justin Davis sat down with congressional candidate and game developer Brianna Wu; Roblox’s Director of Community Safety & Digital Civility, Laura Higgins; and Matt Soeth, co-founder and executive director of #ICANHELP, to discuss the paradox of the female gamer: she wants to play, but she can’t.

Equality

Many people among us still have yet to accept that love is love. Some 30% of Americans say it’s “morally wrong” to be gay or bisexual, and a subset of these people will actively seek to harass or harm people within the LGBTQ+ community. But there are many steps online platforms and Trust & Safety teams can take to create inclusive and safe online spaces for the LGBTQ+ community, such as writing inclusive community guidelines, building products with safety by design, hiring LGBTQ+ employees, and using contextual AI to detect toxic behaviors directed at the LGBTQ+ community. Showing support for the community means more than symbolic gestures; it means taking action in content moderation policies and tools.

Learn More: LGBTQ+ Celebrating Diversity; Safeguarding Inclusion White Paper

Human Trafficking

Perpetrators of human trafficking are exploiting vulnerable people throughout the world, and they are using online platforms to do it. Up to 300,000 women and children are forced into sexual slavery in the U.S. each year, and 75% of child sex trafficking victims are sold online. With the passage of FOSTA-SESTA in 2018, platforms can be held legally responsible for enabling human trafficking.

Understanding human trafficking behavior is the first step in building the content moderation data and AI needed to detect and remove human traffickers. Their tactics change rapidly, and they work across multiple platforms to avoid detection. Learn more about how Spectrum Labs built AI to detect human traffickers by partnering with DeliverFund, which applies cutting-edge technological solutions against human traffickers, turning the tools traffickers use into weapons against them.

Learn More: Stop Human Trafficking White Paper

Content moderation to protect children on the internet

With so many children using the internet, improving internet safety for kids is imperative. Ninety-five percent of children in the US aged 3 to 18 have Internet access at home. As you might imagine, they use the Internet to play games, do schoolwork, connect with friends, and explore their interests. On average, they spend more than 1.6 hours online per day, totaling around 11 hours per week. Forty-five percent of teens are online on a near-constant basis.

The fact that the Internet is not a safe place for children isn't news; since as early as 1999, UNICEF has sponsored and published research on youth internet safety. However, after more than a year of heavy Internet usage due to COVID, we are more aware of the dangers kids face online.

However, what may be news is the fact that many children (over 30 percent) lie about their age to access age-restricted content. And this figure doesn't count the victims of sex trafficking forced to lie about their age.

So, while you may not think children are on your app, game, or site, they are.

What are the threats to online child safety?

CSAM Grooming

Predators use online communities of all kinds, from social word games to neighborhood forums, to find and groom young victims. Grooming is a phased series of actions intended to normalize sexual communications or behaviors, usually with the long-term intention of coercing the child into sexual acts. These online predators follow a sophisticated, incremental playbook for changing children's behavior.

Befriend
Predators connect with the victim, sympathize with them, and support them as a friend would. At this stage, their chat interactions would appear harmless and positive in some cases.

Desensitizing
Predators gradually add sexual topics, themes, and jokes to conversations, numbing the child. Their chat interactions could also appear harmless at this stage - it is possible to introduce sexual subjects without triggering a basic sexual filter.

Isolating
They cast doubt on the child's relationships with their parents and peers, suggesting the child isn't worthy of those relationships or that those people aren't worthy of the child. Chat interactions can still appear harmless at this stage.

Coercion
They force the child to create CSAM materials, leveraging the information they've gathered over time and the isolated position they've placed the child in. Chat interactions at this stage may trigger filters, but a great deal of damage has already been done by this time.
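Because each stage can look harmless in isolation, grooming detection has to weigh the whole history of a relationship rather than one message at a time. The Python sketch below is a simplified, hypothetical illustration of that idea (the stage phrases are invented; real systems use trained classifiers): weak signals accumulate across a conversation until the escalation pattern becomes visible.

```python
# Hypothetical per-stage phrases; a production system would rely on trained
# classifiers rather than keyword lists.
STAGE_SIGNALS = {
    "befriend":    ("you're so mature", "i'm always here for you"),
    "desensitize": ("it's just a joke", "have you ever"),
    "isolate":     ("your parents don't understand you", "don't tell anyone"),
    "coerce":      ("send me a photo", "you owe me"),
}

def grooming_risk(conversation: list) -> float:
    """Accumulate weak per-message signals across the whole conversation history."""
    stages_seen = set()
    for message in conversation:
        lowered = message.lower()
        for stage, phrases in STAGE_SIGNALS.items():
            if any(p in lowered for p in phrases):
                stages_seen.add(stage)
    # Any single message may look harmless; the more distinct stages that
    # appear over time, the higher the overall risk.
    return len(stages_seen) / len(STAGE_SIGNALS)
```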

Reports of child sexual abuse material (CSAM) online have increased 15,000% over the last 15 years. How? Technology has unwittingly made it easier for predators to groom children and share CSAM materials. Learn more about protecting underage users from CSAM in our white paper.

Read This: Protecting Underage Users White Paper

CSAM Detecting and Reporting

CSAM models can detect conversations about posted CSAM (“uhh… she looks underage”) and identify early-stage child grooming behaviors that seek to obtain sexually explicit materials from minors or start a sexual relationship with a minor. When CSAM is detected, Trust & Safety teams should have a process in place to report these users to NCMEC (the National Center for Missing & Exploited Children).
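That reporting step can be made explicit in the moderation pipeline. The sketch below only outlines the shape of such a workflow, with hypothetical function and threshold names; actual reports to NCMEC are filed through its CyberTipline process.

```python
CSAM_REPORT_THRESHOLD = 0.9  # hypothetical confidence threshold

def handle_csam_detection(content_id: str, owner_id: str, score: float,
                          report_queue: list) -> None:
    """Hypothetical escalation flow once a CSAM model flags content."""
    if score < CSAM_REPORT_THRESHOLD:
        return
    # In practice the platform would: remove the content immediately, preserve
    # evidence for investigators, suspend the account, and have Trust & Safety
    # file a report with NCMEC (in the US, via the CyberTipline).
    report_queue.append({
        "content_id": content_id,
        "owner_id": owner_id,
        "score": score,
        "action": "removed_and_queued_for_ncmec_report",
    })
```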

Learn More: Child Safety Ecosystem White Paper

Cyberbullying

Cyberbullying is a widespread issue: Fifty percent of children aged 10 to 18 in the EU have experienced at least one kind of cyberbullying in their lifetime. Fifty-nine percent of US teens have been bullied or harassed online. Half of LGBTQ+ children experience online harassment.

Cyberbullying is challenging for basic filters to catch for three reasons (among others):

  • It can happen without using traditionally banned words
  • It can mimic trash-talking banter
  • It can also seem like flirting

Spectrum Labs’ technology can distinguish flirting from harassment, and trash talk from harassment.

Why content moderation tools are important

The user-generated content presented within a platform will directly influence user experience. If the content is moderated well, and users have a safe experience while encountering the kinds of content they expect from a platform, they will be more likely to stick around. But any deviation from these expectations will negatively affect the user experience, ultimately causing churn, damage to the brand reputation, and loss of revenue. Fortunately, platforms have plenty of reasons to invest in effective content moderation tools.

Benefits of content moderation

Protect Communities

Platforms can foster a welcoming, inclusive community by preventing toxic behaviors like harassment, cyberbullying, and hate speech. Help users avoid negative or traumatizing experiences online with thoughtful and consistently enforced content moderation policies and procedures.

Increase Brand Loyalty and Engagement

Safe, inclusive, and engaged communities are not born. They are deliberately made and maintained by invested community members and passionate Trust & Safety professionals. Platforms grow and thrive when they can provide a great user experience, free of toxicity. Content moderation helps reduce churn and generate more revenue with less spend.

Protect Advertisers

The experience that people have on a platform impacts brand perception. This is true not only of the platform’s reputation but also for the brands who appear in ads within the platform. Because consumers may view ads placed next to negative content as an intentional endorsement, content moderation is critical to protect advertisers. Research shows that purchase intent is significantly stifled and consumers are less likely to associate with a brand when it is displayed next to unsafe or brand-adverse content.1

Customer Insight

Content moderation can give a platform a deeper understanding of its customer base by providing data for analysis. This can then be used to identify trends and provide actionable insight – which can be used to improve marketing, advertising, branding, and messaging and refine content moderation processes even further.

Challenges of content moderation

Volume of Content

Managing the extreme volume of content that is created every day – every minute – is too large a job for a content moderation team to complete in real-time. As a result, many platforms are exploring automated and AI-powered tools and relying on users to file complaints about banned behaviors online. To put the scale in perspective, in a single internet minute, users:

  • Send 41 million messages on WhatsApp
  • Spend $1 million on products
  • Post 347,000 stories on Instagram
  • Join 208,000 Zoom meetings
  • Share 150,000 messages on Facebook
  • Apply for 69,000 jobs on LinkedIn

Learn more: Scaling content moderation blog

Content Type

A solution that works for the written word may not be effective at monitoring video, voice, and live chat in real-time. Platforms should seek tools that can moderate user-generated content across multiple formats.

Content Category

In addition to content types, there are categories for where the content is shared, including usernames, room names, group chat, 1-1 chat, profiles, and more. Username moderation is important: a username sets the tone for how that user interacts and behaves with other users. If a user creates an offensive username, it can make other users feel uncomfortable, hurting their experience and diminishing confidence in the platform's ability to moderate toxic content according to its policy.
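Because a username follows the user everywhere they appear, account creation is a natural enforcement point. Below is a minimal sketch of that gate, assuming a hypothetical score_username classifier and threshold:

```python
def score_username(username: str) -> float:
    """Stand-in for a trained username classifier returning a 0-1 toxicity score.
    A real model would also catch leetspeak, embedded slurs, and coded terms."""
    return 1.0 if "offensiveterm" in username.lower() else 0.0

def register_user(username: str) -> str:
    # Reject the name before the account exists, so other users never see it.
    if score_username(username) > 0.5:   # hypothetical threshold
        return "rejected: please choose a different username"
    return f"account created for {username}"
```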

Contextual Interpretations

User-generated content can have a drastically different meaning when analyzed across separate situations. For example, on gaming platforms, there is a tradition of ‘trash talk’ – users communicating and giving each other a hard time to drive competition. However, the same comment on a dating app could be viewed as harassment or misogyny. Context is critical.

Mental Health of Content Moderators and Data Labelers

Sifting through illegal, offensive, and graphic content on behalf of platforms can cause severe mental distress to the employees who do it. Content moderators often suffer from anxiety, stress, and even PTSD due to their jobs. As a result, high turnover rates plague roles that are meant to be entryways into a new career. And the toll isn't limited to moderators: content moderation begins with data labelers, who review user data and content and identify (or label) it appropriately to develop and train content moderation AI.

From our blog: Protecting the Mental Health of Content Moderators

Ineffective Filtering Tools Such As Profanity Filters

Solutions commonly in use include keyword or RegEx (Regular Expression) filters, which block words or expressions related to banned behaviors. However, because these filters cannot interpret a comment's context, they can accidentally eliminate safe content.

For example, a virtual paleontology conference used a filter to moderate content in real-time, but accidentally eliminated content that had terms commonly used by paleontologists, including pubic, bone, and stream. As one attendee stated, “Words like ‘bone,’ ‘pubic,’ and ‘stream’ are frankly ridiculous to ban in a field where we regularly find pubic bones in streams.”
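This failure mode is easy to reproduce. In the small Python sketch below, a naive blocklist pattern flags an ordinary paleontology sentence (an invented example message) because it has no sense of the surrounding context:

```python
import re

# Naive RegEx filter: blocks any message containing a banned term,
# with no understanding of the surrounding context.
BANNED_TERMS = re.compile(r"pubic|bone|stream", re.IGNORECASE)

message = "We regularly find pubic bones in streams at this dig site."
print(bool(BANNED_TERMS.search(message)))  # True -- safe scientific content gets blocked
```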

Changing Tactics

Finally, users that post content that is illegal, graphic, fraudulent, or banned constantly change their approach to evade detection. Human content moderators can adapt to new tactics, but it is very difficult for most automated solutions to keep up.

Methods of content moderation

Proactive Moderation

With proactive moderation (also called pre-moderation), all user-submitted content is screened and approved by a person or an automated tool before it goes live on the site. Content can be published, rejected, or edited depending on how well it meets the guidelines established by the platform. On the plus side, this offers the highest possible level of control for the platform; however, it is expensive, it can be difficult to keep up with the volume of UGC, and the delay caused by pre-moderation can negatively impact the user experience.
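A minimal sketch of that flow, with hypothetical names: every submission starts in a pending state, and nothing becomes visible to other users until a reviewer (human or automated) approves it.

```python
from enum import Enum

class Status(Enum):
    PENDING = "pending"      # held back until reviewed
    PUBLISHED = "published"  # approved and visible to other users
    REJECTED = "rejected"    # violates guidelines, never shown

def submit(content: str, review_queue: list) -> dict:
    """Every submission starts hidden; nothing is visible until approved."""
    item = {"content": content, "status": Status.PENDING}
    review_queue.append(item)
    return item

def review(item: dict, approved: bool) -> None:
    """A human moderator or automated tool resolves each pending item."""
    item["status"] = Status.PUBLISHED if approved else Status.REJECTED
```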

Post Moderation

Post-moderation allows content to be published immediately and reviewed afterward by a live team or a moderation solution. This offers users the gratification of immediacy, as they can see their posts go live as soon as they are submitted. However, it can be quite detrimental to the platform if offensive content makes its way through and is viewed by users before being removed.

Reactive Moderation

A reactive moderation solution involves content moderators becoming involved when a user flags content or files a complaint according to community guidelines. This is more cost-effective, as it reserves valuable human effort for content severe enough to provoke a reaction from another user. However, it is less efficient and gives the platform less control over the content on the site.

Real-Time Automated Moderation

When a platform can moderate user-generated content in real-time, it avoids the pitfalls associated with other moderation methods. Real-time analysis empowers platforms to proactively prevent toxic content and shape users’ experiences in the moment. The user experiences no delays, toxic content is blocked, and human moderators are protected from severe content they might otherwise be exposed to.
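A minimal sketch of the real-time flow, with a hypothetical model and thresholds: each message is scored before delivery, clear violations are blocked automatically, clearly safe content is published immediately, and only the ambiguous middle band is routed to human reviewers.

```python
BLOCK_THRESHOLD = 0.9    # hypothetical: confident violation, block automatically
REVIEW_THRESHOLD = 0.6   # hypothetical: ambiguous, route to a human moderator

def toxicity_score(message: str) -> float:
    """Stand-in for a contextual model scoring each message in-flight."""
    return 0.0

def moderate_in_real_time(message: str, review_queue: list) -> str:
    score = toxicity_score(message)
    if score >= BLOCK_THRESHOLD:
        return "blocked"                  # the toxic content is never delivered
    if score >= REVIEW_THRESHOLD:
        review_queue.append(message)      # humans only see the ambiguous cases
    return "published"                    # no delay for the vast majority of content
```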

Related reading: 7 Best Practices for Content Moderation

How to choose the right approach

Choosing the right solution for a platform is very difficult, as content moderation requirements will vary depending on the platform type and the business's target audience. When deciding on a content moderation approach, be sure to consider:

  • User expectations
  • User demographics
  • Community guidelines
  • Priority of banned behaviors

Once you’ve decided what activities are tolerated on your platform, and to what degree, you can begin building a content moderation strategy. A combination of human moderators and automated solutions is generally the most flexible and efficient method of managing content moderation, particularly on platforms with a high volume of UGC.

Spectrum Labs for real-time, AI-powered moderation

Whether you are looking to safeguard your audiences, increase brand loyalty and user engagement, or maximize your moderators’ productivity, Spectrum Labs can help make your community a better, more inclusive place. Our contextual AI solution is available across multiple content types, including text (chats, usernames, profile info) and voice. Our patented multi-lingual approach means your non-English language users receive the same benefits as English language users.

Spectrum Labs provides user-generated content moderation solutions and services to help consumer brands recognize and respond to toxic behavior. The platform identifies 40+ behaviors across languages, enabling Trust & Safety teams to deal with harmful issues in real-time. Spectrum Labs’ mission is to unite the power of data and community to rebuild trust in the Internet, making it a safer and more valuable place for all.

Contact Spectrum Labs Today

Whether you are looking to safeguard your audiences, increase brand loyalty and user engagement, or maximize moderator productivity, Spectrum Labs empowers you to recognize and respond to toxicity in real-time across languages. Contact Spectrum Labs to learn more about how we can help make your community a safer place.