
How to stop online hate speech during the World Cup

By Hetal Bhatt


The World Cup is a global phenomenon that ignites passions everywhere as soccer fans cheer on their favorite teams.

Soccer hooliganism is nothing new. But in the age of social media, harmful behavior can fester on online platforms instead of just stadium parking lots. With a worldwide event, that can lead to a spike in online toxicity from multiple corners of the globe.

Dealing with hate speech is nothing new to Trust & Safety teams – but the sheer volume of it spurred by a global event can catch unprepared teams flat-footed.


World Cup hate speech can be anticipated

With a recurring tournament like FIFA's World Cup, Trust & Safety teams can learn from the past to anticipate what to expect. Hate speech is nothing new at the World Cup – past tournaments have seen racist banners, blackface, and homophobic chants from fans in attendance. Similarly, online racism has been common among fans of major soccer tournaments like the UEFA European Championship.

Unfortunately, despite the predictability of racial abuse, online platforms have failed to stop it.

UK soccer officials say Facebook did nothing to stop racial abuse against their Black players after the Euro 2020 tournament final. Instead, they say Facebook gave them an "athlete safety guide" on how to use Facebook's tools to shield themselves against racist content – putting the onus on players and teams to avoid racist posts, rather than proactively removing them from the platform.

Similarly, research shows Twitter is currently failing to delete 99% of reported World Cup posts that contain racial slurs:

The analysis, conducted by researchers at the Center for Countering Digital Hate (CCDH) and seen by the Observer, included 100 tweets reported to Twitter. Of those, 11 used the N-word to describe footballers, 25 used monkey or banana emojis directed at players, 13 called for players to be deported, and 25 attacked players by telling them to “go back to” other countries.

The Observer: Twitter fails to delete 99% of racist tweets aimed at footballers


The whole world watches the World Cup, and many people watch what's happening online as much as what's on TV. If your platform is mentioned in headlines alongside the World Cup, you do not want it to be about your inability to stop racial slurs.

In 2022, online platforms have no excuse for failing to remove racist posts in their communities.


Automation and scalability are essential for removing online hate speech during a global event

Given the predictable and repeated nature of racist posts during the World Cup, much of the process can be automated.
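As a minimal illustration of that idea, a first-pass automated filter could match the recurring abuse patterns the CCDH study identified – targeted monkey and banana emojis, and "go back to" phrases. The pattern list and function below are hypothetical and purely illustrative; a production system like the ones discussed here would rely on trained, context-aware classifiers rather than a static blocklist:

```python
import re

# Illustrative patterns drawn from the abuse types the CCDH study found.
# A real moderation system needs contextual classification, since phrases
# like "go back to" also appear in benign posts.
TARGETED_EMOJIS = {"\U0001F412", "\U0001F98D", "\U0001F34C"}  # monkey, gorilla, banana
ABUSE_PATTERNS = [
    re.compile(r"\bgo back to\b", re.IGNORECASE),
]

def flag_post(text: str) -> bool:
    """Return True if the post matches a known abuse pattern and should be
    queued for automatic removal or human review."""
    if any(emoji in text for emoji in TARGETED_EMOJIS):
        return True
    return any(pattern.search(text) for pattern in ABUSE_PATTERNS)

print(flag_post("What a goal!"))                 # benign post
print(flag_post("Go back to your own country"))  # matches an abuse pattern
```

Even this crude sketch shows why automation scales where manual review does not: the same predictable patterns recur across thousands of posts, so one rule can remove them all before they accumulate impressions.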

Before changing ownership in 2022, Twitter used automation to instantly remove thousands of racist tweets during the aforementioned Euro 2020 final. According to Twitter, 90% of those tweets were removed proactively, before any user reported them, and only 2% reached 1,000 impressions before being taken down.

If your online platform hasn't developed an automated process to remove repetitive toxicity, collaborating with a content moderation partner is the way to do it.

Spectrum Labs has got you covered on all fronts:

  • Spectrum Labs uses Contextual AI to accurately recognize and remove toxic content from online communities. It works well not only on hate speech but also on more complex harmful behaviors, like harassment and radicalization, that can be problematic for online platforms.

  • Given the international nature of the World Cup, online communities must be prepared to combat toxicity from all corners of the globe. With multi-language capabilities, Spectrum Labs can help platforms scale their content moderation efforts worldwide to proactively remove racist posts in any language.

  • We've found that a disproportionate amount of harmful content is created by a small portion of users – across all online platforms, roughly 3% of users post 30% of toxic and illegal content. Spectrum Labs' user-level moderation feature allows Trust & Safety teams to pinpoint problematic users and remove them to stop the bulk of hateful content at the root.
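The 3%-of-users figure suggests why user-level moderation pays off: surfacing a small set of repeat offenders addresses a disproportionate share of the harm. The sketch below is a hypothetical illustration of that aggregation step (the function name and threshold are ours, not Spectrum Labs' API):

```python
from collections import Counter

def worst_offenders(flagged_user_ids, top_fraction=0.03):
    """Given one user_id per post already flagged as toxic, return the small
    set of users responsible for the bulk of the flagged content.

    top_fraction: share of distinct users to surface (the illustrative 3%).
    """
    counts = Counter(flagged_user_ids)
    n_users = max(1, round(len(counts) * top_fraction))
    return [user for user, _ in counts.most_common(n_users)]

# Example: "u1" generates most of the flagged posts, so it surfaces first.
flagged = ["u1"] * 30 + ["u2"] * 3 + [f"u{i}" for i in range(3, 100)]
print(worst_offenders(flagged, top_fraction=0.03))
```

Acting on the top of this ranking (warnings, rate limits, or bans) removes future toxic posts at the source instead of chasing them one by one.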

If you'd like to learn more about how Spectrum Labs can automate and scale your efforts to combat hate speech, check out our eBook below or get in touch with us directly!

Spectrum Labs Solution Guide: Hate Speech & Extremism

Learn more about how Spectrum Labs can help you create the best user experience on your platform.