
How Major Online Gaming Companies Are Combatting Toxicity in 2020

By Katie Zigelman

As social gaming continues to mature and expand across the gaming industry, almost every socially driven video game is running into a core issue. Player toxicity (e.g., misogyny, racism, sexual grooming, violent language, cyberbullying, etc.) is eroding gaming communities. At GDC 2013, Riot Games' lead designer of social systems, Jeffrey Lin, admitted that player toxicity was one of the most frequently cited reasons players gave for leaving League of Legends.

For many gamers, toxic behaviors are the norm. They're the "price" of playing games. But, as women continue to feel forced to mask their identities, sexual grooming continues to plague gaming environments, and instances of racism and homophobia continue to grow, some of the largest gaming companies in the world are fighting back.

Over the past few years, we've learned that toxicity can be reduced through savvy mechanics and intelligent AI and data distribution. Contrary to popular belief, toxic language isn't an inherent part of gaming (or worse, a necessity for enjoyment). Instead, it's a culturally-driven phenomenon that can be reduced, penalized, and squashed with ongoing behavioral efforts.

Here are some of the ways that the world's leading gaming companies are fighting back against toxic environments.

Blizzard Gives the Community Control of Toxicity on Overwatch

Overwatch, one of Blizzard's flagship multiplatform social games, has been fighting a long-running war against toxicity. Not only is Overwatch a poster child of sorts for toxic gaming, but Blizzard has been trying to reduce the toxic elements of the game since launch. In Blizzard's eyes, toxic behavior is one of the key drivers of player abandonment, and finding ways to reduce these toxic behaviors is a "major initiative" for Overwatch's core team, to the point that fighting toxicity slows down updates.

While Blizzard has had some success fighting toxicity by tracking down toxic players on and off the game, the results weren't scalable. In 2018, Blizzard announced a new social feature aimed at reducing toxic behaviors: an endorsement system. Players can endorse other players for positive behaviors, and those endorsements show up as a badge next to their name.
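To make the mechanic concrete, here's a minimal sketch of how an endorsement-and-badge system along these lines could be modeled. This is not Blizzard's implementation; the categories, thresholds, and per-match decay are illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical endorsement tracker: players endorse teammates in a few positive
# categories, scores decay slightly each match, and the score maps to a badge
# level. All category names, thresholds, and decay values are illustrative.

CATEGORIES = {"sportsmanship", "good_teammate", "shot_caller"}
LEVEL_THRESHOLDS = [0, 5, 15, 30, 50]   # score needed for badge levels 1-5
DECAY_PER_MATCH = 0.98                  # scores erode unless re-earned

class EndorsementTracker:
    def __init__(self) -> None:
        self.scores = defaultdict(float)  # player_id -> endorsement score

    def endorse(self, from_player: str, to_player: str, category: str) -> None:
        # Ignore invalid categories and self-endorsements.
        if category not in CATEGORIES or from_player == to_player:
            return
        self.scores[to_player] += 1.0

    def end_of_match(self, players: list[str]) -> None:
        # Apply a small decay so badge levels must be actively maintained.
        for player in players:
            self.scores[player] *= DECAY_PER_MATCH

    def badge_level(self, player: str) -> int:
        score = self.scores[player]
        level = 0
        for lvl, needed in enumerate(LEVEL_THRESHOLDS, start=1):
            if score >= needed:
                level = lvl
        return level
```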

According to Blizzard, this in-game policing system has had success: they reported a 40% reduction in overall toxicity in 2019. Of course, there are still issues. Systemic toxicity isn't being eliminated; it's being sheltered. Players can feign kindness in four games and go all-out aggressive in the next two. The underlying drivers of toxicity aren't being reduced. Instead, the system rewards players for conforming to in-game social norms, and if those norms themselves are toxic, players will keep being toxic, just on teams that promote toxicity.

If you get four of your friends together and group up, you can still be toxic. In fact, you can farm endorsements one day and resume toxicity the next. The system is fighting against mathematics, not toxicity at scale. The question is whether these in-game systems will continue to reduce toxicity, or whether Overwatch's endorsement system has already capped its impact on toxic social behaviors.

Valve Investigates Shielding Players from Abuse

When it comes to toxic games, Valve's Counter-Strike: Global Offensive (CS:GO) is, without a doubt, a prime example of a toxic environment. Not only is toxic behavior the norm in CS:GO, but some of the game's biggest stars endorse and actively defend toxic language.

To help combat this plague of social toxicity, Valve announced an auto-mute feature in February 2020. The premise is simple: if a player receives too many behavioral in-game reports, they're automatically muted until they play enough games to remove the mute. Of course, there are some surface-level issues with this system. For starters, the punishment essentially encourages toxic players to continue playing. And, since toxicity isn't only language-oriented (e.g., losing games on purpose, deliberately annoying other players, etc.), we're not sure this will reduce toxicity en masse.

But, it's a simple, possibly effective solution that may work well at reducing toxicity. We'll have to wait and see.
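For illustration, here's a minimal sketch of a report-threshold auto-mute rule of the kind described above. The report threshold and the amount of post-mute XP needed to clear the penalty are assumptions, not Valve's actual values.

```python
# Hypothetical report-driven auto-mute: a player who accumulates enough
# behavioral reports is muted for other players until they earn back enough
# XP by continuing to play. Both constants below are illustrative.

REPORT_THRESHOLD = 10
XP_TO_CLEAR_MUTE = 5000

class PlayerModeration:
    def __init__(self) -> None:
        self.report_count = 0
        self.muted = False
        self.xp_since_mute = 0

    def add_report(self) -> None:
        self.report_count += 1
        if not self.muted and self.report_count >= REPORT_THRESHOLD:
            self.muted = True
            self.xp_since_mute = 0

    def add_match_xp(self, xp: int) -> None:
        # The penalty is worked off by playing more games, which is exactly
        # the incentive problem noted above.
        if self.muted:
            self.xp_since_mute += xp
            if self.xp_since_mute >= XP_TO_CLEAR_MUTE:
                self.muted = False
                self.report_count = 0
```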

FACEIT Launches Machine Learning Initiative "Minerva" to Reduce In-Game Toxicity at Scale

Of course, auto-muting isn't the only initiative aimed at reducing CS:GO toxicity. In 2019, FACEIT, a leading third-party matchmaking platform for CS:GO, launched Minerva, a machine-learning-enabled AI that analyzes reports and in-game messages to initiate corrective actions. To keep things simple: Minerva ingests player reports, analyzes the messages sent during the game to determine whether a report is a "false flag" or a real issue, and hands out corrective actions within seconds of the game's end.
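Roughly, the pipeline described above could be sketched like this. The keyword heuristic stands in for whatever classifier Minerva actually uses, and every name and threshold here is a hypothetical placeholder.

```python
from dataclasses import dataclass

# Rough sketch of a report-triggered chat-moderation pipeline: ingest a report,
# score the reported player's chat from that match, then dismiss the report as
# a false flag or issue a warning/cooldown. The scorer is a stand-in keyword
# heuristic, not the real model.

TOXIC_TERMS = {"idiot", "trash", "uninstall"}  # placeholder vocabulary

@dataclass
class Report:
    reported_player: str
    match_chat: list[str]  # messages sent by the reported player this match

def toxicity_score(messages: list[str]) -> float:
    # Fraction of the player's messages containing a flagged term.
    if not messages:
        return 0.0
    hits = sum(any(term in msg.lower() for term in TOXIC_TERMS) for msg in messages)
    return hits / len(messages)

def resolve_report(report: Report, warn_at: float = 0.2, ban_at: float = 0.5) -> str:
    score = toxicity_score(report.match_chat)
    if score >= ban_at:
        return f"cooldown issued to {report.reported_player}"
    if score >= warn_at:
        return f"warning issued to {report.reported_player}"
    return "report dismissed as false flag"
```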

So far, Minerva is reported to have reduced toxicity by 20%. Again, there are a few issues. Using AI to moderate and contextualize verbal abuse is an incredibly innovative approach that we firmly stand behind, but Minerva primarily leverages chat logs to make decisions, and plenty of toxicity happens outside of chat. Cheating and "griefing" (i.e., non-verbal in-game actions meant to ruin the match for others) both happen outside of chat logs. And sexual grooming and solicitation are heavily contextual behaviors that require sophisticated situational awareness. So far, Minerva hasn't demonstrated an ability to combat these hyper-contextual behaviors beyond verbal chat-log abuse.

At Spectrum, we believe that context and non-verbal cues are critical components of reducing toxicity. Friends trash-talking each other is an entirely different issue from people verbally abusing strangers. As online gaming toxicity continues to plague the industry, we propose a more rounded, machine-learning-enabled solution that understands and utilizes context beyond chat logs. Contact us to learn more.

Learn more: Spectrum Labs Guardian Content Moderation AI