How Riot Games Used Science to Curb Toxic Behavior in League of Legends

By Katie Zigelman

At Spectrum Labs, we're always interested in gaming companies that challenge public perceptions of toxicity and use behavioral science techniques to rein in negative and highly toxic language. One of the coolest examples of behavioral science applied to online toxicity comes from Jeff Lin during his time at Riot Games, creators of the massively popular multiplayer online battle arena game League of Legends.

At GDC 2013, Jeff Lin showcased some of Riot's latest toxicity-combating tools and features, and he talked about how he and his team applied behavioral science to curb toxic behavior in a game that sees over 27 million active players every day.

Preventing Toxic Behaviors in Online Gaming

A significant number of League of Legends players cite toxic behavior as their primary reason for leaving the game. For Jeff Lin, then Lead Designer at Riot with a Ph.D. in Cognitive Neuroscience, finding ways to curb this type of behavior was front of mind. While many gaming communities treat toxic behavior as a "natural" component of gameplay, Jeff and his team decided to run experiments to see whether Riot could directly influence and reduce the level of toxicity in their in-game chat channels.

At Spectrum, we see this idea of natural toxicity relatively often. Cyberbullying, hate speech, misogyny, and radicalization are undoubtedly common in gaming environments, but letting this type of behavior run loose can quickly lead to a loss of players and a destabilization of your game culture.

For Riot, understanding the inner workings of these behaviors could help prevent toxicity and improve the overall gameplay experience for their millions of players. At GDC, Jeff unveiled three critical experiments Riot had run over the past year, along with their impact on behavior and toxicity.

Experiment #1: Shielding Players from the Impact of Toxic Language

The first core pillar of Riot's "behavior team" (a group of behavioral scientists looking to disrupt League of Legends' player toxicity) is to shield players from toxicity. In other words, Riot wanted to see if shielding players from negative language would curb the overall usage of that language.

To test this, Riot introduced an option to disable cross-team chat (the ability to chat with the other team's players) and defaulted that option to "off." In other words, players would automatically start with cross-team chat disabled. Within one week, there was:

  • 32.7% reduction in negative chat

  • 1.9% reduction in neutral chat (i.e., semi-toxic chats)

  • 34.5% increase in positive chats

Better yet, they saw no decline in overall conversation volume. Simply shielding players from negative chat by default reduced overall instances of that negative behavior outright.
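For a sense of the arithmetic behind these numbers, here's a minimal Python sketch of the before-and-after comparison. The raw chat counts are invented for illustration; only the percentage changes mirror the figures Riot reported:

```python
# Toy before/after comparison in the spirit of the cross-chat experiment.
# The counts are hypothetical; only the resulting percentages match the talk.

baseline = {"negative": 100_000, "neutral": 500_000, "positive": 80_000}
one_week_later = {"negative": 67_300, "neutral": 490_500, "positive": 107_600}

for category in baseline:
    change = (one_week_later[category] - baseline[category]) / baseline[category]
    print(f"{category} chat: {change:+.1%}")

# negative chat: -32.7%
# neutral chat: -1.9%
# positive chat: +34.5%
```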

Experiment #2: Reforming or Removing Toxic Players

In Riot's second, year-long experiment, they enabled "The Tribunal," an online portal that collected cases against reported players and displayed their chat logs and items to the community. The community could then vote on whether or not the player was behaving toxically. In effect, Riot was letting the community police itself (with oversight).

Over the year, Riot recorded over 105 million votes and reformed 280,000 players using the Tribunal system. They also found that player votes were almost identical to in-house judgments, making the community an accurate identifier of negative behavior.
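Riot didn't detail how that comparison was made, but conceptually it reduces to an agreement check between two sets of verdicts on the same cases. A toy Python sketch, with invented verdicts using the Tribunal's punish/pardon outcomes:

```python
# Hypothetical agreement check between community votes and staff judgments.
# The verdict lists are invented; real Tribunal cases numbered in the millions.

community_verdicts = ["punish", "pardon", "punish", "punish", "pardon", "punish"]
staff_verdicts     = ["punish", "pardon", "punish", "pardon", "pardon", "punish"]

matches = sum(c == s for c, s in zip(community_verdicts, staff_verdicts))
agreement = matches / len(community_verdicts)
print(f"Community/staff agreement: {agreement:.0%}")  # 83% on this toy sample
```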

To further promote reform, Riot also introduced "reform cards." In the past, Riot would send players vague warnings and bans that didn't spell out the incident behind them, and they found that this caused players to act even more negatively upon returning to the game. With reform cards, players received a shareable link to their Tribunal case that showed them exactly what they had done in the game.

Not only did this decrease toxic behavior after bans, but Jeff also shared several examples of players writing in to apologize for their behavior. The cards also let the community get involved in bans: when players complained on the forums about being banned, everyone could see exactly what they had done and rally behind positive behavior together.

Experiment #3: Creating a Culture of Sportsmanship

By far the most interesting experiment Riot ran over the last year was their "Optimus Experiment." Jeff Lin and his behavioral team decided to test whether priming could influence gamers' behavior. In psychology, priming is the idea that exposure to one stimulus can influence a person's response to a later stimulus. An example given in the keynote was a study in the Journal of Experimental Psychology in which students exposed to brief glimpses of the color red saw their test performance decrease by 20%.

To test whether Riot could create a culture of sportsmanship using in-game stimuli, they varied the in-game tips randomly across accounts. There were multiple categories of change: some users saw tips with fun facts or jokes, while others saw negative or positive behavioral statistics. Riot also changed the colors of the tips and delivered them at different points in the game (e.g., during the match and on the loading screen).
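Riot hasn't shared the assignment mechanics, but a factorial design like this is straightforward to sketch. Below is a minimal, hypothetical Python version; the specific messages (drawn from the examples in the talk), colors, and placements are assumptions made for illustration:

```python
import itertools
import random

# Hypothetical factorial tip assignment in the spirit of the Optimus
# Experiment. Every message/color/placement combination becomes one
# experimental condition, plus a no-tip control group.

messages = [
    "Players who cooperate with their teammates win more games.",     # positive stat
    "Teammates perform worse if you harass them after a mistake.",    # negative stat
    "Who will be the most sportsmanlike player in this game?",        # neutral question
]
colors = ["white", "red", "blue"]
placements = ["loading_screen", "in_match"]

conditions = list(itertools.product(messages, colors, placements)) + [None]

def assign_condition(account_id: int):
    """Stable per-account bucketing: the same account always gets the same tip."""
    return random.Random(account_id).choice(conditions)

print(f"{len(conditions)} conditions; account 42 gets: {assign_condition(42)}")
```

Seeding the random generator with the account ID keeps each player in the same condition across sessions, which matters when you're measuring behavior over millions of games.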

In total, Riot tested priming across 217 unique in-game tip combinations (including control groups). Here's what they found:

  • Users who were exposed to positive behavioral statistics (e.g., "X% of players punished by the Tribunal improved their behavior and are never punished again") in the color white had decreased levels of verbal abuse (6.35% lower), offensive language (5.89% lower), and in-game reports (4.11% lower).

  • Users who were exposed to negative behavioral statistics (e.g., "Teammates perform worse if you harass them after a mistake.") in the color red had decreased levels of negative attitudes (8.34% lower), verbal abuse (6.22% lower), and offensive language (11% lower). However, that exact same message in the color white caused no changes in behavior.

  • Users who were exposed to positive behavioral statistics (e.g., "Players who cooperate with their teammates win X% more games.") in the color blue had decreased levels of negative attitudes (5.13% lower), verbal abuse (3.64% lower), and offensive language (6.22% lower). However, that exact message in the color red caused no changes in behavior.

  • Users who were exposed to a neutral question about behavior (e.g., "Who will be the most sportsmanlike player in the game?") in the color red had increased levels of negative attitudes (14.86% higher), verbal abuse (8.64% higher), and offensive language (15.15% higher).

It's important to note that this experiment ran across millions and millions of games, and to remember that it took place in 2012–2013. Of course, this long-term study may raise more questions than answers, and things may have changed since then. But it does tell us one thing: priming works. And applying behavioral science to toxic in-game behavior has very real potential to shape how players interact and engage with each other at scale.