Consider the following chat messages:
- “Kill yourself”
- “You suck, dumba**”
- “You play like a girl”
- “I’m gonna shoot you”
Without context, each of these messages could be a perfectly innocent example of banter between friends playing a multiplayer game. Without context, each could also be cyberbullying that causes real harm to the recipient. So how does a gaming platform differentiate between the two and keep its community safe?
But first, a little context for how we got here. In the world of Trust & Safety, gaming has a few unique challenges.
First, the subject matter of the games themselves. Many multiplayer games are battle-focused: the winners live and the losers die. In a team setting, sacrificing a single player can actually help the team win (e.g., “Kill yourself” may be a strategic team move). Some games also revolve around shooting, in which case telling someone you’re going to shoot them is perfectly normal.
Second, the nature of competitive gameplay. Games are often designed to put players in stressful situations where they want to win at any cost, which produces a very different set of behaviors than what would otherwise be “normal.” And because online players are anonymous, represented by fun usernames and icons, there are often no social consequences when behaviors cross the line. And they do.
According to a Ditch the Label report based on a survey of 2,515 gamers aged 12 to 25, 57% of players have been bullied in online games and 47% have received threats. 22% of respondents also said they had quit playing a game because of online bullying.
So, in a world where the player community is arguably a game’s most valuable asset, it is incredibly important for platforms to be able to differentiate between context-appropriate player behavior and harmful cyberbullying. And it’s important to do this without flooding moderation teams with false positives and causing them undue stress.
Using context to identify cyberbullying in online games
Enter Spectrum’s Behavior Identification. Our technology takes context into consideration in three main ways:
1. Aspect (“Conversational”) Models: We look beyond a single message to detect behaviors. We don’t just see “kill yourself,” mark it as a threat, and call it a day. Instead, we look at the messages before and after it to see the full picture. We also look at signals like sentiment and emotion in the responses to see how the message was received. This allows us to get a sense of whether players are actually being disrupted by these messages.
2. Metadata: We use metadata both as an input to our behavior models and to our custom automation builder, so our customers can tailor their responses based on the setting of the behavior. Metadata such as difficulty level, time of day, a player’s tenure, and even user reputation can be very useful in understanding the intent behind messages and how they may be received.
3. Models Customized Per Game: A World War II shooter is going to have a slightly different definition of, and language around, Violence than a FIFA soccer game. Even within the same customer’s instance, we set up separate flows for each game so we can iterate on models specific to that game’s context. This means each game essentially gets its own branch of our baseline model, fine-tuned to that game’s needs.
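To make the idea concrete, here is a minimal, purely illustrative sketch of how conversational context and metadata might shift a moderation decision. All names (`Message`, `Context`, `assess`, the phrase list, and the score weights) are hypothetical assumptions for this example; Spectrum’s actual system uses trained ML models, not hand-written rules like these.

```python
from dataclasses import dataclass

# Hypothetical phrases that would be flagged with zero context.
FLAGGED_PHRASES = {"kill yourself", "i'm gonna shoot you"}

@dataclass
class Message:
    author: str
    text: str

@dataclass
class Context:
    preceding: list    # conversational signal: messages before the flagged one
    following: list    # conversational signal: replies, to gauge reception
    game_genre: str    # metadata: e.g. "shooter", "sports"
    same_party: bool   # metadata: are the players in one friend group?

def assess(message: Message, ctx: Context) -> str:
    """Return 'escalate', 'monitor', or 'allow' for a single message."""
    if message.text.lower().strip() not in FLAGGED_PHRASES:
        return "allow"
    score = 1.0
    # Prior hostility from the same author raises the risk.
    prior = [m for m in ctx.preceding
             if m.author == message.author
             and m.text.lower().strip() in FLAGGED_PHRASES]
    score += 0.3 * len(prior)
    # Conversational signal: a lighthearted reply suggests banter.
    replies = " ".join(m.text.lower() for m in ctx.following)
    if any(tok in replies for tok in ("lol", "haha", "gg")):
        score -= 0.5
    # Metadata signals: in-genre language and friend parties lower risk.
    if "shoot" in message.text.lower() and ctx.game_genre == "shooter":
        score -= 0.3
    if ctx.same_party:
        score -= 0.3
    if score >= 0.8:
        return "escalate"
    return "monitor" if score > 0.2 else "allow"
```

The same words land in different buckets depending on context: “I’m gonna shoot you” followed by “lol gg” between party members in a shooter scores as harmless, while the identical message from a repeat offender to a stranger in a soccer game escalates to moderators.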
Multiplayer games are a great way for people to connect and take a break from the “real world.” Let’s keep them going while also encouraging positive player behavior and safe communities by considering context.