Toxic behavior such as hate speech can harm users both online and offline, and it can escalate if not addressed. In this masterclass, Kris McGuffie, Research Director at Spectrum Labs, Kesa White, Program Research Associate at PERIL, and Jamie Cohen, Head of Education at Digital Void, shared strategies for detecting toxic behavior, explained why this behavior is so difficult to detect, and offered ways to encourage a positive online community.
What is hate speech?
For Internet platforms, hate speech is generally any explicit or suggestive attack against a person or group on the basis of race, religion, ethnic or national origin, sex, disability, sexual orientation or gender identity. Most major platforms roughly adhere to this definition, but due to nuances in policy and challenges of moderation, enforcement can result in disparities.
I Was Just Joking
The seemingly harmless defense of "it's just a joke" is often the loudest and garners the most attention. When it comes to hate speech and radicalization, platforms are dealing with a well-equipped and very savvy group of bad actors. Early signs may include an increase in insults and violent discourse that transitions into hate speech or exposure to radicalized content, including links to content on external web pages or other platforms.
Irony memes prey on relatable memes by undermining the formats and preconditions that make them work. They often emerge to represent a group or a shared understanding; one example is the OK hand gesture. These memes can be used to push people toward civic action or to troll others on a divisive topic. For platforms, it is important to examine the surrounding language, because these memes are a good predictor of bad actors when combined with compounding behaviors, user reports, or repeated platform violations by the same user over time.
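The compounding-signal idea above can be sketched as a simple heuristic. This is a minimal sketch, assuming a hypothetical per-user ledger of moderation signals; the signal names, weights, and threshold are illustrative, not drawn from any specific platform:

```python
from collections import defaultdict

# Illustrative weights for moderation signals; a real system would
# tune these against labeled moderation data.
SIGNAL_WEIGHTS = {
    "user_report": 1.0,
    "guideline_violation": 2.0,
    "flagged_irony_meme": 0.5,
}

class UserRiskLedger:
    """Accumulates moderation signals per user so that individually
    weak signals (e.g. a single ironic meme) compound over time."""

    def __init__(self, review_threshold=3.0):
        self.scores = defaultdict(float)
        self.review_threshold = review_threshold

    def record(self, user_id, signal):
        # Unknown signals contribute nothing rather than raising.
        self.scores[user_id] += SIGNAL_WEIGHTS.get(signal, 0.0)

    def needs_review(self, user_id):
        return self.scores[user_id] >= self.review_threshold

ledger = UserRiskLedger()
ledger.record("u1", "flagged_irony_meme")   # weak signal on its own
ledger.record("u1", "user_report")
ledger.record("u1", "guideline_violation")  # signals compound over time
```

The point of the sketch is that no single event triggers review; only the accumulation across reports, violations, and flagged content does, which matches how bad actors tend to reveal themselves over time.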
Platforms are continuing to look at ways to deploy restorative practices to build brighter communities. Below are a few recommendations from our panel on how to support this effort.
Give users more choice in how they respond (thumbs up, heart, etc).
Give users ways to contribute to community-building behaviors.
Know your platform ethics, use those to guide the structural elements that influence human behavior.
Create tangible and efficient ways for people to provide useful information and encourage sharing in the community.
Encourage users to display positive sportsmanship online.
For dating platforms, maintain a considerate code of conduct.
Platforms can encourage good behavior through community guidelines and platform culture.
Check out Tinder's community guidelines on impersonation: relatable, funny, and human.
Platforms are here for community, so reward community behavior when it's working, or users will go someplace else.
Develop metrics for joy, the positive counterpart to the metrics you use to evaluate toxic behavior. In-app metrics as well as user research can help with this effort.
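One way to operationalize a joy metric is to compute it over the same event stream as a toxicity rate. A minimal sketch, where the event names are hypothetical placeholders for a platform's own analytics taxonomy:

```python
# Hypothetical event names; a real platform would use its own
# analytics taxonomy and validate the choices with user research.
POSITIVE_EVENTS = {"reaction_positive", "thanks_message", "helpful_vote"}
TOXIC_EVENTS = {"hate_speech_flag", "harassment_report"}

def joy_ratio(events):
    """Share of tracked events that are positive, mirroring how a
    toxicity rate would be computed over the same event stream."""
    positive = sum(1 for e in events if e in POSITIVE_EVENTS)
    toxic = sum(1 for e in events if e in TOXIC_EVENTS)
    tracked = positive + toxic
    return positive / tracked if tracked else None

sample = ["reaction_positive", "thanks_message",
          "hate_speech_flag", "reaction_positive"]
```

Pairing the in-app ratio with qualitative user research helps confirm that the events counted as "positive" actually correspond to what users experience as joy.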
The biggest thing platforms need to be aware of is that online harms can often become offline harms. The more proactive we are, not only in detecting these trends but in building better tools to evaluate the metrics and take action, the brighter our online communities will be for our users. Working with experts to stay on top of this toxic behavior is also a must: not everyone needs to be an expert, but knowing who to reach out to and educating your team and company will be essential going forward, since radicalization and hate speech behaviors are playing out globally in multiple languages.
Resources Shared in the Masterclass
Pepe the Frog, one of the first memes studied in conjunction with hate speech and radicalization
Emojipedia - stay up to date with emoji trends and how they are being used
Dangerous Speech Project - excellent research around operationalizing hate speech in different regions and how individuals can use counter speech to de-escalate online hate.
Resources on Moderating Hate Speech