Major world events, such as Russia’s invasion of Ukraine, can have a massive impact within online communities. Toxic content online contributes to hate, discrimination, and violence, and real-world violence contributes to spikes in online toxicity. This is the high-risk cycle that platforms seek to disrupt and mitigate.
Early trends analysis
Spectrum Labs partnered with the Middlebury Institute’s Center on Terrorism, Extremism, and Counterterrorism (CTEC) to more closely examine emerging risk related to the conflict in Ukraine. CTEC’s findings and additional analysis by Spectrum Labs indicate that much like the toxic trends we have seen with the pandemic, early trends related to Russia’s invasion of Ukraine center around disinformation and malinformation being leveraged and amplified. Intolerant content, including hate speech, and existing hateful conspiratorial narratives target Ukrainians, Jews, Russians, and other groups of people who are identified as being somehow on the wrong side of the conflict.
Identifying tactics and behaviors
The tactic of “flooding the zone” is being used, as bad actors attempt to confuse, distract, and influence with massive amounts of false and misleading information. Fears of global conflict involving nuclear, chemical, and biological warfare are being exploited to push hateful narratives and conspiracies. Amid the confusion, calls for violence, dehumanizing language, denial of civilian atrocities, and related content persist. We know from historical examples such as Nazi Germany, the Rwandan genocide, and the genocide of the Rohingya in Myanmar that human conflict feeds intolerant discourse, and that intolerant discourse leads to widespread violence. In the Trust and Safety space, we recognize this type of situation as central to the harms we work every day to prevent, along with the myriad harms that smaller-scale conflict and intolerant speech may instigate.
Keys to mitigation
Careful linguistic and cultural analysis of intolerant content is the most efficient step towards mitigation. Subtle plays on words, allusions to culturally specific historical events, masked references to existing hateful conspiracies, glorification of violence, and jokey and ironic content all contribute to increased discrimination, hate, and violence. By identifying specific themes and language, along with the users responsible for generating and amplifying intolerant content, platforms will also identify bad actors and their supporters who will exploit any number of real-world conflicts for the same purpose.
Staying on top of early violent, insulting, and hateful trends makes them less likely to take hold on a platform. This is a great opportunity to remind users about the platform's values, simplify access to platform Trust & Safety guidelines, encourage users to use reporting mechanisms, and take a stand against intolerant content. From subtle othering to overt inferiority narratives, intolerant discourse reduces community cohesion and derails the intended purpose of platforms: healthy engagement.
Back to the basics
Terrible events like war or other forms of mass violence, natural disasters, and widespread social unrest provide platforms with an opportunity to return to the essentials of why they exist and how they want their communities to feel for users who need engagement and connection more than ever. While a policy audit is always useful, many platforms will find that their policies are sound and that their best approach remains careful, daily examination of what is happening between their users. This basic commitment to ensuring that our online spaces retain their integrity and value during challenging times sends the signal all of us need—that we have safe places to gather and engage.
Spectrum Labs Solution for Hate Speech & Radicalization
Working with CTEC helped Spectrum Labs create AI solutions that proactively analyze trends, foreign languages, social networks and adversarial behaviors. Download the guide to learn more!
Hate Speech on platforms has the potential to escalate into extremist behavior on and offline, but Trust & Safety teams must overcome many challenges to detect Hate Speech accurately. This white paper covers effective methods for detecting and removing Hate Speech and Extremism from your online community.
Toxic behavior such as hate speech can affect users online and offline and escalate if not addressed. In this masterclass, the Head of Research at Spectrum Labs, a Program Research Associate at PERIL, and the Head of Education at Digital Void will discuss how to detect toxic behavior, why detection is so difficult, and different ways of encouraging a positive online community.