More than 40% of U.S. adults say they’ve experienced online harassment, a figure that has remained unchanged since Pew’s 2017 online harassment survey. However, more severe forms of cyber harassment like physical threats, doxxing, and sexual harassment have nearly doubled over the past several years.
Those figures are alarmingly higher among traditionally marginalized populations – for instance, 51% of LGBT adults say they’ve experienced more severe forms of online harassment, more than double the rate among straight adults.
Despite its prevalence and growing severity, the percentage of Americans who consider online harassment a “major problem” has dropped from 62% in 2017 to 55% today.
Have we started to accept harassment as a “normal” part of the everyday online experience?
Online abuse can have offline consequences
Last month’s mass shooting at an elementary school in Uvalde, Texas, was committed by a perpetrator who displayed major warning signs in his online behavior before the attack.
The Uvalde suspect had repeatedly threatened to rape and murder people on the Yubo livestreaming app. Since the attack, several young women who interacted with him on Yubo said he frequently made graphic threats. In one instance, he told a 19-year-old girl that he would break down her door and rape her after she rejected his sexual advances.
Strikingly, these violent threats didn’t seem particularly unusual to the women receiving them:
Many of [the Uvalde suspect’s] threats to assault women, the young women added, barely stood out from the undercurrent of sexism that pervades the Internet — something they said they’ve fought back against but also come to accept.
– Excerpt from The Washington Post on May 28, 2022
What can be done about this?
Trust & Safety teams must be given the proper tools to identify violent threats early, before online harassment spills into the real world. For small platforms, traditional keyword-based content moderation tools can help detect abusive terms and hateful language. For larger platforms with more sophisticated Trust & Safety operations, tools like Spectrum Labs’ Guardian suite allow moderation teams to monitor specific users who have a pattern of violations. Additionally, Guardian’s Contextual AI can detect threatening language even when it circumvents specific keywords.
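To make the distinction concrete, here is a minimal sketch of the keyword-based approach described above. Everything in it is illustrative: the blocklist, the leetspeak map, and the function names are hypothetical, and real moderation systems use far larger, per-language term lists. The second example also shows the approach’s core weakness: a threat phrased without any blocklisted word passes straight through, which is the gap context-aware detection aims to close.

```python
import re

# Hypothetical blocklist for illustration only; real lists are much larger
# and maintained per language and per community.
BLOCKLIST = {"threat", "attack"}

# Common character substitutions users employ to evade keyword filters.
LEET_MAP = str.maketrans({"@": "a", "0": "o", "1": "i", "3": "e", "$": "s"})

def normalize(text: str) -> str:
    """Lowercase, undo common leetspeak substitutions, strip punctuation."""
    text = text.lower().translate(LEET_MAP)
    return re.sub(r"[^a-z\s]", " ", text)

def flag_message(text: str) -> list[str]:
    """Return any blocklisted terms found in the normalized message."""
    tokens = set(normalize(text).split())
    return sorted(tokens & BLOCKLIST)

print(flag_message("I will @tt@ck you"))        # ['attack']
print(flag_message("You should watch your back"))  # [] -- evades keywords entirely
```

Normalization catches simple evasions like character substitution, but the approach is inherently literal: it scores words, not intent, which is why the second message is not flagged.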
If current trends continue, online harassment will worsen in severity and cross legal thresholds more often. If that becomes common, platforms will need Trust & Safety systems with expedited ways to notify law enforcement when necessary. After all, threats of rape and mass murder don’t just violate terms of service – they also violate the law. In Uvalde, there were missed opportunities to get authorities involved and possibly prevent the mass shooting.
As the line between online and offline harassment becomes more blurred, it’s critical that every online platform with user-generated content creates a Trust & Safety plan from Day One. And as platforms grow, those Trust & Safety measures must scale and be able to remove illegal content before it can spread across the community – and quickly inform authorities of real-world threats.
But with better tools and processes, Trust & Safety teams can begin to reverse the trend of toxic content becoming “normal”. Learn more in Spectrum Labs’ white paper, Managing Hate Speech and Extremism on Your Platform.
How Spectrum Labs can help
Spectrum Labs' Contextual AI solution evaluates user-generated content in real time across multiple languages, deciphering context and adapting to a changing environment. Accurate, automated, and reliable, Contextual AI takes the burden of content moderation off employees, allowing companies to shift resources to higher-level, strategic objectives.
To learn more about Spectrum Labs, contact our team today!