Those figures are alarmingly higher among traditionally marginalized populations – for instance, 51% of LGBT adults say they’ve experienced more severe forms of online harassment, more than double the rate among straight adults.
Strikingly, these violent threats didn’t seem particularly unusual to the women receiving them:
Many of [the Uvalde suspect’s] threats to assault women, the young women added, barely stood out from the undercurrent of sexism that pervades the Internet — something they said they’ve fought back against but also come to accept.
Trust & Safety teams must be given the proper tools to identify violent threats early on, before online harassment spills into the real world. For small platforms, traditional keyword-based moderation tools can help detect abusive terms and hateful language. For larger platforms with more sophisticated Trust & Safety operations, tools like Spectrum Labs’ Guardian suite allow moderation teams to monitor specific users who have a pattern of violations. Additionally, Guardian’s Contextual A.I. can detect threatening language even if it circumvents specific keywords.
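To see why context-aware detection matters, consider a minimal sketch of the keyword-based approach described above. The blocklist and function names here are purely illustrative, not Spectrum Labs’ implementation; the point is that simple character substitutions slip past exact-match filters:

```python
import re

# Illustrative blocklist; a real deployment would use a curated,
# regularly updated list of abusive terms.
BLOCKLIST = {"threat", "abuse"}

def flag_message(text: str) -> bool:
    """Return True if the message contains a blocklisted keyword."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return any(token in BLOCKLIST for token in tokens)

print(flag_message("this is a threat"))   # True  – exact keyword match
print(flag_message("this is a thr3at"))   # False – trivial obfuscation evades the filter
```

The second message carries the same threatening meaning but is missed entirely, which is the gap context-aware models are meant to close.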
If current trends continue, online harassment will worsen in severity and cross legal thresholds more often. As that becomes common, platforms will need Trust & Safety systems with expedited ways to notify law enforcement. After all, threats of rape and mass murder don’t just violate terms of service – they also violate the law. In Uvalde, there were missed opportunities to involve authorities and possibly prevent the mass shooting.
As the line between online and offline harassment becomes more blurred, it’s critical that every online platform with user-generated content creates a Trust & Safety plan from Day One. And as platforms grow, those Trust & Safety measures must scale, removing illegal content before it can spread across the community – and quickly informing authorities of real-world threats.
But with better tools and processes, Trust & Safety teams can begin to reverse the trend of toxic content becoming “normal”.
How Spectrum Labs can help
Spectrum Labs' Contextual A.I. solution evaluates user-generated content in real time across multiple languages, deciphering context and adapting to a changing environment. Accurate, automated, and reliable, Contextual A.I. takes the burden of content moderation off employees, allowing companies to shift resources to higher-level, strategic objectives.