
The car was designed to provide transportation, not to harm people, but a sad side effect of the rise of the automobile is that accidents happen all the time. Although a national seat belt law required all new cars to have seat belts from 1968 onward, there was no coordinated effort to get people to wear them. Sixteen years, millions of accidents, and hundreds of thousands of deaths later, individual states began requiring passengers to wear a seat belt, proactively preventing the injuries and deaths that came with increased car travel.
Online communities and platforms require a similarly proactive approach. The consequences of a Trust & Safety failure, particularly a failure to protect the most vulnerable people on the internet, can be as far-reaching and life-altering as a car crash. A predator who successfully grooms a child can devastate that child’s wellbeing and destroy their chance at happiness.
Create A Platform for Users to Report Unwanted Behaviors
The first layer of content moderation that most online platforms employ is user complaints: a user reports an issue, and the company has somebody look into it. However, these reporting channels are used quite rarely (STAT), so a problem can quickly grow out of control.
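To make this first layer concrete, here is a minimal sketch of what report intake might look like. It assumes a simple in-memory queue, and every field name and category below is an illustrative placeholder rather than any platform’s actual API:

```python
"""Minimal sketch of user-report intake; names and categories are
illustrative assumptions, not a real platform's schema."""
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical report taxonomy; real platforms define their own.
REPORT_CATEGORIES = {"harassment", "grooming", "spam", "self_harm", "other"}

@dataclass
class UserReport:
    reporter_id: str
    reported_content_id: str
    category: str
    details: str = ""
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Stand-in for a persistent queue feeding the Trust & Safety team.
review_queue: list[UserReport] = []

def submit_report(report: UserReport) -> None:
    """Validate and enqueue a user report for human review."""
    if report.category not in REPORT_CATEGORIES:
        report.category = "other"  # never reject a report over taxonomy
    review_queue.append(report)
```

Even a well-built intake like this only helps when users actually file reports, which is exactly the limitation discussed next.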
In many cases involving minors, the children are unaware that they have been targeted by bad actors and are being led into increasingly harmful situations. They may be unable to recognize grooming behaviors, or they may be unwilling or unable to file a report for the platform to investigate.
Moreover, user reporting is inherently reactive: the platform only learns of a problem once the harm is done, which is too little, too late. It is time for Trust & Safety teams to move to a proactive approach and prevent their users from coming to harm in the first place.
Download the Whitepaper to Discover How to Align Your Company on Trust & Safety
Proactive Internet Safety Strategies
In the cyber world, internet safety strategies are like the seat belts and the sensors that warn drivers of trouble or automatically engage the brakes to prevent a crash. These approaches deploy AI-based solutions, such as natural language processing and contextual AI, to spot the early warning signs of danger and automate a response that protects the potential victim.
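As a rough illustration of the idea, the sketch below accumulates risk signals across a conversation and escalates before harm occurs. The patterns, weights, and threshold are invented for illustration; a production system would replace them with trained NLP and contextual-AI models:

```python
"""Minimal sketch of a proactive early-warning pipeline. The patterns,
weights, and threshold are illustrative assumptions, standing in for
trained NLP / contextual-AI models."""
import re
from dataclasses import dataclass

# Hypothetical early-warning signals a contextual model might learn.
RISK_PATTERNS = {
    r"\bhow old are you\b": 2,
    r"\b(don'?t|do not) tell (your )?(mom|dad|parents|anyone)\b": 5,
    r"\b(send|share) (me )?(a )?(pic|photo|picture)s?\b": 4,
    r"\blet'?s (talk|chat) somewhere else\b": 3,  # off-platform move
}

ESCALATE_AT = 5  # hypothetical threshold for automated intervention

@dataclass
class Verdict:
    score: int
    signals: list[str]

def score_message(text: str) -> Verdict:
    """Score one message against the early-warning patterns."""
    lowered = text.lower()
    signals = [p for p in RISK_PATTERNS if re.search(p, lowered)]
    return Verdict(sum(RISK_PATTERNS[p] for p in signals), signals)

def moderate(conversation: list[str]) -> str:
    """Accumulate risk across a conversation and act before harm is done."""
    total = 0
    for message in conversation:
        total += score_message(message).score
        if total >= ESCALATE_AT:
            # Proactive, automated response: pause the conversation and
            # queue it for a human Trust & Safety reviewer.
            return "escalated_to_human_review"
    return "no_action"

if __name__ == "__main__":
    chat = ["hey, how old are you?", "don't tell your parents we talk"]
    print(moderate(chat))  # -> escalated_to_human_review
```

The key design choice is scoring the conversation rather than a single message: grooming rarely announces itself in one line, so risk has to accumulate across context.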
Benefits of Proactive Trust & Safety Solutions
Prevent Secondary Trauma
Removing toxic and harmful content from your platform benefits your users, but it can also prevent secondary trauma for your content moderation review team. When Lisa Thee entered the field of cyber safety for children, she was unprepared for the impact that witnessing such abuse would have on her emotional stability, and she speaks of suffering from PTSD. She is not alone; in May 2020, Facebook agreed to pay $52 million to moderators suffering from PTSD. The more you rely on machines to identify and escalate instances of abuse, the more you can protect your Trust & Safety team from psychological harm.
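One common way machines absorb that burden is by matching uploads against hashes of content that has already been verified as abusive, so moderators never have to see it again. The sketch below uses an exact SHA-256 match as a stand-in for the perceptual hashing (such as Microsoft’s PhotoDNA) that real systems use to catch near-duplicates:

```python
"""Minimal sketch: keep known abusive images away from human reviewers.
SHA-256 exact matching is a stand-in for perceptual hashing such as
PhotoDNA; the hash set below is a hypothetical placeholder."""
import hashlib

# Hypothetical database of hashes of already-verified abusive images.
KNOWN_ABUSE_HASHES: set[str] = set()  # populated from a vetted hash list

def route_upload(image_bytes: bytes) -> str:
    """Auto-handle known content; only novel content reaches a person."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_ABUSE_HASHES:
        # Machine-handled: remove and report without human exposure.
        return "auto_removed_and_reported"
    # Only previously unseen content is queued for the human team,
    # shrinking moderators' exposure to traumatic material.
    return "queued_for_human_review"
```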
Check out our blog: Protecting the Mental Health of Content Moderators
(As an aside, Lisa Thee recommends that all platforms build a team of content moderators rather than relying on a single individual to perform this function. Access to a community of people who understand what a moderator experiences boosts each moderator's emotional wellness.)
Download the Online Child Safety White Paper to Discover the Regulatory Response to CSAM