
5 Trust and Safety Best Practices

By Katie Zigelman

People are spending more and more time in the digital world: the amount of time U.S. adults spend online each day has risen an estimated 30% since the start of COVID-19.1 And as we spend more time online, recent events have intensified scrutiny of how digital platforms support user safety.

The riot at the U.S. Capitol on January 6 was enabled by social platforms, which gave organizers a way to communicate with and mobilize people across a wide geographic area. Some participants even live-streamed their activities that day, and the FBI is actively investigating users of Reddit, Facebook, Twitter, and YouTube to track down alleged rioters.2

Misinformation, falsified news, and posts generated by bots posing as real people have affected the political climate and the outcome of elections globally. Online platforms, including Facebook and Twitter, have come under review by journalists, lawmakers, regulatory bodies, and law enforcement seeking to determine who is responsible for protecting user safety.3

While this responsibility is being debated at a macro level, individual platforms should take matters into their own hands. Here are 5 best practices every platform should follow to foster a safe, inclusive, and engaged community:

Download Whitepaper: Trust and Safety Alignment: A Whole Company Opportunity


Trust and Safety Best Practices for Platforms

1. Create and Share Community Guidelines

Creating comprehensive guidelines and making them available to users is the required first step for Trust and Safety. The best community guidelines are specific to the platform and include examples of violations, so that expectations are clear and easy to understand.

Guidelines should also be reviewed regularly and updated to address emerging trends and changes in your platform and the Trust and Safety landscape.

Learn More about Regulatory Compliance

2. Enable User Reporting

User reporting is one of the fundamental tools in the Trust and Safety arsenal. Users need a way to communicate with the platform directly to bring violations to the attention of the moderation team. The process for reporting inappropriate behavior should be included in the published guidelines and easily accessible to users. (Note: While this is an important avenue for communication between a platform and its users, it can also be underused and misused by community members. It’s important to layer in additional detection capabilities to protect your users and brand alike.)
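As an illustration, here is a minimal sketch of what a reporting flow might look like. Every name in it (ReportReason, UserReport, submit_report) is hypothetical; a real platform would persist reports, rate-limit submissions to curb misuse, and route reports to the moderation team.

```python
# A minimal sketch of a user-reporting flow, using a hypothetical in-memory
# queue. A real platform would persist reports, rate-limit submissions to
# curb misuse, and route them into the moderation team's tooling.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import List


class ReportReason(Enum):
    # Categories should mirror the published community guidelines.
    HARASSMENT = "harassment"
    HATE_SPEECH = "hate_speech"
    MISINFORMATION = "misinformation"
    OTHER = "other"


@dataclass
class UserReport:
    reporter_id: str
    content_id: str
    reason: ReportReason
    details: str = ""
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


moderation_queue: List[UserReport] = []


def submit_report(report: UserReport) -> None:
    """Accept a user report and enqueue it for moderator review."""
    moderation_queue.append(report)


submit_report(UserReport("user-42", "post-1001", ReportReason.HARASSMENT,
                         "Repeated personal attacks in the comments"))
print(len(moderation_queue))  # -> 1
```

Tying the reason codes directly to the published guidelines keeps reports actionable and reinforces the expectations set in best practice #1.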

To ensure good Trust and Safety practices, an online platform’s corporate culture should reinforce the importance of Trust and Safety for everyone. One of the struggles many encounter when building a team is getting buy-in from other departments; making Trust and Safety a common goal across the company can help a platform overcome that resistance.

3. Build Trust with Transparency

Sharing Trust and Safety metrics with the public is critical for platforms, helping to build relationships with users through open and honest communication. Many online platforms publish transparency reports quarterly or annually, sharing statistics such as government requests for information, the number and types of violations, and appeals and restorations of content. This shows current and future users that you take their safety seriously.

4. Define KPIs

Deciding on relevant metrics and tracking key performance indicators (KPIs) is important for several reasons. Metrics are necessary to understand current Trust and Safety processes and to measure the effectiveness of new initiatives. KPIs are a common tool across different areas of a business, providing easy-to-understand ways to speak to stakeholders about processes and initiatives and to align different departments with Trust and Safety goals.

Trust and Safety is challenging because, like the security space, it has a clear definition of what is bad (negative press coverage, brand reputation damage, declining engagement rates) but no clear definition of what is good. Companies must reflect on and define the goals they are comfortable with and identify a way to measure progress toward them.
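As a concrete illustration, the short sketch below computes two commonly tracked Trust and Safety KPIs from hypothetical data: a proactive detection rate and a median time-to-action. The metric choices, names, and numbers are examples only; each platform must pick KPIs that fit its own goals.

```python
# An illustrative sketch of two commonly tracked Trust and Safety KPIs,
# computed from hypothetical sample data.
from datetime import timedelta

# Each record: (was the violation detected proactively?, time from surfacing to action)
actions = [
    (True, timedelta(minutes=4)),
    (False, timedelta(hours=2)),
    (True, timedelta(minutes=15)),
    (False, timedelta(minutes=45)),
]

# Proactive detection rate: share of violations caught before any user reported them.
proactive_rate = sum(1 for proactive, _ in actions if proactive) / len(actions)

# Time-to-action: how quickly the team acts once content surfaces
# (upper median, for simplicity, when the count is even).
durations = sorted(t for _, t in actions)
median_tta = durations[len(durations) // 2]

print(f"Proactive detection rate: {proactive_rate:.0%}")  # 50%
print(f"Median time-to-action: {median_tta}")             # 0:45:00
```

Tracked over time, even simple metrics like these give stakeholders a shared, concrete way to discuss whether new initiatives are working.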

5. Support Moderator Wellness

Content moderators are regularly exposed to nudity, sexual content, and violent acts; this exposure causes discomfort and distress and can lead to serious mental health conditions such as PTSD, anxiety, and depression. Without content moderation tools and policies that protect moderators, high turnover rates can plague roles that are meant to be entryways into a new career.

Integrate advanced AI moderation tools that take the first pass at content, weeding out clearly toxic material and attaching validating context to what remains. This shrinks the human moderation queue, reserving moderators’ effort for more nuanced content and increasing the job satisfaction, efficiency, and productivity of your teams.
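A minimal sketch of such a tiered pipeline is below. The classifier, the thresholds, and the routing labels are all hypothetical stand-ins; real thresholds would be tuned per platform, language, and content type.

```python
# A minimal sketch of a tiered "AI first pass" moderation pipeline,
# assuming a hypothetical classifier that returns a toxicity score in [0, 1].
from typing import Callable

AUTO_REMOVE_THRESHOLD = 0.95   # high confidence: act without waiting for review
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain: route to a moderator with context


def triage(content: str, score_fn: Callable[[str], float]) -> str:
    """Route content by model confidence so humans only see the edge cases."""
    score = score_fn(content)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto_removed"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # queued with the score as validating context
    return "published"


# Stand-in scorer for demonstration; a real system would call a trained model.
fake_score = lambda text: 0.97 if "toxic" in text else 0.10

print(triage("a toxic insult", fake_score))    # auto_removed
print(triage("a friendly hello", fake_score))  # published
```

The key design choice is that only the uncertain middle band reaches humans: unambiguous content is handled automatically in both directions, which is what shrinks the queue and limits moderators’ exposure to the worst material.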

Platforms can also support moderator wellness with remote work policies, extended benefits like mental health counseling, and by developing career paths to place them in advanced positions that would benefit from their experience working on the frontline.

Related Reading: Protecting the Mental Health of Content Moderators

Strengthen Your Content Moderation Efforts with Contextual AI

A Trust and Safety team should periodically evaluate the tools and solutions it uses to enforce community guidelines with transparency and consistency. Contextual AI from Spectrum Labs evaluates multiple data points to identify harmful behaviors more accurately, outperforming keyword-based tools like profanity filters. It enables real-time analysis to proactively prevent toxic content and shape your users’ experience in the moment. Our solution is available across multiple content types and languages.

Whether you are looking to safeguard your audiences, increase brand loyalty and user engagement, or maximize your moderators’ productivity, Spectrum Labs can help make your community a better place. If you'd like to learn more, download the Spectrum Labs Contextual AI Solution Guide.

Get The Guide

 

1 https://www.wsj.com/articles/how-covid-19-has-transformed-the-amount-of-time-we-spend-online-01596818846

2 https://www.vox.com/recode/22218963/capitol-photos-legal-charges-fbi-police-facebook-twitter

3 https://www.washingtonpost.com/technology/2020/10/28/twitter-facebook-google-senate-hearing-live-updates/