Trust & Safety

Successful Trust and Safety strategies involve the entire business and go beyond moderating user-generated content.


What is Trust and Safety?

Trust and Safety is the set of business practices whereby an online platform reduces the risk that users will be exposed to harm, fraud, or other behaviors that are outside community guidelines. This is becoming an increasingly important function at online platforms as they look to protect their users while improving customer acquisition, engagement, and retention.

Effective Trust and Safety programs create a safe and inclusive environment for users, allowing platforms to build and maintain relationships while growing the size and diversity of the audience.

As platforms support new ways for users to communicate – text, image, voice, video, etc. – their Trust and Safety solutions must evolve to protect users on these channels.

Download the solution guide: Learn how Spectrum Labs can help you scale.


 

Why is Trust and Safety important?

Trust is a foundational concept for human interaction, governing social, political, and economic norms and behaviors. For face-to-face interactions, these norms have had thousands of years to develop, be communicated, and gain broad acceptance. Online interactions, however, have emerged so quickly that there has been little time for equivalent norms to develop.

This puts platforms in the position of managing user interactions: ensuring that the content that is posted (whether generated by an individual or by a bot) is within community guidelines.

Inappropriate content negatively impacts the user experience. People who are bullied, harassed, threatened, or insulted directly will likely come away with negative feelings about their experience – and the people who witness these behaviors will feel it as well. This erodes user safety, puts brand reputation at risk, and can have a direct impact on the long-term success of the business.

Safety is also critically important to your community or platform. Without effective moderation, the online environment can be unsafe for users. The sale of illegal products on marketplaces, the spread of radicalization, and the promotion of extremist behaviors can all put users at risk – and allowing these unsafe behaviors on your platform can result in irreparable damage.

Managing Trust and Safety, then, is a fundamental part of building a safe, inclusive environment for users and a critical concern for online platforms. It incorporates operations, design and engineering, user experience, customer relations, fraud protection, content moderation, and more.


 

Who is responsible for Trust and Safety?

Once a subset of compliance, Trust and Safety has grown in importance to become its own set of strategies and initiatives. Recently, companies have been assigning a person or department to take responsibility for the effectiveness and optimization of Trust and Safety initiatives.

However, Trust and Safety initiatives can affect activities across the company – and can be affected by other departments as well. For example, a Trust and Safety initiative that requires an automated moderation solution embedded in a platform needs the support of Product Development and Engineering. Cooperation between departments on the initial building, testing, and rollout may extend development timelines, but it will result in a better user experience in the end.

Webinar: Trust & Safety is Everyone's Responsibility


 

What metrics can you track for Trust and Safety?

Measuring the effectiveness of Trust and Safety initiatives is critical to justifying strategies and optimizing processes for continuous improvement. Benchmarking and tracking key performance indicators is a good way for Trust and Safety teams to communicate their efforts and generate buy-in from the wider organization. Some metrics to consider include the following (a sketch of how a few of them might be computed appears after the list):

  • Impact
    • Number of users exposed to harmful content
  • Community Health
    • Percentage of content flagged as inappropriate
    • Percentage of users breaking guidelines, broken down to the per-behavior level
  • Detection Coverage
    • Accuracy of the detection solution, typically measured as precision and recall
  • Moderation & User Reports
    • Number of false reports
    • Average time to mitigation
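
Definitions vary from platform to platform, but as a minimal sketch, the snippet below shows how a few of these metrics might be computed from a day's moderation log. The record fields (flagged, was_harmful, report_to_action_hours) are hypothetical names for illustration, not part of any particular product.

    from dataclasses import dataclass

    @dataclass
    class ModerationRecord:
        flagged: bool                                   # detection solution flagged the content
        was_harmful: bool                               # ground truth from moderator review
        user_id: str
        report_to_action_hours: float | None = None     # None if never reported or never actioned

    def trust_and_safety_metrics(records: list[ModerationRecord], total_users: int) -> dict:
        flagged = [r for r in records if r.flagged]
        harmful = [r for r in records if r.was_harmful]
        true_pos = [r for r in records if r.flagged and r.was_harmful]

        # Detection coverage: precision = TP / all flagged, recall = TP / all harmful
        precision = len(true_pos) / len(flagged) if flagged else 0.0
        recall = len(true_pos) / len(harmful) if harmful else 0.0

        # Community health: share of content flagged, share of users breaking guidelines
        offending_users = {r.user_id for r in harmful}

        # Moderation & user reports: average time from report to mitigation
        times = [r.report_to_action_hours for r in records if r.report_to_action_hours is not None]

        return {
            "precision": precision,
            "recall": recall,
            "pct_content_flagged": len(flagged) / len(records) if records else 0.0,
            "pct_users_breaking_guidelines": len(offending_users) / total_users if total_users else 0.0,
            "avg_hours_to_mitigation": sum(times) / len(times) if times else None,
        }

Impact metrics such as the number of users exposed to harmful content would need per-view exposure data, which this toy log does not include.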

Read the blog: Guardian: The Health and Safety of Your Users is in Your Hands


What does a Trust and Safety team look like?

Creating a Trust and Safety team is critical to ensuring that someone bears responsibility for managing Trust and Safety initiatives and building cooperation among key stakeholders. Because this is a newer initiative at many companies, it can be difficult to know where to begin. Some important things to keep in mind include:


Evaluate your needs first.

Look at existing Trust and Safety issues at your company. This may involve initiating data collection or reviewing data that has already been gathered; conducting interviews with employees and surveying a subset of users; or even speaking to peers at other companies in your industry. This should be done with two ends in view: understanding Trust and Safety issues at your company, and evaluating current processes and strategies for effectiveness.

Build a business case.

An effective Trust and Safety team must have the support of key stakeholders across the company – from the executive level through individual departments. Express the goals and objectives of Trust and Safety in terms people across the company can understand and relate to: for example, building user loyalty and longevity, growing your audience, or reducing complaints that require human examination and intervention.

Read the white paper: Developing and Operationalizing a Trust & Safety Policy


Challenges of Trust and Safety

Volume

The sheer amount of digital content created by users can be overwhelming. Moderating it through human review alone is not only time-consuming and ineffective, but can also endanger the mental health of the moderators. One of the primary challenges of Trust and Safety is finding an efficient, accurate way to handle the volume of content moderation and username moderation required to protect user safety.

Read the blog: Protecting the Mental Health of Content Moderators

Variety

There are a number of different ways that user-generated content can violate community guidelines. Hate speech, cyberbullying, radicalization, illegal solicitation, violent or explicit content – each of these is prohibited by platforms, yet each has different targets, perpetrators, and methods, requiring solutions adaptive enough to address different situations.

Change

The tactics that users employ to engage in inappropriate activities are constantly changing – in part to evade simplistic automated solutions, such as keyword or profanity filters. Trust and Safety teams must devise processes and solutions that meet a platform's current needs while keeping abreast of evolving methods for the future.
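
As a toy illustration of this cat-and-mouse dynamic, the snippet below shows a naive keyword filter missing lightly obfuscated messages; the blocklist and example messages are hypothetical, not a real moderation rule set.

    # A naive keyword filter: exact substring matching against a fixed blocklist.
    BLOCKLIST = {"scam", "hate"}          # hypothetical example terms

    def naive_filter(message: str) -> bool:
        text = message.lower()
        return any(term in text for term in BLOCKLIST)

    print(naive_filter("this is a scam"))     # True  - caught
    print(naive_filter("this is a sc4m"))     # False - a trivial character swap evades the filter
    print(naive_filter("s c a m alert"))      # False - so does simple spacing

    # Keeping up means endlessly adding variants ("sc4m", "s.c.a.m", ...),
    # which is exactly the maintenance burden that evolving tactics create.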

Channels

Online platforms continue to develop new ways for users to communicate with one another. A social platform that launched with an exchange of comments may later add the ability to post a photo. During social distancing, many dating apps incorporated video chat as a way to get people together while separated.

However, Trust and Safety processes that work on one channel may not work on another. This is where interdepartmental commitment to promoting Trust and Safety is critical. Before a new channel is launched, it should be designed, developed, and tested to ensure a safe and inclusive environment for all users.

Language

As with opening a platform to new channels, supporting new languages should be a thoughtful, measured, and tested initiative. At the very least, community guidelines should be translated into a new language before the platform supports it, because failing to do so can result in inappropriate or abusive behaviors on your platform.

For example, Facebook 'officially' supports 111 languages with menus and prompts, and Reuters found an additional 31 languages commonly used on the platform. However, Facebook's community guidelines were translated into only 41 languages – meaning that speakers of roughly 100 of the languages in use on the platform were never told what Facebook considers inappropriate content.

Learn More: Content Moderation in Multiple Languages

Nuance

Finally, one of the trickiest aspects of building a safe, inclusive environment for users is managing behavioral nuance. It can be difficult to identify and respond to nuanced behavior without a person reviewing the content – but human review is an extraordinarily resource-intensive, inefficient solution at scale.

Fortunately, technological advances are being applied to Trust and Safety for online platforms. Artificial intelligence (AI) can help automate the identification of and initial response to inappropriate user behaviors, with different platforms setting different thresholds for what constitutes acceptable behavior. For example, Spectrum Labs offers an AI-based solution that moderates content in context, reading the nuance of different situations and reducing the need for human moderation by 50%.
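
The snippet below is only a rough sketch of the "context plus per-platform thresholds" idea described above – not Spectrum Labs' actual API. The scorer, behavior names, and threshold values are all hypothetical.

    # Hypothetical sketch: a context-aware scorer rates each message per behavior,
    # and each platform applies its own thresholds before escalating to a moderator.
    PLATFORM_THRESHOLDS = {
        "gaming": {"harassment": 0.90, "hate_speech": 0.80},   # more tolerant of trash talk
        "dating": {"harassment": 0.70, "hate_speech": 0.80},
    }

    INSULTS = {"idiot", "loser"}   # hypothetical example terms

    def score_in_context(message: str, conversation: list[str]) -> dict[str, float]:
        """Toy stand-in for a context-aware model: an insult aimed at someone ("you")
        scores higher, while a conversation full of mutual banter softens the score."""
        words = set(message.lower().split())
        directed = "you" in words and bool(words & INSULTS)
        banter = sum(1 for m in conversation if set(m.lower().split()) & INSULTS) >= 2
        score = 0.85 if directed else 0.10
        if banter:
            score *= 0.5   # the surrounding context softens the judgment
        return {"harassment": score, "hate_speech": 0.0}

    def flagged_behaviors(message: str, conversation: list[str], platform: str) -> list[str]:
        scores = score_in_context(message, conversation)
        thresholds = PLATFORM_THRESHOLDS[platform]
        return [b for b, s in scores.items() if s >= thresholds.get(b, 1.0)]

    print(flagged_behaviors("you are an idiot", [], "dating"))   # ['harassment'] - over the 0.70 threshold
    print(flagged_behaviors("you are an idiot", [], "gaming"))   # [] - under the more permissive 0.90 threshold

The same message can warrant different responses on different platforms, which is why the thresholds live in configuration rather than in the model itself.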

Case Study: How The Meet Group Reduced username incidents requiring human intervention by 50%


Benefits of Improving Trust and Safety

Improving Trust and Safety outcomes can have a number of benefits for a platform. It results in a better user experience, increasing loyalty and reducing churn. This can in turn strengthen brand reputation, and increase revenue.

A strong Trust and Safety program can also improve online visibility and encourage good word-of-mouth, bringing more users to your platform. It can increase interactions and conversions – and all of these factors make the business more valuable to stakeholders and to advertisers.

Spectrum Labs provides AI-powered behavior identification models, content moderation tools, and services to help Trust and Safety professionals safeguard user experience from the threats of today, and anticipate those that are coming. Because every company has different needs when it comes to content moderation, Spectrum Labs has specialized expertise in the fields of gaming, dating, social networks, and marketplaces.


Whether you are looking to safeguard your audiences, increase brand loyalty and user engagement, or maximize moderator productivity, Spectrum Labs empowers you to recognize and respond to toxicity in real-time across languages.

Contact Spectrum Labs to learn more about how we can help make your community a safer place.

Contact Spectrum Labs Today