
Cover more content with Guardian

Scale your content moderation with up to 5x more content coverage* and multi-language implementation.
*Based on an average of Spectrum Labs customers across gaming, dating, and social media apps. Results may vary.
We keep billions of users safe

Guardian uses true Natural Language Understanding (NLU) AI to go beyond keyword-based filters. Guardian can discover and act at scale on difficult-to-detect behaviors that other solutions miss:

  • Child sexual abuse material (CSAM)
  • Bullying
  • Hate speech
  • Spam
  • Radicalization and extremism

What makes Guardian better?

Guardian's powerful features help moderation teams scale the coverage and quality of their content moderation efforts:
Contextual AI

Analyzes metadata in addition to keywords, increasing detection accuracy and precision.


Multi-Language Capability

Your content moderation efforts can scale uninterrupted as your platform grows internationally.

User-Level Moderation

Allows moderators to pinpoint and take action against users who create a disproportionate volume of harmful content.


Healthy Behaviors AI

The world's first AI that detects and promotes positive user behavior.

This 360-degree, community-customizable, and always-improving solution helps reduce legal and business risk, lowers the cost of scaling globally, and drives higher retention and revenue-per-user — all while making the internet a safer, more positive place to be.
Contextual AI

Sophisticated detection of complex behaviors

  • Identifies complex dangerous behaviors like bullying and CSAM.
  • Developed by human behavior experts and trained by our massive data vault.
  • Configurable for your community, adapts to changes, and improves over time.

Toxic content isn’t just harmful for users — it’s also high-risk for businesses. Illegal behaviors can get apps and websites kicked off app stores, shut down by government regulators, and blacklisted by payment processors.

Some toxic behaviors are easy to find, but others depend on context.  

Our Contextual AI analyzes metadata from user activity on your platform, like conversations, when and where they happen, and past user behaviors.

This allows Guardian to detect patterns and find behaviors that other solutions miss.
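
To make this concrete, here is a rough sketch of the kind of contextual payload a platform might send alongside a message. The endpoint, field names, and values are illustrative assumptions for this example, not Spectrum Labs' documented API schema.

```python
# Illustrative only: the endpoint and field names below are assumptions for this
# sketch, not the actual Guardian API schema.
import requests

payload = {
    "content": {
        "id": "msg-48213",
        "text": "you should just quit, nobody wants you here",
        "timestamp": "2024-03-02T19:41:07Z",
        "channel": "in-game-lobby-7",            # where the message was posted
    },
    "user": {
        "id": "user-1042",
        "account_age_days": 3,                    # how new the account is
        "prior_violations": ["harassment"],       # past behavior on the platform
    },
    "conversation": {
        "id": "conv-993",
        "recipient_id": "user-2215",
        "recent_message_count": 14,               # conversational context, not just keywords
    },
}

# Because the surrounding context travels with the message, classification can
# weigh who is talking to whom, where, and what has happened before.
response = requests.post("https://api.example.com/v1/moderate", json=payload, timeout=1.0)
print(response.json())
```

A keyword filter would see only the text field; the rest of the payload is what lets a contextual model separate banter between friends from targeted harassment.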

Advanced Behavior Systems

Spectrum Labs can create custom solutions for your specific needs. Choose any of the models in Spectrum Labs' behavior library and adapt them for your community to proactively prevent harmful conduct.
 

Bullying

User(s) seek to harm another user through intimidation, coercion, and/or humiliation.


CSAM Discussion

Mentions or references of anything related to child sexual abuse material (CSAM).


CSAM Grooming

Conversations that are initiated by adult sexual predators to obtain sexually explicit materials from minors.

Hate Speech

Hateful behavior directed at protected attributes, detected to prevent the normalization of hateful discourse.


Insult

Content that is insulting to people. 

PII Scrubbing

Scrubbing of personally identifiable information (PII) that users share about themselves or other individuals.


Profanity

Vulgar, profane or obscene words or phrases.


Radicalization

Efforts to recruit and radicalize people into violent extremist ideologies and actions.

Self Harm

Users who are at risk of self-harm and in need of mental health resources.


Severe Toxic

Captures both patterns and instances of behaviors that are considered harmful or severely harmful to people.


Sexual

Content that references sexual activity.


Solicitation of Drugs

Content that indicates online drug transactions.


Solicitation of Sex

Content that indicates a transactional nature to a sexual relationship.

Spam

Repetitive content and/or attempts to lead other users off the platform.

Threats

Content indicating a plan that, if carried out, could affect the real-world safety of other people.


Underage 13

The presence of users who are under the age of 13.


Underage 18

Indicates the user is under the age of 18.

"Overnight I saw a 50% reduction in manual moderation of display names."

David Brown, SVP, Trust and Safety, The Meet Group

Multi-Language Moderation

High-quality moderation in any language

  • Detect and add new languages quickly and easily with AI transfer learning.
  • Moderate character-based and hybrid languages, l33tspeak, emojis, and more.
  • Localize actions and see insights by language and region.

To provide multilingual moderation, platforms often work with multiple vendors who specialize in different languages. That approach is expensive, difficult to manage, and doesn’t deliver consistent quality.

Guardian’s patented multi-language capability offers a better solution.  Our AI is configured for local social norms and country-specific regulations to help you moderate at scale for a global audience.

Guardian’s supported languages include Arabic, French, Hindi, Korean, and many more — but Spectrum Labs can add moderation capabilities for any language.


Language Detection Library

"Spectrum Labs was the only partner who could offer high-quality content moderation across multiple languages, allowing us to automate and scale our moderation efforts."

Weszt Hart, Head of Player Dynamics, Riot Games

Healthy Behaviors AI

Remove the bad — and promote the good

  • Get a 360° view of all user behaviors within your community, including good behaviors.
  • Evaluate your efforts and quantify the impact of your work.
  • Dynamically pair positive users with newcomers to drive retention and engagement.

Simply removing toxic content does not mean you are building a healthier, more cohesive online community. After all, engagement and retention are not just tied to a game or app’s design, but to how users relate to each other on your platform.

With the first-ever positive user behavior detection, analytics, and actioning capability, Guardian by Spectrum Labs enables trust & safety and community leaders to analyze and improve their moderation efforts.


“Adding IMVU to the growing data set that Spectrum Labs uses to improve and evolve their AI benefits both of our companies. And above it all, our missions are aligned: both companies are focused on making online environments more successful for human connection.”

Maura Welch, Vice President of Marketing, IMVU
Custom Configured Actioning

Take action against toxic behavior based on your own policy.

  • 1000s of possible combinations to meet your unique needs for automation.
  • Automate content moderation decisions where it makes sense.

Spectrum Labs allows you to scale coverage of toxic content through customized actioning. Actioning can be done through Spectrum Labs' Guardian queue or a platform's in-house queue.

Types of actions against content include real-time redaction, automated moderation, referral for human moderation, and more.

Actions against users also can be configured with responses like displaying a warning, shadow-banning, reducing a user's reputation score, suspending an account, and reporting to law enforcement.
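
As a rough illustration of what such a policy might look like in code, the sketch below maps detected behaviors and severity thresholds to content-level and user-level actions. The rule names, thresholds, and action labels are invented for this example rather than Guardian's built-in configuration.

```python
# Illustrative sketch of policy-driven actioning; behaviors, thresholds, and
# action labels are invented for this example, not Guardian's configuration.
ACTION_POLICY = [
    # (behavior, minimum severity, content action, user action)
    ("csam_grooming", 0.50, "remove_and_escalate", "report_to_law_enforcement"),
    ("hate_speech",   0.90, "remove",              "suspend_account"),
    ("hate_speech",   0.70, "redact",              "warn_user"),
    ("spam",          0.80, "remove",              "lower_reputation_score"),
    ("profanity",     0.60, "redact",              None),
]

def decide(behavior: str, severity: float):
    """Return the first matching (content_action, user_action) pair, else send to human review."""
    for rule_behavior, threshold, content_action, user_action in ACTION_POLICY:
        if behavior == rule_behavior and severity >= threshold:
            return content_action, user_action
    return "queue_for_human_review", None

print(decide("hate_speech", 0.75))   # ('redact', 'warn_user')
```

Ordering the rules from most to least severe lets one behavior carry several graduated responses, which is the kind of nuance a purely manual queue struggles to apply consistently at scale.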

"With Spectrum's Guardian content moderation product, we've been able to protect 25,000 more users per day from unwanted, sometimes illegal, messages."

Alice Hunsberger, Head of CX
User-Level Moderation

Moderate users, not just messages

  • Prioritize and automate moderation based on user reputation scores.
  • Utilize user-level bulk actions for more impact with fewer clicks.
  • Track metrics and gain insight into larger patterns.

Across nearly every type of platform, 30% of toxic and illegal content is generated by just 3% of users.

To find toxic users, Guardian leverages user reputation scores, which aggregate individual user behavior over time and assign scores that can be used in AI analysis.

Each user reputation score is based on behavior severity, prior violations, and recency of offensive posts or actions.

This fully anonymized and privacy-compliant tool enables moderators to identify and penalize bad actors faster, more efficiently, and in time to prevent real-world harm.

It can also be leveraged to identify users who may be at risk for self-harm or CSAM grooming by detecting factors that indicate possible vulnerability.
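
For illustration, here is a minimal sketch of how a score like this could be aggregated from the three inputs named above (behavior severity, prior violations, and recency). The half-life decay and the squashing function are assumptions made for the example, not Spectrum Labs' actual scoring model.

```python
# Minimal, illustrative reputation-score sketch; the weighting scheme is an
# assumption for this example, not Spectrum Labs' scoring model.
import math
from datetime import datetime, timezone

def reputation_score(violations, now=None, half_life_days=30.0):
    """violations: list of (timestamp, severity in [0, 1]) for one user.
    Returns a score in [0, 1] where higher means more risk."""
    now = now or datetime.now(timezone.utc)
    risk = 0.0
    for ts, severity in violations:
        age_days = (now - ts).total_seconds() / 86400
        decay = 0.5 ** (age_days / half_life_days)   # recent offenses count more
        risk += severity * decay                      # prior violations accumulate
    return 1 - math.exp(-risk)                        # repeat offenders approach 1

history = [
    (datetime(2024, 2, 25, tzinfo=timezone.utc), 0.9),  # recent, severe
    (datetime(2023, 11, 2, tzinfo=timezone.utc), 0.3),  # old, mild
]
print(round(reputation_score(history, now=datetime(2024, 3, 1, tzinfo=timezone.utc)), 3))
```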

“Approximately 30% of toxic content originates from just 3% of users across every kind of platform.”

Hill Stark, PhD, Head of Data Analytics
Guardian for Voice

Voice and audio moderation AI

Our industry-leading voice content moderation solutions help you identify and control disruptive user behaviors to build a healthier, growing community.

With Contextual AI detection and automated user-level action, content mods can scale coverage across all types of user-generated audio content:

  • Voice channels and rooms
  • In-game audio chat
  • Voice memos
  • Interactive podcasts
  • Livestream video with audio chat
  • And more

Content Moderation Actioning UI

Spectrum Labs' Guardian UI is designed to work with our API to help you take automated, effective action at scale. If you use your own UI, we can help you trigger custom actions with configurable webhooks that respond in milliseconds.
Moderation Queue

Moderate efficiently

View cases prioritized by severity with all relevant information for decisions. Review and act on a user or content level. Integrate with your internal systems.
Automation Builder

Trigger actions

Configure nuanced automated actions to moderate at scale. Block or remove content, warn or suspend users, or send for human review.
Analytics Dashboard

Gain Insights

Measure behavior prevalence and moderation impact. Compare to benchmarks, get strategic recommendations, and share your progress.

Easy API, decisioning, and webhooks

Implementing Spectrum Labs' solutions is easy through our well-documented API and webhooks, which require only minimal engineering resources to get up and running.

Our API comes with a real-time decision framework where you can configure complex business rules around the actions taken when a prompt or output is in violation of your policy. The API response will return a determination of the detected behavior and the action to be taken on it within 20 milliseconds.
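
As a hypothetical example of consuming such a decision, the snippet below assumes a response shaped roughly like the one shown; the key names and values are illustrative, not the documented response format.

```python
# Hypothetical decision payload and handling; key names are illustrative only.
decision = {
    "content_id": "msg-48213",
    "behaviors": [{"name": "bullying", "severity": 0.82}],
    "action": "redact",          # chosen by the business rules configured for your policy
    "latency_ms": 14,
}

if decision["action"] == "redact":
    print("[message redacted before delivery]")
elif decision["action"] == "queue_for_human_review":
    print("sent to the human review queue")
else:
    print("delivered unchanged")
```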

Additionally, our event-based action framework allows you to set complex rules and fire off a webhook once those rules are met, allowing for complex workflows.
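
On the receiving side, a webhook consumer can be a small HTTP handler. The sketch below, written with Flask, assumes an event payload and a shared-secret header that are illustrative only, not the actual webhook contract.

```python
# Minimal webhook receiver sketch; the event fields and secret header are
# assumptions for this example, not the actual webhook contract.
from flask import Flask, request, abort

app = Flask(__name__)
WEBHOOK_SECRET = "replace-with-your-shared-secret"

@app.post("/guardian/webhook")
def handle_action_event():
    if request.headers.get("X-Webhook-Secret") != WEBHOOK_SECRET:
        abort(401)                                   # reject unauthenticated callers
    event = request.get_json(force=True)
    # Route the event into your own workflow: queue it, suspend the user, notify a moderator, etc.
    if event.get("user_action") == "suspend_account":
        print(f"Suspending user {event.get('user_id')} for {event.get('behavior')}")
    return ("", 204)

if __name__ == "__main__":
    app.run(port=8080)
```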

Spectrum Labs prides itself on its first-class customer support. Our clients are provided with a dedicated solutions consultant who works closely with your organization from day one to help oversee the implementation phase.


Our complete solution

Detecting harmful behaviors is just the start. Our platform includes everything you need to achieve your key Trust & Safety objectives and the analytics tools to show a 360° view of the health of your community so you can quantify the effectiveness of your teams’ efforts.

Community assessment

Get visibility into what's really happening in your community, along with ideas for improvement.


Easy-to-implement API

Streamlined setup ensures fast time-to-value with minimal maintenance.


Enterprise scale and speed

Big data infrastructure handles any volume of user-generated content in milliseconds.


Dedicated customer success

Our trust & safety, technical, and data experts are here to help you achieve your goals.

Questions?

To learn more about Guardian or our other solutions, get in touch with our team or download our Guardian Content Moderation AI product sheet.