Cover more content with Guardian

Scale your content moderation with up to 5x more content coverage* and multi-language implementation.
*Based on an average of Spectrum Labs customers across gaming, dating, and social media apps. Results may vary.
We keep billions of users safe

Guardian uses true Natural Language Understanding (NLU) AI to go beyond keyword-based filters. Guardian can discover and act at scale on difficult-to-detect behaviors that other solutions miss:

  • Child sexual abuse material (CSAM)
  • Bullying
  • Hate speech
  • Spam
  • Radicalization and extremism

What makes Guardian better?

Guardian's powerful features help moderation teams scale the coverage and quality of their content moderation efforts.
This 360-degree, community-customizable, and always-improving solution helps reduce legal and business risk, lowers the cost of scaling globally, and drives higher retention and revenue-per-user — all while making the internet a safer, more positive place to be.
Contextual AI

Sophisticated detection of complex behaviors

Toxic content isn’t just harmful for users — it’s also high-risk for businesses. Illegal behaviors can get apps and websites removed from app stores, shut down by government regulators, and blacklisted by payment processors.

Some toxic behaviors are easy to find, but others depend on context.  

Our Contextual AI analyzes metadata from user activity on your platform, like conversations, when and where they happen, and past user behaviors.

This allows Guardian to detect patterns and find behaviors that other solutions miss.

Advanced Behavior Systems

Spectrum Labs can create custom solutions for your specific needs. Choose any models from Spectrum Labs' behavior library and adapt them for your community to proactively prevent harmful conduct.
 
"Overnight I saw a 50% reduction in manual moderation of display names."

David Brown, SVP, Trust and Safety

Multi-Language Moderation

High-quality moderation in any language

To provide multilingual moderation, platforms often work with multiple vendors who specialize in different languages. That approach is expensive, difficult to manage, and doesn’t deliver consistent quality.

Guardian’s patented multi-language capability offers a better solution.  Our AI is configured for local social norms and country-specific regulations to help you moderate at scale for a global audience.

Guardian’s supported languages include Arabic, French, Hindi, Korean, and many more — but Spectrum Labs can add moderation capabilities for any language.

Language Detection Library

"Spectrum Labs was the only partner who could offer high-quality content moderation across multiple languages, allowing us to automate and scale our moderation efforts."

Weszt Hart, Head of Player Dynamics

Healthy Behaviors AI

Remove the bad — and promote the good

Simply removing toxic content does not mean you are building a healthier, more cohesive online community. After all, engagement and retention are not just tied to a game or app’s design, but to how users relate to each other on your platform.

With the first-ever positive user behavior detection, analytics, and actioning capability, Guardian by Spectrum Labs enables trust & safety and community leaders to analyze and improve their moderation efforts.

Learn More

“Adding IMVU to the growing data set that Spectrum Labs uses to improve and evolve their AI benefits both of our companies. And above it all, our missions are aligned: both companies are focused on making online environments more successful for human connection.”

Maura Welch, Vice President of Marketing
Actioning
Custom Configured Actioning

Take action against toxic behavior based on your own policy.

Spectrum Labs allows you to scale coverage of toxic content through customized actioning. Actioning can be done through Spectrum Labs' Guardian queue or your platform's in-house queue.

Types of actions against content include real-time redaction, automated moderation, referral for human moderation, and more.

Actions against users also can be configured with responses like displaying a warning, shadow-banning, reducing a user's reputation score, suspending an account, and reporting to law enforcement.

"With Spectrum's Guardian content moderation product, we've been able to protect 25,000 more users per day from unwanted - sometimes illegal - messages."

Alice Hunsberger, Head of CX
User-Level Moderation

Moderate users, not just messages

Across nearly every type of platform, 30% of toxic and illegal content is generated by just 3% of users.

To find toxic users, Guardian leverages user reputation scores, which aggregate individual user behavior over time and assign scores that can be used in AI analysis.

Each user reputation score is based on behavior severity, prior violations, and recency of offensive posts or actions.
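As an illustration only, one way such a score could combine severity, repeat violations, and recency is with a decayed sum; the field names, half-life decay, and squashing function below are our assumptions for the sketch, not Spectrum Labs' actual formula:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Violation:
    severity: float        # 0.0 (mild) to 1.0 (severe)
    occurred_at: datetime

def reputation_score(violations: list[Violation],
                     now: datetime,
                     half_life_days: float = 30.0) -> float:
    """Aggregate a user's violations into a risk score in [0, 1).
    Recent, severe, and repeated violations all raise the score."""
    total = 0.0
    for v in violations:
        age_days = (now - v.occurred_at).total_seconds() / 86400
        # Older offenses count for less (exponential recency decay).
        total += v.severity * 0.5 ** (age_days / half_life_days)
    # Squash the unbounded sum into [0, 1); repeat offenders approach 1.
    return total / (total + 1.0)
```

With this shape, a single severe violation yesterday outranks a mild one from months ago, which matches the severity/recency weighting described above.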

This fully anonymized and privacy-compliant tool enables moderators to identify and penalize bad actors faster, more efficiently, and in time to prevent real-world harm.

It can also be leveraged to identify users who may be at risk for self-harm or CSAM grooming by detecting factors that indicate possible vulnerability.

“Approximately 30% of toxic content originates from just 3% of users across every kind of platform.”

Hill Stark, PhD, Head of Data Analytics
Guardian for Voice

Voice and audio moderation AI

Our industry-leading voice content moderation solutions help you identify and control disruptive user behaviors to build a healthier, growing community.

With Contextual AI detection and automated user-level action, content moderators can scale coverage across all types of user-generated audio content:

  • Voice channels and rooms
  • In-game audio chat
  • Voice memos
  • Interactive podcasts
  • Livestream video with audio chat
  • And more

Content Moderation Actioning UI

The Spectrum Labs Guardian UI is designed to work with our API to help you take automated, effective action at scale. If you use your own UI, we can help you trigger custom actions with configurable webhooks that respond in milliseconds.

Easy API, decisioning, and webhooks

Implementing Spectrum Labs' solutions is easy through our well-documented API and webhooks, which require only minimal engineering resources to get up and running.

Our API comes with a real-time decision framework where you can configure complex business rules around the actions taken when a prompt or output is in violation of your policy. The API response will return a determination of the detected behavior and the action to be taken on it within 20 milliseconds.
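As a client-side sketch of what policy-driven decisioning could look like (the payload shape, behavior labels, and action names here are hypothetical placeholders, not Spectrum Labs' documented schema):

```python
import json

def build_moderation_request(user_id: str, text: str, channel: str) -> str:
    """Assemble a JSON body for a content check.
    Field names are illustrative; consult the API docs for the real schema."""
    return json.dumps({
        "user": {"id": user_id},
        "content": {"text": text, "channel": channel},
    })

def decide(api_response: dict, policy: dict) -> str:
    """Map the behavior the API detected to an action defined by
    your own policy; unrecognized or clean content falls through to allow."""
    return policy.get(api_response.get("behavior"), "allow")

# Example policy: which action each detected behavior triggers.
POLICY = {"hate_speech": "redact", "spam": "queue_for_review"}
```

Keeping the behavior-to-action mapping in your own configuration, as above, is what lets the same detection output enforce different policies per community.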

Additionally, our event-based action framework allows you to set complex rules and fire off a webhook once those rules are met, allowing for complex workflows.
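A receiving endpoint for such a webhook might look like the following sketch; the HMAC signature scheme, secret, event shape, and rule names are our assumptions for illustration, since the actual webhook contract is defined during implementation:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret for verifying webhook authenticity.
WEBHOOK_SECRET = b"replace-with-your-secret"

def verify_signature(body: bytes, signature: str) -> bool:
    """Standard webhook hygiene: reject payloads whose HMAC doesn't match."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def handle_event(body: bytes, signature: str) -> str:
    """Dispatch an incoming event to an action based on which rule fired."""
    if not verify_signature(body, signature):
        return "rejected"
    event = json.loads(body)
    # Route on the rule name; actions mirror the user-level responses
    # (warnings, suspensions, reporting) configured in your policy.
    actions = {"repeat_offender": "suspend", "csam_detected": "report"}
    return actions.get(event.get("rule"), "log_only")
```

Because the rules are evaluated on Spectrum Labs' side and only fire the webhook once met, the receiver stays thin: verify, parse, and hand off to your workflow.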

Spectrum Labs prides itself on first-class customer support. You'll be assigned a dedicated solutions consultant who works closely with your organization from day one to help oversee the implementation phase.


Our complete solution

Detecting harmful behaviors is just the start. Our platform includes everything you need to achieve your key Trust & Safety objectives and the analytics tools to show a 360° view of the health of your community so you can quantify the effectiveness of your teams’ efforts.

Questions?

To learn more about Guardian or our other solutions, get in touch with our team or download our Guardian Content Moderation AI product sheet.