We keep billions of users safe
Guardian uses true Natural Language Understanding (NLU) AI to go beyond keyword-based filters. Guardian can discover and act at scale on difficult-to-detect behaviors that other solutions miss:
- Let moderators pinpoint and take action against users who create a disproportionate volume of harmful content.
- Detect and promote positive user behavior, an industry first.
- Scale your content moderation uninterrupted as your platform grows internationally.
Toxic content isn't just harmful to users; it's also a business risk. Illegal behavior can get apps and websites kicked off app stores, shut down by government regulators, and blacklisted by payment processors.
Some toxic behaviors are easy to find, but others depend on context.
Our contextual AI analyzes signals from user activity on your platform: conversations, when and where they happen, and past user behavior.
This allows Guardian to detect patterns and surface behaviors that other solutions miss.
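To make the idea of "context" concrete, here is a minimal sketch of how a message plus its surrounding metadata might be flattened into model-ready features. The field names, thresholds, and channel labels are illustrative assumptions, not Spectrum Labs' actual schema:

```python
from dataclasses import dataclass

@dataclass
class Message:
    text: str                # the content itself (all a keyword filter sees)
    channel: str             # where it happened, e.g. "private_dm" (assumed label)
    hour_of_day: int         # when it happened (0-23)
    sender_prior_flags: int  # past-behavior signal (assumed field)

def context_features(msg: Message) -> dict:
    """Combine content with contextual metadata.

    The same words can score differently depending on where and when
    they were said and who said them; thresholds here are illustrative.
    """
    return {
        "text": msg.text,
        "is_private": msg.channel == "private_dm",
        "late_night": msg.hour_of_day >= 22 or msg.hour_of_day < 5,
        "repeat_offender": msg.sender_prior_flags >= 3,
    }

msg = Message("meet me offline", "private_dm", 23, 4)
feats = context_features(msg)
```

An innocuous-looking phrase sent privately, late at night, by a previously flagged account produces a very different feature vector than the same phrase in a public lobby, which is what lets a contextual model catch what keyword lists miss.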
“Overnight I saw a 50% reduction in manual moderation of display names.”
To moderate in multiple languages, platforms often work with several vendors, each specializing in different languages. That approach is expensive, hard to manage, and doesn't deliver consistent quality.
Guardian’s patented, multi-language capability offers a better solution. Configured for local social norms and country-specific regulations, our AI helps you moderate for a global audience at scale.
Guardian’s supported languages include Arabic, French, Hindi, Korean, and many more — but Spectrum Labs can add moderation capabilities for any language.
“Spectrum Labs was the only partner who could offer high-quality content moderation across multiple languages, allowing us to automate and scale our moderation efforts.”
Simply removing toxic content does not mean you are building a healthier, more cohesive online community. After all, engagement and retention are not just tied to a game or app’s design, but to how users relate to each other on your platform.
With the first-ever positive user behavior detection, analytics, and actioning capability, Guardian by Spectrum Labs enables trust & safety and community leaders to analyze and improve their moderation efforts.
“Adding IMVU to the growing data set that Spectrum Labs uses to improve and evolve their AI benefits both of our companies. And above it all, our missions are aligned: both companies are focused on making online environments more successful for human connection.”
Across nearly every type of platform, 30% of toxic and illegal content is generated by just 3% of users.
To find toxic users, Guardian leverages user reputation scores, which aggregate individual user behavior over time and assign scores that can be used in AI analysis.
Each user reputation score is based on behavior severity, prior violations, and recency of offensive posts or actions.
This fully anonymized and privacy-compliant tool enables moderators to identify and penalize bad actors faster, more efficiently, and in time to prevent real-world harm.
It can also be leveraged to identify users who may be at risk for self-harm or CSAM grooming by detecting factors that indicate possible vulnerability.
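The three factors named above (behavior severity, prior violations, recency) suggest a simple weighted aggregation. The sketch below is an illustrative assumption of how such a score could be computed, not Spectrum Labs' actual model; the severity scale, half-life, and function names are invented for the example:

```python
import time
from dataclasses import dataclass

@dataclass
class Violation:
    severity: float   # 0.0 (mild) to 1.0 (severe); assumed scale
    timestamp: float  # Unix time of the offense

def reputation_score(violations, now=None, half_life_days=30.0):
    """Aggregate a user's violations into a single risk score.

    Severe, repeated, and recent offenses all raise the score,
    mirroring the three factors in the text. An exponential decay
    with a 30-day half-life (an assumed parameter) handles recency.
    """
    if now is None:
        now = time.time()
    score = 0.0
    for v in violations:
        age_days = (now - v.timestamp) / 86400.0
        decay = 0.5 ** (age_days / half_life_days)  # recency weighting
        score += v.severity * decay
    return score

# A fresh severe violation outweighs an old mild record.
now = time.time()
heavy = [Violation(severity=0.9, timestamp=now)]
stale = [Violation(severity=0.3, timestamp=now - 90 * 86400)]
```

Summing decayed severities means one-off offenders fade back toward zero over time, while repeat offenders keep accumulating score, which is what lets moderators rank and act on the small fraction of users producing most of the harm.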
Our industry-leading voice content moderation solutions help you identify and control disruptive user behaviors to build a healthier, growing community.
With Contextual AI detection and automated user-level actions, content moderators can scale coverage across all types of user-generated audio content.
Every community has different content moderation policies and needs. We provide tailored solutions for a wide range of industries.
Get visibility into what's really happening in your community, along with ideas for improvement.
Streamlined setup ensures fast time-to-value with minimal maintenance.
Big data infrastructure handles any volume of user-generated content in milliseconds.
Our trust & safety, technical, and data experts are here to help you achieve your goals.