We keep billions of users safe

Guardian uses true Natural Language Understanding (NLU) AI to go beyond keyword-based filters. Guardian can discover and act at scale on difficult-to-detect behaviors that other solutions miss:
Analyzes metadata in addition to keywords, increasing detection accuracy and precision.
Scales your content moderation efforts uninterrupted as your platform grows internationally.
Allows moderators to pinpoint and take action against users who create a disproportionate volume of harmful content.
Detects and promotes positive user behavior, an industry first.
Toxic content isn't just harmful for users; it's also high-risk for businesses. Illegal behaviors can get apps and websites removed from app stores, shut down by government regulators, and blacklisted by payment processors.
Some toxic behaviors are easy to find, but others depend on context.
Our Contextual AI analyzes metadata from user activity on your platform: the conversations themselves, when and where they happen, and how users have behaved in the past.
This allows Guardian to detect patterns and find behaviors that other solutions miss.
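As a rough illustration of what "context" means in practice, a context-enriched message event might look like the sketch below. Every field name here is hypothetical, chosen for illustration; it is not Spectrum Labs' actual schema.

```python
# Hypothetical context-enriched message event. All field names are
# illustrative only, not Spectrum Labs' actual schema.
message_event = {
    "text": "hey, what school do you go to?",
    "sent_at": "2023-05-04T21:13:08Z",       # when the message happened
    "channel": "private_dm",                 # where it happened
    "sender": {
        "account_age_days": 2,               # past-behavior signals
        "prior_violations": ["spam"],
        "times_reported_by_others": 3,
    },
    "conversation": {
        "participant_count": 2,
        "messages_in_last_hour": 47,
    },
}
```

Signals like a two-day-old account sending a high volume of private messages are invisible to keyword filters but stand out once this surrounding metadata is considered.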
Users seeking to harm another user through intimidation, coercion, or humiliation.
Mentions of or references to anything related to child sexual abuse material (CSAM).
Conversations that are initiated by adult sexual predators to obtain sexually explicit materials from minors.
Hateful behaviors directed at protected attributes, detected to prevent the normalization of hateful discourse.
Content that is insulting to people.
Sharing of personally identifiable information (PII) about oneself or other individuals.
Vulgar, profane or obscene words or phrases.
Efforts to recruit and radicalize people into violent extremist ideologies and actions.
Users who are at risk of self-harm and in need of mental health resources.
Captures both patterns and instances of behaviors that are considered harmful or severely harmful to people.
Content that references sexual activity.
Content that indicates online drug transactions.
Content that indicates a transactional nature to a sexual relationship.
Repetitive content and/or attempts to lead other users off the platform.
The user has a plan that could affect the real-world safety of other people if it were to be followed.
The presence of users who are under the age of 13.
Indications that a user is under the age of 18.
David Brown, SVP, Trust and Safety
To provide multilingual moderation, platforms often work with multiple vendors who specialize in different languages. That approach is expensive, difficult to manage, and doesn't deliver consistent quality.
Guardian’s patented multi-language capability offers a better solution. Our AI is configured for local social norms and country-specific regulations to help you moderate at scale for a global audience.
Guardian’s supported languages include Arabic, French, Hindi, Korean, and many more — but Spectrum Labs can add moderation capabilities for any language.
Weszt Hart, Head of Player Dynamics
Simply removing toxic content does not mean you are building a healthier, more cohesive online community. After all, engagement and retention are not just tied to a game or app’s design, but to how users relate to each other on your platform.
With the first-ever positive user behavior detection, analytics, and actioning capability, Guardian by Spectrum Labs enables trust & safety and community leaders to analyze and improve their moderation efforts.
“Adding IMVU to the growing data set that Spectrum Labs uses to improve and evolve their AI benefits both of our companies. And above it all, our missions are aligned: both companies are focused on making online environments more successful for human connection.”
Spectrum Labs allows you to scale coverage of toxic content through customized actioning. Actioning can be done through Spectrum Labs' Guardian queue or a platform's in-house queue.
Types of actions against content include real-time redaction, automated moderation, referral for human moderation, and more.
Actions against users also can be configured with responses like displaying a warning, shadow-banning, reducing a user's reputation score, suspending an account, and reporting to law enforcement.
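As a sketch of how behavior-to-action rules like these could be configured, consider the following; the behavior names, severity thresholds, and action labels are assumptions for illustration, not Guardian's actual configuration syntax.

```python
# Hypothetical behavior-to-action rules: each detected behavior maps to a
# content action and a user action above a severity threshold.
# Illustrative only, not Guardian's actual configuration.
ACTION_RULES = {
    # behavior:    (min_severity, content_action,        user_action)
    "profanity":   (0.5,          "redact_in_real_time", "display_warning"),
    "harassment":  (0.7,          "refer_to_moderator",  "reduce_reputation"),
    "csam":        (0.0,          "remove",              "report_to_law_enforcement"),
}

def decide_action(behavior: str, severity: float):
    """Return (content_action, user_action), or None if no rule fires."""
    rule = ACTION_RULES.get(behavior)
    if rule is None:
        return None
    min_severity, content_action, user_action = rule
    if severity >= min_severity:
        return content_action, user_action
    return None
```

A rule table like this is what lets the same detection output drive different responses: redaction for low-severity profanity, escalation to human review for harassment, and immediate reporting for the most severe categories.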
“With Spectrum's Guardian content moderation product, we've been able to protect 25,000 more users per day from unwanted - sometimes illegal - messages.”
Across nearly every type of platform, 30% of toxic and illegal content is generated by just 3% of users.
To find toxic users, Guardian leverages user reputation scores, which aggregate individual user behavior over time and assign scores that can be used in AI analysis.
Each user reputation score is based on behavior severity, prior violations, and recency of offensive posts or actions.
This fully anonymized and privacy-compliant tool enables moderators to identify and penalize bad actors faster, more efficiently, and in time to prevent real-world harm.
It can also be leveraged to identify users who may be at risk for self-harm or CSAM grooming by detecting factors that indicate possible vulnerability.
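As a rough illustration of how severity, repeat offenses, and recency might combine into a single score, here is a minimal sketch; the exponential decay and weighting are assumptions, not Spectrum Labs' actual model.

```python
from datetime import datetime, timezone

def reputation_score(violations, half_life_days=30.0):
    """Aggregate a user's violations into a single risk score.

    `violations` is a list of (severity, timestamp) pairs with severity
    in [0, 1]. Recent and severe offenses dominate; older ones decay.
    Illustrative weighting only, not Spectrum Labs' actual model.
    """
    now = datetime.now(timezone.utc)
    score = 0.0
    for severity, ts in violations:
        age_days = (now - ts).total_seconds() / 86400.0
        decay = 0.5 ** (age_days / half_life_days)  # recency weighting
        score += severity * decay  # repeat offenses accumulate
    return score
```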
Our industry-leading voice content moderation solutions help you identify and control disruptive user behaviors to build a healthier, growing community.
With Contextual AI detection and automated user-level action, moderation teams can scale coverage across all types of user-generated audio content.
Implementing Spectrum Labs' solutions is easy through our well-documented API and webhooks, which require only minimal engineering resources to get up and running.
Our API comes with a real-time decision framework where you can configure complex business rules around the actions taken when a prompt or output is in violation of your policy. The API response will return a determination of the detected behavior and the action to be taken on it within 20 milliseconds.
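A call against an API of this shape might look like the sketch below; the endpoint URL, headers, and response fields are assumptions for illustration, not Spectrum Labs' documented contract.

```python
import requests

# Hypothetical endpoint and payload shape, shown for illustration only.
resp = requests.post(
    "https://api.example.com/v1/analyze",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "text": "user-generated message to check",
        "user_id": "user-123",
        "room_id": "room-456",
    },
    timeout=1.0,
)
result = resp.json()
# Assumed response shape: the detected behavior plus the configured action,
# e.g. {"behaviors": ["harassment"], "action": "redact_in_real_time"}
if result.get("action") == "redact_in_real_time":
    pass  # apply the redaction before the message is displayed
```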
Additionally, our event-based action framework allows you to set complex rules and fire off a webhook once those rules are met, allowing for complex workflows.
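On the receiving side, a webhook consumer for those rule-triggered events might look like this minimal Flask sketch; the event fields and action names are hypothetical.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/moderation-webhook", methods=["POST"])
def handle_moderation_event():
    # Hypothetical event shape, fired once a configured rule is met.
    event = request.get_json(force=True)
    user_id = event.get("user_id")
    action = event.get("action")  # e.g. "suspend_account"
    if action == "suspend_account":
        suspend(user_id)          # hand off to your platform's own workflow
    return "", 204

def suspend(user_id):
    print(f"Suspending {user_id}")  # placeholder for real suspension logic
```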
Spectrum Labs prides itself on first-class customer support: a dedicated solutions consultant works closely with your organization from day one to oversee the implementation phase.
Get visibility into what's really happening in your community, along with ideas for improvement.
Streamlined setup ensures fast time-to-value with minimal maintenance.
Big data infrastructure handles any volume of user-generated content in milliseconds.
Our trust & safety, technical, and data experts are here to help you achieve your goals.