Cover more content
Scale your content moderation with up to 5x more content coverage* and multi-language implementation.
*Based on an average of Spectrum Labs customers across gaming, dating, and social media apps. Results may vary.
We keep billions of users safe
Guardian uses true Natural Language Understanding (NLU) AI to go beyond keyword-based filters. Guardian can discover and act at scale on difficult-to-detect behaviors that other solutions miss:
- Child sexual abuse material (CSAM)
- Bullying
- Hate speech
- Spam
- Radicalization and extremism
What makes Guardian better?
Guardian's powerful features help moderation teams scale the coverage and quality of their content moderation efforts:
Contextual AI
Analyzes metadata in addition to keywords, increasing detection accuracy and precision.
Multi-Language Capability
Your content moderation efforts can scale uninterrupted as your platform grows internationally.
User-Level Moderation
Allows moderators to pinpoint and take action against users who create a disproportionate volume of harmful content.
Healthy Behaviors AI
The world's first AI that detects and promotes positive user behavior.
This 360-degree, community-customizable, and always-improving solution helps reduce legal and business risk, lowers the cost of scaling globally, and drives higher retention and revenue-per-user — all while making the internet a safer, more positive place to be.
Configurable for your community, adapts to changes, and improves over time.
Identifies complex dangerous behaviors like bullying and CSAM.
Developed by human behavior experts and trained by our massive data vault.
Contextual AI
Sophisticated detection of complex behaviors
Toxic content isn’t just harmful for users — it’s also high-risk for businesses. Illegal behaviors can get apps and websites removed from app stores, shut down by government regulators, and blacklisted by payment processors.
Some toxic behaviors are easy to find, but others depend on context.
Our Contextual AI analyzes metadata from user activity on your platform, like conversations, when and where they happen, and past user behaviors.
This allows Guardian to detect patterns and find behaviors that other solutions miss.
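As a concrete illustration, a request to a contextual moderation service would carry the message together with metadata about where and when it was sent and the sender's history. The field names and structure below are illustrative assumptions, not Spectrum Labs' actual schema:

```python
import json
import time

def build_moderation_payload(message, user_id, channel, prior_flag_count):
    """Bundle a message with contextual metadata for analysis.

    A contextual model can weigh where and when the message was sent,
    and the user's history, alongside the raw text. All field names
    here are illustrative, not Spectrum Labs' actual schema.
    """
    return {
        "content": message,
        "user_id": user_id,
        "context": {
            "channel": channel,                    # where the conversation happened
            "sent_at": int(time.time()),           # when it happened
            "prior_flag_count": prior_flag_count,  # past-behavior signal
        },
    }

payload = build_moderation_payload("see you at school tomorrow", "u-123",
                                   "private-dm", prior_flag_count=2)
print(json.dumps(payload, indent=2))
```

The same sentence can be innocuous in one channel and alarming in another, which is why the context object travels with every message.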
Advanced Behavior Systems
Spectrum Labs can create custom solutions for your specific needs. Choose any models from Spectrum Labs' behavior library and adapt them for your community to proactively prevent harmful conduct.
Bullying
User(s) seek to harm another user through intimidation, coercion, and/or humiliation.
CSAM Discussion
Mentions or references of anything related to child sexual abuse material (CSAM).
CSAM Grooming
Conversations that are initiated by adult sexual predators to obtain sexually explicit materials from minors.
Hate Speech
Hateful behaviors directed at protected attributes, detected to prevent the normalization of hateful discourse.
Insult
Content that is insulting to people.
PII Scrubbing
Personally identifiable information (PII) shared by users about themselves or other individuals.
Profanity
Vulgar, profane or obscene words or phrases.
Radicalization
Efforts to recruit and radicalize people into violent extremist ideologies and actions.
Self Harm
Users that are at risk of self harm and are in need of mental health resources.
Severe Toxic
Captures both patterns and instances of behaviors that are considered harmful or severely harmful to people.
Sexual
Content that references sexual activity.
Solicitation of Drugs
Content that indicates online drug transactions.
Solicitation of Sex
Content that indicates a transactional nature to a sexual relationship.
Spam
Repetitive content and/or attempts to lead other users off the platform.
Threats
The user has a plan that could affect the real-world safety of other people if it were to be followed.
Underage 13
The presence of users who are under the age of 13.
Underage 18
Indicates the user is under the age of 18.
"Overnight I saw a 50% reduction in manual moderation of display names."
David Brown SVP, Trust and Safety
Localize actions and see insights by language and region.
Detect and add new languages quickly and easily with AI transfer learning.
Moderate character-based and hybrid languages, l33tspeak, emojis, and more.
Multi-Language Moderation
High-quality moderation in any language
To provide multilingual moderation, platforms often work with multiple vendors who specialize in different languages. That approach is expensive, difficult to manage, and doesn’t enable consistent quality.
Guardian’s patented multi-language capability offers a better solution. Our AI is configured for local social norms and country-specific regulations to help you moderate at scale for a global audience.
Guardian’s supported languages include Arabic, French, Hindi, Korean, and many more — but Spectrum Labs can add moderation capabilities for any language.
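As a toy illustration of why hybrid spellings like l33tspeak defeat naive keyword filters, a normalization pass can undo common character substitutions before any text analysis runs. The substitution table below is a minimal sketch, not Guardian's actual preprocessing:

```python
# Minimal l33tspeak normalization sketch: map common character
# substitutions back to letters before text analysis. This table is
# illustrative only; Guardian's actual preprocessing is not public.
LEET_MAP = str.maketrans({
    "0": "o", "1": "i", "3": "e", "4": "a",
    "5": "s", "7": "t", "@": "a", "$": "s",
})

def normalize_leet(text):
    """Lowercase the text and undo common l33tspeak substitutions."""
    return text.lower().translate(LEET_MAP)

print(normalize_leet("h4t3 sp33ch"))  # → hate speech
```

A static table like this only scratches the surface; evasive spellings mutate constantly, which is why learned models outperform hand-maintained filter lists.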
Language Detection Library
"Spectrum Labs was the only partner who could offer high-quality content moderation across multiple languages, allowing us to automate and scale our moderation efforts."
Weszt Hart Head of Player Dynamics
Dynamically pair positive users with newcomers to drive retention and engagement.
Get a 360° view of all user behaviors within your community, including good behaviors.
Evaluate your efforts and quantify the impact of your work.
Healthy Behaviors AI
Remove the bad — and promote the good
Simply removing toxic content does not mean you are building a healthier, more cohesive online community. After all, engagement and retention are not just tied to a game or app’s design, but to how users relate to each other on your platform.
With the first-ever positive user behavior detection, analytics, and actioning capability, Guardian by Spectrum Labs enables trust & safety and community leaders to analyze and improve their moderation efforts.
“Adding IMVU to the growing data set that Spectrum Labs uses to improve and evolve their AI benefits both of our companies. And above it all, our missions are aligned: both companies are focused on making online environments more successful for human connection.”
Maura Welch
Vice President of Marketing
Automate content moderation decisions where it makes sense.
1000s of possible combinations to meet your unique needs for automation.
Custom Configured Actioning
Take action against toxic behavior based on your own policy.
Spectrum Labs allows you to scale coverage of toxic content through customized actioning. Actioning can be done through Spectrum Labs' Guardian queue or a platform's in-house queue.
Types of actions against content include real-time redaction, automated moderation, referral for human moderation, and more.
Actions against users also can be configured with responses like displaying a warning, shadow-banning, reducing a user's reputation score, suspending an account, and reporting to law enforcement.
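A configured actioning policy like the one described above can be sketched as a lookup from detected behavior and severity to a response, with unmatched cases routed to human review. The behavior labels and action names here are examples, not Spectrum Labs' configuration format:

```python
# Hypothetical actioning policy: (behavior, severity) -> configured response.
# Labels and action names are illustrative, not Spectrum Labs' actual format.
POLICY = {
    ("hate_speech", "high"): "suspend_account",
    ("hate_speech", "low"):  "display_warning",
    ("spam",        "high"): "shadow_ban",
    ("csam",        "high"): "report_to_law_enforcement",
}

def choose_action(behavior, severity):
    """Return the configured action, defaulting to human review."""
    return POLICY.get((behavior, severity), "refer_to_human_moderation")

print(choose_action("spam", "high"))      # → shadow_ban
print(choose_action("insult", "medium"))  # → refer_to_human_moderation
```

Defaulting unmatched cases to human review, rather than to no action, keeps novel or ambiguous behaviors from slipping through automation gaps.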
“With Spectrum's Guardian content moderation product, we've been able to protect 25,000 more users per day from unwanted, sometimes illegal, messages.”
Alice Hunsberger
Head of CX
Track metrics and gain insight into larger patterns.
Prioritize and automate moderation based on user reputation scores.
Utilize user-level bulk actions for more impact with fewer clicks.
User-Level Moderation
Moderate users, not just messages
Across nearly every type of platform, 30% of toxic and illegal content is generated by just 3% of users.
To find toxic users, Guardian leverages user reputation scores, which aggregate individual user behavior over time and assign scores that can be used in AI analysis.
Each user reputation score is based on behavior severity, prior violations, and recency of offensive posts or actions.
This fully anonymized and privacy-compliant tool enables moderators to identify and penalize bad actors faster, more efficiently, and in time to prevent real-world harm.
It can also be leveraged to identify users who may be at risk for self-harm or CSAM grooming by detecting factors that indicate possible vulnerability.
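A reputation score built from the three factors above (behavior severity, prior violations, and recency) can be sketched with an exponential-decay sum, so severe and recent offenses dominate while old ones fade. The formula is an illustrative assumption, not Spectrum Labs' actual scoring model:

```python
import time

def reputation_score(violations, now=None, half_life_days=30.0):
    """Aggregate a user's violations into a single risk score.

    Each violation is a (severity, unix_timestamp) pair, severity on a
    1-5 scale. Severe, recent violations weigh more; older ones decay
    exponentially. This formula is an illustrative sketch, not
    Spectrum Labs' actual scoring model.
    """
    now = time.time() if now is None else now
    score = 0.0
    for severity, ts in violations:
        age_days = max(0.0, (now - ts) / 86400.0)   # seconds -> days
        score += severity * 0.5 ** (age_days / half_life_days)
    return round(score, 2)

now = time.time()
history = [(5, now), (4, now - 30 * 86400)]  # one fresh, one month-old
print(reputation_score(history, now=now))    # → 7.0
```

Because scoring needs only severities and timestamps, it can run over fully anonymized records, consistent with the privacy guarantees described above.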
“Approximately 30% of toxic content originates from just 3% of users across every kind of platform.”
Hill Stark, PhD
Head of Data Analytics
Guardian for Voice
Voice and audio moderation AI
Our industry-leading voice content moderation solutions help you identify and control disruptive user behaviors to build a healthier, growing community.
With Contextual AI detection and automated user-level action, content moderators can scale coverage across all types of user-generated audio content:
- Voice channels and rooms
- In-game audio chat
- Voice memos
- Interactive podcasts
- Livestream video with audio chat
- And more
Content Moderation Actioning UI
Spectrum Labs Guardian UI is designed to work with our API to help you take automated, effective action at scale. If you use your own UI, we can help you trigger custom actions with configurable webhooks that respond in milliseconds.
Moderation Queue
Moderate efficiently
View cases prioritized by severity with all relevant information for decisions. Review and act on a user or content level. Integrate with your internal systems.
Automation Builder
Trigger actions
Configure nuanced automated actions to moderate at scale. Block or remove content, warn or suspend users, or send for human review.
Analytics Dashboard
Gain Insights
Measure behavior prevalence and moderation impact. Compare to benchmarks, get strategic recommendations, and share your progress.
Easy API, decisioning, and webhooks
Implementing Spectrum Labs' solutions is easy through our well-documented API and webhooks, which require only minimal engineering resources to get up and running.
Our API comes with a real-time decision framework where you can configure complex business rules around the actions taken when a prompt or output violates your policy. The API responds within 20 milliseconds with a determination of the detected behavior and the action to take.
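A call to a real-time decision endpoint of this kind might look like the sketch below. The URL, request fields, and response shape are assumptions for illustration; consult the API documentation for the actual contract:

```python
import json
from urllib import request

API_URL = "https://api.example.com/v1/decide"  # placeholder, not the real endpoint

def parse_decision(body):
    """Pull the detected behavior and configured action out of a response."""
    return body["behavior"], body["action"]

def decide(payload, timeout=0.05):
    """POST content for a real-time decision and return (behavior, action)."""
    req = request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=timeout) as resp:
        return parse_decision(json.load(resp))

# Assumed response shape, parsed offline:
print(parse_decision({"behavior": "spam", "action": "block_content"}))
```

Keeping the client timeout tight matters here: a decision that arrives after the message has already rendered cannot redact it in real time.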
Additionally, our event-based action framework allows you to set complex rules and fire off a webhook once those rules are met, allowing for complex workflows.
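On the receiving end, an event-based webhook consumer typically verifies a signature and then dispatches on event type. The HMAC scheme and event fields below are assumptions, since the actual webhook contract isn't shown here:

```python
import hashlib
import hmac
import json

def handle_webhook(raw_body, signature, secret, handlers):
    """Verify a webhook's HMAC-SHA256 signature, then dispatch by event type.

    The signature scheme and event fields are illustrative assumptions,
    not Spectrum Labs' documented webhook contract.
    """
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return "rejected"  # never act on unauthenticated events
    event = json.loads(raw_body)
    handler = handlers.get(event.get("type"))
    return handler(event) if handler else "ignored"

secret = b"shared-secret"
body = json.dumps({"type": "user.suspended", "user_id": "u-123"}).encode()
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
result = handle_webhook(body, sig, secret,
                        {"user.suspended": lambda e: f"notified about {e['user_id']}"})
print(result)
```

Verifying the signature before parsing the body ensures that only events genuinely fired by the moderation platform can trigger workflow actions.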
Spectrum Labs prides itself on first-class customer support. Every client is provided with a dedicated solutions consultant who works closely with your team from day one to oversee the implementation phase.
Our complete solution
Detecting harmful behaviors is just the start. Our platform includes everything you need to achieve your key Trust & Safety objectives and the analytics tools to show a 360° view of the health of your community so you can quantify the effectiveness of your teams’ efforts.
Community assessment
Get visibility into what's really happening in your community, along with ideas for improvement.
Easy-to-implement API
Streamlined setup ensures fast time-to-value with minimal maintenance.
Enterprise scale and speed
Big data infrastructure handles any volume of user-generated content in milliseconds.
Dedicated customer success
Our trust & safety, technical, and data experts are here to help you achieve your goals.
Questions?
To learn more about Guardian or our other solutions, get in touch with our team or download our Guardian Content Moderation AI product sheet.