SILVER PLAN
Early-stage, fast-growing platforms choose the Silver plan to quickly establish content moderation best practices.
Silver plans start at just $1,300/month.
Pricing will vary based on the number of active users and the number of API calls.
GOLD PLAN
Maturing platforms choose the Gold plan to establish an end-to-end content moderation solution.
Gold plans start at just $3,000/month.
Pricing will vary based on the number of active users and the number of API calls.
PLATINUM PLAN
Platforms with unique needs and large user bases choose our Platinum plan.
Contact us to learn about pricing.
Companies with more active users can save with the Platinum plan.
TRŌV, BY SPECTRUM LABS
Online toxicity comes in many forms. We identify over 40 types of harmful behaviors and can quickly create custom models beyond those.
Our customers use our technology to extend the same safety benefits across languages.
If you'd like more detail, please email contact@getspectrum.io or fill out our contact form.
Our technology identifies harmful behaviors within a whole conversation, as well as in a single message, by understanding both what is being said and how it is being responded to. Compared to chat filters, this context-aware approach accurately identifies up to 5x more harmful content with far fewer false positives, allowing our customers to provide an improved, consistent user experience on their platforms.
Filters are powered by keywords: a list of words or regular expressions banned from use. Filters can identify simple harmful behaviors, like profanity, but only if you have thought of every variation of every profane term, like Ass, A_s_z, and @$$. Our approach does not require you to maintain a keyword list.
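To illustrate the maintenance burden, here is a minimal sketch of a keyword filter in Python. The pattern list and test messages are invented for illustration; every new obfuscation has to be added by hand, and anything you have not anticipated slips through.

    import re

    # A hand-maintained keyword filter: every obfuscated variant must be
    # anticipated and added to the list.
    BANNED = [
        re.compile(r"\bass\b", re.IGNORECASE),
        re.compile(r"\ba_s_z\b", re.IGNORECASE),  # one known obfuscation
        re.compile(re.escape("@$$")),             # another
    ]

    def keyword_filter(message: str) -> bool:
        """Return True if the message matches any banned pattern."""
        return any(p.search(message) for p in BANNED)

    print(keyword_filter("what an @$$"))  # True  -- caught, because it was listed
    print(keyword_filter("what an a55"))  # False -- missed: variant never listed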
Our customers prefer our approach over traditional chat filters because it is more accurate, comprehensive, nuanced, and efficient.
False positives — when content is incorrectly flagged or filtered as toxic — are problematic for Trust & Safety departments because they waste moderator time, increase moderation costs, and detract from the user experience.
We minimize false positives by delivering high-precision models (95%+) trained on a wide range of data and by taking context into consideration when making a determination.
The most performant models are those tuned to the specific requirements and nuances of the language used on our customers' platforms. We customize every customer's solution to match their specific guidelines, evaluate precision and recall against extensive evaluation datasets, and continually refine each solution based on feedback and moderator actions.
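As a back-of-the-envelope illustration of why precision matters for moderation costs (the volumes below are hypothetical, not customer data):

    # Hypothetical illustration: at 10,000 flagged items per day, each
    # point of precision translates directly into moderator workload.
    # precision = true positives / (true positives + false positives)
    flagged_per_day = 10_000

    for precision in (0.70, 0.95):
        false_positives = round(flagged_per_day * (1 - precision))
        print(f"precision {precision:.0%}: ~{false_positives} wasted reviews/day")

    # precision 70%: ~3000 wasted reviews/day
    # precision 95%: ~500 wasted reviews/day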
Spectrum’s model library, Trōv, is available on our website. We offer packages of these models for customers based on size and need. More information on our packages can be found on our pricing page.
We can also partner with customers to develop custom models for behaviors that are not already in Trōv.
Yes. Our models are tuned to deliver the highest precision and recall for our customers based on their business needs.
Our technology can detect harmful behaviors in over 30 languages, and our patent-pending approach to language processing allows us to spin up support for new languages in weeks. We scale our models horizontally to new languages first, then work with translators and native-language training datasets to fine-tune them.
We have built a proprietary language detection system that runs as a standard pre-processing step. In addition, our custom embeddings are multilingual, so we can detect harmful behaviors even when users switch between multiple languages within a single message.
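The embeddings themselves are proprietary, but the idea can be sketched in a few lines of Python. The sketch below uses an off-the-shelf multilingual model as a stand-in for our custom embeddings, and the seed phrases and scoring are illustrative only:

    from sentence_transformers import SentenceTransformer, util

    # Off-the-shelf multilingual model as a stand-in for custom embeddings.
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

    # Illustrative seed examples of one harmful behavior, in two languages.
    threat_examples = model.encode(["I will hurt you", "te voy a lastimar"])

    def threat_score(message: str) -> float:
        # A shared multilingual space means a code-switched message scores
        # against the same seed examples as a monolingual one.
        embedding = model.encode([message])
        return float(util.cos_sim(embedding, threat_examples).max())

    print(threat_score("voy a hurt you mañana"))  # high despite mixed languages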
Our API has an average response time of less than 20ms.
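Latency like this is easy to verify from a client. In the sketch below, the endpoint URL and payload shape are hypothetical placeholders, not our documented API:

    import time
    import requests

    # Hypothetical endpoint and payload shape -- placeholders only.
    URL = "https://api.example.com/v1/analyze"
    payload = {"text": "example chat message", "user_id": "u123"}

    start = time.perf_counter()
    response = requests.post(URL, json=payload, timeout=1.0)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{response.status_code} in {elapsed_ms:.1f} ms")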
Yes. We have a customizable reputation system that assigns a score to a user based on their actions and associated metadata. The system works both for individual user reputations and for communities of users in channels, live streams, and more. The score can be integrated into your product to inform matching algorithms and much more, and/or used to set the appropriate response to identified behavior.
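One way to picture such a score; the fields, weights, and thresholds below are illustrative, not the product's actual schema:

    from dataclasses import dataclass, field

    # Illustrative reputation model -- field names, weights, and thresholds
    # are invented to show how a score can inform product decisions.
    @dataclass
    class UserReputation:
        score: float = 100.0
        history: list = field(default_factory=list)

        def record(self, behavior: str, severity: float) -> None:
            """Lower the score for harmful actions."""
            self.history.append(behavior)
            self.score = max(0.0, self.score - 10 * severity)

    def matchmaking_pool(user: UserReputation) -> str:
        # e.g., route persistently low-reputation users to a restricted pool
        return "general" if user.score >= 50 else "restricted"

    user = UserReputation()
    user.record("hate_speech", severity=2.0)
    print(user.score, matchmaking_pool(user))  # 80.0 general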
Yes. Spectrum offers our behavior identification service as an on-premise binary delivered as a Debian package or Docker image. Please contact our partnerships team for additional information.
Spectrum integrates with your internal management systems through webhooks so we can send signals for actions. We partner with you to configure which actions should be taken based on which behavior conditions.
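On your side, receiving those signals can be as simple as a small HTTP handler. The route and payload fields below are illustrative, not our documented webhook schema:

    from flask import Flask, request

    app = Flask(__name__)

    # Illustrative webhook receiver; payload fields are hypothetical.
    @app.route("/moderation-webhook", methods=["POST"])
    def handle_signal():
        event = request.get_json()
        behavior = event.get("behavior")        # e.g. "hate_speech"
        action = event.get("suggested_action")  # e.g. "mute_user"
        user_id = event.get("user_id")
        # Forward into your own management system here.
        print(f"{behavior} by {user_id}: applying {action}")
        return "", 204

    if __name__ == "__main__":
        app.run(port=8000)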
Today we support behavior detection for Text in all forms, including usernames, chat messages, profile descriptions, feed posts, comments, search queries, and more. We are also introducing behavior detection for Audio (Voice).
While we don’t support behavior detection for images, we can integrate image moderation into our Guardian UI so that the moderation team can work out of a single queue.
Yes. Spectrum offers professional services to consult on decisions like this. We also offer best practices based on what we've seen from our customers and the community. In general, once you have identified the behaviors and their severity, you can decide which behaviors are routed for human review and which responses you want to automate. For example, behaviors like threatening to bring a gun to school or self-harm ideation are typically sent to human review, while removing profanity is automated.
Our tech is built to help you digitize your community guidelines and your behavior/action matrix.
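In its simplest form, a digitized behavior/action matrix is a lookup from behavior and severity to a response. The behaviors and actions below are examples mirroring the guidance above, not a prescribed configuration:

    # Illustrative behavior/action matrix -- entries are examples only.
    BEHAVIOR_ACTIONS = {
        ("gun_threat", "high"): "human_review",
        ("self_harm", "high"):  "human_review",
        ("profanity", "low"):   "auto_remove",
    }

    def route(behavior: str, severity: str) -> str:
        # Default to human review for anything not explicitly configured.
        return BEHAVIOR_ACTIONS.get((behavior, severity), "human_review")

    print(route("profanity", "low"))    # auto_remove
    print(route("gun_threat", "high"))  # human_review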