Safe, inclusive, engaged communities are not born. They are deliberately made and maintained by invested community members and passionate Trust & Safety professionals.
We exist to power their efforts.
Our Contextual AI system identifies behaviors as they happen, in real time, across content types and languages, by evaluating many inputs, not just one word or line of text. It assembles those inputs into a larger picture and, from that vantage point, makes a more accurate determination of what may be happening.
Timestamps, profile information, other conversations and content produced by the users involved, and other metadata are all considered.
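As a rough illustration of the idea (not Spectrum Labs' actual system, and with made-up signal names and weights), combining several weighted contextual signals into one determination might look like this:

```python
from dataclasses import dataclass

# Hypothetical signals; a real contextual system weighs many more inputs.
@dataclass
class Signal:
    name: str
    score: float   # 0.0 (benign) to 1.0 (high risk)
    weight: float  # relative importance of this signal

def contextual_risk(signals: list[Signal]) -> float:
    """Blend several weighted signals into one risk score,
    instead of flagging on a single word or line of text."""
    total_weight = sum(s.weight for s in signals)
    if total_weight == 0:
        return 0.0
    return sum(s.score * s.weight for s in signals) / total_weight

signals = [
    Signal("message_text", 0.4, weight=2.0),    # the line itself is ambiguous
    Signal("recent_history", 0.9, weight=1.5),  # prior messages escalate the picture
    Signal("account_age", 0.7, weight=0.5),     # brand-new account
]
print(contextual_risk(signals))
```

The point of the sketch: a message that looks ambiguous on its own can still cross a risk threshold once surrounding context is weighed in.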
AI is only as good as the data it is trained on. Our Contextual AI system is trained with data from our Data Vault.
People communicate differently in essays, books, and articles than they do in posts and messages. The Vault contains different shapes of data to account for that.
“To build a service that helps inspire people to find and do what they love, we have to deliberately engineer a safe and positive experience. That’s why we partner with Spectrum Labs.”
Head of Search
As Trust & Safety professionals in the gaming industry know, moderating voice chat is hard. Accuracy, cost, speed, and privacy challenges combine to make it a task not for the faint of heart.
Traditionally, Trust & Safety professionals have moderated across languages by maintaining lists of translated keywords. Sure, translation software has gotten better, but the extra step adds delays. Don’t even get us started on the problems with keywords.
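To make those problems concrete, here is a toy version of the traditional keyword-list approach (the blocklist entries and messages are invented for illustration):

```python
# A toy keyword-list moderator, the traditional approach described above.
# Keyword lists are brittle: they miss simple obfuscation and flag benign uses.
BLOCKLIST = {"scam", "estafa"}  # an English term plus its Spanish translation

def keyword_flag(message: str) -> bool:
    """Flag a message if any word appears in the translated blocklist."""
    words = message.lower().split()
    return any(word in BLOCKLIST for word in words)

print(keyword_flag("this is a scam"))     # caught
print(keyword_flag("this is a s c a m"))  # evaded by trivial spacing
print(keyword_flag("I reported a scam"))  # flagged despite benign context
```

Every new language means another translated list to maintain, and the matcher still can't tell reporting abuse apart from committing it.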
Our customers have stopped translating, embraced our patent-pending approach to multi-language support, and now confidently moderate across the languages used in their communities.