“To build a service that helps inspire people to find and do what they love, we have to deliberately engineer a safe and positive experience.
That’s why we partner with Spectrum Labs.”
Pinterest Trust & Safety Operations team
Content moderation for social media is arguably the most challenging task in Trust & Safety. Social media platforms need passionate moderators reinforced by powerful automated systems.
Spectrum Labs uses Contextual AI to help social media moderators spot hard-to-detect toxic behaviors that keyword-only systems miss:
Not all harmful behavior can be caught with keywords. Online bullying and harassment generally involve prolonged contact using language that may appear benign to moderation systems.
Spectrum Labs’ Contextual AI looks at user behavior patterns and interprets complex language to better detect online bullies who target victims with threats and insults.
Today’s society doesn’t tolerate hate speech and expects online platforms to take concrete steps against it.
Spectrum Labs can accurately detect and remove hate speech across multiple languages and dialects, as well as in 1337speak or other evasive online text.
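To illustrate why evasive spellings like 1337speak defeat a plain keyword filter, here is a minimal sketch of character normalization before matching. This is a hypothetical toy example for explanation only (the `LEET_MAP` table, `BLOCKLIST`, and function names are invented here), not Spectrum Labs’ actual detection method, which relies on contextual models rather than lookup tables:

```python
# Toy illustration: a naive keyword filter misses leetspeak spellings,
# while a simple character-normalization pass catches some of them.
# (Hypothetical sketch only -- not Spectrum Labs' actual approach.)

LEET_MAP = str.maketrans({
    "1": "i", "3": "e", "4": "a", "0": "o",
    "5": "s", "7": "t", "@": "a", "$": "s",
})

BLOCKLIST = {"idiot"}  # toy keyword list for the example


def keyword_match(text: str) -> bool:
    """Naive filter: exact lowercase substring match only."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKLIST)


def normalized_match(text: str) -> bool:
    """Undo common leetspeak substitutions, then match keywords."""
    normalized = text.lower().translate(LEET_MAP)
    return any(word in normalized for word in BLOCKLIST)


print(keyword_match("you 1d10t"))     # False -- evasive spelling slips through
print(normalized_match("you 1d10t"))  # True  -- caught after normalization
```

Even with normalization, a static table can only undo spellings it anticipates, which is why context-aware models that read surrounding conversation, rather than isolated strings, are needed for reliable detection.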
81% of Americans want technology companies to provide more options for people to filter hateful or harassing content.(1)
Social media profiles, comments, and DMs get inundated with spam that disrupts users’ conversations, targets them for scams, or detours them off the platform.
Spectrum Labs can stop spam with accurate detection and automated action.
Online platforms are often surprised to find self-harm being promoted or publicized within their community.
Spectrum Labs’ Contextual AI can remove self-harm content using advanced behavior identification models and automated actioning to alert authorities or provide mental health resources when needed.
Combating the rise of extremism requires powerful detection models that can parse complex behavior and language.
Spectrum Labs’ radicalization, threat, and violence models root out online radicalization before it can spread across your platform. Spectrum’s Contextual AI can analyze nuances and detect conversations where bad actors groom prospects and plan offline violence, better protecting your platform from predatory users and potential legal liability.
Just as negative user behavior can impact your retention and growth numbers, positive user behavior can drive them in the right direction.
Amplify can help.
It’s the first-ever retention AI tool able to detect and boost the positive behavior that fuels retention, engagement, and revenue.
Amplify dynamically pairs your platform’s highest-rated users with new visitors to create more positive user experiences, without changing in-app design or mechanics.
Amplify has already helped a pilot client with a metaverse platform increase its average revenue per user (ARPU) by 12% within the first four weeks of implementation.
Amplify is the only solution that offers 360-degree analytics into your community (the bad and the good!), so you can isolate toxicity and spread positivity.
By using Contextual AI, Amplify activates existing users to increase retention by recognizing and rewarding healthy behavior. These positive interactions can optimize retention and ARPU and create a more engaging and enjoyable platform — and a safer community.
Families choose our apps because they keep children safe while empowering them to experience the fun and connection of technology. We partner with Spectrum Labs because there's no better solution for protecting kids online.
Sean Herman, CEO
We feel like we have a content moderation partner who is collaborating and working together with us to make our platform a great experience for our users.
Matt Toy, Head of Trust and Safety