
In this post, we cover how the Spectrum Labs API can now consume user reports, how the severity of a user's collective cases can be factored into actioning decisions, and how bulk user moderation can help teams scale.
Consume user reports for more efficiency and DSA compliance
The world of content moderation started with user reports – the ability for users to report other users for posting toxic or harmful content. This feature stood at the very foundation of the Trust & Safety space. However, user reporting has a key problem: only a small subset of users ever report anything, and much of what they do report turns out to be false.
User report-based content moderation gave way to more technological approaches, first keyword-based filtering and eventually powerful AI models specifically trained to spot a range of behaviors.
Spectrum Labs recently released an innovation that brings the spotlight back to user reports, which do offer value to content moderation when used in the right way. The API now includes a user report endpoint, which enables our customers to do several things:
First, it allows content moderators to see user-reported cases along with content and user action cases created through our AI. This makes for a more efficient process since it can all be done from a single, consistent interface.
Second, it will be of great value for companies working to comply with the EU’s Digital Services Act (DSA). As you may know, the DSA requires platforms to establish a way to identify trusted flaggers of illegal content and to prioritize the processing of reports coming from those trusted flaggers. Spectrum Labs’ new user report endpoint allows cases in the moderation queue to be assigned a higher priority when they originate from trusted flaggers.
And lastly, user reports can boost automation. Automation is achieved through Spectrum Labs’ powerful rules engine, which executes automated actions based on confidence scores from Advanced Behavior Systems (ABS) for a range of toxic behaviors. When the ABS confidence score doesn’t meet the defined threshold, a case is added to a queue for manual review. However, when a user report comes in that flags the same content, that is a strong indicator that the content does need to be actioned. This allows our customers to act on content without always requiring a moderator review, despite the lower confidence of the AI model. Overall, user reports coupled with automation can further boost efficiency in content moderation.
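To make the interplay concrete, the routing logic described above can be sketched as follows. This is an illustrative sketch only, not the Spectrum Labs rules engine or API: the threshold values, the report confidence boost, and the case fields are all assumptions for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Case:
    content_id: str
    abs_score: float       # ABS confidence for a toxic behavior (0.0-1.0)
    user_reported: bool    # a user report flagged this same content
    trusted_flagger: bool  # the report came from a DSA trusted flagger

def route_case(case: Case, auto_action_threshold: float = 0.9,
               report_boost: float = 0.15) -> str:
    """Decide whether a case is auto-actioned or queued for manual review."""
    score = case.abs_score
    # A corroborating user report raises effective confidence, letting
    # lower-scoring content be actioned without a moderator review.
    if case.user_reported:
        score += report_boost
    if score >= auto_action_threshold:
        return "auto_action"
    # DSA compliance: reports from trusted flaggers get priority processing.
    if case.trusted_flagger:
        return "review_queue:high_priority"
    return "review_queue:normal"
```

In this sketch, content the AI alone would have sent to manual review (for example, a score of 0.80 against a 0.90 threshold) is actioned automatically once a user report corroborates it.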
Collective severity
The true heroes in any online community are Trust & Safety professionals. They have a big impact on the community, especially when they save lives by acting on expressions of self-harm intent or by preventing horrific situations like child grooming.
However, Trust & Safety professionals still face long queues of cases, and they need a better way to identify and prioritize the most severe cases that require urgent review.
Spectrum Labs recently released its collective severity feature. It calculates the cumulative risk of the open cases associated with each individual user, so that high-risk and at-risk users can be prioritized at the top of the queue. The ability to bulk-manage the riskiest or most dangerous users helps moderation teams tackle the most time-sensitive and impactful cases.
Through Spectrum Labs, moderators can access a privacy-safe user view that is prioritized based on each user's collective severity of open cases. For example, child-grooming cases are marked with high priority for moderator review. Another example would be users expressing intentions of self-harm. The new collective severity innovation allows moderators to swiftly spot and direct users to the help they need.
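One way to picture a cumulative-risk calculation like this is to sum weighted severities of each user's open cases and sort descending, so the most dangerous users surface first. The behavior labels and severity weights below are assumptions for illustration, not Spectrum Labs' actual scoring model.

```python
from collections import defaultdict

# Hypothetical per-behavior severity weights; a real deployment would
# configure these rather than hard-code them.
SEVERITY = {"child_grooming": 10, "self_harm": 9, "hate_speech": 5, "profanity": 1}

def rank_users(open_cases: list[dict]) -> list[tuple[str, int]]:
    """Rank users by the collective severity of their open cases."""
    totals: dict[str, int] = defaultdict(int)
    for case in open_cases:
        totals[case["user_id"]] += SEVERITY.get(case["behavior"], 0)
    # Highest collective severity first -> reviewed first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

Under this weighting, a user with one self-harm case outranks a user with several low-severity cases, which matches the prioritization behavior described above.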
With collective severity, we have already seen a 4x increase in efficiency, allowing Trust & Safety staff to handle more high-priority cases in less time and to make better-informed decisions using a privacy-safe, consolidated view of a user's activity.
Bulk management
Spectrum Labs’ final innovation for efficiency is bulk management of user moderation. This seemingly minor enhancement can be a strong efficiency booster for your Trust & Safety team. Bulk management allows moderators to perform actions on multiple cases for the same user at the same time, reducing repetitive tasks in the moderation queue and scaling moderator performance.
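In practice, bulk management boils down to applying one moderator decision across every open case for a user in a single operation instead of one case at a time. A minimal sketch, assuming hypothetical case records and action names rather than the actual Spectrum Labs schema:

```python
def bulk_resolve(cases: list[dict], user_id: str, action: str) -> int:
    """Apply one moderator action to every open case for a given user.

    Returns the number of cases actioned. The case fields and action
    strings are illustrative only.
    """
    actioned = 0
    for case in cases:
        if case["user_id"] == user_id and case["status"] == "open":
            case["status"] = "resolved"
            case["action"] = action  # e.g. "suspend_user", "remove_content"
            actioned += 1
    return actioned
```

A moderator reviewing a high-severity user would call this once, resolving all of that user's open cases together rather than repeating the same action per case.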