
Marketplace Risk Recap: Using AI for Regulatory Compliance

By Alexis Palmer

During the Marketplace Risk event on May 17th, we had the privilege of speaking with Sarika Oaks, Director of Trust and Safety Operations at Udemy, and Neema Basri, Chief Operating Officer at Duco Experts. The experts discussed the use of AI in complying with regulations such as COPPA and the EU DSA, the operationalization of compliance, and strategies for detecting spam and scams. Here are some key takeaways.

EU Digital Services Act

  • The EU Digital Services Act (DSA) is an EU regulation designed to ensure a safer user experience and provide effective channels for reporting illegal content. It began applying to very large online platforms in August 2023 and applies to all covered platforms from February 17, 2024. 
  • Operationalizing compliance with the EU DSA begins with accurately interpreting the law. If you're not sure which provisions of the DSA apply to your platform, Duco Experts can audit your platform and provide a specific list of criteria for achieving compliance.
  • Once the law is interpreted, operationalizing compliance can be broken down into three components (see the sketch after this list):
    • Monitoring
    • User flagging of content
    • Reporting through transparency reports
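
To make the last two components more concrete, here is a minimal sketch of how a user flag might be recorded and rolled up into per-reason counts for a transparency report. This is our own illustration, not a schema prescribed by the DSA or discussed on the panel; the `ContentFlag` fields and the `aggregate_for_transparency_report` helper are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical record for a user-submitted flag; the fields are
# illustrative only, not a schema prescribed by the DSA.
@dataclass
class ContentFlag:
    content_id: str
    reporter_id: str
    reason: str  # e.g. "illegal_content", "hate_speech"
    flagged_at: datetime

def aggregate_for_transparency_report(flags):
    """Roll user flags up into per-reason counts for a transparency report."""
    return dict(Counter(flag.reason for flag in flags))

flags = [
    ContentFlag("post-1", "user-9", "illegal_content", datetime.now(timezone.utc)),
    ContentFlag("post-2", "user-3", "hate_speech", datetime.now(timezone.utc)),
    ContentFlag("post-3", "user-7", "illegal_content", datetime.now(timezone.utc)),
]
print(aggregate_for_transparency_report(flags))
# {'illegal_content': 2, 'hate_speech': 1}
```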

COPPA

  • COPPA (the US Children's Online Privacy Protection Act) was established to protect children under 13 from having their personal information collected and used to target them with marketing and advertising. It aims to manage and mitigate risks for young users and requires platforms to implement safeguards for children's use.
  • The most important aspect of COPPA compliance is ensuring that your platform does not collect or store any personally identifiable information (PII) from children under 13, even when children use your platform (see the sketch after this list).
  • Protecting kids isn’t just about compliance; a platform is responsible for keeping its community safe. When kids and adults sign up to use your platform, they trust that you will protect them from bullying, abuse, hate speech, and unwanted sexual behavior. Building that trust in your community results in a better brand reputation.
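
As a minimal sketch of the storage-side safeguard described above, the snippet below drops PII fields at signup when a user reports an age under 13. Everything here is hypothetical (the `SignupData` fields, the `scrub_child_pii` helper); a real COPPA program would also involve age verification and a verifiable parental-consent flow.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical signup payload; the field names are illustrative only.
@dataclass
class SignupData:
    username: str
    age: int
    email: Optional[str] = None
    full_name: Optional[str] = None

COPPA_AGE_THRESHOLD = 13

def scrub_child_pii(signup: SignupData) -> SignupData:
    """Return a copy with PII fields dropped when the user is under 13.

    A real implementation would also trigger a verifiable
    parental-consent flow rather than silently discarding data.
    """
    if signup.age < COPPA_AGE_THRESHOLD:
        return SignupData(username=signup.username, age=signup.age)
    return signup

stored = scrub_child_pii(SignupData("kid_gamer", 11, "kid@example.com", "Jane Doe"))
assert stored.email is None and stored.full_name is None
```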

Spam & Scams

  • The process of operationalizing detection and actions for spam and scams can be divided into three key components:
    • Actioning: This involves implementing automated measures or routing the content to a moderator for human review (see the sketch after this list).
    • Defining Responsibility: It is crucial to determine whether AI acts as an assistant to human moderators or as the primary decision-maker when handling spam and scam content.
    • Setting up Enforcement: Establishing appropriate mechanisms to enforce regulations and take action against this type of content.
  • Training data plays a vital role in building effective content moderation tools. If you develop content moderation AI in-house, the training dataset may be limited, so it is advisable to rely on a content moderation partner with a substantial dataset for training and iterating models to detect toxic behaviors like spam and scams.
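
To illustrate the "actioning" component above, here is a minimal sketch of score-based routing: high-confidence detections are actioned automatically, while gray-zone cases are escalated for human review. The thresholds and the `route_message` function are hypothetical, not values recommended by the panel; real values would be tuned per platform and model.

```python
# Hypothetical thresholds; real values are tuned per platform and model.
AUTO_ACTION_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

def route_message(spam_score: float) -> str:
    """Decide how to action a message given a model's spam/scam score."""
    if spam_score >= AUTO_ACTION_THRESHOLD:
        return "auto_remove"          # high confidence: automated action
    if spam_score >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_moderator"  # gray zone: escalate for human review
    return "allow"                    # low risk: no enforcement action

for score in (0.99, 0.70, 0.20):
    print(score, "->", route_message(score))
```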

Spectrum Labs AI uses advanced transformer-based behavior models built from billions of user-generated messages, so its data is fine-tuned to detect toxicity and spam in chats and other human-to-human interactions online. Being able to develop highly accurate models based on real-world UGC across different languages makes it uniquely capable of proactive detection that scales trust and safety efforts, with the detailed reporting needed for regulatory compliance built in. Learn more about Spectrum Labs AI for compliance.

Learn more about how Spectrum Labs can help you create the best user experience on your platform.