5 min read

5 Takeaways from the Scaling Content Moderation AI Master Class

By Alexis Palmer

Online platforms are increasingly focused on building safe, inclusive, and healthy online communities. As part of a recent master class, AWS's Head of AI/ML, Stephen Peacock, and Spectrum Labs' Co-Founder and CFO, Josh Newman, discussed the importance of content moderation; scaling and customization; trends in user experience and safety; regulatory compliance; and ethical considerations in AI-driven content moderation. By fostering a healthy online community, online platforms can build trust, encourage user engagement, and ultimately achieve success in the ever-growing digital landscape.

1) Fostering a Healthy Online Community

During the master class, Peacock emphasized the importance of community health and content moderation, mentioning that platforms must work towards building safe, healthy, and inclusive environments for users to enjoy their experiences. One of the critical aspects of fostering community health is proactive detection, which involves identifying both harmful and healthy user behavior. Finding and encouraging the good players in your community is essential, especially since they often outnumber the bad players.

When it comes to building an AI solution in-house, the challenges include the need for large amounts of data, maintaining speed and latency, and scaling the solution across multiple languages. These challenges could impact the effectiveness of AI-powered content moderation in fostering community health. However, the right AI solution can help maintain a positive online environment by proactively detecting harmful content, promoting positive behavior, and ultimately contributing to the platform's success.

Prioritizing community health helps platforms better understand and meet the needs of their users, and it can help them grow. But creating a healthy online community involves more than just stopping harmful content; it also requires proactively promoting a positive environment and supporting the good players within the community. By doing so, platforms can build trust and encourage user engagement, leading to a successful platform that benefits both the users and the company.

2) Scaling and Customization: Building the Right Solution

If your platform partners with a vendor for content moderation, several key factors must be considered. It's essential to know your player base and who you want it to be; that understanding helps you evaluate vendors on their ability to meet the specific needs of your community and genre.

One crucial aspect is the vendor's capacity to scale up with the company's growth. Can they handle a sudden increase in users, and can they support the company as it expands? This is particularly important for larger gaming platforms that expect significant growth. Additionally, platforms should consider the vendor's ability to maintain low latency, especially for games that rely on real-time communication and collaboration. As Newman emphasized, ensuring that the "message is going to get there in time" is essential.

Another critical factor is the vendor's flexibility in accommodating unique use cases. For instance, can the vendor provide a more locked-down and safer environment if the game targets children? Platforms should also consider the vendor's capacity to adapt to different workflows, content review needs, and budgets. This will help ensure the solution is tailored to the company's requirements.

It's also essential to evaluate whether the vendor can help the company comply with relevant regulations. As Newman noted, partnering with a vendor that understands and can support compliance efforts is imperative for avoiding potential financial losses due to non-compliance. By carefully considering these factors, platforms can make informed decisions when choosing a vendor to help build and maintain a safe, inclusive, and engaging online environment.

3) Trends in User Experience and User Safety

In recent years, there has been a significant increase in attention paid to user experience and user safety in the gaming industry. The heightened focus can be attributed to various factors, including the COVID-19 pandemic, improvements in AI technology, and growing awareness of the importance of a healthy gaming community.

According to Peacock, a toxic community can quickly become the "number one driver of churn," negatively impacting a company's bottom line. As a result, gaming platforms are becoming more socially responsible and investing in trust and safety measures, which ultimately benefit both the community and the company.

Another noted trend is the use of AI to protect content moderation teams. Because these teams are continuously exposed to the darker side of online communities, AI can reduce their burden by filtering out clearly harmful content before it ever reaches a human reviewer, as in the sketch below.
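As a rough illustration of this idea (not a description of Spectrum Labs' actual pipeline), a filter can auto-action near-certain violations, let near-certain safe content through, and route only the ambiguous middle band to human reviewers. The scoring stub and thresholds below are assumptions made purely for the sketch:

```python
# Hypothetical pre-moderation filter: only ambiguous content reaches human reviewers.
# The scoring function is a stub; a real platform would call its moderation model
# or vendor API here instead.

AUTO_REMOVE = 0.95   # near-certain violations are actioned automatically
AUTO_ALLOW = 0.20    # near-certain safe content skips review entirely

def toxicity_score(message: str) -> float:
    """Stand-in for a real toxicity model; returns a rough 0-1 score."""
    blocked_terms = {"placeholder_slur", "placeholder_threat"}  # illustrative only
    words = message.lower().split()
    hits = sum(1 for word in words if word in blocked_terms)
    return min(1.0, 5 * hits / max(len(words), 1))

def route(message: str) -> str:
    score = toxicity_score(message)
    if score >= AUTO_REMOVE:
        return "auto_remove"    # moderators never have to read it
    if score <= AUTO_ALLOW:
        return "auto_allow"
    return "human_review"       # only the ambiguous slice reaches the team

print(route("good game, well played"))  # -> auto_allow
```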

The speakers also highlighted the importance of user-level analysis in content moderation. By focusing on the small percentage of users responsible for the majority of toxic content, platforms can significantly improve the overall user experience. This approach can reduce churn, increase retention, and drive higher average revenue per player; a minimal sketch of the idea follows.
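For illustration only, user-level analysis can be as simple as aggregating flagged messages per account and escalating the accounts that keep reappearing; the event data, threshold, and structure below are assumptions, not Spectrum Labs' implementation:

```python
# Sketch of user-level analysis: aggregate flagged messages per account to surface
# the small group of repeat offenders (illustrative data and threshold).
from collections import Counter

# (user_id, was_flagged) pairs, as they might come out of a moderation pipeline
moderation_events = [
    ("user_1", True), ("user_2", False), ("user_1", True),
    ("user_3", False), ("user_1", True), ("user_4", True),
]

flags_per_user = Counter(uid for uid, flagged in moderation_events if flagged)

REPEAT_OFFENDER_THRESHOLD = 3  # hypothetical cutoff for escalation
repeat_offenders = [uid for uid, count in flags_per_user.items()
                    if count >= REPEAT_OFFENDER_THRESHOLD]

print(repeat_offenders)  # -> ['user_1']
```

Acting on that short list touches far fewer accounts than message-by-message enforcement alone, which is the intuition behind the retention gains described above.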

Overall, current user experience and safety trends focus on leveraging technology and user-level analysis to create more positive gaming communities. Not only does this help maintain the well-being of players, but it also contributes to the success of gaming platforms by improving player retention and increasing revenues.


4) Addressing Safety and Regulation Compliance

With the emergence of regulations like the Digital Services Act from the EU, platforms are now facing new requirements related to transparency, reporting, and making sure that activities illegal offline are also considered illegal online. In light of this, the significance of content moderation toolsets cannot be overstated.

Legal professionals are still interpreting the depth of explanation required for these regulations, but content moderation tools clearly play a critical role in supplying platforms with the information needed to adhere to these guidelines.

Beyond having the right tools and signals, the speakers emphasized the value of community engagement and learning from others in the industry. Discussing regulation compliance with peers can give platforms insight into how others navigate the regulatory landscape, which is invaluable when determining the best approach to complying with new regulations.

As regulations like the Digital Services Act continue to evolve, content moderation tools and community engagement will play a crucial role in helping platforms ensure compliance and maintain transparency in their operations. Collaborations such as the one between AWS and Spectrum Labs aim to support platforms in addressing these challenges by providing the necessary tools, data, and resources.

5) Ethical Considerations in AI for Content Moderation

Ethical considerations are an important aspect of AI, especially when it comes to content moderation. 

Peacock highlights the challenge of bias in AI, stating, "The data is really where bias tends to creep in. It's very hard to see it. Sometimes it's very subtle what the machine picks up on and what behavior you start to see from your models." He emphasizes the importance of using tools to track model drift, ensure transparency, and maintain data privacy and sovereignty.
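As one concrete (and entirely illustrative) reading of what tracking model drift can look like in practice: compare the distribution of recent moderation scores against a baseline window and alert when they diverge. The statistical test and threshold below are assumptions, not a description of AWS's or Spectrum Labs' tooling:

```python
# Illustrative drift check: compare recent toxicity scores against a baseline
# window with a two-sample Kolmogorov-Smirnov test (assumed approach).
from scipy.stats import ks_2samp

baseline_scores = [0.02, 0.05, 0.10, 0.04, 0.90, 0.03, 0.07, 0.12]  # scores at launch
recent_scores = [0.30, 0.45, 0.50, 0.41, 0.95, 0.38, 0.47, 0.55]    # scores this week

statistic, p_value = ks_2samp(baseline_scores, recent_scores)

ALERT_P_VALUE = 0.05  # hypothetical alerting threshold
if p_value < ALERT_P_VALUE:
    print(f"Possible drift: score distribution shifted (KS={statistic:.2f}, p={p_value:.3f})")
```

In practice a check like this would run on far larger samples and feed a dashboard or alert, but the principle is the same: watch how the model's outputs move, not just its inputs.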

Newman also acknowledges the potential for systemic bias in AI, noting the differences between unsupervised and supervised learning: "Using a large unsupervised model is going to subject you to whatever systemic biases in the data that you're trying to get off of." He explains that Spectrum Labs leans heavily on supervised learning and works closely with its in-house research team and language experts to ensure fair and unbiased decisions.
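To make the supervised-versus-unsupervised distinction concrete, here is a minimal supervised text-classification sketch: the model learns only from labels that human annotators assigned, which is the point where a research team and language experts can review the guidelines and catch bias before training. The tiny dataset and scikit-learn pipeline are purely illustrative assumptions, not Spectrum Labs' models:

```python
# Minimal supervised-learning sketch (illustrative only): the classifier learns
# from human-assigned labels, so bias review happens at the labeling stage.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled dataset standing in for expert-reviewed training data.
messages = [
    "you played terribly, uninstall",
    "great match, thanks all",
    "nobody wants you here",
    "anyone up for another round?",
]
labels = [1, 0, 1, 0]  # 1 = toxic, 0 = healthy, as judged by human annotators

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["thanks for the game"]))  # 0 = healthy, 1 = toxic
```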

To summarize, ensuring ethical AI in content moderation requires a combination of supervised learning, transparency, data privacy, and collaboration with experts to minimize biases and maintain trust and safety. Both AWS and Spectrum Labs are committed to providing the necessary tools and resources to address these ethical considerations in AI-driven content moderation.

Collaboration Between AWS and Spectrum Labs for Enhanced Platform Safety

AWS and Spectrum Labs are joining forces to create safer online communities by offering powerful tools and technology for content moderation. As Stephen Peacock from AWS highlights, "every community has its unique requirements, and the tools we offer aim to help customers identify and address issues specific to their user base."

The strengths of both platforms unite to offer a comprehensive and adaptable content moderation solution. By focusing on providing a holistic approach to community safety, they empower platforms to create customized solutions that meet the specific needs of their user base while ensuring the security, stability, and scalability necessary for today's online environments. Learn more here.


If you missed it, you can watch the master class in its entirety here.

Take the first step towards creating a safer, more inclusive online environment for your platform. Contact our team today to learn more about how Spectrum Labs Contextual AI solutions can help you comply with online safety regulations and build a thriving community.

Learn more about how Spectrum Labs can help you create the best user experience on your platform.