
Using AI for content moderation

By Lee Davis

Contextual AI can help communities detect early signs of nefarious behaviors, such as grooming or recruitment, at scale. This branch of AI analyzes data within its context: it looks at both content (the complete raw text) and context (e.g., attributes of the users and the scenario, or how often the sender has offended before) in order to classify a behavior.

More specifically, Contextual AI looks across all aspects of a platform (e.g., posts, private chats, messaging) and ties multiple messages together so that it can analyze conversations that span multiple interactions. At its core, Contextual AI looks at how behaviors build over time and how users respond to different messages in order to distinguish between conversations that are consensual and those that are not. For example: if 10 people in a community are talking trash, is it playful banter around a competitive game, or is it a group of bullies ganging up on someone?
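
To make the idea concrete, here is a minimal Python sketch of how content and context signals might be combined into a single conversation-level score. The message scores, context fields and weights are hypothetical illustrations, not Spectrum Labs’ actual model, which would learn these relationships from labeled conversations.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:
    sender: str
    text: str
    toxicity: float  # per-message score in [0, 1] from an upstream text classifier (assumed)

@dataclass
class Context:
    sender_prior_offenses: int      # how often the sender has been flagged before
    participants_are_friends: bool  # e.g. inferred from mutual follows or shared play history

def score_conversation(messages: List[Message], ctx: Context) -> float:
    """Combine content (message scores) with context into one conversation-level score.

    A hypothetical heuristic for illustration only; a real contextual model would
    learn these relationships from labeled conversations.
    """
    if not messages:
        return 0.0

    # Content: how the behavior builds over time, not just the worst single message.
    peak = max(m.toxicity for m in messages)
    trend = messages[-1].toxicity - messages[0].toxicity  # escalating vs. cooling off
    score = 0.6 * peak + 0.2 * max(trend, 0.0)

    # Context: repeat offenders raise the score; mutual banter between friends lowers it.
    score += 0.05 * min(ctx.sender_prior_offenses, 4)
    if ctx.participants_are_friends:
        score -= 0.15

    return max(0.0, min(1.0, score))
```

The same exchange of messages can land on either side of a moderation threshold depending on the context fields, which is the point of analyzing conversations rather than isolated posts.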

Once Contextual AI identifies inappropriate behavior and the reason why it violates a community standard, the platform’s actions against the offender can be automated. For instance, you may decide to issue a warning for a first offense, suspend an offender’s account for three days for a second offense, or ban a repeat offender altogether.
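
As an illustration, an automated escalation ladder like the one described above can be very simple. The thresholds and action names below are placeholders that each community would configure for itself.

```python
def choose_action(prior_offenses: int) -> str:
    """Map an offender's history to an automated response.

    Hypothetical ladder mirroring the example above: warn on the first offense,
    suspend for three days on the second, ban after that.
    """
    if prior_offenses == 0:
        return "warn"
    if prior_offenses == 1:
        return "suspend_3_days"
    return "ban"


# Example: a user with one prior offense gets a three-day suspension.
print(choose_action(1))  # suspend_3_days
```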


Facial Recognition and “Next Best Neighbor”

AI can help bridge many of the well-documented challenges of facial recognition technology. For example, investigators looking for missing children spend hours upon hours comparing photos provided by their families with photos posted on online escort services. But facial recognition software isn’t as useful in these scenarios as law enforcement would like, because the models have been trained on photos of white adults rather than the young and diverse people who are actually the victims of human trafficking.

AI can help resolve this challenge with a “next best neighbor” approach: it prioritizes photos in descending order of likely matches and presents them to the investigator, who makes the educated determination. This saves investigators valuable time and effort, and lets them focus on cultivating their detective skills, making decisions and putting context around the data rather than scrolling through photos.
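
A “next best neighbor” ranking can be sketched as a nearest-neighbor search over face embeddings. The code below assumes the embeddings already exist (produced by whatever face-recognition model is in use) and shows only the ranking step, which is what puts the likeliest matches in front of the investigator first.

```python
import numpy as np

def rank_candidates(query: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    """Order candidate photos from most to least similar to the query photo.

    query:      embedding of the photo provided by the family, shape (d,)
    candidates: embeddings of photos scraped from listings, shape (n, d)
    Returns indices into `candidates`, best matches first.
    """
    q = query / np.linalg.norm(query)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    similarities = c @ q                  # cosine similarity per candidate
    return np.argsort(-similarities)      # descending order of likely matches
```

The investigator still makes the final call; the ranking only determines what they look at first.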

The need for a “next best neighbor” approach underscores a major challenge with all AI: models must be trained on wide-ranging datasets that are accurately labeled.

Eliminating Bias in AI

Over the past 10 years, the world has seen some spectacular examples of AI failures that delivered outcomes highly biased against certain ethnicities and groups of people. To eliminate bias, AI models must be trained on diverse datasets that are labeled by a diverse group of labelers.

A person’s ability to label data accurately depends largely on their background, culture and life experiences. For instance, people who haven’t grown up around drugs may miss many coded drug terms. Some labelers are better at identifying grooming behaviors, while others excel at recognizing hate speech.

It’s also important to use labelers who are native speakers of the language of the content they will be asked to evaluate. A speaker who isn’t linguistically and culturally fluent may be unaware of the particular nuances, euphemisms or cultural references of a given language, or of the ways in which the context of a word or phrase can affect its meaning. As a result, people who are not native speakers of the language they are asked to label may have a low accuracy rate.

Diversity is essential when building a pool of data labelers. A man may believe that a specific term or action is harmless or perhaps merely distasteful, while a woman may find it outright offensive. The same is true across genders, ages, religions, national origins, races and ethnicities. A diverse dataset and a diverse team of data labelers will help ensure your labels are accurate and that you’re not inadvertently introducing individual human biases. Diversity will also reduce potential over-sensitivity to particular topics.
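
One practical way to act on this is to check whether labelers from different backgrounds systematically disagree on the same items. The sketch below is a hypothetical diagnostic, not a prescribed method: it measures, per labeler group, how far that group’s average label sits from the overall consensus.

```python
from collections import defaultdict
from statistics import mean

def disagreement_by_group(labels):
    """labels: iterable of (labeler_group, item_id, label) tuples, with numeric
    labels (e.g. 0 = acceptable, 1 = violates policy).

    Returns, for each group, the average gap between that group's mean label and
    the overall mean per item. A consistently large gap for one group suggests the
    guidelines or the content read differently across backgrounds.
    """
    per_item = defaultdict(lambda: defaultdict(list))
    for group, item, label in labels:
        per_item[item][group].append(label)

    gaps = defaultdict(list)
    for by_group in per_item.values():
        overall = mean(l for ls in by_group.values() for l in ls)
        for group, ls in by_group.items():
            gaps[group].append(abs(mean(ls) - overall))

    return {group: mean(g) for group, g in gaps.items()}
```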

AI is one of the most important investments you can make to ensure the long-term health of your community and the emotional well-being of your members. It is an evolving field, and one worth paying attention to as your platform matures.

Learn More: Benefits of Using Contextual AI


Learn more about how Spectrum Labs can help you create the best user experience on your platform.