
Master Class Recap: Using AI to recognize pro-social behaviors

By Alexis Palmer

This month, Spectrum Labs hosted a master class titled "Using AI to Recognize & Reward Pro-Social Behaviors."

The conversation featured Grindr's VP of Customer Experience, Alice Hunsberger; Together Labs' Sr. Director of Customer Care & Education, Jeff Hanlon; and Spectrum Labs' own Manager of Data Analytics, Hill Stark, PhD, and VP of Data Science, Jonathan Purnell.

This blog features a few key takeaways from the discussion, but if you want to watch the master class in its entirety, you can do so here. 

Looking at a graph of the past 20 years, you'll see a gradual increase in content moderation across platforms. This change tracks rising user expectations and government regulation, both of which place more accountability on sites. Though social media sites bear the brunt of the headlines, the responsibility extends to most sites that cultivate community, whether gaming, dating, or other genres of platform. Luckily, for the most part, technology has evolved alongside these expectations and the strategies platforms have been expected to develop.

Looking at the industry as a whole, the initial approach to content moderation was to ignore the problem - or, rather, to not take responsibility for it. Today, the prevailing approach is reactive - keyword moderation, banning users, and so on. The newest shift, however, is toward a more proactive approach to moderation.

Proactive vs. Reactive Content Moderation

A reactive approach to content moderation includes keyword filtering, human moderators, and user reporting. All of these are effective means of protecting your online community. However, they all share one requirement: bad actors must already be causing issues on your platform before anything happens. Proactive involvement in your communities, by encouraging healthy behavior, can fill the gaps that reactive approaches can't reach.

What data are we looking at regarding prosocial behaviors? What story can that tell us when moderating online content? 

The data that drives this AI is vast - millions upon millions of pieces of content every month. However, the most colorful story is told by the metadata. When looking at individual users, we can use it to form a story for each of them, understand behavioral patterns, and assess their risk level.

Historically, to gather this data, we've focused on messaging; however, we've found more value at the user level. When we look at behaviors by the same user over time, beyond any individual message, we can have much greater confidence in someone's intentions on the platform. In other words, thanks to contextual AI, we can look at the entire conversation rather than singular messages triggered by keywords.
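As a rough illustration of this user-level view, the sketch below aggregates per-message scores into a per-user behavioral profile. The field names and scores are invented for the example; this is not Spectrum Labs' actual API or model.

```python
from collections import defaultdict

def build_user_profiles(messages):
    """Roll per-message signals up to the user level (illustrative only)."""
    totals = defaultdict(lambda: {"count": 0, "toxic": 0.0, "prosocial": 0.0})
    for msg in messages:
        p = totals[msg["user_id"]]
        p["count"] += 1
        p["toxic"] += msg["toxicity_score"]
        p["prosocial"] += msg["prosocial_score"]
    # Averaging over a user's history means one keyword-triggered message
    # carries far less weight than a sustained behavioral pattern.
    return {
        uid: {
            "avg_toxicity": p["toxic"] / p["count"],
            "avg_prosocial": p["prosocial"] / p["count"],
        }
        for uid, p in totals.items()
    }

profiles = build_user_profiles([
    {"user_id": "a", "toxicity_score": 0.8, "prosocial_score": 0.1},
    {"user_id": "a", "toxicity_score": 0.2, "prosocial_score": 0.9},
])
print(profiles["a"]["avg_toxicity"])  # 0.5
```

The point of the design is the denominator: judging the average over a whole history, rather than reacting to a single message, is what makes the confidence "much greater" at the user level.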


How do we derive a signal to inform insights and empower action? 

Deciding what's prosocial, toxic, or neither involves writing definitions broad enough to encompass the target situations we want to find, while also being restrictive enough to limit labelers' bias and subjectivity. Because platforms and language are always evolving, this becomes an iterative process: when labelers disagree, that disagreement is itself a signal, pointing to the expanded examples a lexicon needs to better illustrate our target.
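A minimal sketch of how such disagreement can be surfaced, assuming a simple mapping of items to the labels different annotators gave them (the label names and data shape are hypothetical):

```python
def disagreement_items(labels):
    """Return items where annotators did not all agree.

    labels: {item_id: [label_from_annotator_1, label_from_annotator_2, ...]}
    Items with more than one distinct label are the candidates to feed back
    into the lexicon as expanded examples.
    """
    return [item for item, votes in labels.items() if len(set(votes)) > 1]

print(disagreement_items({
    "m1": ["prosocial", "toxic"],     # annotators split: review this one
    "m2": ["neutral", "neutral"],     # full agreement: no signal here
}))  # ['m1']
```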

Separating trolls from those with malicious intent.

In most cases, someone with malicious intentions wants their content exposed to as many people as possible, which is where metadata comes in. When a user's goal on a platform is a negative one (harassment, spam, etc.), their behavior follows a distinctly different pattern from that of a user engaging with the platform as intended. Many of these metadata fields are triggered within the first few minutes of someone joining the platform, so we can determine their intention with confidence right away.

Ex: If someone's signup date is recent and they begin sending out messages that don't fit the norm while also being flagged by other users, the tech can flag them as likely malicious much more quickly.
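The example above can be sketched as a simple heuristic over metadata fields. The field names and thresholds here are invented for illustration and are not Spectrum Labs' actual model:

```python
from datetime import datetime, timedelta

def early_risk_flag(user, now):
    """Combine early metadata signals into a single malicious-intent flag."""
    is_new = (now - user["signup_date"]) < timedelta(days=1)   # recent signup
    abnormal_volume = user["messages_last_hour"] > 50          # outside the norm
    reported = user["reports_received"] >= 2                   # flagged by others
    # A brand-new account, unusual messaging volume, and user reports
    # together form a strong early signal of malicious intent.
    return is_new and abnormal_volume and reported

suspect = {
    "signup_date": datetime(2022, 12, 1, 9, 0),
    "messages_last_hour": 120,
    "reports_received": 3,
}
print(early_risk_flag(suspect, datetime(2022, 12, 1, 10, 0)))  # True
```

In practice a model would weight many such signals rather than AND three booleans, but the shape of the decision is the same: metadata available in a user's first minutes is enough to act on.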


How does this information translate into a proactive approach?

When all this information comes together, the tech can better understand how an individual user values their fellow users. Are they teaching new users? Are they inviting them to join other conversations or games? These are likely to be distinctive behaviors in your community, so we can use aspect models to flag these positive behaviors and reward them.
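The reward side can be as simple as mapping flagged behaviors to points. The behavior labels and point values below are made up for the example, not a real reward scheme:

```python
# Hypothetical prosocial behaviors and the points each one earns.
REWARDS = {
    "mentoring_new_user": 10,
    "inviting_to_game": 5,
}

def reward_points(flagged_behaviors):
    """Sum the points for each prosocial behavior the models flagged."""
    return sum(REWARDS.get(b, 0) for b in flagged_behaviors)

print(reward_points(["mentoring_new_user", "inviting_to_game"]))  # 15
```

On a real platform the "points" might instead be badges, visibility boosts, or in-app currency; the mechanism of flag-then-reward is what carries over.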

Why is rewarding positive behavior important for building community in online spaces?

At its core, nurturing positive user behavior means rewarding human decency online. Creating safe environments promotes a healthy community where users can thrive, which in turn boosts user retention and revenue - excellent news for your bottom line.

If you would like to learn more about the implementation of AI in promoting healthy behaviors, read our white paper, Boost User Retention by Promoting Healthy & Positive Behaviors.

To get a better understanding of how Spectrum Labs Healthy Behaviors AI can help improve your platform's content moderation strategy, contact our team. 

Learn more about how Spectrum Labs can help you create the best user experience on your platform.