We hear a lot about online toxicity and its negative effects on user retention. It’s pretty simple: the less enjoyable an online platform is, the less likely users are to stick around.
Countless investments have been made in content moderation and in removing harmful behavior from online communities. One major gaming platform saw a 15% increase in first-time user retention within the first month of implementing hate speech detection and other AI moderation tools from Spectrum Labs.
Now, the same gaming company and others are looking to make their communities a more enjoyable place by proactively encouraging positive behavior among users.
But how exactly can that be done?
Identifying toxic behavior is a pretty straightforward proposition – but how do you define healthy behavior?
Although its specifics may vary between platforms, healthy behavior is anything that contributes to a more beneficial online experience. This could range from funny posts and helpful comments on a social network to good manners and pleasant conversations on a dating app. The Spectrum Labs team has always advised that Trust & Safety should be built into platforms from the design phase, and the same goes for systems that recognize positive behavior.
Recent analysis by Spectrum Labs shows a strong correlation between positive interactions among users and the growth of a thriving community. According to Trust & Safety leaders in our #TSCollective, users with positive first-time visits are 6x more likely to return to an online platform. That kind of retention boost translates directly into revenue – healthy behavior isn’t just good for community, it’s also good for business.
The key to defining healthy behavior is to determine the purpose of your platform, then reward user behavior that helps fulfill that vision. That doesn’t necessarily mean only acknowledging squeaky-clean users either – for instance, profanity-laced conversations about heavy-duty weaponry may be fine for multiplayer first-person shooters but not for, say, an eLearning community.
Every platform has its own rules and contexts, so recognizing positive behavior isn’t a “one size fits all” endeavor. Spectrum has begun working with pilot clients on new pro-social indicators that can be customized to shape positive experiences for different types of communities.
When positive behavior is built into an online platform and reinforced throughout the entire community experience, users are more likely to stand up against harmful behavior. That is a far preferable outcome to users abandoning their accounts when a platform gets too toxic – as 26% of Facebook users have done in recent years.
Making users feel “at home” empowers them to contribute positively and defend their space against toxicity. Many communities incentivize healthy behavior with rewards that users can display on their profiles.
Most importantly, all of these rewards are based entirely on users’ interactions with other users. That’s one way to establish and reinforce positive behavior as an inherent part of your online community and your platform's user experience.
Earlier this year at our Safety Matters Summit, Spectrum Labs hosted a panel about positive behavior on gaming platforms with Trust & Safety leaders from Microsoft (Xbox), Blizzard Entertainment, and Riot Games.
Now, we’ve compiled the best insights from the panel into a whitepaper that further explores positive behavior across five key sections.
The entire whitepaper is available to download here:
Driving Positive Behavior in Online Platforms: Lessons learned from gaming communities
After you've checked it out, tell us on Twitter what you think!