
Content moderation has become increasingly important in almost every industry. Moderators work extensively to protect the people who use their platforms, whether those users want to steer clear of certain hot-button issues or need protection from predators who treat community pages as places to find potential victims. Cyberbullying is becoming increasingly common in schools, with girls victimized three times more often than boys; at the same time, both teens and adults may become victims of sex trafficking, kidnapping, assault, and more as a result of their involvement in online communities. As a trust and safety executive, it's more important than ever that you take steps to protect your community. Facebook alone has already put more than 10,000 people to work on the challenges its online communities face. Are you prepared for the changes coming in 2020?
1. Develop a solid understanding of what topics to censor.
Some people worry that content moderation will end the free speech that Americans, for the most part, take for granted. Content moderation, however, is not just about censorship. Rather, it's about creating a safe environment for the visitors to your online community.
The individuals who populate your community make a big difference in what you want to censor and what you want to allow. A community for parents of young children, for example, may want to censor violent content or crude language, lest little eyes peek over their parents' shoulders as they scroll social media. Any community, however, can benefit from the censorship of hate speech and discriminatory language, and no community should permit online predators to operate.
To help prepare your organization for 2020, take a hard look at your organization's ethical stance and how you want to handle hot-button issues. Create a comprehensive list of the topics you will absolutely censor within your community, then make that list a clear part of your community rules. Above all, enforce them: rules only keep your community safe if you apply them consistently, rather than letting some members slide.
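If it helps to make the policy concrete, here is a minimal sketch in Python of how a banned-topics list could be kept in one place so automated tooling and human moderators work from the same rules. The categories and placeholder phrases are purely illustrative, not a recommended or complete policy.

```python
# A minimal sketch of a shared banned-topics rule list. The categories and
# example phrases below are placeholders, not a real or complete policy.

BANNED_TOPICS = {
    "hate_speech": ["<slur or phrase>", "<another phrase>"],
    "graphic_violence": ["<violent phrase>"],
    "sexual_content": ["<explicit phrase>"],
}

def find_violations(post_text: str) -> list[str]:
    """Return the banned-topic categories a post appears to violate."""
    text = post_text.lower()
    return [
        topic
        for topic, phrases in BANNED_TOPICS.items()
        if any(phrase in text for phrase in phrases)
    ]

# Anything returned here would go to a moderator, per the community rules.
flagged = find_violations("example post text")
if flagged:
    print(f"Post flagged for: {', '.join(flagged)}")
```

Even a simple list like this forces you to write down, in advance, exactly which topics your community will not tolerate, rather than deciding case by case.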
2. Understand how predators operate (or use a system that does).
Predators often feel confident enough to operate right out in the open, in the online communities you use every day. Get a feel for how predators operate--or use a system like Spectrum that can flag inappropriate content as soon as it goes up. Spectrum, for example, uses advanced AI technology to help identify potential predators lurking within online communities based on signs of sexual grooming or other triggers that could indicate a plan to kidnap or assault a member.
For 2020, make sure you're using a solid system that will help identify these predators and eject them from your community as soon as possible. You don't want your community to be a stalking ground for predators--nor do you want the legal liability that could go along with it.
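For illustration only, the sketch below shows how a post-submission hook might route high-risk content to human review before it is published. The `grooming_risk_score` function and the threshold are stand-ins invented for this example; this is not Spectrum's actual interface.

```python
# A hypothetical sketch of screening new posts at submission time. A real
# deployment would call a vendor API or in-house model here; this is not
# Spectrum's actual interface.

from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    text: str

def grooming_risk_score(post: Post) -> float:
    """Placeholder for a model that scores grooming/predatory behavior (0 to 1)."""
    return 0.0  # stand-in value; a real model would analyze the text here

REVIEW_THRESHOLD = 0.7  # assumed threshold; tune for your own community

def handle_new_post(post: Post, review_queue: list[Post]) -> bool:
    """Publish a post, but hold high-risk content for human review first."""
    if grooming_risk_score(post) >= REVIEW_THRESHOLD:
        review_queue.append(post)  # held for a moderator instead of publishing
        return False
    return True  # publish immediately
```

The key design point is that screening happens before publication, so a potential predator's message never reaches its target while it waits for review.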
3. Create plans to keep the human element.
AI can now identify potential dangers or negative behaviors at an incredible rate, but you shouldn't rely on AI technology alone to deal with harassment, predators, or hate speech in your community. AI lacks the ability to see context and understand why some content has been posted. Human moderators, on the other hand, can sort through the flagged content--and even content that AI might not catch--to ensure that everything posted helps provide a safe, secure atmosphere for your users.
In 2020, make sure human moderators are keeping an eye on your content. Not only can they flag things that slip past your technology, but they can also dig deeper into flagged content and confirm that it actually violates your rules.
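As a rough sketch of that human-in-the-loop step, the example below records a moderator's final decision alongside the AI's original flag. The names and decision categories are illustrative assumptions, not part of any particular product.

```python
# A rough sketch of human review of AI-flagged content: the AI flags, but a
# moderator makes the final call and the outcome is recorded for auditing.

from enum import Enum

class Decision(Enum):
    REMOVE = "remove"      # the content genuinely violates the rules
    RESTORE = "restore"    # the AI flag was a false positive
    ESCALATE = "escalate"  # needs a senior moderator or legal review

def review_flagged_item(item_id: str, ai_reason: str, decision: Decision) -> dict:
    """Record the moderator's judgment alongside the AI's original flag."""
    return {
        "item_id": item_id,
        "ai_reason": ai_reason,
        "decision": decision.value,
    }

# Example: a moderator overrides an AI flag that lacked context.
log_entry = review_flagged_item("post-123", "possible hate speech", Decision.RESTORE)
```

Keeping a record of these overrides also shows you, over time, where your automated flagging needs the most human help.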
Are you struggling to keep up with the demands of online moderation? Spectrum can help. Contact us today to learn more about how we can identify some of the biggest dangers in your online community.