Cyberbullying is not a new challenge. It is an issue that seemingly comes with the territory of internet use.
But does it have to?
As young people spend more time online, the effects of cyberbullying are becoming more apparent, and the responsibility of managing it is shifting more heavily into the hands of platforms.
In this master class, our expert speakers dove into what cyberbullying looks like and the steps your platform can take to prevent it. Matt Soeth (Head of Trust & Safety at Spectrum Labs) sat down with Sameer Hinduja (Founder of the Cyberbullying Research Center) and Kris McGuffie (Director of Research at Spectrum Labs) to discuss:
- Identifying cyberbullying is often not as simple as it seems.
How do we define cyberbullying online? It would be nice to treat it as a black-and-white issue, but in reality it is far more complicated: there are layers to examine. The terms and statements traditionally used to identify cyberbullying in online communities are often shaped by contextual and cultural variables. Because of this, keyword-based moderation leaves cracks in a moderation strategy, punishing innocent users while letting toxic ones through.
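To see why keywords alone fall short, consider a minimal sketch. The blocklist and messages below are invented examples, not a real moderation rule set:

```python
# Illustrative only: a tiny keyword filter and its two classic failure modes.
BLOCKLIST = {"idiot", "loser"}  # assumed example terms

def keyword_flag(message: str) -> bool:
    """Flag a message if any blocklisted word appears in it."""
    words = message.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

# False positive: benign, self-deprecating use gets punished.
print(keyword_flag("Haha, I'm such an idiot for missing that meeting"))  # True

# False negative: context-dependent abuse with no blocklisted words slips through.
print(keyword_flag("Nobody would miss you if you left this server"))     # False
```

Both outcomes are wrong from a safety standpoint, which is why context, not just vocabulary, has to inform the decision.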
- What can we look at to identify cyberbullying?
In addressing cyberbullying, looking at two key elements is essential:
A) Is there harm taking place?
When cyberbullying is reported or detected, be willing to look at the whole picture. A bully may target a victim using words that traditional text-based moderation would never flag. Flexibility is essential: what a user is experiencing may fall outside your definition of harm, but if a user is struggling in some capacity, and that struggle is articulable and understandable, taking them at their word can keep them safer in the long run.
B) What is the intent?
There are plenty of trolls on the internet who, although annoying, don't directly inflict harm. Cyberbullying, on the other hand, tends to be recurring and aims to cause harm to its target, whether emotional damage or physical threats.
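One way to operationalize that "recurring" signal is to count incidents between the same actor and the same target; the threshold and labels below are assumptions for illustration, not any platform's actual policy:

```python
from collections import defaultdict

REPEAT_THRESHOLD = 3  # assumed: three incidents against one target

class RepeatTracker:
    """Distinguish one-off trolling from a repeated pattern of targeting."""

    def __init__(self):
        # count of recorded incidents per (actor, target) pair
        self.incidents = defaultdict(int)

    def record(self, actor: str, target: str) -> str:
        self.incidents[(actor, target)] += 1
        if self.incidents[(actor, target)] >= REPEAT_THRESHOLD:
            return "escalate"  # repeated targeting: consistent with bullying
        return "monitor"       # isolated incident: watch, don't punish yet

tracker = RepeatTracker()
print(tracker.record("user_a", "user_b"))  # monitor
print(tracker.record("user_a", "user_b"))  # monitor
print(tracker.record("user_a", "user_b"))  # escalate
```

Keying on the actor-target pair, rather than the actor alone, is what separates a pattern of harassment toward one person from scattered rudeness.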
- Age-appropriate moderation & determining the needs of your community.
Another challenge in content moderation is developing a strategy based on demographics: not just location or gender, but age. Certain spaces require higher levels of moderation. In online communities with a younger audience, for example, you may want to restrict what users can say and share more tightly.
Conversely, on a more mature platform driven by adult users, keyword-focused moderation can inappropriately flag user speech as cyberbullying even when it is community-appropriate, for instance in communities built around "harsh" language or ones where you want to allow a certain level of "trash talk."
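A per-community policy table is one simple way to encode these differing norms. The community names, fields, and values here are hypothetical:

```python
# Hypothetical per-community moderation policies: the same message can be
# fine in one community and flagged in another.
POLICIES = {
    "teen_gaming":  {"allow_profanity": False, "allow_trash_talk": False},
    "adult_sports": {"allow_profanity": True,  "allow_trash_talk": True},
}

def is_allowed(community: str, *, profanity: bool, trash_talk: bool) -> bool:
    """Check message traits against the norms of the community it was posted in."""
    policy = POLICIES[community]
    if profanity and not policy["allow_profanity"]:
        return False
    if trash_talk and not policy["allow_trash_talk"]:
        return False
    return True

print(is_allowed("adult_sports", profanity=True, trash_talk=True))   # True
print(is_allowed("teen_gaming",  profanity=True, trash_talk=False))  # False
```

The point is not the specific flags but the structure: moderation thresholds live per community rather than as one global rule set.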
- How can you utilize experts when identifying, implementing, or enforcing bullying and harassment policies?
Organizations can develop a hive mindset that prevents them from seeing all points of view. Working with an external partner brings a fresh perspective on your platform and can surface issues your team might miss. Hearing from specialists across disciplines who research the reasoning and causes behind user behavior can help expand an organization's point of view.
- How you can encourage healthy behavior within your platform.
The critical components of a healthy community are repeated, pro-social interactions among connected users who are rewarded for that behavior. Rewarding positive behavior builds trust with your user base, which in turn can foster good behavior across the entire platform. At the same time, understanding what is actually driving cyberbullying within your community puts you in a position of real control. Tools that show you what is happening in your community also give you a better sense of where to encourage positive behavior.
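Rewarding pro-social interactions could be sketched as a simple reputation ledger; the event names and point values below are assumptions, not any platform's actual scheme:

```python
# Toy reputation ledger for pro-social behavior (all values are assumed).
PROSOCIAL_POINTS = {
    "helpful_answer": 5,
    "welcomed_newcomer": 3,
    "upvoted_by_peer": 1,
}

def apply_reward(score: int, event: str) -> int:
    """Return the user's updated reputation after a pro-social event."""
    return score + PROSOCIAL_POINTS.get(event, 0)

score = 0
for event in ["welcomed_newcomer", "helpful_answer", "upvoted_by_peer"]:
    score = apply_reward(score, event)
print(score)  # 9
```

Even a scheme this small makes the reward loop visible to users, which is what encourages the repeated positive interactions described above.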
- People change, but tech can help.
It is essential to acknowledge that people change the ways they harm one another. Keeping this in mind and building more flexibility into your moderation strategy will strengthen your ability to catch bad actors. Determining what is universal and transferable across languages, cultures, and ages, and combining that knowledge with ever-growing data, allows us to operationalize an understanding of human behavior even as it changes. When we then use that data to train AI that accounts for these variables and recognizes intent, our ability to spot negative influences on the platform expands dramatically.
Learn more about preventing cyberbullying online in our white paper.
Watch the master class on demand.