
The Truth About Radicalization Online

By Katie Zigelman

The internet offers users countless ways to connect. It also provides a rich hunting ground for violent extremist groups, violent supremacist groups, and violent individuals, who use it to propagate messages of violence, hate, and division. These groups are known to use social media to publish propaganda, reach and groom recruits, and generally spread their message of hatred.

There is no question that members and supporters of these groups visit mainstream websites, forums, and publishing platforms to recruit others or encourage violent acts. It's been happening for years. The United States Department of Justice (DOJ) has been tackling the issue since 2013.

Social media experts and researchers have warned us for years that there is a specific playbook, an actual method used by extremists to turn trolls into terrorists. It's all about hate, and it's happening right now on online marketplaces and classifieds, social media platforms, dating apps, social forums, and gaming communities.

How Do Individuals Become Radicalized?

According to Becca Lewis, research affiliate with Data & Society who studies online extremism, it usually begins with the idea that extremist content is just a joke.

In a recent interview with Marketplace Tech, Lewis explained: "Far-right communities online have... been using humor specifically as a recruitment tool for young, disillusioned people and particularly young, disillusioned men. In fact, [there was a] really popular neo-Nazi website, a couple of years ago, its [recruiting] style guide got leaked. The founder of that website explicitly wrote that humor is a recruitment tool because it gives the far-right extremists plausible deniability about their extremist beliefs."

  • Know that radicalization techniques are often masked as innocent humor.

Considering the "OK" Gesture

Extremists have become adept at spreading their message online while staying just within the rules of each forum. Lewis cited the "OK" hand symbol scare that began in 2016. Initially, the symbol was not considered racist or white supremacist.

However, as of 2019, its continued use on social media has changed that. The "OK" hand gesture is now officially listed as a hate symbol, per the BBC. It's the perfect example of an online "joke" that keeps resurfacing in the media, even as unknowing individuals use the gesture in its traditional sense.

  • It's this passing of humorous content which propagates hatred online.

Radicalization Via Gaming Platforms Affects Children

NBC published an article in August 2019 describing the immense amount of hate-driven activity online, even on children's gaming platforms. On Roblox (a popular platform for young users), reporters were able to find four profiles carrying hateful, white-supremacist propaganda almost instantly.

Roblox immediately deleted the accounts when notified. But then, with the help of another Twitter user, NBC was ultimately able to find 100 more hateful accounts on this single platform.

Per NBC, Roblox says it employs around 800 moderators who review thousands of items monthly. But hateful messages still reach the eyes and hearts of children, who are drawn to the humorous content.

It all starts with a joke, which leads to a string of casual conversations, and then to a feeling of acceptance and identification with a radicalized group.

What is the Solution to Online Radicalization?

According to Lewis, the solution begins with platforms treating online radicalization as a genuinely serious threat. She points to the period when ISIS videos, content, and propaganda were proliferating across popular social media platforms, and cites the 2019 mosque shootings in Christchurch, New Zealand, and the difficulty platforms had in stopping the spread of the attack footage.

  • Platforms need to prioritize: recognize radicalized content as a real problem and invest resources in grappling with it.
  • Companies shouldn't shy away from the issue because it feels political. These are American companies that should be protecting their users.

Here at Spectrum, we offer the AI tools platforms need to continually moderate content in real time and sniff out hate propaganda and radicalizing content. Our AI software works with gaming platforms, dating apps, forums, online marketplaces, and other social sites. Our mission is to help you create healthy, safe online environments for all of your users.

Our unique AI software works in real time, across many languages, and continually updates itself to track slang, acronyms, and other ways users pass illicit materials or initiate forbidden transactions. Contact us today to learn how we can fight radicalization on your platform, or download our white paper on preventing hate speech and radicalization online.

Download the Whitepaper

Learn more about how Spectrum Labs can help you create the best user experience on your platform.