
Online Safety for Kids

How to Protect Kids on Your Platform


Children and the Internet

Ninety-five percent of children in the US aged 3 to 18 have Internet access at home. As you'd imagine, they use the Internet to play games, do schoolwork, connect with their friends and explore their interests. On average, they spend more than 1.6 hours online per day, totaling around 11 hours per week. Forty-five percent of teens are online on a near-constant basis.

The fact that the Internet is not a safe place for children isn't news; since as early as 1999, UNICEF has sponsored and published research on youth internet safety. But after more than a year of heavy Internet usage due to COVID, we are more aware than ever of the dangers kids face online.

What may be news, however, is that many children (over 30 percent) lie about their age to access age-restricted content. And this figure doesn't count the victims of sex trafficking forced to lie about their age.

So, while you may not think children are on your app, game, or site, they are.

Check out our blog: Improving Internet Safety for Kids



What are the Threats to Online Child Safety?

CSAM Grooming

Predators use online communities of all kinds, from social word games to neighborhood forums, to find and groom young victims. Grooming is a phased series of actions intended to normalize sexual communications or behaviors, usually with the long-term intention of coercing the child into sexual acts. These online predators follow a sophisticated, incremental playbook for changing children's behavior:

  1. Predators connect with the victim, sympathizing with them and supporting them as a friend would. At this stage, their chat interactions appear harmless, even positive.

  2. Predators gradually add sexual topics, themes, and jokes to conversations, desensitizing the child. Their chat interactions can still appear harmless at this stage — it is possible to introduce sexual subjects without triggering a basic sexual filter.

  3. They cast doubt on the child's relationships with their parents and peers, suggesting the child isn't worthy of those relationships or that those people aren't worthy of the child. Chat interactions can still appear harmless at this stage.

  4. They force the child to create CSAM, leveraging the information they've gathered over time and the isolated position they've placed the child in. Chat interactions at this stage may trigger filters, but a great deal of damage has already been done by this time.

Reports of child sexual abuse material (CSAM) online have increased 15,000% over the last 15 years. How? Technology has unwittingly made it easier for predators to groom children and share CSAM.

How the Founder of SOSA (Safe from Online Sex Abuse) is Fighting Against CSAM

Read This: Child Safety Whitepaper


Hate Speech

Sixty-four percent of US teenagers report they often come across racist, sexist, or homophobic comments, coded language, images, or symbols on social media.

Experiencing hate speech online can lead to depression, isolation, suicidal ideation, and self-harm, as well as an increased risk of CSAM grooming.

Watch the Master Class: Online Toxicity and How to Prevent it From Affecting Users Offline



Cyberbullying

Cyberbullying is a widespread issue: Fifty percent of children aged 10 to 18 in the EU have experienced at least one kind of cyberbullying in their lifetime. Fifty-nine percent of US teens have been bullied or harassed online. Half of LGBTQ+ children experience online harassment.

Cyberbullying is challenging for basic filters to catch for three reasons (among others):

  1. It can happen without using traditionally banned words
  2. It can pattern after trash-talking banter
  3. It can also seem like flirting
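To see why the first failure mode matters, consider a minimal sketch of a banned-word filter. The word list, function name, and messages below are purely illustrative assumptions, not taken from any real moderation product; the point is that exclusion and pile-on harassment can pass a list check entirely.

```python
# Illustrative sketch of a naive banned-word filter (hypothetical word
# list and messages; not a real product's implementation).

BANNED_WORDS = {"idiot", "stupid", "loser"}

def naive_filter(message: str) -> bool:
    """Return True if the message contains a word on the banned list."""
    tokens = message.lower().split()
    return any(token.strip(".,!?") in BANNED_WORDS for token in tokens)

# Overt insult: caught by the word list.
print(naive_filter("You're such an idiot"))                     # True

# Exclusion-style harassment with no banned words: passes the filter.
print(naive_filter("Nobody wants you here. Everyone agrees."))  # False

# Reads like trash-talk banter or sarcasm; whether it is harassment
# depends on context the filter cannot see (relationship, history,
# repetition), so a word list alone cannot decide.
print(naive_filter("Wow, you're SO good at this game"))         # False
```

The second and third messages are exactly the cases a list-based approach cannot resolve without conversational context.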

Spectrum Labs' technology can distinguish harassment from flirting and from trash-talking banter. Learn how in these blogs.

Read This: Let's Get Serious About Ending Cyberbullying


What are the Regulations on Child Online Safety?

Because threats to online child safety are of enormous concern to so many, parents and platforms have pressed for a regulatory response. This has resulted in regulations at the state and federal levels, including:


A federal law, known as the Children’s Online Privacy Protection Act (COPPA), helps to protect kids under 13 years of age, with the intent of keeping a child’s personally identifying information (name, address, social security number) out of the wrong hands.


The Children’s Internet Protection Act (CIPA) was enacted in 2000 to help limit children’s access to obscene or harmful content. It requires schools and libraries that receive discounts through the E-rate program to restrict access to such websites, set internet safety policies, and address the safety of email, chat rooms, and other forms of online communication by minors.

While regulations can be somewhat effective in helping to promote online child safety, they are not a complete solution in themselves. Such a widespread, complex, and critical issue requires a multidisciplinary approach, using the best of technological solutions, thought leadership, and platform innovation to create actionable insights and real solutions to online child safety.

One of the most effective emerging solutions to online child safety is contextual AI. Contextual AI has the benefit of interpreting contextual cues that other technological solutions miss. It can also be used for content moderation across a variety of media, including text, voice, and chat, and in several different languages.

Spectrum Labs provides AI-powered behavior identification models, content moderation tools, and services to help Trust & Safety professionals safeguard the user experience from today's threats and anticipate those to come. Because every company has different needs when it comes to content moderation, Spectrum Labs has specialized expertise in the fields of gaming, dating, social networks, and marketplaces.

If you’d like to learn more about how Contextual AI can help solve content moderation challenges and create safe and inclusive online environments, check out our Solution Guide.

Read This: Protecting Underage Users on the Internet Whitepaper

Get the Guide


Let's create a smarter, safer, healthier Internet

When it comes to moderating disruptive behaviors online, you shouldn’t have to do it alone. Spectrum’s AI models do the heavy lifting, identifying a wide range of behaviors across languages. Our engines are immediately deployable, highly customizable, and continuously refined.


Whether you are looking to safeguard your audiences, increase brand loyalty and user engagement, or maximize moderator productivity, Spectrum Labs empowers you to recognize and respond to toxicity in real time across languages.

Contact Spectrum Labs to learn more about how we can help make your community a safer place.

Contact Spectrum Labs Today