
Improving Internet Safety for Kids: Crafting a Multi-Layered Response

By Lee Davis

Keeping kids safe on the Internet has never been more challenging. Young people now spend more hours online than ever before, across a wide variety of websites and platforms. And even kids who have been well informed about online safety and acceptable usage still engage in dangerous behaviors: a 2019 study found that 40% of kids in grades 4-8 had chatted online with a stranger, even though 87% of them had been taught to use the Internet safely.[1]

While legislators have attempted to address Internet safety in a variety of ways, these efforts to date share a common weakness that makes them largely ineffective: they require technology companies to shoulder the majority of the burden, rather than making Internet safety for kids a responsibility shared among parents, brands, platforms, and regulators.

For example, in March 2020, the Eliminating Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act was introduced in the Senate. The Act focuses on eliminating child sexual abuse material (CSAM) from digital platforms by allowing states to develop regulations that online platforms will be required to follow.

The problem with this is that it places the entire responsibility for the identification and removal of CSAM on the platforms, rather than building an effective, shared response to the exploitation of children online. An effective response to CSAM must also address data misrepresentation, funding, and regulatory response.

Data Misrepresentation

The bulk of CSAM data comes from the National Center for Missing and Exploited Children (NCMEC). Electronic service providers (ESPs) are required to report identified CSAM incidents to NCMEC's CyberTipline, but these reports do not consistently represent either the number of children exploited or the number of exploitative acts. Moreover, a platform that submits few reports to NCMEC isn't necessarily safer; it may simply have an ineffective identification system.

Because NCMEC is not a governmental agency, it is not subject to the Freedom of Information Act, so it is difficult to clarify what the data means beyond what NCMEC publishes in its own reports. To reduce data misrepresentation, NCMEC should release its raw data, enabling greater transparency and more granular analysis.

Funding

While regulations such as the EARN IT Act focus on increasing the accountability of technology companies, they fail to account for the fact that law enforcement must investigate each report. Because of this, sufficiently funding law enforcement agencies is a critical factor in the effectiveness of any CSAM prevention regulation.

Regulatory Response

Existing regulations require that CSAM be deleted within 90 days of the initial report being submitted. This burdens law enforcement, who may need more time than that to complete an investigation. Moreover, once materials are deleted, they can no longer be used for analysis, specifically for training AI algorithms to identify incidents of CSAM. An AI algorithm that accurately identifies CSAM in real time is critical to resolving this issue, as it can:

  • Improve response time by automating identification and reporting
  • Reduce the cost of creating and maintaining a CSAM content moderation program, and
  • Protect the mental health of human content moderators by limiting their exposure to damaging materials

However, because the 90-day deletion requirement limits the material available for training, it is difficult to improve the accuracy and efficiency of these AI models.
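To make the automation concrete: the most common form of automated CSAM identification is hash matching, in which each upload is fingerprinted and compared against a list of fingerprints of already-identified material. The sketch below is a minimal illustration of that idea, not a description of Spectrum Labs' technology; the hash list and the reporting hook are hypothetical stand-ins, and production systems use perceptual hashes (such as Microsoft's PhotoDNA) rather than exact cryptographic hashes so that resized or re-encoded copies still match.

    import hashlib
    from pathlib import Path

    # Hypothetical set of SHA-256 fingerprints of already-identified material,
    # standing in for the hash lists that clearinghouses share with platforms.
    # The entry below is a placeholder, not a real hash.
    KNOWN_HASHES = {"0" * 64}

    def fingerprint(path: Path) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def report_to_tipline(path: Path, digest: str) -> None:
        """Hypothetical reporting hook; a real platform would file a
        CyberTipline report with NCMEC at this point."""
        print(f"Match on {path.name} ({digest[:12]}...): report filed")

    def screen_upload(path: Path) -> bool:
        """Screen one upload, reporting automatically on a match rather
        than waiting on a human moderator. Returns True on a match."""
        digest = fingerprint(path)
        if digest in KNOWN_HASHES:
            report_to_tipline(path, digest)
            return True
        return False

The limitation discussed above follows directly from this design: hash matching can only catch copies of material that has already been identified and retained long enough to be fingerprinted. Detecting novel material requires trained classifiers, which is exactly where the 90-day deletion window constrains the available training data.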

To read more about CSAM prevention, download our white paper: Challenges to the Online Child Safety Ecosystem

Online child safety is a critical priority for technology companies and regulators alike. However, responses to date have fallen short because they fail to address the nuances of the online ecosystem. Placing responsibility on a single party, such as the platforms, is an ineffective response to the CSAM problem. To improve the response to CSAM, regulations must address the current system's shortcomings, correct them at the source, and support forward-looking content moderation technologies that bring us closer to reducing, and perhaps one day eliminating, online CSAM.

Learn how Spectrum Labs technology is detecting and removing CSAM from online platforms.


[1] https://isc2-center.my.salesforce.com/sfc/p/#G0000000iVSt/a/0f000000fyoc/TYQ9XvDATBA78rR00G.PGJ9fmaLm1vQfAW9HCpy3GWk

Learn more about how Spectrum Labs can help you create the best user experience on your platform.