You may have seen Exploited, the New York Times’ horrifying series on online child sexual abuse. We at Spectrum Labs have been following the series since the first article, The Internet is Overrun with Images of Child Sexual Abuse, was published in late September, followed by a devastating deep-dive into tech companies’ tepid reaction with Child Abusers Run Rampant as Tech Companies Look the Other Way. The most recent installment, While They Play Online, Children May Be the Prey, leveled a gut punch to the gaming industry, describing how predators use in-game chats to find and coerce children into sexual acts — and then blackmail them into a cycle of extortion and shame. The disturbing article left us with one question:
If gaming companies are already using software to detect child sexual abuse, then why is this problem growing?
Simple answer: predators’ actions evolve from innocuous to serious over time, and most technologies are not sophisticated enough to detect the pattern early on. Some content moderation companies tout their AI, but their products are essentially keyword filters or simple image recognition — and neither approach detects the signs early enough to stop predators from crossing the line from seemingly innocent to dangerous. By then, the child is in too deep: overwhelmed, ashamed, and suffering in secret.
To be clear: predators have an advantage in these situations because they are following a proven script, shared and refined in their own communities — whereas the child is just enjoying their online gaming experience in the illusion of safety. From the article:
After making contact, predators often build on the relationship by sending gifts or gaming currency, such as V-Bucks in Fortnite. Then they begin desensitizing children to sexual terms and imagery before asking them to send naked pictures and videos of their own.
When Kate started scrolling through her son’s Discord account, she saw how the sexualized chats had unfolded. The imagery becomes increasingly disturbing, moving from innocuous anime figures to pornographic illustrations, and finally to actual children being abused.
Improving online child safety through AI and context
Context-sensing AI is the only way to find predators early enough, and at scale, to stop them before their behavior escalates. Predators don’t tell children, “Hey, I’m a pedophile,” or immediately ask a child for sexually explicit photos upon engaging them online. They do, however, leave clues from the get-go — and Spectrum Labs’ Contextual AI can read and recognize those clues because it evaluates interactions over time, together with relevant metadata like chat timestamps, user profiles and activity, across multiple languages and idioms and multiple content types (text, images, video, voice).
Where some content moderation technology focuses on small, point-in-time interactions — like the dots of color on a pointillist painting — Spectrum Labs’ Contextual AI looks at the bigger picture so that patterns — and intent — are revealed. It’s scarily accurate, without using personally identifiable information (PII).
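To make the contrast concrete, here is a minimal, purely illustrative sketch of the idea in Python. It is not Spectrum Labs’ implementation — the signal names, weights, and threshold are all invented for the example — but it shows why scoring a user’s accumulated behavior over time can reveal a grooming pattern that no single keyword-matched message would:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical grooming-pattern signals and weights (invented for this sketch).
GROOMING_SIGNALS = {
    "gift_offer": 1.0,       # e.g. offering V-Bucks or other game currency
    "secrecy_request": 2.0,  # e.g. "don't tell your parents"
    "move_platform": 2.0,    # e.g. "add me on another app"
    "sexual_content": 4.0,
}

@dataclass
class UserContext:
    """Accumulates one user's interaction history with timestamps."""
    events: List[Tuple[float, str]] = field(default_factory=list)

    def record(self, timestamp: float, signal: str) -> None:
        self.events.append((timestamp, signal))

    def risk_score(self) -> float:
        # Score the whole history, not one message; a real system would also
        # weight recency, metadata, and cross-channel behavior.
        return sum(GROOMING_SIGNALS.get(s, 0.0) for _, s in self.events)

def should_escalate(ctx: UserContext, threshold: float = 5.0) -> bool:
    """Flag for human review once the cumulative pattern crosses a threshold."""
    return ctx.risk_score() >= threshold

ctx = UserContext()
ctx.record(1.0, "gift_offer")       # innocuous in isolation
print(should_escalate(ctx))         # False: one event is not a pattern
ctx.record(2.0, "secrecy_request")
ctx.record(3.0, "move_platform")
print(should_escalate(ctx))         # True: the sequence reveals intent
```

The key design point is that each event is harmless on its own — a keyword filter would pass every one of them — but the sequence, evaluated together, is what triggers escalation.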
While the article shares helpful resources for parents and guardians of gamers, it is missing one critical call to action:
Ask the companies that make the games or forums enjoyed by the children in your life, “What are you doing to protect children against this very real, and very serious problem?”
Technology companies created these games and forums so children could enjoy independence and gaming safely. It is now time — past time — to restore that safety. These online interactions have devastating real-life consequences, and whatever companies are doing now, it is clearly not enough.
For more information on Contextual AI and how it works, contact me at Justin@getspectrum.io.