The numbers are staggering, but experts say what we're seeing is just the beginning. As AI-generated child sexual abuse material, or CSAM, proliferates, researchers warn that the technology is not only creating dangerous content but is changing how children are targeted, how survivors are revictimized, and how heavily investigators are burdened.
Analysts already had their hands full removing CSAM from the internet, and the arrival of AI has compounded the challenge. The Internet Watch Foundation (IWF), Europe's largest hotline dedicated to finding and removing child sexual abuse imagery online, documented a 260-fold increase in AI-generated child sexual abuse videos in 2025, from just 13 videos a year earlier to 3,443. Researchers who have followed the issue for years say the explosion is no surprise. It is, however, a warning.
"Any number we see, it's the tip of the iceberg," said Melissa Stroebel, vice president of research and insights at Thorn, a nonprofit that develops technology to combat online child sexual abuse. "That's only what has been found or reported."
The surge is a direct result of AI tools becoming faster, cheaper, and more accessible to bad actors. Thorn has identified three specific ways these tools are being used against children.
The first is the revictimization of survivors of past abuse. A child who was abused in 2010, and whose images have circulated online for more than a decade, now faces a new threat. Criminals are using AI to take existing images and manipulate them, feeding recorded abuse imagery into models to generate new material.
"The same way you can Photoshop the grandmother who missed Christmas into the Christmas picture," Stroebel said, "bad actors can create new images depicting a chosen child." The process inflicts fresh abuse on survivors who may have spent years trying to recover.
The second is the weaponization of innocent images. A child's photo on a school football team's website can now become source material for abuse. With AI tools widely available, a criminal can turn that innocuous picture into sexual exploitation material in minutes. Thorn also reports peer-on-peer cases, in which a young person generates abusive images of a classmate without fully understanding the harm they are causing.
The third impact, and a systemic one, is the pressure placed on already overburdened reporting pipelines. The National Center for Missing and Exploited Children receives tens of millions of CSAM reports each year, and the speed at which AI can produce new material compounds that burden. When a new image arrives, investigators must determine whether it shows a child in danger right now or whether it was generated by AI.
"Those are critical pieces of information as they try to triage and respond to these cases," Stroebel said. AI-generated material makes those determinations far harder, though she added that an image of real abuse and an AI-generated image are reported and handled in the same way by authorities.
The technology has also made the most-repeated child safety guidance dangerously outdated. For years, children have been warned not to share images online as a primary defense against abuse. That advice no longer holds. Thorn's own research found that one in 17 young people has been the target of deepfake nude imagery, and one in eight knows someone who has. Victims of sextortion are now being sent pictures that look exactly like them, pictures they never took.
"A child no longer needs to have shared an image at all to be victimized," Stroebel said.
On the detection side, traditional hashing technology, which acts as a digital fingerprint for known abuse files, cannot identify AI-generated content because every generated image is new. Take, for example, a photo of a well-known object, such as the Statue of Liberty. That image has a digital fingerprint. Now suppose you edit it just enough to shift the shade of a single pixel by 0.1 percent. The change may be invisible to the human eye, yet the photo's fingerprint is entirely new, and the hashing technology no longer recognizes it as the same picture.
Under traditional hashing alone, a single-pixel change to a known CSAM image could make it undetectable. Classifier technology, which examines what an image actually contains rather than comparing it to a known file, has therefore become essential for catching material that would otherwise slip through entirely.
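To illustrate why exact hash matching fails on near-duplicates, here is a minimal Python sketch, an illustration only and not any detection system's actual code, that hashes a raw pixel buffer before and after a one-byte change. The two fingerprints come out completely different.

```python
import hashlib

# A stand-in for an image: a raw buffer of pixel bytes (64x64 RGB, mid-gray).
original = bytearray(b"\x80" * (64 * 64 * 3))

# Copy the image and nudge a single byte, the smallest possible edit.
edited = bytearray(original)
edited[0] += 1  # one channel of one pixel changes imperceptibly

print(hashlib.sha256(bytes(original)).hexdigest())
print(hashlib.sha256(bytes(edited)).hexdigest())
# The two digests share nothing in common, so a blocklist of exact hashes
# built from the first image will never match the second. Perceptual hashing
# and classifiers exist precisely to bridge this gap.
```

The same logic explains why brand-new AI-generated images evade hash lists entirely: there is no prior fingerprint to match against in the first place.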
For parents, Stroebel's message is urgent and unequivocal. The conversation cannot stop, and it must go beyond the old warnings. If a child comes forward, the first response should not be doubt: "Our job is, 'Are you safe, and how can I help you take the next step?'"