Since dark web browsers let users be anonymous or untraceable, child safety groups have few means of requesting images be removed or reporting users to law enforcement. Composite: The Guardian/Getty Images

Child predators are using AI to create sexual images of their favorite ‘stars’: ‘My body will never be mine again’

Safety groups say they’re increasingly finding chats about creating images based on past child sexual abuse material

Predators active on the dark web are increasingly using artificial intelligence to create sexually explicit images of children, fixating especially on “star” victims, child safety experts warn.

Child safety groups tracking the activity of predators chatting in dark web forums say they are increasingly finding conversations about creating new images based on older child sexual abuse material (CSAM). Many of these predators using AI obsess over child victims referred to as “stars” in predator communities for the popularity of their images.

“The communities of people who trade this material get infatuated with individual children,” said Sarah Gardner, chief executive officer of the Heat Initiative, a Los Angeles non-profit focused on child protection. “They want more content of those children, which AI has now allowed them to do.”

These abuse survivors may now be adults, but AI has heightened the prospect that more people will view sexual content depicting them as children, according to experts and survivors interviewed. They fear that images of them circulating on the internet or in their communities could threaten the lives and careers they have built since their abuse ended.

Megan, a survivor of CSAM, whose last name is being withheld because of past violent threats, says that the potential for AI to be used to manipulate her images has become an increasingly stressful prospect over the past 12 months, though her own abuse occurred a decade ago.

“AI gives perpetrators the chance to create even more situations of my abuse to feed their own fantasies and their own versions,” she said. “The way my images could be manipulated with AI could give the false impression it was not harmful or that I was enjoying the abuse.”

Since dark web browsers enable users to be anonymous or untraceable, child safety groups have few means of requesting these images be removed or reporting the users to law enforcement.

Advocates have called for legislation that goes beyond criminalization to prevent the production of CSAM, by AI and otherwise. They are pessimistic, though, that bans on the creation of new sexualized images of children can be enforced now that the AI enabling it has become open source and private. Encrypted messaging services, now often default options, allow predators to communicate undetected, advocates say.

Creating new CSAM and reviving old CSAM with AI

The Guardian has viewed several excerpts of these dark web chat room conversations, with the names of victims redacted for safeguarding. The discussions take an amiable tone, and forum members are encouraged to create new images with AI to share in the groups. Many said they were thrilled at the prospect of new material made with AI, while others were uninterested because the images do not depict real abuse.

One message from November 2023 reads: “Could you get the AI to recreate the beautiful images of former CP [child porn] stars [redacted victim name] and [redacted victim name] and get them in some scenes – like [redacted victim name] in a traditional catholic schoolgirl’s uniform at Elementary School, and [redacted victim name] in a cheerleader’s outfit at Junior High?”

In another chat room conversation, predators also discussed using AI to digitally remaster decades-old popular child exploitation material of low quality.

“Wow you are awesome,” one predator wrote to another in January. “I appreciate your effort keep going upscaling classical vids.”

While predators have used photo editing software in the past, new advancements in AI models present easy-access opportunities to create more realistic abuse images of children.

Much of this activity focuses on so-called “stars”.

“In the same way there are celebrities in Hollywood, in these online communities on the dark web, there’s a celebrity-like ranking of some of the favourite victims,” said Jacques Marcoux, director of research and analytics at the Canadian Centre for Child Protection. “These offender groups know them all, and they catalogue them.”

“Offenders eventually exhaust all the material of a specific victim,” said Marcoux. “So they can take an image of a victim that they like, and they can make that victim do different poses or do different things. They can nudge it with an AI model to do different poses on a bed or be in different stages of undress.”

Data bears out predators’ preoccupation with “stars”. In a 2020 assessment submitted to the National Center for Missing and Exploited Children, Meta reported that just six videos accounted for half of all the child sexual abuse material shared and re-shared on Facebook and Instagram. Roughly 90% of the abusive material Meta tracked over a two-month period was identical to previously reported content.

Real Hollywood celebrities are also potential targets for victimization with AI-generated CSAM. The Guardian reviewed chatroom threads on the dark web discussing desires for predators who are proficient in AI to create child abuse images of celebrities, including teen idols from the 1990s who are now adults.

How child sexual abuse material made by AI spreads

Predators’ use of AI became prevalent at the end of 2022, child safety experts said. The same year that OpenAI debuted ChatGPT, the LAION-5B database, an open-source catalogue of more than 5bn images that anyone can use to train AI models, was launched by an eponymous non-profit.

A Stanford University report released in December 2023 revealed that hundreds of known images of child sexual abuse had been included in LAION-5B and are now being used to train popular AI image generation models to generate CSAM. Though the images were a minor fraction of the whole database, they carry an outsize risk, experts said.

“As soon as these things were open sourced, that’s when the production of AI generative CSAM exploded,” said Dan Sexton, chief technology officer at the Internet Watch Foundation, a UK-based non-profit that focuses on preventing online child abuse.

The knowledge that real abuse images are used to train AI models has resulted in additional trauma for some survivors.

“Non-consensual images of me from when I was 14 years old can be resurrected to create new child sexual abuse images, and videos of victims around the world,” said Leah Juliett, 27, a survivor of child sexual abuse material and activist. “To know my photos can still be weaponized without my consent to harm other young children, it’s a pain and a feeling of helplessness and injustice.”

“My body will never be mine again, and that’s something that many survivors have to grapple with,” they added.

Experts say they’ve seen a shift towards predators using encrypted private messaging services such as WhatsApp, Signal and Telegram to spread and access CSAM. A great deal of CSAM is still shared outside of mainstream channels on the dark web, though. In an October 2023 report, the Internet Watch Foundation (IWF) said it had found more than 20,000 AI-generated sexual images of children posted on a single dark web forum in a one-month period in September.

“Images show the rape of babies and toddlers; famous pre-teen children being sexually abused; BDSM (bondage and discipline, dominance and submission, and sadomasochism); content featuring tweens and teenagers, and more,” the report states.

Over the last year, AI image generators have improved across the board, and their output has become increasingly realistic. Child safety experts said AI-generated still images are often indistinguishable from real-life photos.

“We’re seeing discussions happen where [offenders] are discussing how to fix problems, such as signs the image is fake like extra fingers. They’re coming up with solutions. The realism is getting better,” said Sexton. “There is a demand to create more images of existing victims using fine-tune models.”

What effect will AI-generated CSAM have?

Experts say the impact of AI-generated CSAM is only starting to come into focus. In certain circumstances, viewing CSAM online can cause a predator’s behavior to escalate to committing contact offences with children, and it remains to be seen how AI plays into that dynamic.

“There are examples of men that I’ve worked with where their online behavior reinforced a sexual interest in children and led to a greater preoccupation of that sort of behavior,” said Tom Squire, head of clinical engagement at the Lucy Faithfull Foundation in the UK, a non-profit focused on preventing child sexual abuse. The organization operates an anonymous helpline for anyone with a concern about child sexual abuse, including their own thoughts or behaviors.

“They joined a group online where there was a currency to the sharing of images, and they wanted to contribute to that, then directly on from there they’ve gone on to sexually abuse children, and perhaps take images of that abuse and share it online,” said Squire.

Some predators mistakenly believe that viewing AI-generated CSAM may be more ethical than “real life” material, experts said.

“One of our concerns is the capacity for them to justify their behavior because these are somehow images of a victimless crime that doesn’t involve real-world harm,” said Squire. “Some of the people who call us make an argument to minimize the gravity of what they’re doing.”

What can be done to curb AI-generated sexualized images of children?

In many countries, including the US and UK, decades-old laws already criminalize any CSAM created using AI via prohibitions on any indecent or obscene visual depictions of children. Pornographic depictions of Taylor Swift made by AI and circulated early this year prompted the introduction of legislation in the US that would regulate such deepfakes.

In April, a 51-year-old US man was arrested in Florida on allegations he created CSAM using AI with the face of a child he’d taken pictures of in his neighborhood. On May 20, the US Department of Justice announced the arrest of a 42-year-old man in Wisconsin on criminal charges related to his alleged production, distribution and possession of more than 10,000 AI-generated images of minors engaged in sexually explicit conduct.

“We need legislative reform to ensure that abuse has no place to fester,” said Juliett. “But we also need cultural reform to stop abuse like this from happening in the first place.”

Child safety and tech experts interviewed were pessimistic about whether the production and distribution of AI-generated CSAM can be prevented. They highlighted that much of the production goes undetected by the authorities.

“Once it became open source, it was problematic,” said Michael Tunks, head of policy and public affairs at the Internet Watch Foundation. “Anybody can use text to image-based AI-generated tools to create any AI imagery they want.”

AI software can be downloaded and run locally, which means these abusive and illegal activities can take place entirely offline.

“This means offenders can do it in the privacy of their own home, within the walls of their own network, therefore they’re not susceptible to getting caught doing this,” said Marcoux.
