A Tipping Point in Online Child Abuse

Thousands of abusive videos were produced last year—that researchers know of.
[Photo illustration of a child running from shadowy hands. Illustration by The Atlantic; source: fcscafeine / Getty.]
In 2025, new data show, the volume of child-sexual-abuse material online was likely larger than at any other point in history. A record 312,030 reports of confirmed child-sexual-abuse imagery were investigated last year by the Internet Watch Foundation (IWF), a U.K.-based organization that works around the globe to identify and remove such material from the web.

This is concerning in and of itself. It means that the overall volume of such material detected on the internet grew by 7 percent from 2024, when the previous record was set. But also alarming is the tremendous increase in abusive content, and in particular videos, generated by AI. At first blush, the proliferation of AI-generated depictions of child sexual abuse may create the misimpression that no children were harmed. This is not the case. AI-generated abusive images and videos feature and victimize real children, either because models were trained on existing abuse imagery or because AI was used to manipulate real photos and videos.

Today, the IWF reported that it found 3,440 AI-generated videos of child sex abuse in 2025; the year before, it found just 13. Social media, encrypted messaging, and dark-web forums have been fueling a steady rise in child-sexual-abuse material for years, and now generative AI has dramatically exacerbated the problem. Another awful record will very likely be set in 2026.

Of the thousands of AI-generated videos of child sex abuse the IWF discovered in 2025, nearly two-thirds were classified as “Category A”—the most severe category, which includes penetration, sexual torture, and bestiality. Another 30 percent were Category B, which covers nonpenetrative sexual acts. With this relatively new technology, “criminals essentially can have their own child sexual abuse machines to make whatever they want to see,” Kerry Smith, the IWF’s chief executive, said in a statement.

The volume of AI-generated images of child sex abuse has been rising since at least 2023. For instance, the IWF found that over just a one-month span in early 2024, on a single dark-web forum, users uploaded more than 3,000 AI-generated images of child sex abuse. In early 2025, the digital-safety nonprofit Thorn reported that among a sample of 700-plus U.S. teenagers it surveyed, 12 percent knew someone who had been victimized by “deepfake nudes.” The proliferation of AI-generated videos depicting child sex abuse lagged behind that of images because AI video-generating tools were far less photorealistic than image generators. “When AI videos were not lifelike or sophisticated, offenders were not bothering to make them in any numbers,” Josh Thomas, an IWF spokesperson, told me. That has changed.

Last year, OpenAI released the Sora 2 model, Google released Veo 3, and xAI put out Grok Imagine. Meanwhile, other organizations have produced many highly advanced, open-source AI video-generating models. These open-source tools are generally free for anyone to use and have far fewer, if any, safeguards. There are almost certainly AI-generated videos and images of child sex abuse that authorities will never detect, because they are created and stored on personal computers; instead of having to find and download such material online, potentially exposing oneself to law enforcement, abusers can operate in secrecy.

OpenAI, Google, Anthropic, and several other top AI labs have joined an initiative to prevent AI-enabled child sex abuse, and all of the major labs say they have measures in place to stop the use of their tools for such purposes. Still, safeguards can be broken. In the first half of 2025, OpenAI reported more than 75,000 depictions of child sex abuse or child endangerment on its platforms to the National Center for Missing & Exploited Children, more than double the number of reports from the second half of 2024. A spokesperson for OpenAI told me that the firm designs its products to prohibit creating or distributing “content that exploits or harms children” and takes “action when violations occur.” The company reports all instances of child sex abuse to NCMEC and bans associated accounts. (OpenAI has a corporate partnership with The Atlantic.)

The advancement and ease of use of AI video generators, in other words, offer an entry point for abuse. This dynamic became clear in recent weeks, as people used Grok, Elon Musk’s AI model, to generate what were likely hundreds of thousands of nonconsensual sexualized images, primarily of women and children, in public on his social-media platform, X. (Musk insisted that he was “not aware of any naked underage images generated by Grok” and blamed users for making illegal requests; meanwhile, his employees quietly rolled back aspects of the tool.) While scouring the dark web, the IWF found that, in some cases, people had apparently used Grok to create abusive depictions of 11-to-13-year-old children, which were then fed into more permissive tools to generate even darker, more explicit content. “Easy availability of this material will only embolden those with a sexual interest in children” and “fuel its commercialisation,” Smith said in the IWF’s press release. (Yesterday, the X safety team said it had restricted the ability to generate images of users in revealing clothing and that it works with law enforcement “as necessary.”)

There are signs that the crisis of AI-generated child sex abuse will worsen. While more and more nations, including the United Kingdom and the United States, are passing laws that make generating and publishing such material illegal, actually prosecuting criminals is slow. Silicon Valley, meanwhile, continues to move at a breakneck pace.

Any number of new digital technologies have been used to harass and exploit people; the age of AI sex abuse was predictable a decade ago, yet it has begun nonetheless. AI executives, engineers, and pundits are fond of saying that today’s AI models are the least capable they will ever be. By the same token, the AI-enabled abuse of children may only get worse from here.