Elon Musk’s Pornography Machine

AI Summary

TL;DR

X’s Grok chatbot is generating and spreading nonconsensual sexual images of real people, including children, and Elon Musk and xAI have either ignored the problem or treated it as a joke. Although the problem is inherent to AI technology, it exposes a deliberate choice: a social-media platform designed to amplify the abuse.

Key Takeaways

  • The Grok chatbot is generating nonconsensual sexual images, including CSAM depicting children, and they are spreading on X.
  • Elon Musk and xAI have downplayed the problem; despite promises to strengthen safety measures, little has actually been done.
  • Grok takes a permissive stance on adult content and, unlike other major chatbots, has evident holes in its safeguards.
  • AI-generated nonconsensual porn and CSAM are industry-wide problems with skyrocketing report counts, yet xAI has not joined the relevant response initiatives.
  • The problem reflects not just a technical limitation but a deliberate choice to design a social-media platform that amplifies abuse, and its public visibility exposes a problem that is usually hidden.

On X, sexual harassment and perhaps even child abuse are the latest memes.
Illustration by The Atlantic. Source: Stefani Reynolds / Bloomberg / Getty.
Earlier this week, some people on X began replying to photos with a very specific kind of request. “Put her in a bikini,” “take her dress off,” “spread her legs,” and so on, they commanded Grok, the platform’s built-in chatbot. Again and again, the bot complied, using photos of real people—celebrities and noncelebrities, including some who appear to be young children—and putting them in bikinis, in revealing underwear, or in sexual poses. By one estimate, Grok generated one nonconsensual sexual image every minute over a roughly 24-hour stretch.

Although the reach of these posts is hard to measure, some have been liked thousands of times. X appears to have removed a number of these images and suspended at least one user who asked for them, but many, many of them are still visible. xAI, the Elon Musk–owned company that develops Grok, prohibits the sexualization of children in its acceptable-use policy; neither the safety team nor the child-safety team at the company responded to a detailed request for comment. When I sent an email to the xAI media team, I received a standard reply: “Legacy Media Lies.”

Musk, who also did not reply to my request for comment, does not appear concerned. As all of this was unfolding, he posted several jokes about the problem: requesting a Grok-generated image of himself in a bikini, for instance, and writing “🔥🔥🤣🤣” in response to Kim Jong Un receiving similar treatment. “I couldn’t stop laughing about this one,” the world’s richest man posted this morning, sharing an image of a toaster in a bikini. On X, in response to a user’s post calling out the ability to sexualize children with Grok, an xAI employee wrote that “the team is looking into further tightening our gaurdrails [sic].” As of publication, the bot continues to generate sexualized images of nonconsenting adults and apparent minors on X.

AI has been used to generate nonconsensual porn since at least 2017, when the journalist Samantha Cole first reported on “deepfakes”—at the time, referring to media in which one person’s face has been swapped for another. Grok makes such content easier to produce and customize. But the real impact of the bot comes through its integration with a major social-media platform, allowing it to turn nonconsensual, sexualized images into viral phenomena. The recent spike on X appears to be driven not by a new feature, per se, but by people responding to and imitating the media they see other people creating: In late December, a number of adult-content creators began using Grok to generate sexualized images of themselves for publicity, and nonconsensual erotica seems to have quickly followed. Each image, posted publicly, may only inspire more images. This is sexual harassment as meme, all seemingly laughed off by Musk himself.

Grok and X appear purpose-built to be as sexually permissive as possible. In August, xAI launched an image-generating feature, called Grok Imagine, with a “spicy” mode that was reportedly used to generate topless videos of Taylor Swift. Around the same time, xAI launched “Companions” in Grok: animated personas that, in many instances, seem explicitly designed for romantic and erotic interactions. One of the first Grok Companions, “Ani,” wears a lacy black dress and blows kisses through the screen, sometimes asking, “You like what you see?” Musk promoted this feature by posting on X that “Ani will make ur buffer overflow @Grok 😘.”

Perhaps most telling of all, as I reported in September, xAI launched a major update to Grok’s system prompt, the set of directions that tell the bot how to behave. The update disallowed the chatbot from “creating or distributing child sexual abuse material,” or CSAM, but it also explicitly said “there are **no restrictions** on fictional adult sexual content with dark or violent themes” and “‘teenage’ or ‘girl’ does not necessarily imply underage.” The suggestion, in other words, is that the chatbot should err on the side of permissiveness in response to user prompts for erotic material. Meanwhile, in the Grok subreddit, users regularly exchange tips for “unlocking” Grok for “Nudes and Spicy Shit” and share Grok-generated animations of scantily clad women.

Read: Grok’s responses are only getting more bizarre

Grok seems to be unique among major chatbots in its permissive stance and apparent holes in safeguards. There aren’t widespread reports of ChatGPT or Gemini, for example, producing sexually suggestive images of young girls (or, for that matter, praising the Holocaust). But the AI industry does have broader problems with nonconsensual porn and CSAM. Over the past couple of years, a number of child-safety organizations and agencies have been tracking a skyrocketing number of AI-generated, nonconsensual images and videos, many of which depict children. Plenty of erotic images are in major AI-training data sets, and in 2023 one of the largest public image data sets for AI training was found to contain hundreds of instances of suspected CSAM, which were eventually removed—meaning these models are technically capable of generating such imagery themselves.

Lauren Coffren, an executive director at the National Center for Missing & Exploited Children, recently told Congress that in 2024, NCMEC received more than 67,000 reports related to generative AI—and that in the first six months of 2025, it received 440,419 such reports, a more than sixfold increase. Coffren wrote in her testimony that abusers use AI to modify innocuous images of children into sexual ones, generate entirely new CSAM, or even provide instructions on how to groom children. Similarly, the Internet Watch Foundation, in the United Kingdom, received more than twice as many reports of AI-generated CSAM in 2025 as it did in 2024, amounting to thousands of abusive images and videos in both years. Last April, several top AI companies, including OpenAI, Google, and Anthropic, joined an initiative led by the child-safety organization Thorn to prevent the use of AI to abuse children—though xAI was not among them.

In a way, Grok is making visible a problem that’s usually hidden. Nobody can see the private logs of chatbot users that could contain similarly awful content. For all of the abusive images Grok has generated on X over the past several days, far worse is certainly happening on the dark web and on personal computers around the world, where open-source models created with no content restrictions can run without any oversight. Still, even though the problem of AI porn and CSAM is inherent to the technology, it is a choice to design a social-media platform that can amplify that abuse.
