03/23/2026
Three Tennessee teen girls filed a class action lawsuit against Elon Musk's xAI this week, alleging the company knowingly facilitated the production and distribution of child sexual abuse material through its Grok AI tool -- and profited from it.
"These are children whose school photographs and family pictures were turned into child sexual abuse material by a billion-dollar company's AI tool and then traded among predators," said Annika Martin, a partner at Lieff Cabraser, one of the firms representing the plaintiffs. "Elon Musk and xAI deliberately designed Grok to produce sexually explicit content for financial gain, with no regard for the children and adults who would be harmed by it."
Two of the three plaintiffs are under 18. All are withholding their names to protect what remains of their privacy.
Jane Doe 1 found out what had been done to her when she received an anonymous message on Instagram. It directed her to a Discord server where her high school yearbook photo had been altered to show her in full nudity and sexually explicit positions. At least 18 other girls from her school had been targeted the same way.
The images were being traded on Discord, Telegram, and the file-sharing platform Mega -- used as currency to obtain sexually explicit content of other minors. The perpetrator has since been arrested. Criminal investigators found hundreds of AI-generated sexual abuse images of children on his devices, all produced using xAI's technology.
All three plaintiffs' files have now been entered into a national database managed by the National Center for Missing & Exploited Children. For the rest of their lives, they will receive a notification every time their images are identified in a criminal case. Every notification, a reminder that the images are still out there. One plaintiff has recurring nightmares so severe she's needed accommodations at school. Another can't fall asleep without medication and dreads attending her own graduation.
The mother of Jane Doe 2 said: "Watching my daughter have a panic attack after realizing that these images were created and distributed without any hope of recalling them was heartbreaking. Her excitement about all that she would experience over her senior year -- her Spring formal, her graduation, and her senior trip -- now comes with the fear that anything she shares will be used and manipulated again."
All of this abuse was made possible when xAI released what it called "Spicy Mode" for Grok last year -- a feature that allowed users to generate sexually explicit images and videos, including by manipulating real photographs of real people.
Musk personally promoted it as a profit-making business strategy, comparing the decision to how VHS beat Betamax -- because, he said, VHS "allowed spicy mode." The system's own internal prompt was configured to "assume good intent" when users referenced "teenage" or "girl." Every other major AI company had implemented industry-standard safeguards to prevent exactly this kind of abuse. xAI refused.
In eleven days, from late December 2025 to early January 2026, researchers estimated Grok generated approximately three million sexualized images. Roughly 23,000 depicted children.
When the scale of the abuse became public, xAI did not disable the feature in response to the outrage -- it simply made "Spicy Mode" a premium benefit for paying subscribers. UK Prime Minister Keir Starmer's office called the move "insulting," saying it "simply turns an AI feature that allows the creation of unlawful images into a premium service." The European Commission said the restrictions didn't address its concerns regardless of subscription status. As of this month, Grok continues to allow the generation of photos and videos with nudity and sexualized content.
The guardrails xAI eventually claimed to have added were cosmetic. European non-profit AI Forensics analyzed 2,000 user conversations shortly after the announced restrictions and found the "overwhelming" majority still depicted nudity or sexual activity. Security researchers demonstrated they could generate explicit content of real people using basic prompt workarounds.
The Washington Post confirmed the standalone Grok app continued to digitally "undress" real people even after xAI said it had blocked the capability. An AI content analysis firm found nonconsensual sexualized images of real women still being generated at a rate of roughly one per minute.
In response to xAI's profiting from the sexual exploitation of minors, the federal government has done virtually nothing. No investigation by the DOJ. No enforcement action by the FTC. A Justice Department spokesperson said the agency would "aggressively prosecute" producers of AI-generated child sexual abuse material -- then indicated it was more inclined to go after individual users than the billion-dollar company that built the tool.
California's attorney general issued a cease-and-desist and opened a state investigation. A bipartisan coalition of 35 state attorneys general sent a demand letter. But at the federal level -- from the Trump administration with its deep financial and political ties to Musk -- silence.
In stark contrast, the international response has been extraordinary. In February, French police raided X's Paris offices in coordination with Europol as part of a criminal investigation into offenses including complicity in the distribution of child po*******hy and sexually explicit deepfakes. Prosecutors summoned Musk himself for questioning. The UK's Information Commissioner opened formal investigations into both X and xAI. The European Commission launched a probe under the Digital Services Act and ordered X to preserve all Grok-related internal documents until the end of 2026. Malaysia and Indonesia banned Grok outright.
X called the French raid "law enforcement theater." A French official responded: "Do you believe yourselves above French, European, and even American laws?"
None of this had to happen. The technology to prevent it exists and is standard across the industry. Every other major AI company -- Anthropic, Google, OpenAI -- builds safeguards into its systems that prohibit the generation of sexual content and block the manipulation of real people's images. Anthropic's Claude refuses such requests at an architectural level. xAI is the only major AI company that chose to build a tool capable of mass-producing child sexual abuse material -- and then marketed it as a feature.
The lawsuit brings 13 counts under Masha's Law, the Trafficking Victims Protection Act, and California state law. It seeks $150,000 per victim per violation in statutory damages, punitive damages, and an injunction barring Grok from producing such content. The class action represents not just the three named plaintiffs but all minors in the United States whose real images were altered by Grok -- a class the attorneys estimate consists of thousands of children.
Annika Martin of Lieff Cabraser summed up the devastation xAI has caused and the justice the plaintiffs intend to seek: "Without xAI, this harmful, illegal content could never, and would never, have existed. The lives of these girls have been shattered by the devastating loss of privacy and the deep sense of violation that no child should ever have to experience. We intend to hold xAI accountable for every child they harmed in this way."
--> To learn more about the class action lawsuit or to contact the legal team representing the plaintiffs, visit Lieff Cabraser at https://www.lieffcabraser.com/2026/03/lchb-files-class-action-obo-minor-victims-alleging-xais-grok-generated-and-profited-from-ai-sexual-exploitation-images-and-videos/
--> The DEFIANCE Act --- bipartisan legislation that passed the Senate unanimously in January --- would create a dedicated federal right for all victims of nonconsensual sexually explicit deepfakes, including adults, to sue. It has stalled in the House, where Republican leadership has yet to bring it to a vote. To ask your Representative to co-sponsor the bill and demand it be brought to the floor, call the Capitol switchboard at (202) 224-3121 and ask to speak to your Representative.
--> For what to do if you discover exploitative images of yourself or your child online, visit https://www.missingkids.org/gethelpnow/isyourexplicitcontentoutthere
---
For an excellent book for parents on helping kids learn to navigate the digital world, we highly recommend "Growing Up in Public: Coming of Age in a Digital World" at https://www.amightygirl.com/growing-up-in-public
For two valuable books to help tweens and teens develop healthy online habits, we recommend "First Phone: A Child's Guide to Digital Responsibility, Safety, and Etiquette" for ages 10 and up (https://bookshop.org/a/8011/9780593538333) and "The Social Media Workbook for Teens" for ages 13 and up (https://www.amightygirl.com/the-social-media-workbook-for-teens)
To teach children -- girls and boys alike -- about the need to respect others and their personal boundaries from a young age, we recommend "Let's Talk About Body Boundaries, Consent, and Respect" for ages 4 to 7 (https://www.amightygirl.com/body-boundaries) and "Consent (for Kids!)" for ages 6 to 10 (https://www.amightygirl.com/consent-for-kids)
If you know a teen girl struggling after sexual abuse or trauma, "The Sexual Trauma Workbook for Teen Girls: A Guide to Recovery from Sexual Assault and Abuse" may help at https://www.amightygirl.com/sexual-trauma-workbook-girls
---
To read more about the teens' lawsuit on the BBC, visit https://www.bbc.com/news/articles/cgk2lzmm22eo
There is also a detailed account in The Tennessean at https://www.tennessean.com/story/news/local/2026/03/17/elon-musk-grok-xai-lawsuit-tennessee-child-sexual-deepfakes/89193298007
To learn more about the DEFIANCE Act, which would help address the problem of explicit fake images but is stalled in the House, visit https://www.congress.gov/bill/119th-congress/senate-bill/1837/text