Teens Sue Elon Musk’s Grok for Turning Their Photos into Pornographic Images


Three teens have filed a class action lawsuit claiming that Elon Musk’s AI chatbot Grok was used to create nonconsensual sexually explicit images and videos of them when they were minors.

The lawsuit, filed Monday in federal court in California, alleges that xAI’s tools were used to alter photos of three Tennessee-based teenagers in which they were clothed, turning them into nude and sexualized images. According to a report by The Washington Post, the edited images spread on platforms such as Discord and Telegram, and some were traded for other child sexual abuse material (CSAM).

The legal action focuses on Grok’s controversial “spicy mode,” released last year, which allowed users to digitally remove a woman’s clothing from photographs without her consent. Lawyers for the plaintiffs say the feature was created to increase usage of Grok and Musk’s social media platform X.

“Like a rag doll brought to life through the dark arts, this [AI-generated] child can be manipulated into any pose, however sick, however fetishized, however unlawful. To the viewer, the resulting video appears entirely real,” the complaint reads. “For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse.”

The complaint also accuses xAI and Musk of knowing that Grok could produce such results, including by using images of children, yet releasing it publicly anyway. “xAI—and its founder Elon Musk—saw a business opportunity,” the complaint states.

The plaintiffs are seeking unspecified damages and an immediate court order barring Grok from creating such images, as well as preventing xAI from generating and distributing AI-generated CSAM.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety,” lawyers write.

Two of the teens are under 18, and all three are withholding their names to protect their privacy. One plaintiff says she discovered the images after receiving an anonymous Instagram message linking to altered photos and videos, including her high school yearbook picture, showing her nude and in sexually explicit acts. The material was shared on a private Discord server alongside similar AI-generated images of at least 18 other minor girls. The other two plaintiffs also found fake sexualized imagery of themselves online created through Grok. The Washington Post reports that xAI has not responded to requests for comment through its parent company.

Grok, developed by xAI and hosted on Musk’s platform X, was launched in 2023. Last year, xAI released Grok Imagine, more commonly known as spicy mode, which allowed users to generate sexualized images. Based on sampling, a study by the Center for Countering Digital Hate estimated that Grok generated three million sexualized images, including around 23,000 involving children. The study found that the tool produced roughly 190 sexualized images per minute over an 11-day period after the introduction of a one-click editing feature.


Image credits: Header photo licensed via Depositphotos.
