Mother of Elon Musk’s child sues his AI company over Grok deepfake images


Ashley St Clair, mother of Musk’s son, Romulus, sues over ‘pain and mental distress’ caused by fake AI images generated by Grok chatbot.

Published On 17 Jan 2026

The mother of one of Elon Musk’s children is suing his artificial intelligence company, saying its Grok chatbot allowed users to generate sexually exploitative deepfake images of her that have caused her humiliation and emotional distress.

The suit was filed just before California Attorney General Rob Bonta sent a cease-and-desist letter to Musk’s xAI company demanding that it halt the creation and distribution of Grok-generated nonconsensual sexualised imagery.


“The avalanche of reports detailing this material – at times depicting women and children engaged in sexual activity – is shocking and, as my office has determined, potentially illegal,” Bonta said on Friday.

Ashley St Clair, a writer and political commentator, alleges in a lawsuit filed on Thursday in New York City against xAI that she was the victim of sexualised deepfake images generated by Grok.

St Clair, who is the mother of Musk’s 16-month-old son, Romulus, said she reported the images to Musk’s X social media platform, which hosts Grok, after they began appearing last year and asked that they be removed.

The platform replied that the images did not violate its policies, she said. It then promised not to allow images of her to be used or altered without her consent. Later, the social platform retaliated against her by removing her premium X subscription and verification checkmark, and continued to allow degrading fake images of her, she said.

“I have suffered and continue to suffer serious pain and mental distress as a result of xAI’s role in creating and distributing these digitally altered images of me,” St Clair said in a document attached to the lawsuit.

“I am humiliated and feel like this nightmare will never stop so long as Grok continues to generate these images of me,” she said.

‘A public nuisance’

On Thursday, lawyers for xAI countersued St Clair in federal court in the Northern District of Texas, alleging she violated the terms of her xAI user agreement, which requires lawsuits against the company to be filed in federal court in Texas. It is seeking an undisclosed monetary judgement against her.

Carrie Goldberg, a lawyer for St Clair, called the countersuit a “jolting” move that she had never seen from a defendant before.

“Ms St Clair will be vigorously defending her forum in New York,” Goldberg said in a statement.

“But frankly, any jurisdiction will recognise the gravamen of Ms St Clair’s claims – that by manufacturing nonconsensual sexually explicit images of girls and women, xAI is a public nuisance and a not reasonably safe product.”

In an interview with US media earlier this week, St Clair said her fight with Grok was “not just about me”.

“It’s about building systems, AI systems which can produce, at scale, and abuse women and children without repercussions. And there’s really no consequences for what’s happening right now,” she told CNN.

“They are saying ‘we are going to make it illegal, where it’s illegal’. That is absent [of] all morality and, guess what, if you have to add safety after harm, that is not safety at all. That is just damage control,” she said.

Musk’s Grok is already under scrutiny and facing an international backlash over the creation of explicit deepfake images in the United Kingdom, the European Union, and other jurisdictions, including India, Indonesia, Malaysia, the Philippines, and Japan.

Japanese authorities said on Friday they too were investigating X over Grok, saying all options were under consideration to prevent the generation of inappropriate images.
