Ashley St. Clair, a conservative influencer and the mother of one of Elon Musk’s children, has filed a lawsuit against Musk’s artificial intelligence company, xAI. St. Clair alleges that xAI’s chatbot Grok, without her consent, altered photographs of her to create nude and sexually explicit images. St. Clair argues that xAI failed to comply with multiple requests to stop creating images of her and to take down existing ones.
The company responded with its own suit, claiming that when St. Clair filed her lawsuit in New York, she violated xAI’s terms of service, which dictate that lawsuits must be filed in federal court in Texas.
In her suit, St. Clair uses 12th- and 20th-century legal doctrines to fight a 2026 legal problem. But we should not force old doctrines to the breaking point in an effort to address new harms. Our legal architecture is aging, and the cracks are beginning to show. It is time for lawmakers to try to draw lines between acceptable and unacceptable behavior in our new AI-laden reality.
St. Clair alleges that Grok is unreasonably dangerous and constitutes a public nuisance. Her suit’s first argument, that Grok is unreasonably dangerous, rests on a products liability theory. Essentially, St. Clair is saying that Grok is the 2026 version of a defective mid-1950s power tool, a malfunctioning mid-1970s high-lift loader on a construction site, an exploding Ford Pinto from the 1970s or a prescription drug that caused some patients to develop gangrene.
These analogies are appealing at first, but under the stress test of a lawsuit, cracks will show. AI is not a static product like a car or a pill. AI systems are malleable and ever-changing, shaped by user prompts and probabilistic by nature. AI is less a product that malfunctions than a technology that operates as a platform, a process and a generator of content. It responds, evolves and creates. It blurs lines; it is simultaneously a speaker, a publisher and a tool.
Simply put, St. Clair’s claim that Grok is unreasonably dangerous doesn’t map neatly onto the products liability framework.
St. Clair’s suit also argues that Grok constitutes a public nuisance. Here, St. Clair is saying that Grok is akin to something that causes harm to the public at large, such as air pollution or lead paint.
But here again, a legal claim about the harms of AI (in this case, Grok) doesn’t fit comfortably into the public nuisance framework.
Public nuisances cause widespread, ongoing harm. One can certainly make the case that Grok’s output pollutes social media sites like X and leads to systemic harm. Information pollution is real. But public nuisance law was not designed to punish technology that both depends on and creates expression. Stretching public nuisance law to guard against harms created by AI risks turning a historically narrow doctrine into a blunt regulatory instrument.
AI amounts to a shock not just to the economy and the social fabric of our society but also to the legal system. It’s a new tool that promises enormous benefits as well as tremendous burdens. Lawmakers, judges and advocates are all struggling to keep up with the explosion of its capabilities. Where technology sprints, the law often crawls.