Did Grok break the law?
Reports that it generated nudes of real people raise questions about the safety of AI
Grok, the AI system integrated into X, has reportedly been used to turn real pictures of people — including minors — into nude or sexualized imagery.
People are understandably outraged. This episode shows how a person armed with little more than a photo and a prompt can use AI tools to violate another person's dignity and sense of security. The fact that the tool was used to target real people, especially children, without their knowledge or consent is particularly disturbing to many.
Some have responded by calling for new laws. That instinct is understandable. But many proposals would raise serious First Amendment concerns, and before trying to scratch the “do something” itch with new legislation, it’s important to first ask: does existing law already prohibit this?
In many cases, the answer is yes.
Federal criminal law prohibits knowingly making or sharing child sexual abuse material involving actual children, whether it is created by a camera or with the assistance of AI. Likewise, AI-generated material that meets the high bar for obscenity, and is publicly created or distributed, is not protected speech. Users who knowingly prompt an AI system to create such content, or who share it, can already face criminal prosecution. Liability shields don’t protect anyone from federal criminal prosecution. AI operators that knowingly provide substantial assistance to those creating this unlawful content may face legal exposure as well.
Existing law also provides other avenues to hold people accountable through private lawsuits. Civil claims for harms like intentional infliction of emotional distress, invasion of privacy, defamation, and misappropriation of likeness may be available to people depicted in the images created by Grok, provided the elements of those torts, and any constitutional protections built into them, are satisfied. These claims allow victims to recover monetary damages from users who make, share, or sell such content and, in limited cases, from developers.
At the same time, it’s important to be clear about the limits of the law. The law will never be able to fully prevent bad actors from doing bad things. And the Constitution limits how far the government can go in trying. Nudity and sexual content involving adults are generally protected by the First Amendment unless they fall into a narrow category of unprotected speech. Use of AI does not change that constitutional analysis. This means a great deal of offensive or distasteful expression remains protected speech, even when it disturbs us or makes us uncomfortable.
This matters. If every technological failure becomes an excuse to expand government authority over speech, the predictable outcome is overreach that chills expression and silences voices.
Public pressure, reputational risk, and the possibility of lawsuits are powerful incentives for xAI, the parent company of both Grok and X, to improve safeguards, redesign systems, and limit misuse. That is the preferred path. Editorial and design decisions made by private companies are far less dangerous than granting the government broad power to regulate speech and assume control over platforms protected by the First Amendment.
Using Grok’s failures as a justification for sweeping new AI speech regulations would be a mistake. Existing laws already target real harms and real actors. Broad new rules risk overreach, chilling lawful expression and empowering the state in ways that are difficult to unwind.
The right response here starts with enforcing the laws we already have and resisting the temptation to trade constitutional principles for the illusion of control.

Really solid legal analysis here. The existing civil remedies angle is underrated in these debates; people forget that defamation and privacy torts still apply. I dealt with a similar issue in a tech case where everyone wanted new regs when we already had the tools for accountability.
When photography first started, it wasn't long before "dirty postcards" started showing up in France, and the advent of moving pictures quickly spawned the "stag film" industry. Early VCRs catered to pornography. Taking a new technology and using it for sexual material, and most importantly profit, has always been the case. It was inevitable that AI would open up another Pandora's box, and social media would make the spread far faster than anything experienced in the past. I think we're just at the beginning of this, and we would be wise to have serious discussions sooner rather than later. The coming of "the feelies" with someone you actually know is not far away.