The NO FAKES Act is a real threat to free expression
“NO FAKES” bills promise deepfake fixes, but their restrictions on expression would chill news, history, art, and everyday speech.
Imagine a fourth-grade classroom in which the teacher uses AI to generate a video of Ronald Reagan explaining his Cold War strategy. It’s history in living color, and the students lean in, captivated. Now imagine that same teacher facing thousands of dollars in damages under the proposed NO FAKES Act because the video looks too real.
That’s not sci-fi. It’s a risk baked into this bill. The NO FAKES Act, introduced this year in both the House and Senate, would create a new federal “digital replication right” letting people control the use of AI-generated versions of their voice or likeness. That means people can block others from sharing realistic, digitally created images of them. The right can extend for up to 70 years after the person’s death and is transferred to heirs. It also lets people sue those who share unauthorized “digital replicas,” as well as the companies that make such works possible.
A “digital replica” is defined as a newly created, highly realistic representation “readily identifiable” as a person’s voice or likeness. That includes fully virtual recreations and real images or recordings that are materially altered.
The bill bans unauthorized public use or distribution of “digital replicas.” But almost all of the covered “replicas” are fully protected by the First Amendment, meaning Congress cannot legislate their suppression.
The bill does list exceptions for “bona fide” news, documentaries, historical works, biographical works, commentary, scholarship, satire, or parody. But there’s a catch. News is exempt only if the replica is the subject of, or materially relevant to, the story. At best, this means any story relating to, say, political deepfakes must be reviewed by an attorney to decide if the story is “bona fide” news and the deepfake is sufficiently relevant to include in the story itself. At worst, this means politicians and other public figures will start suing journalists and others who talk about newsworthy replicas of them, if they don’t like what the person had to say.
Even worse, the documentary, historical, and biographical exceptions vanish if the work creates a false impression that it’s “an authentic [work] in which the person actually participated.” That condition swallows the exceptions and makes any realistic recreation, like the fourth-grade example above, legally radioactive.
The reach goes well beyond classrooms, too. Academics using recreated voices for research, documentarians patching gaps in archival footage, artists experimenting with digital media, or writers reenacting leaked authentic conversations could all face litigation. The exceptions are so narrowly drawn that they offer no real protection. And the risk doesn’t end with creators. Merely sharing a disputed clip can also invite a lawsuit.
The law also targets AI technology itself. Section 2(c)(2)(B) imposes liability on anyone who distributes a tool “primarily designed” to make digital replicas. That vague standard can easily ensnare open-source developers and small startups whose generative AI models sometimes output a voice or face that resembles a real person.
Then there’s the “notice-and-takedown” regime, modeled after the Digital Millennium Copyright Act. The bill requires online platforms to promptly remove or disable access to any alleged unauthorized “digital replica” once they receive a complaint, or risk losing legal immunity and facing penalties. In other words, platforms that don’t yank flagged content fast enough can be on the hook, which means they’ll likely delete first and ask questions never. That’s a digital heckler’s veto whereby one complaint can erase lawful speech.
On paper, the NO FAKES Act just looks like a safeguard against misleading and nonconsensual deepfakes. In practice, it would give politicians, celebrities, and other public figures new leverage over how they’re portrayed in today’s media, and grant their families enduring control over how they can be portrayed in history.
And let’s not forget that existing law already applies to digital replicas. Most states already recognize a right of publicity to police commercial uses of a person’s name, image, or likeness. Traditionally, that protection has been limited to overtly commercial contexts, such as advertising or merchandising. The NO FAKES Act breaks that guardrail, turning a narrow protection into a broad property right that threatens the First Amendment.
AI-generated expression, like all expression, can also be punished when it crosses into unprotected categories such as fraud or defamation. Beyond those limits, government restrictions on creative tools risk strangling the diversity of ideas that free speech makes possible.
Creativity cannot thrive under a constant need for permission. New mediums shouldn’t mean new muzzles.

Disagree with FIRE on this one... Why should another have the right to generate an extremely realistic version of me saying things that I do not want to be associated with saying? That seems to violate my right of association. There are also major privacy concerns.
I also disagree with FIRE. I support free expression and fair use of likenesses for critical or artistic purposes. I do not support realistic fakes that portray events that didn’t happen. That’s defamation and slander, unless it’s clearly labeled satire. And it has nothing to do with AI; the principle is the same. It’s just that AI makes it easier to create such fakes.
A history teacher making a video of Ronald Reagan saying things he never said for “educational purposes”? What is the possible benefit of that? Have you lost your mind?