Edan Kauer is a former FIRE summer intern and a sophomore at Georgetown University.
Elliston Berry was just 14 years old when a male classmate at Aledo High in North Texas used AI to create fake nudes of her based on images he took from her social media. He then did the same to seven other girls at the school and shared the images on Snapchat.
Now, two years later, Berry and her classmates are the inspiration for Senator Ted Cruz’s Take It Down Act (TIDA), a recently enacted law that gives social media platforms 48 hours to remove “revenge porn” once it is reported. The law treats any non-consensual intimate imagery (NCII), including AI-generated deepfakes, as falling under this category. But despite the law’s noble intentions, its dangerously vague wording is a threat to free speech.
This law, which covers both adults and minors, makes it illegal to publish an “intimate visual depiction” of an identifiable minor, defined as certain explicit nudity or sexual conduct, with intent to “arouse or gratify the sexual desire of any person” or to “abuse, humiliate, harass, or degrade the minor.”
That may sound like a no-brainer, but deciding what content this text actually covers, including what counts as “arousing,” “humiliating,” or “degrading,” is highly subjective. The law risks chilling protected digital expression by prompting social media platforms to censor harmless content, like a family beach photo, a sports team picture, or an image of injuries or scars, in order to avoid legal penalties or in response to bad-faith reports.
Civil liberties groups such as the Electronic Frontier Foundation (EFF) have noted that the language of the law itself raises censorship concerns because it’s vague and therefore easily exploited:
TAKE IT DOWN creates a far broader internet censorship regime than the Digital Millennium Copyright Act (DMCA), which has been widely abused to censor legitimate speech. But at least the DMCA has an anti-abuse provision and protects services from copyright claims should they comply. This bill contains none of those minimal speech protections and essentially greenlights misuse of its takedown regime … Congress should focus on enforcing and improving these existing protections, rather than opting for a broad takedown regime that is bound to be abused. Private platforms can play a part as well, improving reporting and evidence collection systems.
Nor does the law contain any safeguard against people filing bad-faith reports.
The Supreme Court has struck down similarly well-intentioned but overbroad laws before. In the 2002 case Ashcroft v. Free Speech Coalition, the Court held that the language of the Child Pornography Prevention Act (CPPA) was so broad it could have been used to censor protected speech. Congress passed the CPPA to combat the circulation of computer-generated child pornography, but as Justice Anthony Kennedy explained in the majority opinion, the CPPA’s language reached material that merely appears to depict minors, such as films using adult actors, without involving any actual children.
Also in 2002, the Supreme Court heard Ashcroft v. ACLU, which arose after Congress passed the Child Online Protection Act (COPA) to prevent minors from accessing adult content online. But again, because of the law’s broad language, the Court found it would also restrict adults who are well within their First Amendment rights to access mature content.
Like the Take It Down Act, both were laws created to protect children from sexual exploitation online, yet both rested on vague and overly broad standards that threaten protected speech.
Unfortunately, stories like the one at Aledo High are becoming more common as AI becomes more accessible. Last year, boys at Westfield High School in New Jersey used AI to circulate fake nudes of 14-year-old Francesca Mani and other girls in her class. Westfield High administrators were caught off guard, having never dealt with an incident of this kind. Although the Westfield police were notified and the perpetrators were suspended for up to two days, parents criticized the school’s weak response.
A year later, the school district developed a comprehensive AI policy and amended its bullying policy to cover harassment carried out through “electronic communication,” including “the use of artificial intelligence (‘AI’) technology.” What’s true for Westfield High is true for America: existing laws are often more than adequate to deal with emerging tech issues. By treating AI-generated material as a form of electronic communication under its existing bullying policy, Westfield High showed that writing new AI-specific rules is often redundant. On a national scale, the same can be said for classifying and prosecuting instances of child abuse online.
While we must acknowledge that online exploitation is a very real problem, we cannot solve it at the expense of other liberties. Once we grant the government the power to silence voices we find distasteful, we open the door to censorship. It is essential to address the harms of emerging AI technology, but we must do so while keeping our First Amendment rights intact.
Nor is the Take It Down Act unique: proposals like California’s S.B. 771 pose similar dangers to free expression.