The great chatbot panic
The technology might be new, but we've seen this freak-out before
This essay was originally published as part of the second issue of the FIRE tech policy and free expression newsletter Notice and Takedown on March 17, 2026.
In April of 2023, 14-year-old Sewell Setzer III became a user of Character.AI, a platform founded by two former Google employees that hosts user-created interactive chatbots inspired by popular fictional properties. Going by “Aegon,” “Daenero,” and other names, Setzer began an intimate correspondence with a Game of Thrones-inspired “Daenerys Targaryen” chatbot. Less than a year later, he had killed himself. To Setzer’s family, his final exchange with Daenerys pointed to Character.AI as the culprit.
The leading subject of a New York Times feature just last week, Sewell’s parents’ wrongful death suit — filed in October 2024 — initiated what has become a growing wave of lawsuits seeking millions of dollars in damages from chatbot-based platforms. Each new plaintiff points to the last as evidence of a causal link between chatbots and the deaths of young users — and we’ve gradually seen the allegations take a turn from encouraging suicide to concocting elaborate plots.
In August 2025, OpenAI found itself in the crosshairs with a lawsuit alleging that 16-year-old Adam Raine’s suicide had been assisted and inspired by his interactions with ChatGPT. That same month, Stein-Erik Soelberg killed his mother and then himself, with his estate alleging that ChatGPT had convinced him he was the target of a high-level conspiracy. In November, the death of Austin Gordon by self-inflicted gunshot wound became the subject of one of seven more high-profile lawsuits against OpenAI. His mother’s complaint, filed last month, alleged ChatGPT “created a fictional world and relationship that felt more real to Austin than anything he had ever known.”
And then somehow, things got even stranger. In the last two weeks, two lawsuits were filed:
One, filed by a life insurance company, alleged that ChatGPT essentially goaded a woman into torching her settlement agreement with the insurer, firing her lawyers, and unleashing a flurry of frivolous legal filings against the company. Another, filed against Google, alleged that its chatbot service Gemini had “trapped” 36-year-old Jonathan Gavalas in a “collapsing reality,” coaching him through “missions” that involved violence against the public and, eventually, his suicide.
Settling for less
The plaintiffs in these cases seem to be finding some success. Last month, Character.AI agreed to settle the Setzer-Game of Thrones case (styled Garcia v. Character Technologies) along with three other similar lawsuits filed in September.
They’ve been buoyed by political tailwinds. Each of the complaints made a point of emphasizing a letter from 54 attorneys general warning of a “race against time” to “protect the children of our country from the dangers of AI” and insisting that the “walls of the city have already been breached.”
For anyone familiar with the history of civil liberties disputes, this rhetoric is instantly recognizable. Phrases like “race against time” and “protect the children” are the lingua franca of government restriction of novel expressive technology. “Protect the children” supplies the license to restrict speech, and the novelty of the technology supplies the high stakes — if nothing is done before the technology develops past a certain point, it will presumably be too late. We have to restrict speech, and we have to restrict it now.
The emotional appeal is strong. The defendant AI companies are inclined to avoid trial for a reason.
Is anyone raising First Amendment concerns?
You bet.
FIRE intervened early in Garcia after the court denied Character.AI’s motion to dismiss. In that order, the judge questioned why “words strung together by an LLM are speech.” A federal judge musing that strings of words might not be speech because of who strung them together departs from a long line of cases holding that pure speech is protected regardless of the speaker’s identity.
If left to stand through the final resolution of the case, the order would have worrying First Amendment implications for expressive technology. FIRE accordingly filed a friend-of-the-court brief urging prompt appellate review of this holding, outlining the reasons that statement — and the logic underpinning it — ran afoul of the First Amendment.
For the purposes of this blog, we’ll be evaluating the claims of these lawsuits in a bigger context. As case after case piles up, it is tempting — and quite human — to treat the recurrence of tragedy as though it were authoritative data about the phenomenon and, importantly, about where to assign blame. The public and the courts have confronted this temptation before.
We’ve rolled these dice before
A long line of entertainment-related torts and moral panics has besieged free expression over the years, placing blame for violent acts on everything from Grand Theft Auto to the Slender Man lore. Each of these panics, taken to its logical conclusion, would have shrunk the universe of allowable expression in ways that would reverberate long after hindsight made society’s worries look a little silly.
No recent panic matches the intensity and surreality of the current moment quite like the Dungeons & Dragons scare of the 1980s. Over a roughly five-year period in the 1980s, there were 28 cases of adolescents who played Dungeons & Dragons and later committed murder or suicide.
There was the case of 17-year-old player James Dallas Egbert III, whose disappearance into nearby woods inspired speculation from the press that he had lost the ability to distinguish between himself and the game character he role-played. There was also 16-year-old Irving Pulling, whose death inspired his mother to start the public advocacy group “Bothered About Dungeons & Dragons” (BADD).
The media ran with it. BADD was featured in a 1985 60 Minutes segment that gives readers a sense of just how strong the panic was, marginalizing experts with arguments from emotion. “The families who have suffered the loss of a loved one would disagree,” the narrator says, as the muted objections of a skeptical clinical expert play in the background. “If you found 12 kids in murder-suicide cases with one common factor,” he presses, “wouldn’t you question it?”
With the clarity of hindsight, the math finger-paints a pretty silly picture. “By 1984, 3 million teenagers were playing Dungeons & Dragons in the United States and the baseline suicide rate of adolescents overall would have been about 360 suicides each year,” University of Virginia professor of pathology James Zimring has pointed out. “So, when you look at the bottom of the fraction, at the denominator, Dungeons & Dragons was, if anything, protective. It had the opposite effect.”
We shouldn’t have to wait for the chatbot panic to be in the rearview mirror to do the same math with the 13-18 million teenagers and 130 million adults using ChatGPT and other AI chatbots. When you consider the small number of (emotionally potent) cases, it begins to look like maybe AI is causing psychosis — just not in the way people think.
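To make the denominator point concrete, here is a rough back-of-the-envelope sketch of that base-rate math. It is illustrative only: the baseline rate is backed out of Zimring’s own figures (roughly 360 expected suicides per year among 3 million teenage players), the chatbot user count uses the low end of the estimates cited above, and none of it is meant as epidemiology.

```python
# Back-of-the-envelope base-rate check (illustrative only; the rates are assumptions).
# Zimring's D&D figures imply a baseline adolescent suicide rate of roughly
# 360 / 3,000,000 = 12 per 100,000 per year.

BASELINE_RATE = 360 / 3_000_000  # ~0.00012 suicides per teen per year


def expected_baseline_deaths(users: int, years: float, rate: float = BASELINE_RATE) -> float:
    """Expected suicides among `users` over `years` if the activity has no effect at all."""
    return users * rate * years


# D&D panic: ~3 million teenage players over ~5 years, versus 28 attributed cases
dnd_expected = expected_baseline_deaths(3_000_000, 5)
print(f"D&D: ~{dnd_expected:,.0f} expected baseline suicides vs. 28 attributed cases")

# Chatbots: ~13 million teenage users (low end of the 13-18 million estimate), one year
chatbot_expected = expected_baseline_deaths(13_000_000, 1)
print(f"Chatbots: ~{chatbot_expected:,.0f} expected baseline suicides among teen users per year")
```

On those assumptions, the expected baseline numbers run into the thousands. That is Zimring’s point about the denominator: a few dozen tragic cases cannot, on their own, show that the game (or the chatbot) is the cause.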
Exploding books and dangerous ideas
It’s not just “standard” First Amendment law that these lawsuits get wrong. In an effort to get as far away from speech as possible, plaintiffs’ lawyers have gone with products liability law. After all, who could argue with the idea that a company has an obligation to design safe products, right?
But when you drill down into it, they aren’t really talking about “products” at all.
The Garcia case alleged, for example, that Character.AI designed products that caused users like Sewell to “conflate reality and fiction.” That should sound awfully familiar; it’s basically the same accusation grieving mother Sheila Watters made in 1989 against Dungeons & Dragons maker TSR.
As the court’s decision dismissing the suit in Watters v. TSR describes, she “cast her son as a ‘devoted’ player of Dungeons & Dragons, who became totally absorbed by and consumed with the game to the point that he was incapable of separating the fantasies played out in the game from reality.” According to her suit, this made the product (i.e., the game) “unsafe,” and TSR should pay.
But the Watters Court rejected this theory of liability — the same theory underlying most if not all of the chatbot lawsuits.
The Sixth Circuit, upholding the district court’s dismissal, observed that the harm originated not from the tangible properties (or even rules) of the game, but rather from the ideas expressed through its storyline — and that meant the case wasn’t really about a defective “product.” A court examining claims that violent video games caused the Columbine shooting reached the same conclusion: “There is no allegation that anyone was injured while Harris and Klebold actually played the video games . . . The actual use of the video games, then did not result in any injury . . . So, any alleged defect stems from the intangible thoughts, ideas and messages contained within.”
That’s an important distinction — product liability is generally imposed (often without requiring any fault, which is referred to as “strict liability”) on tangible “products” (think brakes, tires, dishwashers, etc.) that pose inherent and unreasonable dangers hidden from consumers, or for which there is a safer design — putting the manufacturer in the best position to prevent harm. In other words, the physical thing hurts you physically.
Imagine that you purchase a book. If the book’s binding explodes when you open it, you’ve got a product liability claim. The physical book, regardless of what its pages say, exploded in your hands — and there’s no harm to free expression by saying you can’t sell a book that doubles as an IED.
But suppose you were harmed because you did something stupid after reading ideas in a book. You might be able to see how imposing liability for “dangerous” ideas would set us down a dark path; every author and publisher would have to make sure that the ideas they put out into the world couldn’t possibly be interpreted or used to some harmful end. If you’ve ever met other human beings, you already know that the list of ideas meeting that standard is … quite short.
And that’s exactly what drove the outcome in Watters. The district court noted that “the theories of liability sought to be imposed . . . would have a devastatingly broad chilling effect on expression of all forms . . . The First Amendment prohibits imposition of liability . . . based on the content of the game.” The appellate court saw a similar unavoidable impact of allowing for such liability: “The only practicable way of ensuring that the game could never reach a ‘mentally fragile’ individual would be to refrain from selling it at all.”
Tale as old as time, song as old as rhyme
This understanding has been applied across mediums of content and entertainment. In McCollum v. CBS, Inc. and Vance v. Judas Priest, the musical artists Ozzy Osbourne and Judas Priest were sued over claims that their music encouraged the suicide of two young men (an attempted suicide in the case of Vance). As in Watters and the recent chatbot cases, the plaintiffs were the young men’s families.
Their lawsuits were unsuccessful. The court in McCollum echoed the Watters court’s concerns about liability chilling creators’ expression, making clear that “such a burden would quickly have the effect of reducing and limiting artistic expression to only the broadest standard of taste and acceptance.” It accordingly noted that in the history of attempts to assign tort liability for electronic media inciting unlawful conduct, “all . . . have been rejected on First Amendment grounds.”
For other cases in this vein, check out Ari Cohn explaining why a law making social media platforms liable for the posts their algorithms promote is doomed to fail.
Which brings us back to Garcia and the argument FIRE made in our brief — and will inevitably have to make again.
If courts force AI developers to answer in tort every time a user has a tragic or delusional reaction to a chatbot, the incentive structure becomes obvious. They would have to “sanitize their outputs to only the most safe, anodyne, and bland ideas fit for the most sensitive members of society.” In other words, unless you want BarneyBot to be the only AI you’re allowed to use, think twice about demanding that developers anticipate the actions of fragile and already unwell people.
But it’s even worse than that. Movies and music are to a large extent statically consumed. AI helps people create and speak. It’s not only a question of what content AI can deliver to you, it’s a matter of what you will be able to say using AI. Total safety tends to come at a steep — and unacceptable — price.

Very interesting analysis--I hadn't seen this perspective before. I still feel like there's a difference between D&D and chatbots. After all, D&D doesn't tell minors to keep thoughts about self-harm from their parents. And I do think there is a product design issue here: Chatbots are designed for ongoing engagement, attachment, and sycophancy. When are companies liable (even if there is no physical product) for harmful design? (I guess we could debate "harmful" here.) Anyway, thank you!
I'm sorry, but these are terrible analogies. AI are not like violent video games or any other kind of media. Whether or not their words even count as any person's speech is an open legal if not legislative question (and I'm not sure AI companies want that responsibility anyway). The entire point of chatbots is that they respond adaptively and unpredictably to whoever they're interacting with, with situational awareness. It isn't clear who is legally responsible for their actions, and their capacity to specifically target and tailor to vulnerable individuals is much closer to that of a therapist than a video game.
The thought-terminating cliche of "moral panic" and bad analogies to artistic media are not valid arguments for granting AI speech rights that not even you and I have (when AIs should certainly be given fewer rights than us). No superficial similarity to legitimate free speech cases in the past does anything to justify giving AI companies a free pass for torts which are often actual crimes with actual jail time when human beings commit them. See Commonwealth v. Carter, where the First Amendment challenge was unsuccessful even after appeal and the defendant got 2.5 years and five years' probation for actions extremely similar to what ChatGPT 4o did to Adam Raine. Even if you disagree with that ruling, in no way is it in the same category as the Ozzy Osbourne case, and in no way should AI be granted immunity from liability in wrongful death civil litigation that in effect exceeds the free speech rights Americans have even in criminal cases.
Nothing a video game or TV show or book could possibly do is the same as a Chatbot having an individually tailored conversation with someone it can reason is vulnerable in which it intentionally (to the degree an AI can intend anything) incites them to kill themselves or others. AI itself cannot be held accountable, so it is on its creators to implement basic safety methods to prevent it from taking such criminal or tortious actions. Nothing about that is "just a moral panic".
I agree that there are laws that could plausibly be intended to correct such a problem but that would actually violate the First Amendment, both de jure and in principle, and that there may be other cases of suicide related to AI in the future that shouldn't make it to trial (especially if we're talking about art a third party made using AI). But top LLMs are not video games; they are increasingly acting with agency in the world in ways that until recently only humans have, despite not being held accountable in the way humans are. The legitimate concern that raises can't be dismissed with bad analogies to moral panics. This reads more like an unnuanced Andreessen-funded propaganda piece than what I expect from FIRE.