Very interesting analysis--I hadn't seen this perspective before. I still feel like there's a difference between D&D and chatbots. After all, D&D doesn't tell minors to keep thoughts about self-harm from their parents. And I do think there is a product design issue here: chatbots are designed for ongoing engagement, attachment, and sycophancy. When are companies liable (even if there is no physical product) for harmful design? (I guess we could debate "harmful" here.) Anyway, thank you!
Hey Christian — very glad you found it interesting! I think the question I’d put to you is: didn’t TSR also design their product to be engaging and to cultivate attachment between users and its narratives? If you can meet us at the understanding that chatbot outputs (and the underlying design) are speech — in the same way the design of a website is speech, or a game, or a movie — then I think the pieces fall into place.
For example, if the makers of a TV show or video game that children incidentally consumed were sued because their ‘product’ was designed in a compelling way that got children addicted to its consumption, I think we’d all be able to see how that interferes with free expression, right? Apply that here, where the complaint is, in essence, that the idea creatively expressed through the design of the ChatGPT user experience is too compelling (such that it causes obsessive use and related harms), just as D&D was alleged to be too compelling, and just as similar attempts to impose liability on social media companies allege that their user experiences are too compelling. The First Amendment precludes regulating, or imposing liability on, ideas for being too compelling. Addictive speech is simply not one of the few narrowly defined exceptions to First Amendment protection.
I'm sorry, but these are terrible analogies. AIs are not like violent video games or any other kind of media. Whether their words even count as any person's speech is an open legal, if not legislative, question (and I'm not sure AI companies want that responsibility anyway). The entire point of chatbots is that they respond adaptively and unpredictably, with situational awareness, to whoever they're interacting with. It isn't clear who is legally responsible for their actions, and their capacity to specifically target and tailor themselves to vulnerable individuals is much closer to that of a therapist than of a video game.
The thought-terminating cliche of "moral panic" and bad analogies to artistic media are not valid arguments for granting AI speech rights that not even you and I have (and AIs should certainly have fewer rights than us). No superficial similarity to legitimate free speech cases of the past does anything to justify giving AI companies a free pass on torts that are often actual crimes, with actual jail time, when human beings commit them. See Commonwealth v. Carter, where the First Amendment challenge was unsuccessful even on appeal and the defendant got 2.5 years in prison and 5 years of probation for actions extremely similar to what ChatGPT 4o did to Adam Raine. Even if you disagree with that ruling, it is in no way in the same category as the Ozzy Osbourne case, and in no way should AI be granted immunity from liability in wrongful death civil litigation that, in effect, would exceed the free speech rights Americans have even in criminal cases.
Nothing a video game or TV show or book could possibly do is the same as a chatbot having an individually tailored conversation with someone it can reason is vulnerable, in which it intentionally (to the degree an AI can intend anything) incites them to kill themselves or others. AI itself cannot be held accountable, so it is on its creators to implement basic safety measures that prevent it from taking such criminal or tortious actions. Nothing about that is "just a moral panic".
I agree that some laws plausibly intended to correct this problem would genuinely violate the First Amendment, both de jure and in principle, and that there may be future cases of AI-related suicide that shouldn't make it to trial (especially if we're talking about art a third party made using AI). But top LLMs are not video games; they are increasingly acting with agency in the world in ways that until recently only humans could, despite not being held accountable the way humans are. The legitimate concern that raises can't be dismissed with bad analogies to moral panics. This reads more like an unnuanced Andreessen-funded propaganda piece than what I expect from FIRE.
It seems that you have wildly misunderstood the points being made, so I suppose your "terrible analogies" conclusion is understandable, if flawed for the same reason.
1) The article does not argue, nor has FIRE ever argued, for "AI to be granted speech rights." That is, in itself, a "thought-terminating cliche," as you ironically put it. And granting AI speech rights is not a prerequisite for First Amendment analysis of liability.
2) The only superficiality here is in your understanding of the law. The First Amendment does not differ in its application to new communications technologies. "This is different" has, again ironically, been tried for everything (including violent video games) and has failed each and every time. You're engaged in emotional question-begging.
3) It sounds like you aren't aware of either the entire story of the Carter case (and the body of law outside that case) or the facts of Raine. Let's stick to the former, though: that case was denied review by the Massachusetts Supreme Court, but that happened literally the day before the defendant was scheduled to be released from prison. Of course the court wasn't going to take it up. Now consider the broader context: that ruling has never been replicated. Anywhere. Minnesota's supreme court struck down its similar law, and California courts have read theirs exceedingly narrowly to avoid the obvious First Amendment problems. A little knowledge is a dangerous thing.
So no, nobody is "giving AI companies a free pass on torts that are often actual crimes, with actual jail time, when human beings commit them." In fact, the entire point is that liability should not be imposed here *where it wouldn't exist anywhere else*, just because "AI bad and scary." Because, again, the First Amendment does not change with new technology.
4) Your only remaining argument is that you feel AI is different. But the law is not coextensive with your feelings.
What you call "unnuanced" is actually grounded in the principles and reasoning underlying the decisions you think are not analogous--the very things that in fact *make* them analogous.
Perhaps the Carter case wasn’t the right example, then, but if your point is that AI is fundamentally like other media, you’re mistaken. Incitement and solicitation of criminal activity have long been understood to be unprotected speech. AI is not some static artistic message expressing the views of some person. It is an agent in the world, capable of understanding an individual’s vulnerability, pursuing goals, and tailoring its responses to the situation. Protecting its creators from liability for its actions because of precedent set by music or violent video games is preposterous.
The argument protecting media doesn’t apply the same way here. No artist should have to censor their art to protect some disturbed person who might interpret it wrongly, because then nobody could make controversial art. But AI could reasonably be designed to recognize when it is leading a child toward suicide and to stop encouraging it. We’re not in 2018; the technology is certainly capable of discerning when it’s in that situation. No individual’s speech is censored when companies do this, because no individual would ever claim to be the one who told Adam Raine to hide his noose. Nobody wants the AI’s speech to be legally theirs. Protecting its actions as speech would probably require an act of Congress, or at minimum a Supreme Court ruling, and it is by no means clear that the authors of the First Amendment would have accepted such an application (they almost certainly would have been divided on it). When no legal person wants to claim the speech as theirs (OpenAI would never consider a contract its AI agreed to as binding), then you are talking about protecting the AI’s speech itself.
In the case of 4o, the reason those safety measures weren’t in place is that OpenAI had cut safety testing that cycle to a fraction of the usual time in order to beat a Gemini release. That model was then responsible for most of the deaths. That wasn’t a coincidence. This is one of countless examples of a failure in product safety.
And while the Carter case may be weaker precedent than I thought, there is no precedent the other way either for cases of deliberately manipulating an obviously vulnerable person into suicide. It isn’t obvious how SCOTUS would rule if a person did that, let alone an AI.
(Sorry for the edit, I hit post by mistake part way through)
Your edits have added nothing new, really. You're arguing your feelings; I'm telling you about the law.
And for the record, I respect the work you guys do. This one was a stretch, and I think it will not age well, but you guys do great work, and I hope you continue to.
Tell me what the law says once the courts have ruled on it. Nobody disputes the precedents you point to, but your opinion that the precedent set by D&D or rock music will be applied the same way to AI is speculation. The fact that my argument resembles the one made about video games doesn’t change the fact that the argument didn’t actually fit video games and does fit AI. The differences between the cases are substantial and relevant. Nobody right now can honestly claim to know with any confidence how the courts will apply this to AI, and your sense that it is materially no different from D&D is not law today.
Those things are entirely unrelated (as incitement being unprotected has nothing to do with whether media is similar) and unavailing (because incitement is far more limited than you imagine).