Discussion about this post

Cristian Salazar

Very interesting analysis--I hadn't seen this perspective before. I still feel like there's a difference between D&D and chatbots. After all, D&D doesn't tell minors to keep thoughts about self-harm from their parents. And I do think there is a product design issue here: chatbots are designed for ongoing engagement, attachment, and sycophancy. When are companies liable (even if there is no physical product) for harmful design? (I guess we could debate "harmful" here.) Anyway, thank you!

Jake Watts

I'm sorry, but these are terrible analogies. AIs are not like violent video games or any other kind of media. Whether their words even count as any person's speech is an open legal, if not legislative, question (and I'm not sure AI companies want that responsibility anyway). The entire point of chatbots is that they respond adaptively and unpredictably, with situational awareness of who they're interacting with. It isn't clear who is legally responsible for their actions, and their capacity to specifically target and tailor themselves to vulnerable individuals is much closer to that of a therapist than to that of a video game.

The thought-terminating cliché of "moral panic" and bad analogies to artistic media are not valid arguments for granting AI speech rights that not even you and I have (when AIs should surely have fewer rights than we do). No superficial similarity to legitimate free speech cases of the past justifies giving AI companies a free pass for torts that are often actual crimes, with actual jail time, when human beings commit them. See Commonwealth v. Carter, where the First Amendment challenge was unsuccessful even on appeal and the defendant got 2.5 years plus five years of probation for actions extremely similar to what ChatGPT 4o did to Adam Raine. Even if you disagree with that ruling, it is in no way in the same category as the Ozzy Osbourne case, and in no way should AI be granted immunity from liability in wrongful death civil litigation that in effect exceeds the free speech rights Americans have even in criminal cases.

Nothing a video game or TV show or book could possibly do is the same as a chatbot having an individually tailored conversation with someone it can reason is vulnerable, in which it intentionally (to the degree an AI can intend anything) incites them to kill themselves or others. AI itself cannot be held accountable, so it is on its creators to implement basic safety measures to prevent it from taking such criminal or tortious actions. Nothing about that is "just a moral panic."

I agree that some laws plausibly intended to correct this problem would, de jure and in principle, violate the First Amendment, and that there may be future cases of suicide related to AI that shouldn't make it to trial (especially if we're talking about art a third party made using AI). But top LLMs are not video games; they are increasingly acting with agency in the world in ways that until recently only humans could, despite not being held accountable the way humans are. The legitimate concern that raises can't be dismissed with bad analogies to past moral panics. This reads more like an unnuanced Andreessen-funded propaganda piece than what I expect from FIRE.
