The Big Tech verdicts you’re cheering for are actually terrible for free speech
The verdicts against social media companies in California and New Mexico over the past two days reveal a disturbing trend: Americans are increasingly willing to view speech as a “product,” subject to regulation in the same way physical substances like alcohol or tobacco are. Many are cheering the decisions, likening them to landmark lawsuits against Big Tobacco.
Let’s be clear: An Instagram post isn’t a cigarette. A YouTube Short isn’t a shot of whiskey. Social media platforms and the information, ideas, and entertainment they connect people to aren’t tangible items that inherently and invariably have physical impacts on the human body. No matter how you feel about social media, the minute we start treating speech as if it were just another physical product is the minute we hand the government the power to decide what we can read, watch, and say.
That’s dangerous — and the First Amendment forbids it.
Declaring the target to be “design features” — such as infinite scroll or notifications — instead of speech doesn’t change things. The First Amendment isn’t fooled by synonyms, and what these lawsuits target is, inescapably, speech. Some allegations take aim at content hosted by platforms that critics perceive as harmful. And the ways platforms arrange, display, and deliver content to users are editorial choices protected by the First Amendment. That those features might be designed to keep users’ attention is hardly a groundbreaking discovery. That is the point of all media. Imposing liability because speech is too appealing would be a breathtaking incursion on free speech.
But this isn’t just about big companies. For decades, courts have recognized that this exact kind of broad liability would have severe effects on society as a whole. If media companies must worry about liability whenever their expressive outputs are thought to be “harmful,” the universe of available content would be reduced to the safest, blandest, and least engaging stuff imaginable. And when it comes to social media, that affects what you are allowed to post, too.
That something is “harmful to children” is a familiar refrain from the government. But the government does not have free-floating authority to decide what ideas are suitable for minors, who have significant First Amendment rights of their own.
Parents across the country — including parents here at FIRE — are worried about their children spending too much time scrolling. But outsourcing parental responsibility to lawyers, tech companies, and the government is the last thing Americans should do.
Parents, not platforms, are in the best position to know what speech their children are capable of handling. And they are the ones who should be making decisions affecting their own families. No amount of top-down regulation or liability can change that fact. Exercising the parental prerogative is undoubtedly more complicated than it used to be, and it’s certainly not always easy. But to protect our expressive rights online, it is necessary.

Instagram is not an idea. It is not speech. It is a product designed to addict and manipulate. I hope they get sued into oblivion.
Whenever these tech companies come under fire for privacy abuses, dominant market share, mental health decline, or algorithmic polarization (and let's be clear: those things are definitely happening), the debate focuses only on how we can regulate them, sue them, have the government break them up into smaller companies, etc.
It's super weird that nobody ever talks about how we can just choose not to use their services.
I get it, some of those services are pretty great and you'll definitely miss them after you've moved on or migrated to services that aren't as good, but this "learned helplessness" phenomenon has got to stop. It's our responsibility to solve our own problems, not to wait for someone else to solve them for us.