California wants to make platforms pay for offensive user posts. The First Amendment and Section 230 say otherwise.
This week, FIRE wrote to California Governor Gavin Newsom, urging him to veto SB 771, a bill that would allow users and government enforcers to sue large social media platforms for enormous sums if their algorithms relay user-generated content that contributes to a violation of certain civil rights laws.
Obviously, platforms are going to have a difficult time knowing if any given post might later be alleged to have violated a civil rights law. So to avoid the risk of huge penalties, they will simply suppress any content (and user) that is hateful or controversial — even when it is fully protected by the First Amendment.
And that’s exactly what the California legislature wants. In its bill analysis, the staff of the Senate Judiciary Committee chair made clear that their goal was not just to target unlawful speech, but to make platforms wary of hosting “hate speech” more generally:
This cause of action is intended to impose meaningful consequences on social media platforms that continue to push hate speech . . . to provide a meaningful incentive for social media platforms to pay more attention to hate speech . . . and to be more diligent about not serving such content.
Supporters have tried to evade SB 771’s First Amendment and Section 230 concerns, largely by obfuscating what the bill actually does. To hear them tell it, SB 771 doesn’t create any new liability; it just holds social media companies responsible if their algorithms aid and abet a violation of civil rights law, which is already illegal.
But if you look just a little bit closer, that explanation doesn’t hold up. To understand why, it’s important to clarify what “aiding and abetting” liability is. Fortunately, the Supreme Court explained this just recently, in a case that also happened to involve social media algorithms.
In Twitter v. Taamneh, the plaintiffs claimed that social media platforms had aided and abetted acts of terrorism by algorithmically arranging, promoting, and connecting users to ISIS content, and by failing to prevent ISIS from using their services after being made aware of the unlawful use.
The Supreme Court ruled that they had not successfully made out a claim, because aiding and abetting requires not just awareness of the wrongful goals, but also a “conscious intent to participate in, and actively further, the specific wrongful act.” All the social media platforms had done was create a communications infrastructure that treated ISIS content just like any other content, and that is not enough.
California law also requires knowledge, intent, and active assistance for aiding-and-abetting liability. But nobody really thinks the platforms have designed their algorithms to facilitate civil rights violations. So SB 771 has a problem: under the existing standard, it’s never going to do anything, which is obviously not what its supporters intend. That’s why they hope to create a new form of liability, recklessly aiding and abetting, for when platforms know there’s a serious risk of harm and choose to ignore it.
But wait, there’s more.
SB 771 also says that, by law, platforms are considered to have actual knowledge of how their algorithms interact with every user, including why every single piece of content will or will not be shown to them. This is just another way of saying that every platform knows there’s a chance users will be exposed to harmful content. All that’s left is for users to show that a platform consciously ignored that risk.
That will be trivially easy. Here’s the argument: the platform knew of the risk and still deployed the algorithm instead of trying to make it “safer.”
Soon, social media platforms will be liable solely for using an “unsafe” algorithm, even if they were entirely unaware of the offending content, let alone had any reason to think it was unlawful.
But under the First Amendment, any liability for distributing speech must require the distributor to have knowledge of the expression’s nature and character. Otherwise, nobody would be able to distribute expression they haven’t inspected, which would “tend to restrict the public’s access to [expression] the State could not constitutionally suppress directly.” Unfortunately for California, the very goal it wants SB 771 to accomplish is what makes the bill unconstitutional.
And this liability is not restricted to content recommendation algorithms (though it would still be unconstitutional if it were). SB 771 doesn’t define “algorithm” beyond the function of “relay[ing] content to users.” But every piece of content on social media, whether in a chronological or recommendation-based feed, is relayed to users by an algorithm. So SB 771 exposes platforms to liability every time any piece of content is shown to any user.
This is where Section 230 also has something to say. One of the most consequential laws governing the internet, Section 230 states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” and prohibits states from imposing any liability inconsistent with it. In other words, the creator of the unlawful content is responsible for it, not the service they used to do so. Section 230 has been critical to the internet’s speech-enabling character. Without it, hosting the speech of others at any meaningful scale would be far too risky.
SB 771 tries to make an end-run around Section 230 by providing that “deploying an algorithm that relays content to users may be considered to be an act of the platform independent from the message of the content relayed.” In other words, California is trying to redefine the liability: “we’re not treating you as the publisher of that speech, we’re just holding you liable for what your algorithm does.”
But there can be no liability without the content relayed by the algorithm. By itself, the algorithm does not cause any harm recognized by law. It’s the user-generated content that causes the ostensible civil rights violation.
And that’s to say nothing of the fact that, because all social media content is relayed by algorithm, SB 771 would effectively nullify Section 230 by imposing liability for all of that content. California cannot evade federal law by waving a magic wand and declaring the thing Section 230 protects to be something else.
Newsom has until October 13 to make a decision. If signed, the law takes effect on January 1, 2027, and in the interim, other states will likely follow suit. The result will be a less free internet and less free speech, until the courts inevitably strike down SB 771 after costly, wasteful litigation. Newsom must not let it come to that. The best time to avoid violating the First Amendment is now.
The second best time is also now.