You can’t eliminate real-world violence by suing over online speech
With so much of our national conversation taking place online, there’s an almost reflexive tendency to search for online causes — and online solutions — when tragedy strikes in the physical world. The murder of Charlie Kirk was no exception. Almost immediately, many (some in good faith, and others decidedly less so) began to postulate about the role played by online rhetoric and polarization.
Taking the stage at Utah Valley University to discuss political violence last week, Sens. Mark Kelly and John Curtis shared the view that social media platforms are fueling “radicalization” and violence through their content-recommendation algorithms. And they previewed their proposed solution: a bill that would strip platforms of Section 230 protections whenever their algorithms “amplify content that caused harm.”
This week, the senators unveiled the Algorithm Accountability Act. In a nutshell, the bill would require social media platforms to “exercise reasonable care” to prevent their algorithms from contributing to foreseeable bodily injury or death, whether the user is the victim or the perpetrator. A platform that fails to do so would lose Section 230’s critical protection against being treated as the publisher of user-generated content — and injured parties could sue the platform for violating this “duty of care.”
The debate over algorithmic content recommendation has been going on for years. Lower courts have almost universally held that Section 230 immunizes social media platforms from lawsuits claiming that algorithmic recommendation of harmful content contributed to terrorist attacks, mass shootings, and racist attacks. When faced with the question in 2023, the Supreme Court declined to rule on the scope of Section 230 — opting instead to hold that the claims of algorithmic aiding and abetting at issue would not survive either way.
But there’s an important question that usually gets lost in the heated debate over Section 230: Would such lawsuits be viable even if they could be brought?
In a Wall Street Journal op-ed making the case for his bill, Sen. Curtis wrote, “We hold pharmaceutical companies accountable when their products cause injury. There is no reason Big Tech should be treated differently.”
At first blush, this argument has an instinctive appeal. But it ultimately dooms itself because there is a reason to treat social media platforms differently. That reason is the First Amendment, which enshrines a constitutional right to free speech — a protection not shared by prescription drugs.
Perhaps anticipating this point, Sen. Curtis argues that the Algorithm Accountability Act poses no threat to free speech: “Free speech means you can say what you want in the digital town square. Social-media companies host that town square, but algorithms rearrange it.” But free speech doesn’t only protect users’ right to post online free of government censorship; it also protects the editorial decisions of those that host those posts—including algorithmic “rearranging,” to use the senator’s phrase. As the Supreme Court recently affirmed in Moody v. NetChoice:
When the platforms use their Standards and Guidelines to decide which third-party content those feeds will display, or how the display will be ordered and organized, they are making expressive choices. And because that is true, they receive First Amendment protection.
The “rearranging” of speech is just as protected as the speech itself, as when a newspaper decides which stories to print on the front page and which letters to the editor to publish. That is no less true for social media platforms. In fact, the term “content-recommendation algorithm” itself points to its expressive nature. Recommending something is a message — “I think you would find this interesting.”
The Moody Court also acknowledged the expressive nature of arranging online content (emphasis added): “Deciding on the third-party speech that will be included in or excluded from a compilation — and then organizing and presenting the included items — is expressive activity of its own.” Similarly, while dismissing exactly the kind of case the Algorithm Accountability Act would enable, the U.S. Court of Appeals for the Fourth Circuit held this past February: “Facebook’s decision[s] to recommend certain third-party content to specific users . . . are traditional editorial functions of publishers, notwithstanding the various methods they use in performing” them.
So the First Amendment is at least implicated when Congress institutes “accountability” for a platform’s arrangement and presentation of user-generated content, unlike with pharmaceutical safety regulations. But does it prohibit Congress from imposing the kind of liability the Algorithm Accountability Act creates?
Yes. Two well-established principles explain why.
First: As the Supreme Court has repeatedly made clear, imposing civil liability for protected speech raises serious First Amendment concerns.
Second: Except for the exceedingly narrow category of incitement — where the speaker intended to spur imminent unlawful action by saying something that was likely to cause such action — the First Amendment demands that we hold the wrongdoer accountable for their own conduct, not the people whose words they may have encountered along the way.
The U.S. Court of Appeals for the Fifth Circuit concisely explained why these principles preclude liability for “negligently” conveying “harmful” ideas:
If the shield of the first amendment can be eliminated by proving after publication that an article discussing a dangerous idea negligently helped bring about a real injury simply because the idea can be identified as ‘bad,’ all free speech becomes threatened.
In other words, faced with a broad, unmeetable duty to anticipate and prevent ideas from causing harm, media would be chilled into publishing, broadcasting, or distributing only the safest and most anodyne material to avoid the risk of unpredictable liability.
For this reason, courts have — for nearly a century — steadfastly refused to impose a duty of care to prevent harms from speech. A few noteworthy examples are illustrative:
Dismissing a lawsuit alleging that CBS’ television programming desensitized a child to violence and led him to shoot and kill his elderly neighbor, one federal court wrote of the duty of care sought by the plaintiffs:
The impositions pregnant in such a standard are awesome to consider . . . Indeed, it is implicit in the plaintiffs’ demand for a new duty standard, that such a claim should exist for an untoward reaction on the part of any ‘susceptible’ person. The imposition of such a generally undefined and undefinable duty would be an unconstitutional exercise by this Court in any event.
In a case brought by the victim of a gruesome attack, who alleged that NBC knew of studies on child violence putting it on notice that some viewers might imitate violence portrayed on screen, the court ruled:
[T]he chilling effect of permitting negligence actions for a television broadcast is obvious. . . . The deterrent effect of subjecting [them] to negligence liability because of their programming choices would lead to self-censorship which would dampen the vigor and limit the variety of public debate.
Affirming dismissal of a lawsuit alleging that Ozzy Osbourne’s “Suicide Solution” caused a minor to kill himself, the court noted the profound chilling effect such liability would cause:
[I]t is simply not acceptable to a free and democratic society to impose a duty upon performing artists to limit and restrict the dissemination of ideas in artistic speech which may adversely affect emotionally troubled individuals. Such a burden would quickly have the effect of reducing and limiting artistic expression to only the broadest standard of taste and acceptance and the lowest level of offense, provocation and controversy.
When the family of a teacher killed in a school shooting sued makers and distributors of violent video games and movies, the court rejected the premise of the suit:
Given the First Amendment values at stake, the magnitude of the burden that Plaintiffs seek to impose on the Video Game and Movie Defendants is daunting. Furthermore, the practical consequences of such liability are unworkable. Plaintiffs would essentially obligate these Defendants, indeed all speakers, to anticipate and prevent the idiosyncratic, violent reactions of unidentified, vulnerable individuals to their creative works.
In his op-ed, Sen. Curtis wrote, “The problem isn’t what users say, but how algorithms shape and weaponize it.” But the “problem” this bill seeks to remedy very much is what users say. A content recommendation algorithm in isolation can’t cause any harm; it’s the recommendation of certain kinds of content (e.g., radicalizing, polarizing, etc.) that the bill seeks to stymie.
And that content is overwhelmingly protected by the First Amendment, regardless of whether the posts might, individually or in the aggregate, cause an individual to commit violence. When the City of Indianapolis created civil remedies for people harmed by pornography, the U.S. Court of Appeals for the Seventh Circuit rejected the municipality’s justification that pornography “perpetuate[s] subordination” and leads to cognizable societal and personal harms:
[T]his simply demonstrates the power of pornography as speech. All of these unhappy effects depend on mental intermediation. Pornography affects how people see the world, their fellows, and social relations. If pornography is what pornography does, so is other speech.
[ . . . ]
Racial bigotry, anti-semitism, violence on television, reporters’ biases — these and many more influence the culture and shape our socialization. None is directly answerable by more speech, unless that speech too finds its place in the popular culture. Yet all is protected as speech, however insidious. Any other answer leaves the government in control of all of the institutions of culture, the great censor and director of which thoughts are good for us.
And that’s why the Algorithm Accountability Act also threatens users’ expressive rights. There’s simply no reliable way to predict whether any given post might, somewhere down the line, factor into someone else’s independent decision to commit violence — especially at the scale of modern social media. Faced with liability for guessing wrong, platforms will effectively have two realistic choices: aggressively re-engineer their algorithms to bury anything that could possibly be deemed divisive (and therefore risky), or — far more likely — simply ban all such content entirely. Either road leads to the same place: a shrunken public square where whole neighborhoods of protected speech have been bulldozed.
“What a State may not constitutionally bring about by means of a criminal statute,” the Supreme Court famously wrote in New York Times v. Sullivan, “is likewise beyond the reach of its civil law.” Forcing social media platforms to do the dirty work of censorship on pain of expensive litigation and expansive liability is no less offensive to the First Amendment than a direct government speech regulation.
Political violence is a real and pressing problem. But history has already taught us that trying to scrub away every potential downstream harm of speech is a dead end. And a system of free speech requires us to resist the temptation to try in the first place.

I think many efforts like this legislation assume that people receiving information on social media don’t have agency. I believe that citizens who value liberty should be responsible for the information and speech they ingest. I have a right to speak and to hear; therefore I also have the right to choose when not to speak and not to hear. However, it comes down to my agency and choice. People need to curate the information they take in.
I agree with the main premise of the article, against the bill being presented by Sens. Kelly and Curtis. But I think the writer misses a crucial point. Publishing, whether in print or over the broadcast airwaves, was and is intended for a mass audience. The First Amendment wins in the cases presented in the article are all valid when it comes to video game and movie creators, music artists, and newspaper publishers.
However, a key difference here is that the algorithms used by social media companies and online platforms, primarily Meta and YouTube, are designed to hyper-focus on an individual. A newspaper or news broadcast is typically designed for a mass audience, as is a video game, movie, or song. Those mediums tend to have less of a violently radicalizing effect on readers, viewers, listeners, and players because the people consuming that content aren’t usually inundated with it.
Social media is different: a user opens an app and has content flood their newsfeed. One click on a post can result in a barrage of similar content. If I click on or watch a controversial post, comment on it, like it, or share it, the algorithm zeroes in on that interaction and begins to fill my individual feed with such content, and potentially, if not likely, with even more controversial or radical content. It can, and arguably has been shown to, lead people into a more radicalized and potentially violent state, more so than the traditional means of consuming information.
It’s been proven that social media consumption, and overuse of one’s cellphone or tablet generally, has very poor effects on mental health for an outsize portion of the population. Part of the reason is the extreme targeting of the algorithms, which individualize content based on what people interact with or even just happen to come across. That targeting is also tied to engagement, i.e., commenting and sharing, which happens far more with things that outrage us than with things that would be considered “good news.” Making such algorithms the primary engine of social media propagation and profit has had detrimental effects on society, not to mention an arguably inadvertent self-propagandizing effect on people in both directions of the social and political debate.
I think if there is any merit to the bill being presented, and I don’t think there currently is as far as Section 230 is concerned, it is that targeting the inherent function of the algorithm isn’t an unworthy aim. These senators may be misguided in how they are trying to accomplish that, but the algorithms are actively working against a cohesive society. The free spread of information and healthy debate are vital to a free society, but the algorithms are clearly not fostering such vitality. We are more divided and more anxious than at almost any other time in our country’s history, and we are beginning to approach, if we have not already, conversations and debates with shockingly opposed versions of basic reality. I would argue that allowing tech companies to continue benefitting from such algorithms and software under the guise of the First Amendment would be to allow for the continued disintegration of our society. There IS a conversation to be had about regulating such technology, and one that I think can avoid encroaching on the First Amendment.