The fight to define social media could redefine free speech
Social media platforms are the latest communicative technology to be caught in the crosshairs of frightened legislatures.
Karan Kuppa-Apte is a junior at Bates College and a 2025 FIRE summer intern.
Social media platforms are having a First Amendment moment. From content moderation to so-called “addictive” features such as infinite scrolling, lawmakers are putting bipartisan pressure on these companies to change their policies and even the fundamental designs of their platforms. But to do so, they first have to define what a social media platform is.
That task has fallen to lawmakers and judges across the country, all struggling to arrive at a definition. However, their reasons for doing so and the approaches they are taking reflect the peculiarities of their respective branches: Lawmakers need a definition they can use to craft workable policies, while judges seek to draw connections between today’s social media cases and established precedent from cases involving past communicative technologies. But the resulting vague laws and inconsistent legal analogies show the severity of these definitional difficulties.
For example, Ohio’s law defines a social media company as an online website, service, or product that allows users to “interact socially” by building a profile, developing a list of contacts, and creating or sharing content.
Despite its relative clarity compared to many other states’ laws, even this definition has some glaring holes. For one, how are courts to decide what it means to “interact socially” on any given platform? Direct messaging on Instagram or reposts on X clearly qualify, but what about building a collaborative playlist on Spotify or saving someone’s Pinterest post to your own board? These examples may seem like a stretch, but both involve people interacting in a way that communicates some message.
The Ohio law also makes exceptions for comment sections under “content posted by an established and widely recognized media outlet.” But the law does not define what makes a media outlet “established and widely recognized.” This vagueness opens the door to partisan abuse: A liberal attorney general might enforce the law against Breitbart News, while a conservative one could punish The Huffington Post. And by exempting some speakers but not others, the law reveals itself to be a content-based (and therefore likely unconstitutional) regulation.
Texas and Florida took their own stabs at a definition when trying to curtail platforms’ content moderation policies. They, too, missed the mark: Florida Senate Bill 7072 is incredibly broad, lumping together social media, search engines, and any “internet platform” that meets certain revenue and user thresholds. Texas House Bill 20 gives a leaner definition, but still clumsily targets public, account-based platforms designed to let users communicate. It explicitly excludes email and news outlets, revealing itself as an attempt to carve social media out of otherwise-protected online speech.
On the judicial side, the Texas and Florida laws were challenged by NetChoice, a trade group representing social media companies. In both cases, judges sought to understand what social media platforms are by looking at what they do — and how it fits with past case law. In Moody v. NetChoice, the U.S. Court of Appeals for the Eleventh Circuit ruled against Florida, reasoning that social media platforms behave like newspaper publishers or parade organizers, whose curation of speech in their products is protected by the First Amendment.
In NetChoice v. Paxton, the Fifth Circuit took a different approach, likening platforms not to assemblers and curators of speech but to hosts or conduits of it. According to this definition, moderation decisions are “not speech at all” and do not enjoy First Amendment protection. The Supreme Court heard both cases together and sent them back to the lower courts for further proceedings on procedural grounds. But in doing so, it signaled that a supermajority of the Court agreed the Fifth Circuit’s analysis was wrong, and that social media platforms’ content policies and decisions are protected expression, just as a newspaper’s would be.
But just because lawmakers and judges take different approaches to defining social media platforms doesn’t mean their roles can’t complement each other. Look no further than Section 230 of the Communications Decency Act, which shields online services from liability for third-party content published on their sites. Section 230 predates modern social media and thus doesn’t offer a definition for platforms, but it helped define the internet and social media as we know them today.
Soon after Section 230 passed, courts were asked to clarify the law’s application. In Zeran v. America Online, the Fourth Circuit interpreted Section 230 as barring lawsuits against platforms for carrying out “a publisher’s traditional editorial functions.” Accordingly, Section 230 confers an additional layer of protection on social media platforms’ content decisions: The First Amendment protects platforms’ editorial choices about which content to display, and Section 230 shields them from liability both for the third-party content itself and for their decisions about whether or how to display it. The story of Section 230 demonstrates how the courts can apply well-written laws to protect online speech.
This process has yet to result in a definition for social media platforms. But a definition may be beside the point: Section 230 shields online platforms from liability for carrying third-party content regardless, and the fact that social media is a relatively new phenomenon does not put it beyond the First Amendment’s reach.
A big reason lawmakers get tripped up in defining social media is that, in the past, different forms of media served singular functions: Newspapers were read, radio was listened to, television was watched. That division was part of the basis for how media was regulated. Today’s technologies have converged those functions and complicated the old regulatory model. Attempting to treat a complex platform like Facebook as a telegraph company is not only unproductive, it discourages a vibrant online marketplace of ideas.
Instagram publishes third-party content like a newspaper, enables direct communication like a phone company, and makes decisions about which speech to include or exclude like a parade organizer. Courts have made it clear that as new speech media emerge, whether in film, video games, or online, the First Amendment’s governing principles apply with the same force.
Current efforts to define social media are motivated by a desire to control it and, by extension, speech. So far, courts have mostly shown respect for the First Amendment and struck down bad laws — although the Supreme Court's recent decision in Free Speech Coalition v. Paxton could be a warning that judges are starting to turn their backs on protecting online speech.
Social media platforms are the latest communicative technology to be caught in the crosshairs of frightened legislatures. When drafting laws to define social media, lawmakers should remember that while the medium has changed, the speech at issue remains protected. And when judges read those laws, they should maintain the respect that courts have shown to the First Amendment as applied to new technologies.