Across the United States, teenagers freely express themselves online. But that freedom is rapidly being restricted, and make no mistake: it won't stop with teens. What is often portrayed as a youth mental health issue is really a battle over everyone's online speech rights.
Worldwide, governments are moving to restrict minors’ access to social media. In Europe, France is seeking to ban children under 15 from using social media without parental consent, the United Kingdom is pursuing expanded regulatory authority over youth online access and platform design, and Spain has proposed criminal liability for tech executives who fail to remove harmful content quickly enough. In Asia, China has tightened limits on screen time and platform use for minors while Malaysia is considering a nationwide ban on social media for users under 16. And in Australia, the government has approved sweeping youth access restrictions.
The United States has shown signs of taking a similar course. Congress is advancing legislation like the Kids Online Safety Act (now combined with other bills in the House under the title “KIDS Act”) and multiple states have enacted or attempted age-based social media restrictions despite ongoing First Amendment challenges. The policy momentum is unmistakable — and troubling.
This essay was originally published as part of the second issue of the FIRE tech policy and free expression newsletter Notice and Takedown on March 17, 2026.
The concerns driving this momentum are, to some extent, real. Nearly half of American teens say social media negatively affects people their age. Public health authorities have linked heavy social media use to anxiety, depression, sleep disruption, and increased exposure to cyberbullying. Research from the World Health Organization has documented declining adolescent well-being associated with excessive screen time and digital pressure. At the same time, other studies have found beneficial effects of social media use, particularly for vulnerable youth seeking community and support, and researchers continue to debate whether many of the observed harms have been causally proven.
As a 20-year-old who grew up during the rise of Instagram, Snapchat, and TikTok, I recognize those harms. I remember when social media shifted from something you checked occasionally to something that shaped your social world. Group chats determined inclusion. Posts felt permanent. Comparison was constant. I have felt the pressure of likes and the anxiety of visibility. I have watched peers struggle with online harassment and digital burnout.
Today’s youth are experiencing troubling mental health trends. But even if social media does play a role, proposed solutions that regulate how people can express themselves demand scrutiny, however noble and urgent the purpose authorities claim.
Age-based social media laws do not simply reduce screen time. Most rely on age verification systems that require identity verification in practice. That can mean uploading government-issued identification, biometric scans, or other sensitive personal data just to create an account. What is framed as child protection can quickly become a structural shift in how speech is accessed online, and how many invasive barriers the government requires private companies to place between their users and their platforms.
From a civil liberties perspective, that shift is significant. The Supreme Court has repeatedly recognized the importance of anonymous speech in American tradition, in cases like Talley v. California and McIntyre v. Ohio Elections Commission. Anonymity protects political dissidents, whistleblowers, vulnerable communities, and young people exploring their identities. If participation in digital discourse increasingly requires identity verification, anonymity weakens and the chilling effect on lawful speech grows.
Strict liability regimes create additional risk. If platforms face legal penalties for failing to remove harmful content quickly enough, a tool European regulators especially rely on, they will inevitably err on the side of removing more speech. Automated moderation systems cannot perfectly distinguish between offensive but lawful speech, unprotected speech, and speech that is legal but nevertheless targeted by lawmakers as subjectively “harmful.” Discussions about politics, religion, gender, or social justice may be flagged and suppressed simply because platforms cannot afford the regulatory risk.
Supporters argue these laws target only minors. But the technical infrastructure required to verify age often requires verifying everyone. Platforms cannot easily build parallel systems for adults and children without expanding data collection across the board. That means more identification checks, more stored data, and greater vulnerability to breaches or misuse.
There are better ways to address youth mental health without reshaping the architecture of free expression. Platforms can voluntarily provide stronger parental control tools and resources that help families manage online experiences. Schools can invest in digital literacy education that teaches healthy online engagement. Efforts to address youth well-being should focus on empowering users and families rather than imposing government mandates that reshape how speech is accessed online.
The real question isn’t just teen safety, but this: Is the United States willing to normalize a permission-based internet?
My generation understands the downsides of social media. We have experienced comparison culture and digital pressure. But we have also used these platforms to organize protests, build communities, share stories, and participate in civic life. Social media is not merely entertainment. It is a modern public square, and when we understand it in those terms it’s clear why we cannot tolerate government intervention that censors speech and erodes anonymity.
Protecting teens is essential. Preserving free expression is essential, too. The vast majority of even “harmful” speech on social media is still protected speech that minors have a First Amendment right to access. If access to the public square increasingly depends on proving who you are before you speak, we may address one problem while creating another: a more monitored, less anonymous, and ultimately less free digital environment for everyone.
Lawmakers should approach youth social media reform with care. The goal should be to reduce harm without blocking minors from content they are entitled to, and without infringing on everyone’s rights by conditioning online speech on surrendering anonymity.