The quiet push to control AI speech
New federal plans to review AI models before release could blur the line between oversight and censorship.
Recent reports suggest the Trump administration is now considering new oversight for advanced AI models like ChatGPT, Claude, and Gemini. Few details have been finalized, but officials are reportedly discussing an executive order to create a government–industry working group. Another idea under consideration is a process for reviewing models before or around their release.
As these talks move forward, they risk setting a troubling precedent for free expression.
Some AI companies, such as xAI, have already agreed to provide early access to their models. According to one report, a group has already been formed within the Department of Commerce to review these models, sometimes testing them with fewer safety limits, to assess what risks they might pose, especially to cybersecurity and national security.
This isn’t entirely new; OpenAI and Anthropic have done something similar before. But the Trump administration is now considering making this kind of review much more formal.
AI is an expressive tool — one that people use to learn, ask questions, and engage with ideas. The design of an AI system also reflects a series of choices about what information to prioritize, how to reason, what boundaries to draw, and how to respond. Those choices embed values and assumptions about knowledge, truth, and human interaction.
People who build and use AI tools do not shed their constitutional rights to freedom of expression at the prompt window. That includes the right to speak without being pressured by the government to first seek approval or give officials a look under the hood. What starts as “just a review” can quickly become pressure to change what tools the public is allowed to have and what information users are allowed to see. Informal oversight has a way of turning into coercion.
Imagine an AI company being told its approval depends on how its model handles controversial issues — whether it reflects the government’s preferred stance on tariffs, energy or climate policy, or how it discusses election integrity ahead of an upcoming vote. Even without an explicit mandate, that kind of signal would pressure developers to shape outputs to satisfy officials rather than reflect independent judgment.
We’ve seen this before. When Biden administration officials repeatedly pressured social media platforms to remove or downgrade COVID-related content, public discourse about the most important issue of the day was covertly shaped by government officials. Under the current administration, we’ve observed how the Federal Communications Commission’s authority to regulate the broadcast spectrum, which sounds fairly technical and benign in theory, has been weaponized to pressure news outlets into reshaping their coverage at the government’s behest.
In those cases, the pressure largely operated after content was posted. This new arrangement is more dangerous because it moves that pressure upstream, before an AI model is ever released to the public. Giving the government a role in reviewing speech or expressive tools at that stage creates a point of leverage it can use to ensure certain expression never sees the light of day. If the government can give thumbs-up-or-down approval, that is a prior restraint in First Amendment terms: blocking speech before it is communicated rather than punishing it afterward. Courts presume prior restraints unconstitutional for exactly that reason.
Officials often invoke national security to justify expanding their reach. But national security isn’t a blank check. The public should be wary of arrangements that give government officials of either party a foothold in shaping what AI systems generate.
Once the government assumes a gatekeeping role over emerging forms of expression, the line between oversight and censorship gets blurry real fast.