This week, Anthropic CEO Dario Amodei publicly defended his company against White House artificial intelligence czar David Sacks’ accusation that Anthropic is building “woke AI.” In other words, the leader of a $183 billion AI company found himself reassuring the current administration that his company’s AI chatbot wouldn’t spread “ideological bias.”
Despite the Trump White House’s claims to champion free speech and AI innovation, the administration is pressuring private companies to align their AI systems with its own definition of acceptable viewpoints. This comes as the administration deploys similar pressure tactics against broadcasters and universities to compel them to conform to government-approved viewpoints.
Yet, new research reveals both good news and a warning for America’s leadership in AI. According to a comprehensive report released this month by our organization, The Future of Free Speech at Vanderbilt University, the U.S. currently ranks as the most speech-protective country for generative AI among major economies.
Analyzing both legislation and corporate practices across six jurisdictions — the U.S., China, the European Union, India, Brazil and South Korea — the study found that America’s First Amendment protections and light regulatory touch have created an environment in which AI can flourish without heavy-handed government interference.
But this lead is fragile. The Trump administration’s “anti-woke” AI agenda and a patchwork of worrying state laws threaten to undermine the very openness that made American AI companies global leaders.
Our report shows that the U.S.’ high ranking in AI policies that respect free speech rests on the First Amendment’s strong baseline: minimal government intervention in expression, creating wide latitude for debate, even on controversial issues. The U.S. also leads other countries because the current administration has embraced a philosophy rooted in global competitiveness, which includes a light regulatory touch and the promotion of open models.
But Sacks’ accusation that Anthropic has created a “woke AI” is part of the White House’s broader push to enforce “neutrality” in AI systems. On one hand, the White House’s recent “AI Action Plan” and the executive order “Preventing Woke AI in the Federal Government” purport to defend free expression by keeping AI “free from ideological bias.” In practice, however, they risk substituting one orthodoxy for another.
The order requires that federal procurement of AI systems favor models deemed “neutral” or “truth-seeking,” while directing agencies such as the National Institute of Standards and Technology to strip concepts such as diversity, equity and inclusion from their standards.
“Neutral AI” may sound appealing in theory, but in practice, it is not a static setting but a moving target shaped by culture and politics. That’s because algorithms cannot be entirely free from ideology, especially if the government defines which ideas count as ideological. A true free-speech-oriented approach would allow diverse models to coexist, reflecting different values, rather than enforcing uniformity through procurement and compliance incentives.
Despite the AI Action Plan’s emphasis on innovation and openness in AI, this unrealistic push for neutrality could easily slide into viewpoint policing as more companies vie for government contracts and favor.
The First Amendment was designed precisely to prevent government actors from dictating which viewpoints are acceptable, and America’s leadership on AI and free speech depends on maintaining a steadfast commitment to these principles.
Meanwhile, states are moving fast, often at the expense of these free-speech principles, to regulate AI. In the first half of 2025, 38 states adopted or enacted about 100 laws and policies related to AI. Some efforts, such as narrowly defined policies aimed at tackling explicit content concerning children, are obviously welcome responses to genuine concerns.
But laws aimed at political expression, such as proposals to ban political deepfakes, risk violating the First Amendment. In August, a federal judge struck down a California law prohibiting deceptive political deepfakes before elections, citing First Amendment concerns.
The result of these numerous attempts to regulate AI is a messy, unstable patchwork that could not only chill innovation but also restrict lawful expression and users’ right to receive information.