A Decision the Media Won't Explain Honestly
President Trump ordered federal agencies to immediately cease using technology from Anthropic, the AI company behind the Claude chatbot. The Hill reported it. Most of the tech press covered it as an arbitrary act of political retaliation — Anthropic, after all, has received significant investment from Google and has positioned itself as the more safety-conscious, values-aligned alternative to OpenAI.
That framing is almost entirely wrong about what's actually at stake.
Anthropic is a company founded by former OpenAI researchers who left specifically to build AI systems with what they describe as stronger safety and alignment properties. Their approach to AI safety is real and technically serious — I'm not dismissing it. But "safety" in the Anthropic sense also means something politically specific. It means the systems have been trained with values that reflect the worldview of the people who built them. And the people who built Anthropic are, to put it gently, not representative of the American population at large.
I've sat with my own kids as they used AI chatbots for homework help. I've watched what happens when a thirteen-year-old asks about American history, about gender identity, about political figures from both parties. The responses are not neutral. They are not simply information retrieval. They reflect choices that were made by the people who trained those systems, about what to say, how to say it, what to emphasize, what to decline to discuss. Those choices did not emerge from a democratic process. They emerged from the political culture of a specific industry in a specific set of cities.
Why This Matters for Federal Government Functions
Federal agencies that touch millions of Americans' lives were integrating AI tools built on these systems. The Department of Veterans Affairs. The Social Security Administration. Federal educational programs. These agencies interact with veterans, retirees, students, and ordinary citizens who didn't choose to have their government-provided services filtered through the assumptions of San Francisco's AI industry.
That's not a hypothetical concern. In 2023, journalists and researchers documented multiple instances of major AI systems from multiple companies — Anthropic included — producing responses to politically sensitive questions that tracked closely with progressive political positions while declining or hedging on equivalent questions from a conservative framing. This is not conspiracy theory. It's documented behavior that the companies themselves have sometimes acknowledged and attributed to their training choices.
When a veteran calls the VA's AI-assisted hotline, or when a senior citizen uses an AI tool to understand their Social Security benefits, they should not be receiving information filtered through any political ideology. The government is supposed to serve all citizens equally. If the AI tools serving them were built with embedded political assumptions — about what information is safe to provide, about how certain historical or policy questions should be framed — that's a legitimate government concern. Not a First Amendment violation. Not censorship. A procurement decision.
The Double Standard in Tech Press Coverage
Watch how the tech press covers this story. They'll emphasize Anthropic's safety research credentials. They'll quote AI researchers lamenting the loss of a responsible actor from federal contracts. They'll frame the decision as ignorant or retaliatory or driven by political animus. What they won't do is engage seriously with the documented ideological tilt of the AI systems in question, because doing so would require acknowledging something the tech press finds uncomfortable: that AI systems reflect the values of their creators, and that their creators are overwhelmingly concentrated in a political demographic that does not represent most of the country.
The same reporters who spent years covering corporate content moderation on social media platforms as neutral, technical, apolitical choices are now discovering that AI training is deeply political, but only when the government intervenes in it. The political choices that went into building the systems in the first place apparently don't count.
This is a pattern I've watched for years in media coverage of tech. The choices that embed progressive assumptions into platforms and tools are described as engineering decisions or safety measures. The choices that push back against those assumptions are described as political interference. The asymmetry is so consistent it can't be accidental.
What Comes Next and Why Parents Should Pay Attention
The Trump administration's order affects federal procurement. It doesn't affect what Anthropic can build or sell commercially. My kids' school can still buy access to Claude. My daughter can still use it for homework. The federal government has simply decided it won't be the customer for this particular product.
But this moment points toward a bigger fight that parents need to be paying attention to. AI systems are entering classrooms across America right now. School districts are signing contracts with AI companies to provide tutoring tools, writing assistants, research aids. These tools will shape how tens of millions of children understand history, science, politics, and ethics. And the companies building them are not neutral actors.
The question of whose values get baked into the AI tools that educate American children is one of the most important questions of the next decade. It's more consequential than curriculum debates, because AI tools are adaptive and pervasive in ways that textbooks never were. A biased textbook is visible and arguable. A biased AI tutor is invisible, interactive, and persuasive in ways designed specifically to feel natural.
Trump's order is a small step toward acknowledging that these are not purely technical products. They are products with politics. Treating them as such is not anti-science. It's the minimum level of discernment any careful purchaser should apply.
More of this scrutiny, please. Not less.