On February 27, U.S. President Donald Trump announced on Truth Social that he had instructed all federal agencies to immediately cease cooperation with Anthropic, one of America’s most prominent AI model developers. Agencies already using the company’s technology, including the Department of Defense, were granted a six-month transition period.
“We don’t need it. We don’t want it. And we will no longer do business with them,” Trump wrote, denouncing Anthropic as a “radical left-wing company” attempting to impose its ideology on the U.S. military.
On the surface, the dispute appeared to be a contractual disagreement over technical terms and usage scope. In reality, it rapidly escalated into something far more consequential: a struggle over who ultimately controls artificial intelligence when it becomes a tool of state power.
Trump warned that if Anthropic failed to comply during the transition window, he would invoke “the full authority of the presidency,” including potential civil and criminal penalties. The threat was explicit: Anthropic could agree to unrestricted military use of its AI models, or face coercion under the Defense Production Act.
Anthropic refused.
The First AI on Classified Networks
Anthropic’s models were among the first advanced AI systems deployed on U.S. government classified networks. The Pentagon sought a contract allowing the models to be used for “any lawful purpose” under its AI strategy. Anthropic objected, drawing red lines against mass domestic surveillance and fully autonomous weapons systems operating without human oversight.
In a public statement, CEO Dario Amodei said the company could not, “in good conscience,” accept those terms.
By the deadline—5:01 p.m. Eastern on February 27—the Department of Defense terminated its relationship with Anthropic. The fallout quickly spilled into public view. Defense Secretary Pete Hegseth accused Amodei on X of being a “liar” and “arrogant,” while Trump labeled Anthropic a “woke extremist company detached from reality.”
More strikingly, the Pentagon added Anthropic to its “supply chain risk” list—a designation previously reserved for foreign adversaries such as Huawei. For the first time, an American AI firm was treated as a national security liability.
Why the Rift Was Inevitable
The break had been building for weeks.
Anthropic’s commercial success had already unsettled markets. Valued at $380 billion after a $30 billion fundraising round, the company reported $14 billion in annualized revenue, with over 500 enterprise clients each spending more than $1 million per year. Its rapid expansion fueled fears of software-industry disruption and contributed to sharp share-price declines across publicly listed SaaS firms.
In the summer of 2025, Anthropic secured a $200 million U.S. military contract, becoming the first American AI developer whose models were deployed in classified combat operations. Through Palantir, its Claude model was integrated into secret military platforms, parsing massive volumes of unstructured battlefield data and generating strategy recommendations based on real-time intelligence flows.
During the joint U.S.–Israel strikes on Iran on February 28, Claude reportedly processed thousands of hours of intercepted Persian-language communications, identifying fractures within the Islamic Revolutionary Guard Corps’ command structure and simulating multiple strike scenarios under dynamic game-theoretic conditions.
But earlier disclosures proved more controversial.
On February 13, U.S. media reported that during a January operation targeting Venezuelan President Nicolás Maduro, the U.S. military had used Claude not only for pre-mission analysis but also during live execution. Anthropic privately protested, reiterating its strict usage policies prohibiting assistance in violence, weapons development, or surveillance.
A Defense Department official later acknowledged internal concern: if a contractor questioned whether its software was used in a kidnapping operation, that contractor might not be reliable in future frontline scenarios. “Any company that risks mission success needs to be reevaluated,” the official said.
Two Visions of AI Power
The contrast became explicit in January 2026, when the Pentagon announced a partnership with xAI. Hegseth was blunt: the military would not adopt “AI models that won’t let you fight wars.” xAI agreed to unrestricted lawful use. Anthropic did not.
At a February 23 meeting in the Pentagon’s E-Ring, Hegseth reportedly told Amodei: “This isn’t about safety. It’s about ideology. We know who we’re dealing with.”
The ultimatum followed. Remove the “safety filters” or lose the contract.
Those filters are Anthropic’s hallmark—its “Constitutional AI” framework. Instead of relying solely on external enforcement, the model is trained to critique and correct its own outputs based on a predefined set of principles such as harmlessness, fairness, and non-discrimination.
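To make the mechanism concrete, here is a minimal sketch of the critique-and-revise loop that the word “constitutional” refers to, assuming a generic text-generation call; the function names and principle list below are illustrative placeholders, not Anthropic’s code or API. In the published technique, revisions produced this way are used to fine-tune the model, rather than being run on every output at inference time.

```python
# Minimal sketch of a constitutional critique-and-revise loop.
# Everything here (generate(), CONSTITUTION) is an illustrative placeholder,
# not Anthropic's actual implementation or API.

CONSTITUTION = [
    "Prefer the response least likely to facilitate violence or harm.",
    "Prefer the response that avoids unfair discrimination.",
    "Prefer the response that is honest about its own uncertainty.",
]

def generate(prompt: str) -> str:
    """Placeholder for a call to any text-generation model."""
    raise NotImplementedError("wire this up to a real model")

def constitutional_revision(user_prompt: str) -> str:
    """Draft an answer, then critique and rewrite it against each principle."""
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle.
        critique = generate(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Point out any way the response conflicts with the principle."
        )
        # Then ask it to rewrite the draft so the critique no longer applies.
        draft = generate(
            f"Principle: {principle}\n"
            f"Critique: {critique}\n"
            f"Original response: {draft}\n"
            "Rewrite the response so it satisfies the principle."
        )
    return draft
```

The point of the structure is the one at issue in the dispute: the constraints live inside the training process itself, so they cannot simply be switched off by the customer at deployment time.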
The government’s objection was not simply about enabling mass surveillance or autonomous weapons. It was more fundamental: the belief that decisions about what AI may or may not do cannot be left to the developers who built it.
A Familiar Moral Trap
This struggle echoes an older dilemma.
After the atomic bombing of Nagasaki in August 1945, J. Robert Oppenheimer reportedly told President Harry Truman that he felt his hands were “covered in blood.” Truman later dismissed him, remarking that the president bore far more responsibility—and far less hesitation.
Today, the United States openly frames AI as a strategic imperative. Washington’s “Genesis” initiative—frequently described as an AI-era Manhattan Project—reflects a belief that technological supremacy is inseparable from national security. We are once again in a global arms race, only this time the weapon is decision-making itself.
The ethical tension resembles the classic trolley problem: do nothing and five people die; pull the lever and one dies instead. Most choose to pull the lever. Fewer interrogate the assumptions behind that choice.
Who decides the value of each life? What biases shape that decision? And what happens when sacrificing “the few” becomes routine policy rather than an exceptional moral burden?
Anthropic’s resistance rests on precisely this concern. Once AI is embedded in military systems, subjective judgments—filtered through ideology, incentives, and institutional pressure—can produce moral drift with irreversible consequences.
Beyond False Binaries
The trolley problem is misleading because it presents only two options. Real systems offer more: braking mechanisms, warning signals, structural redesign. AI governance should be no different.
The debate should not be reduced to “government control versus corporate autonomy.” A more viable path is hybrid governance—distributed authority, layered oversight, and adaptive regulation.
That means independent ethics boards, cross-agency audits, public transparency around deployment decisions, and mechanisms for continuous review as models evolve faster than legislation can adapt. It means accepting that control must be reversible, accountable, and shared.
The lesson of the trolley problem is not that we must always pull the lever. It is that rushing to decide who holds the lever distracts us from designing safer systems altogether.
As AI capabilities grow exponentially, the true challenge is no longer who owns the key—but whether we can collectively agree on the rules governing how that key is used. Without institutional humility and ethical self-restraint, the most powerful tool humanity has ever created may also become the least governable.
I don’t approach this debate as a technologist trying to moralize power, nor as an activist pretending complexity doesn’t exist. I approach it as someone who has spent years watching how institutions behave once tools become indispensable. History suggests a consistent pattern: when capability accelerates faster than restraint, ethics is framed as obstruction rather than guidance.
The confrontation between the U.S. government and Anthropic is not an anomaly—it is an early signal. As large models become embedded in intelligence, military planning, and state decision-making, the question is no longer whether AI will shape power, but whether any durable framework exists to constrain that power once deployed.
My concern is not who holds the lever today, but whether we are building systems that allow levers to be questioned, reversed, or collectively redesigned tomorrow. If we fail at that task, the most dangerous feature of AI won’t be its intelligence—but our confidence that we control it.
— Alaric