By Boluwatife Oshadiya | February 27, 2026
Key Points
- President Donald J. Trump orders all U.S. federal agencies to immediately cease use of Anthropic’s AI technology.
- The decision follows an escalating dispute between Anthropic and the Pentagon over restrictions on military use.
- A six-month phase-out period was granted, with threats of civil and criminal penalties for non-compliance.
- Move intensifies U.S. government scrutiny of private AI firms supplying national security infrastructure.
- Tech leaders warn of broader implications for federal AI procurement and innovation partnerships.
President Donald J. Trump on Thursday directed every federal agency in the United States government to immediately cease the use of artificial intelligence technology developed by Anthropic, escalating an unprecedented standoff between the White House and one of Silicon Valley’s most influential AI firms.
In a sharply worded post on Truth Social at 3:47 p.m. EST, Trump accused the company of attempting to “strong-arm the Department of War” into complying with its internal terms of service rather than military directives issued by the Commander-in-Chief.
“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars,” Trump wrote. “We don’t need it, we don’t want it, and will not do business with them again.”
The order mandates an immediate halt to new usage of Anthropic systems across federal agencies, with a six-month transition period for departments — including the Pentagon — currently integrating the company’s AI tools into defence workflows.
The confrontation stems from a policy dispute between Anthropic and the U.S. Department of Defense, reportedly centring on usage restrictions built into Anthropic’s large language model, Claude, that limit certain military applications.
Defence officials had explored AI integration for logistics modelling, threat analysis, and strategic simulations. However, reports from The Associated Press and other outlets indicate that Anthropic maintained contractual guardrails restricting certain autonomous military uses, citing ethical deployment standards.
The conflict intensified when Secretary of War Pete Hegseth publicly criticised the company’s stance, framing it as interference in national security operations.
Meanwhile, Sam Altman, chief executive of OpenAI, called for de-escalation, warning that politicising AI procurement risks destabilising government–industry partnerships essential to U.S. technological leadership.
The White House directive now forces agencies to seek alternative AI providers or revert to internally developed systems during the transition window.
The Issues
1. Command Authority vs. Corporate Ethics
At the centre of the dispute lies a fundamental governance question: Can a private AI company impose ethical restrictions on how the U.S. military deploys its tools?
Anthropic, founded in 2021 by former OpenAI researchers, built its brand around “constitutional AI” — models guided by internal ethical constraints. Its governance structure includes long-term safety commitments designed to limit misuse.
According to defence policy commentary from the Council on Foreign Relations, ultimate authority over military doctrine rests with elected leadership, not corporate boards. The Pentagon’s position appears to be that civilian contractors must comply with lawful directives embedded in defence policy and statute.
This clash exposes unresolved boundaries between corporate AI safety standards and sovereign command authority in wartime contexts.
2. Supply Chain Risk Precedent
Secretary Hegseth’s move to designate Anthropic a “supply chain risk” escalates the conflict beyond a procurement disagreement. Under 10 U.S.C. § 3252, such designations have historically been applied to foreign entities deemed threats to national security.
Applying the classification to a U.S.-headquartered frontier AI company would represent an unprecedented expansion of executive power over domestic technology firms.
Anthropic has signalled it would challenge any formal designation in court, potentially setting up a constitutional test case over executive authority and contractor rights.
3. Federal AI Procurement Risk
Since 2023, the U.S. government has accelerated AI integration across logistics, cybersecurity, predictive analytics, and battlefield simulation platforms, allocating billions in federal funding.
A forced vendor exit introduces operational disruption risk during transition. Classified network deployments require rigorous testing, retraining of personnel, and technical integration. Sudden vendor displacement can create short-term cybersecurity vulnerabilities and workflow delays.
The episode also signals to other AI companies that federal contracts may require broad alignment with executive priorities beyond internal safety frameworks.
4. Politicisation of AI Governance
Trump’s characterisation of Anthropic as a “Radical Left AI company” marks one of the most overt politicisations of frontier AI governance to date.
Analysts warn that partisan framing could fragment U.S. AI strategy, complicate alliances, and undermine coordinated responses to state-backed AI expansion efforts, particularly from China.
At stake is not just a contract dispute but the strategic coherence of America’s public–private technology ecosystem.
What’s Being Said
“The Left-wing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” said Donald J. Trump, President of the United States.
In a formal statement released February 27, Anthropic responded directly to comments from Secretary of War Pete Hegseth:
“This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons,” the company said.
Anthropic added: “We do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons… Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights.”
The company described a potential supply chain risk designation as “legally unsound” and unprecedented for an American firm, noting that it has supported classified U.S. government networks since June 2024.
Secretary Pete Hegseth earlier indicated on X that the Department of War was taking steps to designate Anthropic a supply chain risk, arguing that national security cannot be subordinated to corporate restrictions.
Meanwhile, Sam Altman, Chief Executive Officer of OpenAI, announced a separate agreement with the Department of War:
“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network… Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force… The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
Altman added that OpenAI would deploy field deployment engineers and implement technical safeguards, while urging de-escalation away from legal confrontation.
Security analysts at the Council on Foreign Relations have echoed the administration’s core premise, noting that while ethical guardrails are critical, ultimate command authority over military doctrine rests with elected leadership.
Timeline: How the Dispute Escalated
- 2021 – Anthropic is founded with a safety-first “constitutional AI” framework
- 2023 – U.S. Department of Defense expands AI vendor partnerships for strategic modelling
- June 2024 – Anthropic deploys models within classified U.S. government networks
- Late 2025 – Reports emerge of tensions between Anthropic and Pentagon officials over usage boundaries
- Early February 2026 – Secretary Pete Hegseth publicly questions AI vendor-imposed restrictions
- February 27, 2026 – President Trump announces immediate federal ban and six-month phase-out; Anthropic signals legal challenge; OpenAI signs classified-network agreement
What’s Next
- Federal agencies begin audits of Anthropic-linked systems to comply with the six-month transition order
- The Department of War may formalise a supply chain risk designation under 10 U.S.C. § 3252, triggering potential litigation
- Congressional oversight committees are expected to examine AI procurement standards in upcoming defence hearings
- Rival vendors, including OpenAI and other frontier AI firms, are likely to bid for replacement classified contracts
- A broader executive framework governing AI deployment in national security contexts may be introduced before mid-year
The Bottom Line
This confrontation is no longer about one vendor contract. It is a structural power contest over who sets the rules for artificial intelligence in warfare — elected command authority or private-sector ethical design. By asserting federal supremacy over corporate AI safeguards, the White House has redrawn the boundary between Silicon Valley governance and national sovereignty. The legal and strategic consequences will reverberate far beyond one company.