Pentagon officially designates Anthropic a supply chain risk, CEO vows court challenge
The Pentagon blacklists Anthropic over refusal to remove AI guardrails on surveillance and autonomous weapons, affecting military contractors amid rapid consumer growth of Claude, Anthropic's AI chatbot.
- The Trump administration has formally labeled Anthropic a "supply chain risk." This unprecedented move effectively bans the company’s AI products from the Department of Defense and forces a six-month phase-out for all existing military and national security integrations.
- The friction stems from CEO Dario Amodei’s refusal to lift restrictions on how the military uses Claude. The Pentagon argues that vendor-imposed limits on "lawful use"—specifically regarding autonomous weapons and surveillance—interfere with the chain of command and endanger warfighters.
- Anthropic has announced it will challenge the designation in court, arguing the government’s action is not "legally sound." Amodei maintains that the company’s safeguards are intended to prevent misuse rather than interfere with operational military decision-making.
- Major defense contractors like Lockheed Martin are already distancing themselves from Anthropic in favor of other AI providers. While the administration has provided a six-month window to prevent a sudden loss of tools during active combat operations, the move signals a major shift in how the U.S. military approaches commercial AI providers.
281 Articles
Pentagon turns to ex-Uber executive in Anthropic feud over AI
By Rebecca Torrence | Bloomberg News Emil Michael made his name in Silicon Valley a decade ago as an aggressive dealmaker for a startup — Uber Technologies — as it wrangled with governments in pursuit of market domination. Now, Michael has switched sides in a battle involving a different startup — this time taking a leading role in the Pentagon’s dispute with artificial intelligence pioneer Anthropic.
Anthropic Deemed a 'National Security Threat' -- Is Palantir Technologies At Risk?
Quick Read Palantir (PLTR) must immediately stop using Anthropic’s Claude in Pentagon contracts and rebuild classified defense workflows; the War Department designated Anthropic a supply-chain risk, banning all defense contractors from using its AI services. The Pentagon acted after an Anthropic executive questioned Palantir about Claude’s use in the Venezuela raid to capture Maduro, exposing risks of AI vendor dependency in mission-critical o…
Pentagon Labels Anthropic As Security Risk, Trump Orders Ban
The clash between Anthropic and the U.S. military exploded from a contract disagreement into a major national-security fight, driven by leaked internal remarks, a public apology, a rare supply-chain designation, and awkward contradictions about whether the Pentagon can rely on privately governed AI. This piece lays out what happened, why the administration acted, the legal and political flashpoints, and the broader question about private compani…
Pentagon's chief tech officer says he clashed with AI company Anthropic
A top Pentagon official said Anthropic’s dispute with the government over the use of its artificial intelligence technology in fully autonomous weapons came after a debate over how AI could be used in President Donald Trump’s future Golden Dome missile defense program , which aims to put U.S. weapons in space. U.S. Defense Undersecretary Emil Michael, the Pentagon’s chief technology officer, said he came to view the AI company’s ethical restrict…
Pentagon designates Anthropic a supply chain risk | Honolulu Star-Advertiser
The Pentagon slapped a formal supply-chain risk designation on artificial intelligence lab Anthropic on Thursday, limiting use of a technology that a source said was being used for military operations in Iran.
Coverage Details
Bias Distribution
- 67% of the sources are Center