Pentagon Escalates Dispute with Anthropic, Threatens Defense Production Act
Defense Secretary Pete Hegseth gives Anthropic until Friday to provide military access to Claude or face being declared a supply chain risk, with compliance potentially compelled under the Defense Production Act.
TechCrunch

The Pentagon's ultimatum to Anthropic isn't just another tech policy dust-up—it's the moment when AI ethics collides with national security theatre, and the implications stretch far beyond one company's refusal to weaponise its models.
According to TechCrunch's reporting, Defense Secretary Pete Hegseth has given Anthropic until Friday to provide military access to Claude or face being declared a supply chain risk under the Defense Production Act. Anthropic's position remains firm: no military use for surveillance and autonomous weapons systems.
This isn't about one company's principles—it's about whether the US government can conscript private AI capabilities when it deems them strategically necessary. The Defense Production Act, originally designed for wartime manufacturing, is being weaponised as a compliance tool for digital assets. That's a profound expansion of federal power that should concern anyone building AI systems.
The precedent problem
If the Pentagon succeeds in forcing Anthropic's compliance, every AI company becomes a potential defence contractor by government decree. OpenAI already has military partnerships. Google's relationship with the Pentagon has been rocky but persistent. Meta's Llama models are open-source anyway. But Anthropic's resistance creates a test case: can you build advanced AI in America whilst maintaining ethical boundaries the military doesn't like?
The answer matters because it determines whether AI companies can credibly promise customers, employees, and international partners that their technology won't be repurposed for surveillance or autonomous weapons. If that promise becomes legally meaningless, it changes the competitive landscape entirely.
Consider the international ramifications. European customers already scrutinise American AI companies for potential government backdoors. China points to cases like this to justify its own tech nationalism. If the US government can simply commandeer AI capabilities through executive action, it validates every authoritarian regime's argument for domestic AI development over Western alternatives.
The economics are equally stark. Anthropic has raised billions based partly on its constitutional AI approach and safety-first positioning. Investors backed a company that explicitly rejected certain military applications. Forcing that company to violate its core principles either destroys its market positioning or establishes a legal precedent under which no AI startup can make credible ethical commitments.
There's also the technical reality that compliance under duress rarely produces effective results. Military AI systems require deep integration, extensive testing, and genuine partnership between technologists and defence personnel. Forcing access through legal threats gets you the bare minimum—not the strategic advantage the Pentagon presumably wants.
The Friday deadline forces a choice that reverberates across the entire AI ecosystem: whether principled positions on AI safety and ethics can survive contact with national security imperatives, or whether building advanced AI inevitably means becoming part of the military-industrial complex regardless of your intentions.
Read the original on TechCrunch