The Pentagon values auction: AI safety gets its market test
OpenAI amends its Pentagon deal after Altman admits it looked 'opportunistic and sloppy', while Claude surges to number one on the App Store and hundreds of employees publicly back Anthropic's stance.

Something shifted this week. The market started pricing in values.
When Axios reported that OpenAI amended its Pentagon deal after Sam Altman admitted it looked "opportunistic and sloppy," the obvious reading was damage control. But look at what forced the amendment: not regulation, not a lawsuit, not a Congressional inquiry. Consumer behaviour and employee pressure.
Claude overtook ChatGPT as the number one free app on Apple's U.S. App Store, with downloads surging precisely as the Pentagon controversy peaked. Anthropic's refusal to allow unrestricted military use of Claude didn't just earn them press coverage. It earned them users. Millions of them, choosing with their thumbs.
Meanwhile, nearly 500 employees at OpenAI and Google signed a letter backing Anthropic's position. That's not a petition from outsiders. That's people inside the companies most likely to benefit from Anthropic's government exile saying, publicly, that they think Anthropic got it right.
Values as competitive advantage
The conventional wisdom in enterprise software is that principles are a luxury. You take the government contract, accept the terms, cash the cheque. What this week demonstrated is that consumer AI operates by different rules. When your product is a direct relationship with millions of individual users, your brand is your distribution moat, and your brand is inseparable from your values.
Altman's "opportunistic and sloppy" admission is remarkable. He's conceding that the original Pentagon deal was a reputational miscalculation, which means he's conceding that reputation matters to OpenAI's business. The amendment is an attempt to thread the needle: keep the Pentagon revenue while signalling that OpenAI still has guardrails.
The dynamics here are specific to AI. In cloud computing or enterprise SaaS, buyers are procurement teams with spreadsheets. In consumer AI, buyers are individuals who read the news. When the press reports that your company bent to Pentagon pressure, your users have an opinion about that, and an App Store alternative one tap away.
Whether the threading works is an open question. The App Store numbers suggest consumers don't reward the compromise position. They reward the clear one. Anthropic said no to unrestricted military use. Users said yes to Anthropic. The causal chain isn't subtle.
For anyone building AI products, the implication is worth sitting with. We've operated for years on the assumption that ethics and growth are in tension. This week's data says otherwise, at least in consumer markets. The question is whether that signal is strong enough to change how the next company negotiates its next government contract.
Read the original on Axios