The AI arms dealer's dilemma

Palo Alto Networks warns AI-driven cyberattacks will become the 'new norm' within months.

Seventy-five vulnerabilities. That's what Palo Alto Networks found in its own products after pointing frontier AI models at its codebase. The models went beyond individual bugs, chaining multiple flaws into working exploit paths, the kind of creative lateral thinking that used to require a skilled red team and weeks of effort.

Here's the part that should worry you: Palo Alto estimates organisations have three to five months before attackers broadly gain access to the same frontier AI cyber capabilities. That's not a forecast. It's a countdown.

And while the clock runs, the defenders are fighting over who gets to use the tools.

The access problem

The ECB warned eurozone banks this week to urgently prepare for AI-assisted cyberattacks, specifically naming Anthropic's Mythos as the kind of model that puts these capabilities within reach. But there's an absurdity buried in the warning: European banks largely can't access Mythos themselves. The very tool the ECB says they need to defend against is the tool they can't use for defence.

This is the arms dealer's dilemma in its purest form. Anthropic restricts access to Mythos for responsible deployment reasons, but restriction only constrains the defenders. Attackers don't submit compliance paperwork. They don't wait for approved access tiers. The asymmetry isn't hypothetical; it's structural.

Mistral sees the gap. Bloomberg reports the French AI company is developing a cybersecurity model for European banks that lack Mythos access. The pitch writes itself: if the American frontier labs won't give you access, we will.

But building a competitive cybersecurity model isn't trivial. The reason Palo Alto's results were so striking is precisely because frontier models — with the broadest training and strongest reasoning — are the ones that find exploit chains. A purpose-built model from Mistral might close part of the gap, but "part" is doing a lot of work in that sentence.

Echoes of the crypto wars

Anyone who remembers the 1990s encryption export controls will recognise this pattern. The US classified strong cryptography as a munition, restricting its export even to allies. The result: law-abiding companies shipped weakened encryption while adversaries built their own. The controls didn't contain the capability. They just determined who was exposed while it proliferated.

The same logic applies here. Restricting defensive access to frontier cyber AI doesn't slow the attackers. It slows the defenders who play by the rules. And it creates a market opportunity for whoever can fill the gap, which is exactly what Mistral is doing.

The way I see it, this is the question the AI industry hasn't answered: if your model can find seventy-five vulnerabilities in a mature security company's products, who should get to use it? Restricting access feels responsible right now. But responsibility looks different when the alternative is leaving an entire continent's banking system undefended through a three-to-five-month window, after which those same capabilities reach every attacker anyway.

The companies building frontier AI are becoming de facto arms dealers whether they intended to or not. The question isn't whether to sell. It's whether the export controls they're choosing will protect anyone, or just determine who's unarmed when the shooting starts.


Read the original on CNBC

