Organisations run on workarounds
Your processes work because your people compensate for them. AI can't compensate — so every dysfunction your team has been quietly routing around becomes a blocking error the moment you deploy it.

Organisations don't run on their documented processes. They run on the undocumented workarounds people have built around those processes over years. Every informal handoff, every "Sarah just knows how to handle that," every unwritten rule about which approval step gets skipped on Fridays. Humans route around dysfunction instinctively. AI can't. That single fact explains more about enterprise AI failure than any discussion of model capability.
The mirror effect
The 2025 DORA State of AI-assisted Software Development report, drawing on nearly 5,000 survey responses and over 100 hours of qualitative interviews, delivers one of the clearest findings in enterprise AI research: AI is an amplifier, not a transformer. In high-performing organisations, it boosts efficiency. In struggling ones, it magnifies weaknesses. DORA calls this the mirror effect. AI reflects an organisation's true capabilities back at it.
The individual-level numbers look promising. AI coding assistants increase task completion by 21% and the number of pull requests merged by 98%. But organisational-level delivery metrics stay flat. The gains simply don't compound when aggregated across teams and systems. Teams without a user-centric focus actually got worse with AI adoption.
This pattern repeats across every major study. BCG found that 88% of organisations now use AI regularly, but only 5% create substantial value at scale. McKinsey reports that only 39% see EBIT impact at the enterprise level. Five percent creating real value out of eighty-eight percent doing the thing. The gap between adoption and impact is not a technology problem.
Automating a mess gives you a faster mess
Forrester's research puts a number on what should be obvious: organisations that optimised their workflows before deploying AI were 43% more likely to capture year-one productivity gains than those that automated legacy systems directly. Without a strong process foundation, AI introduces more complexity, not less.
A manufacturing company's quality-control agent kept making wrong recommendations because product SKUs weren't consistently formatted across legacy systems. This never mattered before. Humans just knew what the variations meant. They'd spent years building mental maps of which SKU referred to which product, which legacy code meant what. The AI couldn't "just know." It needed clean, consistent data that had never existed, because until now nothing had required it to.
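To make the failure mode concrete, here is a minimal sketch of the SKU problem, with made-up identifiers and made-up normalisation rules. The point isn't the code; it's that someone has to discover and write down rules like these before any agent can use the data, and a human operator was carrying those rules in their head.

```python
import re

# Hypothetical example: the same physical product encoded three ways
# across three legacy systems. A human "just knows" these match; an AI
# pipeline sees three unrelated items unless the rule is written down.
raw_skus = ["AB-1042", "ab1042", "AB 1042/LEGACY"]

def normalise_sku(raw: str) -> str:
    """Collapse formatting noise into one canonical form.

    The rules here are illustrative only: uppercase everything, drop a
    legacy suffix, strip separators. Real systems need a rule per data
    source, and discovering those rules is the actual work.
    """
    sku = raw.upper()
    sku = re.sub(r"/LEGACY$", "", sku)   # drop the legacy-system marker
    sku = re.sub(r"[^A-Z0-9]", "", sku)  # strip dashes and spaces
    return sku

canonical = {normalise_sku(s) for s in raw_skus}
print(canonical)  # all three variants collapse to one canonical SKU
```

Until a function like this exists, and is validated against every source system, the "AI project" is really a data-archaeology project.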
This is the pattern everywhere you look. Maintenance records needed for predictive AI are buried in handwritten forms. Customer service knowledge is fragmented across wikis, Slack threads, and the memories of long-tenured staff. None of it lives in any system of record. Roughly 80% of an organisation's intellectual capital is undocumented tribal knowledge, and 90% of enterprise data remains unstructured and siloed. These aren't technology gaps. They're knowledge debt: the accumulated cost of everything your organisation knows but has never written down.
The Klarna lesson
Klarna's AI chatbot is the cautionary tale everyone should study. It handled 2.3 million customer conversations and replaced roughly 700 agents, and was initially celebrated as a breakthrough in AI-driven efficiency. Then customer satisfaction tanked. Klarna had to publicly acknowledge they'd prioritised efficiency over service quality.
Those 700 agents compensated for gaps in Klarna's systems every day, applying judgement to edge cases and drawing on institutional knowledge to handle situations the documentation never covered. The AI could match their throughput. It couldn't match their workarounds.
Forrester predicts that one-third of companies will damage customer trust by rolling out AI self-service tools that backfire in exactly this way. Only 15% of AI decision-makers reported an EBITDA lift in the past 12 months. Enterprises are expected to defer 25% of planned AI spend to 2027 as the gap between expectation and reality becomes impossible to ignore.
The invisible work
Organisations function because people compensate for dysfunction every day. This is invisible work, mostly unrecognised, certainly undocumented. AI makes it visible by failing where humans silently succeeded.
Business logic (the real kind, not what's in your documentation) lives in Jira tickets, PowerPoints, email threads, and in the heads of experienced employees. How is churn calculated? Depends who you ask. How do sales territories actually work? Three different answers on the same floor. Why does one product line get different treatment from procurement? Because of something that happened in 2019 that only one person remembers.
Every one of these informal knowledge structures becomes a failure point the moment you try to automate the process it supports. The AI doesn't know about the 2019 incident. It doesn't know that the churn calculation changed last quarter but nobody updated the wiki. It doesn't know that the documented approval process has a shortcut everyone uses. It executes what's specified, and what's specified is almost never what actually happens.
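A toy illustration of that gap, using invented numbers and an invented churn rule: the wiki documents one definition, finance quietly switched to another last quarter, and an agent fed the wiki faithfully executes the stale rule.

```python
# Hypothetical data: same quarter, same customer base, two "correct" answers.
customers_start = 1000
customers_lost = 80                 # all departures, per the wiki
customers_lost_excl_downgrades = 50 # finance now excludes plan downgrades

def churn_documented(start: int, lost: int) -> float:
    """Churn as the wiki still describes it: all lost customers / base."""
    return lost / start

def churn_actual(start: int, lost_excl: int) -> float:
    """Churn as finance actually reports it since the unannounced change."""
    return lost_excl / start

print(churn_documented(customers_start, customers_lost))              # 0.08
print(churn_actual(customers_start, customers_lost_excl_downgrades))  # 0.05
```

Both functions are internally consistent. Only one matches what the organisation actually does, and nothing in any system of record says which.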
Culture, not compute
DORA identified seven capabilities that determine whether AI benefits scale beyond individuals, including healthy data ecosystems, AI-accessible internal data, and quality internal platforms. Ninety percent of organisations now have internal platforms, but those with low-quality platforms see negligible AI impact. Having a platform isn't the point. Having a good one is.
BCG and MIT Sloan's joint research located the primary barrier not in technology but in organisational culture. Organisations investing at least 10% of their AI budget in change management and training were 1.5x more likely to succeed. Ten percent. Most organisations spend nothing on this. They treat process clarity and data hygiene as checkbox prerequisites, things to get "done" quickly before moving to the exciting model work. This is backwards. These are the hard part and the valuable part.
A study of 20 companies using AI agents found that the most successful ones spent the longest time getting ready before deploying anything. Speed of deployment was inversely correlated with success. Counterintuitive only if you think AI deployment is a technology project. Obvious if you recognise it as an organisational one.
VentureBeat found that 85% of enterprises want to become agentic within three years, yet 76% admit their operations can't support it. The top blockers: siloed teams at 54% and lack of cross-department coordination at 44%. Not model limitations. Not compute costs. Organisational dysfunction.
The most thorough audit you'll ever run
There's a concept in economics called revealed preferences: what people actually do, as opposed to what they say they value. AI adoption works the same way at the organisational level. It reveals how the company actually operates versus how it thinks it operates. Most organisations don't survive this revelation gracefully.
The way I see it, AI adoption is the most thorough organisational audit you'll ever run, whether you intend it or not. Every deployment surfaces the gap between how processes are documented and how they actually work. Every failed pilot is diagnostic information about undocumented dependencies, inconsistent data, informal knowledge networks, and process decay.
This maps to something any product engineer recognises: technical debt. We've always known that messy codebases slow you down, that undocumented systems are fragile, that tribal knowledge is a liability. AI makes the cost immediate and visible instead of slow and hidden. The compiler used to tolerate your shortcuts. The AI won't.
The companies that succeed with AI won't be the ones with the best models or the biggest budgets. They'll be the ones willing to do the unglamorous work of documenting what actually happens, cleaning the data that actually matters, and closing the distance between their org chart and their real information flows. AI has zero tolerance for the ambiguity that humans navigate effortlessly. That's not a flaw in the technology. It's information about your organisation you've been able to ignore until now.