
Organisations run on workarounds
Your processes work because your people compensate for them. AI can't compensate — so every dysfunction your team has been quietly routing around becomes a blocking error the moment you deploy it.
All of my long-form thoughts on AI, programming, product development, and more, collected in chronological order.

Most AI projects fail — and the failure itself is the most valuable thing they produce. When AI breaks down in your organisation, it's pointing directly at the structural problems you need to fix.

BCG found that only 5% of companies generate substantial value from AI, despite widespread adoption. The difference isn't which tools they bought — it's whether they redesigned how work gets done or just made broken processes run faster.

AI has made execution nearly free. Most companies respond by producing more mediocre output, faster. The winners invest the surplus in judgment — knowing what to build, what to ship, and what to kill.

AI makes generating output almost free. But every AI output still needs checking — and checking doesn't scale with compute. The verification tax is the hidden cost most businesses ignore when deploying AI.

Eighty percent of what runs your business has never been documented. AI forces that knowledge debt to come due — and the 95% pilot failure rate is the invoice.

80,508 people told Anthropic they want productivity from AI. When pressed on why, they described wanting their lives back. Most AI products are built for the stated need. The winners are built for the real one.

AI has compressed feature-building from months to days, making every AI feature you ship replicable in weeks. The companies winning with AI aren't shipping better features — they're building learning loops that compound with every user interaction.

The hardest problem in agentic AI is not building capable agents — it is describing what we want them to do. Polanyi's Paradox, Goodhart's Law, and the limits of language converge to create a specification gap that no amount of engineering can close.

Agentic AI systems degrade through context rot, compounding errors, and model drift — but human oversight erodes in lockstep. The widening gap between actual reliability and perceived reliability is the defining engineering challenge of autonomous systems.