Writing on product development, company building, and the AI industry.

All of my long-form thoughts on AI, programming, product development, and more, collected in chronological order.

Organisations run on workarounds

Your processes work because your people compensate for them. AI can't compensate — so every dysfunction your team has been quietly routing around becomes a blocking error the moment you deploy it.

Your AI project failed. Good.

Most AI projects fail — and the failure itself is the most valuable thing they produce. When AI breaks down in your organisation, it's pointing directly at the structural problems you need to fix.

Faster busywork is still busywork

BCG found that only 5% of companies generate substantial value from AI, despite widespread adoption. The difference isn't which tools they bought — it's whether they redesigned how work gets done or just made broken processes run faster.

The execution surplus

AI has made execution nearly free. Most companies respond by producing more mediocre output, faster. The winners invest the surplus in judgment — knowing what to build, what to ship, and what to kill.

The verification tax

AI makes generating output almost free. But every AI output still needs checking — and checking doesn't scale with compute. The verification tax is the hidden cost most businesses ignore when deploying AI.

Your AI can't use what was never written down

80% of what runs your business has never been documented. AI forces that knowledge debt to come due — and the 95% pilot failure rate is the invoice.

The compound learning gap: Why your AI features are already commoditised

AI has compressed feature-building from months to days, making every AI feature you ship replicable in weeks. The companies winning with AI aren't shipping better features — they're building learning loops that compound with every user interaction.

The specification gap: Why we can't tell AI agents what we actually want

The hardest problem in agentic AI isn't building capable agents — it's describing what we want them to do. Polanyi's Paradox, Goodhart's Law, and the limits of language converge to create a specification gap that no amount of engineering can close.

The decay paradox: Why AI agents get worse as we trust them more

Agentic AI systems degrade through context rot, compounding errors, and model drift — but human oversight erodes in lockstep. The widening gap between actual reliability and perceived reliability is the defining engineering challenge of autonomous systems.

Stay up to date

Get notified when I publish something new, and unsubscribe at any time.