The compound learning gap: Why your AI features are already commoditised
AI has compressed feature-building from months to days, making every AI feature you ship replicable in weeks. The companies winning with AI aren't shipping better features — they're building learning loops that compound with every user interaction.

The same technology that makes it trivial to build AI features makes those features trivial to copy. A capability that took a five-person team three months in 2023 now takes one engineer a few days. UI polish that needed a dedicated designer gets generated at 80% quality instantly. Backend infrastructure that required a senior engineer scaffolds in hours. So what, exactly, are you shipping that can't be replicated next quarter?
This is the compound learning gap: the distance between companies that own a learning loop and companies that ship features. Features are commodities now. The moat is the data asset underneath.
The thin wrapper is dead
VCs have already priced this in. Thin wrapper AI products (lightweight coordination tools, generic horizontal software, simple chat interfaces over foundation models) are now essentially unfundable. If your product is a UI layer over someone else's model, you are renting your competitive position. And the landlord keeps improving the base model, which keeps narrowing whatever value you add.
Capital is moving toward businesses that own workflows and proprietary learning loops, away from products that can be replicated overnight. The market is telling you something: the feature is not the product.
The buried finding in the failure stats
The headline statistic (95% of generative AI pilots fail to reach production, per MIT research) obscures a more instructive pattern. S&P Global's 2025 survey of over 1,000 enterprises found that 42% of companies abandoned most of their AI initiatives, up from 17% in 2024. The primary reasons: data quality issues (38%), unviable business cases (29%), and loss of executive sponsorship (21%). The average enterprise lost $7.2 million per failed initiative and abandoned 2.3 initiatives in 2025 alone.
But here's the finding that deserves more attention. Buying AI from specialised vendors succeeds about 67% of the time. Building internally succeeds only a third as often. Companies are building commodity AI in-house (chatbots, document summarisers, basic classification) when they should be buying those commodities and investing their engineering effort in the proprietary learning infrastructure that actually differentiates them.
The build-vs-buy decision is backwards for most organisations.
Compound learning loops
The companies with durable AI advantages share a structural trait: a compound learning loop where each user interaction generates proprietary training data that improves the product, which attracts more users, which generates more data. A flywheel, not a feature.
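The shape of that flywheel can be sketched as a toy simulation. Every parameter below (data yield per user, quality gain, growth rate) is a hypothetical chosen only to show the compounding dynamic, not to model any real product:

```python
# Toy model of a compound learning loop: users generate data,
# data improves product quality, quality attracts more users.
# All parameters are illustrative assumptions.

def simulate_flywheel(months: int, data_yield: float = 1.0,
                      quality_gain: float = 0.001,
                      growth_rate: float = 0.05):
    """Return (users, data, quality) after running the loop monthly."""
    users, data, quality = 1_000.0, 0.0, 1.0
    for _ in range(months):
        data += users * data_yield                  # interactions add proprietary data
        quality = 1.0 + quality_gain * data ** 0.5  # diminishing returns on raw data
        users *= 1.0 + growth_rate * quality        # better product attracts users
    return users, data, quality
```

The point of the sketch is the feedback edge: `quality` feeds back into `users`, so the growth rate itself rises over time. A feature with no loop is the same code with that last line replaced by flat growth.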
Stripe Radar trains on hundreds of billions of data points drawn from the Stripe network. A 10x increase in training transaction data produced significant model improvements, and their models retrain daily, evaluating each user's unique transaction profile. Every merchant on Stripe makes fraud detection better for every other merchant. That's a network effect a competitor cannot replicate without the same transaction volume. You could build a fraud scoring feature in a month. You cannot build Stripe's data asset in a decade.
Spotify processes nearly half a trillion data events daily from 678 million users, feeding those events into the personalisation systems behind features like Discover Weekly. Discover Weekly drives retention. Retention generates more behavioural signals. More signals improve personalisation. The loop compounds.
Duolingo's Birdbrain system analyses click-through patterns, time-on-task metrics, and error rates to tailor lesson difficulty and sequencing per learner. Better retention produces cleaner behavioural signals, which improves personalisation, which improves outcomes, which improves retention. Every day, Duolingo collects more learning interaction data than most competitors collect in months. That's not a feature lead. It's a structural advantage.
Shopify's AI tools are powered by what analysts describe as the industry's largest commerce data flywheel: a vast, proprietary, and highly structured commerce-specific dataset that trains progressively smarter models. The feature (product suggestions) is not the moat. The moat is the billions of interactions, feedback signals, and edge cases underneath.
The pattern across all four is consistent. The AI feature itself (recommendations, fraud scoring, lesson sequencing, product suggestions) is replicable. The compound data asset is not.
Features are phenotype, data loops are genotype
Maybe the right framing is biological. Features are phenotype: the observable traits a competitor can study and copy. Learning loops are genotype: the generative process that produces those traits. You can observe and replicate phenotype. You cannot copy the generative process.
Bain's 2025 Technology Report draws a similar line. Workflows where incumbents hold exclusive data and rules are classified as growth opportunities. Workflows without proprietary data are classified as battlegrounds vulnerable to disruption. Either you own the generative process or you're competing on surface traits anyone can match.
I think this is actually about time. Features exist in the present, as snapshots. Learning loops exist across time. They compound. A feature is a photograph. A learning loop is a trajectory. The photograph can be copied. The trajectory cannot, because the trajectory includes everything that happened before this moment.
The first question
The strategic implication is that the first question in any AI initiative should not be "what feature do we ship?" It should be "what learning loop do we create?" The answer determines whether you build compounding advantage or a commodity someone replicates next quarter.
Most companies get this backwards. They start with the feature because that's what they know how to do. The product roadmap says "add AI to X." So they add a chatbot, a summariser, a recommendation widget. They ship it, celebrate the launch, and a competitor ships the identical thing two months later. No compounding. No moat. No learning.
The economics of delegation apply here. Buy the commodity AI capabilities: the chatbots, the document processing, the generic classification. These are table stakes, not differentiators, and specialised vendors will outperform your internal build two-thirds of the time. Instead, invest engineering effort in the proprietary data infrastructure that feeds a learning loop unique to your business: customer interaction patterns, domain-specific edge cases, feedback signals, and workflow data that no foundation model provider ships as a feature.
The gap compounds
The urgency isn't that AI moves fast. Everyone knows that. The urgency is that the advantage of starting your learning loop one month earlier compounds indefinitely, and the cost of starting one month later also compounds indefinitely. Every month of proprietary data collection, feedback integration, and model improvement widens the gap between you and anyone who starts after you.
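That head-start claim is simple compound arithmetic. In the sketch below, two identical loops differ only in start date; the 10% monthly growth in collection rate is an illustrative assumption, not an empirical figure:

```python
# Two identical learning loops, one started a month later.
# With any growth rate above zero, the absolute gap in accumulated
# data widens every month, even though both grow at the same rate.
# The 10% monthly rate growth is an assumption for illustration.

def accumulated_data(months: int, monthly_growth: float = 0.10,
                     initial_rate: float = 1_000.0) -> float:
    """Total data points collected after `months`, where the monthly
    collection rate itself compounds at `monthly_growth`."""
    total, rate = 0.0, initial_rate
    for _ in range(months):
        total += rate
        rate *= 1.0 + monthly_growth
    return total

early = accumulated_data(13)   # started one month earlier
late = accumulated_data(12)
gap_now = early - late
gap_later = accumulated_data(25) - accumulated_data(24)
# gap_later exceeds gap_now: the same one-month offset, measured a
# year later, is a larger absolute deficit
```

The one-month offset never closes; it grows, because the late starter is always one compounding period behind on a curve that steepens.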
Most quarterly planning optimises for snapshots: ship this feature, hit this metric. The compound learning gap is a gap in temporal thinking. Can leadership think in trajectories rather than photographs?
Three open questions I don't have clean answers for. How do you measure a learning loop early, before compound effects become visible? Proxy metrics like data accumulation rate, signal quality per interaction, and prediction improvement curves help, but the compounding is precisely what makes early measurement difficult. How do you sell a learning loop to a board that wants feature launches? Perhaps frame it as infrastructure investment with compound returns, the data equivalent of capex. And what happens when foundation models improve enough that the commodity layer keeps expanding upward? Does the learning loop moat narrow, or does it become the only remaining differentiator?
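On the first question, the proxy metrics named above could be tracked from something as simple as weekly snapshots. The snapshot format, function name, and thresholds here are all hypothetical, a sketch of the idea rather than a standard:

```python
# Hypothetical early proxy metrics for a learning loop, computed from
# weekly snapshots of (new_data_points, model_error). The log format
# and names are assumptions for illustration.

def loop_health(snapshots: list[tuple[int, float]]) -> dict:
    """snapshots: one (new_data_points, model_error) pair per week."""
    data = [s[0] for s in snapshots]
    errors = [s[1] for s in snapshots]
    # Data accumulation rate: is weekly data intake itself growing?
    intake_growth = data[-1] / data[0]
    # Prediction improvement curve: average weekly error reduction.
    weekly_error_drop = (errors[0] - errors[-1]) / (len(errors) - 1)
    return {"intake_growth": intake_growth,
            "weekly_error_drop": weekly_error_drop}

weeks = [(1000, 0.30), (1150, 0.28), (1330, 0.27), (1540, 0.25)]
health = loop_health(weeks)
# intake_growth above 1 means data collection is itself compounding;
# a positive weekly_error_drop means the model is still improving
```

Neither number proves a moat on its own, but a flat intake curve or a stalled error curve is an early sign that the loop is not actually compounding.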
The companies that treat AI as a feature to ship will join the 42% that abandon most initiatives. The companies that treat AI as a learning system to feed will build the kind of compounding advantage that gets harder to replicate with every passing month.