The levelling trap
AI narrows the performance gap between junior and senior workers. That sounds like progress — until you realise nobody is building the expertise that made senior workers valuable in the first place.

Entry-level tech roles in the UK fell 46% in 2024, with projections of a further 53% drop by 2026. The same year, every major consultancy published research showing AI makes junior workers dramatically more productive. These are not contradictory findings. They are the same mechanism, viewed from opposite ends. And the mechanism is quietly dismantling the system that produces the expertise everyone will need in five years.
The floor rises
Harvard Business School and BCG ran an experiment with 758 consultants. Half used GPT-4, half didn't. Consultants in the bottom half of the skill distribution saw a 43% performance improvement; those in the top half gained 17%. The lower the baseline, the bigger the boost.
The consultancies celebrated this as democratisation of capability. The floor rises. Everyone becomes more productive. AI as the great equaliser.
Look at the mechanism, though. Juniors weren't building skills faster. They were bypassing skill-building entirely, producing senior-quality output through AI amplification. The gap between what a junior can produce and what a junior actually understands widens with every prompt.
Stack Overflow's 2025 survey made the dynamics visible. Most junior developers don't use the platform anymore. They go straight to ChatGPT, get working code, and never engage with the reasoning behind the answer. Forty-five percent of developers say debugging AI-generated code is harder than debugging code they wrote themselves. Not because the code is more complex. Because they don't understand the code they're running.
One developer captured it: "I tried coding without Copilot last week and felt like a beginner again."
Production without comprehension, scaled across an entire generation of knowledge workers.
The pipeline collapses
The labour market is reinforcing the problem. Among the largest public tech firms, new hires with less than a year of experience after graduation dropped 50% between 2019 and 2024. India's IT services sector cut entry-level roles by 20–25%. IDC and Deel's 2025 survey found 66% of global enterprises plan to reduce entry-level hiring because of AI. Starting wages in AI-exposed companies fell 4.5% after ChatGPT's launch, led by a 6.3% drop specifically for junior positions, according to IESE Business School research.
Seventy percent of hiring managers believe AI can do the work of interns. Fifty-seven percent trust AI output more than intern output.
Organisations are making two contradictory moves at once: celebrating AI's ability to make juniors productive, and eliminating the junior roles where that capability is built.
The pipeline arithmetic is straightforward. The cohort trained with AI from 2024 to 2026 reaches mid-level by 2027–2029 and senior by 2029–2032. If that cohort has systematically weaker foundations (less debugging experience, less system-level thinking, less exposure to failure), the capability gap compounds across an entire industry. You can't poach senior talent from competitors when every company has the same pipeline problem. This is not a single-firm risk. It is industry-wide capability erosion on a five-to-seven-year delay.
At 38,000 feet
Aviation mapped this trajectory decades before anyone was talking about large language models.
Autopilot made routine flying dramatically safer. Pilots who spent careers monitoring automation lost the manual skills they rarely practised. The Flight Safety Foundation documented it precisely: frequent reliance on automated systems reduces competence in manual control, instrument scanning, and situational awareness.
On 1 June 2009, Air France Flight 447's pitot tubes iced over mid-Atlantic and the autopilot disconnected. The pilots needed to execute a basic stall recovery. Push the nose down. Every student pilot learns this in their first weeks of training. The co-pilot pulled back on the stick instead. The aircraft fell for three and a half minutes into the ocean. Two hundred and twenty-eight people died because experienced pilots could not perform a task that beginners are taught on day one.
The FAA responded by mandating Upset Prevention and Recovery Training. By 2016, a Transportation Department report found airlines still hadn't adequately addressed the problem. Many operators restrict manual flying in standard procedures: autopilot engaged at 400 feet on departure, disengaged at 200 feet on approach. The operational logic is sound for routine conditions. It systematically prevents the practice needed for non-routine ones.
The better the autopilot works on Tuesday, the less prepared the pilot is for the engine failure on Wednesday.
Knowledge work is following the same curve. AI handles the routine flawlessly. The situations requiring deep human judgement become rarer. The skills for those situations atrophy faster precisely because they're needed less often.
Older than software
The pattern predates computers entirely. NBER research on 19th-century American manufacturing documents how mechanisation deskilled craft workers: not by replacing them directly, but through the division of labour that machines enabled. Complex skilled work was broken into simple, unskilled tasks. The craftsman who understood the whole process was replaced by operatives who understood one step. Output improved. Process knowledge vanished. When something went wrong that required understanding the whole, there was nobody left who could see it.
CEPR research traces the same deskilling through the 20th century; the broad split between white-collar and production workers concealed substantial capability erosion within both categories. The deskilling mechanism is not new. The target is. Knowledge work was supposed to be immune.
MIT Sloan has flagged a specific inversion: junior professionals teaching AI tools to senior colleagues. It reverses the mentorship relationship in ways that undermine the development of judgement. The junior becomes the person who knows which buttons to press. The senior remains the person who knows which problems to solve. But the junior never transitions from one to the other, because the path between them has been paved over with automation.
The commons
Some teams are experimenting with deliberate fixes: practice environments where people build judgement under supervision rather than output with AI assistance; mob-programming sessions where juniors watch mentors reject wrong AI suggestions before accepting correct ones, learning discernment rather than syntax. These approaches require organisations to trade short-term productivity for learning, which cuts directly against the efficiency gains that justified AI adoption in the first place.
The levelling trap is not a technology problem. It is an incentive problem. Each firm faces the same rational calculation: cut junior headcount, save costs, let AI fill the gap. The collective outcome (no senior pipeline in five to seven years) is nobody's problem until it's everybody's crisis.
A tragedy of the commons, except the commons is institutional expertise. Every company grazes the shared pasture of the talent pipeline. No individual company bears the cost of its depletion. The firm that invests in developing juniors pays real costs today for benefits that materialise in five years, by which point those juniors may have left for a competitor that didn't bother. The firm that cuts junior roles captures savings immediately and externalises the cost to a future that belongs to everyone and therefore no one.
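The structure is a textbook collective-action game, and a few lines of code make the trap concrete. Here is a minimal sketch in Python, with entirely invented payoff numbers; the training cost, senior value, and poaching risk below are illustrative assumptions, not figures from the research cited above.

```python
# A minimal sketch of the junior-hiring dilemma as a collective-action game.
# All numbers are invented for illustration; none come from the studies
# cited in the text.

TRAIN_COST = 5      # hypothetical cost today of developing a junior
SENIOR_VALUE = 8    # hypothetical value of a senior, five years out
POACH_RATE = 0.5    # hypothetical chance a trained junior leaves first

def firm_payoff(trains: bool, others_training: float) -> float:
    """Payoff for one firm, given the fraction of rivals still training."""
    # A firm that trains pays the cost now and keeps its senior only
    # with probability (1 - POACH_RATE).
    own = -TRAIN_COST + (1 - POACH_RATE) * SENIOR_VALUE if trains else 0.0
    # Every firm, trainer or not, can hire from the shared pool of
    # seniors produced by whoever else trained.
    shared = others_training * POACH_RATE * SENIOR_VALUE
    return own + shared

for fraction in (1.0, 0.5, 0.0):
    print(f"rivals training {fraction:4.0%}:  "
          f"train {firm_payoff(True, fraction):+5.1f}   "
          f"cut {firm_payoff(False, fraction):+5.1f}")
```

Under these assumptions, cutting beats training for the individual firm at every level of industry behaviour, yet when every firm cuts, each earns less than if all had trained, and the senior pool is empty. The dominant strategy and the collective disaster are the same move.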
And the self-correcting mechanism you might hope for breaks first. The metrics that show AI boosting output are the same metrics that conceal the capability erosion. Output is visible. Comprehension is not. By the time the gap surfaces as a crisis nobody on the team can diagnose, the correction requires expertise that no longer exists within the organisation.
What levelling actually means
The BCG study's 43% improvement for its weakest performers has been read as democratised excellence. Measured differently, it is subsidised incomprehension. The floor rises because AI lifts it. The ceiling stays because the skills that built senior expertise (debugging, system-level thinking, confronting failure) are the same skills being bypassed on the way up. Given enough time, the ceiling falls too. The people who would have raised it never developed the capability.
A productivity gain is a capability loss with a quarterly earnings narrative.
Expertise is not a body of knowledge that can be transferred through outputs. It is the scar tissue left by thousands of decisions and their consequences. The senior developer who instinctively avoids a particular architecture didn't read that in a textbook. She watched three projects fail that way. You cannot skip to the end. The intermediate failures are the expertise.
The distance between floor and ceiling compresses until nobody can distinguish good work from plausible work. A plausible legal brief that misses a precedent. A plausible architecture that won't scale. These failures don't announce themselves. They pass review precisely because the reviewers' own expertise has been eroded by the same process. You end up with an organisation that cannot detect its own declining competence, because the ability to detect it was the competence that declined.
The organisations that recognise this will invest in developing people now, while the cost is merely expensive. The rest will discover the problem around 2031, when the first AI-native cohort reaches senior roles and the expertise gap becomes impossible to hire around. By then the investment window will have closed, and the levelling effect will have delivered exactly what it promised. Everyone will be equal. Nobody will be expert.