Borrowed competence

AI makes you faster at your job today while quietly degrading your ability to know when the job was done wrong. The more you delegate, the less equipped you are to catch the mistakes that matter.


For three months, a group of endoscopists used an AI system to help spot adenomas during colonoscopies. Detection rates improved. The AI flagged what the doctors might miss, the doctors confirmed or dismissed, patients went home. Then the researchers turned the AI off.

Adenoma detection rates fell from 28.4% to 22.4%. In ninety days, the doctors had gotten measurably worse at the thing they'd spent years training to do. Published in The Lancet Gastroenterology & Hepatology, the study didn't document an AI failure. It documented a human one, caused by AI doing its job well.

This would be troubling enough as an isolated case. It isn't.

At an accounting firm studied by Aalto University researchers, staff who relied on cognitive automation gradually lost awareness of core accounting processes. When the system was removed, they couldn't perform fundamental tasks they'd been hired for. Not "were slower at." Could not do. At Illinois Law School, students using generative AI to assist with legal analysis made more critical errors, not fewer. They never built the analytical muscle that comes from working through problems manually. The reps matter. AI skips the reps.

The loop that closes on itself

Three professions. Three different skill sets. One mechanism: AI makes you faster, speed atrophies the skill needed to check AI's work, you lose the ability to notice when AI is wrong, you depend on AI more. Not a slippery-slope argument. A feedback loop, measurable now.

I call this borrowed competence. The organisation looks more capable on paper: more output from fewer people. But fragility accumulates underneath. Everything works until the AI hallucinates something plausible in a domain where no human can catch it anymore.

What separates this from ordinary automation risk is the self-reinforcing quality. A factory robot replaces human labour but doesn't erode the human ability to design robots. AI replaces human judgment and erodes human ability to judge. The feedback loop closes.

Same pattern, different wrapper

Boeing outsourced roughly 70% of the 787 Dreamliner's production and lost more than manufacturing capacity. They lost the engineering knowledge required to evaluate what their suppliers were delivering. A senior engineer described the supplier management organisation as not having "diddly-squat in terms of engineering capability when they sourced all that work." Fuselage gaps. Nonconforming materials. Years of delays.

The outsourcing literature calls this capability loss, sometimes corporate hollowing-out: enterprises that hand know-how to external providers in the name of efficiency until they can no longer create innovative products or evaluate the quality of what they receive.

AI delegation follows the same playbook. Outsourcing moved knowledge to other companies. AI moves knowledge to models. Both times, you retain the role but lose the substance. You keep the title "supervisor" and gut the knowledge that makes supervision mean anything.

The pipeline runs dry

The time-bomb version sits in the junior pipeline. Stanford research found a 13% relative employment decline among workers aged 22–25 in AI-exposed fields, while older colleagues saw gains. Among the largest public tech firms, new hires with less than one year of experience dropped 50% between 2019 and 2024. IDC and Deel's survey found that 66% of global enterprises plan to reduce entry-level hiring because of AI.

The mechanical logic is straightforward. AI automates research, first drafts, data gathering, pattern identification. Exactly the structured, repetitive tasks that historically built juniors into seniors. A Minneapolis marketing agency head put it plainly: "We felt we didn't need as many junior marketers." In ten years, they won't have senior marketers either.

The tasks AI eats first are automatable precisely because they're structured and repetitive. But structured repetition is how humans learn. Production without comprehension, scaled across an entire generation of knowledge workers.

Today's seniors can still catch AI mistakes because they built their skills before AI was ubiquitous. Nobody is training their replacements when the training ground has been paved over.

Below the threshold of awareness

Two large-scale preregistered experiments with 2,582 participants found that biased AI autocomplete suggestions shifted people's expressed attitudes toward the AI's position. Expected enough for those who accepted suggestions. But roughly 30% of participants who never accepted a single suggested word were still influenced. The mere presence of the suggestion reshaped their sense of what the right answer looked like.

This operates below conscious awareness. You don't have to click "accept" for the suggestion to recalibrate your internal compass.

The implications run deeper than productivity. If exposure to AI suggestions quietly reshapes what you consider correct, then the concept of independent professional judgment starts to dissolve. Not because people are lazy or credulous, but because the calibration happens before conscious reasoning begins. The doctor who trusts her gut, the lawyer who spots the flaw, the engineer who senses something is wrong before the data confirms it: if AI is setting the baseline of what "correct" feels like, that professional intuition becomes unreliable at exactly the moment organisations depend on it most. An independent opinion is an AI suggestion you don't remember seeing.

The FAA already figured this out

Aviation mapped this trajectory decades before anyone mentioned large language models. The FAA formally warned about manual flying skill erosion among commercial pilots. Long-haul pilots who relied on autopilot from moments after takeoff to moments before landing suffered skill fade in both mental and motor abilities.

The FAA's response: mandate more manual flying. Force humans to practise being human because the automation made them forget.

A regulator requiring people to do their jobs the hard way, deliberately, because the tools designed to help them were degrading the core skill. GPS research tells a parallel neurological story: Dahmani and Bohbot found that navigation aid users showed steeper spatial memory decline over time, while London taxi drivers who learned The Knowledge through years of manual navigation developed measurably larger posterior hippocampi. "Use it or lose it" is not a metaphor. It's neuroanatomy.

Just-in-time expertise

Competence is organisational inventory. For decades, companies optimised supply chains toward just-in-time: minimal stock, maximum efficiency, no buffer. Beautifully efficient until disruption hit and they discovered they had no slack. The pandemic taught this lesson in physical goods. Companies with no inventory had no resilience.

Borrowed competence is just-in-time expertise. Minimal human skill inventory, maximum AI throughput. It works until the model hallucinates, the API goes down, or the domain shifts in a way the training data didn't anticipate. The disruption doesn't have to be dramatic. A model update that changes outputs. A regulatory shift. A novel problem nobody in the building has solved without AI assistance.

And here's the number that should give every executive pause: roughly 40% of non-management employees report that AI saves them zero time per week. Zero. Many organisations are imposing tools that degrade expertise without delivering the promised efficiency gains. Pain without the painkiller.

The hardest sell in business

The answer isn't "don't use AI." That ship sailed. It's closer to the FAA approach: mandatory manual practice. Deliberately preserve friction in learning pipelines. Keep juniors doing grunt work even when AI could do it faster, because the output isn't the point. The learning is.

This requires something genuinely difficult: asking organisations to be deliberately slower in some areas to preserve long-term capability. Every incentive points the other direction. Quarterly metrics reward the executive who cuts junior headcount. Capability loss is invisible until crisis. The cost of maintaining human skill buffers looks like waste on every spreadsheet, right up until the moment it becomes the only thing between you and catastrophe.

At what point does borrowed competence become the only competence? When does the gap between what organisations can produce with AI and what they can understand without it become unbridgeable? We may not find out until we're already on the wrong side.
