The Gap That Matters More Than the Headline
- Oliver Nowak

There is a lot circulating online about Anthropic's recent labour market study, specifically the radar chart mapping theoretical vs observed AI coverage across job categories. My view is that it puts a number on something I've been watching play out in organisations for the past couple of years. The number that should command attention isn't the 90%; it's the gap.
For management, legal, business and finance, and computer and mathematics roles, theoretical coverage, the share of tasks that LLMs could already handle, sits somewhere between 70 and 90%. Observed usage, what's actually happening on the ground, is running at 20 to 40%. That's not a rounding error; it's a structural gap, and it's been sitting there quietly while most of the public conversation has focused on the capability ceiling.

The Wrong Framing Is Holding Organisations Back
The dominant narrative around AI and white-collar work has been one of threat: which roles are safe, which aren't? The chart does nothing to calm that anxiety; if anything, it amplifies it, particularly on the right-hand side of the radar, where the high-exposure sectors cluster: legal, finance, management, the knowledge work that a generation of career advice told people was the safe bet.
But I push back on framing the gap as a delay before an inevitable outcome. That framing assumes the observed number is simply the theoretical number with a lag attached to it. I don't think that's right, and in my experience working with organisations trying to deploy AI tools, it misses what's actually going on.
The observed usage figure is low not primarily because organisations are moving cautiously through a queue of available capability. It's low because deploying AI at task level requires things that don't appear on a capability chart. It requires data quality. It requires workflow redesign. It requires trust, and trust requires governance, training, and enough positive early experience to outweigh the anxiety. It requires people to actually change how they work, which is arguably the hardest part of any technology programme and the part most consistently under-resourced.
I've sat in enough sessions where the AI feature is technically live, technically capable, and technically available, and the adoption rate is still stubbornly low, to know that the capability ceiling and the practical deployment floor are two very different things.
What the Left Side of the Chart Actually Tells Us
The finding that genuinely surprised me when I first looked at this chart isn't the exposure of knowledge work; that aligns with what anyone working in this space would have expected. It's the relative insulation of the physical trades (construction, grounds maintenance, food preparation and serving) that deserves more scrutiny than it tends to get.
The instinct is to read that as "manual labour is safe." But I'd read it differently. The theoretical coverage is low because the task composition of those roles is dominated by physical, spatial, and sensory work that current AI systems simply can't touch. That's a capability constraint, not an adoption one. When the constraint shifts, and it will shift, though I'd be wary of anyone who claims to know exactly when, the adoption gap problem won't apply in the same way, because the deployment path for physical automation is fundamentally different.
The point isn't that trades workers can relax. It's that the safety of those roles rests on a genuinely different foundation than the exposure of knowledge roles. And that matters for how organisations and individuals think about what comes next.
The Organisational Implication Most Leaders Are Missing
Here's what I find comes up most consistently when I'm working through AI strategy with clients: the question of capability is largely resolved. The models exist. The theoretical coverage is real. The harder questions, the ones that actually determine whether value gets extracted from an AI investment, are governance questions, change management questions, data quality questions, and leadership behaviour questions.
The gap in Anthropic's chart is, in effect, a measurement of how poorly most organisations have answered those questions so far. Not because they're slow or incompetent, but because the pace at which AI capability has moved has outrun most organisations' ability to build the surrounding infrastructure to deploy it safely and effectively at scale.
That infrastructure isn't glamorous. It doesn't make for a compelling conference presentation. But the organisations that will close that gap meaningfully in the next two to three years aren't the ones with the most sophisticated models. They're the ones that have quietly invested in the foundations: clear governance, strong change management, honest conversations with their workforce about what's changing and why, and enough real-world evidence from early use cases to build the trust that adoption depends on.
The 90% is a capability story. The 20 to 40% is an organisation story. And in my experience, the organisation story is almost always the harder one to resolve.