
AI Doesn't Think For You. It Thinks Like You.

  • Writer: Oliver Nowak
  • 3 days ago
  • 5 min read

I have spent the past year advising organisations on AI adoption, and the single most common misconception I encounter is the belief that AI will do the thinking for you. Leaders assume they can hand a junior employee a ChatGPT license and suddenly that person operates at a senior level. Many people believe they can skip the hard work of understanding a problem because the model will figure it out.


Anthropic's data in their most recent Economic Index report says otherwise. How you prompt is how Claude responds. The tool reflects the skill of the person wielding it.


[Image: A boy looks amazed at a computer screen showing his reflection, with the text "AI reflects you", surrounded by books and sticky notes.]

This makes intuitive sense once you sit with it for a moment. I have watched the same prompt produce wildly different results depending on how it was written. A senior consultant asks Claude to analyse a process and gets a structured, nuanced breakdown. A junior team member asks the same question with vaguer language and gets something generic and unhelpful. Same model, same underlying capability, but completely different outcomes.
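To make that concrete, here is a minimal sketch of the comparison I keep seeing, written against Anthropic's Python SDK. The onboarding scenario, both prompts, and the model name are my own illustrative assumptions rather than examples from the report.

    import anthropic  # assumes the SDK is installed and ANTHROPIC_API_KEY is set

    client = anthropic.Anthropic()

    # The junior-style ask: no context, no constraints, no definition of "better".
    vague_prompt = "Look at our onboarding process and tell me how to make it better."

    # The senior-style ask: context, structure, and explicit judgement criteria.
    structured_prompt = (
        "You are reviewing a B2B customer onboarding process.\n"
        "Current steps: contract signed, manual account setup (2 days), "
        "kick-off call, data migration (1-3 weeks), go-live.\n"
        "Identify the three biggest bottlenecks, estimate their impact on "
        "time-to-value, and propose one fix per bottleneck with its trade-offs."
    )

    for label, prompt in [("vague", vague_prompt), ("structured", structured_prompt)]:
        response = client.messages.create(
            model="claude-sonnet-4-20250514",  # illustrative model id, not from the report
            max_tokens=800,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} ---")
        print(response.content[0].text)

Run the two prompts side by side and the difference is rarely subtle: the vague ask gets generic advice, the structured one gets something a senior reviewer could actually use.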


The report frames this as a finding about AI design and training. Claude can respond in highly sophisticated ways, but it tends to do so only when users input sophisticated prompts. The implication is uncomfortable: AI capability, for now at least, comes down to human capability.


The Deskilling Paradox

This creates an interesting paradox. Most people assume AI will replace simple, repetitive work first: data entry, basic admin, the tasks that require the least human judgement. But the data says that assumption is wrong.


Anthropic's analysis shows that Claude is disproportionately used for tasks requiring higher levels of education. The average task in the economy requires 13.2 years of education. The average task that appears in Claude usage requires 14.4 years. AI is not coming for the simple jobs first; instead, it is coming for the complex ones.


The researchers ran a thought experiment: what happens to jobs if you remove the tasks AI can already perform? The net effect is actually deskilling. AI tends to handle the higher-education components of a role, leaving behind the lower-skill work.


Take travel agents as an example. AI covers tasks like planning itineraries and working out travel costs, work that requires judgement and domain knowledge. What remains? Printing tickets and collecting payments. The complex, interesting parts of the job get absorbed and the routine parts stay.


Technical writers face the same pattern. AI handles analysing developments in a field and recommending revisions to published materials. What is left? Drawing sketches and observing production activities. The knowledge work disappears, and the manual work persists.


Teachers lose research, marking homework, and student advising to AI assistance. Classroom management and in-person lecture delivery remain. The intellectual components shrink. The physically present, human-contact components stay.


This is the opposite of what most transformation narratives promise. We have been told AI will free us from drudgery so we can focus on higher-value work. The data suggests it might be the reverse: AI absorbs the cognitively demanding tasks, leaving humans with what cannot be digitised. Yet.


The Task Horizon Problem

AI success rates decline as tasks get more complex. Anthropic's data shows that tasks requiring less than a high school education achieve around 70% success rates. Tasks requiring a college (university) education drop to 66%. The pattern holds: harder tasks yield greater time savings, but lower reliability.


This creates a strange dynamic. AI provides the biggest productivity boost on complex work, but that is also where it fails most often. The report references research on "task horizons," the maximum duration at which a model can reliably complete work. Current models hit roughly 50% success rates at tasks that would take a human 3-5 hours. Beyond that, reliability drops dramatically.
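One hedged way to see why the horizon matters: if a long task is treated as a chain of smaller steps that must all succeed, even a high per-step reliability compounds into a low end-to-end rate. The figures below are my own assumptions for illustration, not numbers from the report.

    # Rough illustration of compounding reliability over a task horizon.
    per_step_success = 0.92        # assumed reliability on one ~30-minute step
    for hours in (1, 3, 5, 8):
        steps = hours * 2          # assume two 30-minute steps per hour of human work
        end_to_end = per_step_success ** steps
        print(f"{hours}h task (~{steps} steps): {end_to_end:.0%} end-to-end success")

With those assumed numbers, a 3-5 hour task lands in the 43-61% range, bracketing the report's roughly 50% mark, while an 8-hour task drops to about a quarter.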


The implication is that human oversight becomes more important, not less, as AI takes on more sophisticated tasks. You cannot hand over the complex work and walk away. You need people skilled enough to verify the output, catch the errors, and know when the model is confidently wrong.


Which brings us back to the education correlation. If your team lacks the foundational knowledge to evaluate AI output, they will not catch the hallucinations, they will not spot the subtle errors, and they will trust a polished response that happens to be fabricated.


What This Means for Businesses

If you are leading an enterprise AI programme, the Anthropic data should change how you think about training.


The instinct in most organisations is to focus AI enablement on tools. Roll out Copilot licenses, run prompt engineering workshops, build an internal GPT, and so on. These are not bad moves, but they miss the deeper point.


Your AI's effectiveness is capped by your people's expertise. A prompt engineering course will not help someone who lacks domain knowledge, just as a ChatGPT license will not turn a generalist into a specialist.


The most strategic move is to invest in foundational skills alongside AI tools. Train your junior talent in the fundamentals of their role, make sure analysts understand statistics before they ask AI to run analyses, and ensure developers grasp architecture before they delegate code generation.


This feels counterintuitive in a world obsessed with speed. Why spend months building someone's expertise when AI can generate answers in seconds? Because the AI's answers are only as good as the questions asked and the judgement applied to the output. You are not buying a shortcut; you are buying an amplifier. By their very nature, amplifiers make good inputs better and bad inputs worse.


What This Means for Schools

For the past two years, the dominant conversation about AI in schools has been about cheating. How do we detect AI-generated essays? Should we ban ChatGPT? These are the wrong questions.


The right question is: how do we ensure students develop the foundational knowledge and critical thinking that AI will reflect back at them for the rest of their careers?


If the Anthropic correlation holds, a student who never learns to think deeply will spend their professional life getting shallow AI responses. A student who builds genuine expertise will have a powerful tool that amplifies that expertise.

This means teaching critical thinking even when AI tools are unavailable. It means building knowledge, not just assessment performance. It means helping students understand that AI will not replace the hard work of learning, it will reveal whether that learning ever happened.


What This Means for Parents

What smartphones have done to attention spans is well documented. AI has the potential to accelerate that decline if we let it. The temptation to outsource thinking is enormous. Why struggle with a problem when you can ask Claude? Why build understanding when you can get a summary?


The Anthropic data offers a sobering answer: because the AI will only ever reflect the thinking you bring to it.


For parents, this means helping children build knowledge, resilience, and deep understanding. Not because AI will take their jobs, but because AI will amplify whatever intellectual habits they develop. A child who learns to think carefully will have a powerful collaborator. A child who learns to outsource cognition will need a cognitive crutch for the rest of their lives.


The Uncomfortable Truth

The narrative around AI has been dominated by two extremes. On one side, techno-optimists promising that AI will solve everything, elevate everyone, and democratise expertise. On the other, doomers warning of mass unemployment and human obsolescence.


The Anthropic data suggests something more nuanced and, frankly, more uncomfortable.


AI is not replacing human thinking; it is reflecting it. The gap between those who can use AI effectively and those who cannot will track closely with the gap between those who have foundational knowledge and those who do not.


This is not a dystopia but simply an amplification of existing dynamics. The question is whether we respond by investing in human capability or by pretending the tool will do the work for us.


The believers thought AI would transform the economy. It will, just maybe not in the way they expected.


References

Appel, R., Massenkoff, M., McCrory, P., McCain, M., Heller, R., Neylon, T., & Tamkin, A. (2026, January 15). The Anthropic Economic Index Report: Economic Primitives. Anthropic. https://www.anthropic.com/research/anthropic-economic-index-economic-primitives
