The Write-Without-Backspace Problem

  • Writer: Oliver Nowak
  • 2 hours ago
  • 6 min read

I've been trying to get my head around agentic AI for a while now. At first, the framing of deterministic versus non-deterministic helped me, but then I realised it wasn't quite as black and white as that. Similarly, framing it around decision making and the ability to take action also helped, but that just seemed to make everything really complex. So if I'm honest, I never truly understood it, and I found myself continuing to search for the language that would finally make sense.


Then recently, whilst going through one of Andrew Ng's courses on DeepLearning.ai, the right language came in the form of a deceptively simple analogy. He asked, "What would it feel like to write an essay if you were not allowed to use the backspace key?" You would have to write in a single linear pass from the first word to the last, with no corrections, no restructuring, no second thoughts.


That, he pointed out, is essentially what we are asking large language models to do when we prompt them for a one-shot output. It's a simple analogy, but it crystallised something I had been struggling to say clearly for a very long time.


The Appeal of the Instant Answer

I understand why people and organisations fell in love with single-shot AI. You ask a question, you get an answer. After years of enterprise software that required a consultant, a licence, a configuration workshop and a six-month implementation timeline, that immediacy felt genuinely revolutionary. It still does in many respects. I'm not dismissing what single-shot prompting can do for a large category of tasks: drafting routine communications, summarising documents, reformatting structured data. A single-pass LLM output is genuinely good enough for these. You do not always need the backspace key.


The moment you move to anything requiring judgement, synthesis, planning or multi-source reasoning (the things most organisations actually want AI to do), writing without backspace stops looking like a quirk and starts looking like a real limitation.


Agentic AI is a Process Architecture, not a Feature

What Ng's course clarified for me is something that should be obvious but rarely is in practice. Agentic AI is not primarily about autonomy, which is how most vendor conversations frame it: the agent as a self-directing, decision-making entity that operates without human intervention. I've written before about how that framing sends organisations chasing precisely the wrong thing.


The more useful definition is considerably more modest and considerably more actionable. An agentic AI workflow is a process where an LLM-based application executes multiple steps to complete a task. That's it.


Not "the AI decides what to do". Not "the AI operates without oversight". Just multiple steps, each informed by the last, each one building on what came before.


Think about what that same essay writing example looks like in practice. Instead of a single prompt and output cycle, an agentic approach might involve:

  1. Write a plan or outline.

  2. Assess whether research is needed.

  3. Conduct that research.

  4. Draft a first version.

  5. Review the draft for gaps.

  6. Revise and only then produce a final output.


Same model, more structured process, dramatically better result.
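
The six steps above can be sketched as plain control flow. In the sketch below, `call_llm` is a hypothetical stand-in for whatever model API you use; here it simply echoes its instruction so that the structure of the workflow, rather than any real model behaviour, is what runs.

```python
# A minimal sketch of the multi-step essay workflow described above.
# `call_llm` is a hypothetical placeholder for a real LLM API call.

def call_llm(instruction: str, context: str = "") -> str:
    """Placeholder for a real model call; echoes its inputs."""
    return f"[{instruction}] {context}".strip()

def write_essay(topic: str, needs_research: bool = True) -> str:
    # 1. Write a plan or outline before any drafting happens.
    outline = call_llm(f"Outline an essay on: {topic}")

    # 2-3. Assess whether research is needed, and conduct it if so.
    research = call_llm(f"Research key facts for: {topic}") if needs_research else ""

    # 4. Draft a first version, informed by the plan and research.
    draft = call_llm("Draft the essay", f"{outline} {research}")

    # 5. Review the draft for gaps.
    critique = call_llm("List gaps and weaknesses in this draft", draft)

    # 6. Revise, and only then produce a final output.
    return call_llm("Revise the draft using this critique", f"{draft} | {critique}")

final = write_essay("the write-without-backspace problem")
```

The point is that each call's output feeds the next call's input; the model is identical at every step, and only the surrounding process changes.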


This matters because it reframes the conversation from capability to architecture. The question is not: "Does our AI platform have agentic features?" It is: "Have we designed a workflow that allows iterative thinking?"


[Figure: Comparison chart "Single Shot vs. Multi-Step AI". Top shows a linear single-shot process; bottom shows an iterative multi-step workflow.]

The Autonomy Spectrum and Where the Real Value Lives

The second insight from Ng's course is the one I suspect most enterprise leaders will find genuinely uncomfortable because it runs against the grain of almost every vendor conversation happening right now.


The most valuable agentic applications at this moment in time are at the least autonomous end of the spectrum.


Ng frames autonomy as a deliberate continuum rather than a binary state. At one end sit deterministic workflows: each step is predefined by a human engineer, the LLM generates text at each node, and the sequence is fixed and knowable in advance. At the other end are highly autonomous agents that decide their own steps, select their own models, and can even write new tools on the fly to handle situations the original designer did not anticipate.
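
The two ends of the spectrum can be caricatured in a few lines. Everything here is a hypothetical sketch: `llm` is a stub, and the difference between the two functions is simply who decides the next step, the engineer or the model.

```python
# A caricature of the two ends of the autonomy spectrum.
# `llm` is a hypothetical stub standing in for a real model call.

def llm(prompt: str) -> str:
    return f"out:{prompt[:30]}"

# Deterministic end: the engineer fixes the sequence in advance.
# The path through the code is knowable before it runs.
def deterministic_workflow(task: str) -> str:
    out = llm(f"plan {task}")
    out = llm(f"draft {out}")
    return llm(f"review {out}")

# Autonomous end: the model chooses its own next action each turn,
# so the sequence is only known once the agent has run.
def autonomous_agent(task: str, max_steps: int = 5) -> list:
    history = [task]
    for _ in range(max_steps):
        action = llm(f"Given {history}, pick the next action or say done")
        history.append(action)
        if "done" in action:
            break
    return history
```

With this stub the model never signals completion, so the loop simply runs to the step cap; with a real model, the length and content of `history` would be decided at run time, which is exactly the unpredictability the spectrum describes.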


The instinct for most organisations is to aim at the highly autonomous end. That is where the vendor demos live. That is what makes a compelling slide in a board-level AI strategy presentation. Autonomy sounds like capability, and it sounds like maturity.


But highly autonomous agents are harder to control and far less predictable. This is a frontier still being navigated, and I'm not arguing that it shouldn't be approached. I'm arguing that there needs to be much more recognition that it is a new frontier, full of unknowns and uncertainty. And let's be honest, that simply doesn't suit most organisations.


[Figure: Flowchart "The Autonomy Spectrum" showing stages Directed, Adaptive, and Autonomous Workflow, with arrows indicating increasing autonomy.]

The practical consequence of this is important. Consider a deterministic multi-step workflow that takes a customer enquiry, classifies it, searches a knowledge base, drafts a response, checks tone against brand guidelines, and routes for human approval if confidence falls below a threshold. That is an agentic workflow. It is considerably less risky than an agent deciding its own sequence of actions, and for most of the customers I work with, it is a considerably more appropriate starting point.
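
That enquiry workflow can be sketched as a fixed pipeline with a human-approval gate. Every function and the threshold value below are assumptions for illustration; in a real system each stub would wrap a model call or a service.

```python
# A sketch of the deterministic enquiry workflow described above.
# All function bodies and the threshold are hypothetical stand-ins.

CONFIDENCE_THRESHOLD = 0.8  # assumed value; tune to your risk appetite

def classify(enquiry: str) -> dict:
    # Stand-in for an LLM classification step.
    return {"category": "billing", "confidence": 0.72}

def search_knowledge_base(category: str) -> str:
    return f"KB articles for {category}"

def draft_response(enquiry: str, kb_results: str) -> str:
    return f"Draft reply to '{enquiry}' using {kb_results}"

def check_tone(draft: str) -> bool:
    # Stand-in for a brand-guidelines tone check.
    return True

def handle_enquiry(enquiry: str) -> dict:
    result = classify(enquiry)                     # step 1: classify
    kb = search_knowledge_base(result["category"]) # step 2: retrieve
    draft = draft_response(enquiry, kb)            # step 3: draft
    on_brand = check_tone(draft)                   # step 4: tone check
    # Step 5: route to a human whenever confidence is low
    # or the draft fails the tone check.
    needs_human = result["confidence"] < CONFIDENCE_THRESHOLD or not on_brand
    return {"draft": draft,
            "route": "human_review" if needs_human else "auto_send"}
```

The sequence is fixed and auditable: the LLM generates text at individual nodes, but it never decides which node comes next, which is what keeps this at the low-risk end of the spectrum.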


The Case for Starting on the Left

Pretty much every client conversation I have about AI agents starts at the right end of the spectrum. The vision is a fully autonomous system that can handle complex tasks end-to-end without human intervention. The implementation reality, when we get honest about data quality, governance constraints, regulatory requirements, and the genuine unpredictability of LLM reasoning in novel situations, is something that requires far more structure than any vendor roadmap tends to suggest.


What I have observed is that organisations which start with structured, lower-autonomy agentic workflows build something genuinely valuable in the process. They begin to understand where the LLM can be trusted, where it needs checking, and what quality looks like across different task types. That is the foundation for moving further along the spectrum with confidence rather than with crossed fingers.


And to be clear, you are not delaying your journey to agentic AI by starting with a deterministic workflow. You are learning the terrain before you hand over the wheel to an entity you don't fully understand.


The organisations I have seen chase full autonomy from the outset tend to find themselves in one of two situations:

  1. The proof of concept succeeds in a controlled demo and fails in production.

  2. The governance conversation catches up with them after deployment, and the system gets paused or wound back.


Neither is a satisfying outcome after significant investment.


What this means for Technology Leaders

The practical implication of all of this is not to "buy an agentic product"; it is to redesign how you think about AI deployment.


Most AI implementations I encounter treat the model as the primary unit of analysis. Which model are we using? Which platform are we using? Which licence do we need to buy? The agentic frame asks a different and more useful question: what is the process we are running the model through?


That means auditing your current AI deployments, not only by output quality but by the number of steps between input and output. For every single-shot prompt delivering mediocre results, ask: What would a two-step or three-step version look like? Where could a planning stage, a research stage, or a reflection stage be inserted? Where would human oversight fit naturally into the loop?
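
One way to retrofit an existing single-shot prompt is to wrap it in a draft, reflect, revise sequence. The `llm` function below is a hypothetical stub so the control flow is runnable; the wrapper pattern is what matters.

```python
# Retrofitting a single-shot prompt into a three-step version.
# `llm` is a hypothetical stub standing in for a real model call.

def llm(prompt: str) -> str:
    return f"<{prompt[:40]}...>"

def single_shot(prompt: str) -> str:
    # The status quo: one pass, no backspace.
    return llm(prompt)

def with_reflection(prompt: str) -> str:
    draft = llm(prompt)                                # step 1: draft
    critique = llm(f"Critique this answer: {draft}")   # step 2: reflect
    return llm(f"Improve '{draft}' given: {critique}") # step 3: revise
```

No new platform is involved: `with_reflection` calls the same model as `single_shot`, three times instead of once, which is the workflow-design point of this section.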


None of this requires a new platform purchase. It requires workflow design.


It also requires a willingness to treat AI output as a process output rather than a single transaction, which is a more significant cultural shift than it sounds in organisations accustomed to AI as a search box or a chatbot.


What the Technical Frame Leaves Out

As I said, this is genuinely one of the clearest framings of agentic concepts that I have come across, and the honesty about the trade-offs along the autonomy spectrum is a welcome corrective to the vendor landscape. The backspace analogy alone is worth the time it takes to work through the course.


But what it doesn't cover, reasonably enough, is the organisational architecture that needs to sit around these workflows to make them sustainable in business environments. Who owns the quality of an agentic output? How do you govern a workflow where an LLM is making decisions at multiple steps? What does a change request look like when the "code" is a sequence of prompts and the "logic" is emergent rather than deterministic?


These are not insurmountable questions. They are, however, the questions I see many organisations get stuck on after the proof of concept succeeds. The technical architecture is only part of the picture. The operating model, the governance structure, and the human accountability framework that sit around it are what determine whether a successful pilot becomes a scaled capability or a cautionary tale.


That gap between technical feasibility and organisational readiness is, for most of the clients I work with, where the real work begins.


The "write-without-backspace" analogy has become, in the weeks since I came across it, my shorthand for almost every conversation about why AI outputs feel underdeveloped. We are asking models to do something that no thoughtful human would ever attempt: produce final, considered output in a single linear pass with no room to reconsider, revisit, or revise.


Agentic workflows are, at their core, the decision to let the system think before it answers, to research before it writes, to review before it delivers. The model does not change; the process does. Most of what organisations need right now is not more autonomy. It is better process design around the AI they already have.


©2026 by The Digital Iceberg
