Oliver Nowak

Don't Think of AI as a Technology

The performance of a business is best characterised by the quality of its decision-making. The more often the people within a business make the right decision, at the right time, the better it will be at achieving its goals & objectives.


AI is a new form of intelligence that we now have access to, and one definition of intelligence is the ability to solve complex problems - in other words, the ability to make the right decision at the right time. So, the greatest impact AI can have in any business is helping the people within it make better decisions, at the optimum time. In the modern age, that means becoming a 'data-led' organisation - one that uses data insights to understand where it is today and therefore where it should go next in pursuit of its objectives.


Essentially, if you go back to People, Process & Technology, as discussed in my previous article, AI is having a much stronger impact at the People level than at the Process & Technology level. Buying the latest and greatest AI technology isn't going to make you a data-driven organisation, nor is implementing a few AI-driven processes. The difference lies in how well AI supports your People in their day-to-day operation and the decisions that come with it.


Why AI isn't a Process

A process is the conversion of inputs into outputs - it is effectively "getting stuff done". AI can have a massive impact on how we get stuff done; take GenAI as the perfect example of that. But at the end of the day, we're still achieving the same goal. If it's the wrong goal, that's a problem. Shouldn't we be channelling much more energy into using AI to help us decide whether the thing we're doing is the right thing to do?


Why AI isn't simply a Technology

I'm hearing more and more of our clients saying "we're using a lot of AI in the technology part of our business, and now I want to look at how we can integrate it into the rest of our business." And I love to hear it!


I completely understand why AI is being thrown into the technology category, but those organisations that leave it there, outside of the business, are making a big mistake. If we look at our traditional technology stack, the vast majority of it is process-oriented. It helps us convert inputs to outputs faster, and in some cases we use automation to do it for us. And that's why AI isn't simply a technology: like a process, it's not helping us identify whether what we're doing is the right thing to do in the first place.


So how should we think of AI then?


As I have said, the key is decisions, so that is where we need to start. What decisions are being made today, and how are they impacting our business performance? Can we help the people making those decisions make better ones? Or, in extreme cases, can we take decisions away from people because an artificial intelligence is better at making them?


Take stock ordering as a simple example. Ordering stock is a process, and pretty much any organisation ordering stock these days will be using a system to do it. But who's deciding the order size and when to place it? For many organisations that decision is made by people using their experience and expertise to judge the size and timing of the order. Ultimately, though, for most organisations this decision is a data problem. So, isn't an artificial intelligence better placed to make it? It all depends on whether we can empower it with the relevant data - market conditions, seasonality, supplier lead times etc. - to make the best possible decision for us. A decision that significantly impacts business performance.
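To make the idea concrete, here is a minimal sketch of what a data-driven ordering decision can look like. It uses two classic inventory heuristics - a reorder point and the economic order quantity (EOQ) - rather than a trained model; the function name, parameters, and figures are illustrative assumptions, not drawn from any particular system.

```python
import math

def reorder_decision(avg_daily_demand, demand_std, lead_time_days,
                     on_hand, order_cost, unit_holding_cost,
                     service_z=1.65):
    """Decide *when* to order (reorder point) and *how much* (EOQ)."""
    # Safety stock buffers demand variability over the supplier lead time
    safety_stock = service_z * demand_std * math.sqrt(lead_time_days)
    # Reorder point: expected demand during the lead time plus the buffer
    reorder_point = avg_daily_demand * lead_time_days + safety_stock
    # EOQ balances the cost of placing orders against the cost of holding stock
    annual_demand = avg_daily_demand * 365
    eoq = math.sqrt(2 * annual_demand * order_cost / unit_holding_cost)
    return on_hand <= reorder_point, round(eoq)

# Illustrative figures: 40 units/day demand, 7-day lead time, 250 on hand
place_order, qty = reorder_decision(
    avg_daily_demand=40, demand_std=12, lead_time_days=7,
    on_hand=250, order_cost=50.0, unit_holding_cost=2.0)
```

Even this simple rule makes the point: once the demand, lead-time and cost data are in one place, the timing and size of the order fall out of the numbers, and the person's role shifts from making the call to owning the inputs that drive it.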



Effectively you're giving the AI a role: you're asking it to solve a complex problem in your organisation and make a business-impacting decision. But at the end of the day, just like a line manager and their direct report, there needs to be a level of accountability for the decisions the AI is making. In other words, someone still needs to own those decisions, even if they are not the one making them.


This opens up an interesting conversation. To what extent is a manager responsible for the decisions made by, and therefore the performance of, a member of their team? And, as a consequence, how responsible should an individual be for the decisions made by an AI? It's something to discuss....


But how will this work in reality?


Traditionally, people have made decisions based on historical and contextual knowledge that they have either learned through education or learned through years of practical experience on the job. But for an AI-augmented decision-making model to work, all of this contextual knowledge needs to be transferred to the AI. Remember, the AI is only as intelligent as the data it's trained on. Essentially, we're going to need a new model where people are accountable for ensuring their AI is trained on the right data, with the right organisational context, so it can fulfil its assigned role and make the best decision.


And for that to work in the long term, we need to make sure this ownership structure is maintained and passed on as the people who own it move around. When a person is promoted, they hand over the AI specific to that role to the next person. And if someone leaves the organisation, their AI is passed on to their successor.


The point is, to get the most out of AI it can't just be viewed as a technology that sits in a corner of the business. It needs to be fully integrated into the business, with assigned roles and accountable counterparts. It's a big change, and culturally speaking, it's going to take a major shift. As always, given how closely it ties into performance, those that get it right will have a significant first-mover advantage, and those that don't may never catch up....


©2020 by The Digital Iceberg
