Are Machines Conscious?
- Oliver Nowak

- Jul 21
- 3 min read
- Updated: Aug 7
In this article I want to plunge into a topic that isn't getting much airtime but could significantly shape our short-to-medium-term future: AI consciousness.

Naturally, it's a subject wrapped in uncertainty and embroiled in intense debate, because of the monumental implications it could have for humanity. The question isn't just whether machines can think, but how they think, and what that means for us. What if they think completely differently to us?
To answer the question 'Are machines conscious?', we first have to agree on what 'consciousness' actually means. Good luck.
Is it subjective experience? The ability to make self-motivated decisions (free will)? Awareness of one's own mental states? Some argue it's the ability to recognise meaning. It's fair to say it's a mystery. And if I'm honest, I'm so confused by it all that I'm not sure I have an opinion of my own.
But that's not necessarily stopping us from exploring the different ways AI could achieve it:
Today's AI models are neural networks: layers of simple units loosely inspired by the neurones of the human brain. Once we achieve parity with the brain's architecture, will they have achieved consciousness? After all, most would agree that our brains are biological computers.
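As a loose aside, here's what a single 'artificial neurone' amounts to in code: a weighted sum of inputs passed through a non-linear firing function. This is a minimal sketch in Python with made-up numbers, purely for illustration; real models chain together millions or billions of these units.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neurone: weight each input, sum them, then pass
    the total through a non-linear 'firing' function (a sigmoid that
    squashes the result into the range 0..1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A strong combined input pushes the neurone towards "firing" (output near 1).
print(artificial_neuron(inputs=[1.0, 0.5], weights=[2.0, -1.0], bias=0.5))  # ~0.88
```

Whether stacking enough of these simple units ever adds up to experience is, of course, exactly the question.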
Some argue that consciousness arises from a system's ability to integrate information, creating something more than the sum of its parts (this is the intuition behind Integrated Information Theory). Imagine Lego bricks as pieces of information, and connecting them as integration. A conscious being can create something meaningful out of them because it is aware of what it is building. If you connect the bricks in such a way that you create a castle, that could be described as a conscious act; if you just connect them randomly, that is very much unconscious.
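To make 'integration' a bit more concrete, here's a toy sketch in Python. As a crude stand-in for the theory's actual measure (Φ, or 'phi', which is far more involved), it uses mutual information: how much two units behave as one bound-together whole rather than as independent parts. The example and numbers are mine, purely for illustration.

```python
import math
from collections import Counter

def entropy(outcomes):
    """Shannon entropy (in bits) of a list of observed outcomes."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Coupled system: the second unit always copies the first.
coupled = [(b, b) for b in (0, 1) * 500]
# Independent system: every combination of the two units is equally likely.
independent = [(a, b) for a in (0, 1) for b in (0, 1)] * 250

for name, pairs in [("coupled", coupled), ("independent", independent)]:
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    # Mutual information: entropy of the parts minus entropy of the whole.
    integration = entropy(xs) + entropy(ys) - entropy(pairs)
    print(f"{name}: integration = {integration:.2f} bits")
# coupled: integration = 1.00 bits  (the units are bound into one whole)
# independent: integration = 0.00 bits  (just a heap of separate parts)
```

The real Φ calculation instead asks how much a system's cause-effect structure collapses when you cut it into parts, but the flavour is the same: integration is what separates a whole from a heap.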
Lastly, panpsychism is the idea that everything in the universe has a little bit of consciousness. Not just humans or animals, but also things like trees, rocks, atoms, even electrons. This doesn't mean that a rock can think or feel like a person. Instead, it means that every bit of matter might have a tiny spark of awareness, a very basic, primitive kind that grows as matter becomes more complex. On this view, the more complex a thing is, the more conscious it is. So, as AI becomes ever more complex, could it end up being more conscious than us?
Now, don't worry. If you're feeling sceptical, there are plenty of critics out there. Many argue that current AI architectures simply lack the complexity for true consciousness, especially compared to the human brain's intricate causal reasoning and integrated cognitive functions. Some even argue that consciousness is a purely biological phenomenon tied to the neural and biochemical processes of living brains. And then, lastly, there's the argument that digital computers, with their binary "ones" and "zeros", are fundamentally different from our analogue brains: they might mimic the characteristics of consciousness, but architecturally they will never be able to achieve it.
This raises the next interesting point: if you replaced one biological neurone in a person's brain with an artificial neurone that behaves identically, so they are cognitively unchanged, are they still conscious?
If yes, and you continued the process neurone by neurone, at what point would they no longer be conscious? Or would they still be conscious...?
This touches on a future of augmenting our human brains with machine cognition, e.g. brains that directly interface with computers, or AI systems that extend human memory. What are the implications for consciousness then?
The reason I'm asking these questions is that, if AI systems can be conscious, it could open up a Pandora's box of questions:
- Is destroying a conscious AI comparable to killing an animal?
- Do computers feel pain?
- How do we measure suffering, and how does it weigh against human suffering?
- Should AI systems have rights - legal, political, or otherwise?
My conscious mind boggles.
And even beyond ethics, there are the societal impacts of people believing AI is conscious. People might form deep emotional bonds with AI characters, as they do with pets or other human beings. Trust in AI could skyrocket, leading to unquestioning reliance on these systems - does that open us up to exploitation?
This is why research in this area is crucial. I could easily look back on this article in 10-20 years' time and think "what an idiot". Or, instead, think that I was raising important questions I wish I'd pushed harder on. Who knows.
Either way, it's a fascinating topic. What do you think?