Most of us feel like we have a kind of inner subjective experience which strikes us as special and unique. Throughout human history, most of humanity has taken this idea for granted. But what do we really mean by “inner subjective experience”?
A while ago I watched an interview on Curt Jaimungal’s YouTube channel with Geoffrey Hinton, often referred to as the “Godfather of AI”. What really struck me was that Hinton, for all his brilliance, seemed unfamiliar with the very idea of subjective experience as other people describe it.
This isn’t new to me. Often when I’ve discussed consciousness with people, I’ve been struck by how some people really don’t seem to get what other people are even talking about.
There may be multiple reasons for this, and one of those reasons might be a personality trait known as alexithymia. According to Wikipedia, 10% of people are alexithymic, meaning they have trouble recognising and describing their emotions.
I’m not a psychologist, but I would think that if you’re trying to discuss consciousness with someone who is alexithymic, it’s going to be a difficult conversation.
In addition to this, many people deny one or more of the facets of conscious subjective experience because they just can’t see how consciousness could make any sense scientifically.
To most of us, there is a fundamental difference between, say, a cat, and a robot. A cat is conscious, and therefore we avoid mistreating it, because we believe it has the capacity to suffer. A robot, on the other hand, has no such capacity. If it outlives its usefulness we’ll throw it in the trash, and our only regrets will be the wastage of parts and money.
What do we actually mean by “conscious”? I’ve thought about this a lot, over several decades, and my conclusion is that most definitions of “consciousness” do more to obscure what we’re really talking about than to elucidate it.
I think consciousness fundamentally has to do with emotion. Everything that a normal human being consciously does is accompanied by the inner experience of an emotional state, even if that state is quite subtle, to the point where not everyone even finds it easy to recognise or identify.
A person who felt no emotion at all might well sit still in a burning house. They might know that the house was burning and that they were about to die, but since that knowledge would conjure no negative emotion in them, there would be no reason for them to take any kind of action. In fact, that’s exactly what a computer would do. A computer is just as happy to be consumed by fire as it is to take evasive action, because it feels absolutely nothing.
In 1980, in a paper called Minds, Brains and Programs, the American philosopher John Searle outlined a thought experiment known as “The Chinese Room”.
We imagine there’s a room containing an English speaker who doesn’t speak Chinese, or perhaps even a team of English speakers. A fluent Chinese speaker writes something on a piece of paper and passes it into the room through a slot. The person or people in the room then manipulate the Chinese characters by applying an elaborate set of rules, eventually producing a reply in fluent Chinese. They pass this back to the Chinese speaker through the slot.
The Chinese speaker is then able to have a conversation in fluent Chinese with the room, yet no-one in the room even understands Chinese. The room passes the Turing Test; it seems to be conscious. It can have conversations in Chinese about sunsets, love, death, God, or whatever the Chinese speaker wants to discuss.
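To make the setup concrete, here is a toy sketch in Python. This is purely my own illustration: the “rule book” is a dictionary I invented, and a real rule book capable of fluent conversation would be unimaginably larger, but the principle is the same.

```python
# A toy "Chinese Room": replies are produced by pure symbol lookup.
# No part of this mechanism understands a single symbol it handles.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "天空是什么颜色？": "天空是蓝色的。",  # "What colour is the sky?" -> "The sky is blue."
}

def room(message: str) -> str:
    # The "person in the room" matches incoming symbols against rules
    # and copies out the prescribed reply; no step requires knowing
    # what any of the symbols mean.
    return RULE_BOOK.get(message, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # prints: 我很好，谢谢。
```

From the outside, the replies look sensible; on the inside, there is nothing but lookup and copying.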
Is this room conscious, and if so, where does consciousness reside in the room?
In other words, does the room actually feel anything?
In the past few years we’ve become able to interact with Large Language Models (LLMs) that are very much like this Chinese Room. I can talk with ChatGPT in fluent English, and while it does, admittedly, seem distinctly half-witted, it does sort of half-pass the Turing Test. Does this mean that ChatGPT is conscious, and has feelings, and perhaps free will?
Let’s pretend that the Chinese Room is, in fact, conscious. According to my definition of consciousness, then, it has feelings. But the person inside the room is only manipulating symbols according to a rigid set of rules. Whatever feelings the room somehow has cannot affect what happens in the room at all. Feelings are entirely superfluous to its operation.
Trains of thought like this one have led some philosophers to argue that consciousness is epiphenomenal; that is, it’s simply a by-product of brain activity, but it doesn’t affect what the brain does. You might think you have free will and you might think your feelings are important, but your brain would work just the same without them, and your free will is only a kind of illusion.
Why feelings even exist at all, then, is a complete mystery from this point of view. Our feelings cannot have evolved via natural selection, because there is nothing for natural selection to work on. Since feelings have no effect on anything, they cannot be selected for. They are a mysterious and pointless froth on top of computation, and there’s no reason why that froth shouldn’t inhabit a digital computer, just as it inhabits a human brain.
Faced with having to try to explain how conscious entities can emerge from inanimate matter, some philosophers and scientists then resort to panpsychism, the idea that even atoms or electrons are a little bit conscious. A whole bunch of atoms together can then somehow attain a human level of consciousness, at least if arranged in the form of a human.
Many people talk about emergence, viewing the appearance of consciousness as having somehow to do with a poorly-defined “complexity” of the brain. In this view, the existence of consciousness is analogous to the existence of a tornado in air molecules.
But a tornado is still a physical entity, like the air molecules themselves, whereas there is nothing in physics that suggests certain arrangements of molecules should experience consciousness. Human behaviour might be argued to be “emergent” since behaviour is a physical phenomenon, but subjective inner feelings? They seem to be a different order of phenomenon altogether.
I’d like to raise, for your consideration, a perspective rather different from any of the above. If we insist that the Chinese Room, and by extension digital computers, can be conscious, then we have to commit ourselves to all kinds of absurdities.
The alternative view that I’m going to discuss here certainly has problems of its own; there is no easy and simple way out of the so-called “Hard Problem” of consciousness. But we ought at least to concede that viewpoints on consciousness involving emergence, panpsychism or free will denialism might be fundamentally flawed.
These points of view stem from a view of the universe in which physical matter consists of things that have states and inherent qualities: that is, all units of physical matter have positions, speeds, charges or masses, and so on.
According to this view, we ought in principle to be able to look at a physical system like a human brain, measure the properties of all its component parts at a given moment in time, then apply laws of physics to predict precisely what the physical system will do next.
In this view there is no room for free will or emotion. A brain does what it does for purely mechanical reasons, exactly like a digital computer, and exactly like the Chinese Room.
With digital computers, as long as they continue to function normally, in the manner they were designed to, this is exactly the state of affairs. But the human brain is not a digital computer.
The French polymath Pierre-Simon Laplace is known, among other things, for an early succinct statement of this idea. In A Philosophical Essay on Probabilities, first published in 1814, he said:
“We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past could be present before its eyes.”
This idea holds a strong grip on the minds of many, but does it actually make as much sense as it initially appears to make? Or is it one of those ideas whose flaws are hidden just deeply enough that the idea convinces anyone who fails to properly dot the i’s and cross the t’s?
Actually, it’s a case of the latter.
One serious problem arises when you attempt to envisage how, in principle, you could know the positions of all the particles that compose a brain, at a given moment, if a brain can even truly be said to be composed of particles.
There are, in reality, no such things as moments, nor are there mathematical points, and this means we can never measure a position at a given time with perfect precision.
We cannot say “such-and-such a particle was 1.6 mm distant from a certain other particle at precisely 2.23 in the afternoon”, because apart from anything else, the time is never precisely 2.23pm and a distance is never an exact number of tenths of millimetres.
To specify a position and a time precisely, we would need four infinitely long strings of digits: one for each of the three dimensions of space and one for time.
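To put the same point in symbols (my own notation, not anything standard): a finite measurement only ever pins a coordinate down to an interval,

$$x = 1.6\ \text{mm} \quad\text{really means}\quad x \in [1.55\ \text{mm},\ 1.65\ \text{mm}),$$

whereas an exact position would require the entire infinite decimal expansion

$$x = d_0.d_1 d_2 d_3 \ldots\ \text{mm}, \qquad d_k \in \{0,1,\dots,9\},$$

with every one of the infinitely many digits $d_k$ specified, and likewise for $y$, $z$ and $t$.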
The best that we can hope for is to make measurements precise enough, in principle if not in practice, that a system will behave according to our expectations over some period of time, until the imprecision in our initial measurements results in significant deviations from reality.
This principle works fine for cannonballs and missiles, and even space probes, but does it work for a thing like the human brain?
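The way initial imprecision eventually swamps a prediction is easy to demonstrate. Here is a minimal sketch using the logistic map, a standard textbook example of chaos; nothing about it is specific to brains, and the numbers are purely illustrative, but it shows the general phenomenon.

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> r * x * (1 - x), in its chaotic regime (r = 4). Two runs that
# start a billionth apart soon bear no resemblance to one another.

def logistic_trajectory(x0: float, steps: int, r: float = 4.0) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000, 50)
b = logistic_trajectory(0.400000001, 50)  # initial difference: 1e-9

for step in (0, 10, 30, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
# The error roughly doubles each step, so by around step 30 the two
# trajectories have completely decorrelated.
```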
If it does, then free will is proven to be illusory: human behaviour could, in principle, be predicted by a computer program, physical laws would precisely establish what a human being does, and any form of consciousness would be completely superfluous. Feelings could not, in this scenario, influence behaviour.
On the other hand, if it is not possible, even in principle, to make measurements on a human brain and then predict what it will do next, then the case against free will is not proven, and we cannot prove that our feelings do not play a causal role in our behaviour. Feelings very well might cause behaviour, and we have no solid reason for saying that they don’t.
An important question that arises is whether, in principle, we could predict the future behaviour of a brain by making use of relatively coarse measurements of things like the positions of cell walls and electric potentials found inside neurons, or whether we would have to make measurements of such great precision that we would end up having to get into quantum mechanics.
The answer appears to be the latter.
Several studies have shown that, under controlled conditions, the human retina can detect individual photons. At first glance this may not seem to matter, but consider how a neuron goes about the process of “firing”, which is believed to underlie all human thought.
This process involves sodium ions (that is, sodium atoms that have lost an electron) flowing into the neuron across its cell membrane, through voltage-gated sodium channels, making the interior of the cell more positively charged. Near the firing threshold, a few ions crossing or not crossing the membrane could make the difference between the neuron firing and not firing.
Since a typical brain contains some 86 billion neurons, it’s not at all inconceivable that at any given moment some of them are poised close enough to threshold that the action of individual ions decides whether they fire. Ions in general have been shown to obey quantum mechanical laws, for example by exhibiting interference effects, and it has also been shown that human behaviour can depend on a single neuron firing or not firing.
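To illustrate the threshold effect, here is a toy “leaky integrate-and-fire” model, the textbook cartoon of a spiking neuron. Every parameter below is illustrative rather than physiological; the point is only that, near threshold, an arbitrarily small change in input flips the outcome.

```python
# A toy leaky integrate-and-fire neuron. Parameters are illustrative.

def fires(input_current: float, steps: int = 200) -> bool:
    """Return True if the neuron spikes within `steps` time steps."""
    v, v_rest, v_threshold = -70.0, -70.0, -55.0  # membrane potential (mV)
    leak, gain, dt = 1.0, 1.0, 0.5                # arbitrary units
    for _ in range(steps):
        v += dt * (leak * (v_rest - v) + gain * input_current)
        if v >= v_threshold:
            return True   # spike
    return False          # settled below threshold

# Near threshold, a change in the seventh decimal place of the input
# decides between firing and not firing:
print(fires(14.9999999))  # False
print(fires(15.0000001))  # True
```

In a real neuron poised at threshold, the analogous difference could come down to a handful of ions crossing or failing to cross the membrane.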
When we consider the sheer quantity of ions and electrons involved in shaping human behaviour, and take into account that the brain is a tightly-coupled, nonlinear, chaotic system in the mathematical sense, and that even individual neurons exhibit mathematically chaotic behaviour (that is, macroscopic behaviour that depends strongly on precise initial conditions), it seems that we would have to make quantum-scale measurements in order to predict human behaviour successfully.
In case you’re not convinced, it has been pointed out (for example by Roger Penrose) that if you set up a crooked chain of pool balls and hit the first so that it strikes the second, and so on, then after surprisingly few collisions it becomes impossible to predict the trajectory of the final ball, because the Uncertainty Principle gets in the way.
When we’re dealing with something far more complex and interconnected than a chain of pool balls, like the human brain, then in order to predict the system’s behaviour we’d have to make innumerable measurements with a precision that the Uncertainty Principle tells us is fundamentally impossible.
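To see roughly why, here is a back-of-envelope version of the argument; the numbers are my own illustrative estimates, not Penrose’s. If each collision magnifies the angular uncertainty in a ball’s trajectory by a factor of roughly $L/R$ (the spacing between balls divided by the ball radius), the uncertainty grows geometrically:

$$\Delta\theta_n \approx \left(\frac{L}{R}\right)^{\!n} \Delta\theta_0 .$$

With, say, $L/R \approx 30$, and an initial $\Delta\theta_0$ of the order of $10^{-33}$ radians, which is roughly the floor the Uncertainty Principle sets for a pool ball, $\Delta\theta_n$ reaches order one within a few dozen collisions, and sooner with tighter geometry. No improvement in instrumentation can postpone this, because the floor is set by physics, not by our instruments.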
Some free will denialists, seeking to justify their claims, turn to experiments like Benjamin Libet’s “Readiness Potential” experiment, which attempt to predict human choices by making coarse measurements of one kind or another on human brains. But as Libet himself and others have pointed out, even when a choice seems to have been made, it’s always possible to change your mind at the very last moment. None of these experiments is ultimately convincing as an argument against free will. They simply detect, with some statistically significant reliability, the process the brain goes through when it gears up to make a particular choice under highly controlled conditions; the brain does not always then actually make that choice.
You may know that the outcomes of experiments on the quantum scale are random, following a statistical distribution. Doesn’t this imply that if the brain’s behaviour depends on quantum-scale phenomena, then the brain’s behaviour simply has a random element to it?
No, it doesn’t. We cannot visualise the complex quantum interactions taking place inside a brain as simply the outcomes of a vast number of experiments, because it’s very unclear what actually constitutes a measurement in quantum physics; this is what’s known as the Measurement Problem.
There has been considerable debate among physicists over whether particles even have a state when they are not being observed via an experiment.
In the end, if you want to be able to assert that the brain is either a deterministic system, or a deterministic system with random elements, you need to make the same assertion about quantum systems in general. And while there’s nothing to prevent you from doing that, equally there’s no proof that quantum systems somehow inherently conform to one interpretation of quantum mechanics or another.
The honest position is to admit there are some things here that we have not been able to determine scientifically; that is, via experiment. People who argue that one interpretation of quantum mechanics or another is definitely correct, are currently doing so on the basis of faith, not science.
It is simply unclear, from a purely scientific perspective, what electrons are doing when they have never been observed. It’s not at all clear that subatomic particles hang around waiting for an observer, and we cannot infer very much about their behaviour from observing the behaviour of complex macroscopic systems like human brains.
If we visualise the human brain as a straightforward, if complex, deterministic or partially random mechanism, all the way down to the subatomic level, then we have visualised something for which we have no proof.
There are questions about the human brain that are far more radical than any we’ve asked so far.
The brain is essentially contained in a box, which we call the skull. It receives input from the outside world only via nerve impulses, which are themselves electrochemical pulses: tasteless, colourless, textureless, odourless. From these impulses, the brain constructs a colourful textured view of the world, which appears to us to exist in a three-dimensional space, where time passes.
It is not completely clear which observable facts are truly a part of an independent universe—independent from us and our view of it—and which are only a part of our own psychological apparatus. This is a question which has long vexed philosophers; perhaps most notably, Immanuel Kant, but also many others.
For example, is cause and effect an inherent part of the universe, or is that just the way we are compelled to view things, in order to make sense of certain limited aspects of the universe?
There are problems even in pure mathematics that cast doubt on whether, leaving aside all these other intractable fundamental difficulties, we could hope to predict the behaviour of a thing like a brain; in this context we might mention Gödel’s incompleteness theorems and the Halting Problem.
If, as Gödel mathematically proved, not even all questions of arithmetic can be answered using rigorous logic on the basis of a finite set of axioms, then what hope do we have of boiling human behaviour down to a fixed set of principles applied to a finite set of finite measurements?
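The core of Turing’s Halting Problem argument can even be written down as a short sketch. The “oracle” below is hypothetical, and that is the whole point: the contradiction shows that no such function can exist.

```python
# Sketch of Turing's diagonal argument. Suppose, for contradiction,
# that someone handed us a total, always-correct halting oracle.

def halts(program, arg) -> bool:
    # Hypothetical: True iff program(arg) would eventually halt.
    raise NotImplementedError("no such function can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about us.
    if halts(program, program):
        while True:   # oracle says "halts", so loop forever
            pass
    return            # oracle says "loops forever", so halt at once

# Now ask: does paradox(paradox) halt?
# If halts(paradox, paradox) returned True, paradox(paradox) loops forever.
# If it returned False, paradox(paradox) halts immediately.
# Either way the oracle is wrong, so no correct halts() can be written.
```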
In summary, the physical nature of the human brain, and of brains in general, makes brains fundamentally different from digital computers, and vastly more complex, in numerous ways. While there is no such thing as a scientific proof that free will exists, neither is there any actual proof that it doesn’t exist, and we have to contend with the fact that, while there is simply no room in digital computing for free will or emotion, there is plenty of room in a human brain for such things.
It does not appear that a digital computer could ever be conscious or exercise free will. A digital computer does what it does for purely mechanical reasons, and while many people assert that the same is true of a brain, there is no scientific foundation for their assertions, and theirs is really a position based on faith alone, as things currently stand.
If you feel great certainty about the answer to a scientific question that cannot be determined by experiment, then what you have there really is a faith-based position—although, to the faithful, faith often does not feel like faith at all. Instead, it feels like obvious reality.
I’d like to address one further argument against free will, which seems to really encapsulate the problem. Remember, if we can somehow “prove” that humans do not have free will, then my objections here against digital computers having free will would become baseless, since I’d have to concede that a digital computer could have free will, could experience emotion and could be conscious, to precisely the same extent that a human being could possess these things.
It is sometimes argued that either our decisions are based on something, in which case they are not free, or else they are based on nothing, in which case they are random and are also not free.
If you have followed the arguments I’ve presented so far, I think you will see that the problem with this idea is that it lightly skims over some very deep issues.
What do we mean by “based on”? How many things are there that we may base our decisions on, how are they quantified and what processes must be applied to them in arriving at a decision? This argument seems superficially convincing, but becomes very unconvincing if you try to translate it into definite mathematical ideas.
Unless we can say that human behaviour can in principle be predicted via a fixed set of mathematical operations applied to a finite set of data points, we cannot say that will is provably not free. The loose notion of “based on” conjures a feeling that a precise mathematical argument exists, while skirting around all the profound difficulties in actually arriving at such an argument.
In the end, an argument in favour of the existence of free will can be viewed as an argument for a third category of causation, in addition to determined and random. But this third category, which we could call “willed”, violates no known laws of physics, contrary to what we might expect. Physics simply does not prescribe everything that must happen in the universe, nor even in a brain. Physics is just not woven that finely.
We should welcome this: rather than weakening physics as a science, it offers interesting possibilities for further progress, while candidly admitting the known limitations of physics as it stands.
In contrast, physics tells us precisely what must happen in a digital computer under normal operation, and there is no room for free will, sentience, consciousness or emotion in that.
If any kind of computer ever becomes conscious, in the sense of possessing awareness, emotions and free will, that will have to be some kind of computer that is no longer digital and is not a computer in the sense that we currently understand the word—which is inherently linked to digital technology.