Will artificial general intelligence one day be conscious?
By April Cashin-Garbutt and Hyewon Kim
With the rise of ChatGPT, we may feel a step closer to artificial general intelligence (AGI), the quest to create AI that can solve any task a human can. But even if we could create AGI, would it ever be conscious, and how would we know? We spoke with 10 leading neuroscientists, who have all given seminars at SWC, to find out their thoughts. Here are their takes.
1. Absolutely!
“Yes it will be. We’ll know primarily because of the way it is built — the nature of the world models it has and the kinds of queries it can perform on its world model, which has a model of its own mind as part of it.” Dileep George, DeepMind and Vicarious AI
“Absolutely. Like the timing problem we studied, I think consciousness is nothing supernatural or mysterious but a sensorimotor process, maybe a slightly more complicated one in the brain. If one believes that artificial intelligence can be as intelligent as a human by using knowledge from studying adaptive motor control, as I do, one should also believe that AI will one day be as conscious as a human. Why not, if we understand human consciousness well enough? It is basically chemistry, physics and maths in the brain.
How will we know that AI has consciousness? We won’t, just as you don’t know whether I have consciousness. But since you are conscious (at least you think so), and you believe my brain is similar to your brain in terms of operating principles, you think that I’m conscious as well. We can apply the same reasoning to AI.” Xin Jin, East China Normal University
2. Maybe, but in a different way than humans
“The ‘how will we know’ part is more straightforward. If they pass the Turing test, they could fool you into thinking they’re conscious, at least from the outside. We talked about this in class recently.
There is a perspective on cognition called embodied cognition, which holds that the way we think, and the way our thoughts and emotions emerge, is fundamentally dependent on the fact that we have bodies that interact with the world and with other bodies out there. So a disembodied intelligence or AI might have a very different way of thinking and of experiencing.
If it had consciousness, it might therefore be a very different kind of consciousness. And then there are some people who say all you have to do is provide enough computing power and connections and eventually these AIs will become conscious.” Michael Platt, University of Pennsylvania
“I don’t think it will have the same concept of consciousness that is set up in our brains. Maybe it could be equally intelligent, but not as flexible? I’m not sure.
Some examples of the most impressive forms of intelligence, to me, are discovering new fields of maths and thinking outside the box. I guess it’s an open question as to whether AI will be able to do that, but I still think one day it will. Whether or not it would have the same concept of self-awareness or consciousness – maybe something like that could be programmed in if we understood the mechanisms well enough.” Ilana Witten, Princeton University
3. Should we care either way?
“I guess AI will one day be conscious. But do I care whether an AI system is conscious? Would I privilege that? How would I weigh the consciousness of an AI system over the consciousness of a child? It’s easy for me because I see such a difference between a child and a mouse. I don’t want a mouse to suffer, but the life of a child is infinitely more important and interesting to me than the life of a mouse. In that same sense, the life of a mouse is infinitely more interesting to me than the ‘life’ of an engineered network. The life of a child is in a different league altogether.” Alison Barth, Carnegie Mellon University
4. It depends on how you define consciousness
“It depends on how you define consciousness! I think we already have enough of a struggle to define consciousness and to have any sort of measure of it in the human brain, let alone in an artificial device. So I think we need more of an understanding of what happens in the human brain before those questions can be asked.
It is, of course, possible to build devices that have been programmed with sufficient information to exhibit a degree of autonomy and which can therefore make elementary decisions. Robotic devices are improving all the time and they will be increasingly useful, particularly from a clinical perspective. But I think this falls well short of what the brain is capable of.” Andrew King, University of Oxford
“I think that is almost a philosophical question – as people struggle to define what it means to be conscious.” Teresa Guillamón Vivancos, Instituto de Neurociencias de Alicante
“Interestingly, this question of what the point of consciousness is seems to me to have slightly gone out of fashion. I did my undergraduate experimental psychology degree at Oxford, where there was a big history of blindsight research – a phenomenon in which patients with brain damage affecting the visual cortex report not seeing anything in their blind field, yet when asked to guess whether a stimulus was present, they can perform at over 90% correct. What seemed interesting was that the blindsight patients wouldn’t voluntarily respond, or evaluate their responses, without conscious perception. So until we’ve figured out better what matters about consciousness and what it provides to us, I’m not sure this is going to be an easy question to answer.” Mark Walton, University of Oxford
“Will AI ever achieve consciousness? I’m not well positioned to answer that question – it depends on the definition of consciousness.” Michael Long, New York University
5. No!
“Honestly, I have no idea, but I would lean towards no. I think that looking as if it has awareness and consciousness is not the same as actually having consciousness. I do not know what consciousness is, what causes it, or what underlies it, and I do not believe that a large artificial neural network that is able to interpret and classify the world as we do, and looks very much like the way we do it, sufficiently meets the criteria to be deemed conscious – unless you redefine consciousness. That is where the Turing test and the other criteria that have been used in the past to assign certain facets of intelligence to machines come in – these criteria may need to be reassessed.
I frequently think about consciousness in the model organisms I study. In fact, there are different gradations, and there is another concept that people often talk about called sentience. That is important from an ethical point of view when you think about pain, not just as nociception, but as a higher level of experience of unpleasant inputs. You are forced to think about what it feels like to be a cephalopod or – to go back to a famous philosophy paper – a bat. And having observed lots of invertebrates, insects in particular, it’s difficult for me to imagine that they don’t have some form of consciousness.
To be able to operate in the world, you need to pay attention to only a subset of the inputs that are offered to you, or to combine certain groups of inputs. These faculties are, presumably, components of what I assume to be consciousness. Then it becomes a matter of semantics. In our research, we encounter this issue all the time: what does it mean to be asleep, what is sleep, what is pain, what is consciousness? All these issues, in the end, become semantic.” Gilles Laurent, Max Planck Institute for Brain Research