Just how conscious are you? Is your dog conscious, or your goldfish? What about your laptop? Why does our consciousness seem to disappear in deep sleep, only to return on waking? For that matter, how is it that you can be more conscious when asleep than when slumped tired and drunk in front of the TV? How do you even know other people are conscious, like you, and don’t just seem to be? Why does consciousness reside in your brain and not in some other organ? Is there even any such thing as consciousness?
It’s an understatement to say that the study of consciousness (what it is, how it arises and in what) is one of the biggest questions facing science and philosophy. After all, our consciousness, not just an awareness of an outside world but the inner sense that we have an experience as ourselves, is not merely the means by which we interact with the world: everything we know, or will ever know, comes via our consciousness. Consciousness has long been considered what makes us human.
Some 25 years ago Christof Koch, the physicist and neuroscientist best known for his work on the neural basis of consciousness, bet a case of wine with the philosopher David Chalmers that by now we would have an explanation for consciousness, one that would help answer many of the questions above. This year he conceded that he had lost his bet. But perhaps only just: while the leading voices in the study of consciousness are still a long way from agreeing on any one theory (and there are many), recent insights have brought about seismic leaps in our comprehension.
“I’d make the same bet again,” says Koch. “Chalmers is primarily known for what’s called the ‘hard problem’—that you can’t take the brain and squeeze the juice of consciousness out of it because you can always ask why [it then seemingly stems from] this particular neuron or that particular event [in the brain]. But a quarter of a century on I think we’re now talking about something else.
“Assuming consciousness is real (and not every philosopher thinks it is, bizarrely, given that consciousness is a simple fact of our existence), we’re now on a quest for what’s called the ‘neuronal correlate of consciousness’, because you can lose your cerebellum, which holds the majority of your 100 billion neurons, and your conscious experience isn’t impaired,” Koch explains. “So which bits and pieces of the brain [make for consciousness]? Lots of people are looking for that [correlation] now, but the idea remains controversial. In the meantime David got some madeira wine from 1978. It was pretty good.”
Koch is a leading exponent of the Integrated Information Theory (IIT) of consciousness, proposed by Giulio Tononi of the University of Wisconsin-Madison, and one of the more exciting, if unproven, theories of consciousness to have emerged over recent years. IIT says that consciousness is a product of the mathematically precise integration of information, or interconnections, in a system as it encodes experience. That’s not just brains: any system that integrates information in this way will, in effect, have this thing called consciousness, however basic, however close to irreducible, the system may seem.
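To get a rough feel for what “integration of information” means here, consider a deliberately simple sketch in Python rather than IIT’s actual quantity, phi, whose definition is far more elaborate. The toy score below compares the entropy of each unit of a tiny system taken separately with the entropy of the system as a whole; the function names and example data are invented purely for illustration.

    from collections import Counter
    from itertools import product
    from math import log2

    # Toy "integration" score for a tiny system of binary units: the gap between
    # the summed entropies of the individual units and the entropy of the whole.
    # It is zero when the units vary independently and positive when the whole
    # carries structure that no single unit carries on its own. This is only a
    # rough illustration, not IIT's phi.

    def entropy(samples):
        """Shannon entropy (in bits) of a list of observed states."""
        counts = Counter(samples)
        total = len(samples)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    def integration(states):
        """Sum of the single-unit entropies minus the whole-system entropy."""
        n_units = len(states[0])
        parts = sum(entropy([s[i] for s in states]) for i in range(n_units))
        return parts - entropy(states)

    # Two units that always agree: the whole is more than its parts (1 bit).
    coupled = [(0, 0), (1, 1), (0, 0), (1, 1)]
    # Two units that vary independently of each other: zero integration.
    independent = list(product([0, 1], repeat=2))

    print("coupled    :", integration(coupled))      # 1.0
    print("independent:", integration(independent))  # 0.0

In this toy, two units that always agree score one bit of integration and two independent units score zero. IIT’s own measure is defined over a system’s cause-effect structure rather than raw entropies, but the whole-versus-parts intuition is similar.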
That claim gives us a new wave of panpsychism, the idea (and one too woo-woo for many) that anything might have, as Koch puts it, “even just an itsy bitsy” degree of consciousness that, once it’s dead, it no longer has. Not just dogs, cats and other mammals, but also plants and maybe bacteria, across the entire tree of life. “And that’s why so many people hate IIT too,” he admits. “But I find that idea very appealing. It’s general. It’s elegant and universal.”
Indeed, that consciousness is a product of the brain seems self-evident to Anil Seth, professor of cognitive and computational neuroscience at the University of Sussex, UK, and author of (the highly recommended) Being You: A New Science of Consciousness. One experience that first got him excited about the topic was going under general anaesthesia. “And I think that shows the intimate dependency of consciousness on what’s happening in the brain,” he says. “Intervene with the brain, through the use of psychedelics, for example, and consciousness changes. Or it just goes away entirely.
“General anaesthesia is,” he adds, cheerily, “existentially the closest thing we get to death, and when you’ve been through the nothing of non-existence it perhaps loses some of its fear.”
This isn’t just an abstract philosophical idea any more either. Technology, or, more accurately, a clever combination of existing technologies (transcranial magnetic stimulation, or TMS, to perturb the cortex, and electroencephalography, or EEG, to record the response) developed by Tononi and colleagues has taken us a step closer to something akin to a consciousness meter for brains, and has certainly shed new light on the level of consciousness in those diagnosed with brain disorders that put them in a persistent vegetative state.
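The number such a meter produces comes, in the published work on this approach (the perturbational complexity index, or “zap and zip”), from asking how compressible the brain’s response to the magnetic pulse is: a flat, stereotyped echo squashes down to almost nothing, while a widespread, differentiated one does not. The snippet below is only a toy illustration of that compressibility idea, run on made-up bit strings rather than real EEG recordings.

    import random
    import zlib

    # Toy illustration of compressibility as a crude complexity gauge (made-up
    # data, not real EEG): a flat, stereotyped response compresses to almost
    # nothing, while a differentiated response resists compression. Real indices
    # also require the response to spread across the cortex, which this toy
    # ignores entirely.
    random.seed(0)

    flat_response = "0" * 2048                                           # no differentiated echo at all
    varied_response = "".join(random.choice("01") for _ in range(2048))  # a highly differentiated pattern

    def compressed_size(bits: str) -> int:
        """Bytes needed to store the bit string after zlib compression."""
        return len(zlib.compress(bits.encode()))

    print("flat   :", compressed_size(flat_response))
    print("varied :", compressed_size(varied_response))

Real indices also demand that the response spread widely across the cortex, something a compression ratio alone cannot capture; the toy shows only why compressibility gives you a single number to work with.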
“We’ve now taught plants to associate sounds with food, for example. So I’m more and more inclined to ascribe consciousness to plants and trees too.”
Philip Goff
What this approach has revealed is that around 20 per cent of these people, long thought to have no consciousness, are in fact conscious but unable to outwardly express as much. That is a critical challenge to any plan to remove life support in such cases. “A lot of the philosophical discussions about consciousness have been around for 2,000 years or longer; it is more and better tools like these that will finally allow us to make progress through experimentation,” reckons Koch.
Of course, this poses an ethical conundrum for the way we treat other forms of life. One of the major implications of a deeper understanding of consciousness is a pushback against the bias we have against things that aren’t like us: even the idea that, say, a bee is conscious is more thinkable than it was just a decade ago. That’s so even if, as Koch points out, most of us have long intuited that mammals are conscious without this stopping industrial farming, the mistreatment of animals or meat-eating (he’s vegetarian for this reason).
“Certainly it seems as though the trajectory is to ascribe consciousness more and more,” reckons Philip Goff, professor of philosophy at Durham University and author of the ambitiously titled Why? The Purpose of the Universe. “We’ve now taught plants to associate sounds with food, for example. So I’m more and more inclined to ascribe consciousness to plants and trees too. And if a tree isn’t just a mechanism but a conscious organism with moral significance, that matters when it comes to, say, cutting it down. What we’re faced with, potentially, is a radically different way to connect to the natural world. But we just don’t know.”
Indeed, IIT isn’t the only theory (there are more than 20 doing the rounds), or even the most popular. Some 100 consciousness researchers signed a letter this year calling it pseudoscience, not least because the theory also implies that consciousness occurs at the precise point at which a system as a whole integrates more information than any of its parts do. In other words, there’s a number, which IIT calls phi, at which consciousness switches on. And the bigger the number, the greater the consciousness.
There’s also Global Workspace Theory, to cite one of the other front-runners. This argues that a brain becomes conscious when it becomes a “global workspace” that broadcasts incoming information to be used by many systems throughout the brain for different tasks. A single system in the brain with a single task—regulating breathing, for instance—would not be deemed conscious.
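As a loose caricature only (the process names, salience scores and messages below are invented for illustration, not drawn from any published model), the broadcast idea can be sketched in a few lines of Python: specialist processes post candidate content, the most salient item wins the workspace, and that single item is then handed to every subscriber for its own use.

    from dataclasses import dataclass
    from typing import Callable, List

    # Caricature of the global-workspace broadcast: specialist processes post
    # candidate content with a salience score; the winner is broadcast to every
    # subscribed process. Purely illustrative, not any researcher's model.

    @dataclass
    class Candidate:
        source: str
        content: str
        salience: float

    class Workspace:
        def __init__(self) -> None:
            self.listeners: List[Callable[[Candidate], None]] = []
            self.candidates: List[Candidate] = []

        def subscribe(self, listener: Callable[[Candidate], None]) -> None:
            self.listeners.append(listener)

        def post(self, candidate: Candidate) -> None:
            self.candidates.append(candidate)

        def broadcast(self) -> None:
            # Only the most salient candidate "enters the workspace"...
            winner = max(self.candidates, key=lambda c: c.salience)
            self.candidates.clear()
            # ...and is then made available to every specialist process at once.
            for listener in self.listeners:
                listener(winner)

    workspace = Workspace()
    workspace.subscribe(lambda c: print(f"speech system received: {c.content!r} (from {c.source})"))
    workspace.subscribe(lambda c: print(f"motor planning received: {c.content!r} (from {c.source})"))

    workspace.post(Candidate("vision", "red light ahead", salience=0.9))
    workspace.post(Candidate("interoception", "slightly hungry", salience=0.3))
    workspace.broadcast()  # only "red light ahead" is globally broadcast

A single-purpose system such as the breathing regulator mentioned above never needs the broadcast step at all, which is the theory’s intuition for why such a system stays unconscious.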
So which, assuming it’s not something else entirely, is right? Who knows, though a barrage of experiments designed to test both has suggested IIT might just be ahead. One of the problems of studying consciousness to date, says Goff, is that the theories are empirically equivalent to one another. On the one hand, he says, we all have the advantage of direct experience of consciousness. On the other, that experience is inaccessible to anyone else. You may know your own consciousness, but it can’t be observed from outside; data about it can only be collected subjectively. “That’s frustrating when people think we should be able to answer all questions through experiment,” he says. “Maybe then we should just be agnostic.”
One reason why Koch broadly favours IIT is that many of the alternatives depend on what’s called computational functionalism, an idea he isn’t buying. Anyone who has used an AI system knows how uncanny the experience can be. In 1950 the computing pioneer Alan Turing proposed his Turing Test: if we can have a conversation with a computer and not be sure whether it’s a computer or a person on the other end of the line, then we can say the computer meets the same standard of consciousness, or at least of intelligence, that we apply to people. An AI chatbot by the name of Eugene Goostman is said to have passed the test, and others will no doubt follow. Maybe that means we’re on our way to some Terminator-style future in which, as in the film, the Skynet AI becomes conscious and starts killing everyone.
Scott Aaronson, professor of computer science at the University of Texas, argues that for every characteristic of consciousness an advanced AI may still lack (an identity persistent over time, constant reflection and so on), the answer is always that a future upgrade may supply it. So at what point do we concede that it is conscious? It’s a pressing question; as he says, he knows people in the AI community who think all this may be only five years away. It’s also a question that unnerves. Last year a Google employee was fired for suggesting that the company’s AI model could already be sentient.
“It’s possible that there’s some pixie dust that makes the brain biologically different,” Aaronson says—or, indeed, that consciousness somehow doesn’t reside in the brain. “Humans too are physical objects and could be regarded as a kind of computer but we regard ourselves as conscious. So I think the burden is on us to find something about that future computer that differs from the brain so that either it can’t be regarded as conscious, or can be regarded as having a different kind of consciousness.”
But can we say that a computer is actually conscious? Or ever will be? This brings us, says Anil Seth, to the difference between intelligence (doing, planning, behaving in the world) and consciousness, which is more about a state of being, a distinction that matters all the more given that we humans are psychologically biased to conflate the two. “One is doing the right thing at the right time. The other is being aware of it,” as he sums it up.
Computational functionalism runs against this distinction. It is the idea that if you replicate on a computer the functions the brain performs, then the computer will not only be as intelligent as us but as conscious too. “But you can just sit there with your eyes closed alone in your bedroom and, say, be in love, or have a trip,” as Koch puts it. “So we don’t know that, just because a computer might be able to do everything we can do, it can also be everything we can be as conscious creatures. That’s an assumption that I’ve never believed. Consciousness isn’t just an algorithm.”
Whether there are different kinds of consciousness is yet another unanswered question. And, for all that we may be getting a better understanding of what consciousness is, we’re still not sure what it is for either. “Why is there anything in the physical world that gives rise to consciousness at all?” asks Seth. There are philosophical traditions that argue consciousness has no function at all, or that it’s somehow separate from the world. Biologists point out that, nonetheless, evolution did give rise to consciousness and that, evolutionarily speaking, it’s advantageous in terms of how we interact with the world.
Last year Andrew Budson, professor of neurology at Boston University (and a believer in the potential for machines to be conscious), proposed that consciousness is a form of memory system: we perceive the world through unconscious sensation that we then consciously remember, as information, as episodes from our past, as knowledge and meaning, to better understand the present and predict the future. He points out how much of what we do, speaking and understanding sentences in real time, playing sport, improvising in music, happens too fast for consciousness.
“Our conscious decisions are basically memories of unconscious decisions,” explains the neuroscientist. “That doesn’t mean we have no free will; we use unconscious and more conscious processes all the time. So much of what we do is unthinking. It’s like a horse and rider: the unconscious is the horse, consciousness the rider. The horse doesn’t need the rider to walk across the field, but it does need the rider to make it across greater territory. The rider still has to convince the horse where to go, just as we have to convince our unconscious brain not to eat the chocolate cake.”
Koch takes more of a physicist’s stance on the point of consciousness. “What is consciousness for? Well, what is electrical charge for? Nothing in the brain works without electrical charge. But it doesn’t make sense to ask ‘what’s the point of electrical charge?’ It’s exploited by the brain to do all sorts of things,” he suggests. “So I don’t think consciousness in the strict sense has a function. [It’s just that] we tend to link consciousness to intelligence and to self-consciousness especially because we prize self-consciousness, even if, for example, babies have very little self-consciousness...”.
That we do prize it brings its own problems. As Philip Goff notes, a better understanding of consciousness is a challenge to humanity’s self-perception. “Every generation absorbs the received wisdom and it can be hard to overcome that,” he says. Much as Galileo countered the narcissism that placed the Earth at the centre of the solar system, so the scientists of consciousness are countering human exceptionalism and the idea that we have anything like a soul. If consciousness is a product of a physical brain, then when the brain dies, so does our being, period.
That’s a hard sell when many, if not all, world views are so attached to dualism: the feeling the 17th-century French philosopher Descartes spoke of in his distinction between “physical stuff” and “thinking stuff”, the sense that our mental properties (pleasure and pain, love and hate, and so on) feel fundamentally different from our physical ones (charge, mass, hardness and so on). The notion that mind and body can be separated, transported and reintegrated is beloved of trans-humanists and sci-fi writers; see Upload or Altered Carbon, for examples. Unfortunately for them, yet more theories, deriving from the study of phantom limbs and of those neurons that are exposed to the circulation of the blood, suggest that mind and body may be even more integrated, as one hybridised system, than ever imagined.
“Our conscious decisions are basically memories of unconscious decisions.”
Andrew Budson
Anil Seth acknowledges that there is a deep sense of mystery surrounding the whole idea of consciousness, but he believes it’s more likely that, in the end, we will come to understand it not because we discover what he calls “some special sauce”, but because, much as we’ve come to understand how life itself comes about, we get a handle on a number of interacting processes.
“I think there’s a predisposition to over-complicate the issue of consciousness because we’re seeking to explain ourselves, so we set the bar high. We expect our understanding of consciousness to be intuitively satisfying in ways we don’t expect of other areas of science,” says Seth. “I understand that consciousness doesn’t feel like it should be just the stuff of neurons. And understanding how that is the case may decentre us humans, as we have done before, albeit causing chaos in the process. But I think the result will be that we’ll feel much more part of nature. There will be a freedom in not having to hang on to the idea of consciousness as something special.”