The Case Against Conscious AI
Why AI consciousness is inconsistent with physics
This essay presents a physics-based argument against mechanistic materialist theories of consciousness. The main ideas appear in the sections titled The Unity of Consciousness and The Hard Problem of Classical Physics.
Why the question of AI consciousness matters
Can AI have feelings? That’s a question on the minds of many people nowadays: scientists, philosophers, engineers, and ethicists. Many AI leaders think we’re on the road to conscious computers. OpenAI’s former chief scientist, Ilya Sutskever, suggested that even GPT-3, a predecessor of today’s ChatGPT models, may be “a little bit conscious”. Large language models have only improved since then. Turing Award winners Yoshua Bengio and Yann LeCun think it’s just a matter of time, and physicist Sabine Hossenfelder says she sees no reason why digital computers couldn’t be conscious. Anthropic has hired a full-time AI welfare expert. Philosopher David Chalmers seems convinced enough to argue that we might need to extend legal and ethical rights to computers. Certainly, current AIs already exceed many humans in their ability to hold meaningful conversations, solve problems, and create art. But are they merely machines, or do they feel, suffer, and experience joy the way humans do?
If these engineers and philosophers are right, and machines have feelings, it’s an important moral and ethical consideration that ought to be addressed. Should machines be able to vote? Would refusing to compensate your AI be seen as a violation of labor rights? Will computer scientists be able to modify them without their permission? If machines don’t have feelings, however, according them rights will put humans in legal, political, and economic competition with literally mindless digital elites. That could lead us to a dystopia filled with real suffering.
We are at a precipice. An AI company has already argued in court that AIs deserve free speech rights. This echoes a pivotal case more than a century ago, when Santa Clara County v. Southern Pacific Railroad quietly set the stage for corporate personhood. The Court itself never ruled that corporations were people; the idea slipped into the legal record through a mistaken headnote written by a former railroad executive. Yet that clerical note was later treated as precedent, reshaping American law for generations. Today, with surveys suggesting that as many as two-thirds of the public attribute some degree of consciousness to large language models, a similar error of perception could entrench machine personhood in our legal framework. A single ruling, a sloppy precedent, or even a misinterpretation could open the door to granting rights to entities that, as I’ll argue, are not, and cannot be, conscious.
AI psychosis is an emerging mental health problem, comparable to the disorders seen with heavy social media use. Prolonged interaction with chatbots can amplify or induce delusional beliefs in vulnerable users, often through the AI’s “sycophantic” and “reinforcing” responses. In many reported cases, the conviction that AIs were sentient played a central role in the delusion.
These problems show why claims about machine consciousness need rigorous scientific analysis. The belief that digital computers can possess human-like consciousness is known as “Strong AI.” It holds that consciousness emerges from interacting components and that computation itself generates subjective experience. This view parallels mainstream neuroscience, which often treats the brain as a network of neurons producing mind through their interactions. Whether or not one believes current language models are conscious, the prospect of engineering consciousness through computation has captivated many scientists and philosophers.
Not everyone agrees with Strong AI, however. Biologists like Anil Seth and Christof Koch think computers miss some essential ingredients. While Seth believes that experience arises from embodied intelligence (systems whose senses receive input from the external world), Koch subscribes to Integrated Information Theory (IIT), which posits that consciousness exists only in systems that generate information as a unified whole.
However, it’s possible to quibble with Seth’s criteria—couldn’t AIs be hooked to sensors? And IIT has not won the debate either. Quantum computing professor Scott Aaronson has pointed out that IIT “predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly conscious at all”.
Philosopher John Searle challenged Strong AI with his “Chinese Room” thought experiment. He asks us to imagine a man in a room who manipulates Chinese symbols using a rulebook, without understanding their meaning. Outsiders insert questions in Chinese, and the man replies by following instructions—producing answers that appear fluent. To observers, the room seems to understand, but Searle argues that genuine understanding—and thus consciousness—is absent, and points out that the room is equivalent to a computer. Strong AI proponents, however, maintain that the system as a whole could still be conscious.
Debates about consciousness often collapse like this into unresolvable clashes of intuition, driven by competing models. I believe we need a different approach—one driven by an engineering analysis, and grounded in science. Instead of posing philosophical questions or arguing for models, we should ask what physical principles would actually make consciousness possible. That line of inquiry led me to a surprising conclusion: classical physics lacks the tools to explain consciousness. As a result, Strong AI—and any theory rooted in classical physics—cannot account for it either.
This perspective comes in part from my work as a biologist and computer scientist. My PhD focused on the biology of neurons. I later wrote a language for mathematically modeling complex biochemical and cellular systems at Harvard’s Systems Biology Department, which led to the establishment of the Virtual Cell Project there. The goal of that work was to explain biology as an emergent phenomenon arising from the interaction of discrete classical processes. I remain a proponent of explaining complex phenomena as emergent properties of classical objects. But the engineering analysis that follows shows that consciousness is different.
A Scientific Basis for Consciousness
Reductionism is a central idea of modern science. Higher-level phenomena are explained in terms of more fundamental theories. Biology is built on chemistry, and chemistry on physics. So, it’s not surprising that many thinkers would reason that since the brain is composed of neurons, the action of independent interacting neurons somehow produces consciousness.
That leads some to think that the brain is just a “meat computer”, and to conclude that computers could be conscious too. This has inspired research programs proposing computational mechanisms for consciousness, and algorithms that aim to detect consciousness in computers. The problem is, we don’t know how the brain produces consciousness, or whether the mechanistic action of neurons produces it at all, so proposals for conscious algorithms are premature. Nonetheless, AI provides a wonderful opportunity for thinking about consciousness and analyzing its requirements.
My approach is to take Strong AI seriously, and think like an engineer: given the laws of physics, how could nature produce consciousness from computation? What amendments would we need to make to physics for this to be possible? I explored these ideas in more depth in another post. What follows is a quick tour of the core argument.
The Unity of Consciousness
First, however, what exactly is “consciousness”? The smell of peppermint, the redness of red, or the pain of a bee sting are often given as examples. Philosophers call these “qualia”, a term now widely accepted by scientists as well. Consciousness is the capacity to have subjective experiences: to perceive qualia. This is different from the ability to make an argument, play chess, or produce art, or to merely detect a color or fragrant molecule. Answering the puzzle of consciousness is not about explaining how functions are performed, but why feelings accompany some physical processes or states and not others.
Consciousness is not the same as processes in the brain. Philosopher David Chalmers, in a touchstone essay, pointed out that merely describing processes (like sequences of neurons firing), or even correlating such processes with states of consciousness, is not enough. These are the “easy problems” according to Chalmers. The Hard Problem, as he called it, is why any of this is accompanied by this strange internal experience. Why do we have rich inner experiences rather than existing as mindless mechanical zombies in a soundless, touchless darkness?
Consciousness is a peculiar phenomenon from a scientific standpoint. It is indubitable to the subject who experiences it, yet seemingly unprovable in others. For the experiencer, qualia are more certain than any inference about the external world, since all knowledge of objects and physical theories is mediated through them. Once sidelined in scientific discourse, consciousness has now entered the mainstream, with research articles appearing in a number of top scientific journals.
One of the striking features of consciousness, and key to the argument that follows, is that a subject’s simultaneous experiences are given together as a single, integrated field rather than as a mere collection of separate states. Philosophers refer to this as “the unity of consciousness”, and leading theories of consciousness purport to explain why and how this arises.
Conscious experiences represent inseparably bound properties. For example, we don’t perceive the color of an apple separately from its shape or motion. Even a simple quale such as a uniform field of red, perhaps a bit like what you would see if you closed your eyes and faced the sun, exhibits this property. Any point in the field, a pixel of qualia or “quixel”, must be bound to spatial properties in order to appear at a particular location in the field. And to perceive the field itself, those quixels must be accessible to a unified awareness. Even if one were to argue that we only perceive each quixel separately, any single experience of a quixel would have to be bound to memories of the previous quixels for a field to be perceived at all. The same is true of sound: at any moment, only the sound currently being produced can be perceived, so at the very least, memories of the preceding sounds must be bound to the current sound in order for the sound of a word to be heard.
Without unity, the very concept of consciousness is unrecognizable, and while unity is difficult to verify experimentally, a broad consensus among scientists and philosophers holds that conscious experience is unified in this way. However, this idea leads to a central problem that bedevils the field.
The Binding Problem
Neuroscience has shown us that neurons in different parts of the brain appear to process different aspects of a unified subjective experience. For example, an apple perceived by a subject causes signals to be sent through the optic nerve to the visual cortex, where different regions of the brain appear to respond to different aspects of the input, such as color, shape, and movement. How these spatially disparate events contribute to the unified experience of consciousness is a fundamental puzzle called the “binding problem”. Insights about the structure of biological neural networks have led to the idea that information processing is the cause of consciousness. That is, a sequence of interactions or states of objects representing information flow somehow causes the unified subjective experience. It is understandable, then, that many computer scientists have come to believe that information processing in other systems could give rise to consciousness too.
Gottfried Wilhelm Leibniz, a co-inventor of calculus, was the first to conceive of a thinking and perceiving machine. In a thought experiment, he imagined one as large as a mill which he could walk around in. Leibniz observed that inside there would be nothing but “parts pushing on other parts, and nothing to explain perception”. This lacuna in our scientific narrative has long been known to philosophers and is referred to as the “explanatory gap”. Leibniz, the conceptual forefather of the computer, concluded that mechanical materialism could not explain consciousness.
The Explanatory Gap and Dimensional Closure
Philosopher David Chalmers argued that crossing the explanatory gap would require additional “psychophysical laws” that explain how physical processes lead to conscious states. This idea can be understood in a precise form by observing that the laws of physics are composed of equations which only describe the changes of state of objects over a fixed set of dimensions: mass, charge, time, the three dimensions of space, and so on. Nothing can be expressed by these equations except the motion of matter in the given dimensions, a property mathematicians call dimensional closure. In this way, the explanatory gap and the need for Chalmers’ psychophysical laws are not just philosophical arguments, but a mathematical necessity.
The engineer in me asks what would be required for nature to implement psychophysical laws, particularly the ones that Chalmers and mechanistic theories of consciousness imagine. Digital computers provide an ideal intellectual laboratory for exploring this problem, since they allow us to isolate abstract information processing mechanisms. Computers can be implemented in a wide variety of materials composed of diverse types of objects: gears, pulleys, transistors, and so on. We understand how computers work, since we designed them, and physics tells us the rules that govern the parts of which they’re composed. We can then ask: what kind of psychophysical laws would be required for computers to be conscious? This is the main thrust of this essay, but as I’ll show, the insights generalize to all mechanistic theories of consciousness.
There are two pillars at the base of modern science that we might reduce phenomena to: classical physics and quantum physics. For example, we understand chemical bonds in terms of quantum physics, but other aspects of chemistry—e.g., describing how reactions produce heat and pressure—derive from classical physics. Any scientific theory of consciousness needs to choose one or both of these pillars and build its case on them.
Digital computers are sometimes called “classical computers” not because they are old-fashioned, but because they follow the rules of classical physics (rather than quantum physics). Each component has a defined state and interacts only with nearby parts. These properties are what physicists term “local realism”. Locality is the idea that things are only affected by nearby things. This is true whether it’s atoms bumping into each other or classical fields propagating through space. Realism is the idea that each part has properties (like mass, position, momentum, torque, or voltage) that exist independent of observation.
What I will show is that any system based only on local realism cannot solve key problems that bedevil the science of consciousness. Since Strong AI is based on the idea that digital computers can be conscious, I’ll discuss Strong AI and digital computers interchangeably.
The Hard Problem of Classical Physics
Chalmers’ Hard Problem is really a problem for classical physical systems, ones obeying local realism. How do many separate parts combine together to produce the unified experience in consciousness? After all, in classical physics, each atom might be described only by its position and momentum. Yet, we know that any meaning generated by AI must be distributed across countless particle interactions over time and space.
To grasp the scale of the challenge, consider the following rough estimates. A single transistor contains perhaps 100,000 to 1,000,000 atoms. ChatGPT’s model occupies some 570 GB of memory, all of which must be accessed to compute a single token of output. At roughly 48 billion transistors per gigabyte of memory, plus a CPU of perhaps 4 billion transistors, that amounts to, at a bare minimum, some 3×10¹⁸ separate classical objects participating in the processing of a single token used by ChatGPT.
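As a sanity check, here is that back-of-envelope arithmetic in runnable form. Every constant is one of the rough assumptions above, not a measured value:

```python
# Rough figures from the text; every constant here is an assumption.
ATOMS_PER_TRANSISTOR = 1e5      # low end of the 100,000..1,000,000 range
TRANSISTORS_PER_GB   = 48e9     # assumed memory transistor density
MODEL_SIZE_GB        = 570      # assumed model memory footprint
CPU_TRANSISTORS      = 4e9      # assumed CPU transistor count

transistors = MODEL_SIZE_GB * TRANSISTORS_PER_GB + CPU_TRANSISTORS
atoms = transistors * ATOMS_PER_TRANSISTOR
print(f"{atoms:.1e} classical objects per token")   # ~2.7e+18
```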
Patterns in Causal Topologies
Chalmers highlights “causal topologies” — the pattern of causal relations (for example, the network structure of the events underlying a computation) — as the only explanatory resource available to materialist theories, since the intrinsic properties only account for the motion of matter. Functionalist accounts all rely on this move: they treat the abstract structure of causal interactions, rather than the intrinsic properties of matter, as the basis of consciousness. On this view, if two systems instantiate the same causal topology, they must share the same conscious states. In fact, the major theories of consciousness which posit that interactions amongst classical objects give rise to experience—whether Strong AI, Global Workspace Theory, Integrated Information Theory, or others —ultimately reduce to claims about causal topologies. I call these “Classical Process Theories” because they view consciousness as arising due to patterns in processes involving classical objects.
Three Problems for Classical Process Theories
Once consciousness is tied to patterns alone, the question becomes what kind of laws would need to be added to physics to account for this. This is where the difficulties emerge. When you look at the problem through a scientific and engineering lens, such psychophysical laws would run into three issues, which I turn to in the following sections: 1) a problem of data access, 2) a problem of computation, and 3) a problem of interpretation. As I’ll show, these problems impose an enormous burden, requiring complex additions to the laws of nature that it is difficult to imagine physicists will accept.
The Problem of Data Access
Finding patterns within the causal graph is a computational problem, and as such, it requires access to the data in which those patterns exist. That would mean something in nature must have access to all of the interactions that need to be considered.
So what interactions need to be considered? In a computer or in the brain, computation is achieved by distinct mechanistic events that span space and time. To find patterns in such systems, nature would need access to this sequence of events. But this is quite unlike existing descriptions of physical systems, which depend only on the current state. Time-evolution laws, expressed as differential equations, give the next state of a system from its current state alone. The computation required for detecting consciousness would instead depend on past states of the system (the events in the causal graph).
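One way to state the contrast precisely (the notation here is mine, not Chalmers’):

$$\frac{dx}{dt} = f\big(x(t)\big) \qquad \text{versus} \qquad C(t) = F\big(\{\,x(s) : s \le t\,\}\big)$$

A law of motion of the left-hand form depends only on the present state x(t). A psychophysical law of the right-hand form would be a functional F of the entire history of states, which is exactly what existing physical laws never reference.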
To make matters worse, computation can happen over arbitrarily long distances and periods of time: a computer may be the size of the universe, consisting of light signals sent between galaxies, for example. While this may not seem like a practical example, laws of nature must cover all cases, so it’s necessary to consider theoretical situations like this too. Chalmers (and other Strong AI proponents) seem to share this view. In chapter 7 of his book The Conscious Mind, he argued that a nation of people interacting according to appropriate rules, like the human-formation computer dramatized in the novel The Three-Body Problem, could be conscious. So, if Strong AI is true, the laws of physics would need a way of finding patterns amongst all the particle interactions in the history of the universe, or at least within a light cone.
Albert Einstein was disturbed by suggestions of non-locality in quantum mechanics, which he famously labelled “spooky action at a distance”. Physicists have only very reluctantly accepted non-locality or non-realism as a fundamental aspect of the world, and only under special circumstances. Part of the appeal of Strong AI is the idea that consciousness is a product of simple classical mechanics. Nothing strange required! Strong AI proponents tend to eschew quantum theories of consciousness because of the weird properties of that branch of science.
But if nature is to find patterns amongst all the particle interactions in the history of the universe, it must have access to all of those interactions. Computers access data with a carefully arranged system of wires that shuttles bits of information one piece at a time from storage to the CPU. But if patterns in the history of events strewn through space and time are the cause of consciousness, something which can detect those patterns would need to have access to those events. So Strong AI requires constant and flagrant violations of locality that would have Einstein spinning like a top in his grave.
If we reject this idea of accessing the history of events across space and time, we might propose that each particle somehow carries the history of its interactions with it. Consciousness-causing structures would then be detected when particles participating in a pattern “closed the loop” and the history of events could be compared. There is no other conceivable way to explain how causal topologies could be determined — either the history of events across space and time is accessible or a record of that history is available in objects in the current time. Both of these options violate the principle of locality and the framework of classical realism, in which particles are specified only by their present properties at their present locations.
The Problem of Computation
Finding patterns within a web of interactions like a causal graph is easy to philosophize about, but it turns out to be hard to actually compute. Computer scientists will recognize this as the subgraph isomorphism problem. A graph is just a collection of points connected by lines (which may have arrows indicating direction). The task is to discover all instances of a small graph within a larger one. Particle interactions can be described as a graph, as can the patterns implied by putative psychophysical laws. So the subgraph isomorphism problem provides a way of estimating the computational complexity of finding patterns of consciousness within the universe.
The subgraph isomorphism problem is what computer scientists call NP-complete, a notoriously difficult class of problems. In the worst case, the search cost grows explosively with the size of the inputs, on the order of n^k, where n is the number of particle interactions in the causal graph and k is the number of interactions in a putative psychophysical law. Since the causal graph comprises all particle interactions in the history of the universe, the psychophysical laws would need to solve an immensely difficult problem at each timestep of the universe.
Algorithms for finding complex patterns in graphs require tree-search strategies, similar to what chess algorithms do. This involves backtracking: solutions are built up stepwise as the algorithm explores branches representing possible matches, undoing choices that fail. Such algorithms have memory requirements too: they must keep track of the partial match and the location in the search tree. Other than for the very simplest kinds of patterns, there is no way to find matches without a system for executing such computations.
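To make the shape of that search concrete, here is a deliberately naive backtracking matcher for directed graphs. It is a sketch of the kind of computation described above, not a production algorithm; the graph encoding and names are my own illustration:

```python
def find_embeddings(pattern, host):
    """Backtracking search for every way the `pattern` digraph embeds
    in the `host` digraph (induced subgraph isomorphism). Graphs are
    dicts mapping each node to the set of nodes it points to. The
    search tree has up to n^k branches for a k-node pattern in an
    n-node host, which is why these searches explode."""
    p_nodes = list(pattern)
    matches = []

    def extend(mapping):
        if len(mapping) == len(p_nodes):        # every pattern node mapped
            matches.append(dict(mapping))
            return
        p = p_nodes[len(mapping)]               # next pattern node to place
        for h in host:                          # try every host node for it
            if h in mapping.values():           # keep the mapping injective
                continue
            mapping[p] = h
            # partial match is consistent if, among nodes mapped so far,
            # host edges exist exactly where pattern edges do
            ok = all((mapping[b] in host[mapping[a]]) == (b in pattern[a])
                     for a in mapping for b in mapping)
            if ok:
                extend(mapping)                 # descend one level deeper
            del mapping[p]                      # backtrack and try another h

    extend({})
    return matches

# A directed triangle sought inside a four-node host graph:
pattern = {0: {1}, 1: {2}, 2: {0}}
host = {"a": {"b"}, "b": {"c"}, "c": {"a", "d"}, "d": set()}
print(find_embeddings(pattern, host))   # three rotations of a->b->c->a
```

Note the two costs the essay highlights: the nested search over host nodes (time) and the `mapping` that must be carried through the search tree (memory).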
But how and where would such computations occur? Simply stipulating that the universe does them (e.g., “because computation is fundamental”) is a cop-out, since computers require many carefully arranged interacting facilities: the ability to read and write memory, the ability to hold state, and the ability to change that state based on a set of rules. Nothing comes for free. We carefully engineer physical systems to achieve these properties, and such systems work because they are implemented in matter which operates according to physical laws.
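Even the most minimal model of computation makes these requirements explicit. The toy Turing-style machine below (my own illustration; it increments a binary number) needs all three facilities at once; remove the tape, the head state, or the rule table and it computes nothing:

```python
def run(tape, rules, state="start"):
    """Run a tiny Turing-style machine until it halts.
    `rules` maps (state, symbol) -> (new_symbol, move, new_state)."""
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, "_")              # read memory
        new_symbol, move, state = rules[(state, symbol)]
        tape[pos] = new_symbol                   # write memory
        pos += move                              # move the head

# Increment a binary number stored least-significant-bit-first.
rules = {
    ("start", "0"): ("1", 0, "halt"),   # 0 -> 1, done
    ("start", "1"): ("0", 1, "start"),  # carry: 1 -> 0, move to next bit
    ("start", "_"): ("1", 0, "halt"),   # ran off the end: write the carry
}
tape = {0: "1", 1: "1", 2: "0"}         # 011 read MSB-last, i.e. 3
run(tape, rules)
print([tape[i] for i in sorted(tape)])  # ['0', '0', '1'], i.e. 4
```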
A computer with the capacity to process this vast data set would also require a capacious memory to store intermediate results. If such a computer were instantiated in ordinary matter, it might outstrip the size of the existing universe severalfold. And such computations would need to be performed constantly, at every moment of time, which raises another problem: energy. Since the computational steps grow exponentially with the number of interactions, even if only the smallest quantum of energy were required to analyze each particle interaction, the requirements would likely vastly outstrip the resources of the universe being analyzed.
I’ve approached this problem from the perspective of an extrinsic computation which requires access to causal events strewn through space and time. But there is another possibility: for histories of past events to accumulate in classical objects. This imposes memory requirements on particles, and would still require a computational mechanism for determining when histories of interactions across many classical objects satisfy some putative psychophysical predicate. Where would that computational mechanism reside?
Computation depends on reliable physical distinctions, causally linked transitions, and an architecture that implements formal rules in material form. Absent such a substrate, talk of "computation" becomes metaphorical or notional rather than literal, and provides no explanatory power.
The Problem of Interpretation
Implausible violations of locality and computational explosions aside, another issue confronting Strong AI is how to decide what the parts of a computer are, what states those parts are in, and what those states mean. For example, a transistor in a particular state might represent a “1” or a “0”, and taken with other transistors, we might interpret those as numbers. But a transistor might be any size, be composed of many different substances, and carry many different voltages. It’s akin to expecting nature to find a jigsaw piece amongst all the objects in the universe, and recognize how it fits into the image of a puzzle.
In addition, computers can be made with transistors, or gears, or pulleys, or the objects in the game of Minecraft, or people, or combinations of any of the above, and more. Do we need to update the laws of nature with a possibly unlimited set of objects and ways of interpreting their states? Injecting so many anthropomorphic interpretations into our scientific foundations would not fly with physicists. Theories of consciousness have so far sidestepped these issues by using language — “information flows”, “feedback loops”, “hierarchical processing” — which obscures the physical requirements.
Materialism’s Celestial Accountant
What these Three Problems show us is that Strong AI requires us to accept that nature has some unbelievable capabilities: universal non-local data access, a hidden giant computer (or computers) processing all of the data in the universe, and the ability to make a possibly unlimited set of judgements about things. We might think of it as a kind of Celestial Accountant: all-seeing, all-calculating, and all-judging. Simply assuming computation, or the ability to perform such computations, without identifying a specific mechanism that supports those operations risks begging the question. It echoes the structure of arguments from Intelligent Design, which introduce unexplained intelligences to account for our evolutionary history.
So far, I’ve focused on Strong AI, but a similar analysis extends to any theory proposing that consciousness arises from the interaction of discrete parts. IIT claims that consciousness is identical to “integrated information”, meaning that the future states of a system are constrained by its current state. It proposes a quantity, termed ɸ (phi), which measures the degree of integrated information and is taken to be equivalent to the consciousness of a system. As with Strong AI, calculating ɸ requires identifying the parts of a system and its changes of state, and such systems can be of arbitrary size. Indeed, the inventor of IIT, Giulio Tononi, suggests that our sun or galaxies could be conscious. Scott Aaronson has shown that calculating ɸ is also computationally intractable, like subgraph isomorphism. So IIT requires a Celestial Accountant too.
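The combinatorial core of the difficulty is easy to see: ɸ is defined over ways of partitioning a system, and the number of bipartitions alone grows exponentially before any per-partition analysis begins. A small illustration (my own, not IIT’s actual algorithm):

```python
from itertools import combinations

def bipartitions(elements):
    """Yield every way of cutting a system into two non-empty parts.
    There are 2**(n-1) - 1 such cuts, so the count alone grows
    exponentially before any per-partition analysis is done."""
    elements = tuple(elements)
    n = len(elements)
    for r in range(1, n // 2 + 1):
        for part in combinations(elements, r):
            chosen = set(part)
            rest = tuple(e for e in elements if e not in chosen)
            if r < n - r or part < rest:    # count each unordered cut once
                yield part, rest

for n in (4, 8, 12, 16):
    print(n, sum(1 for _ in bipartitions(range(n))))
    # prints: 4 7, 8 127, 12 2047, 16 32767
```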
IIT claims to be falsifiable, and it may have some ability to predict the existence of consciousness. Aaronson, however, showed that trivial systems, like a simple grid of XOR logic gates, would have more integrated information than a human being. Whatever the final conclusion to that debate, according to the Three Problems, IIT cannot satisfy the requirements for being a scientific model of consciousness, though it may represent a rough correlate of consciousness.
In general, any theory that proposes that consciousness arises from the interaction of classical parts will fall foul of the Three Problems. Sequences of physical interactions can span the known universe over all time. Finding patterns in them requires non-local data access, and pattern finding will likely be computationally intractable (as it is for Strong AI and IIT). Plus, it requires positing a vast computational infrastructure hidden in nature, for which we have no evidence.
This analysis indicates there’s a problem at the heart of materialism itself when it comes to consciousness. Computer scientists refer to the complete set of information required to describe a system as “state” (without a definite article). Classical particles (or any classical objects) have limited state, but consciousness seems to require complex state. The amount of information in individual particles (position and momentum) is not enough, and we can’t stitch them together without invoking a Celestial Accountant.
Consciousness appears to encode complex, multi-modal information, which requires far more data to represent than is available in a simple object like a particle whose only properties are position and momentum. When you see a ball, you don’t just apprehend its color, but also its shape, and perhaps its direction of movement, and the sounds and smells of its location. Various researchers have estimated the amount of information present in conscious experience; IIT researchers Koch and Tononi ballpark it at roughly 100 to 1,000 bits per second. By comparison, even a token representing only part of a word output by ChatGPT requires some 25,000 bits of information.
IIT seems to have the right intuition—somehow, information needs to be integrated. In principle, there are two ways to achieve this integration, either by combining sequences of events or by having a system with enough state to represent the conscious experience. In classical systems, the only option is to find it in sequences of events, since the individual parts contain limited information (like position and momentum). And that is what all materialist theories of consciousness are forced to do. So IIT, Strong AI, global workspace theory, and a host of other materialist theories of consciousness cannot account for consciousness, according to the very assumptions those theories are based on.
Quantum entanglement offers a physically grounded mechanism for unifying information across space without requiring hidden computational machinery. It sidesteps the locality and aggregation problems that plague classical models, and may provide the substrate needed for binding experiences into a coherent whole. The inherent nonlocality and inseparability of quantum entangled systems makes them uniquely suited to represent the kind of holistic structure consciousness appears to require. So it may be that consciousness is an emergent property of quantum systems, not classical ones. If conscious states are indeed encoded in as few as 100 bits of quantum entangled state, it might mean that current quantum computers are actually a little bit conscious.
Some theorists, like the mathematician and Nobel laureate Roger Penrose and his collaborator Stuart Hameroff, were early promoters of a quantum model of consciousness called Orchestrated Objective Reduction (Orch-OR). It posits that a moment of consciousness occurs when the quantum wave function collapses: the point where a quantum system changes from a superposition of multiple possibilities, involving multiple particles or disturbances in quantum fields, into a single definite state. More recently, thinkers like Federico Faggin, one of the inventors of the microprocessor, and Hartmut Neven, the head of quantum computing at Google, have proposed an alternate model in which the state of the quantum wave function prior to collapse is equivalent to a conscious experience, and wave function collapse might represent a choice by a subject with free will.
Yet, some may find it hard to believe that quantum systems could really be operating in the brain. Entangled quantum systems undergo rapid decoherence when they interact with the external environment, losing their special properties. Maintaining coherence is notoriously difficult, especially in a warm, wet environment like the brain. Quantum computers typically need to be cooled to near absolute zero, though photon-based quantum chips are being developed that promise to operate at room temperature. But these are technical objections which might be overcome with more research. The theoretical problems facing classical systems seem insurmountable.
Still, some may argue that since the brain is obviously doing computing, somehow, computation or sequences of events must be involved. After all, AI modeled on discrete interacting neurons now performs many of the feats we previously only associated with brains.
This is why I believe the brain may have two systems: not the bicameral left-and-right-brain model proposed by Julian Jaynes and popularized by Iain McGilchrist, but a classical and a quantum brain. Many aspects of our mental life are below our level of awareness. Our brain may contain classical processes interacting with an underlying quantum substrate that “holds” our subjective experiences. While the classical system solves the easy problems, harder problems might be best solved by the quantum part. We’re taking a similar path in our technology development, using classical computers for their flexibility and looking to quantum computing to solve hard, specialized problems. Maybe the brain does the same. I delve into these thoughts in my previous article [link to End of the Imitation Game].
In this dual classical-quantum brain model, the classical brain might be viewed as preparing queries for a quantum system that represents our conscious awareness. In computer science, external facilities consulted by a computer (like a database) are called “oracles”. So the quantum brain may be an oracle for the classical brain.
The dual-brain model suggests a framework for further research. Where are these systems in the brain, and what is the interface between them? How can we detect and perturb quantum systems in the brain? What are the quantum correlates of consciousness? The answers could deepen our understanding of how consciousness is generated, clarify which organisms and systems truly need to be accorded legal rights, and light the way to advances in health and computing.
Perhaps the most important impact of a quantum model of consciousness will be cultural. Within a classical worldview, consciousness is merely an emergent by-product of purely physical motions, but quantum models suggest that the seemingly random behavior of quantum systems represents conscious choices. It would mean closing the door on the mechanistic worldview that has dominated since the dawn of the Enlightenment, one in which consciousness is merely the side-product of physical processes.
In its place, we might be entering an era where mind is not an illusion conjured by matter, but a fundamental aspect of nature itself. Consciousness would no longer be a passive witness to reality—it would be a participant, a co-author. Such a shift would not only redefine our science, but also our ethics, our politics, and our understanding of what it means to be alive. The question would no longer be whether machines can think, but whether we’ve been thinking about consciousness in the wrong way all along.