A Landscape of Consciousness: Toward a Taxonomy of Explanations and Implications by Robert Lawrence Kuhn
Abstract: Diverse explanations or theories of consciousness are arrayed on a roughly physicalist-to-nonphysicalist landscape of essences and mechanisms. Categories: Materialism Theories (philosophical, neurobiological, electromagnetic field, computational and informational, homeostatic and affective, embodied and enactive, relational, representational, language, phylogenetic evolution); Non-Reductive Physicalism; Quantum Theories; Integrated Information Theory; Panpsychisms; Monisms; Dualisms; Idealisms; Anomalous and Altered States Theories; Challenge Theories. There are many subcategories, especially for Materialism Theories. Each explanation is self-described by its adherents, critique is minimal and only for clarification, and there is no attempt to adjudicate among theories. The implications of consciousness explanations or theories are assessed with respect to four questions: meaning/purpose/value (if any); AI consciousness; virtual immortality; and survival beyond death. A Landscape of Consciousness, I suggest, offers perspective.
1. Chalmers’s “hard problem” of consciousness
Philosopher David Chalmers famously characterized the core conundrum of explaining consciousness—accounting for “qualia,” our qualitatively rendered interior experience of motion-picture-like perception and cognitive awareness—by memorializing the pithy, potent phrase, “the hard problem.” This is where most contemporary theories commence, and well they should (Chalmers, 1995b, 1996, 2007, 2014a, 2014b, 2016b).
It is no exaggeration to say that Chalmers’s 1995 paper, “Facing up to the problem of consciousness” (Chalmers, 1995b), and his 1996 book, The Conscious Mind: In Search of a Fundamental Theory (Chalmers, 1996), were watershed moments in consciousness studies, challenging the conventional wisdom of the prevailing materialist-reductionist worldview and altering the dynamics of the field. His core argument against materialism, in its original form, is deceptively (and delightfully) simple:
- 1. In our world, there are conscious experiences.
- 2. There is a logically possible world physically identical to ours, in which the positive facts about consciousness in our world do not hold.
- 3. Therefore, facts about consciousness are further facts about our world, over and above the physical facts.
- 4. So, materialism is false.
This is the famous “Zombie Argument” (infamous to some): whether creatures absolutely identical to us in every external measure, but with no internal light, no inner subjective experience, are “conceivable”—the argument turning on the meaning and implications of “conceivable” and the difference between conceivable and possible. (It can be claimed that the Zombie Argument for consciousness being nonphysical, like the Ontological Argument for God actually existing, sneaks the conclusion into one of the premises.)
Chalmers asks, “Why does it feel like something inside? Why is our brain processing—vast neural circuits and computational mechanisms—accompanied by conscious experience? Why do we have this amazing, entertaining inner movie going on in our minds?” (All quotes not referenced are from Closer To Truth videos on www.closertotruth.com, including 2007, 2014a, 2014b, 2016b.)
Key indeed are qualia, our internal, phenomenological, felt experience—the sight of your newborn daughter, bundled up; the sound of Mahler’s Second Symphony, fifth movement, choral finale; the smell of garlic, cooking in olive oil. Qualia—the felt qualities of inner experience—are the crux of the mind-body problem.
Chalmers describes qualia as “the raw sensations of experience.” He says, “I see colors—reds, greens, blues—and they feel a certain way to me. I see a red rose; I hear a clarinet; I smell mothballs. All of these feel a certain way to me. You must experience them to know what they’re like. You could provide a perfect, complete map of my brain [down to elementary particles]—what’s going on when I see, hear, smell—but if I haven’t seen, heard, smelled for myself, that brain map is not going to tell me about the quality of seeing red, hearing a clarinet, smelling mothballs. You must experience it.”
Since qualia constitute the core of the “hard problem,” and since the hard problem has come to so dominate consciousness studies that almost every theorist must confront it, seeking either to explain it or to refute it—and since the hard problem is a leitmotiv of this Landscape—I asked Dave about its backstory.
“I first remember presenting the hard problem in a talk at the first Tucson ‘Toward a Science of Consciousness’ conference in 1994. When did I first use it? Did I use it in writing before then? I’ve looked in my writing and have not found it [i.e., not prior to the 1994 talk]. The hard problem was part of the talk. I remember speaking with some students beforehand, saying I’m going to talk about ‘hard problems, easy problems.’ I had been already talking this way in my seminar the previous year, so maybe it was already becoming part of my thinking. But I didn’t think about it as an ‘insight.’ I just thought it a way of stating the obvious. ‘Yeah, there’s a really hard problem here.’ So, as part of the first couple of minutes of my talk, I said something like ‘everyone knows there is a hard problem’ …. And people took it and said ‘it’s this great insight’ … Well, it did become a catchy meme; it became a way of encapsulating the problem of consciousness in a way that made it difficult to ignore, and I’m grateful for that role. I had no idea at the time that it would catch on, but it’s good because the problem of consciousness is really easy to ignore or to sidestep, and having this phrase, ‘the hard problem,’ has made it difficult to do that. There’s now just a very natural response whenever that happens. You say, ‘Well, that’s addressing the easy problem, but it’s not addressing the hard problem.’ I think this helps in getting both scientists and philosophers to take consciousness seriously. But I can’t take credit for the idea. Everyone knew that consciousness was a hard problem way before me—my colleagues, Tom Nagel and Ned Block; philosophers like C.D. Broad almost 100 years ago; Thomas Huxley back in the 19th century; even Leibniz and Descartes—they all knew that consciousness was a hard problem” (Chalmers, 2016b).
Over the years, while Chalmers has played a leading role in expanding and enriching the field of consciousness studies (Chalmers, 2018), his overarching views have not changed: “I don’t think the hard problem of consciousness can be solved purely in terms of neuroscience.” As science journalist George Musser puts it, “By ‘hard,’ Chalmers meant impossible. Science as we now practice it, he argued, ‘is inherently unable to explain consciousness’” (Musser, 2023a, Musser, 2023b).
This does not mean, of course, that Chalmers is making a case for “substance dualism,” some nonphysical stuff (like the immortal souls of many religions). Chalmers is postulating a “naturalistic dualism,” where perhaps “information” is the connective, because while information is not material, it is embedded in the physical world. He notes, “We can also find information realized in our phenomenology.” This naturalistic dualism is a kind of property dualism (15.1).
To Chalmers, “It is natural to hope that there will be a materialist solution to the hard problem and a reductive explanation of consciousness, just as there have been reductive explanations of many other phenomena in many other domains. But consciousness seems to resist materialist explanation in a way that other phenomena do not.” He encapsulates this resistance in three related arguments against materialism: (i) The Explanatory Argument (“explaining structure and function does not suffice to explain consciousness”); (ii) The Conceivability Argument (“it is conceivable that there be a system that is physically identical to a conscious being, but that lacks at least some of that being’s conscious states”); (iii) The Knowledge Argument (“someone could know all the physical facts … and still be unable to know all the facts about consciousness”) (Chalmers, 2003).
“Physicalists, of course, resist these arguments,” says philosopher Frank Jackson. “Some deny the modal and epistemic claims the arguments use as premises. They may grant (as they should) the intuitive appeal of the claim that a zombie physical duplicate of me is possible, but insist that, when one looks at the matter more closely, one can see that a zombie physical duplicate of me is not in fact possible. Any physical duplicate of me must feel pain when they stub their toe, have things look green to them on occasion, and so on” (Jackson, 2023).
Philosopher Daniel Stoljar targets the conceivability argument (“CA”). Strictly speaking, he says, “CA is an argument against the truth of physicalism. However, since it presupposes the existence of consciousness, it may be regarded also as an argument for the incompatibility of physicalism and the existence of consciousness.” Stoljar’s epistemic view offers a two-part response. “The first part supposes that there is a type of physical fact or property that is relevant to consciousness but of which we are ignorant.” He calls this the ignorance hypothesis. The second part “argues that, if the ignorance hypothesis is true, CA is unpersuasive” for reasons of logic (Kind and Stoljar, 2023, pp. 92, 95).
Philosopher Yujin Nagasawa calls “The Knowledge Argument” (Jackson, 1982, 1986, 1995, 1998) “among the strongest arguments (or possibly the strongest argument) for the claim that there is [in consciousness] something beyond the physical” (Nagasawa, 2012a). Based on a thought experiment by Frank Jackson, it imagines “Mary, a brilliant scientist,” who lives entirely in a black-and-white room and acquires all physical, scientific knowledge about color—wavelengths of light in all detail—“but it seems obvious that when she comes outside her room, she learns something completely new, namely, what it is like to see color.” Prior to seeing the color, “she doesn’t have phenomenal knowledge of conscious experience.” While Jackson himself no longer endorses the argument, it is still regarded as one of the most important arguments against physicalism, though of course it has its critics (Garfield, 1996). Nagasawa, who did his PhD under Jackson, responds to critics of the argument (Nagasawa, 2010), but also offers his own objections and novel proposals (Nagasawa, 2008).
Frank Jackson himself sees much of the contemporary literature on consciousness as revolving around three questions. “Does the nature of conscious experience pose special problems for physicalism? Is the nature of conscious experience exhausted by functional role? Is the nature of conscious experience exhausted by the intentional contents or representational nature of the relevant kinds of mental states?” (Jackson, 1997).
To philosopher Philip Goff, there are two aspects of consciousness that give rise to the hard problem, qualitivity and subjectivity: qualitivity meaning that experiences involve sensory qualities, whether in real-time or via memory recall; subjectivity meaning that there is a subject who has those experiences, that “these experiences are for someone: there is something that it’s like for me to experience that deep red.” Goff argues that these two aspects of consciousness give rise to two “hard problems.” While either problem would be sufficient to refute materialism, he says, the hard problem of qualitivity is more pronounced—or at least easier to argue for—because the vocabulary of the physical sciences, which tell a purely quantitative story of causal structures, cannot articulate the qualities of experience; the language of physics entails an explanatory limitation (Goff, 2021).
Philosopher Colin McGinn provides a culinary perspective: “Matter is just the wrong kind of thing to give birth to consciousness … You might as well assert, without further explanation, that numbers emerge from biscuits, or ethics from rhubarb” (McGinn, 1993).
Philosopher Jerry Fodor put the problem into what he thought would be perpetual perspective. “[We don’t know], even to a first glimmer, how a brain (or anything else that is physical) could manage to be a locus of conscious experience. This … is, surely, among the ultimate metaphysical mysteries; don’t bet on anybody ever solving it” (Fodor, 1998).
2. Initial thoughts
Consciousness has been a founding and primary theme of Closer To Truth, broadcast on PBS stations since 2000 and now a global resource on the Closer To Truth website and Closer To Truth YouTube channel. What is consciousness? What is the deep essence of consciousness? What is the deep cause of consciousness? (These are not the same question.) Again, it is the core of the mind-body problem—how thoughts in our minds and sensations of our experiences interrelate with activities in our brains.
What does the word “consciousness” mean? What is its referent? “Consciousness” has multiple definitions, which has been part of the problem in its study. There are clear categories of consciousness, uncontroversially recognized. For example, distinguishing “creature consciousness” (the somatic condition of being awake and responding to stimuli) and “mental state consciousness” (the cognitive condition of experiential engagement with the environment and oneself). More importantly, distinguishing “phenomenal consciousness” (“what it is like”) and “cognitive consciousness” (Humphrey, 2023a, Humphrey, 2023b) or “access consciousness” (Block, 2023), which are more about function than phenomenology.
Philosopher Ned Block sees “the border between perception and cognition” as a “joint in nature,” primed for exploration. He says he was drawn to this subject because of the realization that the difference between what he calls “access consciousness (cognitive access to phenomenally conscious states)” and what he calls “phenomenal consciousness (what it is like to experience)” was rooted “in a difference between perception—whether conscious or unconscious—and cognitive access to perception” (Block, 2023).
With respect to “information,” it is suggested that “the word ‘consciousness’ conflates two different types of information-processing computations in the brain: the selection of information for global broadcasting, thus making it flexibly available for computation and report,” and “the self-monitoring of those computations, leading to a subjective sense of certainty or error” (Dehaene et al., 2017). But, again, the issue is phenomenal consciousness, and to the extent that each type of consciousness comes with inner experience, the same issues obtain.
Artificial intelligence pioneer Marvin Minsky calls consciousness “a suitcase term,” meaning that all sorts of separate or mildly related concepts can be packed into it. “Consciousness,” he says, “is a clever trick that we use to keep from thinking about how thinking works. And what we do is we take a lot of different phenomena and we give them all the same name, and then you think you’ve got it.” Minsky enjoys dissecting consciousness: “When people use the word ‘consciousness,’ it’s a very strange idea that there’s some wonderful property of the brain that can do so many different things—at least four or five major things and dozens of others. For example, if I ask, ‘were you conscious that you touched your ear?’ You might say ‘no, I didn’t know I did that.’ You might say, ‘yes.’ If you say yes, it’s because some part of your mind, the part that talks, has access to something that remembers what’s happened recently with your arm and your ear.” Minsky notes “there are hundreds of kinds of awarenesses. There’s remembering something as an image. There’s remembering something as a string of words. There’s remembering the tactile feeling of something” (Minsky, 2007a).
Minsky says there is no harm in having consciousness as a suitcase term for social purposes. When a word has multiple meanings, that ambiguity is often very valuable, he says. “But if you’re trying to understand those processes and you’ve put them all in one box, then you say, where in the brain is consciousness located? There’s a whole community of scientists who are trying to find the place in the brain where consciousness is. But if it’s ‘a suitcase’ and it’s just a word for many different processes, they’re wasting their time. They should try to find out how each of those processes works and how they’re related” (Minsky, 2007a).
Philosopher Massimo Pigliucci points out that “you do not need phenomenal consciousness in order to react to the environment. Plants do it, bacteria do it, all sorts of stuff do it.” But when it comes to emotion, he says, “Yes, you do need consciousness – in fact, that is what an emotion is. Emotion implies some level of internal perception of what’s going on, some awareness of the phenomenal experience” (Pigliucci, 2023a, Pigliucci, 2023b).
Suffice it to say that the hard problem refers to phenomenal consciousness. (This is not to say, of course, that cognitive or access consciousness is an “easy problem.”)
To Alex Gomez-Marin, a theoretical physicist turned behavioral neurobiologist, “Ask not what neuroscience can do for consciousness but what consciousness can do for neuroscience.” He laments, “When it comes to serious proposals that offer an alternative to materialism, the mainstream has its doors wide shut … I believe the underlying issue of this debate is a tectonic clash about the nature of reality … In other words, the dominant physicalist paradigm can tolerate many things (including its own internal contradictions and empirical anomalies), but not panpsychism, idealism, dual-aspect monism, or any other view … Any nonmaterialist whiff in the consciousness hunger games is punished. Challenge the core foundations, and you shall be stigmatized; propose a cutting-edge new color to the walls of the old building, you will be cheered” (Gomez-Marin, 2023).
On the other hand, philosopher Simon Blackburn cautions against overinflating consciousness as a concept. “I wouldn’t try to approach it by definition,” he said. “That’s going to be just a can of worms. Leibniz said that if we could blow the brain up to the size of a mill and walk around in it, we still wouldn’t find consciousness” (Blackburn, 2012).
To Blackburn, the hard problem is not what Chalmers says it is. “I think the really hard problem is trying to convince ourselves that this [consciousness problem] is, as it were, an artifact of a bad way of thinking. The philosopher who did the most to try to persuade us of that was Ludwig Wittgenstein; the central exhibit in his armory was a thing called the private language argument [i.e., a language understandable by only one person is incoherent]. Wittgenstein said if you think in terms of consciousness in that classical way, we meet the problem of other minds. Why should I think that you’re conscious? I know that I am, but what about you? And if consciousness in some sense floats free, it might sort of just come and go all over the place. As I say, the hard problem is getting rid of the hard problem” (Blackburn, 2012).
Physicist-visionary Paul Davies disagrees. “Many scientists think that life and consciousness are just irrelevant byproducts in a universe; they’re just other sorts of things. I don’t like that idea. I think we’re deeply significant. I’ve always been impressed by the fact that human beings are not only able to observe the universe, but they’ve also come to understand it through science and mathematics. And the fact that we can glimpse the rules on which the universe runs—we can, as it were, decode the cosmic code—seems to me to point to something of extraordinary and fundamental significance” (Davies, 2006a).
To computer scientist-philosopher Jaron Lanier, “Fundamentally, we know very little about consciousness and the process of doing science is best served by humility. So, until we can explain this subjective experience, I think we should accept it as being there” (Lanier, 2007a).
I should note that the mind-body problem is hardly the only problem in consciousness studies: there are myriad mind-related problems. Topping the list of others, perhaps, is the problem of mental causation: How can mental states affect physical states? How can thoughts make actions?
Physicist Uzi Awret argues that explaining how consciousness acts on the matter of the brain to “proclaim its existence” is just as hard as explaining how matter can give rise to consciousness. In fact, the two questions constrain each other. (For example, must panpsychists consider phenomenal powers and dualists kinds of interactionism?) Awret makes the insightful point that one reason the two questions should be conjoined is that they can be complementary in the sense that explaining one makes it harder to explain the other (Awret, 2024).
Mental causation is an issue for every theory of consciousness: a serious one for Dualism, less so for monistic theories—Materialism, Monisms, Idealisms, perhaps Panpsychism—in that everything would be made of the same stuff. Yet, still, mental causation needs explanation. But that is not my task here.
While precise definitions of consciousness are challenging, almost everyone agrees that the real challenge is phenomenal consciousness. Phenomenal consciousness is the only consciousness in this Landscape.
3. Philosophical tensions
Two types of philosophical tensions pervade all efforts to understand consciousness: (i) epistemological versus ontological perspectives, and (ii) the nexus between correlation and causation. The former distinguishes what we can know from what really exists; they can be the same, of course, but that determination may not be a superficial one and in fact may not be possible, in practice or even in principle. The latter has an asymmetrical relationship in that causation must involve correlation whereas correlation does not necessarily involve causation; the dyadic entities that correlate might each be caused by an unknown hidden factor that just so happens to cause each of them independently.
In addition, there are questions about the phylogenetic evolution of consciousness (9.10). Is it a gradual gradient, from simple single-cells seeking homeostasis via stimulus-response to environmental pressures, relatively smoothly up the phylogenetic tree to human-level consciousness (as is conventional wisdom)? Or is consciousness more like a step-function with spurts and stops? Is there a cut-off, as it were? Others, of course, maintain that consciousness is irreducible, even fundamental and primordial.
I give “Philosophical Tensions” its own section, however short, to stress the explanatory burden of which every theory of consciousness must remain cognizant: the epistemology-ontology distinction and the correlation-causation conundrum.
4. Surveys & typologies
Philosopher Tim Bayne suggests three ways to think about what consciousness is: (i) experience, awareness and their synonyms (Nagel’s “what-its-like-to-be”); (ii) paradigms and examples, using specifics to induce the general; and (iii) initial theories to circumscribe the borders of the concept, such that a more complete definition falls out of the theory. Examples of (iii) are conducting surveys and organizing typologies (see below) and constructing taxonomies (which is the intent of this paper) (Bayne, 2007).
To appreciate theories of consciousness, there are superb surveys and typologies, scientific and philosophical, that organize the diverse offerings.
David Chalmers offers that “the most important views on the metaphysics of consciousness can be divided almost exhaustively into six classes,” which he labels “type A” through “type F.” The first three (A through C) involve broadly reductive views, seeing consciousness as a physical process that involves no expansion of a physical ontology [Materialism Theories, 9]. The other three (D through F) involve broadly nonreductive views, on which consciousness involves something irreducible in nature, and requires expansion or reconception of a physical ontology [D = Dualism, 15; E = Epiphenomenalism, 9.1.2; F = Monism, 14] (Chalmers, 2003).
PhilPapers (David Bourget and David Chalmers, general editors) features hundreds of papers on Theories of Consciousness, organized into six categories: Representationalism; Higher-Order Theories of Consciousness; Functionalist Theories of Consciousness; Biological Theories of Consciousness; Panpsychism; Miscellaneous Theories of Consciousness (including Eliminativism, Illusionism, Monisms, Dualism, Idealism) (Bourget and Chalmers, PhilPapers). In presenting a case for panpsychism, Chalmers arrays and assesses materialism, dualism and monism as well as panpsychism (Chalmers, 2016a).
Neuroscientist Anil Seth and Tim Bayne gather and summarize a wide range of candidate theories of consciousness seeking to explain the biological and physical basis of consciousness (22 theories that are essentially neurobiological) (Seth and Bayne, 2022). They review four prominent theories—higher-order theories; global workspace theories; reentry and predictive processing theories; and integrated information theory—and they assert that “the iterative development, testing and comparison of theories of consciousness will lead to a deeper understanding of this most central of mysteries.” However, Seth and Bayne intensify the mystery by observing, “Notably, instead of ToCs [theories of consciousness] progressively being ‘ruled out’ as empirical data accumulates, they seem to be proliferating.” This seems telling.
An engagingly novel kind of survey of the mind-body problem is an insightful (and delightfully idiosyncratic) book by science writer John Horgan (2018). Rejecting “hard-core materialists” who insist “it is a pseudo-problem, which vanishes once you jettison archaic concepts like ‘the self’ and ‘free will’,” Horgan states that “the mind-body problem is quite real, simple and urgent. You face it whenever you wonder who you really are.” Recognizing that we can’t escape our subjectivity when we try to solve the riddle of ourselves, he explores his thesis by delving into the professional and personal lives of nine mind-body experts. (He admits it is odd to offer “my subjective takes on my subjects’ subjective takes on subjectivity.”) (Horgan, 2019).
While greater understanding of the biological (and material) basis of consciousness will no doubt be achieved, the deeper question is whether such biological understanding will ever be sufficient to explain, even in principle, the essence of consciousness. While most adherents at both ends of the Landscape of Consciousness—materialists and idealists—are confident of the ultimate vindication of their positions, others, including me, regard this deeper question as remaining open.
My high-bar attempt here is to generate a landscape that is universally exhaustive, in that whatever the ultimate explanation of consciousness, it is somewhere, somehow, embedded in this Landscape of theories (perhaps in multiple places)—even if we have no way, now or in the foreseeable future, to discern it from its cohort Landscapees.
5. Opposing worldviews
At the highest level of abstraction, there are two ways to frame competing theories of consciousness. One way pits monism, where only one kind of stuff is fundamental (though manifest in ostensibly different forms), against dualism, where both physical and mental realms are equally fundamental, without either being reducible to the other.
There are two kinds of monism, each sitting at opposite ends of the Landscape of Consciousness: at one end, materialism or physicalism, where the only real things are products of, or subject to, the laws of physics, and can be accessed reliably and reproducibly only by the natural sciences; and at the other end, idealism, where only the mental is fundamental, and all else, including all physical existence, is derivative, a manifestation of the mental. (Nondualism, from philosophical and religious traditions originating on the Indian subcontinent, avers that consciousness and only consciousness, which is cosmic, is fundamental and primitive; 16.1.)
The second way to frame opposing explanations of consciousness is simply the classic physical vs. nonphysical distinction, though certain explanations, such as panpsychism, may blur the boundary.
6. Is consciousness primitive/fundamental?
A first foundational question is whether consciousness is primitive or fundamental, meaning that it cannot be totally explained by, or “reduced” to, a deeper level of reality. (“Totally” is the operative word, because consciousness can be explained by, or reduced to, neuroscience, biology, chemistry and physics, certainly in large part, at least.)
If consciousness is primitive or fundamental, we can try to explore what this means, what alternative concepts of ultimate reality may follow—though, if this were the case, there is probably not much progress to be made.
On the other hand, if consciousness is not primitive or fundamental, there is much further work to be done and progress to be made. To begin, there are (at least) three next questions:
First, is consciousness “real,” or, on the other hand, is it sufficiently an “illusion,” a brain trick, as it were, which would render consternation over the conundrum moot, if not meaningless?
Second, if consciousness is real (and not primitive), then since in some sense it would be emergent, would this emergence of consciousness be “weak,” meaning that in principle it could be explained by, or reduced to, more fundamental science (even if in practice, it could not be, for a long time, if ever)?
Third, if weak emergence has insufficient resources, would this emergence of consciousness be “strong,” meaning that it would be forever impossible to totally explain consciousness, even in principle, by reducing it to more fundamental levels of scientific explanation (9.1.4)?
Finally, is there an intermediate position, where consciousness was not fundamental ab initio, but once it evolved or emerged, it became somehow inevitable, more than an accidental byproduct of physical processes? Some see in the grand evolution of the cosmos a process where elements in the cosmos—or more radically, the cosmos itself—work to make the cosmos increasingly self-aware (13.8).
Some founders of quantum theory famously held consciousness as fundamental. Max Planck: “I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness” (The Observer, 1931a). Erwin Schrödinger: “Although I think that life may be the result of an accident, I do not think that of consciousness. Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else” (The Observer, 1931b). Also, “The total number of minds in the universe is one. In fact, consciousness is a singularity phasing within all beings.” Arthur Eddington: “when we speak of the existence of the material universe we are presupposing consciousness.” (The Observer, 1931c). Louis de Broglie: “I regard consciousness and matter as different aspects of one and the same thing” (The Observer, 1931d). John von Neumann (less explicitly): “Consciousness, whatever it is, appears to be the only thing in physics that can ultimately cause this collapse or observation.” John Stewart Bell: “As regards mind, I am fully convinced that it has a central place in the ultimate nature of reality” (Mollan, 2007).
Of course, consciousness as fundamental would eliminate only Materialism Theories. Compatible would be Panpsychisms, Monisms, Dualisms and Idealisms; also, some Quantum Theories and perhaps Integrated Information Theory. (But Materialism has substantial resources, 9.)
7. Identity theory
I take special interest in identity theory (Smart, 2007), not because I subscribe to the early mind-brain identity theory as originally formulated, but because its way of thinking is far more pervasive and far more elucidating than often realized (though perhaps in a way not as sanguine as some may have hoped).
In PhilPapers’ Theories of Consciousness, Mind-Brain Identity Theory is classified under Biological Theories of Consciousness. Classic mind-brain identity theory is indeed the commitment that mental states/events/processes are identical to brain states/events/processes (Aranyosi, PhilPapers).
I would want to generalize this. I would want to say that any theory of consciousness, to be complete and sufficient, must make an identity claim. Bottom line, every theory of consciousness that offers itself as a total explanation, necessary if not always sufficient—other than those where consciousness is fundamental—must be a kind of identity theory. I mean identity theory in the strong sense, in the same sense that the Morning Star and the Evening Star must both be Venus, such that if you eliminate the Morning Star you cannot have the Evening Star. (David Papineau makes a virtue of this necessity in his mind-brain identity argument for physicalism. It doesn’t matter which specific materialist or physicalist theory—all of them, in essence, are mind-brain identity theories [Papineau, 2020b]—9.1.9.)
Here’s the point. There is some kind of “consciousness identity” actually happening—it is always happening and it never changes. Something happening or existing in every sentient creature just is consciousness.
8. A landscape
As the title suggests, the purpose of this paper is to work toward developing a landscape of consciousness, a taxonomy of explanations and implications. The focus is ontological: what is the essence of our inner awareness of felt experience, our perceiving, our enjoying, what we call qualia.
To get an overall sense of the entire Landscape, I have three Figures:
Fig. 1: A high-level list of the 10 major categories, and under Materialism Theories, the 10 subcategories.
Fig. 1. A landscape of consciousness – basic outline.
Fig. 2: A complete list of all the theories of consciousness, organized under the major categories and subcategories.
Fig. 2. A landscape of consciousness – complete outline.
Fig. 3: A graphic image of the entire Landscape, with all categories, subcategories and theories (abbreviated) (created by Alex Gomez-Marin).
Fig. 3. A landscape of consciousness.
Note: Categories 1–10 in the Figures correspond to sections 9–18 in the text. To convert from categories/theories in the Figures to sections/theories in the text, add eight (+8); conversely, to convert from sections/theories in the text to categories/theories in the Figures, subtract eight (−8). For example, Integrated Information Theory is category 4 in the Figures and section 12 in the text.
I distinguish what consciousness is ontologically from how consciousness happens operationally. The Landscape I present is populated primarily by claims of what consciousness actually is, not how it functions and not how it evolved over deep time (although both how it functions and how it evolved may well reflect what it is). This is not a landscape of how consciousness emerged or its purpose or its content—sensations, perceptions, cognitions, emotions, language—none of these—although all of these are recruited by various explanations on offer.
Mechanisms of consciousness are relevant here only to the extent that they elucidate a core theory of consciousness. For example, the “neurogeographic” debate between the “front of the head” folks—the Global Workspace (9.2.3) and Higher-Order (9.8.3) theorists—and the “back of the head” folks—the Integrated Information (4) and Recurrent Processing (9.8.2) theorists—is essential for a complete neurobiological explanation of consciousness (Block, 2023, pp. 417–418), but it is of only mild interest for an ontological survey of the Landscape. If the Global Workspace suddenly shifted to the back of the head, and Integrated Information to the front, would the “trading-places” inversion make much ontological difference?
Traditionally and simplistically, the clash is between materialism/physicalism and dualism or idealism; such oversimplification may be part of the problem—other categories and subcategories have standing.
The alternative theories of consciousness that follow come about via my hundreds of conversations and decades of readings and night-musings. I array 10 categories of explanations or theories of consciousness; all but one present multiple specific theories; only Materialism has subcategories. (There are many ways to envision a landscape, of course, and, as a result, many ways to array theories. I claim no privileged view.)
Here are the 10 primary categories of explanations or theories: Materialism Theories (with many subcategories); Non-Reductive Physicalism; Quantum Theories; Integrated Information Theory; Panpsychisms; Monisms; Dualisms; Idealisms; Anomalous and Altered States Theories; Challenge Theories.
It is no surprise that Materialism Theories have by far the largest number of specific theories. It is the only category with a three-level organization: there are 10 subcategories under Materialism, each housing seven to 14 specific theories. This makes sense in that there are more ways to explain consciousness with neurobiological and other physical models than with non-neurobiological and non-physical models, and also in that the challenge for materialism is to account for how the physical brain entails mental states (and there are increasingly innovative and diverse claims to do so).
There is obvious overlap among categories and among theories within categories, and it is often challenging to pick distinguishing traits to classify theories in such a one-dimensional, artificial and imposed typology. For example, one can well argue that Non-Reductive Physicalism, Quantum Theories, and perhaps even Integrated Information Theory and Panpsychisms, are all, in essence, Materialism Theories, in that they do not require anything beyond the physical world (whether in current or extended form). I break out these categories because, in recent times, each has developed a certain independence, prominence and credibility (at least in the sense of the credulity of adherents), and because they differ sufficiently from classic Materialism Theories, exemplified by neurobiological mechanisms.
In addition, the ideas of epiphenomenalism, functionalism and emergence, and the mechanisms of prediction and language models, while themselves not specific explanations of consciousness, represent core concepts in philosophy of mind that can affect some explanations and influence some implications.
Some would impose an “entrance requirement” on the Landscape, such that theories admitted need be “scientific” in the sense that the scientific method should be applicable, whether in a formal Popperian falsification sense or with a weaker verification methodology. I do not subscribe to this limitation, although we must always distinguish between science and philosophy, along with other potential forms of knowledge. (My quasi-“Overton Window” of consciousness—the range of explanatory theories I feel comfortable presenting, if not propounding—may be wider than those of others, whether physicalists or nonphysicalists [Birth, 2023]. One reason for my wider window is the unsolicited theories of consciousness I receive on Closer To Truth, some of which I find intriguing if not convincing.)
The Landscape itself, as a one-dimensional typology, is limited and imperfect; decisions must be made: which theories to include and which not; where to classify; what is the optimal order; whether to append a possessive name to the theory’s title; and the like. I’ve tried to include all the well-known theories and an idiosyncratic selection of lesser-known theories that have some aspects of originality, rationality, coherence, and, well, charm. In addition, a few theories reflect the beliefs of common people, or the interests of Closer To Truth viewers, though largely dismissed by the scientific and philosophical communities. Some theories will strike some readers as bizarre, “fabulous” in the original meaning of the word: “mythical, celebrated in fable.” All reflect the imaginations of the human mind driven by a quest to know reality. Please do not take the unavoidable appearance of visual equality among theories as indicating their truth-value equivalence (or, for that matter, my personal opinion of them).
Neuroscientist Joseph LeDoux (9.8.5; 9.10.2), noting “the broad nature” of the Landscape (on reviewing an early draft), suggests that “The Sniff Test” might be relevant. (He uses The Sniff Test to assess the strong AI view substituting “consciousness” for “intelligence” [LeDoux, 2023a, p. 301.]) I’m all for imposing an olfactory hurdle for theories of consciousness (recognizing that olfactory bulbs do differ).
Readers may well have corrections and additions, which I welcome. The Landscape is a work in progress and I look forward to feedback so it can be extended and improved.
Once again, the flow of the theories arraying the Landscape of Consciousness—as per my idiosyncratic approach—runs along a rough, arbitrarily linear physicalism-nonphysicalism spectrum, beginning with the most physical and ending with the most nonphysical (or least physical) (Fig. 1, Fig. 2, Fig. 3).
9. Materialism theories
Materialism is the claim that consciousness is entirely physical, solely the product of biological brains, and all mental states can be fully “reduced” to, or wholly explained by, physical states—which, at their deepest levels, are the fields and particles of fundamental physics. In short, materialism, in its many forms and flavors, gives a completely physicalist account of phenomenal consciousness.
Overwhelmingly for scientists, materialism is the prevailing theory of consciousness. To them, the utter physicality of consciousness is an assumed premise, supported strongly by incontrovertible empirical evidence from neuroscience (e.g., brain impairment, brain stimulation). This is “Biological Naturalism,” as exemplified by philosopher John Searle (Searle, 2007a, 2007b). It is a view, to a first approximation, that promises, if not yet offers, a complete solution to Chalmers’s hard problem.
To neuroscientist Susan Greenfield, the nonmaterialist view that consciousness might be irreducible is “‘a get-out-of-jail-for-free card’, that is to say, whatever I did, whatever I showed you, whatever experiments I did, whatever theories I had in brain terms, you could always say ‘consciousness has the extra thing,’ and this extra thing is the thing that really counts and is something that we brain scientists can’t touch.” She adds, “If reduction is a ‘dirty word,’ we can say explicable, interpretable, or understandable,” but explaining consciousness must be always and solely in brain and body terms (Greenfield, 2012).
Compared to some of the consciousness-as-primary theories that follow, Materialism Theories can be counted as deflationary (which doesn’t make them wrong, of course, or even unexciting). To physicist Sean Carroll, consciousness is “a way of talking about the physical world, just like many other ways of talking. It’s one of these emergent phenomena that we find is a useful way of packaging reality, so we say that someone is conscious of something that corresponds to certain physical actions in the real world.” Carroll is unambiguous: “I don’t think that there is anything special about mental properties. I don’t think there’s any special mental realm of existence. I think it’s all the physical world and all the manifold ways we have of describing it” (Carroll, 2016).
Nobel laureate biologist Gerald Edelman agrees. He does not consider the real existence of qualia to be an insurmountable impediment to a thoroughly materialistic theory of consciousness. “To expect that a theoretical explanation of consciousness can itself provide an observer with the experience of ‘the redness of red’ is to ignore just those phenotypic properties and life history that enable an individual animal to know what it is like to be such an animal. A scientific theory cannot presume to replicate the experience that it describes or explains; a theory to account for a hurricane is not a hurricane. A third-person description by a theorist of the qualia associated with wine tasting can, for example, take detailed account of the reported personal experiences of that theorist and his human subjects. It cannot, however, directly convey or induce qualia by description; to experience the discriminations of an individual, it is necessary to be that individual” (Edelman, 2003). While Edelman’s honest assessment may give Materialism Theories their best shot, many remain unpersuaded. After all, still, we wonder: what are qualia? Literally, what are they!
Even among philosophers, a majority are physicalists (but just barely). In their 2020 survey of professional philosophers, Bourget and Chalmers report 51.9% support Physicalism; 32.1%, Non-physicalism; and 15.9%, Other (Bourget and Chalmers, 2023; Bourget and Chalmers, 2014).
Chalmers provides “roughly three ways that a materialist might resist the epistemic arguments” by mitigating the epistemic gap between the physical and phenomenal domains, where “each denies a certain sort of close epistemic relation between the domains: a relation involving what we can know, or conceive, or explain.” According to Chalmers, “A type-A materialist denies that there is the relevant sort of epistemic gap. A type-B materialist accepts that there is an unclosable epistemic gap, but denies that there is an ontological gap. And a type-C materialist accepts that there is a deep epistemic gap, but holds that it will eventually be closed” (Chalmers, 2003).
A subtle way to think about Materialism Theories recruits the concept of “supervenience” in that “the mental supervenes on the physical” such that there cannot be a change in the mental without there being a change in the physical. One such subtlety is the modal force of the connection or dependency, parsing among logical necessity, metaphysical necessity, factual or empirical necessity, as well as among explanation, entailment, grounding, reduction, emergence, ontological dependence, and the like. For this Landscape of explanations of consciousness, we leave “supervenience” to others (McLaughlin and Bennett, 2021).
Similarly, the relationship between introspection and consciousness is an intimate one, linking the epistemology of self-knowledge with the metaphysics of mind. For several theories of consciousness, introspection is essential (e.g., neurophenomenology, 9.6.4 and 9.6.5), though for most, it is a non-issue (Smithies and Stoljar, 2012).
Two major theories of consciousness are Integrated Information Theory and Global Workspace Theory. Both are important, of course, and perhaps by situating them on the Landscape, they can be evaluated from different perspectives. In what may reflect my personal bias, I situate Global Workspace Theory under Materialism’s Neurobiological Theories, while giving Integrated Information Theory its own first-order category. (This reflects my sense of the nature of their mechanisms, not my opinion of the truth of their claims.)
I group Materialism Theories into ten subcategories: Philosophical Theories, Neurobiological Theories, Electromagnetic Field Theories, Computational and Informational Theories, Homeostatic and Affective Theories, Embodied and Enactive Theories, Relational Theories, Representational Theories, Language Relationships, and Phylogenetic Evolution.
While many of the following theories under Materialism Theories purport to explain what happens in consciousness, or what causes consciousness, in that they describe alternative critical processes in generating consciousness, the question always remains: are they even acknowledging, much less addressing, the question of what consciousness actually is?
Many of the materialist theories and principles picked out here overlap or nest, obviously, but by presenting them separately, I try to tease out emphasis and nuance. The list cannot be exhaustive.
9.1. Philosophical Theories
Philosophical theories combine relevant fundamental principles for theories of consciousness with framing of the mind-body problem and philosophical defenses of Materialism.
9.1.1. Eliminative materialism/illusionism
Eliminative Materialism is the maximalist physicalist position that our common-sense view of the mind is misleading and that consciousness is a kind of illusion generated by the brain—a contingent, evolutionary, inner adaptation that enhanced fitness and reproductive success. This deflationary view of consciousness is associated with philosophers Patricia Churchland (1986), Paul Churchland (1981), Daniel Dennett (1992), Keith Frankish (2022), and others, though their views are often distorted and caricatured.
Paul Churchland defines “eliminative materialism” forcefully as “the thesis that our common-sense conception of psychological phenomena constitutes a radically false theory, a theory so fundamentally defective that both the principles and the ontology of that theory will eventually be displaced, rather than smoothly reduced, by completed neuroscience.” Our third-person understanding and even our first-person introspection, Churchland says, “may then be reconstituted within the conceptual framework of completed neuroscience, a theory we may expect to be more powerful by far than the common-sense psychology it displaces.” He applauds “the principled displacement of folk psychology … [as] one of the most intriguing theoretical displacements we can currently imagine” (Churchland, 1981).
Patricia Churchland’s path-setting 1986 book, Neurophilosophy, places the mind-body problem within the wider context of the philosophy of science and argues for a complete reductionist account of consciousness founded on neurobiology (Churchland, 1986). Indeed, “neurophilosophy” is the proffered name of a new discipline that is to be guided by Churchland’s “unified theory of the mind-brain,” for which her “guiding aim” is to develop “a very general framework” (Stent, 1987). She founds her approach on two principles: the progress of neuroscience in addressing mental states, and the recognition by many philosophers that philosophy is no longer “an a priori discipline in which philosophers can discover the a priori principles that neuroscientific theories had better honor on peril of being found wrong.”
That there remain philosophers who persist in arguing that the mind goes beyond the brain—they reject reductionism “as unlikely—and not merely unlikely, but as flatly preposterous”—Churchland attributes to persistent traditions of folk myths. To discover our true nature, she implores, “we must see ourselves as organisms in Nature, to be understood by scientific methods and means” (Churchland, 1986). She rejects the anti-reductionist weapon of “emergence” as being “of little explanatory value” (Stent, 1987).
Dennett argues that qualia—the qualitative features of phenomenal consciousness, which he notes (with a smile) compel philosophers to develop outlandish theories—are illusory and incoherent (9.4). To neuroscientist Michael Graziano, it’s not that consciousness doesn’t exist or that we are fooled into thinking we have it when we don’t. Instead, eliminative materialism likens consciousness to the illusion created for the user of a human-computer interface, such that the metaphysical properties we attribute to ourselves are wrong (Graziano, 2014, 2019a, 2019c).
In spite of the word “illusion” (see below), its proponents do not actually deny the reality of the things that compose what Wilfrid Sellars famously called “the manifest image”—thoughts, intentions, appearances, experiences—which he distinguished from “the scientific image” (Sellars, 1962). The things we see and hear and interact with are, according to Dennett, “not mere fictions but different versions of what actually exists: real patterns” (Dennett, 2017). The underlying reality, however—what exists in itself and not just apparently for us or for other creatures—is truly represented only by the scientific image, which must be expressed ultimately in the language of physics, chemistry, molecular biology, and neurophysiology.
Picking up on analogies in Dennett’s work, as he puts it, Keith Frankish proposed the term “illusionism,” which has been adopted for the view that consciousness does not involve awareness of special “phenomenal” properties and that belief in such properties is due to an introspective illusion. Frankish concludes: “Considered as a set of functional processes—a hugely complex informational and reactive engagement with the world—it is perfectly real. Considered as an internal realm of phenomenal properties or what-it-is-likenesses, it is illusory” (Frankish, 2022).
Although what we see and hear, for all the world, seems precisely what really exists, ringing in our ears and stars in our eyes undermine our realist folk psychology. (Personally, I have my own unambiguous proof. With my normal left eye, I see a light bulb as a single point of light; with my right eye, afflicted with advanced keratoconus, I see about 100 points of skewed, smeared light.)
Another approach claiming that there is no phenomenal consciousness draws on arguments from Buddhist philosophy of mind to show that the sense that there is this kind of consciousness is an instance of cognitive illusion. As articulated by Jay Garfield, “there is nothing ‘that it is like’ to be me. To believe in phenomenal consciousness or ‘what-it’s-like-ness’ or ‘for-me-ness’ is to succumb to a pernicious form of the Myth of the Given.” He argues that “there are no good arguments for the existence of such a kind of consciousness” (Garfield, 2016).
The fact that some deny the existence of experience, says philosopher Galen Strawson, should make us “feel very sober, and a little afraid, at the power of human credulity.” This particular denial, he says with flourish, “is the strangest thing that has ever happened in the whole history of human thought, not just the whole history of philosophy” (Strawson, 2009).
While dismissing eliminative materialism and illusionism might at first seem obviously right, a prima facie case, I would not jump so quickly to that conclusion: doing so could limit our awareness of subtleties and of the nature of boundaries in the hunt for consciousness.
9.1.2. Epiphenomenalism
In epiphenomenalism, consciousness is entirely physical, solely the product of biological brains, but mental states cannot be entirely reduced to physical states (brains or otherwise), and mental states have no causal powers. Constrained by the “causal closure of the physical,” the mind, whatever else it might be, is entirely inert: our awareness of consciousness is real, but our sense of mental causation is not. Consciousness is still a kind of illusion or trick in that there is no “top-down causation”; our sense that our thoughts can cause things is mistaken. In this manner, epiphenomenalism is a weaker form of non-reductive physicalism (10). All conscious mental events, including conscious perceptions, involve unconscious processing. The classic analogy for consciousness as an epiphenomenon is “foam on an ocean wave”: always there, apparently active, but never really doing anything.
More formally, epiphenomenalism holds that phenomenal properties are ontologically distinct from physical properties, and that the phenomenal has no effect on the physical. Physical states cause phenomenal states, but not vice versa. The arrow of psychophysical causation points in only one direction, from physical to phenomenal (Chalmers, 2003). This makes epiphenomenalism a weak form of Dualism (15), but by affirming the complete causal closure of the physical, it well deserves its spot in Materialism Theories.
Apparent support for consciousness epiphenomenalism comes from the famous Libet experiment, which demonstrated that brain activity associated with a voluntary movement (“readiness potential”) precedes conscious experience of the intention to make that movement by several hundred milliseconds (Frith and Haggard, 2018). The implication is that the brain, rather than conscious “free will”, initiates voluntary acts. Studied extensively, the Libet readiness potential data are reproducible and robust across diverse experimental designs. However, the experiment’s theoretical and methodological foundations have been challenged (Gholipour, 2019), particularly with respect to stochastic noise in the brain, the spontaneous fluctuations in neuronal activity (Schurger et al., 2012).
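The Schurger-style reinterpretation can be conveyed with a minimal toy simulation (this is not the Schurger et al. model itself; the leaky-accumulator structure and all parameters are illustrative assumptions): spontaneous noise in a leaky accumulator occasionally drifts across a threshold, and averaging many trials time-locked to the crossing yields a slow pre-movement buildup resembling a readiness potential, without any discrete prior “decision” signal.

```python
# Toy sketch: a leaky stochastic accumulator whose spontaneous fluctuations
# occasionally cross a threshold. Averaging trials aligned to the crossing
# produces a gradual ramp, illustrating (not modeling) the Schurger et al.
# point that readiness-potential-like buildups can arise from noise.
import numpy as np

rng = np.random.default_rng(0)
dt, leak, drift, noise_sd, threshold = 0.001, 0.5, 0.1, 0.1, 0.3
n_trials, max_steps, window = 200, 10000, 2000  # window = 2 s before crossing

aligned = []
for _ in range(n_trials):
    x = 0.0
    trace = np.empty(max_steps)
    for t in range(max_steps):
        # Leaky accumulation of a small constant input plus Gaussian noise.
        x += (-leak * x + drift) * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        trace[t] = x
        if x >= threshold and t >= window:
            aligned.append(trace[t - window:t])  # keep the 2 s before crossing
            break

mean_buildup = np.mean(aligned, axis=0)
print(f"{len(aligned)} crossings; mean 2 s before crossing: {mean_buildup[0]:.3f}, "
      f"just before crossing: {mean_buildup[-1]:.3f}")
```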
Epiphenomenalism highlights the need to recognize that the search for a metaphysical theory of consciousness must integrate a theory of mental causation, which in turn must deal with the epistemic problem of self-knowledge. In epiphenomenalism, the integration is obvious because the lack of mental causation is its primary feature. In other theories of consciousness, mental causation will be less obvious but perhaps no less important.
Daniel Stoljar notes that if phenomenal consciousness were “merely an epiphenomenon with no causal force,” perhaps “this will end up being the best option for dualism 2.0 (15.10), despite its being counterintuitive—after all, it certainly seems to us that our phenomenally conscious states causally matter. But any view on the problem of consciousness is likely going to have to embrace some counterintuitive result at some point” (Kind and Stoljar, 2023, p. 55).
Parallelism, a similar but less popular theory than epiphenomenalism, holds that physical events entirely cause physical events and mental events entirely cause mental events, but there is no causal connection between physical and mental worlds in either direction. But if no connection, what would maintain such perfect correspondences? It is no challenge to discern why parallelism is less popular.
9.1.3. Functionalism
Functionalism in philosophy of mind is the theory that functions (activities, roles, results, outputs) are dispositive; mediums are not. What’s critical is how mental states work, not in what substrates mental states are found (Levin, 2023). Mental states are not dependent on their internal constitutions, what they are, but rather only on their outputs or roles, what they do. As long as the functions (activities) are conducive to creating consciousness, it does not matter whether the substrates are neural tissue or computer chips or any form of matter that can instantiate information.
Ned Block defines functionalism as the theory that “mental states are constituted by their causal relations to one another and to sensory inputs and behavioral outputs.” Functionalism can be appreciated, he says, by attending to “artifact concepts like carburetor and biological concepts like kidney. What it is for something to be a carburetor is for it to mix fuel and air in an internal combustion engine—carburetor is a functional concept. In the case of the kidney, the scientific concept is functional—defined in terms of a role in filtering the blood and maintaining certain chemical balances” (Block, 1980; Block, 2007b).
Block gives the functionalist answer to the perennial question, “What are mental states?”, stating simply that “mental states are functional states.” The significance of this simple identity is precisely this simple identity. Thus, he says, “theses of metaphysical functionalism are sometimes described as functional state identity theses” (Block, 1980; Block, 2007b).
Block explores the relationship between functionalism and reductive physicalism. “The first step in a reductive physicalist enterprise,” he says, “is to functionally characterize the property to be reduced and the second step is to find the physical property that fills the functional role. Reductive physicalism is true for the mind if both steps can always be carried out.” Block makes the at-first counterintuitive claim that reductive physicalism and functionalism are “incompatible rivals,” explaining that when understood as metaphysical theses, “appearances to the contrary stem from failure to sufficiently appreciate the upshot of the difference between metaphysics and ontology”—in that functionalism is agnostic on the existence of nonphysical substances (Block, 2008).
David Chalmers uses a silicon-chip-replacement thought experiment to support a functional approach to consciousness.16 “When experience arises from a physical system,” he says, “it does so in virtue of the system’s functional organization.” The thought experiment replaces brain neurons with microchips that can duplicate 100% of the neuron’s functions, and to do so slowly, even one by one. (That such technology is fiendishly complex is irrelevant.) The question is, what happens to one’s conscious experience, one’s qualia? Would it gradually wink or fade out? Chalmers says no: the conscious experience, the qualia, would not change—there would be no difference at all. This result would support Chalmers’s “principle of organizational invariance, holding that experience is invariant across systems with the same fine-grained functional organization” (Chalmers, 1995a). Not everyone agrees, of course (Block, 2023; Van Heuveln et al., 1998).
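A toy sketch can make the structure of the thought experiment concrete (the numbers, the feed-forward chain, and helper names such as make_neuron and make_chip are illustrative assumptions, not anything from Chalmers): each unit is swapped, one by one, for a stand-in that computes exactly the same input-output function, and the network’s overall behavior is checked after every replacement.

```python
# Toy sketch of the chip-replacement intuition: replacing each "biological"
# unit with a functionally identical "silicon" unit leaves the network's
# behavior unchanged at every step, illustrating organizational invariance
# at the functional level.
from typing import Callable, List

def make_neuron(weight: float, bias: float) -> Callable[[float], float]:
    return lambda x: max(0.0, weight * x + bias)        # "biological" unit

def make_chip(weight: float, bias: float) -> Callable[[float], float]:
    # A different "substrate" implementing exactly the same input-output function.
    return lambda x: max(0.0, weight * x + bias)

params = [(0.8, 0.1), (1.2, -0.2), (0.5, 0.3)]
network: List[Callable[[float], float]] = [make_neuron(w, b) for w, b in params]

def run(net: List[Callable[[float], float]], x: float) -> float:
    for unit in net:
        x = unit(x)                                      # feed-forward chain
    return x

before = run(network, 1.0)
for i, (w, b) in enumerate(params):                      # gradual, one-by-one replacement
    network[i] = make_chip(w, b)
    assert abs(run(network, 1.0) - before) < 1e-12       # behavior unchanged at each step
print("Output before and after full replacement:", before, run(network, 1.0))
```

Of course, the sketch only shows that behavior is preserved when function is preserved, which all sides grant; whether qualia would likewise persist, fade, or vanish is precisely what the thought experiment is meant to probe.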
Computational functionalism goes further and commits to the thesis that performing computations of a particular, natural and likely discoverable kind is both necessary and sufficient for consciousness in general and ultimately for human-level consciousness (and perhaps for speculative higher forms of consciousness). Whether consciousness is indeed computational elicits probative and profound debate (e.g., Penrose, 1999; 1996).
Functionalism with respect to consciousness is more an overarching principle, a way of thinking, than a proffered model, a claimed explanation on its own. Functionalism can apply in many Materialist Theories and it is often assumed as an a priori premise. Functionalism is the theoretical foundation of “virtual immortality,” the theory that the fullness of our mental selves can be uploaded with first-person perfection to non-biological media, so that when our mortal bodies die our mental selves will live on (Kuhn, 2016a). (See Virtual Immortality.)
9.1.4. Emergence
Emergence is the claim that qualitatively new, even radically novel properties in biological systems and psychological states arise from physical properties governed entirely by the laws of physics. The re-emergence of emergence in the sciences, where whole entities are, or seem to be, more than the sum of all their parts, has been controversial, its assessment ranging from trivial and distracting to radical and revolutionary (Clayton and Davies, 2008). Emergence in the study of consciousness is especially foundational, more as a basic principle undergirding and enhancing various theories than as a specific theory in its own right.
Emergence, according to Paul Davies, means that “at each level of complexity, new and often surprising qualities emerge that cannot, at least in any straightforward manner, be attributed to known properties of the constituents. In some cases, the emergent quality simply makes no sense when applied to the parts. Thus water may be described as wet, but it would be meaningless to ask whether a molecule of H2O is wet” (Davies, 2008). Moreover, it could seem astonishing that two common gases, hydrogen and oxygen, can combine to form a liquid that is wet and a solid that expands when cooled. Yet, physics and physical chemistry can explain all of this, in terms of atomic structures and bonding angles.
Emergence can be appreciated in contrast with its mortal conceptual rival: reductionism. Reductionism is mainstream science, the bedrock assumption of the scientific method: All, in principle, can be explained by physics, even if all, in practice, cannot be.
Davies defines “ontological reductionism” as the state of affairs where all reality “is, in the final analysis, nothing but the sum of the parts, and that the formulation of concepts, theories, and experimental procedures in terms of higher-level concepts is merely a convenience.” (He distinguishes “methodological reductionism,” where reductionism is a “fruitful methodology,” from “epistemological reductionism” where all we can know is that reductionism works by explaining one scientific level in terms of lower or more fundamental levels, without making any claim on ultimate reality.) (Davies, 2008).
But “for emergence to be accepted as more than a methodological convenience—that is, for emergence to make a difference in our understanding of how the world works,” Davies argues that “something has to give within existing theory.” Davies himself has been a leader in “a growing band of scientists who are pushing at the straitjacket of orthodox causation to ‘make room’ for strong emergence (see below), and although physics remains deeply reductionistic, there is a sense that the subject is poised for a dramatic paradigm shift in this regard” (Davies, 2008).
To make sense of emergence, we distinguish between its “weak” and “strong” forms. In its weak form, while it may not be apparent how the properties of one level can be entirely explained by the properties of a lower, more fundamental level, in principle, they can be explained, and ultimately, science will advance to explain them.
In its strong form, properties at one level can never be explained in terms of properties of lower levels, not even in principle, no matter how ultimate the science. As Davies explains, “Strong emergence is a far more contentious position, in which it is asserted that the micro-level principles are quite simply inadequate to account for the system’s behaviour as a whole. Strong emergence cannot succeed in systems that are causally closed at the microscopic level, because there is no room for additional principles to operate that are not already implicit in the lower-level rules.” He posits only three “loopholes”: the universe is an open system, non-deterministic quantum mechanics, and computational imprecision at fundamental levels—all three have obvious problems, which is why they are “considered unorthodox departures from standard physical theory” (Davies, 2008).
David Chalmers says that “a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain.” He distinguishes a high-level phenomenon that is “weakly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are unexpected given the principles governing the low-level domain” (Chalmers, 2008).
Strong emergence, Chalmers contends, has “radical consequences,” such that “If there are phenomena that are strongly emergent with respect to the domain of physics, then our conception of nature needs to be expanded to accommodate them. That is, if there are phenomena whose existence is not deducible from the facts about the exact distribution of particles and fields throughout space and time (along with other laws of physics), then this suggests that new fundamental laws of nature are needed to explain these phenomena” (Chalmers, 2008).
By contrasting strong and weak emergence, Chalmers sets the stage to enact the grand epic of consciousness. “In a way, the philosophical morals of strong emergence and weak emergence are diametrically opposed. Strong emergence, if it exists, can be used to reject the physicalist picture of the world as fundamentally incomplete. By contrast, weak emergence can be used to support the physicalist picture of the world, by showing how all sorts of phenomena that might seem novel and irreducible at first sight can nevertheless be grounded in underlying simple laws” (Chalmers, 2008).
Chalmers is not shy: “I think there is exactly one clear case of a strongly emergent phenomenon, and that is the phenomenon of consciousness.” He suggests that “the lawful connection between physical processes and consciousness is not itself derivable from the laws of physics but is instead a further basic law or laws of its own. The laws that express the connection between physical processes and consciousness are what we might call fundamental psychophysical laws” (Chalmers, 2008).
The challenge of strong emergence, especially in consciousness, is a deep probe of not only how the mind works but also how the world works. Its influence is felt all along the Landscape of Consciousness.
9.1.5. Mind-brain identity theory
As noted, mind-brain identity theory holds that states and processes of the mind are identical to states and processes of the brain (Smart, 2007) and as such can be considered the exemplar of materialism. Early on, in the mid-20th century, mind-brain identity theory had been a leader as an explanation of consciousness, but today, in its original form, it is no longer a major contender. Though the original identity theory has evolved in a kind of arms race with critics, it is generally considered undermined by various objections, the most common being multiple realizability (Aranyosi, PhilPapers).
9.1.6. Searle’s biological naturalism
“Biological Naturalism” is the name philosopher John Searle gave to a neurobiological solution to the mind-body problem. His approach is to ignore the mind-body problem’s philosophical history and focus on “what you know for a fact.” He starts with a mundane, working definition of consciousness: “Conscious states are those states of awareness, sentience or feeling that begin in the morning when we wake from a dreamless sleep and continue throughout the day until we fall asleep or otherwise become ‘unconscious’” (Searle, 2007b; Searle, 2014a).
Searle identifies four essential features of consciousness: “1. Conscious states, so defined, are qualitative, in the sense that there is a qualitative feel to being in any particular conscious state …. 2. Such conscious states are also ontologically subjective in the sense that they only exist as experienced by a human or animal subject …. 3. Furthermore, a striking fact, at any moment in your conscious life, all of your conscious states are experienced by you as part of a single unified conscious field …. 4. Most, but not all, conscious states are intentional, in the philosopher’s sense that they are about, or refer to, objects and states of affairs.”17
Next is crucial: “The reality and irreducibility of consciousness: Conscious states, so defined, are real parts of the real world and cannot be eliminated or reduced to something else.” This means that one cannot do an ontological reduction of consciousness to more fundamental neurobiological processes, because, as stated, consciousness has a subjective or a first-person ontology, while the neurobiological causal basis of consciousness has an objective or third person ontology (Searle, 2007b).
The causal reducibility of consciousness leads to Searle’s major move: “The neuronal basis of consciousness: All conscious states are caused by lower-level brain processes.” Not knowing all the details of exactly how consciousness is caused by brain processes casts “no doubt that it is in fact.” Searle asserts with confidence, “The thesis that all of our conscious states, from feeling thirsty to experiencing mystical ecstasies, are caused by brain processes is now established by an overwhelming amount of evidence” (Searle, 2007b). (Others, of course, disagree.)
Finally, Searle’s two-point conclusion: (i) The neuronal realization of consciousness: All conscious states are realized in the brain as higher level or system features, and (ii) The causal efficacy of consciousness: Conscious states, as real parts of the real world, function causally (Searle, 2007b).
Searle celebrates the fact that his approach to consciousness does not mention any of the usual-suspect theories, such as dualism, materialism, epiphenomenalism, or any of the rest of them. He argues that “if you take seriously the so-called ‘scientific worldview’ and forget about the history of philosophy,” the views he puts forth are “what you would come up with.”
Searle explains the name with which he “baptized this view,” Biological Naturalism. “‘Biological’ because it emphasizes that the right level to account for the very existence of consciousness is the biological level … [given] we know that the processes that produce it are neuronal processes in the brain. ‘Naturalism’ because consciousness is part of the natural world along with other biological phenomena such as photosynthesis, digestion or mitosis, and the explanatory apparatus we need to explain it we need anyway to explain other parts of nature.”
Searle responds to critics of Biological Naturalism, striking at a key objection. “Sometimes philosophers talk about naturalizing consciousness and intentionality, but by ‘naturalizing’ they usually mean denying the first person or subjective ontology of consciousness. On my view, consciousness does not need naturalizing: It already is part of nature and it is part of nature as the subjective, qualitative biological part” (Searle, 2007a, 2007b).
9.1.7. Block’s biological reductionism
Philosopher Ned Block represents a majority of philosophers (and a large majority of scientists) who hold that “phenomenal consciousness is reducible to its physical basis.” (Block, 2023, p. 445; Block, 2007a). The best candidates for this reduction, he says, involve neurobiology. “For example, in the creatures that seem to have consciousness (e.g., primates, octopi), neurons operate via electrical signals triggering the release of neurotransmitters, and the neurotransmitters in turn engender further electrical signals. Neurons operate in a chemical soup, with direct effects from one neuron to another mediated by chemicals. The release of chemicals is not confined to the synapse but can also happen in dendrites” (Block, 2023, p. 446).
These propagating neurophysiological sparks and diffusing neurochemical transmitters compose a magnificently complex and integrated system that carries and conveys meaning. Block appeals to “this electrochemical nature of known cases of consciousness as an example of a candidate for neurobiological reduction of consciousness.”
To Block, “the border between seeing and thinking” provides insight into consciousness and helps adjudicate best theories (Block, 2023). He highlights this “joint in nature” between perception and cognition and advocates its study for demystifying the mind. He argues against theories of consciousness that focus on prefrontal cortex, arguing that perceptual consciousness does not require cognitive processing.
9.1.8. Flanagan’s constructive naturalism
To philosopher Owen Flanagan, “consciousness is neither miraculous nor terminally mysterious,” and he argues that “it is possible to understand human consciousness in a way that gives its subjective, phenomenal aspects their full due, while at the same time taking into account the neural bases of subjectivity.” The result, he says, “is a powerful synthetic theory of consciousness, a ‘constructive naturalism,’ according to which subjective consciousness is real, plays an important causal role, and resides [without residue] in the brain” (Flanagan, 1993).
The “constructive naturalistic theory” that Flanagan sketches is “neurophilosophical” in that “it tries to mesh a naturalistic metaphysic of mind with our still sketchy but maturing understanding of how the brain works.” It pictures consciousness “as a name for a heterogeneous set of events and processes that share the property of being experienced. Consciousness is taken to name a set of processes, not a thing or a mental faculty.” The theory is neo-Darwinian, he says, “in that it is committed to the view that the capacity to experience things evolved via the processes responsible for the development of our nervous system.” The theory, he stresses, “denies that consciousness is as consciousness seems at the surface.” Rather, consciousness has a complex structure, and getting at it requires “coordination of phenomenological, psychological, and neural analyses” (Flanagan, 1993).
Flanagan explains that “there is no necessary connection between how things seem and how they are … [and] we are often mistaken in our self-reporting, including in our reporting about how things seem.” This is why he cautions that phenomenology might do “more harm than good when it comes to developing a proper theory of consciousness, since it fosters certain illusions about the nature of consciousness” (Flanagan, 1993).
“The most plausible hypothesis,” Flanagan states, “is that the mind is the brain, a Darwin machine that is a massively well-connected system of parallel processors interacting with each other from above and below, and every which way besides.” It is no wonder, he says, that “meaning holism is true, that we somehow solve the frame problem, and that my belief that snow is white is realized quite possibly in a somewhat different way in my brain than the same belief is realized in yours.”
Flanagan addresses “the gap between the first-person way in which conscious mental life reveals itself and the way it is, or can be described, from an objective point of view” by asserting bluntly, “mind and brain are one and the same thing seen from two different perspectives. The gap between the subjective and the objective is an epistemic gap, not an ontological gap.” Indeed, he claims, “it is precisely the fact that individuals possess organismic integrity that explains why subjectivity accrues first-personally” (Flanagan, 1993).
As a physicalist, Flanagan recognizes the role of emergence, that “there are emergent natural properties that, despite being obedient to the laws of physics, are not reducible to physics” (Flanagan, 2003). He rejects epiphenomenalism, where “conscious thought plays no role in the execution of any act.” The sense that we control our actions is real, not illusion, but the mechanism is all brain-bound; for example, an idea originating in the prefrontal cortex that calls up information or memories from parietal association cortex (Campbell, 2004).
To Flanagan, the “really hard problem” is finding “meaning in a material world” (Flanagan, 2007). To this end, he explores “neuroexistentialism,” the condition “caused by the rise of the scientific authority of the human sciences and a resultant clash between the scientific and the humanistic image of persons” (Flanagan and Caruso, 2018).
9.1.9. Papineau’s mind-brain identity
Philosopher David Papineau argues for neurobiological physicalism with his theory of unabashed, robust, fundamental mind-brain identity. It is an important argument, with implications for all materialist theories (Papineau, 2020b).
In constructing the argument, one of Papineau’s intuitions is that “there seems no immediate reason why consciousness should be singled out as posing some special puzzle about its relation to the rest of reality”—given that “reality contains many different kinds of things, biological, meteorological, chemical, electrical, and so on, all existing alongside each other, and all interacting causally in various ways” (Papineau, 2020b).
One Papineau premise is that while we feel “conscious mind influences non-conscious matter, by controlling bodily behaviour, and similarly that matter influences mind, giving rise to sensory experiences, pains and other conscious mental states,” the “compelling argument … against this kind of interactionist stance … derives from the so-called ‘causal closure of the physical’ … the physical realm seems causally sufficient unto itself.”
Papineau notes that we remain puzzled about why brain states give rise to mental states “in a way that we don’t feel puzzled about why NaCl gives rise to salt, or electrical discharges to lightning.” He attributes our puzzlement—the “explanatory gap” of consciousness—to the psycho-social fact that “we find it hard to escape the spontaneous dualist thought that the feeling and the physical state are not one thing, but two different states that somehow invariably accompany each other” (Papineau, 2020b).
Given this, Papineau says, “our knowledge of mind-brain identities can only be based on some kind of a posteriori abductive inference, rather than a principled a priori demonstration that a certain physical state fills some specified role. For example, we might observe that pains occur whenever prefrontal nociceptive-specific neurons fire, and vice versa; we might also note that, if pains were the firing of nociceptive-specific neurons, then this would account for a number of other observed facts about pain, such as that it can be caused by trapped nerves, and can be blocked by aspirin; and we might conclude on this basis that pains are indeed identical to the firing of nociceptive-specific neurons.” Papineau singles out “the peculiarly direct nature of our concepts of conscious states” as what “stops us deriving mind-brain identities a priori from the physical facts.”
In exploring the basis of identity claims, Papineau states “it can only be on the basis of an abductive inference from direct empirical evidence, such as that the two things in question are found in the same places and the same times, and are observed to bear the same relations to other things, not because we can deduce the identities a priori from the physical facts.” His examples include “Cary Grant = Archie Leach” and “that dog = her pet.” “Why shouldn’t this same way of thinking be applied to consciousness?” he asks (Papineau, 2020b).
Because, he answers, “even after we are given all the abductive evidence, we still find mind-brain identity claims almost impossible to believe. We cannot resist the dualist conviction that conscious feelings and the physical brain states are two different things.” And this, in Papineau’s view, “is the real reason why we feel a need for further explanation. We want to know why the neuronal activity is accompanied by that conscious feeling, rather than by some other, or by no feeling at all. Our dualist intuitions automatically generate a hankering for further explanation.” Thus, Papineau concludes, “the demand for explanation arises, not because something is lacking in physicalism, but because something is lacking in us.”
“If only we could fully embrace physicalism,” Papineau suggests, “the feeling of an explanatory gap would disappear. If we could fully accept that pains are nociceptive-specific neuronal firing, then we would stop asking why ‘they’ go together—after all, nothing can possibly come apart from itself.”
To Papineau, this kind of robust physicalism can dissolve “the problem of consciousness”. The move is to “simply deny that any puzzle is raised by the fact that it feels painful to be a human with active nociceptive-neurons. Why shouldn’t it feel like that? That’s how it turns out. Why regard this as puzzling?” (Papineau, 2020a).
An insight is the connotation of verbs used to describe the relation between mind and brain. Brain processes are said to “generate”, or “yield”, or “cause”, or “give rise to” conscious states. But this phraseology, Papineau says, undermines physicalism from the start—even when used by physicalists. As he puts it, “Fire ‘generates’, ‘causes’, ‘yields’ or ‘gives rise to’ smoke. But NaCl doesn’t ‘generate’, ‘cause’, ‘yield’ or ‘give rise to’ salt. It is salt. The point is clear. To speak of brain processes as ‘generating’ conscious states, and so on, only makes sense if you are implicitly thinking of the conscious states as separate from the brain states” (Papineau, 2020b). (But even if consciousness as an “output” or “effect” of the brain were wrongheaded, why are only certain sorts of neural activity identical with consciousness while others are not?)
To sustain his argument, Papineau must deal with zombies. Are zombies possible? “Could a being share all your physical properties but have no conscious life?” Everybody’s first thought is, he says, “Sure. Just duplicate the physical stuff and leave out the feelings.”
That’s the anti-physicalist “trap”: the physicalist has already lost. Papineau rightly states that physicalists must deny that zombies are possible, “given that the mind is ontologically inseparable from the brain.” If conscious states are physical states—radically identical—then, he says, “the ‘two’ cannot come apart,” much like Marilyn Monroe cannot exist without Norma Jean Baker. How could she exist without herself? That makes no sense, he says.18
Papineau rejects the anti-physicalist argument that phenomenal concepts are revelatory, in that they reveal conscious states not to be physical. “Physicalists respond that there is no reason to suppose that phenomenal concepts have the power to reveal such things … that experiences are non-physical.” Why should introspection, he asks rhetorically, “be guaranteed to tell us about all their necessary properties [of experience]?” (Papineau, 2020b).
Papineau is blunt: “I never viewed the so-called ‘hard problem’ as any problem at all.” The obvious answer, he says, is that brain processes feel like something for the subjects that have them. “What’s so hard about that? … How would you expect them to feel? Like nothing? Why? That’s how they feel when you have them.” The only reason that many people believe there is a problem, Papineau stresses, is that “they can’t stop thinking in dualist terms” (Papineau, 2020b).
As for the conventional materialist claim that ultimately neuroscience will uncover the complete neurobiological basis of consciousness, Papineau is skeptical. He does not expect that “there are definite facts about consciousness to which we lack epistemological access—that there is some material property that really constitutes being in pain, say, but which we can’t find out about.” Rather, he argues, “our phenomenal concepts of conscious states are vague—nothing in the semantic constitution of phenomenal concepts determines precisely which of the candidate material properties they refer to” (Papineau, 2003).
Scientific research, he says, will identify “a range of material properties that correlate in human beings with pain, say, or colors, or indeed being conscious at all. However, this won’t pinpoint the material essence of any such conscious state, for there will always be a plurality of such human material correlates for any conscious property … It is not as if conscious properties have true material essences, yet science is unable to discover them. Rather the whole idea of identifying such essences is a chimera, fostered by the impression that our phenomenal concepts of conscious states are more precise than they are” (Papineau, 2003).
9.1.10. Goldstein’s mind-body problem
Philosopher-novelist Rebecca Newberger Goldstein centers the mind-body problem around the nature of the person, with two distinct kinds of descriptions: our physical bodies and brains, which science can, in principle, analyze completely; and our inner thoughts, perceptions, emotions, dreams, which science can never access completely (Goldstein, 2011a, 2011b).
Goldstein thinks that the internal description of what it’s like to be a person—“what I try to do in creating a character in a novel”—is “really about the body because ultimately there are no nonmaterial states.”
Goldstein states that the kind of stuff underlying these intentional states or states of feeling that we describe in terms of consciousness is entirely brain stuff. “Could we ever derive the one description from the other? Could we ever know enough about the brain stuff so that we could actually know everything there is to be a person, just by the description of the brain stuff? I don’t think so” (Goldstein, 2011a, 2011b).
Goldstein says that panpsychism (13) seems plausible and she understands why some are dualists, where that internal point of view is something that is not the body, and could, in principle, exist separate from the body. She appreciates why some people who hope for immortality hope dualism is true. (She herself rejects dualism.)
9.1.11. Hardcastle’s argument against materialism skeptics
Philosopher Valerie Gray Hardcastle argues that the points of division between materialists and materialism-skeptics “are quite deep and turn on basic differences in understanding the scientific enterprise.” This disagreement, “the rifts,” which she frames, in part, between David Chalmers and herself, concerns whether consciousness is a brute fact about the world, which materialists deny and its skeptics affirm. Rather, materialists believe that consciousness is part of the physical world, just like everything else. “It is completely nonmysterious (though it is poorly understood) [and materialists] have total and absolute faith that science as it is construed today will someday explain this as it has explained the other so-called mysteries of our age” (Section: Hardcastle, 1996).
Hardcastle gives her clear-eyed assessment: “I am a committed materialist and believe absolutely and certainly that empirical investigation is the proper approach in explaining consciousness. I also recognize that I have little convincing to say to those opposed to me. There are few useful conversations; there are even fewer converts.” She epitomizes the skeptics’ position: “Isolating the causal relations associated with conscious phenomena would simply miss the boat, for there is no way that doing that ever captures the qualitative aspects of awareness. What the naturalists might do is illustrate when we are conscious, but that won’t explain the why of consciousness.” Thus, she continues, whatever the neural correlate(s) of consciousness may be, the naturalists would not have explained why it is that (or those). Part of a good explanation, skeptics maintain, “is making the identity statement (or whatever) intelligible, plausible, reasonable” and this is what materialists have not done and thus have not closed the explanatory gap.
In response, Hardcastle is frank: “To them, I have little to say in defence of naturalism, for I think nothing that I as an already committed naturalist could say would suffice, for we don’t agree on the terms of the argument in the first place.” The consciousness identity, whatever it turns out to be, could be a brute fact about the world, just like the laws of physics. At some point, in all theories, explanations must end. Hardcastle asks, “How do I make my identification of consciousness with some neural activity intelligible to those who find it mysterious? My answer is that I don’t. The solution to this vexing difficulty, such as it is, is all a matter of attitude. That is, the problem itself depends on the spirit in which we approach an examination of consciousness.” In characterizing “consciousness-mysterians,” she states, “They are antecedently convinced of the mysteriousness of consciousness and no amount of scientific data is going to change that perspective. Either you already believe that science is going to give you a correct identity statement, or you don’t and you think that there is always going to be something left over, the phenomenal aspects of conscious experience” (Hardcastle, 1996).
Hardcastle’s advice to skeptics? “Consciousness-mysterians need to alter their concepts. To put it bluntly: their failure to appreciate the world as it really is cuts no ice with science. Their ideas are at fault, not the scientific method. Materialists presume that there is some sort of identity statement for consciousness. (Of course, we don’t actually have one yet, but for those of us who are not consciousness-mysterians, we feel certain that one is in the offing.) Hence, the skeptics can’t really imagine possible worlds in which consciousness is not whatever we ultimately discover it to be because they aren’t imagining consciousness in those cases (or, they aren’t imagining properly). But nevertheless, what can I say to those who insist that they can imagine consciousness as beyond science’s current explanatory capacities? I think nothing …”
The fundamental difference between materialists and their skeptics, according to Hardcastle, is that “Materialists are trying to explain to each other what consciousness is within current scientific frameworks … If you don’t antecedently buy into this project …, then a naturalist’s explanation probably won’t satisfy you. It shouldn’t. But that is not the fault of the explanation, nor is it the fault of the materialists. If you don’t accept the rules, the game won’t make any sense” (Hardcastle, 1996).
Hardcastle’s own approach to consciousness includes: viewing it as a lower-level dynamical structure underpinning our information processing (Hardcastle, 1995); the relation between ontology and explanation providing a framework for referring to mental states as being the causally efficacious agents for some behavior (Hardcastle, 1998); a more nuanced approach to the neural correlates of consciousness (NCC), namely that “there might not be an NCC—even if we adopt a purely materialistic and reductionistic framework for explaining consciousness (for example, perhaps consciousness is located out in the world just as much as it is located inside the head)” (Hardcastle, 2018; Hardcastle and Raja, 1998); and action selection and projection to help refine notions of consciousness from an embodied perspective (Hardcastle, 2020).
9.1.12. Stoljar’s epistemic view and non-standard physicalism
Philosopher Daniel Stoljar has long focused on physicalism, its interpretation, truth and philosophical significance; his views are nuanced and largely deflationary (Stoljar, 2010). He defines physicalism as the thesis that “every instantiated property is either physical or is necessitated by some physical property,” where physical property is described by “all and only the following elements: it is a) a distinctive property of intuitively physical objects, b) expressed by a predicate of physics, c) objective, d) knowable through scientific investigation, and e) not a distinctive property of souls, ectoplasm, etc.” (Montero, 2012). According to Stoljar, “Physicalism has no formulations on which it is both true and deserving of the name”—but this “does not entail that philosophical problems stated in terms of it [physicalism] have no reasonable formulation” (Stoljar, 2010; Montero, 2012).
As everyone knows, the philosophical problem of phenomenal consciousness is the poster-child test case for physicalism, the standard physicalist framework being that “consciousness can be explained by contemporary physics, biology, neuroscience, and cognitive science” (Kind and Stoljar, 2023, p. i). To Stoljar, the problem (or problems) of consciousness is “whether two big ideas can both be true together. The first is the existence of consciousness. The second is a worldview (a picture of everything that exists) that many people think you must believe if you hold a vaguely scientific or rational approach to the world, namely, physicalism.” Stoljar calls it the “compatibility problem”—“i.e., the problem of whether physicalism and the claim that consciousness exists can both be correct”—and he says that the solution is “right under our nose.” The solution to the compatibility problem, Stoljar tells us, “is that we are missing something”—and the depth and implications of this simple statement are surprisingly profound (Kind and Stoljar, 2023, pp. 64–65).
What we are missing, according to Stoljar, “is a type of physical fact or property relevant to consciousness. More than this, we are profoundly ignorant of the nature of the physical world, and ignoring this ignorance is what generates the problem.” He calls “the idea that we are ignorant of a type of fact or property that is relevant to consciousness the ignorance hypothesis” and he calls “the idea that the ignorance hypothesis solves the compatibility problem the epistemic view.” Stoljar contends that all arguments for the opposing view—i.e., that physicalism and consciousness are incompatible—“fail, and for a single reason.” These arguments, he says, “all presuppose that we have complete knowledge of the physical facts relevant to consciousness. According to the epistemic view, that presupposition is false, so the arguments [against physicalism-consciousness compatibility] don’t work.” That physicalism cannot be shown affirmatively to be true does not bother Stoljar, because, he says, physicalism is an empirical truth, not an a priori argument. “What the epistemic view says is that … there is no persuasive ‘here and now’ argument for incompatibility.” Thus, Stoljar argues, the epistemic view helps us think about the problems of consciousness in a clearer way, disentangling them from the compatibility problem (Kind and Stoljar, 2023, pp. 64–66).
Stoljar is no traditional physicalist. He critiques “standard physicalism,” by which he means “versions of physicalism that make no theoretical use of the ignorance hypothesis.” He conjectures that there are properties of the physical world that go beyond the capacity of the physical sciences to access and measure through their devices and instruments. Is this incapacity in practice, as per current science, or in principle, such that ultimate truth is forever out of reach? Who knows? Either way, he says, would support his ignorance hypothesis defense of physicalism (Kind and Stoljar, 2023, p. 67). More subtly, Stoljar contends that the epistemic view does provide an “explanation of consciousness,” at least in an abstract sense. “It tells us, for example, that conscious states are not fundamental and so depend on other things, even if it leaves open what exactly they depend on” (Kind and Stoljar, 2023, p. 112).
Yet Stoljar believes it is possible to construct “a science of consciousness”—to study “empirical laws between each conscious state and some physical system”— but he is skeptical of “the attempt to provide systematic knowledge of such laws” which he rejects as “implausible on its own terms.” Preferring “to understand the science in a more modest way,” Stoljar is ready to accept “that we do not and may never have a complete theory of the world” (Kind and Stoljar, 2023, pp. 67–68).
9.2. Neurobiological theories
Neurobiological theories are based primarily on known mechanisms of the brain, such as neuronal transmission, brain circuits and connectome pathways, electric fields, and, of course, neural correlates of consciousness.
9.2.1. Edelman’s neural Darwinism and reentrant neural circuitry
Nobel laureate biologist Gerald Edelman presents a purely biological theory of consciousness, founded on Darwinian natural selection and complex brain morphology. His foundational commitment is that “the neural systems underlying consciousness arose to enable high-order discriminations in a multidimensional space of signals,” that “qualia are those discriminations” and that “differences in qualia correlate with differences in the neural structure and dynamics that underlie them” (Edelman, 2000, 2003, 2024).
Rejecting theories that the brain is like a computer or instructional system, Edelman proposes that “the brain is a selectional system, one in which large numbers of variant circuits are generated epigenetically, following which particular variants are selected over others during experience. Such repertoires of variant circuits are degenerate, i.e., structurally different circuit variants within this selectional system can carry out the same function or produce the same output. Subsequent to their incorporation into anatomical repertoires during development, circuit variants that match novel signals are differentially selected through changes in synaptic efficacy. Differential amplification of selected synaptic populations in groups of neurons increases the likelihood that, in the future, adaptive responses of these groups will occur following exposure to similar signals” (Edelman, 2003).
Edelman’s way of thinking is motivated by his work on the immune system (for which he was awarded the Nobel) and his theory is developed in two domains: Neural Darwinism (neural group selection) and Dynamic Core (reentrant neural circuitry).
Neural Darwinism is “the idea that higher brain functions are mediated by developmental and somatic selection upon anatomical and functional variance occurring in each individual animal” (Edelman, 1989). Neural Darwinism has two aspects: (i) developmental selection, which controls the gross anatomy and microstructure of the brain, allowing for great variability in the neural circuitry; and (ii) experiential selection, especially of the synaptic structure where functional plasticity is essential given the vast number of synapses (estimated at over 100 trillion, possibly 600 trillion or more). Edelman notes that a child’s brain contains many more neural connections than will ultimately survive to maturity—estimates go as high as 1000 trillion—and he argues that this redundant capacity, this functional plasticity, is needed because “neurons are the only cells in the body that cannot be renewed and because only those networks best adapted to their ultimate purpose will be selected as they organize into neuronal groups” (Edelman, 2024). According to Edelman’s theory of neuronal group selection (TNGS), “selectional events in the brain are necessarily constrained by the activity of diffuse ascending value systems. The activity of these systems affects the selectional process by modulating or altering synaptic thresholds” (Edelman, 2003).
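As a rough illustration of the selectional logic (not a model of the TNGS in any biological detail; the repertoire size, matching rule, and amplification factor are arbitrary assumptions), the following sketch generates a degenerate repertoire of variant circuits and differentially amplifies whichever variants happen to respond to a recurring class of signals, making them more likely to dominate responses to similar signals later.

```python
# Toy sketch of selection upon a variant repertoire: random "circuits" are
# generated, and circuits that respond to experienced signals have their
# efficacies amplified, biasing future responses toward similar signals.
import numpy as np

rng = np.random.default_rng(1)
n_circuits, signal_dim = 200, 16

# "Developmental selection": variability yields a degenerate repertoire of circuits.
repertoire = rng.normal(size=(n_circuits, signal_dim))
efficacy = np.ones(n_circuits)                 # per-circuit synaptic efficacy

def respond(signal):
    """Index of the circuit selected for this signal (match weighted by efficacy)."""
    match = repertoire @ signal                # how well each variant fits the signal
    return int(np.argmax(match * efficacy))

# "Experiential selection": repeated exposure to one class of signals amplifies
# the efficacies of whichever variants happen to respond to it.
prototype = rng.normal(size=signal_dim)
for _ in range(300):
    signal = prototype + 0.3 * rng.normal(size=signal_dim)
    winner = respond(signal)
    efficacy[winner] *= 1.05                   # differential amplification

print("Most amplified circuits:", np.argsort(efficacy)[-3:])
print("Winner for a novel similar signal:",
      respond(prototype + 0.3 * rng.normal(size=signal_dim)))
```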
Dynamic Core is Edelman’s term encompassing reentrant neural circuitry, the ongoing process of recursive signaling among neuronal groups taking place across networks of massively parallel reciprocal fibers, especially in the connections between thalamus and cerebral cortex. This dynamic, relentless activity in thalamocortical circuits generates a continuing sequence of different metastable states that change over time, yet each of which has a unitary phenomenology at any given moment. Edelman asserts “there is no other object in the known universe so completely distinguished by reentrant circuitry as the human brain” (Edelman, 2003, 2024).
Edelman stresses that reentry is “a selectional process occurring in parallel” and that “it differs from feedback, which is instructional and involves an error function that is serially transmitted over a single pathway.” As a result of the correlations that reentry imposes on diverse, interacting neuronal groups, “synchronously active circuits across widely distributed brain areas are selectively favored.” This, Edelman suggests, “provides a solution to the so-called binding problem: how do functionally segregated areas of the brain correlate their activities in the absence of an executive program or superordinate map?” Binding of the outputs of every sensory modality, each generated by segregated cortical areas, is essential for our commonly perceived but underappreciated unity of consciousness (Edelman, 2003).
It is worth noting the close relationship between the Dynamic Core and Global Workspace (9.2.3) hypotheses, as jointly suggested by the authors of each, Edelman and Baars—each hypothesis having been put forward, independently, “to provide mechanistic and biologically plausible accounts of how brains generate conscious mental content.” Whereas “the Dynamic Core proposes that reentrant neural activity in the thalamocortical system gives rise to conscious experience,” the “Global Workspace reconciles the limited capacity of momentary conscious content with the vast repertoire of long-term memory.” The close relationship between the two hypotheses is said to allow “for a strictly biological account of phenomenal experience and subjectivity that is consistent with mounting experimental evidence.” The authors suggest that “there is now sufficient evidence to consider the design and construction of a conscious artifact” (Edelman et al., 2011).
The theory of neuronal group selection (TNGS), pioneered by Edelman (1987), has come to undergird a cluster of theories. As Anil Seth explains, “According to the TNGS, primary (sensory) consciousness arose in evolution when ongoing perceptual categorization was linked via reentry to a value-dependent memory creating the so-called ‘remembered present’ (Edelman 1989). Higher-order consciousness, distinguished in humans by an explicit sense of self and the ability to construct past and future scenes, arose at a later stage with reentrant pathways linking value-dependent categorization with linguistic performance and conceptual memory (Edelman 2003)” (Seth, 2007).
As Edelman’s mechanism for consciousness is based on the TNGS, he first distinguishes primary from higher-order consciousness. “Animals with primary consciousness can integrate perceptual and motor events together with memory to construct a multimodal scene in the present”—what James called the “specious present” and which Edelman calls “the remembered present” (Edelman, 1989). Such an animal with primary consciousness, Edelman says, “has no explicit narrative capability (although it has long-term memory), and, at best, it can only plan to deal with the immediate scene in the remembered present” (Edelman, 2003).
As for higher-order consciousness, Edelman is mainstream: “It emerges later in evolution and is seen in animals with semantic capabilities such as chimpanzees. It is present in its richest form in the human species, which is unique in possessing true language made up of syntax and semantics. Higher-order consciousness allows its possessors to go beyond the limits of the remembered present of primary consciousness. An individual’s past history, future plans, and consciousness of being conscious all become accessible” (Edelman, 2003).
How did the neural mechanisms underlying primary consciousness arise during evolution? Edelman’s proposal is as follows. “At some time around the divergence of reptiles into mammals and then into birds, the embryological development of large numbers of new reciprocal connections allowed rich reentrant activity to take place between the more posterior brain systems carrying out perceptual categorization and the more frontally located systems responsible for value-category memory. This reentrant activity provided the neural basis for integration of a scene with all of its entailed qualia … [which] conferred an adaptive evolutionary advantage” (Edelman, 2003).
In summary, according to Edelman, “consciousness arises as a result of integration of many inputs by reentrant interactions in the dynamic core. This integration occurs in periods of <500 ms. Selection occurs among a set of circuits in the core repertoire; given their degeneracy, a number of different circuits can carry out similar functions. As a result of the continual interplay of signals from the environment, the body, and the brain itself, each integrated core state is succeeded by yet another and differentiated neural state in the core … The sequences and conjoined arrays of qualia entailed by this neural activity are the higher-order discriminations that such neural events make possible. Underlying each quale are distinct neuroanatomical structures and neural dynamics that together account for the specific and distinctive phenomenal property of that quale. Qualia thus reflect the causal sequences of the underlying metastable neural states of the complex dynamic core” (Edelman, 2003).
Finally, Edelman addresses the hard problem. “The fact that it is only by having a phenotype capable of giving rise to those qualia that their ‘quality’ can be experienced is not an embarrassment to a scientific theory of consciousness. Looked at in this way, the so-called hard problem is ill posed, for it seems to be framed in the expectation that, for an observer, a theoretical construct can lead by description to the experiencing of the phenomenal quality being described. If the phenomenal part of conscious experience that constitutes its entailed distinctions is irreducible, so is the fact that physics has not explained why there is something rather than nothing. Physics is not hindered by this ontological limit nor should the scientific understanding of consciousness be hindered by the privacy of phenomenal experience.” Edelman is confident. “At the end of our studies, when we have grasped its mechanisms in greater detail, consciousness will lose its mystery and be generally accepted as part of the natural order” (Edelman, 2003).
Personally, I like analogizing the something/nothing ontological limit in physics to the phenomenal consciousness psychophysical privacy limit in neuroscience—the two ultimate questions of existence and sentience. But I hesitate to draw the analogy too tightly. Something/nothing is a kind of historical question of what happened, that is, explaining the hypothetical process. For example, it could be that nothing is in principle impossible. Phenomenal consciousness is a clearly contemporary question of what is, that is, explaining the actual thing. Moreover, I agree that even with its something/nothing ontological limit, physics can do its work, as with its phenomenal consciousness privacy limit, neuroscience can do its work. But that work, remember, constitutes the “easy problems.”
9.2.2. Crick and Koch’s neural correlates of consciousness (NCC)
The neural correlates of consciousness (NCC) is defined as the minimum activities in the brain jointly sufficient (and probably necessary) for any one specific conscious perception, and, extended, for subjective experience in general, the inner awareness of qualia. Originally applied to sleep and wakefulness (i.e., the reticular activating system in the brain stem), the NCC were formally proposed by Francis Crick and Christof Koch as a scientific approach to what had been believed to be the vague, metaphysical and somewhat discredited idea of consciousness (Crick and Koch, 1990), a program then championed by Koch (Koch, 2004, Closer To Truth) and others (though Koch has become something of a “romantic reductionist” [Koch, 2012a]).
While there are complex methodological issues, NCC mechanisms include neuronal electrophysiological action potentials (spikes), their frequencies and sequences; neurochemical transmitter flows in the synapses between neurons; and recurrent brain circuits in specific brain areas. An example is clusters of neurons that underlie wakefulness in the brainstem connecting to clusters of neurons in the thalamus, hypothalamus, basal ganglia and cerebral cortex related to awareness/consciousness (Wong, 2023).
Similarly, a “default ascending arousal network” (dAAN) has been proposed, with subcortical nodes in the brainstem, hypothalamus, thalamus, and basal forebrain (Edlow, 2024). While necessary for conscious arousal and wakefulness, the dAAN is not sufficient for phenomenal consciousness and is not what this Landscape is about.
As an example of the NCC way of thinking, an early NCC candidate was the claustrum, which receives input from almost all regions of cortex and projects back to almost all regions of cortex, and which, Crick and Koch speculated, could give rise to “integrated conscious percepts.” They used the analogy of the claustrum to a “conductor” and the cortex to an “orchestra,” such that the claustrum as a conductor “coordinates a group of players in the orchestra, the various cortical regions.” Without the conductor, as they build the analogy, “players can still play but they fall increasingly out of synchrony with each other. The result is a cacophony of sounds.” In the absence of the claustra in both cerebral hemispheres, attributes such as sensory modalities “may not be experienced in an integrated manner and the subject may fail to altogether perceive these objects or events or only be consciously aware of some isolated attribute.” This would mean, they suggest, “that different attributes of objects … are rapidly combined and bound in the claustrum” (Crick and Koch, 2005).
A more recent candidate for full and content-specific NCC is located in the posterior cerebral cortex, in a temporo-parietal-occipital hot zone (Koch et al., 2016), though no one is yelling “Eureka” and the search continues. Even so, while everyone knows that even strong correlation is not causation, strong correlation is still something. NCCs can be considered macroscopic materialism.
It was in 1998 that Christof Koch made the now legendary 25-year bet with philosopher David Chalmers—they are long-time friends—that neuroscientists would discover a “clear” NCC by 2023. No surprise that the bet paid off in Chalmers’ favor. (Koch presented Chalmers with a case of 1978 Madeira wine.) As Chalmers said, notwithstanding neuroscience’s great progress, “It’s clear that things are not clear,” while Koch, feigning chagrin, agreed (Horgan, 2023).
Koch was down but not out: he may have lost this consciousness battle, but the consciousness war would still be waged. Koch offered to re-up: another bet, another 25 years to achieve that “clear” NCC, another case of wine. “I hope I lose,” Chalmers said, smiling, taking the new bet, “but I suspect I’ll win.”
The smart money is again on Chalmers, although I have a different issue. What would a “clear” NCC mean? Suppose a specific group of neurons were proven to be both necessary and sufficient for a particular conscious experience, a direct correlation that no other group of neurons could claim. Koch would rightly win the bet, but would consciousness have been explained? Still, the perennial question: How can action potentials zipping along neurons and chemicals flowing between neurons literally be the phenomenal consciousness of inner experience? By what magic?
9.2.3. Baars’s and Dehaene’s global workspace theory
Proposed originally by Bernard Baars (Baars, 1988, 1997, 2002), extended with neuroimaging and computer modeling by Stanislas Dehaene (Dehaene and Naccache, 2000), the core claim of Global Workspace Theory (GWT) is brain-wide presence and broad accessibility of specific multi-sensory, multi-cognitive information, the total package being what constitutes conscious awareness. GWT is founded on the concept of an inner “theater of consciousness,” where the mental spotlight of awareness shines on sequential sets of integrated perceptions that are dominant, at least momentarily. (The global workspace “Theater of Consciousness” is said not to contradict the “Cartesian Theater” that Dennett rejects, because the former is not dualistic and does not reside in only one location in the brain; rather, the Theater of Consciousness is passive not active and is spread across much of the brain.)
GWT holds that conscious mental states are those which are “globally available” to a wide range of brain processes including attention, perception, assessment, memory, verbal description, and motor response. Which sets of integrated perceptions become dominant, move to center stage, and thus leap into conscious awareness? It’s a competition. Diverse data flows originating both within the brain (e.g., memories) and from external stimuli (i.e., sensory information) are in constant competition, such that the “winner” is broadcast broadly (i.e., globally) in the brain and becomes accessible throughout the brain, which is how we become aware of it as the content of our consciousness.
This brain-wide focus on a particular phenomenological package integrates all the relevant sensory and cognitive streams by recruiting all the relevant brain areas into an organic whole—while inhibiting other, extraneous, conflicting data flows—such that what resides in the global workspace is perceived as consciousness “snapshots” in continuous, movie-like motion. This means that while our conscious awareness may seem unified and seamless, in fact it is neither.
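As a toy illustration only (my own sketch, not code from Baars or Dehaene), the competition-and-broadcast dynamic can be caricatured in a few lines of Python: candidate perceptual coalitions compete, and the single winner is made globally available to every module.

    # Minimal, illustrative global-workspace sketch (not from Baars or Dehaene):
    # candidate "coalitions" compete; the strongest is broadcast to every module.
    import random

    modules = ["attention", "memory", "evaluation", "verbal_report", "motor"]

    def compete(candidates):
        """Winner-take-all: the coalition with the highest activation wins."""
        return max(candidates, key=lambda c: c["activation"])

    def broadcast(winner, modules):
        """The winning content becomes globally available to all modules."""
        return {m: winner["content"] for m in modules}

    candidates = [
        {"content": "smell of coffee", "activation": random.random()},
        {"content": "ache in knee", "activation": random.random()},
        {"content": "memory of yesterday's meeting", "activation": random.random()},
    ]

    winner = compete(candidates)
    workspace = broadcast(winner, modules)
    print("conscious content this moment:", winner["content"])
    print("globally available to:", list(workspace))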
Whereas GWT started in the 1980s as a purely psychological theory of conscious cognition, it has become a “family” of theories adapted to today’s far more detailed understanding of the brain. The brain-based version of GWT is called Global Workspace Dynamics because the cortex is viewed as a “unified oscillatory machine”. GWT, therefore, according to its advocates, joins other theories in taking consciousness as the product of highly integrated and widespread cortico-thalamic activity, including evidence that the prefrontal cortex participates in the visual conscious stream. Cortex is extraordinarily flexible in its dynamic recruitment of different regions for different tasks. Therefore, an arbitrary division between prefrontal and other neuronal regions is said to be misleading. Consciousness requires a much broader, more integrative view (Baars et al., 2021).
In a pioneering set of “adversarial collaboration” experiments to test hypotheses of consciousness by getting rival researchers to collaborate on the study design,19 preliminary results did not perfectly match GWT’s prediction that consciousness arises when information is broadcast to areas of the brain through an interconnected network. The transmission, according to GWT, happens at the beginning and end of an experience and involves the prefrontal cortex, at the front of the brain. But independent “theory-neutral” researchers found that only some aspects of consciousness, but not all of them, could be identified in the prefrontal cortex. Moreover, while they found evidence of brain broadcasting, the core of GWT, it was only at the beginning of an experience—not also at the end, as had been predicted. Further experiments are to come, but revisions to GWT are believed likely (Lenharo, 2023a, Lenharo, 2023b, 2024).
9.2.4. Dennett’s multiple drafts model
In his intellectual memoirs, I’ve Been Thinking, philosopher Daniel Dennett highlights two fundamental questions on which his career is founded—the two related philosophical problems he set himself to solve. “First, how can it be that some complicated clumps of molecules can be properly described as having states or events that are about something, that have meaning or content. And second, how can it be that at least some of these complicated clumps of molecules are conscious—that is, aware that they are gifted with states or events that are about something?” (Dennett, 2023a, 2023b).
In dealing with these questions, Dennett realized, way back in his PhD dissertation in 1965, that “the best—and only—way of making sense of the mind and consciousness is through evolution by natural selection on many levels.” Dennett’s core insight subsuming biological evolution in general and the development of mind in particular is concise: reasons without a reasoner, design without a designer, and competence without comprehension (Dennett, 2007).
Dennett’s theory of consciousness is distinguished by four ideas: (i) there is no “Cartesian Theater,” no inner witness viewing the consciousness show; (ii) different brain regions or modules develop different kinds of content, which Dennett calls “multiple drafts”; (iii) the multiple drafts compete with one another for attention, the winner of the winner-take-all competition occupying the entirety of the conscious moment, which Dennett calls “fame in the brain”; and (iv) the collection of all these conscious moments coalesces into a kind of life story, the emergence of a sense of “self,” which Dennett describes as a “center of narrative gravity.”
In Consciousness Explained, Dennett presents his multiple drafts model of consciousness (Dennett, 1992). He states that all varieties of perception, thought, or mental activity are processed in the brain via parallel, multitrack interpretations and elaborations, subject to continuous “editorial revision.” These “yield, over the course of time, something rather like a narrative stream or sequence, the product of continual editing by many processes distributed around the brain.” Dennett has the brain consisting of a “bundle of semi-independent agencies,” and his metaphor “fame in the brain” tells us what it takes for competing ideas to determine the content of consciousness at any given moment.
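A deliberately crude sketch (mine, purely illustrative and not Dennett’s own formalism) of the multiple drafts idea: several semi-independent processes keep revising their own drafts in parallel, and whichever draft happens to be probed at a given moment is what gets reported, with no canonical “final” version ever assembled.

    # Illustrative-only sketch of parallel drafts under continual "editorial revision";
    # whichever draft is most influential when probed is reported ("fame in the brain").
    import random

    drafts = {"visual": "red blur", "auditory": "faint hum", "semantic": "a car?"}
    revisions = {"visual": "red car turning", "auditory": "engine noise",
                 "semantic": "a car turning the corner"}

    def revise(drafts):
        # Each process edits its own draft independently, at its own pace.
        for k in drafts:
            if random.random() < 0.5:
                drafts[k] = revisions[k]

    def probe(drafts):
        # A probe (a question, an action) samples whichever draft currently
        # dominates; length here is only a crude stand-in for "influence".
        return max(drafts.values(), key=len)

    for t in range(3):
        revise(drafts)
        print(f"t={t}: report if probed now ->", probe(drafts))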
In supporting his theory, Dennett needs to undermine what we take to be common sense. He challenges the verisimilitude of inner experience, which he calls more like theorizing than like describing. He rejects the notion of a single central location (his “Cartesian theater”) where conscious experience can be “viewed.” He dissolves the idea of the “self” as the central character of stories made up by content fixation and propagation in the brain. Moreover, he argues that the properties of qualia are incompatible and therefore incoherent, thus obviating the need to solve Chalmers’s hard problem.20 Dennett needs all four of these counterintuitive yet deeply probative assertions; the package is admirably coherent, but buying it is a tall order.
Of Dennett’s four assertions, his desired demolition of qualia is perhaps his most critical move. Here is how he defends it. “Qualia are user-illusions, ways of being informed about things that matter to us in the world (our affordances) because of the way we and the environment we live in (microphysically) are. They are perfectly real illusions! They just aren’t what they seem to be; they are not intrinsic, unanalyzable properties of mental states; they are highly structured and complex activated neural networks that dispose us to do all sorts of things in response—such as declare that we’re seeing something blue. The key move is to recognize that we have underprivileged access to the source or cause of our convictions about what we experience” (Rosenberg and Dennett, 2020).
Ironically, while Dennett calls as evidence “user illusions” in his case to deflate consciousness and support materialism, cognitive psychologist Donald Hoffman calls as evidence “user illusions” in his case to inflate consciousness and deny materialism (16.5). This contrasting interpretation of precisely the same data by two first-rate thinkers is fascinating, perhaps telling.
Dennett is not shy in asserting that people still underestimate by a wide margin the challenges that the brain-in-vat thought experiment raises for views of consciousness other than Dennett’s own. The key fact is that “you don’t know anything ‘privileged’ about the causation of your own thoughts. You cannot know ‘from the inside’ what events cause you to think you see something as red or green, for instance, or cause you to push button A instead of button B.” In short, to truly understand consciousness, Dennett says “you need to go outside yourself and adopt the ‘third-person point of view’ of science” (Dennett, 2023a, 2023b).
Dennett stresses the importance of treating subjects’ beliefs about their own consciousness as “data to be explained, not necessarily as true accounts of mental reality.” He states, “This is the major fault line in philosophy of mind today, with John Searle, Tom Nagel, David Chalmers, Galen Strawson, and Philip Goff [all represented in this paper], among others, thinking they can just insist they know better. They don’t. Those who object, who hold out for some sort of ‘first-person science of consciousness,’ have yet to describe any experiments or results that are trustworthy but unobtainable by heterophenomenology” (the term Dennett coined for the third-person method, the phenomenology of other minds, which is standard procedure in cognitive science). Dennett says his meeting with leading scientific researchers on consciousness enabled him “to begin to form at least vague ideas of how mechanisms of the brain might do all the work,” but only, he insists, “if we deflated some of the overconfident pronouncements of introspectors about the marvels of the phenomena” (Dennett, 2023a, 2023b).
In describing his early book, Content and Consciousness, where he puts content before consciousness, Dennett differentiates himself from John Searle, who puts consciousness before content. Although Searle and Dennett are both biological naturalists and both, for example, eschew panpsychism, Dennett believes that by prioritizing content, the mystery of consciousness is mitigated.
Dennett has had a long, friendly, though surely adversarial relationship with Chalmers. “Even expert scientists have been fooled by Chalmers’ ‘the Hard Problem’ into thinking that there’s one big mysterious fact that needs explaining, when in fact there are hundreds of lesser problems that can be solved without any scientific revolutions, and when they are all solved, the so-called Hard Problem will evaporate” (Dennett, 2023a, 2023b).
It is worth noting the more general case of a multiple module way of thinking, which posits separate if not independent cognitive components of the mind rooted in the brain (though not needing to correspond to identifiable brain structures). (9.2.5.)
9.2.5. Minsky’s society of mind
Artificial intelligence pioneer Marvin Minsky calls the multiple semi-independent modules in the human mind, generated by physically locatable modules in the human brain, The Society of Mind (not coincidentally the name of his book). It is a model of human cognition constructed, step by step, from the nonconscious interactions of simple mindless elements he calls “agents” (Minsky, 1986).
“What does it mean to say you’re aware of yourself?” Minsky asks. It would be impossible “for any one part of the brain to know what’s happening in all the other parts of the brain because there’s just too much. Each part of the brain has connections to other parts of the brain and can get some ideas, but there’s no place that knows everything” (Minsky, 2007b).
“The Society of Mind,” according to Minsky, is the end product of a vast evolutionary history, beginning with just clumps of neurons. Because neurons evolved early and had to keep their physiological integrity, progress was made by neurons gathering together, which led to the first small brains, and when these small brains began to specialize as well as to associate, “mind” began to develop (Minsky, 2007b).
Minsky is as blunt as he is insightful. “While many neuroscientists focus on how brain cells [neurons] work, to me, that’s pretty much like trying to understand a computer from how transistors work. The neurons and synapses are maybe six levels of organization below the thoughts that you’re actually aware of, the important things that distinguish a human from a crayfish. These high-level descriptions are what counts, and each of them has to be understood by itself. Any particular thing that happens in Level 5 can be understood as a combination of maybe 20 or 50 things that happen in Level 4 and so forth. But you can’t understand Level 5 even if you know everything about how neurons and synapses work. The difference between a human and a crayfish is that a human has these multiple levels of brain organization that the earlier animals did not have” (Minsky, 2007b).
Actually, Minsky says, “I’m interested in how this piece of machine, the brain, can do things like decide that what it’s doing isn’t working. How does it develop new goals? How does it develop new methods for achieving its goals? And, most important, how does it make a model of itself as a being in a world and think high-level stuff about its own past and its future?”
It has been known for well over 100 years that the brain has many different parts. Minsky envisions something “like a great network of computers, each of which is specialized. It’s not that it’s a society of little people, but rather a society of biological machines, say 400 or more of these, each with different top-level functions, including the capacity to imagine planning proposals and counterfactual histories.”
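To make the picture concrete, here is a minimal sketch (my own, not Minsky’s) of the society-of-mind flavor of explanation: many specialized agents each see only their own slice of the data, produce only local judgments, and no single agent holds the whole picture.

    # Illustrative sketch of a "society" of specialized agents; no agent knows
    # everything, and the overall behavior is the ensemble of local judgments.
    agents = {
        "edge_detector": lambda scene: "edges present" if scene.get("edges") else "no edges",
        "color_namer":   lambda scene: scene.get("color", "unknown color"),
        "goal_monitor":  lambda scene: "goal met" if scene.get("target_found") else "keep looking",
    }

    scene = {"edges": True, "color": "red", "target_found": False}

    # Each agent returns only its own partial report; "mind," on this view,
    # is the organization of such reports, not any central overseer.
    reports = {name: agent(scene) for name, agent in agents.items()}
    for name, report in reports.items():
        print(f"{name}: {report}")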
Minsky speculates that cortical columns of related neurons, which are intermediate in complexity, can store things for a certain period without any changes in probability or conductions. We evolved these structures, he says, “so we could have reliable short-term memories that represent knowledge in many different ways.” In context, Minsky advises studying “insulation theory.” He says, “Theorists called ‘connectionists’ say what’s important about the brain is how things are connected to each other. You could argue that it’s even more important to know how things are insulated from each other—why you don’t get a big traffic jam because there’s too many connections” (Minsky, 2007b).
9.2.6. Graziano’s attention schema theory
Advanced by neuroscientist Michael Graziano, attention schema theory asserts that for the brain to handle a profusion of information it must have developed a quick and dirty model, a simplified version of itself, which it then reports “as a ghostly, non-physical essence, a magical ability to mentally possess items” (Graziano, 2019a, 2019b). He likens the attention schema to “a self-reflecting mirror: it is the brain’s representation of how the brain represents things, and is a specific example of higher-order thought. In this account, consciousness isn’t so much an illusion as a self-caricature.”
Graziano claims that this idea, attention schema theory, gives a simple reason, straight from control engineering, for why the trait of consciousness would evolve, namely, to monitor and regulate attention in order to control actions in the world. Thus, Graziano argues that “the attention schema theory explains how a biological, information processing machine can claim to have consciousness, and how, by introspection (by assessing its internal data), it cannot determine that it is a machine whose claims are based on computations” (Graziano, 2019a, 2019b).
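Read as control engineering, the proposal can be caricatured in a short sketch (mine, not Graziano’s): a system regulates its attention using a simplified internal model of that attention (the schema), much as a thermostat regulates temperature using a crude model rather than the full physics.

    # Toy control-loop sketch of an "attention schema": the system steers its
    # attention toward a target level using a lossy, simplified self-model,
    # never the messy underlying process itself.
    def step(attention, target, gain=0.5):
        schema_estimate = round(attention, 1)      # the simplified self-model
        error = target - schema_estimate           # regulate against the model
        return attention + gain * error

    attention, target = 0.2, 0.9
    for t in range(5):
        attention = step(attention, target)
        print(f"t={t}: attention = {attention:.2f}")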
9.2.7. Prinz’s neurofunctionalism: how attention engenders experience
Philosopher Jesse Prinz accounts for consciousness with two main claims: first, consciousness always arises at a particular stage of perceptual processing, the intermediate stage; and second, consciousness depends on attention. “Attention” is Prinz’s focus in that it “changes the flow of information allowing perceptual information to access memory systems.” Neurobiologically, he says, “this change in flow depends on synchronized neural firing. Neural synchrony is also implicated in the unity of consciousness and in the temporal duration of experience” (Prinz, 2012).
What Prinz calls “attention” is a particular process of making an integrated representation of a stimulus’ multiple properties, as perceived from a given point of view, available to working memory—and it is this process, and only this process, that generates consciousness. “Intermediateness,” as Prinz’s term of art, locates the critical transformation when representations are “integrated into a point-of-view-retaining format that gets made available by this ‘attention process’” to working memory. This is why Prinz’s theory earns the appellation, “Attended Intermediate Representation Theory” (Mole, 2013). [Note: Prinz’s theory could be classified under Representational Theories.]
In exploring the limits of consciousness, Prinz states, “We have no direct experience of our thoughts, no experience of motor commands, and no experience of a conscious self.” His strong assertion is that “All consciousness is perceptual, and it functions to make perceptual information available to systems that allow for flexible behavior.” Thus, Prinz provides “a neuroscientifically grounded response to the leading argument for dualism,” and he argues that “materialists need not choose between functional and neurobiological approaches, but can instead combine these into neurofunctional response to the mind-body problem” (Prinz, 2012).
Prinz encourages a direct, head-to-head competition, as it were, between his neurofunctionalism and David Chalmers’s hard problem (Mole, 2013). “Where he [Chalmers] sought to synthesize two decades of dualist argumentation, I [Prinz] try here to synthesize two decades of empirical exploration” (Prinz, 2012; Mole, 2013). Whereas Chalmers famously declares that “no explanation given in wholly physical terms can ever account for the emergence of conscious experience,” Prinz counters that there is now “a satisfying and surprisingly complete theory [contained entirely within materialism] of how consciousness arises in the human brain” (Prinz, 2012).
9.2.8. Sapolsky’s hard incompatibilism
Neuroendocrinologist and biological anthropologist Robert Sapolsky counts himself as a “hard incompatibilist,” affirming the truth of determinism (i.e., all events and actions are the product of prior events and actions) and denying the existence of free will. There is no possibility, he says, “of reconciling our being biological organisms built on the physical rules of the universe with there being free will, a soul, a ‘Me’ inside there which is somehow free of biology. You have to choose one or the other and, philosophically, I am completely in the direction of us being nothing more or less than our biology (and its interactions with the environment)” (Sapolsky, 2023b).
Sapolsky’s target is free will, not consciousness, but to deal with free will, he must deal with consciousness—after all, free will, if it exists, would be a product of consciousness, not the reverse.
But Sapolsky is a reluctant consciousness warrior. Introducing a section of his book labeled “What Is Consciousness?”, he enjoys some self-deprecation. “Giving this section this ridiculous heading,” he says, seemingly smiling, “reflects how unenthused I am about having to write this next stretch. I don’t understand what consciousness is, can’t define it. I can’t understand philosophers’ writing about it. Or neuroscientists’, for that matter, unless it’s ‘consciousness’ in the boring neurological sense, like not experiencing consciousness because you’re in a coma” (Sapolsky, 2023a).
Referencing the Libet experiments (9.1.2), which purport to dissociate conscious awareness from brain decision-making, Sapolsky argues that “three different techniques, monitoring the activity of hundreds of millions of neurons down to single neurons, all show that at the moment when we believe that we are consciously and freely choosing to do something, the neurobiological die has already been cast. That sense of conscious intent is an irrelevant afterthought.” In another context with another metaphor, he calls consciousness “an irrelevant hiccup” (Sapolsky, 2023a).
Yet Sapolsky is not prepared to dismiss consciousness as “just an epiphenomenon, an illusory, reconstructive sense of control irrelevant to our actual behavior.” This strikes me, he says, “as an overly dogmatic way of representing just one of many styles of neuroscientific thought on the subject” (Sapolsky, 2023a).
Pushed to state what he believes consciousness is, Sapolsky demurs. “Consciousness is beyond me to understand—every few years I read a review from the people trying to understand it on a neurobiological level, and I cannot understand a word of what they are saying. For me, consciousness arises as a ‘complex emergent property’—which explains everything and nothing” (Sapolsky, 2023b).
9.2.9. Mitchell’s free agents
While neuroscientist Kevin Mitchell argues, contra many scientists and philosophers, that free will, or agency, is not an illusion—that “we are not mere machines responding to physical forces but agents acting with purpose”—he still asserts, “you cannot escape the fact that our consciousness and our behavior emerge from the purely physical workings of the brain” (Mitchell, 2023, p. 3).
Mitchell mounts an evolutionary case for how living beings capable of choice arose from lifeless matter, stressing “the emergence of nervous systems provided a means to learn about the world,” thus enabling sentient animals to model, predict, and simulate. These faculties reach their peak in humans with our capacities “to imagine and to be introspective, to reason in the moment, and to shape our possible futures through the exercise of our individual agency” (Mitchell, 2023).
Normally, those who deny “real” (libertarian) free will also hold that consciousness is entirely physical, and conversely, those who affirm “real” (libertarian) free will are more likely to opt for nonphysical theories. Mitchell is significant in that he defends “real” free will but unambiguously takes consciousness to be entirely physical. He describes creaturely acts of what he considers “free will” before consciousness even evolved. “Thoughts are not immaterial,” he says; “they are physically instantiated in patterns of neural activity in various parts of the brain … There’s no need to posit a ‘ghost in the machine’—you’re not haunting your own brain. The ‘ghost’ is the machine at work” (Mitchell, 2023, pp. 267–268).
9.2.10. Bach’s cortical conductor theory
Cognitive scientist Joscha Bach posits a functional explanation for phenomenal consciousness, the cortical conductor theory (CCT), where “cortical structures are the result of reward-driven learning, based on signals of the motivational system, and the structure of the data that is being learned.” Critical is the “conductor,” which is “a computational structure that is trained to regulate the activity of other cortical functionality. It directs attention, provides executive function by changing the activity and parameterization and rewards of other cortical structures, and integrates aspects of the processes that it attended to into a protocol. This protocol is used for reflection and learning” (Bach, 2019).
Bach has CCT’s “elementary agents” as columns in the cerebral cortex that “self-organize into the larger organizational units of the brain areas as a result of developmental reinforcement learning. The activity of the cortical orchestra is highly distributed and parallelized, and cannot be experienced as a whole.” However, its performance is coordinated by the conductor, which is not a homunculus, “but like the other instruments, a set of dynamic function approximators” (situated in prefrontal cortex21). Whereas most cortical instruments, he says, “regulate the dynamics and interaction of the organism with the environment (or anticipated, reflected and hypothetical environments), the conductor regulates the dynamics of the orchestra itself.” The process is based on signals of the motivational system and it provides executive function, resolves conflicts between cortical agents, and regulates their activities (Bach, 2019).
“The conductor is the only place where experience is integrated,” Bach states. “Information that is not integrated in the protocol cannot become functionally relevant to the reflection of the system, to the production of its utterances, the generation of a cohesive self model, and it cannot become the object of access consciousness.” Without the conductor, he asserts, our brain can still perform most of its functions, but we would be “sleepwalkers, capable of coordinated perceptual and motor action, but without central coherence and reflection.”
Memories empower Bach’s theory. “Memories can be generated by reactivating a cortical configuration via the links and parameters stored at the corresponding point in the protocol. Reflective access to the protocol is a process that can itself be stored in the protocol, and by accessing this, a system may remember having had experiential access.” For phenomenal consciousness, Bach claims “it is necessary and sufficient that a system can access the memory of having had an experience—the actuality of experience itself is irrelevant.”
Phenomenal consciousness, according to Bach, “may simply be understood as the most recent memory of what our prefrontal cortex attended to. Thus, conscious experience is not an experience of being in the world, or in an inner space, but a memory. It is the reconstruction of a dream generated [by] more than fifty brain areas, reflected in the protocol of a single region. By directing attention to its own protocol, the conductor can store and recreate a memory of its own experience of being conscious” (Bach, 2019).
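The conductor-and-protocol architecture can be caricatured in a brief sketch (my own illustration, not Bach’s code): a conductor attends to one cortical “instrument” at a time, writes what it attended to into a protocol, and “remembering having had an experience” is just reading that protocol back.

    # Minimal illustration of a conductor writing an attentional "protocol";
    # reflection reads the protocol back as the memory of having attended.
    cortical_modules = {
        "vision":  "red balloon drifting",
        "hearing": "child laughing",
        "touch":   "string tugging at the hand",
    }

    protocol = []                                 # record of attended content

    def conduct(focus):
        attended = cortical_modules[focus]        # the conductor directs attention
        protocol.append((focus, attended))        # integration happens only here
        return attended

    for focus in ["vision", "hearing", "vision"]:
        conduct(focus)

    # Reflection accesses the memory of having attended, not the activity itself.
    for modality, content in protocol:
        print(f"remembered attending to {modality}: {content}")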
Unlike Integrated Information Theory (12), Bach says CCT is a functionalist model of consciousness, with similarity to other functionalist approaches, such as the ones suggested by Dennett (9.2.4) and Graziano (9.2.6) (Bach, 2019).
9.2.11. Brain circuits and cycles theories
Brain circuits and cycles as mechanisms of consciousness are older explanations, no longer considered sufficient in themselves, having evolved into more sophisticated theories. Brain circuits cover the following kinds of large-scale brain structures: lateral pathways across the cerebral cortex linking diverse cortical areas (e.g., especially in the prefrontal, cingulate and parietal regions of the cortex, which are involved in higher-level activities such as planning and reasoning); the reticular activating system focusing attention, shaping behaviors, and stimulating motivation; and vertical thalamocortical radiations mediating sensory and motor systems.22 Brain cycles cover electroencephalogram (EEG) waves over broad regions of the cerebral cortex, the product of massive numbers of neurons firing synchronously (e.g., gamma waves at 40 Hz).
A contemporary explanation recruits bidirectional information transfer between the cortex and the thalamus—recurrent corticothalamic and thalamocortical pathways—which are said to regulate consciousness. Evidence suggests “a highly preserved spectral channel of cortical-thalamic communication that is present during conscious states, but which is diminished during the loss of consciousness and enhanced during psychedelic states” (Toker et al., 2024).
Dendritic Integration Theory (DIT), linking neurobiology and phenomenology, relates cellular-level mechanisms to conscious experience by leveraging “the intricate complexities of dendritic processing” in brain circuits. Jaan Aru et al. propose that “consciousness is heavily influenced by, or possibly even synonymous with, the functional integration of two streams of cortical and subcortical information that impinge on different compartments of cortical layer 5 pyramidal (L5p) cells” (Aru, 2023). The biophysical properties of pyramidal cells “allow them to act as gates that control the evolution of global activation patterns,” such that “in conscious states, this cellular mechanism enables complex sustained dynamics within the thalamocortical system, whereas during unconscious states, such signal propagation is prohibited.” Aru et al. suggest that the DIT “hallmark of conscious processing is the flexible integration of bottom-up and top-down data streams at the cellular level” (Aru, 2023; Aru, 2020).
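The gating idea lends itself to a toy two-compartment sketch (mine, not from Aru et al.): a layer-5 pyramidal cell “bursts,” and so propagates a signal into the wider thalamocortical loop, only when the bottom-up (basal) and top-down (apical) streams arrive together.

    # Illustrative two-compartment gate for an L5p cell: propagation requires
    # the coincidence of bottom-up (basal) and top-down (apical) drive.
    def l5p_gate(basal_input, apical_input, threshold=0.5):
        basal_drive  = basal_input  > threshold    # sensory, bottom-up stream
        apical_drive = apical_input > threshold    # contextual, top-down stream
        if basal_drive and apical_drive:
            return "burst"          # integrated: sustained thalamocortical dynamics
        if basal_drive:
            return "single spike"   # unintegrated: weak, local propagation
        return "silent"

    print(l5p_gate(0.8, 0.7))   # burst
    print(l5p_gate(0.8, 0.1))   # single spike
    print(l5p_gate(0.2, 0.9))   # silent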
9.2.12. Northoff’s temporo-spatial sentience
Psychiatrist and neuroscientist Georg Northoff postulates what he calls “sentience” as “a more basic and fundamental dimension of consciousness,” and he proposes that sentience arises via “temporo-spatial mechanisms”—characterized by brain activity, spatiotemporal relationship, and structure—with which “the brain constructs its own spontaneous activity [that] are key for making possible the capacity to feel, namely sentience.” Northoff’s model is based on his supposition that “in addition to the level/state and content of consciousness, we require a third dimension of consciousness, the form or structure or organization of consciousness.” Thus, his “temporo-spatial theory of consciousness” leads him to posit “specific neuro-ecological and neuro-visceral mechanisms that are, in their most basic nature, intrinsically temporospatial.” We have this capacity to feel and thus for sentience, he says, “because our brain continuously integrates the different inputs from body and environment within its own ongoing temporo-spatial matrix” (Northoff, 2021).
Northoff distinguishes “spatiotemporal neuroscience” from cognitive neuroscience and related branches (like affective, social, etc.) in that spatiotemporal neuroscience focuses on brain activity (rather than brain function), spatiotemporal relationship (rather than input-cognition-output relationship), and structure (rather than stimuli/contents). In this sense, spatiotemporal neuroscience “allows one to conceive the neuro-mental relationship in dynamic spatiotemporal terms that complement and extend (rather than contradict) their cognitive characterization” (Northoff et al., 2020).
Finally, Northoff and colleagues feel “the need to dissolve the mind-body problem (and replace it by the world-brain relation).” They also address other philosophical issues like assuming “time (and space) to be constructed in different scales, small and long, with all different scales being nested (like the different Russian dolls) within each other.” For example, “a mental feature may be characterized by an extremely short and restricted spatiotemporal scale which, if abstracted and thereby detached from its underlying longer and more extended scale may seem to be non-dynamic and thus a re-presentation of an event or object. This is like taking one smaller Russian doll out and consider it in isolation from all the others (and, even worse, forgetting that any of the others were ever present).” If, in contrast, they suggest, “one conceives the spatiotemporal scale of mental features in the larger context of other spatiotemporal scales, one can take into view their nestedness.” In this view, Northoff has mental features as “nothing but a small Russian doll that is nested within the longer and more extended scales of the brain’s spontaneous activity (which, by itself, is nested within the yet much larger spatiotemporal scales of body and world)” (Northoff et al., 2020).
9.2.13. Bunge’s emergent materialism
Philosopher and physicist Mario Bunge rejects any “separate mental entity,” calling it “a stumbling block to progress.” It is “unwarranted by the available data and the existing psychological models,” he says, and it collides “head-on with the most fundamental ideas of all modern science.” Rather, Bunge argues that the mind-body problem requires a psychobiological approach, based on the assumption that behavior is an external manifestation of neural processes—an approach that also abandons ordinary language in favor of a “state space language, which is mathematically precise and is shared by science and scientific philosophy” (Bunge, 1980; 2014). More broadly, he presents a systematic model of mankind as a “biopsychosocial entity” and he favors “the multilevel approach” over “the holistic, the analytic, and the synthetic approaches” (Bunge, 1989).
Upfront, Bunge defines his idiosyncratic position: “I am an unabashed monist”—his objective is “to reunite matter and mind”—and “I am a materialist but not a physicalist.” By the latter distinction, Bunge means that while the material world is all there is (i.e., there are no nonmaterial substances), the laws of physics cannot explain all phenomena (i.e., “physics can explain neither life nor mind nor society”) (Bunge, 2011; Slezak, 2011).
Bunge calls his theory, or more precisely, his “programmatic hypothesis,” about the mind-body problem “emergent materialism”—his core concept being that “mental states form a subset (albeit a very distinguished one) of brain states (which in turn are a subset of the state space of the whole animal).” The hypothesis is unambiguously materialist, even though “biosystems, including their mental states, have properties that are not reducible to their physical and chemical properties.” Mind, according to Bunge, “is just a collection of functions (activities, events) of an extremely complex central nervous system.” Mental states are distinguished from brain states broadly in that mental states reflect only those brain states that exhibit neural plasticity, especially learning, in contrast to brain states that are more phylogenetically fixed (Bunge, 1980; 2014).
Approaching the mind-body problem as a general systems theorist, Bunge shows, in particular, “how the concept of a state space can be used to represent the states and changes of state of a concrete thing such as the central nervous system.” He stresses the concept of emergence—he defines an emergent property as “a property possessed by a system but not by its components.” He then focuses on the level where such emergence occurs, arguing that “the mental cannot be regarded as a level on a par with the physical or the social.” The upshot, he says, is “a rationalist and naturalist pluralism.” While he rejects Dualism (15) as both untestable and contradictory to science, he also rejects Eliminative Materialism (9.1.1) and reductive materialism (9.1.7) “for ignoring the peculiar (emergent) properties of the central nervous system.” He opts for “emergentist materialism” as a variety of “psychoneural monism,” but cautions that it needs detailed mechanisms, especially mathematical ones (Bunge, 1977).
Bunge trains his delightfully acerbic guns on choice theories: computationalism (“a sophisticated version of behaviorism,” “brainless cognitive science”); studying higher level mental phenomena rather than neuroscience and “objective brain facts” (“Cartesian mind-body dualism,” “psychoneural dualism”); philosophical zombies (“responsible people do not mistake conceptual possibility, or conceivability, for factual possibility or lawfulness; and they do not regard the ability to invent fantasy worlds as evidence for their real existence”); and panpsychism (“illustrates the cynical principle that, given an arbitrary extravagance, there is at least one philosopher capable of inventing an even more outrageous one”) (Slezak, 2011; Bunge, 2011).
Bunge also complains that “the division of scientific labor has reached such a ridiculous extreme that many workers in neuroscience and psychology tend to pay only lip service to the importance of studies in development and evolution for the understanding of their subject.” Such neglect of development and evolution, he says, has had at least three undesirable consequences: 1) overlooking the biological maturation of the central nervous system (e.g., the corpus callosum takes up to a decade to develop); 2) exaggerating leaps at the expense of graduality (particularly of the information-processing variety); and conversely, 3) exaggerating continuity at the expense of qualitative novelty (animal psychologists who claim that human mental abilities differ only in degree from prehuman ones) (Bunge, 1989).
In sum, to explain behavior and mentation in scientific terms, Bunge calls for a synthesis or merger of neuroscience and social science, rather than for a reduction, “even though the behavioral and mental processes are neurophysiological.” Put philosophically, “this is a case of ontological reduction without full epistemological reduction” (Bunge, 1989).
9.2.14. Hirstein’s mindmelding
William Hirstein argues that it is “the assumption of privacy”—the deep, metaphysical impossibility for one person to ever experience the conscious states of another—that has led philosophers and scientists to claim wrongly that the conscious mind can never be explained in straightforwardly physical terms and thus to “create vexing dualisms, panpsychisms, views that would force changes in our current theories in physics, views that deny the reality of consciousness, or views that claim the problem is insoluble.” Hirstein seeks to undermine “the assumption of privacy” by the thought experiment of “mindmelding”: connecting one person’s cerebral cortex control network to another person’s cerebral cortex visual attention network. This would entail inter-brain rather than the normal intra-brain coupling. Then the first person might correctly say, “Wow, I am experiencing your conscious visual states. Did you know you are color blind?” The control network functions as a referent for “I”—the subject of the visual states—and the other person’s conscious visual states are the referent for “your conscious visual states.” As such, mindmelding would support phenomenal consciousness as entirely physical, realizable in terms of neurobiology, which would be both necessary and sufficient (Hirstein, 2012).
9.3. Electromagnetic field theories
Electromagnetic (EM) Field Theories treat minds as identical to, or derivative from, the broader, brain-spanning EM fields generated by the cumulative aggregate of multiple, specific neural currents. The brain is packed with an intricate three-dimensional web of these EM fields—the questions are what functions (if any) these EM fields serve, and whether they relate in any way to consciousness.
Diverse studies are said to support an EM field theory. For example, “transient periods of synchronization of oscillating neuronal discharges in the frequency range 30–80 Hz (gamma oscillations) have been proposed to act as an integrative mechanism that may bring a widely distributed set of neurons together into a coherent ensemble that underlies a cognitive act.” Transitions between the moment of perception and the motor response are marked by periods of strong desynchronization, which suggests “a process of active uncoupling of the underlying neural ensembles that is necessary to proceed from one cognitive state to another” (Rodriguez, 1999).
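Synchronization of this kind is often illustrated with the textbook Kuramoto model of coupled oscillators, offered here only as a generic sketch (it is not drawn from Rodriguez et al.): weakly coupled oscillators with slightly different natural frequencies pull one another into a coherent ensemble, and decoupling lets them drift apart again.

    # Generic Kuramoto-model sketch: coupled ~40 Hz oscillators fall into synchrony.
    import math, random

    N, K, dt, steps = 20, 10.0, 0.001, 2000        # oscillators, coupling, time step, steps
    freqs  = [2 * math.pi * random.gauss(40, 0.5) for _ in range(N)]   # ~40 Hz "gamma"
    phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

    def coherence(phases):
        """Kuramoto order parameter: 0 = incoherent, 1 = fully synchronized."""
        re = sum(math.cos(p) for p in phases) / len(phases)
        im = sum(math.sin(p) for p in phases) / len(phases)
        return math.hypot(re, im)

    print("coherence before coupling:", round(coherence(phases), 2))
    for _ in range(steps):
        coupling = [K * sum(math.sin(q - p) for q in phases) / N for p in phases]
        phases = [p + dt * (w + c) for p, w, c in zip(phases, freqs, coupling)]
    print("coherence after coupling: ", round(coherence(phases), 2))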
The stability of working memory is said to emerge at the level of the electric fields that arise from neural activity, more than from the specific neural activity itself, as “the exact neurons maintaining a given memory (the neural ensemble) change from trial to trial.” In the face of this “representational drift,” electric fields carry information about working memory content, enable information transfer between brain areas and “can act as ‘guard rails’ that funnel higher dimensional variable neural activity along stable lower dimensional routes” (Pinotsis and Miller, 2022).
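A bare-bones numerical illustration (mine, not from Pinotsis and Miller) of the point: even if a different random subset of neurons carries the memory on every trial, a coarse, field-like summary over the whole population can remain stable.

    # Toy illustration of "representational drift": the ensemble changes across
    # trials, but a population-level, field-like summary stays the same.
    import random

    N_NEURONS, ACTIVE = 100, 20

    def trial():
        ensemble = set(random.sample(range(N_NEURONS), ACTIVE))   # who fires this trial
        activity = [1.0 if i in ensemble else 0.0 for i in range(N_NEURONS)]
        return ensemble, sum(activity)                            # identity vs. summed "field"

    (e1, f1), (e2, f2) = trial(), trial()
    print("ensemble overlap between trials:", len(e1 & e2), "of", ACTIVE)
    print("field-level summary, trial 1 vs trial 2:", f1, f2)     # identical despite drift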
Electric fields, applied externally, have been shown to modulate pharmacologically evoked neural network activity in rodent hippocampus and to enhance and entrain physiological neocortical neural network activity (i.e., neocortical slow oscillation) in vitro as a model system. Both show the neural efficacy of weak sinusoidal and naturalistic electric fields (Fröhlich and McCormick, 2010).
Neuroinformatics/EEG neuroscientists Andrew and Alexander Fingelkurts formulate a framework of “Operational Architectonics (OA) of Brain-Mind Functioning,” where “consciousness is an emergent phenomenon of coherent but dynamic interaction among operations produced by multiple, relatively large, long-lived and stable, but transient neuronal assemblies in the form of spatiotemporal patterns within the brain’s electromagnetic field.” OA’s architectural structure is “characterized by a nested hierarchy of operations of increasing complexity: from single neurons to synchronized neuronal assemblies and further to the operational modules of integrated neuronal assemblies.” Conscious phenomena are “brought to existence” by the brain generating a “dynamic, highly structured, extracellular electromagnetic field in spatiotemporal domains and over a wide frequency range.” Neurophysiological substrates of single operations (standing electromagnetic fields), produced by different neuronal assemblies, “present different qualia or aspects of the whole object/scene/concept.” At the same time, “the wholeness of the consciously perceived or imagined is a result of synchronized operations (electromagnetic fields) of many transient neuronal assemblies in the form of dynamic and ever-increasing spatiotemporal patterns termed Operational Modules (OM)”—where new OM configurations generate an almost infinite number and complexity of phenomenal qualities, patterns, and objects (Fingelkurts, 2024; Fingelkurts et al., 2019, 2020).
Adding credence to electromagnetic field theories are recent discoveries of large-scale, cerebral cortex-wide interacting spiral wave patterns of brain waves that are said to underlie complex brain dynamics and are related to cognitive processing. That the human brain exhibits rich and complex electromagnetic patterns, with brain spirals propagating across the cortex and giving rise to spatiotemporal activity dynamics with non-stationary features and having functional correlates to cognitive processing, would be consistent with their role in consciousness (Xu et al., 2023).
9.3.1. Jones’s electromagnetic fields
Philosopher Mostyn Jones gathers, explains and classifies various electromagnetic-field theories, each with its own theoretical foundation: computationalist, reductionist, dualist, realist, interactionist, epiphenomenalist, globalist, and localist. He uses three questions to classify the field theories: 1. How do minds exist relative to fields? 2. Are minds unified by global or local fields? 3. How extensively do fields and neurons interact? (Jones, 2013).
The claim is made that electromagnetic fields in the brain can solve the “binding problem,” where distinct sensory modules combine to give a unified sense of phenomenal experience—say, melding the red and roundness of a balloon into a single percept. For example, there doesn’t seem to be a single synthesizing brain area into which all visual circuits feed, nor any well-known cortical circuits that bind (unite) color and shape to form unified images. However, perceptual binding does seem to involve the synchronized firing of circuits in unified lockstep (with a temporal binding code) for specific sensory modalities (e.g., shape), but neurons in color and shape circuits don’t synchronize. Jones states that “while binding involves synchrony, binding seems to be more than synchrony,” thus giving field theories the opening to unify visual experience via a single field, not by a single brain area or by synchrony (yet synchrony does amplify field activity) (Jones, 2013).
Jones claims that evidence is mounting that unified neural electromagnetic fields interact with neuronal cells and circuits to explain correlations and divergences between synchrony, attention, convergence, and unified minds, and that the simplest explanation for the unity of minds and fields is that minds are fields (Jones, 2017). Moreover, some electromagnetic-field theorists even put qualia itself on the explanatory agenda (Jones, 2013).
Jones poses “neuroelectrical panpsychism” (NP) as “a clear, simple, testable mind–body solution” based on the conjunction of its two component theories: (i) “everything is at least minimally conscious,” and (ii) “electrical activity across separate neurons creates a unified, intelligent mind.” According to Jones, NP is bolstered by neuroelectrical activities that generate different qualia, unite them to form perceptions and emotions, and help guide brain operations. He claims, ambitiously, that “NP also addresses the hard problem of why minds accompany these neural correlates.” He offers the radical identity that “the real nature of matter-energy (beyond how it appears to sense organs) is consciousness that occupies space, exerts forces, and unites neuroelectrically to form minds.” He also has NP solving panpsychism’s combination problem “by explaining how the mind’s subject and experiences arise by electrically combining simple experiences in brains” (Jones, 2024).
9.3.2. Pockett’s conscious and non-conscious patterns
Psychologist Susan Pockett’s electromagnetic field theory of consciousness proposes that, while the claim that “conscious experiences are identical with certain electromagnetic patterns generated by the brain” has long been acknowledged, it is critical to “specify what might distinguish conscious patterns from non-conscious patterns … the 3D shape of electromagnetic fields that are conscious, as opposed to those that are not conscious.” She calls this “a testable hypothesis about the characteristics of conscious as opposed to non-conscious fields” (Pockett, 2012).
Moreover, Pockett argues that the central dogma of cognitive psychology that “consciousness is a process, not a thing” is “simply wrong.” All neural processing is unconscious, she asserts. “The illusion that some of it is conscious results largely from a failure to separate consciousness per se from a number of unconscious processes that normally accompany it—most particularly focal attention. Conscious sensory experiences are not processes at all. They are things: specifically, spatial electromagnetic (EM) patterns, which are presently generated only by ongoing unconscious processing at certain times and places in the mammalian brain, but which in principle could be generated by hardware rather than wetware” (Pockett, 2017).
9.3.3. McFadden’s conscious electromagnetic information theory
Molecular geneticist Johnjoe McFadden proposes conscious electromagnetic information (CEMI) field theory as an explanation of consciousness. His central claim is that “conventional theories of consciousness (ToCs) that assume the substrate of consciousness is the brain’s neuronal matter fail to account for fundamental features of consciousness, such as the binding problem,” and he posits that the substrate of consciousness is best accounted by the brain’s well-known electromagnetic (EM) field (McFadden, 2023).
Electromagnetic field theories of consciousness (EMF-ToCs) were first proposed in the early 2000s primarily to account for the experimental discovery that synchronous neuronal firing was a strong neural correlate of consciousness (NCC) (McFadden, 2002). While McFadden has EMF-ToCs gaining increasing support, he recognizes that “they remain controversial and are often ignored by neurobiologists and philosophers and passed over in most published reviews of consciousness.” In his own review, McFadden examines EMF-ToCs against established criteria for distinguishing between competing ToCs and argues that “they [EMF-ToCs] outperform all conventional ToCs and provide novel insights into the nature of consciousness as well as a feasible route toward building artificial consciousnesses” (McFadden, 2023).
McFadden references the neurophysiology of working memory in support of CEMI theory. He states that “although the exact neurons (the neural ensemble) maintaining a given memory in working memory varies from trial to trial, what is known as representational drift, stability of working memory emerges at the level of the brain’s electric fields as detected by EEG.” This means, he argues, that “since working memory is considered to be, essentially, conscious memory,” consciousness “resides in the brain’s electromagnetic fields rather than in its neurons, acting as the brain’s global workspace.” He asserts that “the higher level of correlation between the contents of working memory and the brain’s EM fields, rather than the state of the brain’s matter-based neurons, is a considerable challenge to all neural-ToCs” (McFadden, 2023).
McFadden positions CEMI field theory (or EMF-ToCs) as providing “an objective criterion for distinguishing conscious from non-conscious EM fields. This arises from the requirement that, to be reportably conscious, a system must be able to generate (rather than merely transmit) thoughts as gestalt (integrated) information—our thoughts—that can be communicated to the outside world via a motor system” (McFadden, 2023).
In distinguishing CEMI field theory from Integrated Information Theory (12), McFadden argues that “nearly all examples of so-called ‘integrated information’, including neuronal information processing and conventional computing, are only temporally integrated in the sense that outputs are correlated with multiple inputs: the information integration is implemented in time, rather than space, and thereby cannot correspond to physically integrated information.” He stresses that “only energy fields are capable of integrating information in space” and he defines CEMI field theory whereby “consciousness is physically integrated, and causally active, [with] information encoded in the brain’s global electromagnetic (EM) field.” Moreover, he posits that “consciousness implements algorithms in space, rather than time, within the brain’s EM field,” and he describes CEMI field theory as “a scientific dualism that is rooted in the difference between matter and energy, rather than matter and spirit” (McFadden, 2020).
9.3.4. Ephaptic coupling
An ephaptic coupling theory of consciousness leverages the idea that neurons, being electrogenic, produce electric fields, which, if sufficiently strong and precisely placed, can influence the electrical excitability of neighboring neurons near-instantaneously (Chen, 2020). Assuming that ephaptic coupling occurs broadly in the brain, it could support, or even help constitute, an electromagnetic field theory of consciousness.
Experiments show that a neural network can generate “sustained self-propagating waves by ephaptic coupling, suggesting a novel propagation mechanism for neural activity under normal physiological conditions.” There is clear evidence that “slow periodic activity in the longitudinal hippocampal slice can propagate without chemical synaptic transmission or gap junctions, but can generate electric fields which in turn activate neighboring cells.” These results “support the hypothesis that endogenous electric fields, previously thought to be too small to trigger neural activity, play a significant role in the self-propagation of slow periodic activity in the hippocampus” (Chiang et al., 2019).
Ephaptic coupling of cortical neurons, independent of synapses, has been demonstrated by stimulating and recording from rat cortical pyramidal neurons in slices. Results showed that extracellular fields, despite their small size, “could strongly entrain action potentials, particularly for slow (<8 Hz) fluctuations of the extracellular field,” indicating that “endogenous brain activity can causally affect neural function through field effects under physiological conditions” (Anastassiou et al., 2011).
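The entrainment result can be illustrated with a minimal leaky integrate-and-fire sketch (my own toy model with made-up parameters, not the protocol of Anastassiou et al.): adding a weak, slow sinusoidal “extracellular field” term to the neuron’s input causes its spikes to cluster at a preferred phase of that field.

    # Toy leaky integrate-and-fire neuron entrained by a weak, slow (4 Hz) field.
    import math

    dt, tau, v_th = 0.001, 0.02, 1.0          # time step (s), membrane constant, threshold
    drive, field_amp, field_hz = 52.0, 3.0, 4.0

    v, spikes = 0.0, []
    for step in range(4000):                  # 4 s of simulated time
        t = step * dt
        field = field_amp * math.sin(2 * math.pi * field_hz * t)   # weak slow field
        v += dt * (-(v / tau) + drive + field)                     # leaky integration
        if v >= v_th:
            spikes.append(t)
            v = 0.0                                                # reset after a spike

    # Spikes should cluster at a preferred phase of the slow field (entrainment).
    phases = [(field_hz * t) % 1.0 for t in spikes]
    print(f"{len(spikes)} spikes; mean field phase at spike = {sum(phases) / len(phases):.2f}")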
Mesoscopic ephaptic activity in the human brain has been explored, including its trajectory during aging, in a sample of 401 realistic human brain models from healthy subjects aged 16–83. “Results reveal that ephaptic coupling … significantly decreases with age, with higher involvement of sensorimotor regions and medial brain structures. This study suggests that by providing the means for fast and direct interaction between neurons, ephaptic modulation may contribute to the complexity of human function for cognition and behavior” (Ruffini et al., 2020).
9.3.5. Ambron’s local field potentials and electromagnetic waves
Biologist and pain researcher Richard Ambron suggests that understanding the specific consciousness of pain might help to understand the mechanism of consciousness in general. Pain is ideal for studying consciousness, he says, because it receives priority over all other sensations, reflecting its criticality for survival (Ambron, 2023a, 2023b; Ambron and Sinav, 2022).
Pain starts at the site of injury where damaged cells release small molecular compounds that bind to the terminals of peripheral neurons and trigger action potentials which encode information about the injury. The greater the severity of the injury, the greater the number and frequency of action potentials, and the greater the intensity of pain.
The pain pathway is well documented: from periphery to spinal cord to the thalamus, where we first become aware of the injury but do not feel the affect of onerous pain. Rather, the region for feeling the hurtfulness of pain is the anterior cingulate cortex (ACC), where input from the thalamus activates a complex neuronal circuit. Essential are the pyramidal neurons, which have a triangular cell body and a long dendrite with many branches that are vital for experiencing pain.
Because information transmitted between neurons must traverse the minuscule space between them—the synapse—axons from thalamic neurons transmit to dendrites of ACC neurons by releasing a neurotransmitter that traverses the gap, binds to the dendritic endings and triggers action potentials. When there is prolonged activity at the synapse in response to a serious injury, the synapses become “hyperresponsive” and strengthened. This strengthening, called long-term potentiation (LTP), sensitizes the synapse so that it takes fewer action potentials to cause pain. This is why even a gentle touch to the site of an injury will hurt (Ambron, 2023a, 2023b; Ambron and Sinav, 2022).
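As a back-of-the-envelope illustration (my own toy numbers, not from Ambron): if sustained activity strengthens the synaptic weight, fewer incoming action potentials are needed afterwards to push the postsynaptic drive past a notional “pain” threshold, which is the sensitization described above.

    # Toy sketch of long-term potentiation (LTP) at a sensitized synapse.
    PAIN_THRESHOLD = 10.0

    def spikes_needed(weight):
        """How many presynaptic spikes it takes to reach the pain threshold."""
        n, drive = 0, 0.0
        while drive < PAIN_THRESHOLD:
            drive += weight
            n += 1
        return n

    weight = 1.0
    print("before LTP:", spikes_needed(weight), "spikes to hurt")

    for _ in range(100):                      # prolonged activity potentiates the synapse
        weight = min(weight * 1.02, 5.0)

    print("after LTP: ", spikes_needed(weight), "spikes to hurt")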
In addition to housing circuits for pain, the ACC receives information from other brain regions. For example, inputs from the amygdala can increase the intensity of the pain due to anxiety or fear, whereas those from the nucleus accumbens can reduce the pain if the reward for bearing the pain is considered worthwhile. Thus, what we experience as pain depends on interactions among several areas of the brain.
To maintain electro-neutrality after an injury, there is an efflux of positive ions from the cell body that forms a local field potential (LFP) and creates electromagnetic (EM) waves in the extracellular space around the pyramidal neurons. In Ambron’s novel move, he posits that these EM waves now contain the information about the pain that was previously encoded in the action potentials. In other words, the pain information was transferred from action potentials to LFPs to EM waves, which could influence nearby circuits, such as those for attention.
Ambron speculates that these EM waves contribute to consciousness. Assuming information from other senses is also transformed into EM waves, it also might help solve the “binding/combination problem,” because integrating information from all the waves could explain how individual sensory inputs combine to create “a unified, coherent version of the world.” Unlike most theories of consciousness, Ambron believes his hypothesis can be tested (Ambron, 2023a, 2023b).
9.3.6. Llinas’s mindness state of oscillations
Neuroscientist Rodolfo Llinas’s theory of the “mindness state” is centered on the concept of oscillations. Many neurons possess electrical activity, manifested as oscillating variations in the minute voltages across the cell membrane. On the crests of these oscillations occur larger electrical events that are the basis for neuron-to-neuron communication. Like cicadas chirping in unison, a group of neurons oscillating in phase can resonate with a distant group of neurons. This simultaneity of neuronal activity, Llinas maintains, is the neurobiological root of cognition. Although the internal state that we call the mind is guided by the senses, it is also generated by the oscillations within the brain. Thus, in a certain sense, Llinas would say that reality is not all “out there,” but is a kind of virtual reality (Llinas, 2002, 2007).
9.3.7. Zhang’s long-distance light-speed telecommunications
Synaptic neuroscientist Ping Zhang suggests that “the long-time puzzle between brain and mind” might be solved by “a light-speed telecommunication between remote cells that are arranged in parallel.” He bases his theory on “the law of synchronization,” where “all the individuals are connected to each other rigidly (or in a light-speed momentum network), energy radiated from one individual will be propagated to and conserved in all other individuals in light speed” (Zhang, 2019).23
In explaining “how a ‘school’ of neurons in human brain behaves like a light-speed rigid network and concentrates on a task,” Zhang cites his own observation of “the traveling electrical field mediated transmission of action potentials between excitable cells with the cell-cell distance more than 10 mm (an anatomically astronomical distance in cortex).” Moreover, “when longitudinal cells are arranged in parallel separately, the action potential generated from one cell can ‘jump’ to other cells and cause all the cells to fire action potentials in concert. If two cells fire action potentials spontaneously and have their own rhythm, they tend to ‘learn’ from each other, adjust their own pace, eventually lock their phases, and ‘remember’ this common rhythm for a long while” (Zhang, 2019).
Zhang notes, “unlike synaptic neuronal network, which is a physiological transmission with the velocity of 0.2–120 m/s (synaptic delay period is not included), traveling electrical field mediated transmission … [has] the velocity of light speed.” In a cortical circuit, he says, “the synaptic elements provide delicate and precise connections; while the traveling electrical field, may provide transient, rapid, flexible rather than fixed connections to synchronize rhythmic action potentials fired from axons which are arranged in parallel and are well insulated by dielectric media.”
How does “this invisible ‘tele’ bridge-linked synchronization or harmony” work? According to Zhang, neural action potentials in human brain circuits produce clusters of traveling electrical fields. Those with similar frequency tend to be synchronized. Integration, imagination, remembering, creating, etc. require considerable energy, and if these processes are simply synchronizations between different brain regions, the energy conserving property of sync facilitates performing these mental activities.
Having worked on synaptic transmission for 20 years, Zhang muses: “Glutamate receptors, for instance, are found in both human and crayfish synapses. Human receptors are not any ‘smarter’ than those of crayfish.” It would be very narrow minded, he says, “to study human synapses, which evolved from those of squid and crayfish, hoping to find a magic thinking molecule.” If there is no super-highway (light speed) above the traditional synaptic networks, he concludes, “I just cannot imagine how people can be an intelligent life-form” (Zhang, 2019).
9.4. Computational and Informational Theories
Computational and Informational Theories feature advanced computational structures, resonance systems, complex adaptive systems, information-theory models, and mathematical models, all of which are held, in whole or in part, as theories of consciousness.
9.4.1. Computational theories
Computational theories of mind developed organically as the processing power of computers expanded exponentially to enable the emulation of mind-like capabilities such as memory, knowledge structure, perception, decision-making, problem solving, reasoning and linguistic comprehension (especially with the advent of human-like large language models like ChatGPT). The growing field of cognitive science owes its development to computational theories (Rescorla, 2020).
There is a reciprocal, recursive, positive-feedback relationship as computational theories of mind seek both to enhance the power and scope of computing and to advance understanding of how the human mind actually works. Classical computational theories of mind, which exemplify functionalism (9.1.3), are based on algorithms, which are routines of systematic, step-by-step instructions, and on Turing machines, which are abstract models of idealized computers with unlimited memory and time that process one operation at a time (with super-fast but not unlimited speed).
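To make the classical picture concrete, here is a minimal Python sketch of a Turing-style machine; the rule table, which simply appends a '1' to a unary string, is a hypothetical example rather than anything drawn from the cited sources.

```python
# A minimal sketch, assuming a toy rule table of my own (not from any cited source):
# a Turing-style machine as a dictionary mapping (state, symbol) to (write, move, next state).
def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Hypothetical rules: scan right over a unary number and append one '1' (i.e., add 1).
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(rules, "111"))      # -> "1111"
```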
Artificial intelligence adds logic, seeking to automate reasoning—deductive at first, then inductive and higher-order forms. Neural networks, with a connectionism construct, were a step-function advance. For example, chess computers have reigned supreme since 1997, when Deep Blue defeated the world chess champion, Garry Kasparov. But whereas that process has been literally massive brute-force calculation—hundreds of millions of “nodes” per second (a “node” is a chess position with its evaluation and history)—recent advances in algorithmic theory are dramatically improving capabilities. The implications extend well beyond chess.
Philosopher-futurist Nick Bostrom espouses a computational theory of consciousness, which is consistent with his view that there is a distinct possibility that our world and universe, our total state of affairs, is a computer simulation (Bostrom, 2003, 2006). The logic is almost a tautology: A computer simulation would require, by definition, that our consciousness, and the consciousnesses of all sentient creatures, would be, ipso facto, computational consciousness. Of course, Bostrom does not argue that we are living in a simulation, so his computationalism as a theory of consciousness is motivated by other factors, including computational neuroscience. In fact, one could make the case that the arrow of causal explanation points in the reverse direction: Consciousness as computational would need to be a condition precedent, necessary but not sufficient, for the simulation argument to be coherent.
Computer/AI scientist James Reggia explains that efforts to create computational models of consciousness have been driven by two main motivations: “to develop a better scientific understanding of the nature of human/animal consciousness and to produce machines that genuinely exhibit conscious awareness.” He offers three conclusions: “(1) computational modeling has become an effective and accepted methodology for the scientific study of consciousness; (2) existing computational models have successfully captured a number of neurobiological, cognitive, and behavioral correlates of conscious information processing as machine simulations; and (3) no existing approach to artificial consciousness has presented a compelling demonstration of phenomenal machine consciousness, or even clear evidence that artificial phenomenal consciousness will eventually be possible” (Reggia, 2013).
Computer scientist Kenneth Steiglitz argues that all available theories of consciousness “aren’t up to the job” in that “they don’t tell me how I can know whether a particular candidate is or is not phenomenally conscious.” Moreover, he says, we will never be able to answer the question of AI consciousness—because “it is simply not possible to test for consciousness.” This presents, Steiglitz worries, dangers of two kinds: (1) damaging or even destroying our own consciousness, and (2) bringing about new consciousness that will not be treated with proper respect and may quite possibly suffer (Steiglitz, 2024).
Steiglitz states three principles of what we think we know about consciousness—the dual nature of mind and body, the dependence of mind on body, and the dependence of mind on computation—and he calls them all absurd, because “these do not follow from physics, biology, or logic.” He muses, “I wish I had a theory to account for consciousness—but I don’t see how any theory could” (Steiglitz, 2024).
Philosophy-savvy attorney Andrew Hartford proposes an EP (Eternal Past) Conjecture such that “If there ever is something there always was something, because no-thing comes from Nothing,” and that “the always existor exists before all time, process or computation.” What follows, he says, is that while “it remains to be seen whether artificial consciousness is in the domain of all possibilities, we should not presume that we will necessarily build computational consciousness” (Hartford, 2014).
The mildly dismissive critique is that the computational theory of mind follows the historical trend of analogizing the mind to “the science of the day.”24
9.4.2. Grossberg’s adaptive resonance theory
To computational neuroscientist Stephen Grossberg, “all conscious states are resonant states.” The conscious brain is the resonant brain where attentive consciousness regulates actions that interact with learning, recognition, and prediction (Grossberg, 2019). Grossberg’s idea is that the mind is an activity, not a thing, a verb not a noun—it’s what you do, not what you have or use. His theoretical foundation is “Adaptive Resonance Theory” (ART), a cognitive and neural concept of how the brain autonomously learns to consciously attend, learn, categorize, recognize, and predict objects and events in a changing world (Grossberg, 2013). Central to ART’s predictive power is its ability to carry out fast, incremental, and stable unsupervised and supervised learning in response to external events.
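The resonance-and-reset logic at ART's heart can be conveyed by a drastically simplified sketch (a hypothetical rendering, not Grossberg's full model), in which an input joins a stored category only when the bottom-up/top-down match exceeds a vigilance threshold.

```python
# A drastically simplified sketch of ART-style matching (an illustrative assumption,
# not Grossberg's full model): an input resonates with a stored category only if the
# bottom-up/top-down match exceeds a vigilance threshold; otherwise a new category forms.
def art_learn(patterns, vigilance=0.6):
    categories = []                                   # stored top-down expectations
    for p in patterns:
        for cat in categories:
            overlap = sum(min(a, b) for a, b in zip(cat, p)) / sum(p)
            if overlap >= vigilance:                  # resonance: refine the category
                cat[:] = [min(a, b) for a, b in zip(cat, p)]
                break
        else:                                         # mismatch ("reset"): new category
            categories.append(list(p))
    return categories

print(art_learn([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]]))
# -> [[1, 1, 0, 0], [0, 0, 1, 1]]: the second input resonates with, and refines, the first category
```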
ART specifies mechanistic links in advanced brains that connect processes regulating conscious attention, seeing, and knowing, with those regulating looking and reaching. Consciousness thus enables learning, expectation, attention, resonance, and synchrony during both unsupervised and supervised learning. These mechanistic links arise from basic properties of brain design principles such as complementary computing, hierarchical resolution of uncertainty, and adaptive resonance. These principles, recursively, require conscious states to mark perceptual and cognitive representations that are complete, context sensitive, and stable enough to control effective actions (Grossberg, 2019).
Foundational to Grossberg’s way of thinking is the idea that all biological processes, notably our brains, self-organize, and that all cellular systems illustrate variations of a universal developmental code. All these processes are regulated using physically different instantiations of mechanistically similar laws of short-term memory or activation, and long-term memory or learned memory, that are conserved across species, including in our brains (Grossberg, 2021).
Resonance in the brain comes about via bottom-up patterns interacting with learned top-down expectations, leading to a persistent resonant state that can also lead to conscious awareness when it includes feature-selective cells that represent qualia. In this way, Grossberg uses ART to explain many mind and brain data about how humans consciously see, hear, feel, and know things (Grossberg, 2023).
At the risk of oversimplification, Grossberg’s unified theory of mind has three “laws” of consciousness: (i) All conscious states are resonant states; (ii) only resonant states with feature-based representations can become conscious; (iii) multiple resonant states can resonate together. He believes that the varieties of brain resonances and the conscious experiences that they support make progress towards solving the hard problem of consciousness (Grossberg, 2017).
9.4.3. Complex adaptive systems models
A complex adaptive system (CAS) is a dynamic network of interactions whose collective behavior may not be predictable from its component behaviors and that can “adapt” or alter its individual and collective behavior, creating novelties. A CAS works, broadly, via kinds of mutation and self-organizing principles related to change-initiating events at different levels of its organizational structure (from micro to collective), motivated in a loose sense by kinds of rules or tropisms (Complex Adaptive System, 2023).
The application of CAS to consciousness can be argued from two perspectives. First, because the brain is a classic CAS in that it is the most complex system in the known universe—the brain has roughly (order of magnitude) 100 billion neurons and one quadrillion (10^15) connections—with constant adaptations and emergences of novel functions or activities, and because consciousness is the output of the brain, therefore consciousness is a CAS.
Second, characteristics of consciousness per se are characteristics of a CAS: interactions are non-linear and chaotic in that small changes in inputs can cause large changes in outputs (e.g., minor physical or psychological stimuli can trigger major behavioral responses); histories are relevant for current and future evolution of the system; thresholds are critical for initiating new actions; interactions can be recursive and unpredictable; and the system is open such that boundaries may not be definable (Rose, 2022).
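The non-linearity claim can be illustrated with a standard toy example, not drawn from the cited sources: in the chaotic regime of the logistic map, inputs differing by one part in ten thousand produce widely divergent outputs.

```python
# A minimal sketch (an illustrative assumption, not from the cited sources): in the
# chaotic logistic map, two inputs differing by one part in ten thousand diverge widely,
# the kind of non-linear sensitivity the CAS view attributes to the brain.
def logistic_trajectory(x, r=3.9, steps=40):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_trajectory(0.5000)
b = logistic_trajectory(0.5001)   # a "minor stimulus" difference of 0.0001
print(abs(a - b))                 # typically of order 0.1 to 1: a large change in output
```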
Understanding consciousness as an intelligent CAS may affect how we assess its impact on its environment; for example, how anthropology conceives of culture (Laughlin, 2023). Consciousness may be modeled as an intelligent CAS where intelligence means solving problems by mediating between sensory input and behavioral output. Evolution of an intelligent CAS is said to result in emergent properties.
9.4.4. Critical brain hypothesis
According to biophysicist John Beggs, the Critical Brain Hypothesis “suggests that neural networks do their best work when connections are not too weak or too strong.” This intermediate “critical” case avoids “the pitfalls of being excessively damped or amplified.” In criticality, the brain capacity for transmitting more bits of information is enhanced (Beggs, 2023).
The hypothesis posits that the brain operates optimally near the critical point of phase transitions, oscillating between subcritical, critical, and modestly supercritical conditions. “The brain is always teetering between two phases, or modes, of activity,” Beggs explains; “a random phase, where it is mostly inactive, and an ordered phase, where it is overactive and on the verge of a seizure.” The hypothesis predicts, he says, that “between these phases, at a sweet spot known as the critical point, the brain has a perfect balance of variety and structure and can produce the most complex and information-rich activity patterns. This state allows the brain to optimize multiple information processing tasks, from carrying out computations to transmitting and storing information, all at the same time” (Beggs, 2023).
The Critical Brain Hypothesis traces its origin to physicist Per Bak, who suggests that “the brain exhibits ‘self-organized criticality,’ tuning to its critical point automatically. Its exquisitely ordered complexity and thinking ability arise spontaneously … from the disordered electrical activity of neurons.” Founding his ideas on statistical mechanics, Bak hypothesizes that, “like a sandpile, the network balances at its critical point, with electrical activity following a power law. So when a neuron fires, this can trigger an ‘avalanche’ of firing by connected neurons, and smaller avalanches occur more frequently than larger ones” (Ouellette, 2018).
The same sense of a critical brain being “just right,” Beggs says, also explains why information storage, which is driven by the activation of groups of neurons called assemblies, can be optimized. “In a subcritical network, the connections are so weak that very few neurons are coupled together, so only a few small assemblies can form. In a supercritical network, the connections are so strong that almost all neurons are coupled together, which allows only one large assembly. In a critical network, the connections are strong enough for many moderately sized groups of neurons to couple, yet weak enough to prevent them from all coalescing into one giant assembly. This balance leads to the largest number of stable assemblies, maximizing information storage” (Beggs, 2023).
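One minimal way to see the subcritical/critical/supercritical distinction is a toy branching process (my own simplifying assumptions, not Beggs's or Bak's actual models) in which each firing unit triggers, on average, sigma further firings.

```python
# A toy branching-process sketch (my own simplifying assumptions, not Beggs's or Bak's
# models): each firing unit triggers on average `sigma` further firings; sigma < 1 is
# subcritical, sigma = 1 critical, sigma > 1 supercritical.
import random

def avalanche_size(sigma, max_size=5000):
    active, size = 1, 0
    while active and size < max_size:
        size += active
        # each active unit makes two contact attempts, each succeeding with probability sigma/2
        active = sum(random.random() < sigma / 2 for _ in range(2 * active))
    return size

for sigma in (0.5, 1.0, 1.5):
    sizes = [avalanche_size(sigma) for _ in range(500)]
    # mean size: ~2 when subcritical, broadly spread near criticality, near the cap when supercritical
    print(sigma, sum(sizes) / len(sizes))
```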
Beggs claims that “experiments both on isolated networks of neurons and in intact brains have upheld many of these predictions” derived from networks operating near the critical point, especially in the cortex of different species, including humans. For example, it is possible to disrupt the critical point. “When humans are sleep deprived, their brains become supercritical, although a good night’s sleep can move them back toward the critical point.” It thus appears, he suggests, that “brains naturally incline themselves to operate near the critical point, perhaps just as the body keeps blood pressure, temperature and heart rate in a healthy range despite changes to the environment” (Beggs, 2023).
Two challenges are identified: (i) how is criticality maintained or “fine-tuned” in a biological environment (Ouellette, 2018), and (ii) “distinguishing between the apparent criticality of random noise and the true criticality of collective interactions among neurons” (Beggs, 2023).
9.4.5. Pribram’s holonomic brain theory
Neurosurgeon/neuroscientist Karl Pribram’s Holonomic Brain Theory is the novel idea that human consciousness comes about via quantum effects in or between brain cells such that the brain acts as a holographic storage network (building on theories of holograms formulated by Dennis Gabor). (“Holonomic” refers to representations in a Hilbert phase space defined by both spectral and space-time coordinates.) (Holonomic brain theory, 2023).
Holograms are three-dimensional images encoded on two-dimensional surfaces and Pribram’s claim is that this counterintuitive capacity is fundamental in explaining consciousness. (There is precedent in that the holographic principle in quantum cosmology describes black hole entropy and information, with applications in string theory and quantum gravity [Holographic principle, 2024].)
Holograms are generated from patterns of interference produced by superimposed wavefronts, created by split beams of coherent radiation (i.e., lasers) that are recorded and later reconstructed. A prime characteristic is that every part of the stored information is distributed over the entire hologram. Even if most parts of the hologram are damaged, as long as any part of the hologram is large enough to contain the interference pattern, that part can recreate the entirety of the stored image (though if the surviving piece is too small, the reconstruction will be noisy and blurred).
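A rough computational analogue of this distributed storage, offered only as an illustrative assumption and not as Pribram's model, is Fourier-domain storage: keep just a fragment of a pattern's transform and the inverse transform still returns the whole pattern, only blurred.

```python
# A rough computational analogue (an illustrative assumption, not Pribram's model):
# store a pattern as its Fourier "interference pattern," discard most of it, and the
# inverse transform still returns the whole pattern, only blurred.
import numpy as np

x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
image = (x**2 + y**2 < 0.5).astype(float)        # a bright disc as the stored pattern

hologram = np.fft.fftshift(np.fft.fft2(image))   # frequency-domain "plate"

mask = np.zeros_like(hologram)
mask[24:40, 24:40] = 1                           # keep only a small central piece of the "plate"
recovered = np.fft.ifft2(np.fft.ifftshift(hologram * mask)).real

# correlation stays high: the whole disc is recoverable from the fragment, at lower fidelity
print(np.corrcoef(image.ravel(), recovered.ravel())[0, 1])
```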
The application of holographic models to consciousness was inspired by this non-locality of information storage within the hologram. It was Karl Pribram who first noted the similarities between an optical hologram and memory storage in the human brain, extrapolating what psychologist Karl Lashley had discovered about the wide distribution of memory in the cerebral cortex of rats following diverse surgical lesions. Pribram had worked with Lashley on Lashley’s engram experiments, which sought to determine exact locations of specific memories in primate brains by making small lesions. The surprising result was that these targeted extirpations had little effect on memory. In contrast, removing large areas of cortex caused multiple serious deficits in memory and cognitive function. The conclusion was a milestone in neuroscience: Memories are not stored in a single circuit or exact location, but are spread over the entirety of a neural network. Thus, according to Holonomic Brain Theory, memories are stored in holographic-like fashion within certain general regions, but stored non-locally within those regions. This enables the brain to maintain function and memory even after it is damaged. (This can explain why some children retain normal intelligence when large portions of their brains—in some cases, half—are removed.) (Holonomic brain theory, 2023).
More fundamentally, Holonomic Brain Theory conjectures that consciousness is formed by quantum events within or between neurons. This early theory of quantum consciousness, which Pribram developed initially with physicist David Bohm, combines quantum biology with holographic storage. Pribram suggests these processes involve electric oscillations in the brain’s fine-fibered dendritic webs, which differ from the commonly accepted action potentials along axons and traversing synapses. These oscillations are waves and create wave interference patterns in which memory is encoded such that a piece of a long-term memory is similarly distributed over a dendritic arbor. The remarkable result is that each part of the dendritic network contains all the information stored over the entire network—a mechanism that maps well onto laser-generated holograms. Thus, Holonomic Brain Theory is said to enable distinctive features of consciousness, including the fast associative memory that connects different pieces of stored information and the non-locality of memory storage (a specific memory is not stored in a single location; there is no dedicated group or circuit of specific neurons) (Holonomic brain theory, 2023).
Although Holonomic Brain Theory has not come to threaten mainstream neuroscience, it has intriguing features that should be explored. I don’t hold it against the theory that it has stimulated unusual and creative speculations; for example, holographic duality and the physics of consciousness (Awret, 2022); holographic principle of mind and the evolution of consciousness (Germine, 2018); and quantum hologram theory of consciousness as a framework for altered states of consciousness research (Valverde et al., 2022). In fact, for a theory to have a shot at explaining consciousness, if it does not stimulate strange ideas, it probably doesn’t have the disruptive firepower that is surely required.
For example, physicist Uziel Awret’s dual-aspect information theory of consciousness—holographic-duality—is motivated by certain anti-physicalist problem intuitions associated with representational content and spatial location and attempts to provide these with a topic neutral, consciousness-independent explanation—which, he says “is ‘hard’ enough to make a philosophical difference and yet ‘easy’ enough to be approached scientifically.” This is achieved by, “among other things, showing that it is possible to conceive of physical scenarios that protect physicalism from the conceivability argument without needing to explain all the other anti-physicalist problem intuitions.” Awret argues that “abstract algorithms are not enough to solve this problem and that a more radical ‘computation’ that is inspired by physics and that can be realized in ‘strange metals’ may be needed” (Awret, 2022).
9.4.6. Doyle’s experience recorder and reproducer
“Information Philosopher” Bob Doyle proposes the “Experience Recorder and Reproducer (ERR)” as an information model for the mind. He says that the mind, like software, is immaterial information; a human being “is not a machine, the brain is not a computer, and the mind is not processing digital information.” His proposal is that “a minimal primitive mind would need only to ‘play back’ past experiences that resemble any part of current experience,” because “remembering past experiences has obvious relevance (survival value) for an organism.” However, beyond its survival value, “the ERR evokes the epistemological ‘meaning’ of information perceived in that it may be found in the past experiences that are reproduced by the ERR, when stimulated by a new perception that resembles past experiences in some way” (Doyle, n.d.b).
Without prior similar experience, new perceptions will be “meaningless.” A conscious being is constantly recording information about its perceptions of the external world and most importantly for ERR, it is simultaneously recording its feelings. Experiential data such as sights, sounds, smells, tastes, and tactile sensations are recorded in a sequence in association with emotional states, such as pleasure and pain, fear and comfort levels, etc. This means that when the experiences are reproduced (played back in a temporal sequence), the accompanying emotions are once again felt, in synchronization. The capability of reproducing experiences is critical to learning from past experiences, so as to make them guides for action in future experiences.
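The playback idea can be sketched in toy form (a hypothetical model, not Doyle's own implementation): experiences are stored as sensory feature vectors paired with emotional tags, and a new perception 're-presents' the most similar stored episode along with its emotion.

```python
# A hypothetical toy (not Doyle's own implementation): experiences stored as sensory
# feature vectors paired with emotional tags; a new perception "re-presents" the most
# similar stored episode together with its emotion.
from math import dist

experiences = []   # each entry: (sensory feature vector, emotional tag)

def record(features, emotion):
    experiences.append((features, emotion))

def reproduce(current_features):
    # play back the stored experience closest to the current perception
    return min(experiences, key=lambda e: dist(e[0], current_features))

record([0.9, 0.1], "comfort")   # e.g., warm and quiet
record([0.1, 0.9], "fear")      # e.g., cold and loud
print(reproduce([0.8, 0.2]))    # -> ([0.9, 0.1], 'comfort'): the emotion is felt again
```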
The ERR biological model has information stored in “neurons that have been wired together.” (Neuroscientist Donald Hebb said that “neurons that fire together wire together.”) The stored information does not get recalled or retrieved (as computers do) to create a representation that can be viewed. Doyle prefers to call the reproduction a “re-presentation” in that the ERR is simply presenting or “re-presenting” the original experience in all parts of the conscious mind connected by the neural assembly. Humans are conscious of our experiences because they are recorded in (and reproduced on demand from) the information structures in our brains. Mental information houses the content of an individual (Doyle, n.d.b).
ERR, Doyle says, also solves the “binding problem,” the unification of experience, because the sensory components are bound together when initially stored in the ERR (together with the accompanying emotion). They remain bound on playback. “They do not have to be assembled together by an algorithmic scheme.”
Consciousness, Doyle says, can be defined in information terms as a property of an entity (usually a living thing but can also include computers and artificial intelligence) that reacts appropriately to the information (and particularly to changes in the information) in its environment. In the context of information philosophy, Doyle posits that the Experience Recorder and Reproducer can provide us with “information consciousness.”
The treatment of information is said to link the physical and the phenomenal. Wherever there is a phenomenal state, it realizes an information state, which is also realized in the cognitive system of the brain. Conversely, for at least some physically realized information spaces, whenever an information state in that space is realized physically, it is also realized phenomenally. This leads Doyle to suppose that “this double life of information spaces corresponds to a duality at a deep level.” He even suggests that this “double realization” of information is the key to the fundamental connection between physical processes and conscious experience. If so, Doyle concludes, we might develop a truly fundamental theory of consciousness. And it may just be that information itself is fundamental (Doyle, n.d.b).
9.4.7. Informational realism and emergent information theory
Philosopher/theologian/mathematician William Dembski argues that “informational realism,” understood properly, can “dissolve the mind-body problem.” Informational realism “asserts that the ability to exchange information is the defining feature of reality, of what it means, at the most fundamental level, for any entity to be real.” It does not deny, he says, the existence of things (i.e., entities or substances). Rather, it defines things as “their capacity for communicating or exchanging information with other things,” such that “things make their reality felt by communicating or exchanging information.” This means that information is “the relational glue that holds reality together” and “thus assumes primacy in informational realism” (Dembski, 2021, 2023).
A key move in dissolving the mind-body problem, according to Dembski, is to substitute information for perception under an informational realism framework, thereby giving the mind direct access to fundamental properties (9.8.10). Moreover, he says, informational realism is “able to preserve a common-sense realism that idealism has always struggled to preserve” because all things simply communicate information to their “immediate surroundings, which then ramifies through the whole of reality, reality being an informationally connected whole” (Dembski, 2021, 2023).
Engineering professor Jaime Cardenas-Garcia links consciousness with “infoautopoiesis” (i.e., the process of self-production of information) and seeks to “demystify” both. Infoautopoiesis, he says, “allows a human organism-in-its-environment to uncover the bountifulness of matter and/or energy as expressions of their environmental spatial/temporal motion/change, i.e., as information or Batesonian differences which make a difference.” Thus, “individuated, internal, inaccessible, semantic information is the essence of consciousness,” and neither self-produced information nor consciousness is “a fundamental quantity of the Universe” (Cardenas-Garcia, 2023).
Independent researcher Daniel Boyd presents Emergent Information Theory (EIT) to bridge the mind-body gap by considering biological and technological information systems as a possible mechanism of “non-material mind” (as defined in an informational context) influencing the physical body. EIT uses the term “information” as exemplified by computer binary “values.” While associated with a physical state (e.g., a magnetic polarity), they are distinct from it. The system design allows the “value” to be deduced from the state. However, being composed of neither matter nor energy, the value itself, as defined, cannot interact with or be detected by any device. Yet it is these values that underlie the computer’s function. EIT proposes that brain function is based on comparable primitive information associated with neuronal states (Boyd, 2020).
These basic units of information are of no use individually. In computers they are combined to form hierarchical levels of organization—bytes, subroutines and programs—which cannot be observed, but can be deduced using the coding systems used to create them. Each level has properties that do not exist in underlying levels: the “emergence” referred to in EIT. Brain functions are based on equivalent hierarchical, emergent phenomena which are equally non-detectable. This applies not just to consciousness, but to all functional brain phenomena. That, in an organic system, this generic approach can result in the remarkable properties of consciousness should come as no surprise. Based on the top-down causation that is common in strongly emergent systems, EIT provides a mechanism for the influence of non-material mind over the physical body (Boyd, 2020).
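The bit-to-value point can be made with a one-line example (an illustrative assumption, not Boyd's formalism): the same physical bit pattern yields different 'values' depending on the coding system used to read it.

```python
# An illustrative assumption (not Boyd's formalism): one and the same physical bit
# pattern yields different "values" depending on the coding system used to read it.
bits = "01000001"                      # stands in for eight magnetic polarities
as_integer = int(bits, 2)              # read under an unsigned-integer coding: 65
as_character = chr(int(bits, 2))       # read under an ASCII coding: 'A'
print(as_integer, as_character)        # one physical state, two deduced values
```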
9.4.8. Mathematical theories
Mathematics can apply to consciousness in two ways. The first approach involves methods, models and simulations that are increasingly rigorous and sophisticated, describing and explaining essential features and mechanisms of conscious experience, primarily its structure, level, content and dynamics (Labh, 2024). Here mathematics supports various headline theories. Integrated Information Theory (12) relies on a mathematical determination of consciousness. Friston’s Free-Energy Principle formalizes and optimizes the representational capacities of physical/brain systems (9.5.4). Hoffman’s Conscious Realism (Idealism) utilizes a mathematical formulation of consciousness (16.5).
The second approach posits deep claims that mathematical structures form the foundations of consciousness, much as mathematical structures form the foundations of quantum mechanics. In a sense, the first way, clear and common, is epistemological; the second, highly speculative, is ontological.
As for mathematics as ontology, Max Tegmark has the entire universe, all reality, as a fundamental mathematical structure (Tegmark, 2014a). Roger Penrose has the Platonic world of perfect forms as primary such that physical and mental worlds are its “shadows.” We “perceive mathematical truths directly,” Penrose says, in that “whenever the mind perceives a mathematical idea, it makes contact with Plato’s world of mathematical concepts” (Penrose, 1996). Both visions, certainly controversial, would be consistent with mathematical constructions of consciousness, suggesting that consciousness is “made of” mathematics.
Initiatives to link the abstract formal entities of mathematics, on the one hand, and the concreta of conscious experience, on the other hand, have proliferated, the challenge being to “represent conscious experience in terms of mathematical spaces and structures.” But what is “a mathematical structure of conscious experience?” (Kleiner and Ludwig, 2023).
Mathematicians Johannes Kleiner and Tim Ludwig seek a general method to identify and investigate structures of conscious experience—quality, qualia or phenomenal spaces—to perhaps serve as a framework to unify approaches from different fields. Their prime criterion is that for a mathematical structure to be literally of conscious experience, rather than merely a tool to describe conscious experience, “there must be something in conscious experience that corresponds to that structure.” In simple terms, they say, such a mathematical structure consists of two building blocks: the first brings in one or more sets called the ‘domains’ of the structure, where the elements of the sets correspond to aspects of conscious experiences; the second comprises relations or functions defined on the domains. The authors claim that this definition does not rely on any specific conception or aspects of conscious experience. Rather, it can work with any theory of consciousness in that “every conscious experience comes with a set of aspects,” whether holistic, irreducible approaches to qualia and phenomenal properties, or theories built on atomistic conceptions of consciousness such as multiple mind modules (Kleiner and Ludwig, 2023).
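A hypothetical toy instance of those two building blocks (not Kleiner and Ludwig's own formalism) might look like this: a domain of experiential aspects plus a relation defined on it.

```python
# A hypothetical toy instance (not Kleiner and Ludwig's formalism): a "structure of
# conscious experience" as a domain of experiential aspects plus a relation on it.
domain = {"red", "orange", "blue"}                   # aspects of colour experience

def more_similar(a, b, c):
    """Toy three-place relation: a is experienced as more similar to b than to c."""
    hue = {"red": 0, "orange": 30, "blue": 240}      # assumed hue angles for illustration
    return abs(hue[a] - hue[b]) < abs(hue[a] - hue[c])

print(more_similar("red", "orange", "blue"))         # True
```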
Mathematician Yucong Duan proposes a mathematically based “bug” theory of consciousness in that, with respect to consciousness, a bug is “not only a limitation in information processing, but also an illusion that leads human beings to create abstract and complete semantics and use them as tools” (Duan and Gong, 2024a). He calls mathematics “the language of consciousness,” required to find patterns, periodicity, relevance and other characteristics in consciousness, to reveal causal relationships and interactions among them, and to understand the structure, dynamics and functions of consciousness. For example, “dynamic system theory can describe the evolution track and stable state of consciousness, and information theory can quantify the information flow and entropy value in consciousness, thus revealing the dynamic characteristics and information processing mechanism of consciousness.” Moreover, Fourier transform can “decompose complex consciousness signals into simple frequency components and reveal the laws and mechanisms of consciousness activities through frequency domain analysis, filtering and time-frequency analysis”—combining to yield “new perspectives of consciousness regularities.” Duan does recognize the limitations of mathematics (Duan and Gong, 2024b).
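For the Fourier-analytic claim in particular, a simple sketch (the "consciousness signal" here is just a synthetic two-tone waveform, an assumption for illustration) shows the kind of decomposition Duan has in mind.

```python
# An illustrative assumption (the "consciousness signal" is just a synthetic waveform):
# Fourier analysis decomposes a composite signal into its simple frequency components.
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)   # 10 Hz + 40 Hz mixture

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
print(freqs[spectrum.argsort()[-2:]])   # recovers the two dominant components: [40. 10.]
```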
9.5. Homeostatic and affective theories
Homeostatic and Affective Theories encompass predictive, homeostatic, free-energy (active inference), equilibrium, and emotion-related accounts, which have become increasingly recognized as important theories of consciousness.
9.5.1. Predictive theories (Top-down)
Top-down predictive theories highlight brain-based, central-to-peripheral, efferent influence on sensory organs more than peripheral-to-central, afferent sensory perceptions—and while top-down predictive models may or may not be themselves explanations of consciousness, they give insight into the nature of consciousness and its evolutionary development. Top-down is a fundamental principle of how brains work and it would be surprising if it were not relevant for understanding consciousness.
According to Anil Seth and Tim Bayne, there are two general approaches to understanding consciousness via the centrality of top-down signaling in shaping and enabling conscious perception. The first comprises reentry theories, in which recurrent, reentrant signaling in some sense constitutes conscious perception—and thus reentry theories are theories of consciousness per se. The second approach, broadly described as predictive processing, starts instead from a foundational principle of how the brain works—prediction as a core principle underlying perception, action, and cognition—and therefore does not directly specify theories of consciousness. Nonetheless, the “core claim of reentry theory and predictive processing (PP) is that conscious mental states are associated with top-down signaling (reentry, thick arrows) that, for PP, convey predictions about the causes of sensory signals (thin arrows signify bottom-up prediction errors), so that continuous minimization of prediction errors implements an approximation to Bayesian inference” (Seth and Bayne, 2022).
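The core claim can be illustrated with a toy calculation, an assumption for illustration rather than Seth and Bayne's formal treatment: repeatedly nudging an estimate to reduce precision-weighted prediction errors converges on exactly the precision-weighted, Bayesian combination of prior expectation and sensory evidence.

```python
# A toy calculation (an assumption for illustration, not Seth and Bayne's treatment):
# descending on precision-weighted prediction errors converges on the Bayesian
# (precision-weighted) combination of prior expectation and sensory evidence.
prior_mean, prior_precision = 0.0, 1.0          # top-down expectation
sensory_input, sensory_precision = 2.0, 4.0     # bottom-up signal (more reliable here)

estimate, rate = prior_mean, 0.05
for _ in range(2000):
    error_prior = estimate - prior_mean         # deviation from the expectation
    error_sense = estimate - sensory_input      # bottom-up prediction error
    # gradient step on the precision-weighted squared errors (a stand-in for free energy)
    estimate -= rate * (prior_precision * error_prior + sensory_precision * error_sense)

bayes_mean = (prior_precision * prior_mean + sensory_precision * sensory_input) / (
    prior_precision + sensory_precision)
print(round(estimate, 3), round(bayes_mean, 3))   # both 1.6
```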
Cognitive philosopher Andy Clark puts it succinctly: Rather than your brain perceiving reality passively, your brain actively predicts it. Your brain is a powerful, dynamic prediction engine, mediating our experience of both body and world. From the most mundane experiences to the most sublime, reality as we know it is the complex synthesis of predictive expectation and sensory information, “sculpting” all human experience. Thus, the extraordinary explanatory power of the predictive brain (Clark, 2023).
Leveraging the work of Karl Friston (9.5.4), Clark states that in predictive processing, perception is structured around prediction, which he suggests is the fundamental operating principle of the brain (Musser, 2023a, 2023b). While the rudimentary evolutionary driver of the predictive brain is simply survival, staying alive, the emergence of consciousness can be seen as facilitating the predictive capabilities in terms of awareness, responsiveness, and conformity to external realities.
Clark stresses that even though biological brains are increasingly cast as “prediction machines” this should not constrain us “to embrace a brain-bound ‘neurocentric’ vision of the mind.” The mind, such views mistakenly suggest, consists entirely of the skull-bound activity of the predictive brain, an inference from predictive brains to skull-bound minds that Clark rejects. Predictive brains, he argues, can be apt participants in larger cognitive circuits. The path is thus cleared for a new synthesis in which predictive brains act as entry-points for extended minds (9.7.1), and embodiment and action contribute constitutively to knowing contact with the world (Clark, 2017a, 2017b).
Cognitive psychologist Richard Gregory pioneered conceptualizing the brain as actively shaping perception, not the assumed inert receptacle of sensory signals. (Gregory himself credited Hermann von Helmholtz with realizing that “perception is not just a passive acceptance of stimuli, but an active process involving memory and other internal processes.”) Gregory’s key insight was that “the process whereby the brain puts together a coherent view of the outside world is analogous to the way in which the sciences build up their picture of the world, by a kind of hypothetico-deductive process.” Although timescales differ, Gregory advocated the guiding principle that perception shares processes with the scientific method. In particular, Gregory incorporated “explicitly Bayesian concepts” into our understanding of how sensory data is combined with pre-existing beliefs (“priors”) to modify and mold perceptions. Consciousness evolved, according to Gregory, to enable rapid comparisons between real-world events and counterfactual simulations in order to make optimum decisions (Gregory, 2023).
Neuroscientist Rodolfo Llinas traces the evolution of the “mindness state” to enable predictive interactions between mobile creatures and their environment, arguing that the nervous system evolved to allow active movement in animals. Because a creature must anticipate the outcome of each movement on the basis of incoming sensory data, the capacity to predict is most likely the ultimate brain function. Llinas even suggests that Self is the centralization of prediction (Llinas, 2002).
9.5.2. Seth’s “beast machine” theory
Neuroscientist Anil Seth extends top-down predictive theories with his neuroscience-informed “beast machine” theory that conscious experiences can be understood as forms of brain-based perceptual prediction, within the general framework of predictive processing accounts of brain perception, cognition, and action. More specifically, his theory proposes that phenomenological properties of conscious experiences can be explained by computational aspects of different forms of perceptual prediction. A key instance of this is in the ability to account for differences between experiences of the world and experiences of the self. The theory also proposes that the predictive machinery underlying consciousness arose via a fundamental biological imperative to regulate bodily physiology, namely, to stay alive. We experience the world around us, and ourselves within it, with, through, and because of our living bodies (Seth, 2021a, 2021b).
Seth says that our conscious experiences of the world and the self are forms of brain-based prediction—which he labels “controlled hallucinations.”25 He asks, how does the brain transform what are inherently ambiguous, electrical sensory signals into a coherent perceptual world full of objects, people, and places? The key idea is that the brain is a “prediction machine,” and that what we see, hear, and feel is nothing more than the brain’s “best guess” of the causes of its sensory inputs. Because perceptual experience is determined by the content of the (top-down) predictions, and not by the (bottom-up) sensory signals, we never experience sensory signals themselves, we only ever experience interpretations of them. Thus, “what we actually perceive is a top-down, inside-out neuronal fantasy that is reined in by reality, not a transparent window onto whatever that reality may be.” Taking this idea seriously and seeking its implications, Seth proposes that the contents of consciousness are a kind of waking dream—the “controlled hallucination”—that is both more than and less than whatever the real world really is. He offers slyly the insight that “you could even say that we’re all hallucinating all the time. It’s just that when we agree about our hallucinations, that’s what we call reality” (Seth, 2021a, 2021b).
9.5.3. Damasio’s homeostatic feelings and emergence of consciousness
Neuroscientist Antonio Damasio’s perspective on consciousness is distinctive in a variety of ways. Crucially, the root process behind consciousness, he argues, is that of feelings related to the interior of complex organisms endowed with nervous systems. These feelings, which Damasio calls “homeostatic” to distinguish them from the feelings of emotions, continuously represent the ongoing state of the life of an organism in terms of how close or how far that state is from ideal, that ideal being homeostasis (Damasio and Damasio, 2023, 2024; Damasio, 1999).
Neuroanatomically, the homeostatic feeling representations are achieved by the interoceptive system which collects signals—via interoceptive axons in peripheral nerves and spinal and brainstem nuclei—from the entire spectrum of viscera, from smooth musculature to end organs. Interoception is distinct from exteroception in a number of ways, but quite importantly because it pertains to an internal, animated landscape. Feelings represent evolving, active states but the “describer”—the nervous system—happens to be located inside the organism being “described”, with the consequence that the describer and described can interact. Moreover, the interaction is facilitated by the fact that the interoceptive nervous system is especially open, given its primitive nature, which includes neurons without myelin, whose axons are open to receiving signals at any point in their course, away from synapses (Damasio and Damasio, 2023, 2024).
Other reasons why homeostatic feelings are distinct, according to Damasio, include (1) the fact that they are naturally, spontaneously, informative; and (2) that the information they provide is used to adjust the life process such that it may best correspond to ideal conditions. In brief, homeostatic feelings are regulatory because their spontaneous consciousness is used to achieve homeostasis and guarantee the continuation of life.
Homeostatic feelings are the natural source of experiences. When they are combined with images generated by exteroceptive channels such as vision, they produce subjectivity.
Thus, according to Damasio, homeostatic feelings are the core phenomena of consciousness. They are spontaneously conscious processes of hybrid nature, combining mental features and bodily features. Their presence informs the rest of the mind, e.g., the images that correspond to current perceptions or to perceptions retrieved from memory, that (1) life is ongoing inside a specific body/organism, and that (2) the life process is (or is not) operating within a range conducive to the continuation of life. Feelings offer spontaneous guidance on this specific issue and are thus a key to life regulation and survival (Damasio and Damasio, 2023, 2024).
Damasio recounts that “the approach to the nature and physiology of consciousness has taken two distinct paths. One of those paths, by far the most frequent, has tied consciousness to cognitive processes, mainly exteroception, and most prominently, to vision. The other path has related consciousness to affective processes, specifically to feeling. ‘The cognitive path’ has seen consciousness as a complex and late arrival in biological history. It culminates in cognition writ large, e.g. exteroceptive processes, memory, reasoning, symbolic languages, and creativity. The ‘affect path’ has located the emergence of consciousness far earlier in biological history, and interoceptive processes provide the key” (Damasio and Damasio, 2021b, 2023, 2024; Damasio, 2019).
In making his argument, Damasio explains “how and why consciousness entered biology through the avenue of affect. The feelings that translate fundamental homeostatic states—hunger, thirst, malaise, pain, well-being, desire—offer organisms a new layer of life regulation because of their inherent conscious status. Consciousness spontaneously delivers valuable knowledge into the decision-making mental space. Consciousness allows organisms to act deliberately and knowingly, rather than acting or failing to act, automatically and blindly. Consciousness is what makes deliberate life regulation possible. The intrinsic conscious nature of feelings is their grace and was their passport into natural selection. Their conscious nature is not a neutral trait.” Damasio assumes that “the emergence of consciousness occurred when homeostatic feelings first arose, there and then, and naturally provided knowledge concerning life” (Damasio, 2019, 2021a).
9.5.4. Friston’s free-energy principle and active inference
Theoretical neuroscientist Karl Friston conceptualizes consciousness as the natural outcome of his “free-energy principle for action and perception (active inference),” which stresses the primacy, in all organisms, of minimizing the difference between perceptual expectations (required for homeostasis) and real-time sensory inputs (Friston et al., 2017). In this mechanism, human brains seek to minimize the difference—reduce the “surprise,” as it were—by generating internal models that predict the external world.
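A minimal sketch of the idea, a toy constructed under simple assumptions rather than Friston's mathematics, shows the two routes to minimizing the mismatch: perception updates the internal model toward the data, while action changes the world so that sensations fulfill the homeostatic prediction.

```python
# A toy sketch under simple assumptions (not Friston's mathematics): "surprise"
# (prediction error) can be reduced by perception, updating the internal model toward
# the data, and by action, changing the world so sensations match the prediction.
expected_temp = 37.0      # the homeostatic set point the model predicts
world_temp = 33.0         # the actual state of the world
belief = 35.0             # the current internal estimate

for _ in range(100):
    sensed = world_temp
    belief += 0.1 * (sensed - belief)             # perception: revise belief toward data
    world_temp += 0.1 * (expected_temp - sensed)  # action: warm up to fulfill the prediction

print(round(world_temp, 2), round(belief, 2))     # both approach 37.0: surprise minimized
```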
As a physicist and psychiatrist, Friston says: “I find it difficult to engage with conversations about consciousness. My biggest gripe is that the philosophers and cognitive scientists who tend to pose the questions often assume that the mind is a thing, whose existence can be identified by the attributes it has or the purposes it fulfills.” The deeper question, he asks, is “what sorts of processes give rise to the notion (or illusion) that something exists?” Thus, Friston treats consciousness “as a process to be understood, not as a thing to be defined.” Simply put, his argument is that “consciousness is nothing more and nothing less than a natural process such as evolution or the weather” (Friston, 2017).
Friston’s perspective on process leads him to “an elegant, if rather deflationary, story about why the mind exists.” It focuses on “inference,” which Friston characterizes as “actually quite close to a theory of everything—including evolution, consciousness, and life itself.” We are processes, and processes can only reason towards what is “out there” based on “sparse samples of the world”; hence, the criticality of inference. This view, Friston says, “dissolves familiar dialectics between mind and matter, self and world, and representationalism (we depict reality as it is) and emergentism (reality comes into being through our abductive encounters with the world)” (Friston, 2017).
But how did inert matter ever begin the processes that led to consciousness? It starts with complex systems that are self-organizing because they possess “attractors,” which are “cycles of mutually reinforcing states that allow processes to achieve a point of stability, not by losing energy until they stop, but through what’s known as dynamic equilibrium. An intuitive example is homeostasis ….” (Friston, 2017).
It’s at this point that Friston focuses on inference, “the process of figuring out the best principle or hypothesis that explains the observed states of that system we call ‘the world.’” Every time you have a new experience, he says, “you engage in some kind of inference to try to fit what’s happening into a familiar pattern, or to revise your internal states so as to take account of this new fact.”
That’s why attractors are so crucial, he stresses, “because an attracting state has a low surprise and high evidence.” A failure to minimize surprise means “the system will decay into surprising, unfamiliar states” – which would threaten its existence. “Attractors are the product of processes engaging in inference to summon themselves into being,” he says. “In other words, attractors are the foundation of what it means to be alive” (Friston, 2017).
Friston applies the same thinking to consciousness and suggests that consciousness must also be a process of inference. “Conscious processing is about inferring the causes of sensory states, and thereby navigating the world to elude surprises … This sort of internalization of the causal structure of the world ‘out there’ reflects the fact that to predict one’s own states you must have an internal model of how such sensations are generated” (Friston, 2017).
Learning as well as inference, Friston continues, relies on minimizing the brain’s free energy. “Cortical responses can be seen as the brain’s attempt to minimize the free energy induced by a stimulus and thereby encode the most likely cause of that stimulus. Similarly, learning emerges from changes in synaptic efficacy that minimize the free energy, averaged over all stimuli encountered” (Friston, 2005).
In short, consciousness is the evolved mechanism for simulating scenarios of the world. It is the internal emergent model that monitors and minimizes free energy, the difference between internal perceptual expectations and the real-time sensory input that reflects the external world. Friston proposes that “the mind comes into being when self-evidencing has a temporal thickness or counterfactual depth, which grounds the inferences it can make about the consequences of future actions.” Consciousness, he contends, “is nothing grander than inference about my future” (Friston, 2017).
Friston’s consciousness as active inference leads to its metaphysical stamp as “Markovian monism,” which, he says, rests upon the information geometry induced in any system whose internal states can be distinguished from external states—such that there are “the (intrinsic) information geometry of the probabilistic evolution of internal states and a separate (extrinsic) information geometry of probabilistic beliefs about external states that are parameterized by internal states.” Friston calls these information geometries intrinsic (i.e., mechanical, or state-based) and extrinsic (i.e., Markovian, or belief-based). He suggests the mathematics may help frame the origins of consciousness (Friston et al., 2020).
Several theories of consciousness build on the free-energy paradigm, including Solms’s Affect (9.5.5), Carhart-Harris’s Entropic Brain (9.5.6), and the Projective Consciousness Model (9.5.11).
9.5.5. Solms’s affect as the hidden spring of consciousness
Neuroscientist and psychoanalyst Mark Solms applies Friston’s free energy principle to the hard problem of consciousness. He identifies the elemental form of consciousness as affect and locates its physiological mechanism (an extended form of homeostasis) in the upper brainstem. Free energy minimization (in unpredicted contexts) is operationalized “where decreases and increases in expected uncertainty are felt as pleasure and unpleasure, respectively.” He offers reasons “why such existential imperatives feel like something to and for an organism” (Solms, 2019).
A physicalist, Solms argues that the brain does not “produce” or “cause” consciousness. “Formulating the relationship between the brain and the mind in causal terms,” he says, “makes the hard problem harder than it needs to be. The brain does not produce consciousness in the sense that the liver produces bile, and physiological processes do not cause—or become or turn into—mental experiences through some curious metaphysical transformation” (Solms, 2019).
Objectivity and subjectivity are observational perspectives, he says, not causes and effects. “Neurophysiological events can no more produce psychological events than lightning can produce thunder. They are dual manifestations of a single underlying process. The cause of both lightning and thunder is electrical discharge, the lawful action of which explains them both. Physiological and psychological phenomena must likewise be reduced to unitary causes, not to one another. This is merely a restatement of a well-known position on the mind–body problem: that of dual-aspect monism”26 (Solms, 2021b) (6).
Given the centrality of affect in Solms’s theory of consciousness, he must argue that emotion is the most efficient mechanism, perhaps the only effective mechanism, to optimize survival. His reasoning applies the free energy principle (9.5.4) in neurobiology such that feelings would uniquely enable humans to monitor interactions with unpredictable environments and modify their behaviors accordingly.
Solms explains that “complex organisms have multiple needs, each of which must be met in its own right, and, indeed, on a context-dependent basis, they cannot be reduced to a common denominator. For example … fear trumps sleepiness in some contexts but not in others.” So, he says, the needs of complex organisms like ourselves must be coded as categorical variables, which are distinguished qualitatively, not quantitatively. Thirst feels different from sleepiness feels different from separation distress feels different from fear, etc., such that their combined optimized resolution must be computed in a context-dependent fashion, which would lead to “excessively complex calculations,” a “combinatorial explosion.” In terms of time spent and energy expended, the invention of affect, emotion, feeling is a much more efficient algorithm. Moreover, Solms adds, since “the needs of complex organisms which can act differentially, in flexible ways, in variable contexts, are ‘color-coded’ or ‘flavored,’ this provides at least one mechanistic imperative for qualia” (Solms, 2021a, 2021b).
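A back-of-the-envelope sketch (a hypothetical illustration, not Solms's own algorithm) conveys the efficiency argument: exhaustively resolving N needs, each with K context-dependent priority levels, requires scanning K to the power N joint states, whereas a single felt "most urgent deviation wins" rule is linear in the number of needs.

```python
# A hypothetical illustration (not Solms's own algorithm): exhaustively resolving N needs,
# each with K context-dependent priority levels, scans K**N joint states, whereas a felt
# "most urgent deviation wins" rule is linear in the number of needs.
needs = {"thirst": 0.7, "sleepiness": 0.4, "fear": 0.9}   # felt deviations from homeostasis
K = 10                                                    # assumed priority levels per need

joint_states = K ** len(needs)            # 1000 already, and it explodes as needs grow
most_urgent = max(needs, key=needs.get)   # the qualitatively "loudest" feeling
print(joint_states, most_urgent)          # -> 1000 fear
```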
Solms seeks to demystify consciousness by showing that “cortical functioning is accompanied by consciousness if and only if it is ‘enabled’ by the reticular activating system of the upper brainstem. Damage to just two cubic millimeters of this primitive tissue reliably obliterates consciousness as a whole.” He rejects arguments that the reticular activating system generates only the quantitative “level” of consciousness (consciousness in a waking/comatose sense) and not its qualitative “contents” (consciousness as experience). This is affect, Solms says, and it is supported by “overwhelming” evidence. Therefore, since cortical consciousness is contingent upon brainstem consciousness, and since brainstem consciousness is affective, Solms concludes that “affect is the foundational form of consciousness. Sentient subjectivity (in its elementary form) is literally constituted by affect” (Solms, 2021a).27
Solms distinguishes between information processing models in cognitive science, which seem to lack question-askers, and self-organizing systems, which are obliged to ask questions—“their very survival depends upon it. They must chronically ask: ‘What will happen to my free energy if I do that?’ The answers they receive determine their confidence in the current prediction.” This is why Solms states “not all information processing (‘integrated’ or otherwise) is conscious; sentience appears to be a property of only some information processing systems with very specific properties, namely those systems that must ask questions of their surrounding world in relation to their existential needs” (Solms, 2021a).
In summary, Solms claims that the functional mechanism of consciousness can be reduced to physical laws, such as Friston’s free-energy law, among others. These laws, he says, “are no less capable of explaining how and why proactively resisting entropy (i.e., avoiding oblivion) feels like something to the organism, for the organism, than other scientific laws are capable of explaining other natural things. Consciousness is part of nature, and is mathematically tractable.”
As a corollary, with respect to Crick’s research program on the neural correlates of consciousness, Solms declares that there can be no objects of consciousness (e.g. visual ones) in the absence of a subject of consciousness. To Solms, the subject of consciousness is literally constituted by affect (Solms, 2021a).
Regarding AI consciousness, Solms posits that if his theory is correct, “then, in principle, an artificially conscious self-organizing system can be engineered.” The creation of an artificial consciousness would be, he says, “the ultimate test of any claim to have solved the hard problem.” But, he warns, “we must proceed with extreme caution.”
9.5.6. Carhart-Harris’s entropic brain hypothesis
Psychopharmacologist Robin Carhart-Harris proposes the Entropic Brain Hypothesis in which the entropy of spontaneous brain activity indexes the informational richness of conscious states (within upper and lower limits, after which consciousness may be lost). A leading psychedelic researcher, Carhart-Harris reports that the entropy of brain activity is elevated in the psychedelic state, and there is evidence for greater brain “criticality” under psychedelics. (“Criticality … is the property of being poised at a ‘critical’ point in a transition zone between order and disorder where certain phenomena such as power-law scaling appear.”) He argues that “heightened brain criticality enables the brain to be more sensitive to intrinsic and extrinsic perturbations which may translate as a heightened susceptibility to ‘set’ and ‘setting.’” Measures of brain entropy, he suggests, can inform the treatment of psychiatric and neurological conditions such as depression and disorders of consciousness (Carhart-Harris, 2018).
The “entropy” in the Entropic Brain Hypothesis is defined as “a dimensionless quantity that is used for measuring uncertainty about the state of a system but it can also imply physical qualities, where high entropy is synonymous with high disorder.” Entropy is then applied in “the context of states of consciousness and their associated neurodynamics, with a particular focus on the psychedelic state … [which] is considered an exemplar of a primitive or primary state of consciousness that preceded the development of modern, adult, human, normal waking consciousness.” Based on neuroimaging data with psilocybin, a classic psychedelic drug, Carhart-Harris argues that “the defining feature of ‘primary states’ is elevated entropy in certain aspects of brain function, such as the repertoire of functional connectivity motifs that form and fragment across time. Indeed, since there is a greater repertoire of connectivity motifs in the psychedelic state than in normal waking consciousness, this implies that primary states may exhibit ‘criticality’” (Carhart-Harris, 2018).
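As a rough illustration of the kind of measure involved (the motif labels and counts below are invented for illustration, not Carhart-Harris's data), one can estimate the Shannon entropy of the repertoire of connectivity motifs observed across time windows and compare conditions:

```python
# Hypothetical sketch: entropy of the repertoire of connectivity "motifs"
# observed across time windows, in the spirit of the Entropic Brain Hypothesis.
import math
from collections import Counter

def repertoire_entropy(motif_sequence):
    """Shannon entropy (bits) of the empirical distribution of motif labels."""
    counts = Counter(motif_sequence)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Normal waking consciousness: a few motifs dominate (entropy suppressed).
waking = ["A"] * 40 + ["B"] * 15 + ["C"] * 5
# Psychedelic ("primary") state: a broader, flatter repertoire of motifs.
psychedelic = ["A"] * 12 + ["B"] * 12 + ["C"] * 12 + ["D"] * 12 + ["E"] * 12

print(repertoire_entropy(waking))       # lower
print(repertoire_entropy(psychedelic))  # higher, consistent with elevated entropy
```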
Significantly, “if primary states are critical, then this suggests that entropy is suppressed in normal waking consciousness, meaning that the brain operates just below criticality.” This leads to the idea that “entropy suppression furnishes normal waking consciousness with a constrained quality and associated metacognitive functions, including reality-testing and self-awareness.” Carhart-Harris and colleagues also propose that “entry into primary states depends on a collapse of the normally highly organized activity within the default-mode network” (DMN—a set of regions more active during passive tasks than tasks requiring focused external attention, Buckner, 2013),28 thus maintaining the brain’s homeostasis and “a decoupling between the DMN and the medial temporal lobes (which are normally significantly coupled)” (Carhart-Harris et al., 2014).
Increased entropy in spontaneous neural activity is one of the most notable neurophysiological signatures of psychedelics and is said to be relevant to the psychedelic experience, mediating both acute alterations in consciousness and long-term effects. While overall entropy increases, entropy changes are not uniform across the brain: entropy increases in all regions, but the larger effect is localized in visuo-occipital regions. At the whole-brain level, this reconfiguration is related closely to the topological properties of the brain’s anatomical connectivity (Herzog et al., 2023). (For how psychedelic experiences and mechanisms may or may not inform theories of consciousness, see 18.21.)
Computational neuroscientist Gustavo Deco uses the concept of equilibrium in physics to explore consciousness. Since a physical system is in equilibrium when in its most stable state, the question is how close to equilibrium are the electrical states of the brain while people perform different tasks? Using a sophisticated mathematical theorem to analyze neuroimaging data, “they found that the brain is closer to a state of equilibrium when people are gambling than when they are cooperating,” suggesting that “there are many shades of consciousness” (Callaghan, 2024).
9.5.7. Buzsáki’s neural syntax and self-caused rhythms
Neuroscientist György Buzsáki presents the brain as “a foretelling device that interacts with its environment through action and the examination of action’s consequence,” restructuring its internal rhythms in the process. In his telling, “our brains are initially filled with nonsense patterns, all of which are gibberish until grounded by action-based interactions. By matching these nonsense ‘words’ to the outcomes of action, they acquire meaning.” Once brain circuits are “calibrated” or trained by action and experience, “the brain can disengage from its sensors and actuators, and examine ‘what happens if’ scenarios by peeking into its own computation, a process that we refer to as cognition.” Buzsáki stresses that “our brain is not an information-absorbing coding device, as it is often portrayed, but a venture-seeking explorer constantly controlling the body to test hypotheses.” Our brain does not process information; rather, he says, our brain “creates it” (Buzsáki, 2019).
Buzsáki focuses on “neural syntax”, which segments neural information and organizes it via diverse brain rhythms to generate and support cognitive functions. One expression is the “hierarchical organization of brain rhythms of different frequencies and their cross-frequency coupling.” Buzsáki shows that “in the absence of changing environmental signals, cortical circuits continuously generate self-organized cell assembly sequences”—clusters of neurons acting as focused functional units—that are the neuronal assembly basis of cognitive functions. He also shows “how skewed distribution of firing rates supports robustness, sensitivity, plasticity, and stability in neuronal networks” (Buzsáki, Wikipedia).
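One standard way such cross-frequency coupling is quantified (a generic sketch, not Buzsáki's own analysis pipeline) is to extract the phase of a slow rhythm and the amplitude envelope of a fast rhythm and compute a modulation index:

```python
# Generic sketch of phase-amplitude coupling: how the phase of a slow rhythm
# (e.g., theta) modulates the amplitude of a faster rhythm (e.g., gamma),
# one expression of hierarchical "cross-frequency coupling".
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                    # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 8 * t)            # 8 Hz "theta"
gamma = (1 + theta) * np.sin(2 * np.pi * 60 * t) * 0.3    # 60 Hz "gamma", amplitude tied to theta
signal = theta + gamma + 0.1 * np.random.randn(len(t))    # synthetic LFP-like trace

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(signal, 4, 12)))        # theta phase
amplitude = np.abs(hilbert(bandpass(signal, 40, 80)))     # gamma amplitude envelope

# Mean-vector-length modulation index (Canolty-style): near zero if uncoupled.
mi = np.abs(np.mean(amplitude * np.exp(1j * phase))) / np.mean(amplitude)
print(round(mi, 3))   # clearly above zero for this coupled synthetic signal
```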
Buzsáki’s foundational idea is that “spontaneous neuron activity, far from being mere noise, is actually the source of our cognitive abilities,” and that “self-emerged oscillatory timing is the brain’s fundamental organizer of neuronal information.” The perpetual interactions among these multiple network oscillators, he says, “keep cortical systems in a highly sensitive ‘metastable’ state and provide energy-efficient synchronizing mechanisms via weak links” (Buzsáki, 2011).
Taking these ideas together, Buzsáki arrives at what he calls his “inside-out” view. “The brain,” he says, “is a self-organized system with preexisting connectivity and dynamics whose main job is to generate actions and to examine and predict the consequences of those actions.” Brains draw from and interact with the world, rather than detect it. “In other words, rather than the world filling in the brain with information, the brain fills out the world with action.” Flipping the brain–world relationship, Buzsáki posits that brain activity is fundamentally self-caused (Gomez-Marin, 2021).
Brain rhythms are Buzsáki’s key mechanisms. “Spanning several orders of magnitude, and organized in nested frequency bands, these fascinating neuronal oscillations support neuronal syntax.” As Buzsáki puts it, “activity travels in neuronal space, much like waves in a pond.” Cognition is merely internalized action, and it arises when the brain disengages from the world. He thus recasts “the cognitive into the neural by means of action as a kind of ultimate cognitive source. It is action all the way in, all the way out, and all the way down” (Gomez-Marin, 2021).
Still, Buzsáki must explain how endogenously produced neural syntax acquires its meaning, and to do so, he reaches outside the brain. Semantics are selected by the world, he stresses, and here’s how it works. External inputs, sequences of perceptions that constitute wholes or fragments of meaning, engage and modify self-organized neural patterns so that they become meaningful and useful (broadly). Similarly, Buzsáki has learning as a matching process. “Existing, spontaneous neural patterns are selected rather than constructed anew. The brain is not a blank slate but one filled with syntactically correct gibberish that progressively acquires meaning via the pruning of the arbitrariness that the world affords” (Gomez-Marin, 2021).
Relatedly, Buzsáki and Tingley explain cognition, including memory, “by exaptation and expansion of the circuits and algorithms serving bodily functions.” They explain how “Regulation and protection of metabolic and energetic processes require time-evolving brain computations enabling the organism to prepare for altered future states.” The exaptation of such circuits, according to the authors, was likely exploited for exploration of the organism’s niche, giving rise to “a cognitive map,” which in turn “allows for mental travel into the past (memory) and the future (planning)” (Buzsáki and Tingley, 2023). Moreover, Buzsáki’s “two-stage model of memory trace consolidation, demonstrates how neocortex-mediated information during learning transiently modifies hippocampal networks, followed by reactivation and consolidation of these memory traces during sharp wave-ripple patterns of sleep” (Buzsáki, 2024).
While explaining cognition is not the same thing as explaining phenomenal consciousness, Buzsáki’s theory of cognition could develop into a theory of consciousness in its own right. Moreover, it can help select among other theories of consciousness, as it aligns more consistently with some Neurobiological Theories (9.2), such as Brain Circuits and Cycles (9.2.11); possibly Electromagnetic Field Theories (9.3); and certainly Homeostatic and Affective Theories (9.5), especially Top-Down Predictive Theories (9.5.1).
9.5.8. Deacon’s self-organized constraint and emergence of self
Neuroanthropologist Terrence Deacon, whose research combines human evolutionary biology and neuroscience, asserts that the origins of life and the origins of consciousness both depend on the emergence of self: the organizational core of both is a form of self-creating, self-sustaining, constraint-generating processes (Deacon, 2011a, 2011b).
Deacon characterizes consciousness as “a matter of constraint,” focusing as much on what isn’t there as on what is. He goes beyond complexity theory, non-linear dynamics and information theory to what he calls “emergent dynamics,” a theory in which constraints can become their own causes, that is, become capable of maintaining and producing themselves. This, he says, is essentially what life accomplishes. But to do this, life must persistently recreate its capacity for self-creation. What Deacon means by self “is an intrinsic tendency to maintain a distinctive integrity against the ravages of increasing entropy as well as disturbances imposed by the surroundings” (Deacon, 2011a, 2011b).
The nexus to consciousness is the emergence of self: “this kind of reciprocal, self-organizing logic (but embodied in neural signal dynamics) must form the core of the conscious self.” Conceiving of neuronal processes in emergent dynamical terms, Deacon reframes aspects of mental life; for example, the experience of emotion relates to the role metabolism plays in regulating the brain’s self-organizing dynamics, which are triggered whenever a system is perturbed away from its equilibrium, a process that shifts availability of energy in the brain. Thus, Deacon suggests that “conscious arousal is not located in any one place, but constantly shifts from region to region with changes in demand” (Deacon, 2011a, 2011b).
9.5.9. Pereira’s sentience
Neuroscientist Antonio Pereira, Jr. hypothesizes that cognitive consciousness depends on sentience. He distinguishes “two modalities of consciousness: sentience, in the sense of being awake and capable of feeling (e.g., basic sensations of hunger, thirst, pain) and, second, cognitive consciousness, i.e. thinking and elaborating on linguistic and imagery representations.” He proposes that the physiological correlates of sentience are “the systems underpinning the dynamic control of biochemical homeostasis,” while the correlates of cognitive consciousness are “patterns of bioelectrical activity in neural networks.” His primary point is that “cognitive consciousness depends on sentience, but not vice versa” (Pereira, 2021).
Pereira applies his concept of sentience as a theory of consciousness to the medical sciences, especially neurology and psychiatry, for both diagnostics and therapy. This implies that “medical practice should also address the physiological correlates of sentience in the diagnostics and therapy of disorders of consciousness.” The minimal requirement, he says, “for considering a person minimally conscious is … if she can feel basic sensations such as hunger, thirst, and pain. The capacity for feeling is conceived as closely related to the capacity of dynamically controlling the physiological processes of homeostasis.”
In applying theories of consciousness to medical care, Pereira posits that higher-level capacities “such as verbal or imagery thinking, the retrieval of episodic memories, and action planning (e.g. imagining playing tennis, a technique for assessing residual consciousness in vegetative states), may not be adequate as a general standard for medical diagnosis of prolonged disorders of consciousness, since … in many cases the person may not be able to perform these tasks but still be able to consciously experience basic sensations” (Pereira, 2021).
Taking general anesthesia as an example, Pereira states that “if the main criterion is not being able to feel pain, the goal of the procedure would be broader than the loss of cognitive consciousness. In some cases, the neural correlates of cognitive representations may not be the main target of treatment, since they correspond to a high-level specific ability that is not necessary for lower-level sentient experiences, which also deserve attention for proper medical and also bioethical reasons” (Pereira, 2021).
9.5.10. Mansell’s perceptual control theory
Clinical psychologist Warren Mansell proposes Perceptual Control Theory (PCT) in which “reorganization is the process required for the adaptive modification of control systems in order to reduce the error in intrinsic systems that control essential, largely physiological, variables.” It is from this system, he says, that primary [phenomenal] consciousness emerges and “is sustained as secondary [access] consciousness through a number of processes including the control of the integration rate of novel information via exploratory behavior, attention, imagination, and by altering the mutation rate of reorganization.” Tertiary [self-awareness] consciousness arises when “internally sustained perceptual information is associated with specific symbols that form a parallel, propositional system for the use of language, logic, and other symbolic systems” (Mansell, 2022).
Mansell’s objective is to give an “integrative account of consciousness,” which “should build upon a framework of nonconscious behavior in order to explain how and why consciousness contributes to, and addresses the limitations of, nonconscious processes.” Such a theory, as noted, “should also encompass the primary (phenomenal), secondary (access), and tertiary (self-awareness) aspects of consciousness,” and “address how organisms deal with multiple, unpredictable disturbances to maintain control.” Such categories of consciousness come about, according to PCT, because of “purposiveness,” which is “the control of hierarchically organized perceptual variables via changes in output that counteract disturbances which would otherwise increase error between the current value and the reference value (goal state) of each perceptual variable” (Mansell, 2022).
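At its core, PCT describes a negative feedback loop; the following minimal sketch (my illustration, with invented gains and disturbances, not Mansell's model) shows output varying so that a perceived variable tracks its reference value despite unpredictable disturbances:

```python
# Minimal negative-feedback loop in the spirit of Perceptual Control Theory:
# output varies so that the *perceived* variable tracks a reference (goal) value
# despite unpredictable disturbances. Gains and the disturbance are invented.
import random

reference = 10.0      # goal state of the controlled perceptual variable
perception = 0.0
output = 0.0
gain = 0.5            # how strongly error drives changes in output

for step in range(50):
    disturbance = random.uniform(-2.0, 2.0)     # unpredictable environmental push
    perception = output + disturbance           # perception = own output's effects + the world
    error = reference - perception              # discrepancy from the goal state
    output += gain * error                      # change output to counteract the disturbance

print(round(perception, 2))   # hovers near the reference despite the disturbances
```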
9.5.11. Projective consciousness model
The Projective Consciousness Model (PCM) is a mathematical model of embodied consciousness that “relates phenomenology to function, showing the computational advantages of consciousness.” It is based on “the hypothesis that the spatial field of consciousness (FoC) is structured by a projective geometry and under the control of a process of active inference.” The FoC in the PCM is said to combine “multisensory evidence with prior beliefs in memory” and to frame them “by selecting points of view and perspectives according to preferences.” This “choice of projective frames governs how expectations are transformed by consciousness. Violations of expectation are encoded as free energy. Free energy minimization drives perspective taking, and controls the switch between perception, imagination and action” (Rudrauf et al., 2017).
Founding assumptions of the PCM include: consciousness as an evolved mechanism that optimizes information integration and functions as an algorithm for the maximization of resilience; relating the free energy principle (9.5.4) to perceptual inference, active inference and (embodied) conscious experience; an integrative predictive system projecting a global 3-dimensional spatial geometry to multimodal sensory information and memory traces as they access the conscious workspace; and emphasis on the embodied nature of consciousness (9.6.1), without reducing consciousness to embodiment. A pivotal idea is that embodied systems have “an evolutionary advantage of developing an integrative cognition of space in order to represent, simulate, appraise and control spatially distributed information and the consequences of actions” (Rudrauf et al., 2017).
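A toy rendering of the central idea (not the PCM's actual mathematics) treats "perspective taking" as choosing the projective frame under which the projected world best matches prior expectations, with squared prediction error standing in for free energy:

```python
# Crude sketch (my own toy, not the PCM equations): "perspective taking" as
# choosing the projective frame (viewpoint) that minimizes prediction error.
import numpy as np

def project(points, viewpoint):
    """Perspective projection of 3-D points onto a plane one unit in front of
    the viewpoint (looking along +z), via division by depth."""
    shifted = points - viewpoint
    return shifted[:, :2] / shifted[:, 2:3]

world = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0], [0.0, 1.0, 5.0]])
expected_image = np.array([[0.0, 0.0], [0.25, 0.0], [0.0, 0.25]])   # prior belief / memory

candidate_viewpoints = [np.array([0.0, 0.0, 0.0]),
                        np.array([0.0, 0.0, 1.0]),
                        np.array([2.0, 0.0, 0.0])]

def surprise(viewpoint):
    """Squared prediction error, standing in here for variational free energy."""
    return float(np.sum((project(world, viewpoint) - expected_image) ** 2))

best = min(candidate_viewpoints, key=surprise)
print(best)   # the frame under which the world best matches expectations: [0, 0, 1]
```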
Much is made of “the lived body,” because “in contrast to most contents of consciousness, the lived body is normally always present in the conscious field … a proxy for the integrity of the actual body … an anchor point for our efforts at preserving autonomy and well-being.” The lived body, therefore, is “a kind of inferential representation of the real body in physical space … a sort of virtual ‘user interface’ for the representation and control of the actual body.”
Thus, the PCM claims to account for fundamental psychological phenomena: the spatial phenomenology of subjective experience; the distinctions and integral relationships between perception, imagination and action; and the role of affective processes in intentionality. The PCM suggests that brain states becoming conscious “reflect the action of projective transformations” (Rudrauf et al., 2017).
9.5.12. Pepperell’s organization of energy
Artist and perceptual scientist Robert Pepperell suggests that while energetic activity is fundamental to all physical processes and drives biological behavior, consciousness is a specific product of the organization of energetic activity in the brain. He describes this energy, along with forces and work, as “actualized differences of motion and tension,” and believes that consciousness occurs “because there is something it is like, intrinsically”—from the intrinsic perspective of the system—“to undergo a certain organization of actualized differences in the brain” (Pepperell, 2018).
Pepperell laments that “energy receives relatively little attention in neuroscientific and psychological studies of consciousness. Leading scientific theories of consciousness do not reference it, assign it only a marginal role, or treat it as an information-theoretical quantity. If it is discussed, it is either as a substrate underpinning higher level emergent dynamics or as powering neural information processing.” He argues that “the governing principle of the brain at the neural level is not information processing but energy processing,” although the information-theoretic approach can complement the energetic approach. Pepperell puts “information in the biological context as best understood as a measure of the way energetic activity is organized, that is, its complexity or degree of differentiation and integration.” While “information theoretic techniques provide powerful tools for measuring, modeling, and mapping the organization of energetic processes,” he says, “we should not confuse the map with the territory” (Pepperell, 2018).
In comparison with mainstream frameworks of brain organization at the global or localized level, Pepperell offers, as an alternative or complementary way of thinking, an account of how the energetic activity in the brain is organized. The challenge for the model is to explain why energetic processing is associated with consciousness in the brain but not in other organs, like the liver or heart. Pepperell claims that what makes the difference is how energetic activity in the brain efficiently actuates differences of motion and tension, perhaps via dynamic recursive organization, the “appropriate reentrant intracortical activity.”
“If we are to naturalize consciousness,” Pepperell concludes, “then we must reconcile energy and the mind.” Treating the brain as a difference engine that serves “the interests of the organism is a natural approach to understanding consciousness as a physical process” (Pepperell, 2018).
9.6. Embodied and enactive theories
Embodied and Enactive Theories emphasize the importance of the body and its interaction with the environment as an integral part of what consciousness is, not only what consciousness does. The category also includes neurophenomenology, which unifies two disparate ways of studying consciousness.
9.6.1. Embodied cognition
Embodied Cognition is the concept that thought is made meaningful by the ways neural circuits are connected to the body and characterize embodied experience, and that abstract ideas and language are embodied in this way as well. While cognition and consciousness are not the same, cognitive linguist George Lakoff argues that the mind is embodied, in that even pure mentality depends on the body’s sensorimotor systems and emotions and cannot be comprehended without engaging them (Lakoff, 2007, 2012).
In their classic book on the embodied mind, Philosophy in the Flesh, Lakoff and Mark Johnson stress three points: “The mind is inherently embodied. Thought is mostly unconscious. Abstract concepts are largely metaphorical.” Much of the subject matter of philosophy, they claim, such as the nature of time, morality, causation, the mind, and the self, relies heavily on basic metaphors derived from bodily experience. Thought requires a body, they assert, “not in the trivial sense that you need a physical brain with which to think, but in the profound sense that the very structure of our thoughts comes from the nature of the body” (Lakoff and Johnson, 1999).
9.6.2. Enactivism
Enactivism is the way of thinking that posits that, to explore mental activities, one must examine living systems interacting with their environments. Cognition is characterized as embodied activity. A mind without a body would be, as it were, incoherent.
“Enaction” was the term introduced in The Embodied Mind, the 1991 book by Varela, Rosch and Thompson (Varela et al., 1991). The enactive view is that cognition develops via dynamic, bidirectional exchanges between an organism and its surroundings. It is not the case that an organism seeks optimum homeostasis in a static environment, but rather that the organism is shaping its environment, and is being shaped by its environment—actively, iteratively, continuously—all mediated by that organism’s sensorimotor processes. Thus, organisms are active agents in the world who affect the world and who are affected by the world. (Section: Hutto, 2023; Enactivism, 2024).
Enactivists would harbor no hope of understanding mentality unless it were founded on histories of such bidirectional organism-environment interactions because that’s the core concept of how minds arise and work. Organisms are self-creating, self-organizing, self-adapting, self-sustaining living creatures who regulate themselves and in doing so can change their environments, which then, iteratively, recycles the whole process.
The scientific consensus is that phenomenal consciousness evolved via stages of cognition and proto-consciousness selected by fitness-enhanced traits in challenging environments. Although focused on cognition, enactivism enriches the consciousness-generating conditions by adding interactive dynamism between the organism and the environment. (Enactment is also said to be “a genuinely metaphysical idea” and “an ontological breakthrough” in that “Something is the case if and only if it is enacted” [Werner, 2023].)
9.6.3. Varela’s neurophenomenology
Neuroscientist and philosopher Francisco Varela proposes what he calls “neurophenomenology,” which, inspired by the style of inquiry of phenomenology, seeks to articulate mutual constraints between phenomena present in experience and the correlative field of phenomena established by the cognitive sciences (Varela Legacy, 2023). He starts with one of Chalmers’s basic points: first-hand experience is an irreducible field of phenomena. He claims there is no “theoretical fix” or “extra ingredient” in nature that can possibly bridge this gap. Instead, the field of conscious phenomena requires a rigorous method and an explicit pragmatics. It is a quest, he says, to marry modern cognitive science and a disciplined approach to human experience, thereby placing himself in the lineage of the continental tradition of phenomenology (Varela, 1996).
Varela calls for gathering a research community armed with new tools to develop a science of consciousness. He claims that no piecemeal empirical correlates, nor purely theoretical principles, will do the job. He advocates turning to a systematic exploration of the only link between mind and consciousness that seems both obvious and natural: the structure of human experience itself.
Varela’s phenomenological approach starts with the irreducible nature of conscious experience. Lived experience, he says, is “where we start from and where all must link back to, like a guiding thread.” From a phenomenological standpoint, “conscious experience is quite at variance with that of mental content as it figures in the Anglo-American philosophy of mind.” He advocates examining, “beyond the spook of subjectivity, the concrete possibilities of a disciplined examination of experience that is at the very core of the phenomenological inspiration.” He repeats: “it is the re-discovery of the primacy of human experience and its direct, lived quality that is phenomenology’s foundational project” (Varela, 1996).
Varela’s key point is that by emphasizing a co-determination of both accounts—phenomenological and neurobiological—one can explore the bridges, challenges, insights and contradictions between them. This means that both domains have equal status in demanding full attention and respect for their specificity. It is quite easy, he says, to see how scientific accounts illuminate mental experience, but the reciprocal direction, from experience towards science, is what is typically ignored.
What do phenomenological accounts provide? Varela asks. “At least two main aspects of the larger picture. First, without them the firsthand quality of experience vanishes, or it becomes a mysterious riddle. Second, structural accounts provide constraints on empirical observations.” He stresses that “the study of experience is not a convenient stop on our way to a real explanation, but an active participant in its own right.” And while phenomenal experience is at an irreducible ontological level, “it retains its quality of immediacy because it plays a role in structural coherence via its intuitive contents, and thus keeps alive its direct connection to human experience, rather than pushing it into abstraction” (Varela, 1996).
This makes the whole difference, Varela argues: The “hardness” and riddle become an open-ended research program with the structure of human experience playing a central role in the scientific endeavor. “In all functionalistic accounts what is missing is not the coherent nature of the explanation but its alienation from human life. Only by putting human life back in, will that absence be erased” (Varela, 1996). (The common thread said to run through Varela’s extensive and heterogeneous body of work is “the act of distinction”—distinctions as processes, distinctions in ways of distinguishing—“the aim of which was to address and supersede the challenges inherent in the dualist [modernist] thought style, especially the infamous two-pronged problem of the bifurcation and disenchantment of nature” [Vörös, 2023].)
In the quarter century since Varela’s neurophenomenology paper was published, its research program has made some advances and encountered some tensions; for example, investigating the experience of boundaries of the self, both phenomenologically and neurobiologically. The biggest challenge remains first-person reporting and interpretation, such as subtle aspects of self-consciousness. The continuing hope is that neurophenomenology can inform the science of consciousness, that the ongoing interaction between human experience and neuroscience becomes “an act of art, a deep listening, an improvisational dance, which slowly develops into a skillful scientific dialogue” (Berkovich-Ohana et al., 2020).
9.6.4. Thompson’s mind in life
Philosopher Evan Thompson heralds “the deep continuity of life and mind.” His foundational idea is “Where there is life there is mind, and mind in its most articulated forms belongs to life,” and his organizing principle is “Life and mind share a core set of formal or organizational properties, and the formal or organizational properties distinctive of mind are an enriched version of those fundamental to life.” More precisely, he says, “the self-organizing features of mind are an enriched version of the self-organizing features of life. The self-producing or ‘autopoietic’ organization of biological life already implies cognition, and this incipient mind finds sentient expression in the self-organizing dynamics of action, perception, and emotion, as well as in the self-moving flow of time-consciousness” (Thompson, 2002; Maturana and Varela, 1980).29
From this perspective, Thompson sees mental life as bodily life and as situated in the world. The roots of mental life lie not simply in the brain, he says, “but ramify through the body and environment. Our mental lives involve our body and the world beyond the surface membrane of our organism, and therefore cannot be reduced simply to brain processes inside the head.”
With this framework, Thompson seeks to reduce (if not bridge) the so-called “explanatory gap” between consciousness and world, mind and brain, first-person subjectivity and third-person objectivity (i.e., the hard problem of consciousness). He works to achieve this (to oversimplify) by having the same kinds of processes that enable the transition from nonlife to life to enable the transition from life to mind. (I’d think he would rather eliminate the concept of “transition” altogether and consider life-mind as a unified concept—perhaps like, in cosmology, the once apparent independent dimensions of space and time now unified by a single physical concept, spacetime.)
As a pioneer of enactivism (9.6.2), Thompson posits that “the enactive approach offers important resources for making progress on the explanatory gap” by explicating “selfhood and subjectivity from the ground up by accounting for the autonomy proper to living and cognitive beings.” He extends the idea with “embodied dynamism,” a key concept that combines dynamic systems and embodied approaches to cognition. While the former reflects enactivism, the latter is the enhancement (Thompson, 2002).
According to Thompson, the central idea of the dynamic systems approach is that cognition is an intrinsically temporal phenomenon expressible in “the form of a set of evolution equations that describe how the state of the system changes over time. The collection of all possible states of the system corresponds to the system’s ‘state space’ or ‘phase space,’ and the ways that the system changes state correspond to trajectories in this space.” Dynamic-system explanations, he says, consist of “the internal and external forces that shape such trajectories as they unfold in time. Inputs are described as perturbations to the system’s intrinsic dynamics, rather than as instructions to be followed, and internal states are described as self-organized compensations triggered by perturbations, rather than as representations of external states of affairs” (Thompson, 2002).
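A generic dynamical-systems sketch (not Thompson's own model) makes this vocabulary concrete: the state evolves under intrinsic dynamics, an input arrives as a perturbation rather than an instruction, and the trajectory's return toward its attractor is the "self-organized compensation":

```python
# Generic dynamical-systems sketch: the state evolves under intrinsic dynamics;
# an input is treated as a perturbation that the system's own dynamics
# compensate for, not as an instruction it executes.
import numpy as np

def intrinsic_dynamics(state):
    """A simple attractor: the state relaxes toward the point (1, 1)."""
    return -(state - np.array([1.0, 1.0]))

dt = 0.01
state = np.array([0.0, 0.0])
trajectory = []                      # the path traced through state space

for step in range(2000):
    perturbation = np.array([3.0, -2.0]) if step == 500 else np.zeros(2)
    state = state + dt * intrinsic_dynamics(state) + perturbation   # impulse at step 500
    trajectory.append(state.copy())

print(np.round(trajectory[499], 2))   # near the attractor before the perturbation
print(np.round(trajectory[500], 2))   # knocked away by the perturbation
print(np.round(trajectory[-1], 2))    # self-organized compensation: back near (1, 1)
```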
To make real progress on the explanatory gap, Thompson says, “we need richer phenomenological accounts of the structure of experience, and we need scientific accounts of mind and life informed by these phenomenological accounts.” My aim, he says, “is not to close the explanatory gap in a reductive sense, but rather to enlarge and enrich the philosophical and scientific resources we have for addressing the gap.”
Calling on the philosophical tradition of phenomenology, inaugurated by Edmund Husserl and developed by others, primarily Maurice Merleau-Ponty, Thompson seeks to “naturalize” phenomenology by aligning its investigations with advances in biology and cognitive science and to complement science and its objectification of the world by reawakening basic experiences of the world via phenomenology. His main move is for cognitive science “to learn from the analyses of lived experience accomplished by phenomenologists …. which thus needs to be recognized and cultivated as an indispensable partner to the experimental sciences of mind and life” (Thompson, 2002).
The deeper convergence of the enactive approach and phenomenology, Thompson says, is that “both share a view of the mind as having to constitute its objects.” He stresses that “constitute” does not mean fabricate or create, but rather “to bring to awareness, to present, or to disclose.” Thus, “the mind brings things to awareness; it discloses and presents the world. Stated in a classical phenomenological way, the idea is that objects are disclosed or made available to experience in the ways they are thanks to the intentional activities of consciousness.” Thompson argues that weaving together the phenomenological and neurobiological can “bridge the gap between subjective experience and biology, which defines the aim of neurophenomenology (9.6.3), an offshoot of the enactive approach” (Thompson, 2002).
9.6.5. Frank/Gleiser/Thompson’s “The Blind Spot”
Astrophysicist Adam Frank, theoretical physicist Marcello Gleiser, and philosopher Evan Thompson elevate and promote “the primacy of consciousness” in that “There is no way to step outside consciousness and measure it against something else. Everything we investigate, including consciousness and its relation to the brain, resides within the horizon of consciousness.” Lest they be misunderstood, the authors reject any inference that “the universe, nature, or reality is essentially consciousness or is somehow made out of consciousness,” because “this does not logically follow.” Such “a speculative leap,” they say, goes beyond what we can know or establish on the basis of “consciousness as experienced from within and as an irreducible precondition of scientific knowledge.” Furthermore, “this speculative leap runs afoul” of what they call “the primacy of embodiment,” which “is as equally undeniable as the primacy of consciousness” (Frank et al., 2024, pp. 186, 188).
What now confronts us, Frank/Gleiser/Thompson say, is “a strange loop,” where “horizonal consciousness subsumes the world, including our body experienced from within, while embodiment subsumes consciousness, including awareness in its immediate intimacy.” The authors stress that “the primacy of consciousness and the primacy of embodiment enfold each other.” They call for unveiling and examining this strange loop, which normally disappears from view and is forgotten in what they call The Blind Spot. They describe the Blind Spot as “humanity’s lived experience as an inescapable part of our search for objective truth” (Frank et al., 2024, p. 189), and they seek “to reclaim the central place of human experience in the scientific enterprise by invoking the image of a ‘Blind Spot’” (Gomez-Marin, 2024). In other words, they reject the way of thinking that “we can comprehend consciousness within the framework of reductionism, physicalism, and objectivism or, failing that, by postulating a dualism of physical nature versus irreducible consciousness that we could somehow grasp outside the strange loop.” This is why they label the hard problem of consciousness an “artifact of the Blind Spot.” It is “built into blind-spot metaphysics, and not solvable in its terms” because “it fails to recognize the ineliminable primacy of consciousness in knowledge” (Frank et al., 2024, p. 192).
Frank/Gleiser/Thompson see “only a few options for trying to deal with consciousness within the confines of the blind-spot worldview,” and that “ultimately, they’re all unsatisfactory, because they never come to grips with the need to recognize the primacy of consciousness and the strange loop in which we find ourselves.” They argue that the three major options—neural correlates of consciousness (9.2.2); metaphysical bifurcation of physical reality and irreducible mental properties (whether naturalistic dualism, substance dualism or panpsychism—13, 15); and illusionism (9.1.1)—are all “within the ambit of the Blind Spot” (Frank et al., 2024, p. 196).
What Frank/Gleiser/Thompson offer is “a radically different approach beyond the Blind Spot.” They reference papers by astrophysicist Piet Hut and cognitive psychologist Roger Shepard (Hut and Shepard, 1996), and neuroscientist Francisco Varela (1996), making the case for “a major overhaul of the science of consciousness based on recognizing the primacy of experience.” They note “we inescapably use consciousness to study consciousness,” such that “unless we recover from the amnesia of experience and restore the primacy of experience in our conception of science, we’ll never be able to put the science of consciousness on a proper footing.” A science of consciousness can work, all say, only if “experience really matters” (Frank et al., 2024, p. 218).
The key, according to the authors, is “recognizing [both] the primacy of consciousness and the primacy of embodiment,” which, they claim “changes how we think about the problem of consciousness.” The problem for neuroscience “can no longer be stated as how the brain generates consciousness.” Rather, “the problem is how the brain as a perceptual object within consciousness relates to the brain as part of the embodied conditions for consciousness, including the perceptual experience of the brain as a scientific object. The problem is to relate the primacy of consciousness to the primacy of embodiment without privileging one over the other or collapsing one onto the other. The situation is inherently reflexive and self-referential: instead of simply regarding experience as something that arises from the brain, we also have to regard the brain as something that arises within experience. We are in the strange loop” (Frank et al., 2024, pp. 219–220).
Frank/Gleiser/Thompson support Varela’s neuroscience research program, “neurophenomenology” (9.6.3), based on “braiding together first-person accounts of consciousness with third-person accounts of the brain within the I-and-you experiential realm.” They advocate that phenomenology and neuroscience “become equal partners in an investigation that proceeds by creating new experiences in a new kind of scientific workshop, the neurophenomenological laboratory. First-person experiential methods for refining attention and awareness (such as meditation), together with second-person qualitative methods for interviewing individuals about the fine texture of their experience, are used to produce new experiences, which serve as touchstones for advancing phenomenology. This new phenomenology guides investigations of the brain, while investigations of the brain are used to motivate and refine phenomenology in a mutually illuminating loop” (Frank et al., 2024, pp. 219–220). The authors call neurophenomenology “probably the strongest effort so far to envision a neuroscience of consciousness beyond the Blind Spot” (Frank et al., 2024, p. 221). Consciousness, particularly human consciousness, is “an expression of nature and is a source of nature’s self-understanding.”
9.6.6. Bitbol’s radical neurophenomenology
Philosopher of science and phenomenologist Michel Bitbol promotes a “radical neurophenomenology” in which a “tangled dialectic of body and consciousness” is the “metaphysical counterpart” and whose goal is to advance Varela’s neurophenomenology project (9.6.3) of criticizing and dissolving the “hard problem” of consciousness (Bitbol, 2021a). Bitbol claims that the neurophenomenological approach to the “hard problem” is underrated and often misunderstood; indeed, “in its original version, neurophenomenology implies nothing less than a change in our own being to dispel the mere sense that there is a problem to be theoretically solved or dissolved. Neurophenomenology thus turns out to be much more radical than the enactivist kinds of dissolution” (9.6.2) (Bitbol and Antonova, 2016).
Did Varela himself have a theory to solve the hard problem? No, Varela declared (in Bitbol’s report) “only a ‘remedy’”—the point being that “there exists a stance (let’s call it the Varelian stance) in which the problem of the physical origin of primary consciousness, or pure experience, does not even arise.” The implications, according to Bitbol, are that “the nature of the ‘hard problem’ of consciousness is changed from an intellectual puzzle to an existential option.” The “constructivist content,” he says, is that “The role of ontological prejudice about what the world is made of (a prejudice that determines the very form of the ‘hard problem’ as the issue of the origin of consciousness out of a pre-existing material organization) is downplayed” (Bitbol, 2012).
Bitbol blames “the standard (physicalist) formulation of this problem” for both generating it and turning it into “a fake mystery.” But he recognizes that dissolving the hard problem is very demanding for researchers, because “it invites them to leave their position of neutral observers/thinkers, and to seek self-transformation instead.” Bitbol’s approach “leaves no room for the ‘hard problem’ in the field of discourse, and rather deflects it onto the plane of attitudes.” This runs the risk, he says, of “being either ignored or considered as a dodge” (Bitbol, 2021a).
Bitbol’s method is “a metaphysical compensation for the anti-metaphysical premise of the neurophenomenological dissolution of the ‘hard problem.’” This can be achieved, he says, by designing this alternative metaphysics “to keep the benefit of a shift from discourse to ways of being,” which is “the latent message of neurophenomenology” (Bitbol, 2021a). In its most radical version, “neurophenomenology asks researchers to suspend the quest of an objective solution to the problem of the origin of subjectivity, and clarify instead how objectification can be obtained out of the coordination of subjective experiences. It therefore invites researchers to develop their inquiry about subjective experience with the same determination as their objective inquiry.” Bitbol proposes a methodology to explore lived experience faithfully (via microphenomenological interviews retrieving or “evoking past experiences”) and thereby “addresses a set of traditional objections against introspection” (Bitbol and Petitmengin, 2017).
Bitbol gives neuroscience no privilege, priority or pride of place. “The effective primacy of lived experience should be given such prominence that every other aspect, content, achievement, distortion, and physicalist account of consciousness, is made conditional upon it.” From a (radical) phenomenological standpoint, he says, “one must not mistake objectivity for reality. Reality is what is given and manifest, whereas objectivity is what is constituted by extracting structural invariants from the given experience. Along with this phenomenological approach, an objective science is not supposed to disclose reality as it is beyond appearances, but only to circumscribe some intersubjectively recognized features of the appearing reality.” Having said that, Bitbol stresses that “neuroscientific data should not be granted a higher ontological status than phenomenological descriptions; they should not be given the power to render a compelling verdict about what is real and what is deceptive in our experience.” Thus, he sums up: “from a phenomenological standpoint, the neuro-phenomenological correlation is plainly perceived as an extension of the lived sense of embodiment, not as a sign that some naturalistic one-directional ‘fundamental dependence’ of consciousness on the bodily brain is taking place” (Bitbol, 2015).
Bitbol’s affirmative solution is to formulate a “dynamical and participatory conception of the relation between body and consciousness … with no concession to standard positions such as physicalist monism and property dualism.” Bitbol’s conception is based on Varela’s formalism of “cybernetic dialectic,” “a geometrical model of self-production,” and it is “in close agreement with Merleau-Ponty’s ‘intra-ontology’: an engaged ontological approach of what it is like to be, rather than a discipline of the contemplation of beings” (Bitbol, 2021a).
Bitbol’s approach to quantum physics complements his “radical phenomenology,” such that quantum mechanics becomes more a “symbolism of atomic measurements,” rather than “a description of atomic objects.” He supports the notion that “quantum laws do not express the nature of physical objects, but only the bounds of experimental information.” Similarly, Bitbol supports QBism, where the wave function’s probabilities are said to be, shockingly (to me), Bayesian probabilities, which means they relate to prior subjective degrees of belief about the system, paralleling some ideas in phenomenology (Bitbol, 2023).
Bitbol calls out “three features of such non-interpretational, non-committal approaches to quantum physics” that “strongly evoke the phenomenological epistemology.” These are: “their deliberately first-person stance; their suspension of judgment about a presumably external domain of objects, and subsequent redirection of attention towards the activity of constituting these objects; their perception-like conception of quantum knowledge.” Moreover, Bitbol claims that these new approaches to quantum physics go beyond phenomenological epistemology and “also make implicit use of a phenomenological ontology.” He cites Chris Fuchs’s “participatory realism” that “formulates a non-external variety of realism for one who is deeply immersed in reality,” adding, “but participatory realism strongly resembles Merleau-Ponty’s endo-ontology, which is a phenomenological ontology for one who deeply participates in Being” (Bitbol, 2020; Gefter, 2015).
QBist theorists assert that “quantum states are ‘expectations about experiences of pointer readings,’” rather than expectations about pointer positions. “Their focus on lived experience, not just on macroscopic variables, is tantamount to performing the transcendental reduction instead of stopping at the relatively superficial layer of the life-world reduction.” Bitbol believes that “quantum physics indeed gives us several reasons to go the whole way down to the deepest variety of phenomenological reduction … not only reduction to experience, or to ‘pure consciousness,’ but also reduction to the ‘living present’” (Bitbol, 2021b).
9.6.7. Direct perception theory
Direct Perception Theory is the idea that “the information required for perception is external to the observer; that is, one can directly perceive an object based on the properties of the distal stimulus alone, unaided by inference, memories, the construction of representations, or the influence of other cognitive processes” (APA, website). Philosopher Ned Block describes non-mainstream views of phenomenal consciousness that take it to work via this kind of “a direct awareness relation to a peculiar entity like a sense datum [i.e., that which is immediately available to the senses] or to objects or properties in the environment.” This direct awareness would seem to have to be “a primitive unanalyzable acquaintance relation that is not a matter of representation.” According to these direct realist or naïve realist theories of consciousness, “the phenomenal character of a perceptual experience is object-constituted in the sense that a perceptual experience of a tomato depends for its existence and individuation on the tomato. Any experience that is of a different tomato will have a different phenomenal character, even if it is phenomenally indistinguishable and even if the different tomato is exactly the same in all its properties and causes exactly the same activations in the brain.” Even subjectively indistinguishable hallucinatory experience would have to be different in phenomenal character as well (Block, 2023).
9.6.8. Gibson’s ecological psychology
Experimental psychologist James J. Gibson proposes an “embodied, situated, and non-representational” approach to perception (which, while not a surrogate for phenomenal consciousness, has features in common). Gibson attacks both behaviorism and cognitivism (e.g., information processing), arguing for direct perception and direct realism. Gibson calls his overarching theory, “Ecological Psychology,” and while his specific aim is “to offer a third way beyond cognitivism and behaviorism for understanding cognition,” an extension to consciousness can be cautiously inferred (Lobo et al., 2018; Gibson, 2024).
Gibson maintains that there is far more information available to our perceptual systems than we are consciously aware of. He posits that “the optical information of an image is not so much an impression of form and color, but rather of invariants. A fixated form of an object only specifies certain invariants of the object, not its solid form.” Perceptual learning is said to be “a process of seeing the differences in the perceptual field around an individual” (Gibson, 2014, 2024).
Gibson rejects “the premise of the poverty of the stimulus, the physicalist conception of the stimulus, and the passive character of the perceiver of mainstream theories of perception.” Rather, he has the main principles of ecological psychology as “the continuity of perception and action” and the “organism-environment system as unit of analysis” (Lobo et al., 2018).
Significantly, Gibson develops the original idea of “affordances” (he coins the term), which are the ways the environment provides opportunities for and motivates actions of animals—human examples include steep slopes inspiring the design of stairs and deposits of hydrocarbons encouraging drilling. Gibson defends the radical idea that “when we perceive an object we observe the object’s affordances and not its particular qualities” because it is both more useful and easier, which would mean that affordances are the objects of perception (Gibson, 2024; Lobo et al., 2018).
If perception is direct, and affordances provide the possibilities, then affordances are a kind of state space of the mind. That environmental affordances may have enabled or selected for consciousness would be consistent with embodied and enactive theories of consciousness.
9.7. Relational theories
Relational Theories of consciousness are those explanations whose distinctive feature is some kind of active or transformative connection with something other than brain circuits and pathways themselves.
9.7.1. A. Clark’s extended mind
The extended mind, according to philosopher Andy Clark, features an “active externalism,” based on the participatory role of the environment in driving cognitive processes. He asserts that when the human organism is linked with an external entity in a two-way interaction, a “coupled system” is created that can be conceptualized as a cognitive system in its own right (independent of the two components). This is because all the components in the system play an active causal role, and they jointly govern behavior in the same sort of way that cognition in a single system (brain) usually does. To remove the external component is to degrade the system’s behavioral competence, just as it would to remove part of its brain. Clark’s thesis is that this sort of coupled process counts equally well as a cognitive process, whether or not it is wholly in the head (Clark and Chalmers, 1998).
Clark concludes his book, Supersizing the Mind, by inviting us “to cease to unreflectively privilege the inner, the biological, and the neural … The human mind, viewed through this special lens, emerges at the productive interface of brain, body, and social and material world.” He marvels that “minds like ours emerge from this colorful flux as surprisingly seamless wholes” (Clark, 2010).
According to Owen Flanagan, “Walking, talking and seeing are all things the enactive, embodied, extended (code words for this hip new view) mind does in the world.” Clark “provides the best argument I’ve seen for the idea that minds are smeared over more space than neuroscience might have us believe, and that mind will continue spreading to other nooks and crannies of the universe as cognitive prostheses proliferate” (Flanagan, 2009).
9.7.2. Noë’s “out of our heads” theory
Philosopher Alva Noë argues that only externalism about the mind and mental content, which requires active and continuous engagement between the brain and its environment, body and beyond, can succeed as a theory of consciousness (Noë, 2010). He uses his attention-alerting phrase “Out of Our Heads” as descriptor, not as metaphor, and he applies it literally. His hypothesis is that expanding the locus of where consciousness occurs may help explain its essence and mechanism. What does this actually mean?
Noë takes issue with both dualism and materialism; attacking the weaknesses of each is not hard going. “We have no better idea how the actions of cells in the head give rise to consciousness than we do how consciousness arises out of immaterial spiritual processes.” So, brain science, he says, while it has the imprimatur of the scientific worldview, is not really going anywhere. It’s like trying to understand what makes a dance “a dance” by studying the movement of muscles (Noë, 2007).
He challenges the assumption that an event in the brain is alone sufficient for consciousness. “We spend all our lives, not as free-floating brains; we’re embodied, we’re environmentally embedded; we’re socially nurtured from the very beginnings of our lives.” His idea is that “The world shows up for us,” with “multiple layers of meaning.”
Noë offers an alternative framework, a novel way of thinking. “There are lots of discrete processes going on inside the head. But that’s not where we should look for consciousness. We occupy a place in the world—all sorts of things are going on around us—and consciousness is that activity of keeping tabs, keeping touch, paying attention to, interacting with the world.”
But what does it mean to say consciousness “is” that activity? “Is” as … “part of the process?” Or “enabling,” “bringing about” or “causing”? Or, in the strong sense of “is” as identity theory?
Noë distinguishes the meaning and purposes of consciousness, which take place “out of our heads,” from the mechanical locus of consciousness, the substrate on which its symbols are physically encoded and manipulated.
Noë uses dreams as corroborating evidence that consciousness occurs outside of the brain. He distinguishes dreams from real-life experiences, in that the latter have greater density, detail and robustness. “You can’t experience in a dream everything that you can experience outside of a dream” (Noë, 2007).
Consciousness to Noë means “How the world shows up for us depends not only on our brains and nervous systems but also on our bodies, our skills, our environment, and the way we are placed in and at home in the world.” This does not happen automatically or passively, as something done to the organism; it is what the organism must do deliberately, proactively. “We achieve access to the world. We enact it by enabling it to show up for us.… If I don’t have the relevant skills of literacy, for example, the words written on the wall do not show up for me” (Noë, 2012).
He stresses that consciousness isn’t just a matter of events triggered inside us by things outside us because things are triggered inside us all the time by all sorts of things outside of us and they don’t rise to consciousness. Much depends on context, interest, knowledge and understanding.
Thus, consciousness is what happens when sentient creatures interact with their environment via their brains; consciousness is not what their brains are doing to them. A science of consciousness, Noë says, must explain the role the brain is playing in a dynamic active involvement. It’s not just that consciousness happens in the brain; it’s not like that. “We are not our brains” (Noë, 2012).
9.7.3. Loorits’s structural realism
Philosopher Kristjan Loorits’s Structural Realism posits that “conscious experiences are fully structural phenomena that reside in our brains in the form of complex higher-order patterns in neural activity.” He claims that the structural view of consciousness solves both the hard problem and the problem of privacy (Loorits, 2019).
On the hard problem, according to Loorits, while some properties of our conscious experiences seem to be qualitative and nonstructural—qualia—“these apparently nonstructural properties are, in fact, fully structural.” He conjectures that qualia are “compositional with internal structures that fully determine their qualitative nature” (Loorits, 2019), that “qualia are the structures of vast networks of unconscious associations, and that those associational structures can be found in our neural processes.” He makes the ambitious prediction that “with the proper brain-stimulating technology, it should be possible to reveal the structural nature of qualia to the experiencing subject directly” (Loorits, 2019). Loorits concludes that “consciousness as a whole can be seen as a complex neural pattern that misperceives some of its own highly complex structural properties as monadic and qualitative. Such neural pattern is analyzable in fully structural terms and thereby the hard problem is solved” (Loorits, 2014). (As for “the notion of structure,” Loorits’s Structural Realism has some structures existing in the world in an objective sense and has conscious experiences among such structures [Loorits, 2019].)
On the privacy problem, according to Loorits, while our “powerful intuition” is that “the content of my consciousness is directly accessible only to me”—a brain-bound internalist approach to consciousness, which comports well with neurobiological theories—some argue that “we can only talk about phenomena whose defining properties are known to us from the public realm.” According to this externalist approach, “if our conscious experiences were entirely private, we could not talk or theorize about them”—a way of thinking that suggests “conscious experiences should be understood in terms of an organism’s relationship to its socio-physical environment” (Loorits, 2019).
In defending internalism as the “location” of consciousness, Loorits argues that “structural phenomena are describable and analyzable in public terms even if those phenomena themselves are private.” Moreover, “the structure of our consciousness is always present in our neural processes and only sometimes (additionally) in an extended system that includes elements of the environment” (Loorits, 2018).
Loorits offers modest support to illusionists who propose that “the apparently non-structural features of consciousness are in fact fully structural and merely seem to be non-structural.” He argues that “such a position is tenable, but only if the non-structural ‘seemings’ are interpreted as perspectival phenomena and not as theorists’ fictions or absolute nothingness” (Loorits, 2022).
When George Musser was musing that qualia might be relational (9.7), he met with Loorits, and to Musser’s surprise, Loorits “had gone off the idea.” The disjunction is between third- and first-person perspectives, where the former is how qualia are explained relationally and the latter is precisely the hard problem. According to Musser, Loorits’s current thinking was that “qualia may well be relational behind the scenes, but as long as they feel intrinsic to us, they still elude scientific description.” Loorits concluded, “There is still a hard problem in a sense that we seem to be able to experience qualia without being aware of their relational components” (Musser, 2023a, Musser, 2023b). (I tip my hat when a philosopher changes their mind.)
9.7.4. Lahav’s relativistic theory
Physicist Nir Lahav characterizes consciousness as a physical phenomenon that is relative to the measurements of a “cognitive frame of reference.” Just as different observers can have different measurements of velocity in a relativistic context, the same is true for consciousness. Two people can have different cognitive frames of reference, experiencing conscious awareness for themselves but only measuring brain activity for the other. The brain doesn’t create conscious experiences through computations; rather, conscious experiences arise due to the process of physical measurement. Different physical measurements in different frames of reference manifest different physical properties, even when measuring the same phenomenon. This leads to different manifestations of conscious experience and brain activity in separate cognitive frames (Lahav and Neemeh, 2022).
9.7.5. Tsuchiya’s relational approach to consciousness
Neuroscientist Nao Tsuchiya’s relational approach to consciousness is not so much a theory of consciousness per se but more a fresh methodology, “an alternative approach to characterize, and eventually define, consciousness through exhaustive descriptions of consciousness’s relationships to all other consciousnesses.” His approach is grounded in category theory (i.e., mathematical structures and their relations), which is used to characterize the structure of conscious phenomenology as a category and describe the interrelationships of members with mathematical precision. Tsuchiya proposes several possible definitions of categories of consciousness, both in terms of level and contents—the objective being for these conceptual tools to clarify complex theoretical concepts about consciousness, which have long been discussed by philosophers and psychologists, and for such conceptual clarification to inspire further theoretical and empirical research. To the extent that the project is successful, it will support relational theories of consciousness (Tsuchiya and Saigo, 2021).
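As a minimal sketch of the scaffolding such an approach presupposes (the textbook notion of a category: objects, morphisms, identities, composition), the following toy code treats a few conscious contents as objects and directed relations between them as morphisms; the labels and relations are illustrative placeholders, not Tsuchiya and Saigo's actual constructions.

```python
# Toy rendering of the category-theoretic scaffolding: conscious contents as
# objects, directed relations between them as morphisms. The relational idea
# is that each object is characterized by the totality of its relations.
# All labels and relations below are illustrative placeholders.

from itertools import product

objects = ["red_patch", "orange_patch", "C_major_chord"]

# Morphisms as (source, target) pairs; identities are required by definition.
morphisms = {(x, x) for x in objects} | {("red_patch", "orange_patch")}

def compose(f, g):
    """Compose f: A -> B with g: B -> C into A -> C (when the ends match)."""
    (a, b1), (b2, c) = f, g
    return (a, c) if b1 == b2 else None

# A category must contain identities and be closed under composition.
has_identities = all((x, x) in morphisms for x in objects)
closed = all(
    compose(f, g) in morphisms
    for f, g in product(morphisms, repeat=2)
    if f[1] == g[0]
)
print("identities present:", has_identities)   # True
print("closed under composition:", closed)     # True
```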
9.7.6. Jaworski’s hylomorphism
Philosopher William Jaworski argues that the hard problem of consciousness arises only if hylomorphism is false. Hylomorphism is the claim that structure is a basic ontological and explanatory principle, and is responsible for individuals being the kinds of things they are, and having the powers or capacities they have. As Jaworski explains, “A human is not a random collection of physical materials, but an individual composed of physical materials with a structure that accounts for what it is and what it can do—the powers it has. What is true of humans is true of their activities as well.” Structured activities, he says, include perceptual experiences, which means that everything about a perceptual experience, including its phenomenal character, can be explained by describing the perceiver’s structure: perceptual subsystems, the powers of those subsystems, and the coordination that unifies their activities into the activity of the perceiver as a whole. Conscious experiences, Jaworski concludes, “thus fit unproblematically into the natural world—just as unproblematically as the phenomenon of life” (Jaworski, 2020).
According to Jaworski, from a hylomorphic perspective, “mind-body problems are byproducts of a worldview that rejects structure, and which lacks a basic principle which distinguishes the parts of the physical universe that can think, feel, and perceive from those that can’t. Without such a principle, the existence of those powers in the physical world can start to look inexplicable and mysterious.” But if mental phenomena are structural phenomena, he says, then they are part of the physical world and thus “hylomorphism provides an elegant way of solving mind-body problems” (Jaworski, 2016).
While hylomorphism exemplifies a suite of arguments purporting to undermine the hard problem, its own challenge seems two-fold: (i) by defining structure as primitive and fundamental, it almost embeds the desired conclusion in the definitional premise; and (ii) by not distinguishing kinds of structure, all structure holds the same level of ultimate explanation, which may not fit consciousness.
9.7.7. Process theory
A process theory of consciousness is founded on process philosophy, the metaphysical idea that fundamental reality is dynamic, change, shift—the action of becoming. With respect to consciousness, process philosophy refuses to bifurcate human experience from nature and, as a consequence, holds to a “panexperientialist” ontology in which experience goes all the way down in nature, and consciousness genuinely emerges as an achievement of the evolution of experience through time. Only in the case of God (if God exists, of course) does consciousness belong to nature as an ontological primitive (Davis, 2020, 2022; Faber, 2023).
David Ray Griffin suggests that “panexperientialist physicalism,” by allowing for “compound individuals” and thereby a “nondualistic interactionism” that combines the strengths of dualism and materialism, can provide a theory that overcomes the problems of materialist physicalism (Griffin, 1997). Panexperientialist physicalism, he says, portrays the world as comprised of creative, experiential, physical-mental events. His process-type panexperientialism agrees with materialism that there is only one kind of stuff, but enlarges “energy” to “experiential creativity” (thus distinguishing it from panpsychism, 13.12). Process panexperientialists assume that it lies in the very nature of things for events of experiential creativity to occur—for partially self-creative experiences to arise out of prior experiences and then to help create subsequent experiences. The process by which our (sometimes partly conscious) experiences arise out of those billions of events constituting our bodies at any moment is simply the most complex example of this process—and the only one whose results we can witness from the inside.
9.8. Representational theories
Representational Theories of consciousness elevate the explanatory power of mental representations, which are inner-perceived notions or imagery of things, concrete or abstract, that are not currently being presented to the senses. Representational theories seek to explain consciousness in terms of mental representations rather than simply as neural or brain states. Mental representations utilize cognitive symbols that can be manipulated in myriad ways to describe, consider and explain an endless variety of thoughts, ideas, and concepts (Mental representation, 2024. Wikipedia). According to strict representationalism, conscious mental states have no mental properties other than their representational properties (Van Gulick, 2019).
According to philosopher Michael Tye, “representationalism is a thesis about the phenomenal character of experiences, about their immediate subjective ‘feel’. At a minimum, the thesis is one of supervenience: necessarily, experiences that are alike in their representational contents are alike in their phenomenal character. So understood, the thesis is silent on the nature of phenomenal character. Strong or pure representationalism goes further. It aims to tell us what phenomenal character is.” In this view, “phenomenal character is one and the same as representational content that meets certain further conditions” (Tye, 2002).
Philosopher Fred Dretske’s “Representational Thesis” is the claim that: (1) All mental facts are representational facts, and (2) All representational facts are facts about informational functions (Dretske, 2023).
Philosopher Amy Kind observes that “as philosophers of mind have begun to rethink the sharp divide that was traditionally drawn between the phenomenal character of an experience (what it’s like to have that experience) and its intentional content (what it represents), representationalist theories of consciousness have become increasingly popular” (Kind, 2010).
While almost all theories of consciousness have representational features, the representational theories themselves, including those that follow, are distinguished by the more robust claim that their representational features are what explain consciousness (Van Gulick, 2019). A hurdle for all such theories is the need to explain phenomenology in terms of intentionality, the “aboutness” of mental states, under the assumption that intentionality must be represented (Lycan, 2019).
This is Jerry Fodor’s challenge: “I suppose that sooner or later the physicists will complete the catalog they’ve been compiling of the ultimate and irreducible properties of things. When they do, the likes of spin, charm, and charge will perhaps appear on their list. But aboutness surely won’t; intentionality simply doesn’t go that deep” (Fodor, 1989).
9.8.1. First-order representationalism
First-order representationalism (FOR) seeks to account for consciousness in terms of, or by reducing to, external, world-directed (or first-order) intentional states (Gennaro, n.d.). In other words, consciousness can be explained, primarily, by understanding how the directedness of our mental states at objects and states of affairs in the world is generated directly by those objects and states of affairs (Searle, 1979).
Fred Dretske asserts that “the phenomenal aspects of perceptual experiences are one and the same as external, real-world properties that experience represents objects as having.” He argues that “when a brain state acquires, through natural selection, the function of carrying information, then it is a mental representation suited (with certain provisos) to being a state of consciousness.” (In contrast, “representations that get their functions through being recruited by operant conditioning, on the other hand, are beliefs.”) (Dretske, 1997).
As philosopher Peter Carruthers explains, “the goal [of FOR] is to characterize all of the phenomenal—‘felt’—properties of experience in terms of the representational contents of experience (widely individuated). On this view, the difference between an experience of red and an experience of green will be explained as a difference in the properties represented—reflective properties of surfaces, say—in each case. And the difference between a pain and a tickle is similarly explained in representational terms—the difference is said to reside in the different properties (different kinds of disturbance) represented as located in particular regions of the subject’s own body” (Carruthers, 2000).
Carruthers recounts his unusual transition from higher-order theory to first-order theory. He originally explained phenomenal consciousness in terms of “dispositionalist higher-order thought theory,” which he characterized as “a certain sort of intentional content (‘analog’, or fine-grained), held in a special-purpose short-term memory store in such a way as to be available to higher-order thoughts … all of those contents are at the same time higher-order ones, acquiring a dimension of seeming or subjectivity” (Carruthers, 2000). (One of his goals, he says, is “to critique mysterian [10.2] and property-dualist accounts of phenomenal consciousness … [by] defending the view that consciousness can be reductively explained in terms of active non-conceptual representations.” He sought to “disarm (and explain away the appeal of) the various ‘hard problem’ thought experiments (zombies, explanatory gaps, and the rest)” (Carruthers, 2017).)
The later Carruthers concludes that the earlier Carruthers had “rejected first-order representational theories of consciousness on inadequate grounds.” As a result, “since there is extensive evidence that conscious experience co-occurs with the global broadcasting of first-order non-conceptual contents in the brain [9.2.3], and since this evidence is most easily accommodated by first-order representationalism, the latter is preferable to any form of higher-order account” (Carruthers, 2017).
Philosopher Neil Mehta and anesthesiologist George Mashour describe FOR as consisting of “sensory representations directly available to the subject for action selection, belief formation, planning, etc.” They posit a neuroscientific framework according to which neural correlates of general consciousness include prefrontal cortex, posterior parietal cortex, and non-specific thalamic nuclei, while neural correlates of specific consciousness include sensory cortex and specific thalamic nuclei (Mehta and Mashour, 2013).
FOR’s core philosophical idea, Mehta and Mashour state, is that “any conscious state is a representation, and what it’s like to be in a conscious state is wholly determined by the content of that representation. By definition, a representation is about something, and the content of a representation is what the representation is about. For instance, the word ‘dolphins’ (representation) is about dolphins (content).” But, they clarify, “a representation is not identical to its content.” The English word “dolphins” has eight letters, but dolphins themselves do not have any letters. “Conversely, dolphins swim, but the word ‘dolphins’ does not swim.”
This distinction leads to the strong view that neural states seem to have very different properties than conscious perceptions. “For instance, when someone consciously perceives the color orange, normally there is nothing orange in that person’s brain. First-order representationalists explain this by holding that a conscious perception of orange is a representation of orange, and (as the ‘dolphin’ example shows) the properties of a representation can be very different from the properties of its content” (Mehta and Mashour, 2013).
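The vehicle/content distinction can be made concrete with a toy rendering of the authors' own 'dolphins' example; this is only an illustration of how properties of a representation and properties of its content come apart, not a model of anything neural.

```python
# The word "dolphins" (a representation) has properties its content lacks,
# and vice versa: the representation is not identical to its content.

representation = "dolphins"            # the vehicle: an English word
content = {"kind": "marine mammal",    # a stand-in for what the word is about
           "can_swim": True}

print(len(representation))             # 8: the word has eight letters
print(content["can_swim"])             # True: dolphins swim
# The word does not swim, and dolphins have no letters; vehicle properties
# and content properties diverge.
```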
FOR’s core neurobiological idea is that “each specific type of conscious state corresponds to a specific type of neural state.” Ned Block seeks to “disentangle the neural basis of phenomenal consciousness from the neural machinery of the cognitive access that underlies reports of phenomenal consciousness.” He argues that, in a certain sense, “phenomenal consciousness overflows cognitive accessibility.” He posits that “we can find a neural realizer of this overflow if we assume that the neural basis of phenomenal consciousness does not include the neural basis of cognitive accessibility and that this assumption is justified (other things being equal) by the explanations it allows” (Block, 2007c).
Block hypothesizes that the conscious experience of motion is a certain kind of activation of visual area V5, which suggests that sensory systems are the neural correlates of sensory consciousness. He further speculates that what’s required for consciousness in general are connections between these cortical regions and the thalamus, “which suggests that sensory and perhaps post-sensory systems … are the neural correlates of general consciousness, as well” (Block, 2007c).
Block says he favors the first-order point of view, and if it is right, he says, “It may be conscious phenomenology that promotes global broadcasting, something like the reverse of what the global workspace theory of consciousness supposes. First-order phenomenology may be a causal factor in promoting global broadcasting; but according to the global workspace theory, global broadcasting constitutes consciousness rather than being caused by it” (Block, 2023, pp. 8–9).
With a pungent example, Block compares first-order representationalism with higher-order representationalism (9.8.3), higher-order theories (HOT). “We have two perceptions that equally satisfy the descriptive content of the HOT, but one and not the other causes the HOT. But that gives rise to the problem of how a thought to the effect that I am smelling vomit could make a perception of crimson a conscious perception. The perception of crimson could cause the HOT while a simultaneous first-order smell-representation of vomit does not cause any higher-order state. The consequence would be that the perception of crimson is a conscious perception and the perception of vomit is not, even though the subject experiences the perception of crimson as if it were the perception of vomit.” Block concludes that “a descriptivist view based on content is inadequate,” and that “the difficulty for the HOT theory is that it is unclear what relation has to obtain between a HOT and a perception for the perception to be conscious” (Block, 2023, pp. 425–426).
9.8.2. Lamme’s recurrent processing theory
Neuroscientist Victor Lamme proposes Recurrent Processing Theory, which holds that recurrent processing within the brain’s massively interconnected sensory systems, involving both feedforward and feedback connections, is necessary and sufficient for consciousness. The visual system provides a case where “forward connections from primary visual area V1, the first cortical visual area, carry information to higher-level processing areas, and the initial registration of visual information involves a forward sweep of processing.” Moreover, many feedback connections link visual areas with other brain regions, which, later in processing, are activated and thereby yield dynamic activity within the visual system (Wu, 2018).
Lamme proposes four stages of visual processing (Wu, 2018):
- Stage 1: Visual signals are processed locally within the visual system (i.e., superficial feedforward processing).
- Stage 2: Visual signals travel further forward in the processing hierarchy, where they can influence action (i.e., deep feedforward processing).
- Stage 3: Information travels back into earlier visual areas, leading to local recurrent processing (i.e., superficial recurrent processing).
- Stage 4: Information activates widespread brain areas (i.e., widespread recurrent processing).
According to Lamme, it is the recurrent processing in Stage 3 (making this a first-order theory), which can occur in both sensory and post-sensory areas, that he claims is necessary and sufficient for consciousness. In other words, “for a visual state to be conscious is for a certain recurrent processing state to hold of the relevant visual circuitry” (Wu, 2018).
Ned Block calls Recurrent Processing Theory “basically a truncated form of the global workspace account: It identifies conscious perception with the recurrent activations in the back of the head without the requirement of broadcasting in the global workspace.” Block points out that “first-order theories do not say that recurrent activations are by themselves sufficient for consciousness. These activations are only sufficient given background conditions. Those background conditions probably include intact connectivity with subcortical structures.” What then is “enough for conscious perceptual phenomenology” is “the active recurrent loops in perceptual areas plus background conditions.” Block concludes: “So long as high-level representations participate in those recurrent loops, conscious high-level content is assured” (Block, 2023, pp. 8–9).
Lamme critiques Global Workspace Theory [9.2.3] as “all about access but not about seeing” (even though his Stage 4 is consistent with global workspace access). The crucial distinction is that Global Workspace Theory has recurrent processing at Stage 4 as necessary for consciousness, while Recurrent Processing Theory has recurrent processing at Stage 3 as sufficient. The latter would enable phenomenal consciousness without access by the global neuronal workspace (Wu, 2018).
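One schematic way to see this contrast, assuming nothing beyond the stage labels above, is to treat each stage as a flag and compare where the two theories draw the line; the class and function names below are hypothetical, not drawn from Lamme or from Global Workspace Theory.

```python
# Toy contrast between Recurrent Processing Theory and a global-workspace
# reading of Lamme's four stages. The stage flags are schematic labels only.

from dataclasses import dataclass

@dataclass
class VisualEpisode:
    feedforward_local: bool      # Stage 1: superficial feedforward processing
    feedforward_deep: bool       # Stage 2: deep feedforward (can drive action)
    recurrent_local: bool        # Stage 3: local recurrent processing
    recurrent_widespread: bool   # Stage 4: widespread recurrent processing

def conscious_rpt(ep: VisualEpisode) -> bool:
    # Recurrent Processing Theory: Stage 3 recurrence suffices.
    return ep.recurrent_local

def conscious_gwt(ep: VisualEpisode) -> bool:
    # Global-workspace reading: only Stage 4 broadcasting counts.
    return ep.recurrent_widespread

masked_stimulus = VisualEpisode(True, True, True, False)
print(conscious_rpt(masked_stimulus))  # True: phenomenal without global access
print(conscious_gwt(masked_stimulus))  # False: no workspace broadcast
```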
Overall, Lamme avers that “neural and behavioral measures should be put on an equal footing” and that “only by moving our notion of mind towards that of brain can progress be made” (Lamme, 2006). He depicts “a notion of consciousness that may go against our deepest conviction: ‘My consciousness is mine, and mine alone.’ It’s not,” he says (Lamme, 2010).
9.8.3. Higher-order theories
According to Higher-Order Theories of consciousness, what makes a perception conscious is the presence of an accompanying cognitive state about the perception. This means that phenomenal consciousness is not immediate awareness of sensations. Rather, it is the higher-level sensing of those sensations, a product of second-order thoughts about first-order perceptions or mental states—a two-level process. Higher-Order Theories are distinguished from other cognitive accounts of phenomenal consciousness which assume that first-order perceptions or mental states can themselves be directly conscious—a one-level process (9.8.1, 9.8.2) (Carruthers, 2020, Higher-order theories of consciousness, 2023).
According to Peter Carruthers, “humans not only have first-order non-conceptual and/or analog perceptions of states of their environments and bodies, they also have second-order non-conceptual and/or analog perceptions of their first-order states of perception.” This higher-order perception theory holds that “humans (and perhaps other animals) not only have sense-organs that scan the environment/body to produce fine-grained representations, but they also have inner senses which scan the first-order senses (i.e. perceptual experiences) to produce equally fine-grained, but higher-order, representations of those outputs.” Hence, Higher-Order Theories are also called “inner-sense theory.” Notably, “the higher-order approach does not attempt to reduce consciousness directly to neurophysiology but rather its reduction is in mentalistic terms, that is, by using such notions as thoughts and awareness” (Cardenas-Garcia, 2023).
The main motivation driving higher-order theories of consciousness, according to Carruthers, “derives from the belief that all (or at least most) mental-state types admit of both conscious and unconscious varieties … And then if we ask what makes the difference between a conscious and an unconscious mental state, one natural answer is that conscious states are states that we are aware of.” This translates into the view that conscious states are states “that are the objects of some sort of higher-order representation—whether a higher-order perception or experience, or a higher-order thought” (Cardenas-Garcia, 2023).
Various flavors of higher-order theories can be distinguished, including the following (Cardenas-Garcia, 2023); a schematic rendering of the actualist criterion follows the list:
- Actualist Higher-Order Thought Theory (championed by David Rosenthal): A phenomenally conscious mental state is a state that is the object of a higher-order thought, and which causes that thought non-inferentially.
- Dispositionalist Higher-Order Thought Theory: A phenomenally conscious mental state is a state that is available to cause (non-inferentially) higher-order thoughts about itself (or perhaps about any of the contents of a special-purpose, short-term memory store).
- Self-Representational Theory: A phenomenally conscious mental state is a state that, at the same time, possesses an intentional content, thereby in some sense representing itself to the person who is the subject of that state.
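As flagged above, here is a schematic rendering of the actualist criterion, with hypothetical names: a mental state counts as phenomenally conscious just in case some higher-order thought is (non-inferentially) about it.

```python
# Schematic check of the actualist higher-order-thought criterion.
# All class and variable names are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class MentalState:
    label: str

@dataclass
class HigherOrderThought:
    target: MentalState           # the first-order state the thought is about
    non_inferential: bool = True  # caused directly by the state, not inferred

def is_conscious(state: MentalState, thoughts: list) -> bool:
    """Conscious iff some higher-order thought is non-inferentially about it."""
    return any(t.target is state and t.non_inferential for t in thoughts)

seeing_red = MentalState("seeing red")
subliminal_prime = MentalState("subliminal prime")
thoughts = [HigherOrderThought(target=seeing_red)]

print(is_conscious(seeing_red, thoughts))        # True: targeted by a HOT
print(is_conscious(subliminal_prime, thoughts))  # False: no HOT about it
```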
According to Ned Block, there are two approaches to higher-order thought (HOT) theories of consciousness. The “double representation” approach says that the HOT involves a distinct coding of the perceptual content, such that a conscious perception will be “accompanied” by a thought of that experience, giving two representations of the conscious experience, one perceptual, one cognitive and conceptual. He considers it “mysterious” how, on this approach, a perception becomes conscious. The second version of HOT has a thought, or at least a cognitive state, that makes a perception conscious, but that thought does not itself have any perceptual content. Block refers to Hakwan Lau, who sometimes describes the higher-order state as a “pointer” to a first-order state. The pointer theory is cognitive in that the pointer is a thought, but it is not conceptualist, since the thought that is supposed to make a perception conscious involves no concept of a conscious experience (Block, 2023, pp. 425–426).
Lau himself argues that the key to characterizing consciousness lies in its connections to belief formation and epistemic justification on a subjective level (Lau, 2019a); he describes consciousness as “a battle between your beliefs and perceptions” (Lau, 2019b). A clue, he suggests—at least at the level of functional anatomy—is that the neural mechanisms for conscious perception and sensory metacognition are similar, sensory metacognition meaning the monitoring of the quality or reliability of internal perceptual signals. Both mechanisms involve neural activity in the prefrontal and parietal cortices, outside of primary sensory regions (9.8.4).
Reflexive theories, which link consciousness and self-awareness, are either a sister or a cousin of Higher-Order Theories. They differ in that reflexive theories situate self-awareness within the conscious state itself rather than in an independent meta-state focusing on it. The same conscious state is both intentionally outer-directed awareness of external perceptions and intentionally inner-directed awareness of self-sense. A strong claim is that this makes reflexive awareness a central feature of conscious mental states and thereby qualifies as a theory of consciousness. Whether reflexive theories are variants of Higher-Order Theory (“sister”) or a “same-order” account of consciousness as self-awareness (“cousin”) is in dispute (Van Gulick, 2019).
Social psychologist Alexander Durig claims that our two brain hemispheres, operating as two brains, aware of each other and interacting with each other, exist in a system of “interactive reflexivity,” and that this reflexivity, with each hemisphere perpetually aware of the world and of the other’s perception of the world, is the foundation of consciousness (Durig, 2023).
9.8.4. Lau’s perceptual reality monitoring theory
Cognitive neuroscientist Hakwan Lau introduces Perceptual Reality Monitoring Theory, which he says is an empirically-grounded higher-order theory of conscious perception. He proposes that conscious perception in an agent occurs “if there is a relevant higher-order representation with the content that a particular first-order perceptual representation is a reliable reflection of the external world right now. The occurrence of this higher-order representation gives rise to conscious experiences with the perceptual content represented by the relevant first-order state.” This structure allows us to distinguish “reality from fantasy in a generally reliable fashion” (Lau, 2019a).
The agent is not conscious of the content of this higher-order representation itself, Lau says, “but the representation is instantiated in the system in such a way to allow relevant inferences to be drawn (automatically) and to be made available to the agent (on a personal level, in ways that make the inferences feel subjectively justified)” (Lau, 2019a). It is a subpersonal process. “That is, we don’t have to think hard to come up with this higher-order representation. It’s not a thought in that sense.” Rather, “this higher-order representation serves as a tag or label indicating the suitable epistemic status of the sensory representation, and functions as a gating mechanism to route the relevant sensory information for further cognitive processing” (Lau, 2022, p. 28).
This structural mechanism, Lau asserts, sets his view “apart from global theories” (9.2.3). This is because, he says, “such further processing is only a potential consequence, but not a constitutive part of the subjective experience … In other words, consciousness is neither cognition nor metacognition. It is the mechanistic interface right between perception and cognition.” Lau believes that “such higher-order mechanisms likely reside within the mammalian prefrontal cortex, where the functions of perceptual metacognition are also carried out” (Lau, 2022, p. 28).
But what happens when the higher-order representation is missing? Wouldn’t subjective experience also be missing? This, Lau says, explains “why sometimes sensory representations alone do not lead to conscious experiences at all, as in conditions like blindsight, where, because of brain damage, a person (or an animal) is able to respond accurately to visual stimuli while denying any conscious awareness of them” (Lau, 2022, pp. 35–36).
Blindsight, in fact, is a litmus test for any theory of consciousness and Lau claims his theory offers the most coherent explanation: Blindsight “occurs when a first-order representation occurs without the corresponding higher-order representation … That’s why the perceptual capacity is there (due to the first-order representations), but the phenomenology of conscious perception is missing” (Lau, 2019b).
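Under the assumptions above, the blindsight case can be rendered schematically: a first-order percept still supports behavioral discrimination, but only a higher-order "this reliably reflects the world right now" tag yields reported experience. The threshold and field names are illustrative, not part of Lau's model.

```python
# Sketch of the perceptual-reality-monitoring idea: a first-order percept is
# consciously experienced only if a higher-order check tags it as a reliable
# reflection of the world right now. Threshold and fields are illustrative.

from dataclasses import dataclass

@dataclass
class FirstOrderPercept:
    content: str
    signal_quality: float   # stand-in for whatever the monitor evaluates

def reality_monitor(p: FirstOrderPercept, threshold: float = 0.5) -> bool:
    """Higher-order tag: 'this percept reliably reflects the world now'."""
    return p.signal_quality >= threshold

def respond(p: FirstOrderPercept) -> dict:
    tagged = reality_monitor(p)
    return {
        "behavioral_discrimination": True,       # first-order content can guide action
        "reported_conscious_experience": tagged, # only tagged percepts are experienced
    }

normal_vision = FirstOrderPercept("moving grating", signal_quality=0.9)
blindsight = FirstOrderPercept("moving grating", signal_quality=0.1)

print(respond(normal_vision))  # discriminates and reports seeing
print(respond(blindsight))     # discriminates above chance, denies seeing
```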
Lau says his theory is a functionalist account. As such, he says, “some animals may not be conscious. And yet, perhaps even a robot or computer program could be.” He highlights “the role of memory in conscious experience, even for simple percepts. How an experience feels depends on implicit memory of the relationships between different perceptual representations within the brain” (Lu et al., 2022).
Lau critiques both the global view of consciousness (9.2.3) and the local view (9.8.1 and 9.8.2) as “polar extremes,” arguing that his own intermediate or centrist position is superior (Lau, 2022, pp. 25, 26, 130). As part of his model, he takes from artificial intelligence the idea of a “discriminator,” which can distinguish between “real” and “self-generated” images (Lau, 2022, p. 142). Applied to human consciousness, an analogous discriminator “distinguishes between true perceptions of the world, memory, fantasy, and neuronal noise.” For conscious perception of an object to occur, this discriminator must confirm that the early sensory information represents the object. This model, Lau asserts, accounts for sensory richness, because higher-order representations access richer, lower-level perceptions of first-order representations (Stirrups, 2023). Bottom line, Lau stakes the ambitious claim that his theory explains the subjective “what-it-is-like-ness” of first-person experience—why it “feels like something” to be in a particular brain state, say with a sharp pain—mediated by higher-order representations in the brain (Lau, 2022, p. 197).
Enhancing his model, Lau proposes that “because of the way the mammalian sensory cortices are organized, perceptual signals in the brain are spatially ‘analog’ in a specific sense,” which enables “computational advantages.” Given this analog nature, “when a sensory representation becomes conscious, not only do we have the tendency to think that its content reflects the state of the world right now, also determined is what it is like to have the relevant experience—in terms of how subjectively similar it is with respect to all other possible experiences.” Lau submits that this addresses the hard problem, “better than prominent alternative views” (Lau, 2022, p. 29).
9.8.5. LeDoux’s higher-order theory of emotional consciousness
Neuroscientist Joseph LeDoux’s Higher-Order Theory of Emotional Consciousness combines his approach to higher-order representationalism (9.8.3) and his commitment to the centrality of emotion. His thesis is that “the brain mechanisms that give rise to conscious emotional feelings are not fundamentally different from those that give rise to perceptual conscious experiences.” Both, he proposes, “involve higher-order representations (HORs) of lower-order information by cortically based general networks of cognition” (GNC). The theory argues that GNC and “self-centered higher-order states are essential for emotional experiences” (Ledoux and Brown, 2017).
LeDoux challenges the traditional view that emotional states of consciousness (emotional feelings) are “innately programmed in subcortical areas of the brain,” and that they are “different from cognitive states of consciousness, such as those related to the perception of external stimuli.” Rather, LeDoux argues that “conscious experiences, regardless of their content, arise from one system in the brain” and that “emotions are higher-order states instantiated in cortical circuits.” In this view, all that differs in emotional and nonemotional states are “the kinds of inputs that are processed.” According to LeDoux, “although subcortical circuits are not directly responsible for conscious feelings, they provide nonconscious inputs that coalesce with other kinds of neural signals in the cognitive assembly of conscious emotional experiences.”
For understanding the emotional brain, LeDoux focuses on “fear,” defining it as “the conscious feeling one has when in danger.” In the presence of a threat, he says, “different circuits underlie the conscious feelings of fear and the behavioral responses and physiological responses that also occur.” But it is the “experience of fear,” the conscious emotional feeling of fear, that informs LeDoux’s theory of consciousness, which he explains as follows. “A first-order representation of the threat enters into a higher-order representation, along with relevant long-term memories—including emotion schema—that are retrieved. This initial HOR involving the threat and the relevant memories occurs nonconsciously. Then, a HOROR [i.e., a third-order state, a HOR of a representation, a HOR of a HOR] allows for the conscious noetic experience of the stimulus as dangerous. However, to have the emotional autonoetic experience of fear, the self must be included in the HOROR” (Ledoux and Brown, 2017).
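A schematic layering of the levels just described may help, with all names hypothetical: a nonconscious higher-order representation combines the threat percept with retrieved memories, a HOROR re-represents it, and inclusion of the self in the HOROR marks the difference between noetic registration of danger and the felt, autonoetic experience of fear.

```python
# Schematic layering of the account described above. Names are placeholders.

from dataclasses import dataclass, field

@dataclass
class FirstOrder:
    stimulus: str                       # e.g., "snake on the path"

@dataclass
class HOR:                              # nonconscious higher-order representation
    percept: FirstOrder
    memories: list = field(default_factory=list)  # includes emotion schema

@dataclass
class HOROR:                            # higher-order representation of a HOR
    lower: HOR
    includes_self: bool = False

def experience(h: HOROR) -> str:
    if h.includes_self:
        return "autonoetic emotional experience: fear"
    return "noetic experience: stimulus registered as dangerous"

threat = HOR(FirstOrder("snake on the path"), memories=["snakebite schema"])
print(experience(HOROR(threat)))                      # noetic only
print(experience(HOROR(threat, includes_self=True)))  # felt fear
```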
Advancing his theory, LeDoux explores “introspection,” the term given by higher-order theorists to this third level of representations, that is, “to be aware of the higher-order state (to be conscious that you are in that state).” LeDoux proposes “a more inclusive view of introspection, in which the term indicates the process by which phenomenally experienced states result.” Introspection, he says, “can involve either passive noticing (as, for example, in the case of consciously seeing a ripe strawberry on the counter) or active scrutinizing (as in the case of deliberate focused attention to our conscious experience of the ripe strawberry).” Both kinds of introspection lead to phenomenal experience, in LeDoux’s view (Ledoux and Brown, 2017).
HOROR theory states that “phenomenal consciousness does not reflect a sensory state (as proposed by first-order theory) or the relation between a sensory state and a higher-order cognitive state of working memory (as proposed by traditional HOT). Instead, HOROR posits that phenomenal consciousness consists of having the appropriate HOR of lower-order information, where lower-order does not necessarily mean sensory, but instead refers to a prior higher-order state that is rerepresented.” He says, “This second HOR is thought-like and, in virtue of this, instantiates the phenomenal, introspectively accessed experience of the external sensory stimulus. That is, to have a phenomenal experience is to be introspectively aware of a nonconscious HOR.” He distinguishes ordinary introspective awareness, which is the passive kind of “noticing” that he postulates is responsible for phenomenal consciousness, “from the active scrutinizing of one’s conscious experience that requires deliberate attentive focus on one’s phenomenal consciousness.” Active introspection, he stresses, “requires an additional layer of HOR (and thus a HOR of a HOROR).”
In studies of human patients, LeDoux and his PhD adviser, Michael Gazzaniga, “concluded that conscious experiences are the result of cognitive interpretation of situations in an effort to help maintain a sense of mental unity in the face of the neural diversity of non-conscious behavioral control systems in our brain” (LeDoux, 2023b).
Rejecting the notion of the “self,” and certainly mind-body dualism, LeDoux positions “consciousness” as the fourth and final “realm of existence” for animal life, the four realms being “bodily, neural, cognitive, and conscious.” LeDoux replaces the self with an “ensemble of being” that “subsumes our entire human existence, both as individuals and as a species” (LeDoux, 2023a).
LeDoux’s views continue to develop. In particular, he picks out two overarching perspectives. First, his multi-state hierarchical model of consciousness, which features an intricate anatomical framework evincing the complexity of higher-order processing via redundancy. The multi-state hierarchical model of consciousness, he says, “replaces the traditional volley between the sensory cortex and the lateral PFC [prefrontal cortex] with a more complex anatomical arrangement consisting of a hierarchy of structures, each of which creates different kinds of states that are re-represented/re-described by circuits of sub-granular and granular PFC and that contribute to higher-order mental modeling and conscious experience. The states that constitute the functional features of the multi-state hierarchical higher-order theory of consciousness, and the brain areas that are associated with these states, include primary lower-order states (areas of the sensory cortex); secondary lower-order states (memory areas and other convergence zones in the temporal and parietal lobes); sub-higher-order states (meso-cortical areas of sub-granular PFC, including the anterior cingulate, orbital, ventromedial, prelimbic, and insula PFC); and higher-order states that re-represent/re-describe/index the various other states to construct mental models in working memory (granular PFC)” (LeDoux, 2023a, p. 234).
LeDoux’s second overarching perspective is the dual mental hypothesis that shows the interplay between preconscious and conscious states and the role of narratives in driving them. In the dual mental-model hypothesis, he says, “explicit consciousness of complex events emerges from interactions between granular and sub-granular PFC states. Lower-order non-PFC states, while often involved as inputs to the PFC, are not necessary for such higher-order conscious experiences. In other words, a thought, which is a higher-order state constructed by a pre-conscious mental model, is sufficient to populate the conscious higher-order state via the second mental model.” The output of the conscious mental model, he says, “much like the output of the pre-conscious mental model, is an abstract mentalese narrative (albeit a conscious one) that feeds distributaries flowing to motor circuits that control overt behavior and verbal expression.” LeDoux senses that “this implies that we have conscious agency, which you may know of as free will”—adding, “the question of whether we actually make conscious choices is a matter of debate” (LeDoux, 2023a, pp. 296–297).
9.8.6. Humphrey’s mental representations and brain attractors
Neuropsychologist Nicholas Humphrey employs an evolutionary framework, combining mental representations with what he calls “attractor states in the brain,” to develop a novel materialistic theory of phenomenal consciousness, which he sees as a late and not ubiquitous evolutionary development. His multi-discipline argument follows (Section: Humphrey, 2022, 2023a, 2023b, 2024).
Sensations, he says, are ideas we generate: mental representations of stimuli arriving at our sense organs and how they affect us. Their properties are to be explained, therefore, not literally as the properties of brain-states, but rather as the properties of mind-states dreamed up by the brain. Remarkably, we (and presumably other sentient creatures) represent what’s happening as having “phenomenal properties”, or “qualia”, that fill the “thick time” of the subjective present. The result is we come to have a psychologically impressive sense of self—a “phenomenal self” that is semi-independent of our physical bodies. This idea of “what it’s like to be me” may be in some respects “fake news”; but Humphrey’s point is that, to us as the subjects, it’s big news!
When it comes to how sensations are generated in the brain, Humphrey points out this has to be a two-stage process: first the gathering of sensory information, which is the sensory text, then the interpretation of this information, which is the conscious reading. This two-stage process generates our subjective take on what this is like for us. Phenomenal properties arise only at the interpretative stage. This, Humphrey stresses, is “a point often lost on researchers looking for the neural correlates of consciousness, who assume the properties of the brain activity must map onto the phenomenal properties of conscious experience.” He calls the hard problem “the wrong problem” (Humphrey, 2022).
Humphrey believes that our best approach to explaining sentience (which is how he labels phenomenal consciousness) will be “forward engineering”—reconstructing the steps by which natural selection could have invented it. He proposes that sensations originated in primitive animals as evaluative responses to stimulation at the body surface. Thus, sensations started out as something the animal did about the stimulation rather than something it felt about it. Early on, however, animals hit on the trick of monitoring these responses—by means of an “efference copy” of the command signals—to yield a simple representation of what the stimulation is about. In short, a feeling (Humphrey, 2023a, Humphrey, 2023b).
Humphrey’s story quickens, as that feeling became privatised, resulting in activity in neural feedback loops, which became recursive and stretched out in time, taking on complex higher-order properties. It was then refined and stabilised to generate mathematically complex attractor states, which would give rise—“out of the blue”—to the apparently unaccountable qualities of sensory qualia. Quite possibly, he says, phenomenal experience involves the brain generating something like an internal text, which it interprets as being about phenomenal properties. The driving force behind these later developments was the adaptive benefits to the animal of the emergence of the phenomenal self.
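Because "attractor states" carries real weight in this account, a generic toy illustration of the term may help: in a simple feedback dynamic, trajectories started from different points settle onto the same stable state. This unpacks only the general notion of an attractor and is not Humphrey's model.

```python
# Generic illustration of an attractor state in a simple dynamical system.
# Trajectories from different starting points converge to the same fixed point.

def step(x: float, rate: float = 0.5, target: float = 1.0) -> float:
    """One update of a toy feedback loop that relaxes toward `target`."""
    return x + rate * (target - x)

for start in (-2.0, 0.0, 3.0):
    x = start
    for _ in range(20):
        x = step(x)
    print(f"start={start:+.1f} -> settles near {x:.4f}")
# All trajectories settle near 1.0: the attractor of this feedback dynamic.
```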
This is why Humphrey takes phenomenal consciousness as a relatively late evolutionary invention, having evolved only in animal species that (a) have brains capable of entertaining and enjoying these fancy mental representations, and (b) lead lives in which having this bold sense of self can give them an edge in the fitness game. Thus, Humphrey challenges conventional wisdom that phenomenal consciousness in the animal kingdom is a gradient; his “hunch” is that only mammals and birds make the cut. Chimpanzees, dogs, parrots have it. Lobsters, lizards, frogs do not (Humphrey, 2023a, Humphrey, 2023b).
9.8.7. Metzinger’s no-self representational theory of subjectivity
Philosopher Thomas Metzinger presents a representationalist and functional analysis of subjectivity, the consciously experienced first-person perspective (Metzinger, 2004). What has been traditionally called “conscious thought,” he argues, is actually “a subpersonal process, and only rarely a form of mental action. The paradigmatic, standard form of conscious thought is non-agentive, because it lacks veto-control and involves an unnoticed loss of epistemic agency and goal-directed causal self-determination at the level of mental content.” Conceptually, Metzinger states, “conscious thought … must be described as an unintentional form of inner behaviour” (Metzinger, 2015).
A starting assumption is that phenomenal consciousness (subjective experience), “rather than being an epiphenomenon, has a causal role in the optimisation of certain human behaviours” (Frith and Metzinger, 2016). A leitmotif of Metzinger’s models is that there are no such things as “selves”; selves do not exist in the world: “nobody ever had or was a self.” All that exists, he argues, are “phenomenal selves, as they appear in conscious experience. The phenomenal self, however, is not a thing but an ongoing process; it is the content of a ‘transparent self-model’” (Metzinger, 2004).
Metzinger employs empirical research to support his deflationary no-self model, showing how “we are not mentally autonomous subjects for about two thirds of our conscious lifetime, because while conscious cognition is unfolding, it often cannot be inhibited, suspended, or terminated.” This means that “the instantiation of a stable first-person perspective as well as of certain necessary conditions of personhood turn out to be rare, graded, and dynamically variable properties of human beings” (Metzinger, 2015).
Drawing on a large psychometric study of meditators in 57 countries—more than 500 experiential reports—Metzinger focuses on “pure awareness” in meditation—the simplest form of experience there is—to illuminate, as he puts it, “the most fundamental aspects of how consciousness, the brain, and illusions of self all interact.” Metzinger explores “the increasingly non-egoic experiences of silence, wakefulness, and clarity, of bodiless body-experience, ego-dissolution, and nondual awareness” in order to assemble “what it would take to arrive at a minimal model explanation for conscious experience and create a genuine culture of consciousness” (Metzinger, 2024).
Metzinger uses an interdisciplinary, multi-layer analysis of phenomenological, representationalist, informational-computational, functional, and physical-neurobiological kinds of descriptions. His representationalist theory analyzes its target properties—those aspects of the domain to be explained. He seeks to make progress “by describing conscious systems as representational systems and conscious states as representational states” (Metzinger, 2000). He argues that “individual representational events only become part of a personal-level process by being functionally integrated into a specific form of transparent conscious self-representation, the ‘epistemic agent model’ (EAM).” The EAM, he suspects, “may be the true origin of our consciously experienced first-person perspective” (Metzinger, 2015).
Metzinger’s resolution of the mind-body problem follows directly: our Cartesian intuitions that subjective experiences, phenomenal consciousness, “can never be reductively explained are themselves ultimately rooted in the deeper representational structure of our conscious minds” (Metzinger, 2004).
A corollary of Metzinger’s work concerns individual behavior and collective culture, based on our perception of the experience of being an agent that causes events in the world and the belief that we “could have done otherwise” (the test of libertarian free will). This experience and belief enable us “to justify our behaviour to ourselves and to others and, in the longer term, create a cultural narrative about responsibility.” Metzinger concludes that “conscious experience is necessary for optimizing flexible intrapersonal interactions and for the emergence of cumulative culture” (Frith and Metzinger, 2016).
9.8.8. Jackson’s diaphanous representationalism and the knowledge argument
Philosopher Frank Jackson develops a representationalist view about perceptual experience. “That experience is diaphanous (or transparent) is a thesis about the phenomenology of perceptual experience. It is the thesis that the properties that make an experience the kind of experience it is are properties of the object of experience.” In other words, “accessing the nature of the experience itself is nothing other than accessing the properties of its object” (Jackson, 2007).
Jackson uses his Diaphanous Representationalism theory to undermine his own prior argument against materialism/physicalism based on the famous thought experiment of Mary the brilliant neurophysiologist who is forced to investigate the world from a black and white room via a black and white television monitor, and who acquires all the physical information there is to obtain about what goes on when we see colors. “What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false” (Jackson, 1982).
Jackson argues that “although the diaphanousness thesis alone does not entail representationalism, the thesis supports an inference from a weaker to a stronger version of representationalism. On the weak version, perceptual experience is essentially representational. On the strong version, how an experience represents things as being exhausts its experiential nature.” This means that there is nothing else needed to bring about phenomenal consciousness (qualia). Hence, according to Jackson, “strong representationalism undermines the claim that Mary learns new truths when she leaves the room”—which would defeat the defeater of materialism/physicalism (Jackson, 2007).
Philosopher Torin Alter disagrees, arguing that representationalism provides no basis for rejecting the knowledge argument, because even if representational character exhausts phenomenal character, “the physicalist must still face a representationalist version of the Mary challenge, which inherits the difficulty of the original” (Alter, 2003).
9.8.9. Lycan’s homuncular functionalism
Philosopher William Lycan defends a materialist, representational theory of mind that he calls “homuncular functionalism” and which posits that “human beings are ‘functionally organized information-processing systems’ who have no non-physical parts or properties.” Lycan does recognize “the subjective phenomenal qualities of mental states and events, and an important sense in which mind is ‘over and above’ mere chemical matter” (Lycan, 1987). But he defends materialism in general and functionalist theories of mind in particular by arguing for what he calls the “hegemony of representation,” in that “there is no more to mind or consciousness than can be accounted for in terms of intentionality, functional organization, and in particular, second-order representation of one’s own mental states” (Lycan, 1996).
Reviewing “an explosion of work” in consciousness studies by philosophers, psychologists, and neuroscientists, Lycan is “struck by an astonishing diversity of topics that have gone under the heading of ‘consciousness’”—he lists more than 15, only six of which, he says, deal with “phenomenal experience,” that is, qualia and the explanatory gap. From this he draws “two morals.” First, he says, “no one should claim that problems of phenomenal experience have been solved by any purely cognitive or neuroscientific theory.” (Here Lycan finds himself in “surprising agreement with Chalmers.”) Second and perhaps more importantly, he says, some of “the theories cannot fairly be criticized for failing to illuminate problems of phenomenal experience”—because that is not what they intend to do, that is, “they may be theories of, say, awareness or of privileged access, not theories of qualia or of subjectivity or of ‘what it’s like’” (Lycan, 2004).
Lycan defends “the Representational theory of the qualitative features of apparent phenomenal objects: When you see a (real) ripe banana and there is a corresponding yellow patch in your visual field, the yellowness ‘of’ the patch is, like the banana itself, a representatum, an intentional object of the experience. The experience represents the banana and it represents the yellowness of the banana, and the latter yellowness is all the yellowness that is involved; there is no mental patch that is itself yellow. If you were only hallucinating a banana, the unreal banana would still be a representatum, but now an intentional inexistent; and so would be its yellowness. The yellowness would be as it is even though the banana were not real” (Lycan, 2004).
Lycan agrees that the “explanatory gap” is real. But this is for two reasons, he argues, “neither of which embarrasses materialism.” First, he says, “phenomenal information and facts of ‘what it’s like’ are ineffable. But one cannot explain what one cannot express in the first place. (The existence of ineffable facts is no embarrassment to science or to materialism, so long as they are fine-grained ‘facts,’ incorporating modes of presentation.)” Second, he says, “the Gap is not confined to consciousness in any sense or even to mind; there are many kinds of intrinsically perspectival (fine-grained) facts that cannot be explained” (without first conceding a pre-existing identity) (Lycan, 2004).
In their review, Thomas Polger and Owen Flanagan describe Lycan’s view as, roughly, that “conscious beings are hierarchically composed intentional systems, whose representational powers are to be understood in terms of their biological function.” They call the view “teleological functionalism” or “teleofunctionalism” and state “the homuncular part, for which Lycan and Daniel Dennett argued convincingly, is now so widely accepted that it fails to distinguish Lycan’s view from other versions of functionalism. This, by itself, is a testament to the importance of Lycan’s work” (Polger and Flanagan, 2001).
In his review, Frank Jackson explains that when Lycan argues “there is no special problem for physicalism raised by conscious experience,” he is rightly distinguishing two questions. “Does consciousness per se raise a problem? And: Do qualia pose a special problem?” Lycan answers the first question on consciousness by defending an “inner sense account of consciousness,” holding that “consciousness is the functioning of internal attention mechanisms directed at lower-order psychological states and events.” Jackson is less satisfied by Lycan’s rejection of the knowledge argument, which Jackson calls “the most forceful way of raising the problem posed by qualia for physicalism.” (Jackson says this “as someone who no longer accepts the argument”) (Jackson, 1997).
According to Jackson, Lycan is confident that phenomenal nature is exhausted by functional role. In other words, “for Lycan, it is very hard for functional nature to fail to exhaust phenomenal nature. Almost anything you might cite as escaping the functional net is, by his lights, functional after all.” Moreover, Lycan has “the nature of conscious experience exhausted by the intentional contents or representational nature of the relevant kinds of mental states” in that “the representational facts which make up a package [are] sufficient to capture in full the perceptual experience” (Jackson, 1997).
Lycan attacks neurobiological conventional wisdom in that “all too often we hear it suggested that advances in neuroscience will solve Thomas Nagel’s and Frank Jackson’s conceptual problem of ‘knowing what it’s like.’” To Lycan, “this is grievously confused. For Nagel’s and Jackson’s claim is precisely that there is an irreducible kind of phenomenal knowledge that cannot be revealed by science of any kind. Nagel’s and Jackson’s respective ‘Knowledge Arguments’ for this radical thesis are purely philosophical; they contain no premises that depend on scientific fact.” Lycan now presses his sharp point. “Either the arguments are unsound or they are sound. If they are unsound, then so far as has been shown, there is no such irreducible knowledge, and neither science nor anything else is needed to produce it. But if the arguments are sound, they show that no amount of science could possibly help to produce the special phenomenal knowledge. Either way, neither neuroscience nor any other science is pertinent.”
Lycan seems sure that the “what it’s like to be” and knowledge arguments are unsound and he can go about formulating his Representational theory of mind standing squarely in the materialist camp. (I am not so sure. It is my uncertainty that motivates this Landscape of Consciousness.)
9.8.10. Transparency theory
Transparency theory makes the argument that because sensory (e.g., visual) experience represents external objects and their apparent properties, experience has no other properties that pose problems for materialism. We “see right through” perceptual states to external objects and take no notice that we are actually in perceptual states; the properties we perceive in perception are attributed to the objects themselves, not to the perception (Lycan, 2019). If we look at a tree and try to turn our attention to the intrinsic features of our visual experience, the only features there to turn our attention to are features of the actual tree itself, including relational features of the tree from the perspective of the perceiver (Harman, 1990).
To make the argument, at a minimum, an additional premise is needed: If a perceptual state has mental properties over and above its representational properties, they must be “introspectible.” But “not even the most determined introspection ever reveals any such additional properties.” This is the transparency thesis proper (Lycan, 2019).
Philosopher Amy Kind cites experiential transparency as a major motivation driving representational theories of consciousness, which view phenomenal character as being reduced to intentional content. Assuming experience is transparent in that we “look right through” experience to the objects of that experience, “this is supposed to support the representationalist claim that there are no intrinsic aspects of our experience” (Kind, 2010).
Philosopher Michael Tye states that one important motivation for the theory that “phenomenal character is one and the same as representational content” is “the so-called ‘transparency of experience.’” He addresses introspective awareness of experience and one problem case for transparency, that of blurry vision (Tye, 2002). A similar theory is “intentionalism,” the view that the phenomenal character of experience supervenes on intentional content (Pace, 2007).
Philosopher Dirk Franken characterizes “the transparency of appearing” as follows: “The phenomenal quality of a particular state of appearing is fully exhausted by the sensible properties present to the subject of the state and their distribution over the respective field of appearance.” Starting “from the assumption that the transparency of appearing is a purely phenomenological feature,” Franken describes his “Transparency Thesis” with several propositions: “There are no other properties, next to the sensible properties, that have any bearing on the phenomenal quality of a state of appearing. The presentation of sensible properties is just all there is to the phenomenal quality of a state of appearing. No properties of the subject (insofar as it is the subject of this state) or of the state itself contribute to this phenomenal quality.” He defends “surprising consequences” of the Transparency Thesis. First, “one has to give up the idea of the first-person-perspective as a kind of inner seeming or appearing directed onto mental states (at least, if the relevant states are states of appearing).” Next, two assumptions entailed in numerous popular accounts of phenomenal consciousness are negated: (i) “phenomenal qualities are properties of states of appearing that are independent or partly independent of the (sensible) properties presented in these states; ” and (ii) “there can be phenomenally conscious states of appearing even though there is nothing that is presented to their subjects” (Franken, n.d.).
9.8.11. Tye’s contingentism
Philosopher Michael Tye proposes a theory of consciousness he calls “contingentism,” which is a kind of identity theory (i.e., phenomenal states and physical/brain states are literally the same) but with a novel twist: while the identity is indeed true in our world, it is not metaphysically true in all possible worlds. “Scenarios in which the relevant physical processing is present and consciousness is missing are easily imaginable (and thus metaphysically possible), but this is irrelevant if it is only a contingent fact that consciousness is a physical phenomenon” (Tye, 2023).
Contingentism, Tye states, “finds its origins in the views of Feigl, Place and Smart in the 1950s and 1960s. These philosophers held that sensations are contingently identical with brain processes, where sensations are understood to be conscious states such as pain or the visual experience of red.” The identity here was taken to be contingent, in part, because “it was taken to be clear that scientific type-type identities generally are contingent.” Smart’s example was that he could imagine that lightning is not an electrical discharge. (These claims are mistaken, Tye says; “If in actual fact lightning is an electrical discharge, it could not have been otherwise.”) (Tye, 2023).
Tye says, “the contingentist about consciousness agrees with the above remarks concerning lightning and is happy to extend them to many other scientific identity statements. But the contingentist holds that the case of conscious mental states—states such that there is something it is like to undergo them—is different. Here the claim is not that such states are contingently identical with brain processes, but that such states are contingently identical with physical states of some sort or other, where the notion of a physical state is to be understood broadly to include not only neurophysiological states but also other states that are grounded in microphysical states, including functional states or states of the sort posited by representationalism, for example. For conscious states, the identities are contingent since we can easily imagine their having not obtained. For example, we can easily imagine a zombie undergoing the physical state with which the experience of fear is to be identified and yet not experiencing fear at all. Similarly, we can easily imagine someone experiencing fear without undergoing the given physical state” (Tye, 2023).
The solution, Tye suggests, “lies with the realization that it is a mistake to model the consciousness case on that of physical-physical relationships. Qualitative character Q is identical with physical property R, if physicalism is true. But this is a contingent identity (even though the designators ‘Q’ and ‘R’ are rigid). So, we can imagine Q without R (and R without Q), but the fact that we can do so is not an indicator of an explanatory gap. A creature could indeed have been in a state having Q without being in a state having R and vice-versa” (Tye, 2023).
Might things have been different in the actual world? Indeed, they might, Tye says. “The physical processing might have gone on just as it does, the information processing might have been just the same, the cognitive machinery might have functioned as it does, and yet along with all of this, Q might not have been present in experience. That is certainly intelligible to us. But it creates no explanatory puzzle; for that is only a metaphysically possible world. It is not the actual world. As far as the actual world goes, there is nothing puzzling or problematic, nothing left to explain … No mystery remains” (Tye, 2023).
This is because “in the actual world,” consciousness is physical, according to the physicalist, “since it is only on the hypothesis of physicalism with respect to the actual world that problems of emergence and causal efficacy can be handled satisfactorily, or so the physicalist believes.”
Thus, Tye concludes, “once we become contingentists, the hard problem has a straightforward and satisfying solution.”
In support of his views, Tye turns to “vagueness” in assessing consciousness in the hierarchical taxonomy of life and in the process of evolution (Tye, 2021). According to Tye, “The two dominant theories of consciousness argue it appeared in living beings either suddenly, or gradually. Both theories face problems. The solution is the realization that a foundational consciousness was always here, yet varying conscious states were not, and appeared gradually.” Although it is hardly obvious how to discern which organisms are conscious, and, if so, their kind or level of consciousness, Tye holds that borderline cases of consciousness make no sense. As David Papineau reviews Tye, “But this isn’t because a sharp line is found somewhere as we move from non-conscious physical systems to conscious ones. Rather [according to Tye] it’s because no such line exists at all. Even the most basic constituents of physical reality are already endowed with consciousness” (Papineau, 2022). Thus, Tye transitions from his traditional physicalism to a form of panpsychism, though one differing from those of mainstream panpsychists (13).
In admirable full disclosure, Tye states that his contingentism “is written from the perspective of the reductive physicalist (understood broadly to include functionalists and representationalists),” and that he believes contingentism presents “the best hope for a defense of reductive physicalism.” However, he adds, “I myself am no longer a thoroughgoing reductive physicalist. I now believe that there is an element in our consciousness that cannot be captured via higher level reductions” (Tye, 2023).
In addition, Tye suggests that, from the representationalist perspective and supporting its views, “history matters crucially to phenomenology. What it is like for an individual at a given time is fixed not just by what is going on in the individual at that time but also by what was going on in the individual in the past. Two individuals can be exactly alike intrinsically at a time and yet differ in the phenomenal character of their mental life at that time” (Tye, 2019).
Tye concludes that “once we think of experiences in a representationalist and broadly reductionist way,” we can better appreciate phenomenology, including its presence or absence, such as in thought experiments where “a person slowly acquires a silicon chip brain” (see Virtual Immortality, 25).
9.8.12. Thagard’s neural representation, binding, coherence, competition
Philosopher Paul Thagard poses big questions upfront. “Why do people have conscious experiences that include perceptions such as seeing, sensations such as pain, emotions such as joy, and abstract thoughts such as self-reflection? Why is consciousness central to so much of human life, including dreams, laughter, music, religion, sports, morality, and romance? Are such experiences also possessed by other animals, plants, and robots?” (Thagard, 2024).
Thagard’s theory of consciousness “attributes conscious experiences to interactions of four brain mechanisms: neural representation, binding, coherence, and competition.” It distinguishes itself from current theories in several respects, he says. “The four brain mechanisms described are empirically plausible and clearly stated. Conscious experiences emerge from their interactions in areas across the brain.” The mechanisms, he argues, “explain not only ordinary perceptual experiences such as vision, but also the most complex kinds of conscious experience including self-valuation, dreams, humor, and religious awe.” Moreover, he adds, “A crucial but often neglected aspect of consciousness is timing, but the four mechanisms fit perfectly with recent neuroscientific findings about how time cells enable brains to track experiences” (Thagard, 2024).
Thagard founds his theory on strict, empirically based neuroscience. His way of thinking is exemplified by his “Attribution Procedure,” an eight-step process for using what he calls “explanatory coherence” as a touchstone to establish “whether or not an animal or machine has a mental state, property, or process” (Thagard, 2021, pp. 13–14). For example, he offers twelve features of intelligence (i.e., problem solving, learning, understanding, reasoning, perceiving, planning, deciding, abstracting, creating, feeling, acting, communicating) and eight mechanisms to explain these features (i.e., images, concepts, rules, analogies, emotions, language, intentional action, consciousness). “All eight of these mental mechanisms can be carried out by a common set of neural mechanisms, many of which have been modeled computationally.” This account of twelve features and eight mechanisms, Thagard says, “yields a twenty-item checklist for assessing intelligence in bots and beasts.” He applies a similar way of thinking to consciousness, stating that consciousness results from competition among neural representations (Thagard, 2021, pp. 3–4, 50, 49).
Claiming that his theory of consciousness possesses “the accuracy and breadth of application to mark a solid advance in the grand task of explaining how and why consciousness is so central to human life,” Thagard highlights: an empirically supported explanation of consciousness resulting from the four brain mechanisms (i.e., neural representation, binding, coherence, and competition); application to a broad range of conscious experiences including smell, hunger, loneliness, self-awareness, religious experience, sports performance, and romantic chemistry; use of these four brain mechanisms to generate novel theories of dreaming, humor, and musical experience; a new theory of time consciousness; and assessment of consciousness in non-human animals and machines, including the new generative AI models such as ChatGPT (Thagard, 2024).
Working together, these four brain mechanisms, Thagard says, “explain the full range of consciousness in humans and other animals, and show why plants, bacteria, and ordinary things lack consciousness.” No current computers are conscious, he asserts, using a checklist of features and mechanisms of consciousness, “but the new generative models in artificial intelligence have similar mechanisms to humans that might enable some degree of consciousness.” He concludes with high physicalist confidence: “Consciousness does not need to be a mystery once we understand how brains build it” (Thagard, 2024).
9.8.13. T. Clark’s content hypothesis
Philosopher Thomas Clark posits phenomenal consciousness as the representational content of a cognitive system’s sufficiently structured representational processing (Clark, T., 2019). Conscious experience exists only for the conscious system, so is categorically subjective, and its basic elements are irreducibly qualitative. As a general rule, he says, we don’t find representational content in the world it participates in representing, which can help explain subjectivity. Moreover, following Metzinger’s concept of an “untranscendable object,” a representational system must have epistemic primitives that resist further representation on pain of a metabolically expensive representational regress. This can help explain the non-decomposable, monadic character of basic sensory qualities such as red, sweet, pain, etc. Developments in the science of representation and representational content, he says, may (or may not) vindicate the Content Hypothesis. Clark says that his model is consistent with Integrated Information Theory, Global Workspace Theory, and Predictive Processing, all of which involve representation (Clark, T., 2019, 2024).
Clark, a proponent of naturalism as a worldview (Clark, T., 2007), believes that a materialist can see that “consciousness, as a strictly physical phenomenon instantiated by the brain, creates a world subjectively immune to its own disappearance … it is the very finitude of a self-reflective cognitive system that bars it from witnessing its own beginning or ending, and hence prevents there being, for it, any condition other than existing” (Clark, T., 1994). While this sounds odd, almost an oxymoron, Clark develops the idea of “generic subjective continuity” based on a thought experiment inspired by the work of philosopher Derek Parfit. Clark argues that at death we shouldn’t anticipate the onset of nothingness or oblivion—a common secular intuition—but rather the continuation of experience, just not in the context of the person who dies. The end of one’s own consciousness, he offers, “is only an event, and its non-existence a current fact, from other perspectives.” After death we won’t experience non-being, he says, we won’t ‘fade to black’. Rather, as conscious beings we continue “as the generic subjectivity that always finds itself here, in the various contexts of awareness that the physical universe manages to create” (Clark, T., 1994).
9.8.14. Deacon’s symbolic communication (human consciousness)
Neuroanthropologist Terrence Deacon asserts that symbolic communication has radically altered the nature of human consciousness, whereas consciousness broadly is coextensive with the development of brains in animals that regulate their movement with the aid of long-distance senses, such as vision, because of the predictive capacity this affords and requires. However, symbolic communication has given humans the capacity of being conscious of a virtual realm that has become untethered from physical contiguity and immediacy (Deacon, 1998, 2024).
Moreover, by virtue of the way that symbolic communication allows us indirect access to others’ thoughts and experiences, we have become a symbolically eusocial species that derives our personal identities and ability to think from a physically and temporally extended shared mentality. Some, he says, have referred to this structure as “Extended Mind.”
Deacon sees this symbolic mode of cognition as enabling the emergence of novel kinds of remembering and unprecedented forms of emotional experience, as well as unprecedented forms of value, such as ethical norms and aesthetic sense. This is also, he says, the source of our feeling of incompleteness and need to find Meaning.
9.9. Language relationships
Language Relationships discern connections, causal and other, between consciousness and language. Language obviously enriches the content of consciousness, perhaps provides a framework for human consciousness, but is there a deeper relationship? Does consciousness require language, in that if there is no language capability there can be no inner experience? Conversely, does language require consciousness, in that if there is no inner experience, there can be no language capability? (Note that while language does not generate theories of consciousness per se, it features in some and is rejected in others, both of which are worth exploring.)
Much depends on careful definitions. To take the consciousness-requires-language causal paradigm: if by consciousness we mean phenomenal consciousness, raw inner experience only, then the claim that language is required would limit phenomenal consciousness, inner experience, to human beings and would exclude all (or at least almost all) other animals. Argue this to a happy dog owner and you will confront an angry dog owner.
To take the language-requires-consciousness causal paradigm, with a definition of language sufficiently loose to subsume computer languages or communications between paramecia or signals between embryonic stem cells, consciousness would not be required.
The philosophical debate regarding whether language is necessary for consciousness has a long and meandering history. Many argue that consciousness does not at all require language; others, that consciousness is facilitated by language or even is not possible without it. A contemporary consensus is building around the idea that increasing levels of consciousness, ranging from unconsciousness to highly conscious reflective self-awareness, require increasing use of language. It would follow that language is not needed for pure phenomenal consciousness, a general state of awareness, or responding to external stimuli—such as in preverbal infants—but language would be needed for complex expressions of consciousness, like self-awareness, information integration, and metaconsciousness, which are based on language-powered capacities, especially inner speech (Ivory Research, 2019).
Because we sense that many animal species are conscious—much like we assume that other humans are conscious like we are conscious—and we know that language is much more restricted, to humans and, in a lesser sense, some other animals (e.g., primates, cetaceans, birds), this would seem to weaken the consciousness-language nexus. Moreover, language seems to be a much more recent evolutionary emergent than consciousness (Berwick and Chomsky, 2016).
Philosopher Rebecca Goldstein maintains that language does not exhaust all that there is in consciousness. She cites as evidence infants prior to or in the early stages of acquiring language, where “it’s clear how much consciousness goes on before there is language” (Goldstein, 2014).
Neuroscientist Colin Blakemore sees an intimate relationship between the structure of language and the high-level aspects of consciousness, especially consciousness of self, the consciousness of intention—“the concept that I am the helmsman of myself, carrying myself around the world, making decisions.” He calls the grammatical forms of language “intentional in their style” and argues that our conscious representation of self is a meta-representation of what’s really doing the work down below, and that the reason “our brains go to the trouble of building this false representation of how we really are is to implement and to support language” (Blakemore, 2012a).
Blakemore speculates that we don’t come pre-programmed to be conscious; that we learn to be conscious and our consciousness develops and changes over time. Recognizing that the term “consciousness” can refer to diverse forms of subjectivity, and that even a newborn baby has “a kind of brute awareness of the world, sensory experiences,” he suggests that the nature of subjectivity grows through individual experience and that the complexities of the internal representation of the self are mediated by language.
Experimental psychologist Jeremy Skipper hypothesizes that language, with an emphasis on inner speech, generates and sustains self-awareness, that is, higher-order consciousness. He develops a “HOLISTIC” model of the neurobiology of language, inner speech, and consciousness. It involves a “core” set of inner speech production regions that take on affective qualities, together with a largely unconscious dynamic “periphery” distributed throughout the whole brain. He claims that the “model constitutes a more parsimonious and complete account of the neural correlates of consciousness” (at least of self-consciousness) (Skipper, 2022).
Ned Block points to a related distinction between consciousness and cognition. Cognition doesn’t have to be linguistic, he says, because non-linguistic animals have some cognition. But then there are animals that seem to have little or no cognition, just perception. Block concludes, “We can see consciousness at its purest in perceptual consciousness, and it has nothing to do, or little to do, with language” (Block, 2014).
While the overwhelming contemporary consensus is that consciousness does not require language, human consciousness is obviously and fundamentally affected or even framed by language. We explore several approaches to the consciousness-language nexus.
9.9.1. Chomsky’s language and consciousness
Philosopher and linguist Noam Chomsky revolutionized the theory of language, and although language-related theory of consciousness has not been a focus of his contributions, its relevance remains. Chomsky famously posited linguistic capacity, especially syntactic knowledge, as at least partially innate and mostly (if not entirely) unique to human beings. Thus, language acquisition in all human children is somewhat instinctual and surprisingly rapid, conditioned by language-specific features of diverse languages. Chomsky labels this core set of inherited grammatical rules “universal grammar” and characterizes these inborn, subconscious capabilities as “deep structure”.
Does Chomsky’s universal grammar with its deep structure carry implications for consciousness? How does Chomsky approach the hard problem of phenomenal consciousness? His views are complex, not easily categorized (Section: Chomsky, 2022a, 2022b; Feser, 2010, 2022b).
Chomsky is an aggressive critic of behaviorism—it makes no sense, he says, to study internal phenomena by observing external manifestations. The study of language is entirely inconsistent with behaviorist principles. “Nothing there,” he says. To understand it, one must examine internal processes. Thus, the connection between the deep structure of language and the essence of consciousness.
Chomsky is also a critic of the hard problem, labeling it a “pseudo-problem.” Some questions, simply by their structure, are not real questions, he says, in that there is no logical way to answer them. His example question “Why do things happen?” cannot be answered in the general, while a similar-sounding question, say, “Why did this earthquake happen?” can be answered in the specific. Chomsky believes that the hard problem of consciousness is an example of the former and therefore is not a genuine question (while the “easy” problems of consciousness, discovering neural correlates, are examples of the latter).
Chomsky’s approach to consciousness is unorthodox: even though he commits to a materialist/physicalist ontology in which the mind is generated only in the brain, rather than deflating the ontological status of the mental, his contrarian position is to challenge the ontological status of the physical—arguing that science does not know what matter really is. To Chomsky, matter, not the mental, is the main mystery.
As Chomsky says, “The mind-body problem can be posed sensibly only insofar as we have a definite conception of body. If we have no such definite and fixed conception, we cannot ask whether some phenomena fall beyond its range” (Chomsky, 1987). Moreover, “The mind-body problem can therefore not even be formulated. The problem cannot be solved, because there is no clear way to state it. Unless someone proposes a definite concept of body, we cannot ask whether some phenomena exceed its bounds.”
As for clarifying the concept of the body, the physical, matter, Chomsky states, “the material world is whatever we discover it to be, with whatever properties it must be assumed to have for the purposes of explanatory theory. Any intelligible theory that offers genuine explanations and that can be assimilated to the core notions of physics becomes part of the theory of the material world, part of our account of body.”
To Chomsky, a mechanical model of the world, developed in early modern philosophy and inchoate science, could never account for aspects of the mental. Thus, while he understands Descartes’ motivation to postulate a separate, nonphysical “thinking substance,” he rejects Descartes’ classic dualism and trains his analytic guns on the mechanical model in particular and on matter in general.
Chomsky feels no pressure to devise his own theory of consciousness. If anything, he shuns grand solutions. “There seems to be no coherent doctrine of materialism and metaphysical naturalism, no issue of eliminativism, no mind-body problem” (Chomsky, 2020). In short, as Edward Feser notes, “if the problem has no clear content, neither do any of the solutions to it” (Feser, 2022b). Chomsky is content to allow science to do its work, advancing knowledge of the brain and of the mind, leaving to the future the construction of proper theories of consciousness irrespective of current notions of the physical and matter.
One may infer that Chomsky contemplates an expanded view of the physical, with matter having features now unknown, which then would “naturally” subsume the mental. (Note: Chomsky rejects panpsychism.) However, in an overarching sense, he remains unsure whether human beings have the capacity to solve what he believes are genuine mysteries about the nature of reality, but he is also unsure whether consciousness will prove to be an ultimate mystery.
9.9.2. Searle’s language and consciousness
To philosopher John Searle, language is crucial for consciousness, just as consciousness is crucial for language, because much of our consciousness is shaped by language and because the parts of language that are most important to us are precisely those that are conscious (Searle, 2014b).
Searle contrasts human and animal consciousness: “My dogs have a kind of consciousness which is incredibly rich. They can smell things I can’t smell and they have a kind of inner life that I don’t have, but all the same, there are all kinds of conscious experiences they simply cannot have. My doggy lying there may be thinking about chasing other dogs but he’s not thinking about doing his income tax or writing his next poem or figuring out how he’s going to have a better summer vacation next year.”
Searle stresses how language gives us enormous power in shaping consciousness. A favorite quotation is from the French philosopher La Rochefoucauld: “Very few people would ever fall in love if they never read about it.” Searle’s point is that language shapes experience; there are all kinds of experiences you just can’t have without language.
As for how language and consciousness articulated and developed over time, Searle envisions an evolutionary “boot-strapping effect.” It starts off with pre-linguistic consciousness, and then develops linguistic meaning and communication, which enrich consciousness. The result is an elaborate structure of language, which makes for a more elaborate structure of consciousness, which then enables you to enrich your language. There is a continuous reinforcing and compounding effect (Searle, 2014b).
Non-linguistic animals can’t do this, Searle continues: “My doggie can think somebody is at the door, but he cannot think I wish 17 people were at the door, or I hope we get more people at the door next week. Because to do that, he has got to be able to shuffle the symbols in a way that human beings can with their inner syntax.”
Although animals do not form or express their beliefs in a symbolic language, Searle attributes to them intentional states, and because intentional states require consciousness, it follows that consciousness does not require symbolic language. He cites as evidence that animals “correct their beliefs all the time on the basis of their perceptions” (Searle, 2002; Proust, 2003).
9.9.3. Koch’s consciousness does not depend on language
Neuroscientist Christof Koch asserts without ambiguity, “consciousness doesn’t depend on language,” and he offers vivid clinical cases of brain trauma or insult where language is obviously lost and consciousness is obviously retained. Koch is especially exercised by the claim that “only humans experience anything,” that other animals have no sentience, a belief he calls “preposterous, a remnant of an atavistic desire to be the one species of singular importance to the universe at large. Far more reasonable and compatible with all known facts is the assumption that we share the experience of life with all mammals” (Koch, 2019).
Koch recounts and rejects how “Many classical scholars assign to language the role of kingmaker when it comes to consciousness. That is, language use is thought to either directly enable consciousness or to be one of the signature behaviors associated with consciousness.” He concludes, “language contributes massively to the way we experience the world, in particular to our sense of the self as our narrative center in the past and present. But our basic experience of the world does not depend on it” (Koch, 2019).
9.9.4. Smith’s language as classifier of consciousness
Philosopher Barry Smith states that while we think of consciousness as “moments of experience,” the way we capture what’s similar or different in our experiences over time is via language. The “passing show,” he says, “gets assembled into larger, more meaningful groups when we use language to classify and categorize.” How do we do this? How do we connect up these bits of consciousness with something stable? How do we classify the world, not just our own experience, in ways communicable between experiencers? The answer is language, he says, which he calls a species-specific property of human beings. With language, we codify our own experience, represent the content of our own minds, and compare it with the contents of other minds (Smith, 2012).
Distinguishing consciousness from language, Smith tells of someone who lost all of their words for fruit and vegetables, and only those words. They could use language normally and they had conscious awareness of fruits and vegetables, but they could not use, pronounce or even recognize words for fruit and vegetables. “It’s as if a whole shelf of meanings had been taken away.”
Smith relates grades of consciousness to grades of language. One can lose the word for an object but can still recognize the object (a form of aphasia). Deeper, one can not only lose the word as a piece of sound representing an object, but also fail to recognize the object and so lose the whole meaning (a form of agnosia). He describes stroke patients who, for example, can’t use the word “glove”. “What is that?” “Can’t say.” Perhaps just the word is missing, because if they are asked, “Is there a glove on the table?”, they answer, “Yes.” But other stroke patients answer, “I’ve no idea.” And if you show them a glove and ask, “What’s this for?”, they say, “I don’t know, maybe it’s for keeping coins.”
Smith suggests that words are ways that our visual consciousness categorizes and structures the world. And perhaps a deeper loss of language can lead to a dissolution of the very categories that we use to classify our perceptual experiences. So, it’s not just that I can’t name or categorize some object, but without language the actual conscious experience of that object is radically different. If so, language is responsible, at least in part, for organizing consciousness (Smith, 2012).
9.9.5. Jaynes’s breakdown of the bicameral mind
Psychohistorian Julian Jaynes’s 1976 book, The Origin of Consciousness in the Breakdown of the Bicameral Mind, proposes that consciousness, particularly “the ability to introspect,” is a learned behavior rooted in language and culture and arises from metaphor; consciousness is neither innate nor fundamental. To Jaynes, language plays a central role in consciousness; language is “an organ of perception, not simply a means of communication” (Jaynes, 1976; Bicameral Mind, 2024).
Jaynes defines consciousness idiosyncratically by distinguishing it from sensory awareness and cognition; as such it more closely resembles “introspective consciousness,” as he calls it, than it does phenomenal consciousness, which is the target of this Landscape. Nonetheless, it is helpful to work through Jaynes’s definitions and arguments, clarifying how to avoid what could be confounding or muddled thinking about consciousness. While Jaynes’s consciousness is not phenomenal consciousness, his careful parsing of his definition gives insight into the subtleties of the parsing process. Moreover, appreciating the flow of Jaynes’s arguments as well as the substance of his claims sharpens our view of the entire Landscape.
In Jaynes’s words, “Consciousness is not a simple matter and it should not be spoken of as if it were.” He starts with what his consciousness is not. (i) Not the “many things that the nervous system does automatically for us. All the variety of perceptual constancies … all done without any help from introspective consciousness.” (ii) Not what he calls “preoptive” activities, such as how we sit, walk, move. “All these are done without consciousness, unless we decide to be conscious of them.” (iii) Not even speaking, where “the role of consciousness is more interpolative than any constant companion to my words.” Consciousness, he stresses, is not sense perception; it does not copy experience; it is not necessary for learning; it is not even necessary for thinking or reasoning; and it has only an arbitrary and functional location (Jaynes, 1987).
To Jaynes, consciousness, or what he refines as “subjective conscious mind,” is an analog of the real world. “It is built up with a vocabulary or lexical field whose terms are all metaphors or analogs of behavior in the physical world … It allows us to short-cut behavioral processes and arrive at more adequate decisions. Like mathematics, it is an operator rather than a thing or a repository. And it is intimately bound with volition and decision … Every word we use to refer to mental events is a metaphor or analog of something in the behavioral world” (Jaynes, 1987).
Jaynes says that the primary feature of his consciousness is an “associated spatial quality that, as a result of the language used to describe such psychological events, becomes, with constant repetition, this spatial quality of our consciousness or mind-space …. It is the space which you preoptively are introspecting on at this very moment.”
The second most important feature of Jaynes’s consciousness is the subject of the introspecting, the introspective “I”. Here Jaynes uses analogy, which differs from metaphor in that the similarity is between relationships rather than between things or actions. “As the body with its sense organs (referred to as I) is to physical seeing,” he says, “so there develops automatically an analog ‘I’ to relate to this mental kind of ‘seeing’ in mind-space.”
A third feature of Jaynes’s consciousness is narratization, “the analogic simulation of actual behavior.” Consciousness, he says, “is constantly fitting things into a story, putting a before and an after around any event.” Other features of Jaynes’s consciousness include: “concentration, the ‘inner’ analog of external perceptual attention; suppression, by which we stop being conscious of annoying thoughts, the analog of turning away from annoyances in the physical world; excerption, the analog of how we sense only one aspect of a thing at a time; and consilience, the analog of perceptual assimilation.” Jaynes’s “essential rule” is that “no operation goes on in consciousness that was not in behavior first. All of these are learned analogs of external behavior” (Jaynes, 1987).
Definition in hand, Jaynes asks, “When did all this ‘inner’ world begin?”, which he calls “the most important watershed in our discussion.”
Jaynes famously introduces the hypothesis of the “bicameral mind”, a non-conscious mentality supposedly prevalent in early humans that featured auditory hallucinations. He argued that relatively recent human ancestors, as late as the ancient Greeks, did not consider emotions and desires as stemming from their own minds but rather as the actions of external gods (Bicameral mentality, 2024).
Jaynes takes the oldest parts of the Iliad and asks, “Is there evidence of consciousness?” The answer, he thinks, is no. “People are not sitting down and making decisions. No one is. No one is introspecting. No one is even reminiscing. It is a very different kind of world” (Jaynes, 1987).
Who, then, makes the decisions? Whenever a significant choice is to be made, Jaynes suggests that “a voice comes in telling people what to do. These voices are always and immediately obeyed. These voices are called gods.” To Jaynes, this is the origin of gods. He regards them as “auditory hallucinations” similar to, although not the same as, “the voices heard by Joan of Arc or William Blake. Or similar to the voices that modern schizophrenics hear.”
Jaynes coins the term “bicameral mind” using the metaphor of a bicameral legislature. It simply means that human mentality at this time was in two parts, a decision-making part and a follower part, and neither part was conscious in the sense in which Jaynes has described it (above) (Jaynes, 1987).
The theory posits that the human mind once operated in a state in which cognitive functions were divided between one part of the brain which appears to be “speaking”, and a second part which listens and obeys—the bicameral mind—and that the breakdown of this division gave rise to consciousness in humans.
Jaynes supports his theory with historical texts and archaeological evidence. He places the origin of consciousness around the 2nd millennium BCE and suggests that the transition from the bicameral mind to consciousness was triggered by the breakdown of the bicameral system of society (Bicameral mentality, 2024).
Jaynes describes bicameral societies as “strict and stable hierarchies,” including bicameral theocracies, where “everything went like clockwork providing there was no real catastrophe or problem.” But such a system is precarious, especially as society grows in population and complexity, such that “given a time of social and political instability, bicamerality can break down like a house of cards.” Whereas all significant decisions previously had been based on the bicameral mind, after its breakdown, after the hallucinated voices no longer told people what to do, a new way of making decisions had to develop, which was a kind of proto-consciousness (Jaynes, 1987).
There is an obvious, perhaps tempting, neurobiological correlate: the two cerebral hemispheres, especially based on the pioneering split-brain research of Michael Gazzaniga and Roger Sperry, which explained functional brain lateralization and how the cerebral hemispheres communicate with one another. Jaynes puts it simply: “the right hemisphere was ‘talking’ to the left, and this was the bicameral mind” (Jaynes, 1987).
Although Jaynes’s physicalist, deflationary theory of consciousness continues to intrigue, it is not accepted by consciousness experts. Nevertheless, Jaynes’s ideas and arguments can inform our view of the Landscape.
9.9.6. Parrington’s language and tool-driven consciousness
Biologist John Parrington proposes that a qualitative leap in consciousness—“human self-conscious awareness”—occurred during human evolution as “our capacity for language and our ability to continually transform the world around us by designing and using tools” transformed our brains. His challenge is to distinguish human language and use of tools from analogous activities of animals, particularly other primates, as contemporary research uncovers more complex animal capacities (Parrington, 2023).
Regarding language, Parrington stresses the “highly distinctive feature of human language” as “an interconnected system of abstract symbols, linked together by grammar.” This is why, he says, “only human beings are able to use language to convey complex ideas like past, present and future, individual versus society, location in space and even more abstract concepts.” (Parrington, 2023, p. 22). He defends his view of human consciousness as language-dependent by stressing our capacity for “inner speech, or more generally inner symbols, as central to human thought” (Parrington, 2023, p. 55).
Regarding use of tools, Parrington argues that “tool use by other species tends to be both occasional and also very limited in the type of tools that are created. In contrast, a unique feature of our species is that practically all of our interactions with the world are through tools that we have created.” Moreover, “we are continually in a process of inventing new types of tools and technologies” (Parrington, 2023, p. 19).
Parrington’s theory focuses on human brains, which are “not just much bigger than those of other primates, but radically different in structure and function” (a claim that hangs on “radically”) (Parrington, 2023, p. 20). He references different brain regions, highlighting the cerebellum, long thought limited to coordinating repetitive movements but now shown to play a role in human creativity and imagination (Parrington, 2023, p. 47), and the prefrontal cortex, greatly expanded in humans, the locus of reasoning, planning, decision making, control of social behavior and some aspects of language, all of which relate to human uniqueness (Parrington, 2023, p. 126). He has brain waves of different frequencies conveying specific sensory signals and combining into a unified conscious whole, thus explaining how we bind different aspects of experience into a seamless whole (Parrington, 2023, p. 19).
Parrington argues that “the effect of language and other cultural tools” has transformed human consciousness, which “provides another level of binding.” This surely means, he says, that “our sense of self is not an illusion, but rather a very real phenomenon based on the binding role of brain waves and the extra element of unity based on conceptual thought” (Parrington, 2023, p. 147). Rejecting what he calls “outdated models of the brain as a hard-wired circuit diagram,” he argues that meaning is created within our heads through a dynamic interaction of oscillating brain waves.
Parrington believes that “in some ways” he has addressed the hard problem and “hopefully demonstrated that there is nothing magical about human consciousness” (Parrington, 2023, p. 196). He frames his theory, as he must, within an evolutionary context, seeking to explain inner speech, thought, and self-conscious awareness in terms of the evolved neural circuitry that undergirds these uniquely human capacities, especially as manifest in language and tools. Parrington’s goal, as Susan Blackmore puts it, is to develop “a material explanation of human consciousness”—and “he has done a great job of exploring material explanations of thought, perception, self-representation and behavioral control”—but none of this, Blackmore concludes, “gets at the deeper questions about subjective experience” (Blackmore, 2023).
9.10. Phylogenetic evolution
Phylogenetic Evolution, the phylogenetic evolution of consciousness, at first blush, is not a specific theory of consciousness per se. Rather, it is recruited as the mechanistic process for many (but not all) of the theories on the Landscape. Yet, is there a sense in which phylogenetic evolution can become a prime explanation in its own right?
Certainly, according to Dennett (9.10.1), LeDoux (9.10.2) and Ginsburg/Jablonka (9.10.3), consciousness exemplifies Theodosius Dobzhansky’s famous adage, “Nothing in biology makes sense except in the light of evolution” (Dobzhansky, 1973).
Neuroscientists and writers Ogi Ogas and Sai Gaddam present a step-by-step simulation of how evolution produced consciousness. It is a tale of eighteen “increasingly intelligent minds,” as they say, from the simple stimulus-response of microbes interacting with their environments to the limitless creativity of humankind (and beyond). Leveraging the “resonance” theories of Stephen Grossberg (9.4.2), their mentor, they tell a story of what each “new” mind could do that previous minds could not (Ogas and Gaddam, 2022).
To physicist Lawrence Krauss, “consciousness is a slippery quality because it exists on a spectrum in the evolutionary development of life that is very difficult to measure or quantify” (Krauss, 2023, p. 195). He stresses “the phenomenon of consciousness is the one area I know of in science where the forefront discussions seem to be made by philosophers equally as often as they are made by experimental cognitive scientists,” which, he says softly, is “an indication of a science in its early stages” (Krauss, 2023, pp. 193–194).
Amidst the surfeit of competing neurobiological theories, Krauss is most comfortable pursuing “the possible distinct evolutionary advantages that consciousness might endow humans with.” He follows the thread that “feelings emerged as ever more complex systems evolved to incorporate higher-order cognitive processing to issues of survival and homeostasis” (9.5.). Consciousness, through introspection, he says, “could build on the nervous system monitoring of basic internal body conditions to produce novel, rather than innate, survival strategies. The ability to use internal representations of goals, whether from cognitive maps or stored memories, to flexibly respond to the changing environmental conditions, was a huge evolutionary leap, and has been noted to probably exist only in some mammals and perhaps in birds” (Krauss, 2023, pp. 211–212).
Philosophers David Buller and Valarie Hardcastle offer an alternative to the strong evolutionary claim that “the mind contains ‘hundreds or thousands’ of ‘genetically specified’ modules, which are evolutionary adaptations for their cognitive functions.” They argue that “while the adult human mind/brain typically contains a degree of modularization, its ‘modules’ are neither genetically specified nor evolutionary adaptations. Rather, they result from the brain’s developmental plasticity, which allows environmental task demands a large role in shaping the brain’s information-processing structures.” They maintain that “the brain’s developmental plasticity is our fundamental psychological adaptation, and the ‘modules’ that result from it are adaptive responses to local conditions, not past evolutionary environments” (Buller and Hardcastle, 2000).
Questions remain. What creatures are conscious and to what degree? How low on the phylogenetic scale must one descend to wink out anything resembling human consciousness? For example, does an octopus have phenomenal consciousness? Philosopher (and scuba-diver) Peter Godfrey-Smith not only affirms octopus higher intelligence, he also traces the evolution of mental properties in the primordial seas, claiming that “evolution built minds not once but at least twice” (Godfrey-Smith, 2016).
Appreciating Godfrey-Smith’s work, Carlo Rovelli uses the “complex intellectual abilities” of octopuses as “a valuable case study” of consciousness. In recent decades, he observes, “the phrase ‘the problem of the nature of consciousness’ has taken the place of what in the past used to be the problem of the meaning of soul, spirit, subjectivity, intelligence, perception, understanding, existing in the first person, being aware of a self …” Consciousness is neurobiological, Rovelli asserts, and one way to tackle the issue is to observe our non-human cousins and even the octopus, an extremely distant relative. The octopus, he offers, “is the extraterrestrial that we have been looking for in order to study a possible independent realization of consciousness” (Carlo Rovelli on what we can learn from the octopus mind, 2020).
Raymond Tallis questions the entire enterprise of assuming “the [evolutionary] advantage of being a conscious organism rather than a self-replicating bag of chemicals innocent of its own existence.” His skeptical argument against “what seems like a no-brainer” is “not to start near the end of the story, with complex, sophisticated organisms such as higher mammals … [whose] life depends on conscious navigation through the world.” No, he says, “we must begin at the beginning: by asking, for example, what survival value is conferred on a photosensitive cell in virtue of its organism being aware of the light incident upon it. And the answer appears to be: ‘none.’” Tallis argues, “If there’s no reason to believe that the sentience of primitive organisms would give them an edge over the competition, there is no starting point for the evolutionary journey to the sophisticated consciousness we see in higher organisms like you and me.” The mystery of consciousness, he concludes, “remains intact” (18.4) (Tallis, 2023).
Most experts (scientists and philosophers who study the evolution of mind) support a gradual, incrementalist theory of mental development, much like Dennett, Godfrey-Smith, and Ogas/Gaddam. There are dissenting voices: for example, Nicholas Humphrey (9.8.6) and perhaps Noam Chomsky (9.9.1).
Here’s the point. In considering the multifarious theories on the Landscape of Consciousness, one should overlay each theory with its putative phylogenetic evolutionary development. Ask, “What was the process that brought it about?”
9.10.1. Dennett’s evolution of minds
Daniel Dennett delights us with the wondrous and sometimes counterintuitive power of evolution in the development of consciousness (or, more generally, “minds”), notably in his psychohistory journey, From Bacteria to Bach and Back: The Evolution of Minds (Dennett, 2017). Even if one doesn’t wholly subscribe to Dennett’s own explanations of consciousness (9.2.4)—which I don’t—everyone’s understanding of consciousness can be enriched by Dennett’s probative and insightful way of thinking (Dennett, 2007, 2023a, 2023b). Dennett describes evolution as a “universal acid” that “eats through just about every traditional concept, revolutionizing world-views” (Dennett, 1995).
“How come there are minds?” is Dennett’s big evolutionary question, “And how is it possible for minds to ask and answer this question?” His short answer is that “minds evolved and created thinking tools that eventually enabled minds to know how minds evolved, and even to know how these tools enabled them to know what minds are … We know there are bacteria; dogs don’t; dolphins don’t; chimpanzees don’t. Even bacteria don’t know there are bacteria. Our minds are different. It takes thinking tools to understand what bacteria are, and we’re the only species (so far) endowed with an elaborate kit of thinking tools” (Dennett, 2017).
Dennett reflects that he has been struggling through the “thickets and quagmires” of the mind question for over fifty years, and he has found a path, built on evolution, that “takes us all the way to a satisfactory—and satisfying—account of how the ‘magic’ of our minds is accomplished without any magic, but it is neither straight nor easy” (Dennett, 2017).
9.10.2. LeDoux’s deep roots of consciousness
Neuroscientist Joseph LeDoux argues that the key to understanding human consciousness and behavior lies in viewing evolution through the prism of the first living organisms. He tracks the evolutionary timeline to show how even the earliest single-cell organisms had to solve the same problems we and our cells have to solve, and how the evolution of nervous systems enhanced the ability of organisms to survive and thrive, eventually bringing about the emergence of consciousness (LeDoux, 2019).
Motivated by his long-standing interest in how organisms detect and respond to danger, LeDoux found in evolution the “deep roots” of human abilities, hence the “deep roots” of consciousness, which “can be traced back to the beginning of life.” LeDoux argues that what we have inherited from our long chain of biological ancestors is not a fear circuit but rather “a defensive survival circuit that detects threats, and in response, initiates defensive survival behaviours and supporting physiological adjustments.” Fear, on the other hand, from LeDoux’s perspective, is a recent expression of cortical cognitive circuits. Danger and survival have a deep history; consciousness, a shallower one (LeDoux, 2021).
9.10.3. Ginsburg and Jablonka’s associative learning during evolution
Neurobiologist Simona Ginsburg and evolutionary theorist Eva Jablonka propose that learning during evolution has been “the driving force” in the transition to basic or minimal consciousness. They identify the evolutionary marker as a complex form of associative learning, which they term “unlimited associative learning” and which “enables an organism to ascribe motivational value to a novel, compound, non-reflex-inducing stimulus or action, and [to] use it as the basis for future learning” (Ginsburg and Jablonka, 2019).
Associative learning, Ginsburg and Jablonka argue, “drove the Cambrian explosion and its massive diversification of organisms.” They suggest that “consciousness can take many forms and is found even in such animals as octopuses (who seem to express emotions by changing color) and bees (who socialize with other bees)” (Ginsburg and Jablonka, 2022). As for the evolutionary transition to human rationality, they propose “symbolic language as a similar type of marker” (Ginsburg and Jablonka, 2019).
9.10.4. Cleeremans and Tallon-Baudry’s phenomenal experience has functional value
Cleeremans and Tallon-Baudry propose that “subject-level experience—’What it feels like’—is endowed with intrinsic value, and it is precisely the value agents associate with their experiences that explains why they do certain things and avoid others.” Because experiences have value and guide behavior, they argue, “consciousness has a function” and that under “this hypothesis of ‘phenomenal worthiness’ … conscious agents ‘experience’ things and ‘care’ about those experiences” (Cleeremans and Tallon-Baudry, 2022).
The authors note that “the function of consciousness” has been “addressed mostly by philosophers,” yet “surprisingly few things have been written about [it] … in the neuroscientific or psychological literature.” The reason, they surmise, is the “classical view” that “subjective experience is a mere epiphenomenon that affords no functional advantage.” They reject such “consciousness inessentialism” by appealing to “how the concept of value has been approached in decision-making, emotion research and consciousness research” and by arguing that “phenomenal consciousness has intrinsic value”—such as it being “the central drive for the discovery and creation of new behaviours.” They conclude that consciousness “must have a function” (Cleeremans and Tallon-Baudry, 2022).
Under their hypothesis, “consciousness would have evolved and been selected because it adds an important degree of freedom to the machinery of reward-based behaviour: behaviour that seems purposeless from a purely functional perspective nevertheless has intrinsic value. But this, crucially, only holds when associated with conscious experience.” Phenomenal experience, they speculate, “might act as a mental currency of sorts, which not only endows conscious mental states with intrinsic value but also makes it possible for conscious agents to compare vastly different experiences in a common subject-centered space”—a feature, they claim, that “readily explains the fact that consciousness is ‘unified.’” They offer the “phenomenal worthiness hypothesis” as a way to make “the ‘hard problem’ of consciousness more tractable, since it can then be reduced to a problem about function”—an offering unlikely to persuade nonmaterialists (Cleeremans and Tallon-Baudry, 2022).
9.10.5. Andrew’s consciousness without complex brains
Philosopher Kristin Andrews, an expert on animal minds, argues that progress in consciousness studies has been hampered by prevailing conventional wisdom that for an organism to be conscious, a complex brain is required. She advocates moving “past a focus on complex mammalian brains to study the behavior of ‘simpler’ animals” (Andrews, 2023).
In forming her argument, Andrews rehearses how Crick and Koch helped turn consciousness studies into a real science by supposing that “higher mammals” possess some essential features of consciousness (9.2.2), by setting aside the still-common Cartesian view that language is needed for conscious experience, and by assuming that a nervous system is necessary for consciousness. She recruits the Cambridge Declaration on Consciousness, which states that “there is sufficient evidence to conclude that ‘all mammals and birds, and many other creatures, including octopuses’ experience conscious states.” The Declaration, she notes, identifies five consciousness markers (not all of which would be necessary): “homologous brain circuits; artificial stimulation of brain regions causing similar behaviours and emotional expressions in humans and other animals; neural circuits supporting behavioural/electrophysical states of attentiveness, sleep and decision-making; mirror self-recognition; and similar impacts of hallucinogenic drugs across species” (Andrews, 2023).
But Andrews posits that “emphasis on the neurological … may be holding the science back,” and that animal research suggests “multiple realizability—the view that mental capacities can be instantiated by very different physical systems.” If neuroscience looks only at slightly different physical systems (say, just other primates or even mammals), she says, “we may be overlooking the key piece to the consciousness puzzle.”
Andrews asks, “What might we learn if our anthropocentrism didn’t lead us to focus on the brain as the relevant part of physiology needed for consciousness, but instead led us to examine the behaviours that are associated with experiences?” She advocates studying “the nature of consciousness by looking at bees, octopuses and worms as research subjects. All these animals have a robust profile of behaviours that warrant the hypothesis that they are conscious. Moving away from painful stimuli, learning the location of desirable nutrients, and seeking out what is needed for reproduction is something we share widely with other animals.” By studying simple animals, she offers, we can simplify research on consciousness (Andrews, 2023).
Andrews likens studying consciousness to studying the origin of life on earth and searching for life on other planets. For each, there is only one confirmed instance. It’s the “N = 1 problem.” “If we study only one evolved instance of consciousness (our own),” she says, “we will be unable to disentangle the contingent and dispensable from the essential and indispensable.” She offers “good news” in that “consciousness science, unlike the search for extraterrestrial life, can break out of its N = 1 problem using other cases from our own planet.” Typically, consciousness scientists study other primates (e.g., macaque monkeys) and, to a lesser extent, other mammals, such as rats. “But the N = 1 problem still bites here. Because the common ancestor of the primates was very probably conscious, as indeed was the common ancestor of all mammals—we are still looking at the same evolved instance (just a different variant of it). To find independently evolved instances of consciousness, we really need to look to much more distant branches of the tree of life” (Andrews and Birch, 2023).
Andrews speculates that “sentience has evolved only three times: once in the arthropods (including crustaceans and insects), once in the cephalopods (including octopuses) and once in the vertebrates.” But she cannot rule out “the possibility that the last common ancestor of humans, bees and octopuses, which was a tiny worm-like creature that lived more than 500 million years ago, was itself sentient—and that therefore sentience has evolved only once on Earth.”
In either case, she argues, “If a marker-based approach does start pointing towards sentience being present in our worm-like last common ancestor, we would have evidence against current theories that rely on a close relationship between sentience and special brain regions adapted for integrating information, like the cerebral cortex in humans. We would have grounds to suspect that many features often said to be essential to sentience are actually dispensable” (Andrews and Birch, 2023). Conversely, it could mean that sentience is related to some unknown feature(s).
To Andrews, the philosophy of animal minds addresses profound questions about the nature of mind as they cut across animal cognition and philosophy of mind. Key topics include the evolution of consciousness, tool use in animals, animal culture, mental representation, belief, communication, theory of mind, animal ethics, and moral psychology (Andrews, 2020a). Andrews outlines “the scientific benefits of treating animals as sentient research participants who come from their own social contexts” (Andrews, 2020b).
Andrews concludes: “Just as Crick and Koch pushed back on the popular view of their time that language is needed for consciousness, today we should push back on the popular view of our time that a complex brain is needed for consciousness.” She also speculates: “If we recognize that our starting assumptions are open to revision and allow them to change with new scientific discoveries, we may find new puzzle pieces, making the hard problem a whole lot easier” (Andrews, 2023).
In essence, then, Andrews reverses the traditional “neurocentric” argument of consciousness. The common assumption is that consciousness is (somehow) related to the complexity of the nervous system; but because all neurobiological advances, taken collectively, have made scant progress in solving the hard problem, perhaps the common assumption is not correct and the generation of consciousness can be found outside the nervous system. Thus, rather than assuming that organisms without complex nervous systems cannot be conscious, perhaps a radical new approach might be to consider that these organisms are (in a way) conscious and to focus research on how such “lower” or “primitive” consciousness might come about.
Finally, regarding our current obsession with discerning AI sentience, Andrews claims that “without a deep understanding of the variety of animal minds on this planet, we will almost certainly fail” (Andrews and Birch, 2023).
Neuroscience/consciousness writer Annaka Harris goes further, questioning our potentially false but deeply ingrained intuition that “systems that act like us are conscious, and those that don’t are not.” Plants and philosophical zombies, she says, indicate that this human-centric intuition “has no real foundation” (A. Harris, 2019, 2020). Consciousness may not even require a brain (A. Harris, 2022).
9.10.6. Reber’s cellular basis of consciousness
Cognitive psychologist Arthur Reber dubs his theory of the origins of mind and consciousness the Cellular Basis of Consciousness (CBC), arguing that “sentience emerged with life itself.” He states, “The most primitive unicellular species of bacteria are conscious, though it is a sentience of a primitive kind. They have minds, though they are tiny and limited in scope.” He rejects the idea that “minds are computational and can be captured by an artificial intelligence.” He develops CBC using standard models of evolutionary biology, leveraging the “remarkable repertoire of single-celled species that micro- and cell-biologists have discovered … Bacteria, for example, have sophisticated sensory and perceptual systems, learn, form memories, make decisions based on information about their environment relative to internal metabolic states, communicate with each other, and even show a primitive form of altruism.” All such functions, Reber contends, “are indicators of sentience” (Reber, 2016, 2018).
Reber’s model is based on a simple, radical axiom: “Mind and consciousness are not unique features of human brains. They are grounded in inherent features present in simpler forms in virtually every species. Any organism with flexible cell walls, a sensitivity to its surrounds and the capacity for locomotion will possess the biological foundations of mind and consciousness.” In other words, “subjectivity is an inherent feature of particular kinds of organic form. Experiential states, including those denoted as ‘mind’ and ‘consciousness,’ are present in the most primitive species” (Reber, 2016).
Reber founds his model on several principles: “Complexity has its roots in simplicity. Evolution has a pyramidal schema. Older forms and functions lie at the base, the more recently evolved ones toward the zenith …. In virtue of the nature of pyramidal systems, the older structures and the behaviors and processes that utilize them will be relatively stable, showing less individual-to-individual and species-to-species variation. They will also, in virtue of their foundational status, be robust and less likely to be lost. Adaptive forms and functions are not jettisoned; they are modified and, if the selection processes are effective, they will become more complex and capable of greater behavioral and mental flexibility and power” (Reber, 2016).
Reber claims that his model has several conceptual and empirical virtues, among them: “(a) it (re)solves the problem of how minds are created by brains—the “Hard Problem”—by showing that the apparent difficulty results from a category error; (b) it redirects the search for the origins of mind from complex neural structures to foundational biomechanical ones; and (c) it reformulates the long-term research focus from looking for ‘miracle moments’ where a brain is suddenly capable of making a mind to discovering how complex and sophisticated cognitive, emotional and behavioral functions evolve from more primitive ones” (Reber, 2016).
In addressing the hard problem, Reber argues that the reason it looks “hard” is “because it assumes that there is some ‘added’ element that comes from having a mind.” However, he says, “from the CBC perspective the answer is easily expressed. Organisms have minds, or the precursors of what we from our philosophy of mind perspective think of as minds, because they are an inherent component of organic form. What gets ‘added’ isn’t ontologically novel; it’s a gradual accretion of functions that are layered over and interlock with pre-existing ones” (Reber, 2016).
In the CBC framework, “All experience is mental. All organisms that experience have minds, all have consciousness.” Reber contends that this way of thinking repositions the problem, from how brains create consciousness (i.e., the hard problem) to how all experience is consciousness. “Instead of trying to grasp the neuro-complexities in brains that give rise to minds, we can redirect the focus toward understanding how particular kinds of basic, primitive organic forms came to have the bio-sensitivity that is the foundation of subjectivity.” Reber recognizes that “this argument requires a commitment to a biological reductionism.” It would also undermine Functionalism (9.1.3) in that mental states would be “intrinsically hardware dependent” (Reber, 2016).
9.10.7. Feinberg and Mallatt’s ancient origins of consciousness
Neurologist/psychiatrist Todd Feinberg and evolutionary biologist Jon Mallatt propose that consciousness appeared much earlier in evolutionary history than is commonly assumed, and therefore all vertebrates and perhaps even some invertebrates are conscious. By assembling a list of the biological and neurobiological features that seem responsible for consciousness, and by juxtaposing the fossil record of evolution, the authors argue that about 520–560 million years ago, “the great ‘Cambrian explosion’ of animal diversity produced the first complex brains, which were accompanied by the first appearance of consciousness; simple reflexive behaviors evolved into a unified inner world of subjective experiences” (Feinberg and Mallatt, 2016).
Doing what they call “neuroevolution,” Feinberg and Mallatt put forth the even more unconventional idea that the origin of consciousness goes back to the origin of life, in that single-cell creatures respond to stimuli from the environment, whether attracted to food sources or repelled by harmful chemicals. The authors call this process “sensory consciousness” [but which others may call stimulus-response patterns unworthy of the “consciousness” appellation]. In addition, the cell membrane distinguishes self from non-self, which becomes another baby step on the long evolutionary journey to human consciousness. A crucial developmental step, they say, was the evolution of “hidden layers” of clusters of intermediary nerve cells that process and relay internal signals between sensory-input and motor-output nerve cells. Driven by evolutionary pressures, these clusters would go on to evolve into primitive and then more complex brains (Feinberg and Mallatt, 2016; Rose, 2017).
If indeed these were the historical facts, it would naturally follow that “all vertebrates are and have always been conscious—not just humans and other mammals, but also every fish, reptile, amphibian, and bird.” Moreover, Feinberg and Mallatt find that many invertebrates—arthropods (including insects and probably crustaceans) and cephalopods (including the octopus)—“meet many of the criteria for consciousness.” Their proposal—that “consciousness evolved simultaneously but independently in the first vertebrates and possibly arthropods more than half a billion years ago”—challenges standard-model theory. Combining evolutionary, neurobiological, and philosophical approaches enables Feinberg and Mallatt to count a broader group of animals as conscious, though it is less clear how their theory offers—as the marketing claims, the authors less so—“an original solution to the ‘hard problem’ of consciousness” (Feinberg and Mallatt, 2016).
9.10.8. Levin’s technological approach to mind everywhere
Developmental and synthetic biologist Michael Levin introduces “a framework for understanding and manipulating cognition in unconventional substrates,” which he calls “TAME—Technological Approach to Mind Everywhere.” He asserts that “novel embodied cognitive systems (otherwise known as minds) in a very wide variety of chimeric architectures combining evolved and designed material and software”—created via synthetic biology and bioengineering—“are disrupting familiar concepts in the philosophy of mind, and require new ways of thinking about and comparing truly diverse intelligences, whose composition and origin are not like any of the available natural model species.” TAME, Levin says, “formalizes a non-binary (continuous), empirically-based approach to strongly embodied agency,” and it “provides a natural way to think about animal sentience as an instance of collective intelligence of cell groups, arising from dynamics that manifest in similar ways in numerous other substrates” (Levin, 2022).
By focusing on cognitive function, not on phenomenal or access consciousness, Levin takes “TAME’s view of sentience as fundamentally tied to goal-directed activity,” noting carefully that only some aspects of this activity “can be studied via third-person approaches.” Provisionally, Levin suggests that consciousness “comes in degrees and kinds (is not binary),” for the same reasons he argues for continuity of cognition: “if consciousness is fundamentally embodied, the plasticity and gradual malleability of bodies suggest that it is a strong requirement for proponents of phase transitions to specify what kind of ‘atomic’ (not further divisible) bodily change makes for a qualitative shift in capacity consciousness” (Levin, 2022).
Although Levin takes the null or default hypothesis to be the relatively smooth continuity of consciousness across species and phylogenetically, he hedges that “the TAME framework is not incompatible with novel discoveries about sharp phase transitions.” He points to future, radical brain-computer interfaces in human patients as “perhaps one avenue where a subject undergoing such a change can convince themselves, and perhaps others, that a qualitative, not continuous, change in their consciousness had occurred.”
In a radical implication of TAME, Levin argues that “while ‘embodiment’ is critical for consciousness, it is not restricted to physical bodies acting in 3D space, but also includes perception-action systems working in all sorts of spaces.” This implies, he says, “counter to many people’s intuitions, that systems that operate in morphogenetic, transcriptional, and other spaces should also have some (if very minimal) degree of consciousness. This in turn suggests that an agent, such as a typical modern human, is really a patchwork of many diverse consciousnesses, only one of which is usually capable of verbally reporting its states (and, not surprisingly, given its limited access and self-boundary, believes itself to be a unitary, sole owner of the body).”
Levin remains “skeptical about being able to say anything definitive about consciousness per se (as distinct from correlates of consciousness) from a 3rd-person, objective perspective.” Yet, he muses, “The developmental approach to the emergence of consciousness on short, ontogenetic timescales complements the related question on phylogenetic timescales, and is likely to be a key component of mature theories in this field” (Levin, 2022).
9.10.9. No hard problem in William James’s psychology
Writer Tracy Witham argues that William James flipped the paradigm in which the hard problem arises, because James viewed consciousness through the lens of a problem he believed it solves: selecting for adaptive responses to specific environmental situations (James, 1890). Essentially, James believed that a brain complex enough to support a proliferation of options for responding to environmental situations is more likely to obscure than to identify the best option to use, unless that brain also has a selection mechanism for choosing adaptive over less, non-, and maladaptive options. But the question remains, Witham says, whether consciousness is, at least prima facie, a good fit to address what can be called “the selection problem.”
The hypothesis that underlies James’s view, she says, is that consciousness increases an organism’s fitness by “bringing … pressure to bear in favor of those of its performances which make for the most permanent interests of the brain’s owner …” (James, 1890, p. 140).
Specifically, the role James gave to consciousness must be understood only in the context of the formation of de facto ends, which he believed form when preferred sensations are recalled in their absence (James, 1890, p. 78). This context is crucial, because it is consciousness that confers the preferences for some sensations over others and thereby serves as the source of the ends. But to understand why James gave consciousness that role, Witham says we need to understand his two-word phrase, “cerebral reflex” (James, 1890, p. 80), which implies that a stimulus-and-response schema is the basis for the ends-and-means couplings that form cerebral reflexes. However, there is a problem with the implication: for this to work, ends must stand in for stimuli, arising in interactions between organisms and their environments.
The problem is solved, Witham says, if consciousness just is what it seems to be: the means by which we reflect on our interactions with our environments to sense whether the interactions are favorable or not. So, what consciousness seems to be fits James’s hypothesis perfectly, that its role is to “bring … pressure to bear [in favor of] those of our performances” that are adaptive. Reflective experience, in short, makes it possible to identify experiences of our environmental interactions that contain adaptive behaviors and retain them as cerebral reflexes for future use. But then, as the means to solve the selection problem, consciousness becomes an adaptive adaptation in the sense of being an adaptation selecting for adaptive behaviors. And it does so by being, indeed, what it seems to be: an adaptive adaptation that is a marvelous source of solutions, not a confounding source of problems.
The critical question, however, is whether a zombie-like black box of sufficient complexity could perform environmentally driven, fitness enhancing, evolutionarily successful activities, and if so, why then the radical advent of something so startlingly novel in the universe: inner experience? In other words, while the question of why consciousness was favored and selected by evolution is important, it is not the question of what consciousness actually is, which of course is the hard problem.
10. Non-reductive physicalism
Non-Reductive Physicalism takes consciousness to be entirely physical, solely the product of biological brains, but mental states or properties are irreducibly distinct from physical states or properties such that they cannot be entirely explained by physical laws, principles or discoveries (in brains or otherwise) (Macdonald and Macdonald, 2019).
Non-reductive Physicalism was, in part, a response to conceptual problems in the early identity theories of physicalism where mental properties or kinds were literally the same thing as physical properties or kinds. This was challenged by several conceptual conundrums: the multiple realizability of the same mental properties or kinds by different physical properties or kinds (Hilary Putnam); the intentional essence of mental phenomena, which seems so radically different from physical laws or things (Donald Davidson’s “Anomalous Monism,” 14.2); and the apparent unbridgeable gap between physics and the special sciences (Jerry Fodor) (Macdonald and Macdonald, 2019).
While mental states are generated entirely by physical states (of the brain), non-reductive physicalism maintains that they are truly other than physical; mental states are ontologically distinct.
This would seem to make Non-Reductive Physicalism a form of property dualism (15.1) in that both recognize real mental states and yet only one kind of substance, matter—but, as expected, some adherents of each reject the claims of the other. If Non-Reductive Physicalism is indeed a form of property dualism, it would be perhaps the predominant contemporary kind.
A core mechanism of Non-Reductive Physicalism is emergence, where novel properties at higher levels of integration are not discernible (and perhaps not even predictable, ever) from all-you-can-know at lower or more fundamental levels. A prime feature of Non-Reductive Physicalism is often “top-down causation,” where the content of consciousness is causally efficacious—qualia can do real work (contra Epiphenomenalism, 9.1.2).
Some Christian philosophers, such as Nancey Murphy (10.2), who seek greater consonance between contemporary science and the Christian faith, look to Non-Reductive Physicalism as a nondualistic account of the human person. It does not consider the “soul” an entity separable from the body, such that scientific statements about the physical nature of human beings would be referring to exactly the same entity as theological statements concerning the spiritual nature of human beings (Brown et al., 1998). The structure of Non-Reductive Physicalism is said to enhance the Judeo-Christian concept of “resurrection of the dead” as opposed to what is said to be the non-Judeo-Christian doctrine of an “immortal soul” (Van Inwagen, 1995).
On the other hand, Christian philosopher J.P. Moreland takes dualism to be “the clear teaching of Scripture” that “overwhelmingly sets forth a dichotomy of soul and body” and he decries those Christian thinkers who deny this conclusion, especially adherents of Non-Reductive Physicalism (Moreland, 2014).
Philosopher Jaegwon Kim’s objections to Non-Reductive Physicalism, based on causal closure and overdetermination, highlight its three principles: the irreducibility of the mental to the physical; some version of mental-physical supervenience; and the causal efficaciousness of mental states. The problem, according to Kim, is that when these three commitments are combined, an inconsistency is generated that entails the causal impotence of mental properties (Kim, 2024).
I’ve always been puzzled by Non-Reductive Physicalism in that I can well understand how, under physicalism, consciousness is non-reductive in practice, but how non-reductive in principle? Conversely, if indeed consciousness is in principle non-reductive—impossible for science ever to explain how it works in terms of fundamental physical constituents—it would seem to require the ontological reality of non-physical properties (at least by current boundaries), which would seem to embed a contradiction. Or else, by what mechanisms could such higher-level non-reducible “laws” work? Perhaps by something analogous to quantum fields but operating at higher levels? Occam is sharpening his Razor.
10.1. Ellis’s strong emergence and top-down causation
Mathematical physicist George Ellis approaches consciousness by combining non-reductionist strong emergence and top-down causation in the context of “possibility spaces” (Ellis, 2017a). While he calls consciousness “the biggest unsolved problem in science,” he sees the larger vision that consciousness transforms the nature of existence itself such that existence is quite different than it might have been had there been only nonconscious matter (Ellis, 2006).
Ellis begins with four kinds of entities, or “Worlds,” whose existence requires explanation: matter and forces, consciousness, physical and biological possibilities, and mathematical reality. An adequate explanation of what exists, he says, must encompass all four kinds of entities, in two forms: generic forms of the kinds of entities that might exist, and specific instantiations of some of these possibilities that actually occur or have occurred in the real universe. The first are possibilities, and the second are actualizations of those possibilities (Ellis, 2015).
“Possibility spaces,” then, show what is and what is not possible for entities of whatever kind we are discussing. For example, the possibility space for classical physics is all possible states of the system; for quantum physics, the state spaces for the system wave function are Hilbert spaces.
For consciousness, possibility spaces include separate subspaces for all possible thoughts, all possible qualia, all possible emotions—each with its own character. Ellis says, “The rationale is always the same: if these aspects of consciousness occur, then it is possible that they occur; and that possibility was there long before they ever occurred, and so is an abstract feature of the universe. The physical existence of brains enables their potential existence to be actualized” (Ellis, 2015).
Ellis embeds his theory of consciousness in the presence and power of strong emergence, where properties of a system are impossible to predict in terms of the properties of its constituents, even in principle; and of top-down causation, where higher hierarchical levels exert causal force on lower levels, even though the higher levels are composed only of the lower levels. Strong emergence, according to Ellis, works throughout the physical world, particularly in biology where the whole is more than just the sum of its parts (Ellis, 2017b, 2019).
He explains that “emergence is possible because downward causation takes place right down to the lower physical levels, hence, arguments from the alleged causal completeness of physics and supervenience are wrong. Lower levels, including the underlying physical levels, are conscripted to higher level purposes; the higher levels are thereby causally effective, so strong emergence occurs. No violation of physical laws is implied. The key point is that outcomes of universally applicable generic physical laws depend on the context when applied in specific real world biological situations … including the brain” (Ellis, 2019).
Continuing to focus on emergence and downward causation, Ellis “considers how a classification of causal effects as comprising efficient, formal, material, and final causation can provide a useful understanding of how emergence takes place in biology and technology, with formal, material, and final causation all including cases of downward causation; they each occur in both synchronic and diachronic forms.” Taken together, he says, the four causal effects “underlie why all emergent levels in the hierarchy of emergence have causal powers (which is Noble’s principle of biological relativity) and so why causal closure only occurs when the upward and downward interactions between all emergent levels are taken into account, contra to claims that some underlying physics level is by itself causality complete.” A key feature, Ellis adds, is that “stochasticity at the molecular level plays an important role in enabling agency to emerge, underlying the possibility of final causation occurring in these contexts” (Ellis, 2023).
Ellis’s two points here, if veridical and representing reality, would have extraordinary impact on theories of consciousness, and the two bear repeating: (i) emergence has causal powers at all levels in biology, and (ii) top-down causation as well as bottom-up causation is necessary for causal closure. At once, almost every Materialism Theory—maybe every Materialism Theory (more than 90 at last count)—would be shown insufficient to explain consciousness (even if one or more were still necessary to do so).
Ellis highlights questions that he claims reductionists cannot answer: “Reductionists cannot answer why strong emergence (unitary, branching, and logical) is possible, and in particular why abstract entities such as thoughts and social agreements can have causal powers. The reason why they cannot answer these questions is that they do not take into account the prevalence of downward causation in the world, which in fact occurs in physics, biology, the mind, and society” (Ellis 2017b, 2019).
David Chalmers distinguishes strong downward causation from weak downward causation. “With strong downward causation, the causal impact of a high-level phenomenon on low-level processes is not deducible even in principle from initial conditions and low-level laws. With weak downward causation, the causal impact of the high-level phenomenon is deducible in principle, but is nevertheless unexpected. As with strong and weak emergence, both strong and weak downward causation are interesting in their own right. But strong downward causation would have more radical consequences for our understanding of nature.” However, Chalmers concludes, “I do not know whether there is any strong downward causation, but it seems to me that if there is any strong downward causation, quantum mechanics is the most likely locus for it … The question remains wide open, however, as to whether or not strong downward causation exists” (Chalmers, 2008).
10.2. Murphy’s non-reductive physicalism
Christian philosopher Nancey Murphy, reflecting increasing Christian scholarship calling for acceptance of physicalism, argues that the theological workability of physicalism depends on the success of an argument against reductionism. She takes Non-Reductive Physicalism, a common term in philosophy of mind, to “signal opposition to anthropological dualisms of body and either mind or soul, as well as to physicalist accounts that reduce humans to nothing but complex animals.” She sets herself the task of showing that “non-reductive physicalism is philosophically defensible, compatible with mainstream cognitive neuroscience, and is also acceptable biblically and theologically”—a task made more difficult because she must be able to explain “how Christians for centuries could have been wrong in believing dualism to be biblical teaching” (Murphy, 2017, 2018).
To Murphy, part of the answer lies in translation. She focuses on the Septuagint, a Greek translation of the Hebrew scriptures that dates from around 250 BC. This text translated Hebrew terminology into Greek, and “it then contained terms that, in the minds of Christians influenced by Greek philosophy, referred to constituent parts of humans. Later Christians have obligingly read and translated them in this way.” A key instance, she says, is “the Hebrew word nephesh, which was translated as psyche in the Septuagint and later into English as ‘soul’ … In most cases the Hebrew or Greek term is taken simply to be a way of referring to the whole living person” (Murphy, 2018).
Murphy is impressed by how many capacities or faculties of the soul, as attributed by Thomas Aquinas, are now well explained by cognitive science and neurobiology. She is moved by “localization studies—that is, research indicating not only that the brain is involved in specific mental operations, but that very specific regions are.”
That gives her the physicalism—the easy part, I’d say. What about the non-reductive—the hard part?
An obvious answer to the problem of neurobiological reductionism, Murphy says, would be the presence and power of downward causation or whole-part causation. That is, if causal reductionism is the thesis that all causation is from part to whole, then the complementary alternative causation would be from whole to part. If we describe a more complex system, such as an organism, as a higher-level system than the simple sum of its biological parts, then causal reductionism is bottom-up causation, and the alternative, causal anti-reductionism, or causal non-reductionism, is top-down or downward causation (Murphy, 2017).
To support Non-reductive Physicalism by undermining reductionist determinism, Murphy recruits contemporary concepts in systems theory, such as chaos theory, non-linear dynamics, complex adaptive systems, systems probabilities, and systems biology. Thus, Murphy posits, an understanding of downward causation in complex systems allows for the defeat of neurobiological reductionism.
Finally, Murphy muses that “non-reductive physicalism, while it is the term most often used in philosophy, is perhaps not the best for purposes of Christian anthropology, because, at least by connotation, it places disproportionate stress on the aspect of our physicality.” She quotes theologian Veli-Matti Kärkkäinen in proposing a replacement: “multi-aspect monism” (Murphy, 2018).
10.3. Van Inwagen’s Christian materialism and the resurrection of the dead
Christian philosopher/metaphysician Peter van Inwagen combines a wholly materialist ontology of the human person (Van Inwagen, 2007a) with a committed belief in the resurrection of the dead as the Christian hope of eternal life. His thesis is that “dualism is a Greek import into Christianity and that the Christian resurrection of the dead does not presuppose dualism” (Van Inwagen, 1995, 2007b).
He states, “Most Christians seem to have a picture of the afterlife that can without too much unfairness be described as ‘Platonic.’ When one dies, one’s body decays, and what one is, what one has been all along, an immaterial soul or mind or self, continues to exist”—a picture and a doctrine that Van Inwagen finds “unsatisfactory, both as a Christian and as a philosopher” (Van Inwagen, 1995).
He reflects, “when I enter most deeply into that which I call myself, I seem to discover that I am a living animal. And, therefore, dualism seems to me to be an unnecessarily complicated theory about my nature unless there is some fact or phenomenon or aspect of the world that dualism deals with better than materialism does” (which he does not find). As for the argument from phenomenal consciousness, he admits, “It is a mystery how a material thing could have sensuous properties [phenomenal consciousness],” but then retorts, “simply and solely because it is a mystery how anything could.”
Van Inwagen rejects dualism biblically as well as philosophically. After examining biblical texts in the Old Testament, Van Inwagen finds “little to support dualism in the Old Testament, and much that the materialist will find congenial.” His analysis of New Testament texts requires more elaborate (some may say more convoluted) exegesis: “twisting and turning, impaled on intransigent texts,” in Van Inwagen’s own self-deprecating words. Examples include Jesus’s parable of the “Rich Man” and his words to the “Good Thief” on the cross (“Today you shall be with me in Paradise”). Moreover, Paul’s repeated representation of death as “sleep” cannot be discounted.
An important philosophical argument for Christian dualism, Van Inwagen says, is that the doctrine of the Resurrection of the Dead seems to presuppose dualism. “For if I am not something immaterial, if I am a living animal, then death must be the end of me. If I am a living animal, then I am a material object. If I am a material object, then I am the mereological sum of certain atoms. But if I am the mereological sum of certain atoms today, it is clear from what we know about the metabolisms of living things that I was not the sum of those same atoms a year ago” (Van Inwagen, 1995).
For the materialist who believes in the biblical resurrection of the dead as a literal future event, as Van Inwagen does, the fact that the atoms of which we are composed are in continuous flux is a “stumbling block.” He asks, “How shall even omnipotence bring me back—me, whose former atoms are now spread pretty evenly throughout the biosphere?” This question does not confront the dualist, who will say that there is no need to bring me back because I have never left. But what shall the materialist say? (Van Inwagen, 1995).
Van Inwagen challenges Divine power: “For what can even omnipotence do but reassemble? What else is there to do? And reassembly is not enough, for I have been composed of different atoms at different times.” This leads to the conundrum of myriad duplicates.
In the end, Van Inwagen concludes, “there would seem to be no way around the following requirement: if I am a material thing, then, if a man who lives at some time in the future is to be I, there will have to be some sort of material and causal continuity between this matter that composes me now and the matter that will then compose that man.” Van Inwagen finds this requirement looking very much like Paul’s description of the resurrection: “when I die, the power of God will somehow preserve something of my present being, a gumnos kókkos [bare/naked grain/kernel], which will continue to exist throughout the interval between my death and my resurrection and will, at the general resurrection, be clothed in a festal garment of new flesh” (Van Inwagen, 1995).
While van Inwagen would be the first to admit that, “oddly enough,” few Christian dualists have been persuaded by his arguments against a Christian immortal soul, I (for one) consider his arguments probative, disruptive, and insightful (if not dispositive) (Van Inwagen, 2007b).
10.4. Nagasawa’s nontheoretical physicalism
Philosopher Yujin Nagasawa interrelates central debates in philosophy of mind (phenomenal consciousness) and philosophy of religion (existence of God) to construct a unique metaphysical thesis, which he calls “nontheoretical physicalism,” by which he claims that although this world is entirely physical, there are physical facts that cannot be captured even by complete theories of the physical sciences (Nagasawa, 2008). This is no defense of traditional Non-Reductive Physicalism, but it is consistent with some of its distinguishing features.
Nagasawa’s unique methodology, moving from epistemology to ontology, draws heretofore unrecognized parallels between fundamental arguments in philosophy of mind and philosophy of religion, using in the former the Knowledge Argument that Mary, in her black-and-white room, cannot know what it is like to see color, and in the latter atheistic arguments that God, owing to his perfections, cannot know what it is like to be evil or limited. From what Nagasawa takes as the failures of traditional arguments against physicalism, yet while still rejecting a physicalist approach to phenomenal consciousness, he constructs his “nontheoretical physicalism” (Nagasawa, 2023).
What Nagasawa means by “nontheoretical” is an explanation of physicalism that is entity-based, not theory-based, which is consistent with his view that even complete and final physical theories cannot explain all of reality (Nagasawa, 2008).
10.5. Sanfey’s Abstract Realism
Medical doctor John Sanfey’s Abstract Realism (AR) claims to bridge the mind-matter explanatory gap with two arguments suggesting a complementarity between first and third-person perspectives, with each perspective containing an equivalent observer function. The first argument posits that science must use abstract devices integrating past and future moments of continuous time that reflect first-person perception. The second argument tackles the hard problem by examining phenomenal simultaneity, where no time separates experiencer from experienced (Sanfey, 2023).
In “something it is like to experience redness,” the experiencer knows they are not simultaneously causing the redness; one cannot consciously cause something without being conscious of doing so, obviously. But an intelligent system not experiencing conscious presence cannot be certain it is not causing what it perceives, because its observing self must reside in the same physical systems that may or may not be producing illusions. This suggests, to Sanfey, that experiencing presence is sufficient to create logical possibilities such as disembodied mind or idealism. Rooted in phenomenal simultaneity, these causal mechanics of consciousness are unobservable in principle, he says, making consciousness indistinguishable from strong emergence. Proven causal power means that consciousness can be produced by physical systems, even synthetic ones, without introducing new physics. (In Sanfey’s AR, the brain generates consciousness when two information systems, two electromagnetic fields [9.3], interact bi-directionally, causally, and with sufficient complexity such that one is the observing reference for the other.) (Sanfey, 2023).
Simultaneous causation cannot happen, but experiential simultaneity is certain, and with causal power, consciousness can be integrated with physics within a Non-Reductive Physicalism paradigm—without appealing to psycho-identity, panpsychism, idealism, or reductive physicalism. Matter, defined as that which behaves according to physical laws independently of conscious mind, is always either a sensory or conceptual model, a complementarity of first and third-person perspectives, each containing an equivalent observer function (Sanfey, 2023).
10.6. Northoff’s non-reductive neurophilosophy
Northoff frames his views on consciousness (1.2.12) as “non-reductive neurophilosophy,” which, he says, is “primarily a methodological approach,” a particular strategy that takes into account “certain phenomena which otherwise would remain outside our scope [consciousness studies].” He deems “the link of conceptual models and ontological theories with empirical data to be key in providing insight into brain-mind connection and its subjectivity” (Northoff, 2022).
Paraphrasing Kant, Northoff says that “brain data without brain-mind models are blind, brain-mind models without brain data are empty.” Thus, Northoff has non-reductive neurophilosophy allowing for “a systematic and bilateral connection of theoretical concepts and empirical data, of philosophy and neuroscience.” His emphasis is on “systematic,” by providing and defining “different steps in how to link concepts and facts in a valid way without reducing the one to the respective other.” Taken in such sense, Northoff considers non-reductive neurophilosophy “a methodological strategy of analyzing the relationship of concepts and facts just like there are specific methods of logical analyses in philosophy and empirical data analysis in neuroscience.” In other words, “non-reductive neurophilosophy is a methodological tool at the interface of philosophy and neuroscience. As such it can be applied to problems in both philosophy and neuroscience” (Northoff, 2022).
11. Quantum theories
Quantum theories of consciousness take seriously the idea that quantum mechanics plays a necessary, if not sufficient, role in the specific generation of phenomenal consciousness in certain physical entities like brains—beyond the general application of quantum mechanics to all physical entities. The kinds of quantum theories or models on offer differ radically.
Philosopher of science Paavo Pylkkänen explores whether the dynamical and holistic features of conscious experience might reflect “the dynamic and holistic quantum physical processes associated with the brain that may underlie (and make possible) the more mechanistic neurophysiological processes that contemporary cognitive neuroscience is measuring.” If so, he says, “these macroscopic processes would be a kind of shadow, or amplification of the results of quantum processes at a deeper (pre-spatial or ‘implicate’) level where our minds and conscious experience essentially live and unfold.” At the very least, Pylkkänen says, “a quantum perspective will help a ‘classical’ consciousness theorist to become better aware of some of the hidden assumptions in his or her approach.” What quantum theory is all about, he stresses, is “learning, on the basis of scientific experiments, to question the ‘obvious’ truths about the nature of the physical world and to come up with more coherent alternatives” (Pylkkänen, 2018).
There is certainly growing interest in the putative quantum-consciousness nexus. For example, Quantum and Consciousness Revisited, whose papers are the product of two conferences, presents various philosophical approaches to quantum paradoxes, including further considerations of the Copenhagen Interpretation and alternatives, with implications for consciousness studies, mathematics, and biology. Topics include observation and measurement; collapse of the wave function; and time and gravity. All the papers, the editors write, “reopen the questions of consciousness and meaning which occupied the minds of the early thinkers of quantum physics” (Kafatos et al., 2024).
In his technical review article, “Quantum Approaches to Consciousness,” theoretical physicist Harald Atmanspacher describes three basic approaches to the question of whether quantum theory can help understand consciousness: (1) consciousness as manifestation of quantum processes in the brain, (2) quantum concepts elucidating consciousness without referring to brain activity, and (3) matter and consciousness as dual aspects of one underlying reality (Atmanspacher, 2020a).
For example, one approach considers how quantum field theory can describe why and how classical behavior emerges at the level of brain activity. The relevant brain states themselves are properly considered as classical states. The idea, Atmanspacher says, is “similar to a classical thermodynamical description arising from quantum statistical mechanics,” and works “to identify different regimes of stable behavior (phases, attractors) and transitions between them. This way, quantum field theory provides formal elements from which a standard classical description of brain activity can be inferred” (Atmanspacher, 2020a).
Atmanspacher reports applications of quantum concepts to mental processes, focusing on complementarity, entanglement, dispersive states, and non-Boolean logic. These involve quantum-inspired concepts to address purely mental (psychological or cognitive) phenomena, without claiming that actual quantum mechanics is necessary to make it work. This includes research groups studying quantum ideas in cognition (Patra, 2019). While the term “quantum cognition” has gained acceptance, Atmanspacher says that a more appropriate characterization would be “non-commutative structures in cognition,” and he questions whether it is “necessarily true that quantum features in psychology imply quantum physics in the brain?” (Atmanspacher, 2020a).
After reviewing major quantum theories of consciousness (several are discussed below), Atmanspacher suggests that progress is more likely made by investigating “mental quantum features without focusing on associated brain activity” (at least to begin with). Ultimately, he says, “mind-matter entanglement is conceived as the hypothetical origin of mind-matter correlations. This exhibits the highly speculative picture of a fundamentally holistic, psychophysically neutral level of reality from which correlated mental and material domains emerge” (Atmanspacher’s Dual-Aspect Monism, 14.7.).
To position quantum theories of consciousness, consider each as representing one of two forms: (i) quantum processes, similar to those in diverse areas of biology (e.g., photosynthesis), that uniquely empower or enable the special activities of cells, primarily neurons, to generate consciousness; and (ii) the more radical claim that these two great mysteries, consciousness and quantum theory, are so intimately connected that they can be solved only together.
Physicist Carlo Rovelli disagrees. Consciousness and quantum mechanics, he says, have no special, intimate relationship. With respect to quantum mechanics, Rovelli says, “Consciousness never played a role … except for some fringe speculations that I do not believe have any solid ground. The notion of ‘observer’ should not be misunderstood. In quantum physics parlance an ‘observer’ can be a detector, a screen, or even a stone. Anything that is affected by a process. It does not need to be conscious, or human, or living, or anything of the sort” (Rovelli, 2022).
Philosopher of physics David Wallace sees “potentially intriguing connections between consciousness and quantum mechanics, tied partly to the idea that traditional formulations of quantum mechanics seem to give a role to measurement or observation—and, well, what is that?” He says, “the natural hypothesis is that measurement or observation is conscious perception,” which somehow implies “a role of a conscious observer.” This would be “extremely suggestive for connecting the two”—consciousness and quantum mechanics—“but you can connect them in a lot of ways.” Some, Wallace says, might try to explain consciousness reductionistically in terms of quantum mechanical processes. But, “In my view, that works no better than explaining consciousness in terms of classical processes.” However, “Another way is not try to reduce consciousness, but find roles for consciousness in quantum mechanics. That’s one of the big questions about consciousness. What does it do? What is it here for? How can it affect the physical world? So, I’m at least taking seriously the idea that maybe consciousness plays a potential role in quantum mechanics. It’s a version of the traditional idea that consciousness collapses the wave function. It’s not an especially popular idea among physicists these days, partly because it takes consciousness as fundamental—but if, like me, you think there are independent reasons to do that, then I think it’s an avenue worth looking at” (Wallace, 2016b).
Chalmers and McQueen readdress the question of whether consciousness collapses the quantum wave function. Noting that this idea was taken seriously by John von Neumann and Eugene Wigner but is now widely dismissed, they develop the idea by combining a mathematical theory of consciousness (Integrated Information Theory, 12) with an account of quantum collapse dynamics (continuous spontaneous localization). In principle, versions of the theory can be tested by experiments with quantum computers. The upshot is not that consciousness-collapse interpretations are clearly correct, but that there is a research program here worth exploring (Chalmers and McQueen, 2022).
Physicist Tim Palmer argues that our capacity for counterfactual thinking (pondering alternative worlds where things happen differently), which is both an exercise in imagination and a key prediction of quantum mechanics, suggests that “our brains are able to ponder how things could have been because in essence they are quantum computers, accessing information from alternative worlds” (he recruits the Many Worlds Interpretation of quantum mechanics). Consciousness (along with understanding and free will), he states, “involves appealing to counterfactual worlds” and thus “quantum computing is the key to consciousness” (Palmer, 2023).
At the very least, for quantum processing to play a content or informational role in the brain it would require some mechanism that stores and transports quantum information in qubits for sufficiently long, macroscopic times. Moreover, the mechanism would need to entangle vast numbers of qubits, and then that entanglement would need to be translated into higher-level chemistry in order to influence how neurons trigger action potentials (Ouellette, 2016). Experiments with anesthetics and brain organoids hint that quantum effects in the brain may be in some way involved in consciousness (Musser, 2024).
Although most physicists and neuroscientists have not taken quantum theories of consciousness seriously, such theories are proliferating, becoming more sophisticated and mainstream, and are increasingly backed up by claims of experimental evidence. Personally, I started out an incorrigible, utter skeptic about quantum consciousness; I’m still a skeptic, though no longer so incorrigible, no longer so utter.
11.1. Penrose-Hameroff’s orchestrated objective reduction
Penrose-Hameroff’s quantum consciousness, which they call Orchestrated Objective Reduction (Orch OR), is the claim that consciousness arises in the fundamental gap between the quantum and classical worlds. Formulated by mathematician and Nobel laureate Roger Penrose (Penrose, 1996, 2014, 2023) and developed by anesthesiologist Stuart Hameroff (Hameroff, 2014a, 2014b), the theory holds that consciousness is non-computational, yet still explained by the physics of neurons, albeit a physics distinct from and broader than that which we currently understand.
Penrose claims that only a non-computational physical process could explain consciousness. He is not saying that consciousness is beyond physics, rather that it is beyond today’s physics. “Conscious thinking can’t be described entirely by the physics that we know,” Penrose said, explaining that he “needed something that had a hope of being non-computational.”36 He focuses on “the main gap in physics”: the tension between the continuous, deterministic evolution given by the Schrödinger equation in quantum mechanics and the discrete, probabilistic outcomes observed when measurements are made at the classical level—“how rules like Schrödinger’s cat being dead and alive at the same time in quantum mechanics do not apply at the classical level” (Penrose, 2014, 2023).
Penrose argues that the missing physics that describes how the quantum world becomes the classical world “is the only place where you could have non-computational activity.” But he admits that it’s “a tall order” to sustain quantum information in the hot, wet brain, because “whenever quantum systems become entangled with the environment, ‘environmental decoherence’ occurs and information is lost.”
“Quantum mechanics acting incoherently is not useful [to account for consciousness],” Penrose explains; “it has to act coherently. That’s why we call [our mechanism] ‘Orch OR’, or ‘orchestrated objective reduction’—the ‘OR’ stands for objective reduction, which is where the quantum state collapses to one alternative or another, and ‘Orch’ stands for orchestrated. The whole system must be orchestrated, or organized, in some global way, so that the different reductions of the states actually do make a big difference to what happens to the network of neurons” (Penrose, 2014, 2023).
So how can the hot, wet brain operate a quantum information system? Hameroff proposed a biological mechanism utilizing microtubules in neurons. As an anesthesiologist who had shepherded thousands of conscious-unconscious-conscious transitions, Hameroff, together with Penrose, developed their quantum theory of consciousness.
“Objective reduction in the quantum world is occurring everywhere,” Hameroff recognizes, “so proto-conscious, undifferentiated moments are ubiquitous in the universe. Now in our view when orchestrated objective reduction occurs in neuronal microtubules, the process gives rise to rich conscious experience” (Hameroff, 2014b).
In Hameroff’s telling, microtubules are cylindrical polymers of the protein tubulin capable of information processing, with fundamental units being states of a billion tubulins per neuron. Microtubules in all cells enact purposeful spatiotemporal activities, and in the brain, microtubules establish neuronal shape, create and regulate synapses, and are proposed to underlie memory, cognition and consciousness. Tubulin is the brain’s most prevalent protein, so the brain is largely made of microtubules, each with unique, high frequency vibrational and quantum properties from non-polar aromatic ring pathways. The claim is made that experimental evidence shows that anti-depressants, psychedelics and general anesthetics, which selectively alter or block consciousness, all act via microtubules (Brophy and Hameroff, 2023).
Some evidence suggests that entangled states can be maintained in noisy open quantum systems at high temperature and far from thermal equilibrium—for example, counterbalancing decoherence by a “recoherence” mechanism—such that, “under particular circumstances, entanglement may persist even in hot and noisy environments such as the brain” (Atmanspacher, 2020a). Moreover, Anirban Bandyopadhyay describes experiments with the tubulin protein in microtubules where conductivity resistance becomes so low it’s almost a macroscopic quantum-like system (Bandyopadhyay, 2014).
Penrose’s ontology requires basic conscious acts to be linked to gravitation-mediated reductions of quantum states, with “real quantum jumps” related to conscious thoughts and, by extension, to neural correlates of consciousness. A complete theory seems to require a robust theory of quantum gravity, long the holy grail of physics.
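The quantitative heart of Penrose’s proposal, stated here as a brief sketch since the surrounding passages describe it only qualitatively, is his gravitational collapse criterion: a superposition of two appreciably different mass distributions is unstable and self-reduces on a timescale set by the gravitational self-energy \(E_G\) of the difference between the two superposed mass configurations,
\[
\tau \;\approx\; \frac{\hbar}{E_G}.
\]
Larger mass displacements yield larger \(E_G\) and hence faster objective reduction; in Orch OR, microtubule superpositions must remain coherent long enough to reach this threshold before environmental decoherence destroys them.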
As noted, the Orch OR theory proposes that consciousness arises from orchestrated (Orch) quantum state objective reductions (OR) in microtubules within brain neurons, which connect, adherents say, to the fine-scale structure of spacetime geometry. Adherents posit that Orch OR accounts for cognitive binding, real-time conscious causal action (through non-computable Penrose OR and retroactivity), memory encoding, and, ambitiously, the hard problem of phenomenal experience. Moreover, consciousness as a non-local quantum process in spacetime geometry provides a potentially plausible mechanism for near-death and out-of-body experiences, pre-cognition, afterlife and reincarnation (Brophy and Hameroff, 2023). Quite the claim, that.
Hameroff makes the striking statement that “consciousness came before life.” Based on observations of extraterrestrial organic material, in the context of the Penrose-Hameroff quantum theory of consciousness, Hameroff challenges the conventional wisdom that consciousness evolved after life, positing that “consciousness may have been what made evolution and life possible in the first place” (Hameroff et al., 2024).
For years, Penrose-Hameroff stood largely alone, defending their quantum consciousness model against waves of scientific critics (Baars and Edelman, 2012), some of whom largely dismissed the notion as fanciful and fringy. Then, as quantum biology began emerging as a real science with broad applications—with quantum mechanisms shown to play essential roles in photosynthesis, vision, olfaction, mitochondria, DNA mutations, magnetoreception, etc.—a larger community began taking quantum consciousness more seriously.
Today, Penrose-Hameroff Orch OR remains the best-known quantum theory of consciousness and continues to attract increasing interest, but there are other, diverse theories of how quantum processes may be essential to consciousness. Their numbers are growing.
11.2. Stapp’s collapsing the wave function via asking “questions”
Mathematical physicist Henry Stapp argues for the quantum nature of consciousness by relying on a traditional interpretation of quantum mechanics, where quantum wave functions collapse only when they interact with consciousness in an act of measurement. He envisions a “mind-like” wave-function collapse that exploits quantum effects in the synapses between neurons, generating consciousness, which he believes is fundamental to the universe (Stapp, 2007, 2011, 2023).
Stapp founds his theory on the transition from the classical-physics conception of reality to von Neumann’s application of the principles of quantum physics to our conscious brains (Stapp, 2006; Von Neumann, 1955/1932). Von Neumann extended quantum theory to incorporate the devices and the brain/body of the observers into physical theory, leaving out only the stream of conscious experiences of the agents. According to von Neumann’s formulation, “the part of the physically described system being directly acted upon by a psychologically described ‘observer’ is the brain of that observer” (Stapp, 2011).
The quantum jump of the state of an observer’s brain to the ‘Yes’ basis state (vector) then becomes the representation, in the state of that brain, of the conscious acquisition of the knowledge associated with that answer ‘Yes,’ which constitutes the neural correlate of that person’s conscious experience. This fixes the essential quantum link between consciousness and neuroscience (Stapp, 2006).
To Stapp, this is the key point. “Quantum physics is built around ‘events’ that have both physical and phenomenal aspects. The events are physical because they are represented in the physical/mathematical description by a ‘quantum jump’ to one or another of the basis state vectors defined by the agent/observer’s choice of what question to ask. If the resulting event is such that the ‘Yes’ feedback experience occurs then this event ‘collapses’ the prior physical state to a new physical state compatible with that phenomenal experience” (Stapp, 2006).
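The formal skeleton of this “quantum jump” is the von Neumann projection postulate that Stapp builds on; the notation below is a generic textbook sketch, not Stapp’s own. If \(P_{\mathrm{yes}}\) denotes the projector onto brain states compatible with the ‘Yes’ feedback, then posing the question and receiving ‘Yes’ updates the state as
\[
|\psi\rangle \;\longrightarrow\; \frac{P_{\mathrm{yes}}\,|\psi\rangle}{\lVert P_{\mathrm{yes}}\,|\psi\rangle \rVert},
\qquad \Pr(\mathrm{Yes}) = \langle\psi|\,P_{\mathrm{yes}}\,|\psi\rangle.
\]
On Stapp’s reading, the formalism fixes the probabilities once a question is posed, but nothing in it fixes which projector (which question) gets posed; that opening is what he assigns to consciousness.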
Thus, in Stapp’s telling, mind and matter thereby become dynamically linked in a way that is causally tied to an agent’s free choice of how to act. “A causal dynamical connection is established between (1) a person’s conscious choices of how to act, (2) that person’s consciously experienced increments in knowledge, and (3) the physical actualizations of the neural correlates of the experienced increments in knowledge” (Stapp, 2006).
More colloquially, Stapp argues that given the perspective of classical physics, where all is mechanical, where the physical universe is a closed system, “there’s nothing for consciousness to do … and so it must be some sort of an illusion.” Why would there have been consciousness at all, he asks? Under classical physics, “consciousness is just sitting there inert, a passive observer of the scene in which it has no function; it does nothing. So, it’s a mystery why consciousness should ever come into existence” (Stapp, 2007).
In stark contrast, Stapp says, the way quantum mechanics works, a question must be posed in order to get consequences, predictions. It’s like “20 questions,” yes-or-no questions. A question is posed in the quantum mechanical scheme; then there is an evolution according to the Schrödinger equation, and then nature gives an answer (which is statistically determined).
The axial idea, Stapp says, is that there is nothing in quantum mechanics that determines what decides the questions. This means that there’s a gap, a critical causal gap in quantum mechanics. And the way it’s filled in practice is that an observer, on the basis of reasons or motivations or with rules, sets up a certain experiment in a certain way. For example, putting a Geiger counter or some other detector in the path of particles.
This yields Stapp’s concept of quantum consciousness. Nobody denies that thoughts exist, he says, but how do they do something? And that’s the place where quantum consciousness has causal impact.
The crux of quantum mechanics is what questions are going to be asked. There is nothing in classical physics that asks such questions. But in quantum mechanics questions are posed by the psychological process of the experimenter, who is interested in learning something. And because there is nothing in the way quantum mechanics works that explains the choice of the question, there is an opening for the injection of mental events into the flow of physical events. The choice of the question is not determined by the laws as we know them (Stapp, 2007).
This means we need another process, which is consciousness. This gives consciousness an actual role to play and allows it to act causally. And if consciousness can act causally and do things, Stapp says, then classical materialism is out.
Niels Bohr famously remarked that “one must never forget that in the drama of existence we are ourselves both actors and spectators.” In the classical worldview, Stapp says, “we were just spectators; always we would just watch what’s happening but couldn’t do anything. In the quantum mechanical worldview, we are actors. We are needed to make the theory work.”
Moreover, Stapp says, “this mental process cannot just be the product of the brain, because the brain, like all physical things, evolves via quantum mechanical rules. While quantum mechanics describes the evolution of potentialities for events to happen, that’s all they describe, only potentialities—they do not describe what chooses the events that are going to happen, the actual events. Something must ask the questions, something outside of quantum mechanics—quantum mechanics forces that process.” The only candidate, Stapp says, must be the independent existence of consciousness (Stapp, 2007).
Stapp’s conclusions are as bold as they are controversial. First, the ontological foundations of consciousness and quantum mechanics are inextricably linked. Second, classical materialism is defeated (Stapp, 2007).
Philosopher of physics David Wallace is sympathetic with the idea that consciousness with respect to quantum physics has to be taken somehow as fundamental and irreducible, but there are two different ways that could go. “There’s the dualist way, where you have physics and you have consciousness as two separate things, and there’s the panpsychist idea, where consciousness underlies all of physics and is present at the most fundamental level of every physical process. Those are two different ideas” (Wallace, 2016a, 2016b).
When Wallace thinks about consciousness collapsing the wave function, as in quantum mechanics, he says, “That’s the dualist half of my head. You’ve got physics, you’ve got a wave function, and you’ve got consciousness, which is observing the wave function. And somehow consciousness is something distinct from the physical wave function and every now and then affecting it in this interesting phenomenon of collapse. In a way, it’s an updated version of René Descartes’s dualism: there’s mind and then there’s body; they’re separate and they interact.”
Wallace says one could try to combine dualism and panpsychism with respect to the relationship between consciousness and quantum mechanics, “but I don’t think they’d combine all that well,” he said. “If consciousness is everywhere and consciousness collapses the wave function, then the wave function would be constantly collapsing and we know that doesn’t happen because you get interference effects in double slit experiments. So, I think these two ideas, panpsychism and consciousness collapsing the wave function, should be pursued on separate tracks” (Wallace, 2016a, 2016b, 2016c).
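Wallace’s interference point can be made concrete with the textbook double-slit arithmetic (a generic sketch, not Wallace’s own formulation): with both branches of the wave function intact, the detection probability contains a cross term, and a collapse at the slits removes it,
\[
P_{\text{no collapse}}(x) = \lvert \psi_1(x) + \psi_2(x) \rvert^2
= \lvert\psi_1\rvert^2 + \lvert\psi_2\rvert^2 + 2\,\mathrm{Re}\!\left[\psi_1^{*}(x)\,\psi_2(x)\right],
\qquad
P_{\text{collapse}}(x) = \lvert\psi_1(x)\rvert^2 + \lvert\psi_2(x)\rvert^2.
\]
Observed interference fringes are thus evidence that collapse is not happening everywhere all the time, which is the empirical bite behind Wallace’s reluctance to combine ubiquitous consciousness with consciousness-induced collapse.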
11.3. Bohm’s implicate-explicate order
Quantum physicist David Bohm, a colleague of Einstein, famously introduced the ideas of “implicate order” and “explicate order” as ontological implications of quantum theory to explain two radically opposed perspectives of the same phenomenon—something seemed to be needed to account for the bizarrely divergent ways of conceiving reality, quantum and classical, both of which seemed undeniably correct.
Bohm is a big thinker, leveraging the counterintuitive concepts of quantum mechanics to try to see reality as it really is. He envisions matter and mind as intertwined. He worked with Karl Pribram to develop “Holonomic Brain Theory” (9.4.5). He explored the essence of thought with Indian philosopher Jiddu Krishnamurti. Of particular import is what he calls “undivided wholeness,” meaning that the subject actively participates with the object, rather than being a detached observer. Bohm developed his “wholeness” as innately dynamic, alive, and open-ended (Gomez-Marin, 2023a).
According to Bohm, everything is in a state of process or becoming (folding and unfolding)—Bohm calls it the “universal flux”. All is dynamic interconnected process. In the same manner, Bohm says, “knowledge, too, is a process, an abstraction from the one total flux, which latter is therefore the ground both of reality and of knowledge of this reality” (Section: Bohm, 1980; Bohm, Wise Insights Forum, website).
Now, regarding “implicate order,” Bohm means “order which is enfolded (the root meaning of ‘implicate’) and later unfolded or made explicate.” Relating the enfolding-unfolding universe to consciousness, Bohm contrasts mechanistic order with implicate order. In mechanistic order, which is inherent to classical physics, “the principal feature of this order is that the world is regarded as constituted of entities which are outside of each other, in the sense that they exist independently in different regions of space (and time) and interact through forces that do not bring about any changes in their essential natures. The machine gives a typical illustration of such a system of order …. By contrast, in a living organism, for example, each part grows in the context of the whole, so that it does not exist independently, nor can it be said that it merely ‘interacts’ with the others, without itself being essentially affected in this relationship” (Bohm, 1980; Bohm, n.d.).
Bohm contends, “the implicate order applies both to matter (living and non-living) and to consciousness, and that it can therefore make possible an understanding of the general relationship between these two”—yet he recognizes “the very great difference in their basic qualities.” Still, he believes that because both consciousness and matter are extensions of the implicate order, a connection is possible.
To Bohm, the explicate order, which is “the order that we commonly contact in common experience,” has room “for something like memory”, with the fact that “memories are first enfolded and then unfolded during recall” being consistent with Bohm’s concepts of implicate and explicate order. “Everything emerges from and returns to the Whole” (Bohm, n.d.).
Confirming his non-materialist status, Bohm proposes, “the more comprehensive, deeper, and more inward actuality is neither mind nor body but rather a yet higher-dimensional actuality, which is their common ground and which is of a nature beyond both.” What we experience consciously, Bohm offers, is a projection of a higher-dimensional reality onto our lower-dimensional elements. “In the higher-dimensional ground the implicate order prevails,” he says. “Thus, within this ground, what is is movement which is represented in thought as the co-presence of many phases of the implicate order …. We do not say that mind and body causally affect each other, but rather that the movements of both are the outcome of related projections of a common higher-dimensional ground” (Bohm, 1980; Bohm, n.d.).
11.4. Pylkkänen’s quantum potential energy and active information
Philosopher Paavo Pylkkänen proposes a view in which “the mechanistic framework of classical physics and neuroscience is complemented by a more holistic underlying framework in which conscious experience finds its place more naturally” (Pylkkänen, 2007). Recognizing that it is “very likely that some radically new ideas are required if we are to make any progress” on the hard problem, he turns to quantum theory “to understand the place of mind and conscious experience in nature.” In particular, Pylkkänen and physicist Basil Hiley focus on the ontological interpretation of quantum theory proposed by David Bohm and Hiley (1993) and make “the radical proposal that quantum reality includes a new type of potential energy which contains active information. This proposal, if correct, constitutes a major change in our notion of matter” (Hiley and Pylkkänen, 2022).
Pylkkänen and Hiley’s intuition is that the reason “it is not possible to understand how and why physical processes can give rise to consciousness is partly the result of our assuming that physical processes (including neurophysiological processes) are always mechanical.” However, they say, if “we are willing to change our view of physical reality by allowing non-mechanical, organic and holistic concepts such as active information to play a fundamental role,” this might make it possible to understand the relationship between physical and mental processes in a new way (Hiley and Pylkkänen, 2022). For example, the human brain could operate in some ways like a “quantum measuring apparatus” (Pylkkänen, 2022).
Philosophically, according to Pylkkänen, the assumption that the physical domain is causally closed has left “no room for mental states qua mental to have a causal influence upon the physical domain, leading to epiphenomenalism and the problem of mental causation.” One road to a possible solution is called “causal anti-fundamentalism”: causal notions cannot play a role in physics, because the fundamental laws of physics are radically different from causal laws. While “causal anti-fundamentalism seems to challenge the received view in physicalist philosophy of mind and thus raises the possibility of there being genuine mental causation after all,” Pylkkänen rejects it in favor of the ontological interpretation of quantum theory imparting active information (Pylkkänen, 2019).
11.5. Wolfram’s consciousness in the ruliad
Physicist and computer scientist Stephen Wolfram seeks “to formalize issues about consciousness, and to turn questions about consciousness into what amounts to concrete questions about mathematics, computation, logic or whatever that can be formally and rigorously explored” (Wolfram, 2021b). He begins by embedding consciousness in what he calls the “ruliad” (neologism from “rules”), which he defines as “the entangled limit of everything that is computationally possible: the result of following all possible computational rules in all possible ways.” The ruliad, he says, is “a kind of ultimate limit of all abstraction and generalization,” encapsulating “not only all formal possibilities but also everything about our physical universe” (Wolfram, 2021a). The ruliad is crucial for formalizing the “rules” of consciousness, he argues, because “everything we experience can be thought of as sampling that part of the ruliad that corresponds to our particular way of perceiving and interpreting the universe” (Wolfram, 2021b).
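As a toy illustration only, and emphatically not Wolfram’s construction (the ruliad involves all possible rules, initial conditions, and updating schemes, not one small rule family), the flavor of “following all possible computational rules” can be glimpsed by exhaustively running a tiny, fully enumerable rule family, the 256 elementary cellular automata; the hypothetical Python sketch below does exactly that.

# Toy sketch: enumerate and run *every* rule in one small rule family
# (the 256 elementary cellular automata), each from the same initial condition.
def step(cells, rule):
    """One synchronous update of an elementary CA (rule is an int 0-255), periodic boundaries."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(rule, width=11, steps=5):
    """History of one rule started from a single live cell in the middle."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

# "All possible rules, all followed" -- here, all 256 of them.
all_histories = {rule: evolve(rule) for rule in range(256)}
print(len(all_histories), "rule histories computed")  # -> 256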
Consciousness, Wolfram says, is not about the general computation that brains can do. “It’s about the particular feature of our brains that causes us to have a coherent thread of experience.” And this invokes the ruliad, which “has deep consequences that far transcend the details of brains or biology.” It defines (what we consider to be) the laws of physics (Wolfram, 2021b).
While consciousness involves computational sophistication, Wolfram says, “its essence is not so much about what can happen as about having ways to integrate what’s happening to make it somehow coherent and to allow what we might see as ‘definite thoughts’ to be formed about it.” Surprisingly, “rather than consciousness being somehow beyond ‘generalized intelligence’ or general computational sophistication,” he instead sees consciousness “as a kind of ‘step down’—as something associated with simplified descriptions of the universe based on using only bounded amounts of computation.” In addition, “for our particular version of consciousness, the idea of sequentialization seems to be central” (Wolfram, 2021b).
Wolfram probes consciousness by asking, “Why can’t one human consciousness ‘get inside’ another?” It’s not just a matter of separation in physical space, he says, “It’s also that the different consciousnesses—in particular by virtue of their different histories—are inevitably at different locations in rulial space. In principle they could be brought together; but this would require not just motion in physical space, but also motion in rulial space” (Wolfram, 2021a).
Quantum mechanics is involved in Wolfram’s consciousness, but with more than its usual putative mechanisms. Considering the foundations of quantum mechanics in context of the ruliad—quantum mechanics emerges “as a result of trying to form a coherent perception of the universe”—Wolfram offers a sharp epigram to describe consciousness: “how branching brains perceive a branching universe” (Wolfram, 2021b).
To Wolfram, grasping the core notion of consciousness goes beyond explicating consciousness per se because it “is crucial to our whole way of seeing and describing the universe—and at a very fundamental level it’s what makes the universe seem to us to have the kinds of laws and behavior it does.” The richness of what we see, he says, reflects computational irreducibility, “but if we are to understand it we must find computational reducibility in it.” This is how consciousness “might fundamentally relate to the computational reducibility we need for science, and might ultimately drive our actual scientific laws” (Wolfram, 2021a).
11.6. Beck-Eccles’s quantum processes in the synapse
Sir John Eccles, Nobel laureate for his seminal work on the synapse, the small space between neurons across which neurochemicals flow to excite or inhibit contiguous neurons, was a pioneer in early efforts to construct a “quantum neurobiological” theory of consciousness. In the formulation Eccles developed with physicist Friedrich Beck, concrete quantum mechanical features describe how, in the cerebral cortex, incoming nerve impulses cause the emission of transmitter molecules from presynaptic neurons (i.e., exocytosis) via information transfer and “quantal selection” that stands in a direct relationship with consciousness (i.e., is influenced by mental actions) (Beck and Eccles, 1992).
Beck and Eccles propose that “the quantum state reduction, or selection of amplitudes, offers a doorway for a new logic, the quantum logic, with its unpredictability for a single event.” Because conscious action (e.g., intention) is a dynamical process which forms temporal patterns in relevant areas of the brain (cerebral cortex), they propose that the myriad synaptic switches between innumerable neurons in those areas can be regulated effectively by a quantum trigger (based on an electron transfer process in the synaptic membrane). Thus, they conclude, “conscious action is essentially related to quantum state reduction” (Beck and Eccles, 1998).
Stapp supports the hypothesis that quantum effects are important in brain dynamics in connection with cerebral exocytosis. Exocytosis is instigated by a neuronal action potential pulse that triggers an influx of calcium ions through ion channels into a nerve terminal, such that, due to the very small diameter of the ion channel, the quantum wave packet that describes the location of the ion spreads out to a size much larger than the trigger site. This means that “one must retain both the possibility that the ion activates the trigger, and exocytosis occurs, and also the possibility that the ion misses the trigger site, and exocytosis does not occur” (Stapp, 2006).
As Beck and Eccles hypothesize, “the mental intention (the volition) becomes neurally effective by momentarily increasing the probability of exocytosis in selected cortical areas” (Beck and Eccles, 1992). If so, this fundamental indeterminism of the nature of each specific quantum state collapse is said to open opportunity for mental powers to affect brain states, with supposed implications for conscious intervention and even for free will.
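In the barest schematic terms (a minimal sketch in standard quantum notation, not Beck and Eccles’s detailed electron-transfer model), the presynaptic trigger can be written as a two-state system,
\[
|\psi\rangle = a\,|\text{exocytosis}\rangle + b\,|\text{no exocytosis}\rangle,
\qquad \lvert a\rvert^2 + \lvert b\rvert^2 = 1,
\]
with the hypothesis being that volition transiently raises \(\lvert a\rvert^2\), the probability of transmitter release, in selected cortical synapses.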
11.7. Kauffman’s mind mediating possibles to actuals
Theoretical biologist Stuart Kauffman posits the following: (i) Quantum measurement converts Res potentia (ontologically real Possibles) into Res extensa (ontologically real Actuals). (ii) Brain/mind/consciousness cannot be purely classical physics because no classical system can be an analog computer whose dynamic behavior can be isomorphic to “possible uses”, and therefore, brain/mind/consciousness must be partly quantum. (iii) Res potentia and Res extensa suggest a role for mind/consciousness in collapsing the wave function, converting Possibles to Actuals, because no physical cause can convert a Possible into an Actual. (iv) Our brain/mind/consciousness entangles with the world in a vast superposition, and we collapse the wave function to a single state which we experience as qualia, allowing “seeing” or “perceiving” of X to accomplish Y (Kauffman, 2019, 2023; Kauffman and Roli, 2022).37
As Kauffman and parapsychologist Dean Radin put it, “We propose a non-substance dualism theory, following a suggestion by Heisenberg (1958), whereby the world consists of both ontologically real Possibles that do not obey Aristotle’s law of the excluded middle, and ontologically real Actuals, that do obey the law of the excluded middle.” Measurement, they say, is what converts Possibles into Actuals (Kauffman and Radin, 2020).
The “culprit” at the root of the mind-body problem, according to Kauffman and Radin, is the causal closure of classical physics. “We ask mind to act causally on the brain and body, but in classical physics all of the causes are already determined.” Because of this, they conclude, no form of substance dualism can work while quantum mechanics as the foundational mechanism of consciousness should be taken seriously—which, they say, would lead to “the intriguing possibility that some aspects of mind are nonlocal, and that mind plays an active role in the physical world” (Kauffman and Radin, 2020). (9.)
11.8. Torday’s cellular and cosmic consciousness
Developmental physiologist John Torday offers an original cellular-based explanation of consciousness that embeds quantum mechanics (Torday, 2022a, 2022b, 2023, 2024). He describes consciousness as a two-tiered system, derivative from physiology, having been “constructed” from factors in the environment that were assimilated via symbiogenesis and integrated as cell physiology: the semi-permeable cell membrane is the first tier, and the compartmentation and integration of cell physiologic data as cell-cell communication is the second tier. Basing his model on both classical Newtonian and quantum mechanical principles, he proposes that consciousness is stored within and between our cells based on control mechanisms, referencing the “First Principles of Physiology”, that is, negative entropy, chemiosmosis and homeostasis, and that consciousness is retrieved from them via the central nervous system as the “algorithm” for translating local and non-local cellular physiologic memories into thought (Torday, 2022a).
He claims that quantum entanglement is integral to our physiology, and that it links our local consciousness with the non-local consciousness of the cosmos, distinguishing causation from coincidence based on science. Moreover, he posits that local physiologic memories are paired with non-local memories that dwell in cosmic consciousness and that all cellular memories are on a continuum of local and non-local properties, and that under certain conditions we may be more locally or non-locally conscious. He speculates that as we evolve, we move closer to the non-local by transcending the local. He maintains that we can take advantage of certain experiences in order to attain a transcendent level of consciousness: lucid dreaming, near-death experiences, out-of-body experiences, Maslow peak experiences, runner’s high (Torday, 2022a).
Torday’s main point is that “the quantum” is native to our physiology (Torday, 2022a, 2022b, 2023, 2024). Moreover, “since our physiology derives from the Cosmos based on Symbiogenesis,” he hypothesizes that “the cell behaves like a functional Möbius Strip, having no ‘inside or outside’ cell membrane surface—it is continuous with the Cosmos, its history being codified from Quantum Entanglement to Newtonian Mechanics, affording the cell consciousness and unconsciousness-subconsciousness as a continuum for the first time” (Torday, 2024).
11.9. Smolin’s causal theory of views
Physicist Lee Smolin approaches the question of how qualia fit into the physical world in the context of his “relational and realist completion of quantum theory, called the causal theory of views” (Smolin, 2020).
Smolin has long focused on a “realist” double completion of quantum mechanics and general relativity that would give a full description of, or explanation for, all individual physical processes, independent of our knowledge or interventions. Such a completion is required for unifying gravity, spacetime, and cosmology into the rest of physics. His common theme has been that of a relational “hidden variables” theory: a realist description of precisely what goes on in each individual event or process, which reduces to quantum mechanics in a certain limit and averaging procedure.
In Smolin’s theory, the first key idea is that “the universe is constructed from nothing but a collection of views of events, where the view of an event is what can be known about that event’s place in the universe from what can be seen from that event.” In other words, “the beables of this theory [a ‘beable,’ in John Bell’s coinage, is an element of a theory that corresponds to something that exists, in contrast to a mere ‘observable’] are views from events, the information available at each event from its causal past, such as its causal predecessors and the energy and momentum they transfer to the event.” Smolin calls this the “view” of an event—that is, “a causal universe that is composed of a set of partial views of itself.” Within such an ontology of views, Smolin says it’s “natural to propose that instances or moments of conscious experience are aspects of some views. That is, an elementary unit of consciousness is not a single qualia, but the entire of a partial view of the universe, as seen from one event” (Smolin, 2020).
Smolin’s second key idea restricts the views that are associated with consciousness to within a very small set. Most events and their views are common and routine, he says, in that they have many near copies in the universe within their causal pasts. He proposes that these common and routine views have no conscious perceptions. Then, “there are a few, very rare views which are unprecedented, which are having their first instance, or are unique, in that they have no copies in universal history.” Smolin proposes it is “those few views of events, which are unprecedented, and/or unique, and are hence novel, [i.e., they are not duplicates of the view of any event in the event’s own causal past] which are the physical correlates of conscious perceptions.”
This addresses, he says, “the problem of why consciousness always involves awareness of a bundled grouping of qualia that define a momentary self. This gives a restricted form of panpsychism defined by a physically based selection principle which selects which views have experiential aspects.”
To summarize, Smolin bases his theory on two concepts: First, the beables of a relational theory are taken to be the views of events. Second, a physical distinction can be drawn between common and routine states, on the one hand, and novel and unique states, on the other. “A relational theory that incorporates both ideas offers a possible setting for bringing qualia and consciousness into physics. The physical correlates of consciousness would be the novel or unique views of events” (Smolin, 2020).
11.10. Carr’s quantum theory, psi, mental space
Mathematician-astronomer Bernard Carr speculates that “mental space,” an unknown aspect of reality, may be the ultimate foundation of consciousness. “Even if you believe that consciousness collapses the wave function,” he says, “that doesn’t really accommodate consciousness within physics. It’s saying that quantum theory is weird and therefore maybe it can explain consciousness, which is also weird—but that is illogical because it’s just explaining one mystery in terms of another. We need to get consciousness into physics in a more fundamental way” (Carr, 2016a).
Carr notes that most physicists take the view that “consciousness is just an epiphenomenon produced by the brain, independent of physics, and that as physicists they don’t have to confront the problem of consciousness because, after all, physics has a third-person perspective, objects in the outside world, whereas consciousness has a first-person perspective. In other words, clearly brains exist and brains are physical systems, but consciousness is simply beyond the domain of physics. The real issue is how can physics ever accommodate that first-person perspective?” (Carr, 2016b).
Carr considers the radical view that “consciousness actually is more fundamental, that the brain’s role is to limit your experience. So, when you see the world through your eyes and hear it through your ears, the brain is limiting your experience—which, on the face of it, might seem a completely bizarre thing to say, but that, at least, is an alternative view, that consciousness is not actually generated by the brain, but merely encounters the world through the brain” (Carr, 2016c).
“The only way I can see this,” Carr poses, is a state of affairs “where consciousness is primary, a fundamental aspect of reality. In other words, consciousness is not just generated as a result, as the endpoint, of physical processes. In some sense, it’s there from the beginning” (Carr, 2016c).
As to the relationship between consciousness and mathematics, Carr sees them “on a par because I feel that the final picture of the world must marry matter and mind. They come together. Which is primary? I’m not sure the question even makes sense, because I prefer a picture in which matter and mind co-exist right from the beginning.” Carr is careful to clarify what he means by “mind.” He says, “When I use the word ‘mind’ in this context, I’m using ‘Mind’ with an upper-case ‘M’, rather than mind with a lower-case ‘m’, which is generated by the brain. ‘Mind’ with a big ‘M’ is like consciousness with a big ‘C’” (Carr, 2016c).
In forming his theory, Carr sees support from psi or the paranormal. While he recognizes that psi “encompasses a multitude of sins,” there are some aspects, such as telepathy and clairvoyance, which he takes seriously, whereas other aspects, such as precognition and psychokinesis, less so. Still, he regards even these psi phenomena as possible because of potential deep interactions between consciousness and physics. Thus, psi is another reason why, he says, “We need a theory of physics that accommodates consciousness.” (Carr stresses that he gives no credence to many aspects of psi or the paranormal.) (Carr, 2016d).
Carr’s “favorite view,” he says, is that “the way to explain this link between minds, and indeed between minds and the physical world, is to say that there is in some sense a ‘bigger space’ and this bigger space in some sense links your mind and my mind.” He labels this bigger space “mental space.” He says, “Just as there’s a physical world that reconciles innumerable observations of the physical world, there is this ‘mental space’ that allows connections between different minds and between minds and the physical world—because, remember, the physical world is also part of this bigger space.”
Carr offers another category of explanations for psi which involves quantum theory, where entanglement can connect spatially separated objects and events. “Maybe we’re all entangled in some weird quantum mechanical way. Now, that’s probably the view which is currently the most popular among parapsychologists.” However, that’s not Carr’s own view. “As noted, my own favorite view is that there is this bigger space, this mental space, that in some sense links minds and perhaps matter as well.”
Carr describes the relationship between quantum theory and this mental space. “If you want consciousness to come into physics, quantum theory is going to play a role. All I’m saying is I don’t think that quantum theory alone can explain all the phenomena. You need some form of mental space to accommodate these psi or paranormal phenomena (if you believe in these phenomena, of course, which most of my colleagues do not).” Carr stresses, rightly I think, that psi or paranormal phenomena are worth taking seriously (17), because even with a minimalist view that the probability of these phenomena being real is small, their significance for a final theory of physics would be huge (Carr, 2016d).
11.11. Faggin’s quantum information-based panpsychism
Physicist/inventor Federico Faggin postulates “with high confidence” that “consciousness and free will are properties of quantum systems in pure quantum states” because they depend on quantum entanglement, a nonlocal property that cannot exist in any classical, deterministic universe (Faggin, 2023). The kind of information involved in consciousness needs to be quantum for multiple reasons, he says, “including its intrinsic privacy and its power of building up thoughts by entangling qualia states.” As a result, Faggin comes to a “quantum-information-based panpsychism” (QIP) (D’Ariano and Faggin, 2022).
The essence of QIP is that “a quantum system that is in a pure quantum state is conscious of its own state, that is, it has a qualia experience of its state.” Faggin calls this “a highly plausible postulate” because “a qualia experience is definite (integrated, not made of a mixture of separable parts) and private since it can only be known by the experiencer.”
More formally, the theory says that a quantum state is an effective mathematical representation of a conscious experience because it possesses the same crucial characteristics of what it represents: the definiteness and privacy of the experience. “Within QIP, quantum information describes the subjective inner reality of quantum systems, a reality that is private for each system” (Faggin, 2023).
But this mathematical description of an experience (a vector in Hilbert space), Faggin stresses, is not the experience itself. Quantum information is non-cloneable and thus can be only partially objectified with classical information. Moreover, “the nature of that private knowing is not numeric but qualitative and subjective, because a conscious system ‘knows’ its own state by feeling it through qualia.”
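The non-cloneability Faggin invokes is the standard no-cloning theorem, stated here for reference (the inference from no-cloning to the privacy of experience is Faggin’s interpretive step, not part of the theorem): no single unitary operation can copy an arbitrary unknown quantum state,
\[
\nexists\, U \ \text{such that}\ \ U\big(|\psi\rangle \otimes |0\rangle\big) = |\psi\rangle \otimes |\psi\rangle \quad \text{for all } |\psi\rangle.
\]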
Faggin says his hypothesis accommodates creative possibilities, which are the foundation of imagination, intuition, vision, creativity, comprehension, and inventiveness, emerging “from the quantum level of reality, since a classical world is deterministic, that is, algorithmic and predictable, and thus incapable of real creativity.” True creativity, Faggin says, like free will and consciousness, “are non-algorithmic properties that can only exist in a fundamental layer of the universe ruled by quantum physics.” Because quantum consciousness is not reproducible, Faggin predicts that no machine can ever have it or create it (it is not reducible to mechanisms) and, he says, it could continue to exist after the death of the body (Faggin, 2023).
11.12. Fisher’s quantum cognition
Condensed matter physicist Matthew Fisher proposes that quantum processing with nuclear spins might be operative in the brain and key to its functioning. He identifies “phosphorus as the unique biological element with a nuclear spin that can serve as a qubit for such putative quantum processing—a neural qubit—while the phosphate ion is the only possible qubit-transporter.” He suggests the “Posner molecule” (calcium phosphate clusters, Ca₉(PO₄)₆) as “the unique molecule that can protect the neural qubits on very long times and thereby serve as a (working) quantum-memory” (Fisher, 2015).
To be functionally relevant in the brain, he says, “the dynamics and quantum entanglement of the phosphorus nuclear spins must be capable of modulating the excitability and signaling of neurons”—which he takes as a working definition of “quantum cognition”. Phosphate uptake by neurons, he says, might provide the critical link.
Because quantum processing requires quantum entanglement, Fisher argues that “the enzyme catalyzed chemical reaction which breaks a pyrophosphate ion into two phosphate ions can quantum entangle pairs of qubits,” and that “Posner molecules, formed by binding such phosphate pairs with extracellular calcium ions, will inherit the nuclear spin entanglement.” Continuing the explanatory sequence, Fisher says “Quantum measurements can occur when a pair of Posner molecules chemically bind and subsequently melt, releasing a shower of intra-cellular calcium ions that can trigger further neurotransmitter release and enhance the probability of post-synaptic neuron firing. Multiple entangled Posner molecules, triggering non-local quantum correlations of neuron firing rates, would provide the key mechanism for neural quantum processing” (Fisher, 2015).
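As a concrete anchor for the phrase “nuclear spin entanglement” (a textbook state offered as a sketch; Fisher’s papers work out the full chemistry), the paradigm entangled configuration of two spin-1/2 phosphorus-31 nuclei is the singlet,
\[
|\Psi^-\rangle = \frac{1}{\sqrt{2}}\big(\,|{\uparrow\downarrow}\rangle - |{\downarrow\uparrow}\rangle\,\big),
\]
which in Fisher’s scenario would be shared by the two phosphate ions produced when pyrophosphate is enzymatically split, and later sequestered in Posner molecules.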
The possible centrality of quantum processing in the brain is supported by the emerging field of quantum biology. It can be called “quantum neuroscience” (Ouellette, 2016). Fisher’s proposal, even if incorrect in its specifics, is useful in identifying the kinds of processes and sequences of explanatory steps required if quantum processing is to be fundamental for brain function in general and for consciousness in particular.
11.13. Globus’s quantum thermofield brain dynamics
Psychiatrist-philosopher Gordon Globus seeks to link two seemingly independent discourses: an application of quantum field theory to brain functioning, which he calls “quantum brain dynamics,” and the continental postphenomenological tradition, especially the work of Martin Heidegger and Jacques Derrida. Underlying both, he says, “is a new ontology of non-Cartesian dual modes whose rich provenance is their between” (Globus, 2003).
The key issue, in Globus’s telling, is that of primary “closure”—the nonphenomenality of quantum physical reality—and the action that brings “dis-closure.” Dis-closure of the phenomenal world, he argues, “can be understood within the framework of dissipative quantum thermofield brain dynamics without any reference to consciousness” (Globus, 2011). He proposes to “deconstruct” the field of consciousness studies by combining “two persistently controversial areas: the hard problem of qualia and the measurement problem in quantum physics …. within the framework of dissipative quantum thermofield brain dynamics: disclosure.” His claim is that “the problematics of consciousness/brain, qualia, and measurement in quantum physics are resolved by substituting disclosure for perceptual consciousness and distinguishing the phenomenal brain-p from the macroscopic quantum object brain-q” (Globus, 2013).
Metaphysically, Globus conceives the world as a “continual creation” on the part of each quantum thermofield brain in parallel, which is “triply tuned”: by sensory input, memory and self-tuning. Such a brain, he says, “does not primarily process information—does not compute—but through its multiple tunability achieves an internal match in which a world is disclosed, even though there is no world out there, only objects under quantum description at microscopic, mesoscopic and macroscopic scales.” Globus claims his “unconventional formulation revives a version of monadology via quantum brain theory” (Globus, 2022).
Globus decries how “philosophers have said some rather naive things by ignoring the extraordinary advances in the neurosciences in the 20th century. The skull is not filled with green cheese!” On the other hand, he criticizes “the arrogance of many scientists toward philosophy and their faith in the scientific method,” which he calls “equally naïve,” asserting that “scientists clearly have much to learn from philosophy as an intellectual discipline” (Globus, 2012).
11.14. Poznanski’s dynamic organicity theory
Neuroscientist Roman Poznanski proposes a Dynamic Organicity Theory (DOT) of consciousness, a quantum biological theory based on a multiscale interpretation of type-B materialism.38 DOT utilizes a multiscalar temporal-topological framework to include quantum biological effects in the sense of what happens to macroscopic systems upon interaction with quantum potential energy that exists when a living negentropic39 state of the brain imposes thermodynamic constraints (Section: Poznanski, 2024).
DOT does not deal with quantum consciousness or assume quantum brain dynamics. However, according to Poznanski, a Schrödinger-like equation describes the quantum effects within the multiscale complexity, where multiscale complexity is both functional and structural through changeable boundary conditions (resulting in the topology being a holarchical modularity). This is made possible by treating time consciousness, i.e., “consciousness-in-the-moment,” on a nonlinear temporal scale and implicitly grounding space to the contingency of changing boundary conditions. The approach is based on the dynamics of functional relations (not to be confused with functionalist or relational theories of consciousness). It is a nonspatial topological framework (not the mathematical study of “space” in a general sense of topological spaces) associated with the temporal aspect of the functionality. Here, functionality refers to the biological realization of the physical as those features of usefulness that exist subjectively. Therefore, Poznanski says, it rules out functionalism and focuses on the qualitativeness of brain functioning. As noted, the approach is type-B materialism (Chalmers, 2003), where consciousness is a physical process, but epistemic objectivism alone does not define physicalism (Shand, 2021). This means that functionality as the quality of usefulness only refers to physical properties assessed subjectively, which can be possible only through quantum biological effects.
Moreover, the functional capability of the negentropic state changing over time must satisfy the following necessary condition for consciousness to arise: the functionality of multiscale complexity must exceed the functionality of maximum complexity, i.e., F_MultiComplexity > F_MaxComplexity. This means that consciousness arises when the functionality of multiscale complexity reaches above the functionality of maximum complexity. This required increase in functionality of multiscale complexity is derived from an additional degree of freedom made possible by quantum biology40 beyond that of the functionality of maximum complexity as derived from brain structure, dynamics, and function. F_MaxComplexity is an insufficient measure of consciousness. F_MultiComplexity provides an epistemically subjective approach to dynamic organicity, including self-referential dynamic pathways that give an extra quality of energy-negentropy exchange for path selection as realization relations. F_MultiComplexity is not a step-function but a gradual ascendance to plateaus accounting for different degrees of consciousness. (Whether this condition is sufficient is beyond DOT to decipher; something with an equivalent topology could cause consciousness in other systems.) (Poznanski, 2024).
Poznanski states that “the act of understanding uncertainty is the main qualifier of consciousness” and “the ’act’ connotes the experienceable form, which is, in essence, a precursor of the experience of acting.” The process entails the potential for understanding “meaning” through self-referential dynamical pathways “instead of recognizing (cf. introspection) sensory information through perceptual channels, forming the basis of understanding uncertainty without relying on memory recall.” It is not, he says, “coming into existence” because “quantum-thermal fluctuations are irreducible, yet the process as a whole comes ‘to exist’ perhaps not instantaneously but appears spontaneously. Its output is intentionality as an instruction to act in path selection.”
The self-reference principle, which Poznanski says can replace emergence and self-organization when dealing with functionality rather than structure, “establishes dynamical pathways from the microscale to the macroscale (this includes nonlocal pathways), in which diachronic causation and how the disunity of causal order in the redundancy creates a weak unity of consciousness through its temporal structure,” the inferred purpose giving rise to “a sense of self.”
Poznanski avoids discussing phenomenological properties of consciousness, such as qualia, because, he says, they do “not apply to conscious reality when considered in the context of functional-structural realism, an offshoot of structuralism, without relying on introspection.” Phenomenological consciousness, he says, “appears like a black box of ‘being’ instead of ‘doing.’” However, functional interactions that entail self-referential dynamics “are uniquely fathomed and, hence, not phenomenally equivalent in other functional systems.”
Thus, Poznanski concludes, “a living negentropic state that supports biological function is a dynamic state of being organic representing an additional degree of freedom for intrinsic information to be structured, which makes it possible for a dynamic organicity theory of consciousness to take shape in the material brain” (Poznanski, 2024).
11.15. Quantum consciousness extensions
The following theories of consciousness are not quantum theories per se in that they do not have quantum mechanics as the essence or generator of consciousness. Rather, each reflects how quantum mechanics could facilitate or interact with other theories of consciousness. All are highly speculative.
Computer scientist Terry Bollinger enjoys speculating about possible mechanisms of quantum consciousness; these include non-linear soliton Schrödinger wave models in sensory neural networks; neural dendrites as antennas for wave collapses; how warm brains might actively maintain and manipulate quantum wave functions; and how “quasiparticles” might enable quantum consciousness by quantizing classical data transfers between neurons (Bollinger, 2023).
Complexity theorist Sudip Patra posits that mathematical tools used in quantum science (information theory included) can be also used to describe cognition; for example, Hilbert space modeling of cognitive states might provide better descriptions of different features like contextuality in decision making, or even exploring ‘entanglement-like’ features of mental states (Patra, 2023; Rooney and Patra, 2022). Though Patra is agnostic about any underlying physics of consciousness, he works with Kauffman (11.7) to construct a non-local theory of consciousness outside the constraints of physical space-time.
New-age physician-author Deepak Chopra explains “the intricate relationship between consciousness and the quantum field” by applying the same word “field” to both. Consciousness isn’t individual, he says. “Instead, it is a vast field that individuals share in. This field encompasses myriad possibilities. It is the source from which thoughts, sensations, images, and feelings emerge and then dissolve back into, just as subatomic particles do in the quantum field. Mental experiences and quanta are transient, shaped by uncertainty, and are, in essence, energetic fluctuations within the consciousness field.” Chopra points to the infinite nature of the quantum and the consciousness fields, and to the essential entanglement within each, such that local realism—i.e., the world of isolated physical objects and mental thoughts—is “out the window” for both physical and mental phenomena. This entanglement, he says, “suggests that physical objects are intertwined with perception and consciousness, blurring the boundaries between the observer and the observed.” Chopra proposes “a drastic paradigm shift” in which “consciousness comes first, being the field that is the origin of creation, acting in concert with the quantum field” (Chopra, 2023a, Chopra, 2023b).
Philosopher Emmanuel Ransford proposes “quantum panpsychism” where matter is richer “with an extra content or dimension”—he calls it “holomatter,” composed of “holoparticles”—and consciousness is “a nonmaterial content of the world.” It assumes two types of causality: “out-causation,” causation from outside, out of reach and deterministic; and “in-causation,” causation from within, unpredictable and “self-willed,” a kind of randomness. Holoparticles, Ransford offers, also have two parts: one obvious, deterministic and out-causal; the other hidden, random-looking and in-causal. This hints, he says, that “the randomness of some quantum events is a smoking-gun evidence of in-causation.” He adds the “im-im hypothesis,” where “im-im” stands for immaterial and immanent, and his claimed insight is that the brain is a catalyst of the mind. “It is a biological ‘lamp’ of sorts that pours out untold sparks of consciousness instead of untold sparks of light (or photons) in the case of ordinary lamps.” Indeed, the brain spawning large flows of active and entangled in-causal holoparticles within the im-im framework would underpin ordinary consciousness—holoparticles linking quantum and consciousness. This is why “consciousness, albeit immaterial, needs a physical structure to ‘catalyze’ it into being” (Ransford, 2023).
Theoretical engineer Edward Kamen proposes that “the human soul is a type of quantum field,” which interacts with only certain fields in the physical universe, and not directly with matter. The claim is made that “fields that interact with the soul field include electromagnetic waves,” citing as evidence “near-death experiences where events that could not have been seen through the eyes of the individual are verified.” Extending the theory, Kamen speculates that because “electric fields and electromagnetic fields have the same quanta consisting of photons, electric fields may also interact with the soul field.” This could result in the transfer of information, he says, from working memory to the soul through electric fields produced by neural ensembles in the human brain. Further, the soul field may also affect neurons on the molecular level, perhaps via electric fields and cytoelectric coupling (Kamen and Kamen, 2023).
Quantum consciousness: a growth market.
11.16. Rovelli’s relational physics
Physicist Carlo Rovelli focuses on “the profoundly relational aspect of physics, manifest in general relativity, but especially in quantum mechanics.” 20th century physics, he says, “is not about how individual entities are by themselves. It is about how entities manifest themselves to one another. It is about relations.” This vindicates, he offers, “a very mild form of panpsychism,” but “this same fact may undermine some of the motivations for more marked forms of panpsychism” (Rovelli, 2021).
“Although there is nothing specifically psychic or mental in the relational properties of a system with respect to another system,” Rovelli says, “there is definitely something in common with panpsychism, because the world is not described from the outside: it is always described relative to a physical system. So, physical reality is, in our current physics, perspectival reality” (Dorato, 2016).
Rovelli takes a deflationary view of the hard problem: “If our basic understanding of the physical world is in terms of more or less complex systems that interact with one another and affect one another, the discrepancy between the mental and the physical seems much less dramatic.” He concludes, “It is a world where physical systems—simple and complex—manifest themselves to other systems—single and complex—in a way that our physics describes. I see no reason to believe that this should not be sufficient to account for stones, thunderstorms, and thoughts” (Dorato, 2016).
According to George Musser, one way to argue that relationalism could solve the hard problem is, first, to recognize that “third-person physics isn’t up to the task of explaining first-person experience and, specifically, its qualitative aspect (qualia).” Then, Rovelli’s approach is to say that “physics is not, in fact, third-person; it is specific to each of us, just as each of us has our own private stream of consciousness.” Thus, “the two sides are not so mismatched after all.” However, Musser adds, “although physics may well be relational, subjective experience doesn’t seem to be” (Musser, 2023a, Musser, 2023b).
12. Integrated information theory
Integrated Information Theory (IIT), developed by neuroscientist Giulio Tononi and supported by neuroscientist Christof Koch, is an original, indeed radical model that states what experience is and what types of physical systems can have it (Tononi and Koch, 2015). IIT is grounded in experience, the phenomenology of consciousness, and it features mathematical description, quantitative measurement, scientific testability, broad applications, and nonpareil, intrinsic, cause-effect “structures.” In other words, “IIT addresses the problem of consciousness starting from phenomenology—the existence of my own experience, which is immediate and indubitable—rather than from the behavioral, functional, or neural correlates of experience” (Tononi et al., 2022). Controversial to be sure, IIT has become a leading theory of consciousness.41
IIT accounts for consciousness in the following way. First, introspection and reason identify the essential properties of consciousness—the axioms of phenomenal existence. Then, each axiom is accounted for in terms of cause–effect power; that is, “translating” a “phenomenal property into an essential property of the physical substrate of consciousness” [PSC]—yielding the postulates of physical existence. In this way, IIT claims to “obtain a set of criteria that a physical substrate of consciousness (say, a set of cortical neurons) must satisfy” (Tononi et al., 2022).
IIT asserts that distinct conscious experiences are in a literal sense distinct kinds of conceptual structures in a radical and heretofore unknown kind of “qualia space.” IIT says (and introduced the idea) that for every conscious experience, there is a corresponding mathematical object such that the mathematical features of that object are isomorphic to the properties of the experience.
“Integrated information theory means that you need a very special kind of mechanism organized in a special kind of way to experience consciousness,” Tononi says. “A conscious experience is a maximally reduced conceptual structure in a space called ‘qualia space.’ Think of it as a shape. But not an ordinary shape—a shape seen from the inside.” Tononi stresses that simulation is “not the real thing.” To be truly conscious, he said, an entity must be “of a certain kind that can constrain its past and future—and certainly a simulation is not of that kind” (Tononi, 2014b).
Christof Koch envisions how IIT could explain experience—how consciousness arises out of matter. “The theory makes two fundamental axiomatic assumptions,” Koch explains. “First, conscious experiences are unique and there are a vast number of different conscious experiences. Just think of all the frames of all the movies you’ve ever seen or movies that will ever be made until the end of time. Each one is a unique visual experience and you can couple that with all the unique auditory experiences, pain experiences, etc. All possible conscious experiences are a gigantic number. Second, at the same time, each experience is integrated—what philosophers refer to as unitary. Whatever I am conscious of, I am conscious of as a whole. I apprehend as a whole. So, the idea is to take these two axioms seriously and to cast them into an information theory framework. Why information theory? Because information theory deals with different states and their interrelationships. We don’t think the stuff the brain is made out of is really what’s critical about consciousness. It’s the interrelationship that’s critical” (Koch, 2012b).
IIT starts from phenomenology itself—a point that Tononi stresses cannot be overstressed—with axioms that are deemed to be unequivocally and universally true for all instances of consciousness, such that whatever systems manifest these axioms will ipso facto manifest consciousness.
It is at this point that IIT seeks a mathematical expression of the fundamental properties of experience. It is not the reverse: IIT does not start from mathematics hoping to explain phenomenology; rather it starts with phenomenology and ends with mathematics (Tononi, 2014a). Because IIT’s consciousness is a purely information-theoretic property of systems, not limited to brains or even to biology, Tononi constructs a mathematical function φ (phi) to measure a system’s informational integration, with levels of φ covarying with degrees of consciousness (Van Gulick, 2019).
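To give a concrete, if drastically simplified, feel for what an integration measure quantifies, here is a minimal, self-contained Python sketch of a toy two-node system. It compares the information the whole system's dynamics carry about the system's immediately preceding state with what the two severed parts carry on their own. This is an illustrative stand-in only, not IIT's φ as formally defined by Oizumi et al. (2014), which is built from cause–effect repertoires, searched over all partitions, and maximized across candidate substrates and grains; the mechanism, function names, and "integration" quantity here are invented purely for exposition.

```python
# A toy illustration (not IIT's actual phi): how much information a whole
# system's dynamics generate beyond what its cut-apart parts generate.

import itertools
import math

STATES = list(itertools.product([0, 1], repeat=2))  # all states of two binary nodes

def step(state):
    """Mechanism: each node copies the other node's current value (a 'swap')."""
    a, b = state
    return (b, a)

def mutual_information(joint):
    """Mutual information in bits from a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Whole system: uniform prior over current states, deterministic update.
joint_whole = {(s, step(s)): 1 / len(STATES) for s in STATES}
info_whole = mutual_information(joint_whole)

def cut_joint(node):
    """Joint distribution (own past, own next) for one node after the cut:
    the input it received from the other node is replaced by uniform noise."""
    joint = {}
    for s in STATES:          # the node's actual past state
        for noise in STATES:  # noise injected on the severed connection
            inp = (s[0], noise[1]) if node == 0 else (noise[0], s[1])
            key = (s[node], step(inp)[node])
            joint[key] = joint.get(key, 0.0) + 1 / (len(STATES) ** 2)
    return joint

info_parts = sum(mutual_information(cut_joint(n)) for n in (0, 1))

# Toy "integration": information generated by the whole, above its cut parts.
print(f"whole: {info_whole:.2f} bits, parts: {info_parts:.2f} bits, "
      f"integration: {info_whole - info_parts:.2f} bits")
```

Run as written, the whole system carries 2 bits about its prior state while the cut parts carry none, so the toy integration is 2 bits; the actual φ involves far more machinery, including the minimum-information partition and separate cause and effect sides, but it is designed to reward the same kind of irreducibility to parts.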
In IIT, each experience, each conscious percept, has clear characteristics: it is specific: it is what it is by how it differs from alternative experiences; it is unified: irreducible to noninterdependent components; it is unique: it has its own one-off borders and a particular spatio-temporal grain (Oizumi et al., 2014; Haun and Tononi, 2019).
These pillar concepts, all grounded in experience, are expressed by five phenomenological axioms: intrinsic existence, composition, information, integration and exclusion. These axioms are then formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define integrated information as information specified by a whole that cannot be reduced to that specified by its parts (Tononi and Koch, 2015).
Each of IIT’s five postulates defines and constrains the properties required of physical mechanisms to support consciousness (Tononi and Koch, 2015).
- (i) Intrinsic Existence. Consciousness exists of its own inherent nature: each experience is real, and it exists from its own inherent perspective; to account for experience, a system of mechanisms in a state must exist intrinsically and it must have cause–effect power.
- (ii) Composition. Consciousness is structured: each experience is composed of phenomenological distinctions; the system must be structured: subsets of system elements (composed in various combinations) must have cause–effect power upon the system.
- (iii) Information. Consciousness is specific: each experience is the particular way it is; the system must specify a cause–effect-enabling structure that is the particular way it is; the system has a set of specific cause–effect repertoires that distinguishes it from all other possible structures (differentiation).
- (iv) Integration. Consciousness is unified: each experience is irreducible to noninterdependent subsets of phenomenal distinctions; the cause–effect structure specified by the system must be unified: it must be intrinsically irreducible.
- (v) Exclusion. Consciousness is definite, in content and spatio-temporal grain: each experience has the set of phenomenal distinctions it has, not less or more, and flows at the speed it does, not faster or slower; the cause–effect structure specified by the system must be definite and is maximally irreducible intrinsically (“conceptual structure”).
It is this conceptual structure that is especially intriguing. Maximally irreducible intrinsically, it is also known as a “quale” (plural: qualia). Its arguably infinite varieties are formed when higher-order mechanisms specify concepts, with the constellation of all concepts specifying the overall form or shape of the quale. On this basis, Tononi and Koch formulate the central identity of IIT quite simply: an experience is identical to a conceptual structure that is maximally irreducible intrinsically (Tononi and Koch, 2015).
Among the questions IIT seeks to address: Why does the cerebral cortex give rise to consciousness while the cerebellum does not, though the latter has even more neurons and appears to be just as complex? Is consciousness present in coma patients, preterm infants, non-mammalian species? Can computers and artificial intelligence (e.g., large language models) become conscious as humans are conscious?
Most relevant to our Landscape is IIT’s fundamental ontology. Put simply, it begins with “the ontological primacy of phenomenal existence.” The proper understanding of consciousness, IIT states, is “true existence, captured by its intrinsic powers ontology: what truly exists, in physical terms, are intrinsic entities, and only what truly exists can cause” (Tononi et al., 2022).
Seeking to embed its theory of consciousness within a coherent metaphysical framework, IIT introduces its “0th postulate” or “principle of being.” To exist physically, IIT states, “means to have cause–effect power—being able to take and make a difference. In other words, physical existence is defined purely operationally, from the extrinsic perspective of a conscious observer, with no residual ‘intrinsic’ properties (such as mass or charge). Furthermore, physical existence should be conceived of as cause–effect power all the way down—namely down to the finest, ‘atomic’ units that can take and make a difference” (Tononi et al., 2022).
IIT’s deep conclusion is that “only a substrate that unfolds into a maximum of intrinsic, structured, specific, irreducible cause–effect power—an intrinsic entity—can account for the essential properties of phenomenal existence in physical terms.” IIT goes on to claim that “only an intrinsic entity can be said to exist intrinsically—to exist for itself, in an absolute sense. By contrast, if something has cause–effect power but does not qualify as an intrinsic entity, it can only be said to exist extrinsically—to exist for something else—say, for an external observer—in a relative sense. And intrinsic, absolute existence is the only existence worth having—what we might call true existence. Said otherwise, an intrinsic entity is the only entity worth being.”
In a crucial move, according to Tononi and colleagues, “IIT asserts an explanatory identity: an experience is identical to a Φ-structure. In other words, the phenomenal properties of an experience—its quality or how it feels—correspond one-to-one to the physical properties of the cause–effect structure unfolded from the physical substrate of consciousness. Thus, all the contents of an experience here and now—including spatial extendedness; temporal flow; objects; colors and sounds; thoughts, intentions, decisions, and beliefs; doubts and convictions; hopes and fears; memories and expectations—correspond to sub-structures in a cause–effect structure (Φ-folds in a Φ-structure)” (Tononi et al., 2022).
This means that “all contents of experience correspond to sub-structures within a maximally irreducible cause–effect structure—to Φ-folds within a Φ-structure. This applies not only to the experience of space, time, and objects, but also to conscious thoughts and feelings of any kind … Conscious alternatives, too, are Φ-folds within the Φ-structure corresponding to an experience.”
Fundamentally, then, it is IIT’s claim that when one is conscious, “what actually exists is a large Φ-structure corresponding to my experience, and it exists at its particular grain. No subsets, supersets, or parasets of that Φ-structure also exist, just as no other grains also exist. Moreover, what actually exists is only the Φ-structure corresponding to my experience, not also an associated physical substrate. Crucially, any content of my experience, including alternatives, reasons, and decisions, corresponds to a sub-structure [i.e., Φ-folds] within my Φ-structure, not to a functional property emerging from my [neural] substrate” (Tononi et al., 2022).
As its proponents put it, “IIT starts from phenomenal existence and defines physical existence operationally in terms of cause–effect power ‘all the way down,’ with no intrinsic residue, such as mass and charge … a physical substrate should not be thought of as an ontological or ‘substantial’ basis—an ontological substrate—constituted of elementary particles that would exist as such, endowed with intrinsic properties.”
This means, according to IIT, “because I actually exist—as a large intrinsic entity—[what exists is] not the neurons of my substrate as such but the Φ-structure expressing its causal powers … Moreover, because my alternatives, reasons, and decisions exist within my experience—as sub-structures within an intrinsic entity—the neuronal substrates of alternatives, reasons, and decisions cannot also exist.” If this picture is correct, IIT claims controversially, “it leaves no room for emergence or dualism of any sort” (Tononi et al., 2022).
As a defining corollary to its radical theory of consciousness, IIT claims that true free will exists, based on “the proper understanding of experience as true existence and on the intrinsic powers view: what truly exists, in physical terms, are intrinsic entities, and only what truly exists can cause.” In contrast, in materialist theories, with their ontological and causal micro-determination, much of the debate about free will has revolved not around existence but around determinism/indeterminism, leaving true free will incompatible with the physical picture (Tononi et al., 2022).
In the same set of “adversarial collaboration” experiments that tested Global Workspace Theory (9.2.3), IIT was also subjected to the putatively rigorous protocols (Templeton World Charity Foundation, n.d.). The specific IIT prediction examined was that consciousness is a kind of “structure” in the brain formed by a particular type of neuronal connectivity that is active for as long as a given experience, say, seeing an image, is occurring. This structure is said to be in the posterior cortex (the occipital, parietal, and temporal cortices in the back part of the brain). Preliminary results indicate that while “areas in the posterior cortex do contain information in a sustained manner”—which could be taken as evidence that the “structure” postulated by the theory is being observed—the independent “theory-neutral” researchers didn’t find sustained synchronization between different areas of the brain, as had been predicted. Preliminary calculations of φ from brain-scanning data, applied to simplified models of specific neural networks within the human brain such as the visual cortex, seem to correlate with states of consciousness (Lenharo, 2023a, Lenharo, 2023b, 2024). Scanning the brain as people “slip into anesthesia” is said to offer support for IIT by calculating φ “for simplified models of specific neural networks within the human brain that have known functions, such as the visual cortex” (Wilson, 2023)—though, by all accounts, the empirical neuroscience of IIT is still rudimentary.
More recently, Koch defines IIT’s consciousness as “unfolded intrinsic causal power, the ability to effect change, a property associated with any system of interacting components, be they neurons or transistors. Consciousness is a structure, not a function, a process, or a computation.” He calls out “the theory’s insistence that consciousness must be incorporated into the basic description of what exists, at the rock-bottom level of reality”—a claim that “has also drawn considerable fire from opponents.” He explains that IIT “quantifies the amount of consciousness of any system by its integrated information, characterizing the system’s irreducibility. The more integrated information a system possesses, the more it is conscious. Systems with a lot of integration, such as the adult human brain, have the freedom to choose; they possess free will” (Koch, 2024, p. 16).
Personally, I see IIT operating in three dimensions. First, measurement: IIT is a test of consciousness, assessing what things are conscious, and in those things that are, quantifying the degree of consciousness (e.g., coma patients). Second, mechanism: IIT can predict brain structures and functions involved in consciousness. Third, ontology (the most controversial): IIT speculates that the conceptual structures of qualia are “located” in some kind of “qualia space” (13.5).
The first two dimensions, IIT’s measurement and mechanism, could sit comfortably in the Materialism Theories area of the Landscape. The third, IIT’s ontology of qualia, is radically distinct, its classification unclear—which is part of the reason why I have given IIT its own category on the Landscape.42
IIT claims that integrated information is both necessary and sufficient for consciousness: necessary seems uncontroversial; sufficient is the rub to many. But what I especially like about IIT’s “conceptual structures” in “qualia space” is that IIT makes a stake-in-the-ground commitment to what consciousness per se may literally be—an appreciated rarity on the Landscape of consciousness (which does not mean that I subscribe to it).
12.1. Critiques of integrated information theory
IIT has its critics, of course, as should every scientific theory. Some like to highlight IIT’s “anti-common sense” predictions imputing consciousness to objects and things that just do not in any way seem to be conscious. The early exchange between theoretical quantum computer scientist Scott Aaronson and Giulio Tononi is illuminating (Aaronson, 2014a, 2014b, 2014c; Tononi, 2014a).
More sensational, though not necessarily more illuminating, is the open letter from 124 neuroscientists and philosophers, including leading names, that characterizes IIT as “pseudoscience,” a damning descriptor that relegates IIT to the company of astrology, alchemy, flat Earth and homeopathy. The impact is such that one can no longer discuss IIT without referencing the letter (Fleming et al., 2023).
The letter is titled “The Integrated Information Theory of Consciousness as Pseudoscience” and it expresses concern that the media, including both Nature and Science magazines, “celebrated” IIT as “a ‘leading’ and empirically tested theory of consciousness”—prior to peer review. Moreover, the letter criticizes the large-scale adversarial collaboration project as testing only “some idiosyncratic predictions made by certain theorists, which are not really logically related to the core ideas of IIT.” The letter concludes, “As researchers, we have a duty to protect the public from scientific misinformation”—thereby igniting a firestorm in consciousness studies (Fleming et al., 2023).
Nature called it an “uproar” (Lenharo, 2023a, Lenharo, 2023b). Responding, Christof Koch said, “IIT is a theory, of course, and therefore may be empirically wrong,” but it makes its assumptions very clear—for example, that consciousness has a physical basis and can be mathematically measured.
David Chalmers was quick to comment: “IIT has many problems, but ‘pseudoscience’ is like dropping a nuclear bomb over a regional dispute. It’s disproportionate, unsupported by good reasoning, and does vast collateral damage to the field far beyond IIT. As in Vietnam: ‘We had to destroy the field in order to save it’” (Chalmers, 2023).
Hakwan Lau, one of the lead co-authors of the open letter, writes in an extended response to the “uproar” that “it is already false to characterize IIT, a panpsychist theory, as being empirically tested at all in a meaningful way.” He argues that the entire field, including his own theory, is not at the stage where predictions can logically apply, stating “the advertised goal of really testing and potentially falsifying theories is unrealistic, given where the field is at the moment.” Lau concludes by doubling down: “The world has now seen the nature of the conflicts and problems in our field, which can no longer be unseen. As a matter of fact, a sizable group of researchers think that IIT is pseudoscience” (Lau, 2023).
To physicist-neuroscientist Alex Gomez-Marin, “IIT ticks too many nonmaterialist boxes. There is academic hate for nonphysicalist speech … Cancel culture has unfortunately landed in the sciences, and just now in neuroscience. Using the pseudo-word is a pseudo-argument akin to name-calling to get rid of people … We have the responsibility to tell the truth, to the best of our ability” (Gomez-Marin, 2023).
My own view straddles the barbed fence. On one side, I agree that IIT has more weight than warrant in the pop-sci and even scientific communities, and that the results of the adversarial collaboration experiments, even if they could achieve their preset objectives, would not, perhaps could not, justify the core IIT theory. Moreover, the one-on-one adversarial experiments in general, with their high publicity, give the inappropriate impression that the two protagonists are the finalists in a theory-of-consciousness “run-off,” as it were, when in fact there are many dozens of other theories, nonphysical as well as physical, still in the game.
On the other side, I do not sign on to the “pseudoscience” branding; just because IIT may not be subject to traditional kinds of scientific methodologies, such as falsification, does not ipso facto force it out of bounds. (The multiverse in cosmology faces similar kinds of criticism.43) It could be that discerning consciousness escapes traditional science methodologies, as would a majority of theory-categories on this Landscape (not that discerning truth is a democratic process).
12.2. Koch compares integrated information theory with panpsychism
Neuroscientist Christof Koch states that Integrated Information Theory (IIT) shares many intuitions with panpsychism (13), in particular that “consciousness is an intrinsic fundamental property of reality, is graded, and can be found in small amounts in simple physical systems.” Unlike panpsychism, Koch continues, IIT “articulates which systems are conscious and which ones are not [partially] resolving panpsychism’s combination problem and why consciousness can be adaptive.” The systemic weakness of panpsychism, or any other-ism, he says, “is that they fail to offer a protracted conceptual, let alone empirical, research program that yields novel insights or proposes new experiments” (Koch, 2021).
While uncertainty in theoretical development and the inconceivability of empirical experiments are indeed weaknesses, should they ipso facto disqualify the theory? Experimental verification of string theory seems impossible because the energy levels required are many orders of magnitude beyond what any instrument that could ever be built could reach, and while some argue that this incapacity to be falsified should indeed disqualify string theory as a scientific theory, many string theorists disagree, betting their careers on it.
Koch’s comparing IIT with panpsychism provides insight into both. Although admitting “I’ve always had a secret crush on the singular beauty of panpsychism,” Koch counts himself among those surprised by its resurgence. He claims that IIT addresses several major shortcomings of panpsychism—“it explains why consciousness is adaptive, it explains the different qualitative aspects of consciousness (why a ‘kind of blue’ feels different from a stinky Limburger cheese), and it head-on addresses the combination problem”—per IIT’s exclusion postulate, only systems with a maximum of Φ have intrinsic existence and are conscious (Koch, 2021).
The exclusion postulate, Koch explains, “dictates whether or not an aggregate of entities—ants in a colony, cells making up a tree, bees in a hive, starlings in a murmurating flock, an octopus with its eight semi-autonomous arms, and so on—exist as a unitary conscious entity or not.”
Koch claims that IIT “offers a startling counter-example to Goff’s claim that qualitative aspects of conscious experience cannot be captured by quantitative considerations”—“a detailed, mathematical account of how the phenomenology of two-dimensional space, say an empty canvas, can be fully accounted for in terms of intrinsic causal powers of the associated physical substrate, here a very simple, grid-like neural network” (Koch, 2021, quoting Huang). Integrated Information Theory may well be wrong, Koch says, but it “provides proof-of-principle for how quantitative primary qualities (here intrinsic causal power of simple model neurons that can be numerically computed; it doesn’t get more quantitative than that) correspond to secondary qualities—the experience of looking at a blank wall” (Koch, 2021). (For Goff’s response, see 13.8.)
13. Panpsychisms
Panpsychism is the theory that phenomenal consciousness exists because the physical ultimates—the entities of fundamental physics—have phenomenal or proto-phenomenal properties. This means that the essence of mentality, awareness, experience is a primitive, non-reducible feature of each and every part or aspect of physical reality, akin to the fundamental fields and particles of physics. Wherever there is matter-energy, perhaps even wherever there is spacetime, panpsychism says there is also something of consciousness. Everything that exists has a kind of inherent “proto-consciousness” which, in certain aggregates and under certain conditions, can generate inner awareness and experience. Panpsychism has multiple forms, nuances, and variants, as one would expect.
Panpsychism is one of the oldest theories in philosophy of mind, going back to pre-modern animistic religions, the ancient Greeks, Leibniz’s monads, and a host of 19th century thinkers (Goff et al., 2022). Of late, in reaction to the seemingly intractable hard problem of consciousness, panpsychism has been gathering adherents and gaining momentum, especially among some analytic philosophers.
Panpsychism has strong non-Western roots, not often explored. In particular, the ideas and arguments from Indian philosophical traditions—especially Vedānta, Yogācāra Buddhism, and Śaiva Nondualism—can enrich contemporary debates about panpsychism (Maharaj, 2020).
Panpsychism is also finding new supporters. Take “Kabbalah Panpsychism,” an interpretation of the Jewish mystical tradition that understands consciousness to be holographically and hierarchically organized, relativistic, and capable of downward causation (Schipper, 2021).
Yujin Nagasawa provides a careful critique of panpsychism, arguing that although it seems promising, it reaches “a cognitive dead end” in that “even if it’s true, we can’t prove it.” He challenges so-called constitutive Russellian panpsychism (14.1), which many consider to be the most efficacious panpsychist approach to the hard problem of consciousness, by arguing that it “seems caught in a deadlock: we are cognitively unable to show how microphenomenal properties can aggregate to yield macrophenomenal properties (or how cosmophenomenal properties can be segmented to yield macrophenomenal properties)” (Nagasawa, 2021).
Panpsychism’s revival, indeed its flourishing, has left some philosophers (as well as scientists) dumbfounded and dismayed. (I’d feel remiss if I did not make an exception and at least recognize panpsychism’s critics.) When I asked John Searle about panpsychism’s increasing scholarly acceptance, he said, “I don’t think that’s a serious view. If you’ve got panpsychism, you know you’ve made a mistake. And the reason is that consciousness comes in discrete units. There has to be a place where my consciousness ends and your consciousness begins. It can’t just be spread over the universe like a thin veneer of jam. Panpsychism has the result that everything is conscious, and you can’t make a coherent statement of that” (Searle, 2014a).
To physicist Sean Carroll, “our current knowledge of physics should make us skeptical of hypothetical modifications of the known rules, and that without such modifications it’s hard to imagine how intrinsically mental aspects could play a useful explanatory role.” Part of the reason is the “causal closure of the physical” such that “Without dramatically upending our understanding of quantum field theory, there is no room for any new influences that could bear on the problem of consciousness.” Other than materialism/physicalism, Carroll characterizes all theories of consciousness, including panpsychism, thus: “To start with the least well-understood aspects of reality and draw sweeping conclusions about the best-understood aspects is arguably the tail wagging the dog” (Carroll, 2021).
Here I array the nature and kinds of panpsychism on offer. I then summarize the perspectives of several well-known panpsychists.
13.1. Micropsychism
Proponents position panpsychism as a solution to the vexing problems of both materialism and dualism: remedying materialism’s apparent impotence to account for consciousness and avoiding dualism’s sharply bifurcated reality (Goff et al., 2022). The challenge, according to Chalmers, is how microphysical properties, characterized by a completed physics, relate to phenomenal (or experiential) properties, the most familiar of which is simply the property of phenomenal consciousness (Chalmers, 2013).
If panpsychism is correct, Chalmers says, there is microexperience and there are microphenomenal properties, which are obviously very different from human experience. Though a proper panpsychist theory of consciousness is currently lacking, some progress can be made.
Chalmers posits “constitutive panpsychism” as the thesis that macroexperience is (wholly or partially) grounded in microexperience. It is the thesis that microexperiences somehow add up to yield macroexperience. “Nonconstitutive panpsychism” holds that microexperience does not ground the macroexperience; rather, macroexperience is strongly emergent from microexperience and/or from microphysics (Chalmers, 2013).
In either case, traditional panpsychism is micropsychism, the position that the facts of consciousness are grounded at the micro-level. Two forms are distinguished, based on which aspect of mentality is privileged to be fundamental and ubiquitous: thought (pancognitivism) and consciousness (panexperientialism).
Panpsychism’s thorniest problem, long recognized, is the “combination problem”: How could micro-level entities with their own very basic forms of conscious experience somehow come together in brains to constitute human and animal conscious experience? The problem is severe: How could minuscule conscious subjects of rudimentary experience somehow coalesce to form macroscopic conscious subjects with complex experiences? (