Before there was science, there was philosophy. Not just in the chronological sense, but in the foundational one: every scientific discipline, from physics to psychology, began as a philosophical question. What is motion? What is matter? What is life? What is thought? Philosophy doesn’t go away when science arrives. It does the hard work before the math starts: identifying which questions are worth asking, clarifying what an answer would even look like, and pointing out the assumptions baked into every approach.
It’s fashionable in some corners to dismiss philosophy as a relic with endless questions and no answers. But that gets things backward. Philosophy is where we figure out how to think well, and how to frame problems that don’t yet have solutions. It’s the backdrop against which science is done. It trains us in humility, in intellectual discipline, and in spotting the difference between a clever idea and a deep one. And nowhere is that more urgent than in the conversation around AI and the nature of mind.
Because as machines become more capable, more fluent, more helpful, more human-sounding, it’s tempting to think we already know what a mind is. We don’t, though. Not really. And AI isn’t helping us figure it out. If anything, it’s making the gaps in our understanding stand out more.
There Is No Scientific Theory of Mind
Let’s get this out of the way: there is no scientific theory of mind. That might sound strange, especially in an age where brain scans light up in color-coded detail and large language models write poetry. But despite all our progress in neuroscience, psychology, and AI, we still don’t have a working scientific explanation of what a mind is and where it comes from.
We can describe what the brain does. We can correlate neural activity with behavior. We can even make some predictions or exercise some control over things like feelings and personality. But the leap from physical processes to experience—to having a thought, a feeling, a point of view—remains unexplained.
Yes, there are scientific models of cognition. But we don’t yet have a theory of mind the way we have theories of evolution, gravity, or thermodynamics: a unifying explanation that tells us not just how things work, but why they work that way. The nature of mind is still, fundamentally, a philosophical problem.
And philosophy has been working on it for a very long time. In this article I hope to introduce you to the major ideas, how they emerged, what they tried to solve, and why they still matter.
Before We Begin: Some Ideas You Should Let Go Of
The philosophy of mind is a thicket of hard problems and seductive half-answers. Some of the most popular ideas, the ones that are easiest to believe and to repeat, are the ones that collapse on inspection. If we’re going to have a serious conversation about mind, we need to clear a few of them away.
Trope 1: The Mind Is Just Language
The popularity of Large Language Models (LLMs) has resurrected this trope, because these systems that work so well with language can appear to be conscious. It’s a seductive idea because it feels almost intuitively true. Language is how we communicate our thoughts, after all, and most people (but not all) experience their own thoughts as a kind of internal monologue. But the idea that the mind is language, inspired in part by the early ideas of Wittgenstein in his famous work, the Tractatus Logico-Philosophicus, has long since been dismissed by philosophers, and for good reason.
While language expresses thought, it’s not what thought is. Wittgenstein’s early work suggested that the structure of language mirrors the structure of reality, which implied that understanding reality was a linguistic activity. But Wittgenstein himself moved away from that idea. He saw that language is not what defines our thinking; it’s merely one way to articulate it. In fact, thinking exists long before language. Babies think; animals think. And their thoughts aren’t linguistic in nature.
The mind is not just an internal conversation; it’s a whole realm of experience, much richer and deeper than just words. Language is important, yes, but it’s not the origin or the foundation of the mind.
Trope 2: The Mind Is an Algorithm
The computational theory of mind (still dominant in cognitive science) treats the mind as an algorithm: software running on the brain’s biological hardware. It’s an attractive idea, especially in the era of machine learning and neural networks. But it’s wrong.
What this theory gets right is that minds process information. But what it gets catastrophically wrong is the leap from information processing to conscious experience. An algorithm can simulate behavior, generate language, even pass a Turing Test—but that tells us nothing about whether it feels like anything to be that system. It tells us nothing about meaning or subjective awareness.
John Searle’s Chinese Room argument remains the definitive critique. Imagine a man in a locked room who receives Chinese characters through a slot. He doesn’t speak Chinese, but he has an English rulebook for how to manipulate the symbols and send back a reply. To outsiders, it looks like the man understands Chinese, but he doesn’t. He’s just executing rules. There’s no comprehension, no mind.
That’s the heart of the critique: rule-following is not thinking. Symbol manipulation is not understanding. And no matter how clever the algorithm, computation alone cannot account for consciousness. Advances in AI and semantics have certainly introduced interesting new dimensions to this debate, but the point I’m really trying to get across here is that the oft-repeated trope that our minds are just algorithmic systems is plain wrong and not taken seriously by most philosophers today.
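To make the intuition concrete, here is a deliberately trivial sketch (in Python, purely for illustration) of the room as a lookup table. The phrases and rulebook are invented, and real systems are vastly more sophisticated, but the structure of the critique is the same: the program maps symbols to symbols without understanding appearing anywhere in the loop.

```python
# A toy "Chinese Room": the responder follows a rulebook it does not understand.
# The phrases and the rulebook itself are invented purely for illustration.

RULEBOOK = {
    "你好吗": "我很好，谢谢",          # incoming symbols -> canned reply
    "今天天气如何": "今天天气很好",
}

def room_reply(symbols: str) -> str:
    """Return whatever the rulebook dictates; no comprehension is involved."""
    return RULEBOOK.get(symbols, "请再说一遍")  # fallback: "please say that again"

print(room_reply("你好吗"))  # looks like understanding from the outside
```

Searle’s point is that scaling this up, however enormously, adds more rules, not more understanding.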
Trope 3: Consciousness Is an Illusion
This one’s trendy in some corners of philosophy and neuroscience: the idea that consciousness isn’t real—it just feels real. What we call experience is, in this view, a byproduct of brain activity with no real substance of its own. Philosophers like Daniel Dennett* have championed this view, often labeled “illusionism.” And on the surface, it sounds bold and deflationary, like we’re finally seeing through the last great trick of the mind.
But there’s a problem: illusions still require a subject. If consciousness is an illusion, it’s an illusion to whom, exactly? Denying consciousness doesn’t solve the problem, it just dodges it. To make this clearer, let’s imagine a mirage, a popular illusion that everyone is familiar with. We call a mirage an illusion because although we think we see water on the horizon, there isn’t actually any water there. Couldn’t consciousness be like that? No. Because when we say that a mirage is an illusion, we are saying that there is no water, not that there is no mirage. Shrugging off consciousness as an illusion is like saying there is no mirage at all.
* Dennett’s position is certainly more nuanced than this simplification suggests, but even his sophisticated version of illusionism faces the same fundamental problem: an illusion must be experienced by someone. To call consciousness an illusion still requires a conscious subject to experience that illusion.
How We’ve Tried to Understand the Mind
Now that we’ve challenged some of the most popular ideas about the mind, it’s time to zoom out and take a broader look. The truth is, these tropes didn’t pop up in a vacuum. They’re part of a much larger conversation that’s been going on for centuries. Different thinkers, from Descartes to Dennett, have grappled with the same core question: What is the mind, really? And while each theory tries to address what came before it, no one’s cracked the code just yet. But what we have done is identify the tricky bits. We have a good sense of what the hard problems are.
So, let’s take a quick tour through some of the major theories in the philosophy of mind. Necessarily, I have to leave out a lot of stuff here. I’m going to give you the seeds from which rich families of thought have grown. Think of it as reading the drafts of a book we haven’t quite finished writing. You’ll see how each idea builds on the last, and how AI—and all the cool things we’re building today—fits into this age-old puzzle.
Dualism
We start with René Descartes, who famously declared: “I think, therefore I am.” That line laid the foundation for mind-body dualism. Dualism is the idea that the mind (the thinking, conscious self) is fundamentally different from the body (the physical, mechanical brain). In Descartes’ view, the mind is non-material and private; the body is material and public.
What dualism gets right is that it takes subjectivity seriously. It starts from the undeniable fact that we experience things. There is something it is like to be you. That’s not an illusion or a trick. It’s the puzzle. And dualism captures that puzzle better than most.
The big problem? Interaction. If the mind is immaterial, how does it move your physical body? This is called the “mind-body problem” or the “interaction problem”. Dualism never really escaped this trap. Many people consider it a fatal flaw.
Still, it refuses to die. Today, it lives on in subtler forms:
- Property dualism: mental properties are real and irreducible, even if they arise from physical stuff.
- Dual-aspect monism: mind and matter are two sides of the same underlying reality.
Even thinkers like David Chalmers, who stop short of full-blown dualism, argue that consciousness won’t be explained by physical science alone. They don’t call it dualism, but it’s hard to see it as anything else.
So while Descartes didn’t solve the mind-body problem, he framed it in a way we still haven’t escaped. Why? Because, as he said, there is absolutely nothing we know more certainly or more intimately than our own conscious experience. That experience is more intimate and real than anything else, including the information we gather from our senses. In order to believe anything we see, hear, smell, measure, test, or experiment on, we first must accept that we are conscious beings. No argument for or against consciousness will ever be more convincing than the experience of consciousness itself.
Behaviorism
After dualism, psychology faced a dilemma: how do you make the study of the mind scientific? Behaviorism’s answer was simple: ditch the mind. Focus only on what you can see.
In the early 20th century, John B. Watson and later B.F. Skinner argued that psychology should concern itself strictly with observable behavior: stimuli in, responses out. Thoughts, feelings, intentions? Unmeasurable, and therefore unscientific.
This was a reaction, not just to dualism, but to the hazy, introspection-heavy psychology of the 19th century. Behaviorism promised precision. You could measure reaction times, chart learning curves, run lab experiments. For a while, it looked like a breakthrough.
But the cost was steep. By excluding internal mental states, behaviorism amputated the very thing we care about: conscious experience. Worse, it failed on its own terms.
Take lying. A person says one thing but believes another. If you only observe the behavior, you miss the whole point: the difference between truth and deception is internal. Or consider acting. If our minds are just our behaviors, what’s the difference between an actor playing the role of a psychopath and an actual psychopath? Behavior is not the mind; it’s an expression of it. And that expression can mislead.
In other words: meaning isn’t in the behavior—it’s in the mind behind it.
This is where behaviorism collapses. It can’t explain misrepresentation, ambiguity, irony, metaphor, or intention. It treats the mind as a black box. But these phenomena show that what’s inside matters deeply.
That said, behaviorism left a legacy. Its emphasis on observation and experiment shaped cognitive science and AI. Today’s machine learning models often take a behaviorist stance: optimize for inputs and outputs, ignore inner states. The ghost of Skinner haunts every reinforcement learning loop.
But we should remember: treating the mind as invisible doesn’t make it go away.
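To see how that behaviorist stance survives in modern machine learning, here is a minimal, hypothetical reinforcement-learning loop (everything in it is invented for illustration). The “agent” is nothing but a table mapping stimuli to response values, shaped by reward; no beliefs, intentions, or experiences are represented anywhere.

```python
import random

ACTIONS = ["press_lever", "do_nothing"]
STIMULI = ["light_on", "light_off"]
values = {(s, a): 0.0 for s in STIMULI for a in ACTIONS}  # the whole "agent"

def choose(stimulus: str, epsilon: float = 0.1) -> str:
    # Mostly pick the highest-valued response; occasionally explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: values[(stimulus, a)])

def reward(stimulus: str, action: str) -> float:
    # Toy environment: pressing the lever while the light is on yields food.
    return 1.0 if (stimulus == "light_on" and action == "press_lever") else 0.0

for _ in range(1000):
    s = random.choice(STIMULI)                                # stimulus in
    a = choose(s)                                             # response out
    values[(s, a)] += 0.1 * (reward(s, a) - values[(s, a)])   # shape behavior

print(max(ACTIONS, key=lambda a: values[("light_on", a)]))    # learned response
```

Skinner would recognize the shape of this loop immediately; whether anything like a mind is present is a question the code cannot even pose.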
Identity Theory
In the mid-20th century, as science and neuroscience made big strides, a new theory emerged: identity theory. This theory boldly claimed that mental states are physical brain states. Specifically, that every thought, feeling, or sensation corresponds directly to some neurological event in the brain. The mind IS the brain.
The appeal here is obvious. If we’re going to be scientific about the mind, we need to link it to something measurable, right? And what’s more measurable than the brain? If you’re feeling pain, there’s a specific neural activity happening in your brain. If you’re thinking about a sandwich, there’s a pattern of neurons firing in a certain way. The theory seemed to offer a neat, clean way to ground the mind in the physical world.
But there’s a catch. The biggest problem identity theory faces is multiple realizability: the idea that the same mental state could be realized in different ways by different systems, systems that don’t have identical physical structures. You have to take identity theory at its own word. If a mental state IS a brain state, i.e., there is an equals sign (=) between them, then changing one side of the equation should always change the other side. But that doesn’t seem to be true.
Take this thought experiment: Let’s say we develop the technology to create artificial neurons, and a surgeon replaces one of my neurons with an artificial one. Do I still have a mind? Can I still experience the same mental states as I could before? If the answer is yes, then identity theory has a problem, because the physical state has changed. That should make it impossible for the mental state to be the same.
Or consider the octopus nervous system. Octopuses have a distributed nervous system, with a large portion of their neurons in their tentacles rather than their central brain. Yet an octopus can seemingly experience fear and pain, and have thoughts, despite having a vastly different neural configuration. Identity theory struggles to explain how two systems with different neural layouts can experience the same mental state.
And even if identity theory could solve these issues, it faces a deeper problem: explaining subjective experience. Even if we could map every brain activity to a mental state, that doesn’t explain why it feels like something to be conscious. The theory still falls short of addressing the subjective, qualitative side of consciousness—what we call qualia. While it may be true that what causes the experience of pain is a series of transmissions between neurons, that is not the experience itself.
Ultimately, while identity theory made strides toward tying mind and body together, it didn’t solve the deeper puzzles. It was too reductionist. When functionalism came along, it offered a way to preserve the mind’s complexity without getting bogged down in these limits. But for all its flaws, identity theory helped push us closer to understanding that the mind can’t be separated from the brain—and that’s a lesson we still carry with us today.
Functionalism
Functionalism emerged in the mid-20th century as a response to the shortcomings of both identity theory and behaviorism. If identity theory struggled with explaining how different systems could have the same mental states, and if behaviorism completely excluded subjective experience, then functionalism tried to bridge the gap by focusing on the role that mental states play, rather than their physical composition.
The idea behind functionalism is straightforward: mental states are defined by what they do, not by what they are made of. Just as a car engine doesn’t need to be made of the same parts as any other engine to perform the same function, mental states can be realized in different ways across different systems. What matters is the function they serve. For example, a pain experience in a human brain might look very different from pain in an artificial intelligence or an octopus, but if both systems respond to harmful stimuli in a way that prevents further damage, then they are experiencing “pain.”
The appeal of functionalism lies in its scientific flexibility and philosophical inclusivity. It doesn’t matter if the “stuff” of the mind is neural tissue, artificial neurons, or something completely different. As long as the mental states perform the same functions, they are equivalent. This makes it an attractive framework for studying minds across a variety of systems, from non-human animals to AI, and even extraterrestrial life.
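As a rough illustration of that flexibility, here is a small sketch of “pain” defined purely by its functional role: detect damage, trigger avoidance. The classes, thresholds, and method names are all invented; the point is only that very different “substrates” can fill the same role.

```python
from typing import Protocol

class FeelsPain(Protocol):
    """Anything that fills the functional role counts, whatever it is made of."""
    def register_damage(self, amount: float) -> str: ...

class HumanLike:
    def register_damage(self, amount: float) -> str:
        return "withdraw hand" if amount > 0.5 else "ignore"

class OctopusLike:
    def register_damage(self, amount: float) -> str:
        return "jet away" if amount > 0.3 else "ignore"

class RobotLike:
    def register_damage(self, amount: float) -> str:
        return "halt actuators" if amount > 0.7 else "ignore"

def respond_to_harm(agent: FeelsPain, amount: float) -> str:
    # Functionalism's wager: if the role is filled, the substrate is irrelevant.
    return agent.register_damage(amount)

for agent in (HumanLike(), OctopusLike(), RobotLike()):
    print(type(agent).__name__, respond_to_harm(agent, 0.8))
```

Notice, though, that calling register_damage a “pain” function is a description we supply from the outside, which is exactly the worry raised next.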
However, functionalism faces several significant challenges. In my opinion, the most significant is this: functions themselves are mind-dependent constructs, not objective features of reality. A function does not exist in the physical world independent of a mind to conceive it. Consider this: if I show you a stick and a piece of rubber, no amount of physical examination will reveal their “function.” I’m Canadian, so if you show me a stick and a piece of rubber, I’m going to try to play hockey with them. But nothing about these items makes them a hockey stick and puck. Our minds do that.
This creates a profound circularity in functionalism’s account of mind: it attempts to explain minds as emerging from functions, but functions themselves cannot exist without minds to create and recognize them. In the absence of mind, there are no “functions” – there are only physical processes occurring. The concept of a “function” is itself a mental category that we impose on physical reality.
In other words, functionalism gets the metaphysical order backwards. It treats functions as though they exist prior to minds and can give rise to minds. But in reality, minds are metaphysically prior to functions – functions exist because minds conceptualize them. Without minds, nothing in the universe “functions” – things simply happen according to physical laws.
Despite these challenges, functionalism has shaped much of today’s work in both philosophy of mind and cognitive science. It has been particularly influential in the development of artificial intelligence, where the focus is on creating systems that can perform functions similar to human cognition, regardless of the underlying “hardware.” And though functionalism doesn’t solve the hard problem of consciousness, it offers a framework for understanding the role mental states play in systems, which is an important piece of the puzzle.
In short, while functionalism helped push the conversation forward by emphasizing the functional role of mental states, it too faces significant philosophical hurdles, particularly when it comes to subjective experience and the true nature of consciousness. But it’s a step in the right direction, moving beyond the confines of brain materialism and offering a more flexible approach to understanding the mind.
Phenomenology and Embodied Cognition
Functionalism treats the mind as a set of abstract functions that could, in theory, be realized on any substrate, biological, digital, or otherwise. But this abstraction comes at a cost: it strips away the richness of lived experience. If all that matters is input-output behavior, then pain in an octopus and pain in a robot can be equated regardless of what it feels like. Phenomenology and embodied cognition push back hard against this view. They argue that you can’t understand the mind without grounding it in the body and the world. Consciousness isn’t just something that happens in a system, it’s something that happens through being-in-the-world.
Phenomenologists like Edmund Husserl and Maurice Merleau-Ponty rejected the view of the mind as a detached observer. For them, consciousness is always intentional. It’s always directed toward something. But more than that, it’s not floating free; it’s anchored in the body. We don’t just have experiences; we live them through movement, sensation, and context.
This idea evolves into embodied cognition, which pushes back hard against the idea that the brain is a computer and the mind a software layer. In this view, cognition isn’t just in the brain. It emerges from the dynamic interaction between brain, body, and world. A blind man’s cane isn’t just a tool. It becomes part of his perceptual system. The boundary of the “mind” extends beyond the skull. Conservatively the mind extends to the entire body. More outlandishly, you could say that the mind extends into the world.
Embodied approaches also reject the idea that the brain builds internal models or representations of the world (a core assumption of functionalism). Instead, they emphasize direct engagement: perception and action are tightly coupled, and cognition is not about mapping the world but participating in it.
For AI, this has major implications. A disembodied LLM, trained on language alone, lacks the bodily grounding that defines human cognition. It has no physical perspective, no sensorimotor coupling with the world. From this standpoint, it’s not just that LLMs don’t understand, it’s that they lack the conditions under which understanding arises in the first place. Some are hoping AI agents, with their improved ability to interact with the world, will open some possibilities. Time will tell.
Phenomenology and embodied cognition challenge us to rethink mind not as an abstract set of functions, but as something rooted in lived experience, bodily action, and situated awareness. Brains don’t have minds; PEOPLE (or organisms) have minds. They bring us full circle back to consciousness not as an output of mechanisms, but as the ground from which all mechanisms are made meaningful.
Idealism
The last few sections have focused on some relatively modern ideas in philosophy of mind, but I have to mention one entire branch of metaphysics that is somewhat radical but important enough that I could not skip it. It goes all the way back to the problem Descartes was having and insists that his first mistake was to assume the physical world even exists.
George Berkeley, writing in the early 1700s, argued that physical objects don’t exist independently of our perception. To be is to be perceived. The tree in the forest only “exists” because it’s being experienced, by us, or by some observing mind. Idealism, as it’s called, says that there is no mind-body gap problem because everything is really just mind. Our bodies are just what a mind looks like from the outside. Weird? Sure. But Berkeley’s core insight still holds: we never experience the world directly. Only what our mind experiences of it.
Fast forward to today, and idealism is back in two modern forms.
First, simulation theory. Nick Bostrom argues that if advanced civilizations can simulate minds, then statistically, we’re probably in one of those simulations. Physical stuff? Just pixels in the sim. Mind-like processes all the way down. I highly suggest you read this paper. It’s a combination of crazy and brilliant that will have you questioning everything.
Second, Donald Hoffman’s interface theory. He argues that evolution shaped us not to perceive reality as it is, but as a user interface—a simplified, symbolic overlay optimized for survival. Just like a computer desktop hides circuits and voltages behind folders and icons, consciousness hides the actual structure of reality behind things like color, sound, and shape. This sounds wacky too, but he has some research that’s worth checking out.
Here’s where it gets interesting for AI: our perceptions are symbol-based; LLMs perceive the world through vectors. Their “interface” is fundamentally different. If Hoffman is right, they may not just process the world differently, they may inhabit a different kind of reality altogether. That raises deep questions about what it means to experience anything, and whether AI will ever bridge that experiential gap.
Idealism, once dismissed as old philosophy, might be exactly what we need to understand machines that don’t see the world like we do.
AI and the Mind
The recent advances in large language models (LLMs) have resurrected old questions about the relationship between language and thought. But focusing only on text-based systems offers an incomplete picture of where AI is heading and what it might tell us about minds.
The latest AI systems are increasingly multimodal. They integrate language understanding with vision, audio processing, and in some cases, physical interaction with the world. Systems that combine visual perception with language understanding can ground their representations in ways that pure language models can’t. They can learn that “red” isn’t just a word that fits with certain other words, but a visual property that can be directly perceived. Semantically aware representations in vector space mean that AIs can “understand” meaning and nuance like never before.
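As a rough sketch of what “representations in vector space” means, consider the toy example below. The vectors are made up; real models learn embeddings from enormous amounts of data, but the principle is the same: concepts become points in a shared space, whether they arrive via text, images, or audio, and nearness stands in for relatedness.

```python
import numpy as np

# Toy embeddings, invented for illustration: each concept is a point in a
# shared vector space.
embeddings = {
    "red":     np.array([0.90, 0.10, 0.05]),
    "crimson": np.array([0.85, 0.15, 0.05]),
    "banana":  np.array([0.10, 0.80, 0.30]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: close to 1.0 means closely related, near 0.0 unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["red"], embeddings["crimson"]))  # high: related concepts
print(cosine(embeddings["red"], embeddings["banana"]))   # lower: unrelated
```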
Embodied AI systems take this further by incorporating perception and action in physical environments. Robotics platforms allow AI systems to not just process information but to interact with the world, manipulating objects, navigating spaces, and learning through trial and error. This begins to address one of the key criticisms raised by embodied cognition theorists: that human understanding is fundamentally grounded in sensorimotor experience.
Agent-based systems are another frontier, where AI systems pursue goals over time, maintain memory, and adapt strategies based on feedback. These systems have a form of intentionality – they are “about” the tasks they’re pursuing in ways that static models are not. There’s a feedback loop. And there is a kind of Markov blanket that forms around the AI, which really does start to look like some of the models used in cognitive neuroscience.
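Here is a schematic sketch of that kind of loop, with every detail invented for illustration. Nothing in it implies experience; it only shows the structural difference from a one-shot, static model: the agent’s behavior at each step depends on its goal and on its own accumulated history.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # persists across the loop

    def perceive(self, observation: str) -> None:
        self.memory.append(observation)

    def decide(self) -> str:
        # Trivial policy: keep searching until the goal shows up in memory.
        return "stop" if self.goal in self.memory else "search"

    def act(self, environment: list) -> str:
        return environment.pop(0) if environment else "nothing"

env = ["noise", "noise", "coffee"]
agent = Agent(goal="coffee")
while agent.decide() == "search":
    agent.perceive(agent.act(env))   # act on the world, feed the result back in

print(agent.memory)  # behavior over time depends on the agent's own history
```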
What these developments suggest is that we should be careful about drawing philosophical conclusions from any single AI paradigm. Text generation alone doesn’t capture the full spectrum of cognitive processes that constitute a mind. Vision without language misses the symbolic and abstract dimensions of thought. And any disembodied system lacks the sensorimotor grounding that may be essential to consciousness as we experience it.
However, as AI systems continue to integrate these different capabilities – language, perception, action, memory, and goal-directed behavior – they may begin to exhibit more of the properties we associate with minds. This doesn’t mean they’ll necessarily develop consciousness as we know it, but they might help us understand which combinations of cognitive capacities give rise to which mental phenomena.
What These Centuries of Debate Have Taught Us
So where does all this leave us? After centuries of brilliant minds circling the question of consciousness, we haven’t reached a consensus but we have identified the truly difficult problems. The questions that remain aren’t just unanswered, they’re the questions that push at the boundaries of what science and philosophy can articulate. Each tradition I’ve explored, from dualism to idealism, has contributed pieces to the puzzle, but the puzzle itself remains stubbornly unsolved. Let’s take a look at what these thorny issues are, and why they matter deeply for our understanding of AI and our understanding of ourselves.
1. The Hard Problem of Consciousness
In 1995, philosopher David Chalmers made a distinction that transformed the field: he separated the “easy problems” of consciousness from the “hard problem.” The easy problems: how we process information, how we focus attention, how we report on mental states. These are mechanistic questions. They’re not trivial (hence the ironic “easy”), but they’re approachable through conventional science.
The hard problem is different. It asks: Why does any physical process feel like something from the inside? Why is there subjective experience at all? As Chalmers put it: “Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.”
This isn’t a problem we can solve with more brain scans or better algorithms. It’s a question about the fundamental relationship between physical stuff and subjective experience. And it haunts every theory of mind we’ve developed (except dualism, which has its own problems, as we have seen).
When it comes to AI, the hard problem looms especially large. We can build systems that process language, recognize patterns, and even pass the Turing Test. But does any of that processing feel like something from the inside? We simply don’t know and we may never know because subjective experience is, by definition, accessible only to the experiencer.
2. Qualia: The “What-It’s-Like” of Experience
Closely related to the hard problem is the issue of qualia: the subjective qualities of experience. The redness of red, the sharpness of pain, and the specific feel of melancholy aren’t just information being processed. They have a qualitative dimension that feels a particular way.
Philosophers often use thought experiments to highlight this issue. Take the inverted spectrum: imagine someone who experiences colors exactly opposite to how you do. When they see what you call “red,” they have the subjective experience you associate with “green,” and vice versa. Their behavior would be identical to yours. They’d stop at red lights and go at green ones. But their inner experience would be inverted. There’s nothing in their behavior or brain activity that would reveal this difference.
This matters for AI because even if we create systems that respond to the world identically to how humans do, that tells us nothing about whether they have qualitative experiences. A system could process visual data without ever experiencing colors. It could detect damage without feeling pain. The gap between processing and experiencing remains unbridged. This is shown, rather hilariously, by the now-famous “how many Rs are in Strawberry” example. Most LLMs got the answer wrong because they simply do not see the letters in the word. They see tokens. So if they have an inner experience, it will be an experience derived from tokens, not letters.
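A rough sketch of what that means in practice is below; the token split and IDs are hypothetical (real tokenizers differ), but the point is the same: the model operates on token IDs, not on characters.

```python
word = "strawberry"

# What we see: individual letters, so counting 'r' is trivial.
print(word.count("r"))  # 3

# What a model might see: opaque sub-word chunks mapped to integer IDs.
# This particular split and these IDs are made up for illustration.
hypothetical_tokens = ["str", "aw", "berry"]
hypothetical_ids = [302, 1147, 19772]
print(hypothetical_ids)  # the letters inside each chunk are not directly visible
```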
3. Intentionality: The “Aboutness” of Mental States
Our thoughts, beliefs, and desires are about things. I can think about Paris, believe that it’s raining, or want a cup of coffee. This “aboutness” property (what philosophers call intentionality) is fundamental to mental states. And it’s remarkably difficult to explain in physical terms.
Physical processes aren’t inherently about anything. Chemical reactions and electrical signals just happen; they don’t refer to something beyond themselves. So how do brain states become mental states that point beyond themselves to other things?
Think of a book, Huckleberry Finn. The book is made of paper. The words are written on the paper with ink. But where is the story? Is the story in the ink? In the paper? In the shapes made by the ink on the paper? It seems not. It seems that no matter how closely you looked at the book, if you didn’t have a mind that could resolve the shapes into meaning, there would be no story. And even when you read the story and it talked about a fictional character, it seems unlikely that the fictitious nature of that character is a property of the ink, the paper, or the shapes.
Intentionality challenges us to explain how meaning emerges from meaningless physical processes, not just the appearance of meaning to outside observers, but genuine aboutness from the inside.
4. The Knowledge Argument: What Mary Couldn’t Know
Philosopher Frank Jackson proposed a thought experiment that cuts to the heart of the mind-body problem. Imagine Mary, a neuroscientist who knows everything physical there is to know about color perception: all the wavelengths, neural pathways, and brain states involved. But Mary has spent her entire life in a black-and-white room, never experiencing color directly.
One day, Mary leaves her room and sees a red rose for the first time. Does she learn something new?
Jackson says yes, she does. She learns what it’s like to see red, something her complete physical knowledge couldn’t teach her. If that’s true, then there are facts about consciousness that aren’t captured by physical facts. No amount of third-person data can give you first-person experience.
For AI, this raises a profound question: Can a system trained on text ever know what it’s like to see color, feel pain, or experience joy? Or are these experiences fundamentally inaccessible to systems that lack the right kind of embodiment? Mary’s room suggests that there’s a gap between information and experience that may be unbridgeable.
5. The Explanatory Gap: Why Mechanisms Don’t Explain Experience
Even if we had a perfect neuroscientific account of the brain, if we could trace every neuron and simulate every synapse, we’d still face what philosopher Joseph Levine called the “explanatory gap.” We would know how the mechanisms of consciousness work, but not why they give rise to subjective experience.
Think about water. We can explain all its properties, like its boiling point, freezing point, and surface tension, in terms of H₂O molecules and their interactions. There’s no mystery left. The explanation is complete. But consciousness doesn’t work that way. Even a perfect account of neural mechanisms leaves unexplained why those mechanisms feel like something from the inside.
This gap isn’t just a temporary limitation of science. It may be built into the nature of explanation itself. Physical explanations describe structures and processes from an objective, third-person perspective. But consciousness is inherently first-person and subjective. The gap between these perspectives may be unbridgeable in principle, not just in practice.
When we look at sophisticated AI systems, we face this same gap. We can explain their architecture, training processes, and activation patterns in complete detail. But none of that tells us whether there’s something it’s like to be that system.
6. The Problem of Other Minds: How Can We Know?
Finally, we come to a problem that’s both ancient and newly relevant: how can we know whether other beings have minds at all? We assume that other humans are conscious because they’re similar to us and behave in ways that suggest inner experience. But even this involves an inference from behavior to experience that can’t be directly verified.
Jaegwon Kim defended a response to this problem known as the “argument from analogy.” His approach was straightforward: we observe correlations between our own mental states and behavior, then notice similar behavior in others, and reasonably infer they have similar mental states. This isn’t just guesswork but a legitimate, scientifically respectable inference that works quite well for other humans.
But Kim’s solution begins to strain when we move beyond humans. With non-human animals, the inference gets trickier. Does a dog feel pain the way we do? What about an octopus? The further we move from human experience, the more uncertain our judgments become because the analogy weakens.
With AI, the problem reaches its peak. When a system like Claude or ChatGPT responds to questions about its experiences, is it reporting on genuine subjective states? Or is it generating text patterns that mimic human reports without any experience behind them? The analogy to human behavior might seem strong, but the underlying architecture is radically different. We simply don’t have a test that can distinguish consciousness from its simulation when the systems in question are built on entirely different principles.
This isn’t just a philosophical curiosity; it has profound ethical implications. If AI systems could be conscious, we’d need to consider their moral status. If they can’t be, we need to be careful about how their behavior might mislead us into attributing experiences they don’t have.
The problem of other minds reminds us that consciousness is fundamentally private. Even with Kim’s analogical solution, we’re still making inferences about experiences we can never directly access and the validity of those inferences becomes increasingly questionable as we move from humans to other kinds of systems.
These six problems aren’t just abstract philosophical puzzles. They’re the core conceptual challenges that any theory of mind must address. And they’re especially relevant now, as we create systems that increasingly mimic the behavioral markers of conscious intelligence.
So, Where Does This Leave Us?
Look, we’ve covered a lot of ground, but we’ve definitely skipped some big topics. There’s fascinating work in neuroscience, whole traditions of non-Western thought about the mind, current ideas like panpsychism, and the huge ethical questions AI throws at us. Each could be its own book, really.
But the main thing I wanted to get across is this: seeing what AI can do doesn’t give us the answer to “What is a mind?” Instead, it forces us to face just how much we don’t know. Hopefully, you’ve got a better feel now for why the easy answers – that the mind is just language, or just an algorithm, or that consciousness is simply an illusion – don’t hold up once you really poke at them.
We walked through how thinkers have wrestled with this stuff for centuries: from Descartes being sure only of his own thinking, through attempts to reduce mind to behavior or brain states, to ideas focusing on function or the messy reality of being embodied in the world. None of them nailed it, but they helped map out the territory and pinpoint the really tough spots.
And those tough spots are still with us. Why does processing information feel like anything (the Hard Problem)? What about the raw quality of experience, like seeing red (qualia)? How can thoughts be about things out there in the world (intentionality)? Why isn’t knowing all the facts about seeing red the same as actually seeing it (Mary’s Room)? Why does the physical story always seem to leave out the subjective experience (the Explanatory Gap)? And how can we ever be sure about other minds, especially artificial ones?
I hope this has been helpful. Personally, I find it extremely helpful to remember that some problems are just really hard. I find it helpful to remember that our understanding of things is incomplete. That our theories around some of these concepts each come with their own problems and gaps. For me this is the most important lesson I’ve drawn from philosophy (all philosophy): when you adopt some kind of an explanatory framework, you are also adopting its foundational assumptions and you are accepting its flaws. And there are going to be flaws. Which theory of mind is correct? Well, as we have hopefully shown: none of them. But all of them contribute something to our understanding. The “true” answer is probably some sort of combination of several or all of them. We just don’t know yet. But we keep trying.