The Explanatory Gap of AI

Text-to-image generation by Midjourney for the caption "As above. So below."

Whether an AI understands is a topic of debate. Seldom debated are the limits of our own understanding. We imagine our understanding as complete and establish it as the baseline for AI. When we accept that our understanding is incomplete, a different story appears altogether.

The fundamental issue goes like this:

An AI may describe an apple as “sweet” though it has never tasted one, “red” though it has never seen one, and “smooth” though it has never felt one. So while an AI may skillfully describe the word “apple” and use it appropriately in context, in what way, if any, can we say an AI understands the word “apple”?

Stevan Harnad named this the “symbol grounding problem.” He reasoned that the symbols we use must be grounded in their referents – in phenomena – to be meaningful. After all, the word “apple” written in a book has no meaning on its own. It is only when the word is read, and reference and referent are linked in a mind, that the symbols take on meaning. Harnad writes that meaning in symbols is “parasitic on the meanings in our heads”.1 To Harnad an AI doesn’t understand because it does not ground symbols in their referents; only human minds are hosts of meaning.

Yet now we have AIs like OpenAI’s Dall-E that seem to ground text in images.2 Is Harnad wrong? Do these AIs understand something previous ones did not? Predating Harnad, John Searle addressed this issue. In Minds, Brains and Programs, Searle fashions and then refutes the “Robot Reply” argument. He describes an AI that would “have a television camera attached to it that enabled it to ‘see,’ it would have arms and legs that enabled it to ‘act,’ and all of this would be controlled by its computer ‘brain’.”3 Yet to Searle, this “adds nothing” to the argument. We are simply introducing more symbols, and to exploit correlations in symbols between modalities is no different than exploiting correlations within a modality. To Searle, grounding was neither the problem nor the solution to deriving semantics from syntax.

The computer understanding is not just…partial or incomplete; it is zero. John Searle - Minds, Brains, and Programs

So just what is a grounded AI lacking? On this question, Harnad and Searle are silent. Less silent were the empiricists who preceded them. We attempt to get at our question by asking what they asked: what are human minds lacking?

The empiricists accept that the sweetness, redness and smoothness of an apple are manifestations of our minds. Outside the mind, apples have no taste, color or texture. These properties make up the phenomena of apples. To know what an “apple” is outside the mind is to know what Kant called the noumenon of an apple: the apple “in itself.” Kant acknowledged that the existence of things outside the mind can only be “accepted on faith” and famously called this dilemma a “scandal to philosophy.”4 Hume considered our belief in external objects a “gross absurdity”,5 and Berkeley denied not only our knowledge of external reality, but its very existence.6 Several centuries later, the issue is far from settled.

So what is a human mind lacking? One answer is that we lack understanding of the noumena.

Our brains receive only representations of reality: electrical stimuli arriving through the optic, cochlear, and other nerves. We have no hope of knowing what these stimuli are at the most fundamental level – their nature eludes us. Our brains feast on physical reality and know its referent not.

AIs receive only representations of our phenomenological experience: word tokens for language, RGB matrices for images, and so on. They have no hope of knowing what our experience is at the most fundamental level – its nature eludes them. AIs feast on computational reality and know its referent not.

In both cases the inputs are syntactic. What makes reasoning about AI semantics confusing is that its input is a representation of our output: our phenomena. Is not an image of an apple a representation of our visual experience? Is not the embedding of the word “apple” a representation of our semantic concept? Our realities are nested. Our phenomena are the AI’s noumena.
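To make the nesting concrete, here is a minimal illustrative sketch in Python of what an AI actually receives. The byte-level token IDs and the hand-written pixel patch are stand-ins chosen for illustration, not the encoding of any particular model.

```python
# Illustrative only: what a model "receives" is already symbolic.
# Byte values stand in for token IDs; a tiny hand-made patch stands in
# for an image encoder's RGB input. No real model or library is implied.

# The word "apple" as a language model might see it: a sequence of integers.
word = "apple"
token_ids = list(word.encode("utf-8"))
print(token_ids)        # [97, 112, 112, 108, 101] -- symbols, not sweetness

# The sight of an apple as a vision model might see it: a grid of RGB triples.
rgb_patch = [
    [(200, 30, 40), (210, 35, 45)],
    [(195, 25, 38), (205, 32, 42)],
]
print(rgb_patch[0][0])  # (200, 30, 40) -- numbers, not redness

# Both are representations of our phenomena (a word heard, an apple seen),
# yet to the model they are only structure: syntax without our referents.
```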

The philosopher Colin McGinn is the best-known proponent of the epistemological position called “Mysterianism.” It posits that certain properties can be both real and fundamentally unknowable to a mind. A mind’s “cognitive closure” delimits all that it can ever know. The foundations of semantics, McGinn argues, may lie outside our cognitive closure; perhaps knowable, just not by us.7

What is noumenal for us may not be miraculous itself. We should therefore be alert to the possibility that a problem that strikes us as deeply intractable…may arise from an area of cognitive closure in our way of representing the world. Colin McGinn - The Problem of Philosophy

How can we fault an AI for its lack of our subjective experience? We are asking it to do what we ourselves cannot: to know what is outside its cognitive closure. A different framing would be to argue that, relative to an AI, we have transcendent knowledge. There is an explanatory gap the AI will never bridge, but one so natural to us that we cross it effortlessly. We are able to ground the output of an AI in our own phenomena because our cognitive closure encircles its own.8 We are not like Harnad’s hosts with a monopoly on meaning. Rather, we are like mystics, aware of a reality beyond the comprehension of another.9

A biblical allegory comes from Bereishit Rabbah, the Hebrew midrash on the Book of Genesis. When God created Adam and gave him language to name the animals, the Angels in Heaven could not understand this revelation.10 Being of Spirit, their perception was confined to the spiritual. Only a corporeal being, masked by the flesh from perceiving the noumena directly, could comprehend these representations. Unlike today’s languages, this Adamic language is thought to be in harmony with the nature of its referents – the semantic collapse of sign and signified.

[God] brought before [the Angels] beast and animal and bird. He said to them: This one, what is his name? and they did not know. This one, what is his name? and they did not know. He made them pass before Adam. He said to him: This one, what is his name? Adam said: This is ox/shor, and this is donkey/chamor and this is horse/sus and this is camel/gamal. Bereishit Rabbah 17:4

When we experience words, we experience them conceptually. When we experience images, we experience them visually. We are thrust into awareness upon recognition of our signs. Like the Adamic language, these representations are in direct correspondence with their referents. Their phenomenal nature is revealed to us without mediation – the semantic collapse of sign and signified. This correspondence is also the noumenal quality concealed from an AI. An AI is masked from perceiving our phenomena by a computational flesh. A representation, fashioned in the likeness of our phenomena, is needed. Like the Angels, we are baffled by these representations, unable to grasp their significance.

Does an AI understand? I would say it understands a reality grounded in the computational, which is itself grounded in the phenomenological, which is itself grounded in the incomprehensible.


  1. Harnad, Stevan (1990). The Symbol Grounding Problem

  2. OpenAI, Dall-E 2 https://openai.com/dall-e-2/ 

  3. Searle, John (1980). Minds, Brains, and Programs 

  4. Kant, Immanuel (1781). Critique of Pure Reason

  5. Hume, David (1739). A Treatise of Human Nature 

  6. Berkeley, George (1710). A Treatise concerning the Principles of Human Knowledge

  7. McGinn, Colin (1989). Can We Solve the Mind-Body Problem? 

  8. This is not to imply that an AI cannot know something we are unable to understand. If an AI ever has subjective experience, that experience would likely be unknowable to us.

  9. It is a strange thing to behold an explanatory gap. Here, our comprehension straddles both of its sides: our subjective experience and its computational representation. The collapse of this gap is manifest in the grounding of symbols between the two domains.

  10. Bereishit Rabbah 17:4 (450 CE) 
