Part Of: Object sequence
Followup To: The Language of Consciousness
Introduction
Objects are distributed networks: their resident features are computed in myriad locations across cortex.
The concept of an object is extremely rich, sitting at the intersection of several disciplines. Given its explanatory fecundity, objects will undoubtedly play an important role in the future of philosophy. Let me gesture at a few conversations where this is especially true.
Logical vs Statistical Inference
There have been two distinct waves of Artificial Intelligence research. The first was inspired by symbolism, which held that the brain maneuvers within a formal system of logic (recall that logic is computation). The approach fell out of favor after the discovery of the frame problem. In the 1980s, a new approach to AI rose from the ashes, this time fueled by connectionism and powerful methods of probabilistic inference.
These two approaches constitute the Great AI Schism:
Each programme has its own unique strength:
- The logical approach excels at modeling complexity within its environment.
- The statistical approach excels at representing uncertainty within its beliefs.
There is reason to believe that AI will accelerate significantly once researchers learn how to weld these approaches together. But that solution has not yet been found.
Object files may provide insight. You see, the human mind seems able to perform two kinds of mental operations:
- Slow, serial, conscious inference (e.g., long division)
- Fast, parallel, pre-conscious inference (e.g., finding a red hat in a crowd)
The behavioral evidence for such “modes of thought” has motivated dual process theory. Don’t these modes remind you of our statistical vs logical divide?
Objects are the middleware that straddles the logical-statistical divide. Understanding their mechanics may at last heal the AI Schism.
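To make the contrast concrete, here is a minimal sketch of the two modes, using the examples above. It is a caricature rather than a cognitive model: the functions, thresholds, and RGB encoding of "red hats" are all invented for illustration.

```python
def long_division(dividend: int, divisor: int) -> str:
    """Slow, serial, rule-following inference: each digit is produced by an
    explicit symbolic step, and each step depends on the one before it."""
    quotient, remainder = "", 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)
        quotient += str(remainder // divisor)
        remainder %= divisor
    return f"{int(quotient)} remainder {remainder}"

def find_red_hats(crowd: list[tuple[int, int, int]]) -> list[int]:
    """Fast 'pop-out' search: every item is scored against the same redness
    feature independently, so the work could run entirely in parallel."""
    return [i for i, (r, g, b) in enumerate(crowd)
            if r > 200 and g < 80 and b < 80]

print(long_division(1043, 7))                        # 149 remainder 0
print(find_red_hats([(30, 30, 30), (230, 40, 35)]))  # [1]
```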
Problem Of Intentionality
One of the most protracted philosophical objections to the cognitive revolution goes as follows:
Computers are formal systems which manipulate abstract symbols (e.g., program variables). CPUs play games with these variables, but they don't participate in the meaning of these symbols. For example, a weather program will update the bits of EXPECTED_TEMP without access to the physical interpretation of such a symbol.
Call this the problem of intentionality. For another example, we turn to Searle’s Chinese Room thought experiment:
Imagine an enormously sophisticated rulebook, billions of lines long, with the following format:
- IF Chinese character X is slipped under the door THEN output Chinese Y.
Imagine that John, an English speaker, enters the room containing this rulebook and closes the door. His bilingual friend Wei slips Chinese sentences under the door. Following the rules in the book, John pushes characters back under the door. To Wei's astonishment, these characters compose Chinese sentences. What's more, Wei experiences a lively conversation in Chinese.
Does anyone understand what Wei is saying? Not John. He doesn’t speak Chinese! And it is absurd to attribute understanding to the pages of the rulebook. The room is behaving like a human, but does not understand.
Therefore, Searle claims, even if AI could pass the Turing Test, it would never understand in the same sense that Wei understands.
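For concreteness, here is a toy sketch of the room's purely syntactic character. The entries below are invented stand-ins for a rulebook that would in reality run to billions of lines; the point is simply that no step in the procedure consults meaning.

```python
# Hypothetical rulebook entries, invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气好吗？": "今天天气很好。",    # "Is the weather nice?" -> "The weather is lovely."
}

def john(characters_under_door: str) -> str:
    """Match the incoming shapes against the rulebook and push back whatever
    it lists. John never consults the meaning of any symbol."""
    return RULEBOOK.get(characters_under_door, "请再说一遍。")  # "Please say that again."

print(john("你好吗？"))  # 我很好，谢谢。
```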
Object construction is the road by which philosophers will solve intentionality. As Harnad puts it in his excellent 1990 paper:
Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds:
- “iconic representations”, which are analogs of the proximal sensory projections of distal objects and events, and
- “categorical representations”, which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections.
In other words, the brain performs symbol grounding: translating the symbol RED into non-symbolic imagery! Once we understand how objects cash out in sensorimotor systems, we will have explained how the brain injects semantic meaning into its knowledge systems. Why should such a mechanism be limited to meat? 🙂
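As a minimal sketch of what grounding could look like computationally: the symbol RED is linked both to a stored icon (an analog of a sensory projection) and to a feature detector (a categorical representation). The pixel patches, thresholds, and data structures here are invented for illustration, not Harnad's actual proposal.

```python
import numpy as np

# Iconic representation: an analog of the proximal sensory projection
# (here, simply a remembered 8x8 patch of reddish pixels).
red_icon = np.tile(np.array([220, 30, 30]), (8, 8, 1))

def detect_red(patch: np.ndarray) -> bool:
    """Categorical representation: a feature detector that picks out the
    invariant feature (high R, low G and B) from the sensory projection."""
    r, g, b = patch.reshape(-1, 3).mean(axis=0)
    return bool(r > 150 and g < 100 and b < 100)

# The symbol RED is grounded by its links to the icon and the detector,
# not merely by its links to other symbols.
GROUNDED_SYMBOLS = {"RED": {"icon": red_icon, "detector": detect_red}}

patch = np.full((8, 8, 3), [210, 40, 35])
print(GROUNDED_SYMBOLS["RED"]["detector"](patch))  # True
```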
Cognitive Epistemology
Philosophers concern themselves, among other things, with the definition of truth. The most widely accepted definition, the correspondence theory of truth, goes like this:
The truth or falsity of a statement is determined by how well its contents correspond to the world.
This philosophical frame coheres well with the cognitive notion of representation. A representation can fail to reflect the structure of reality: the map is not the territory. But some representations can correspond to reality, in a strict, mathematical sense.
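One way to make "correspond" concrete is structure preservation: a representation is accurate insofar as the relations it asserts actually hold in the world. Here is a minimal sketch, with a toy territory and map invented for illustration:

```python
# Toy "territory" (which rooms actually adjoin which) and a mental "map".
territory = {"kitchen": {"hallway"},
             "hallway": {"kitchen", "bedroom"},
             "bedroom": {"hallway"}}
mental_map = {"kitchen": {"hallway"},
              "hallway": {"kitchen"},
              "bedroom": {"kitchen"}}   # wrongly places the bedroom off the kitchen

def corresponds(rep: dict, world: dict) -> bool:
    """True iff every adjacency the representation asserts also holds in the world."""
    return all(b in world.get(a, set()) for a, bs in rep.items() for b in bs)

print(corresponds(mental_map, territory))  # False: the map misrepresents the territory
```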
Plantinga’s Evolutionary Argument Against Naturalism (EAAN) discusses the relationship between naturalism, biological fitness, and knowledge. If natural selection is the driver for organism design, how do we know that our brains are constructing truth? That our maps really do correspond with reality?
Adaptive behavior, after all, does not require true beliefs:
Perhaps Paul very much likes the idea of being eaten, but when he sees a tiger, always runs off looking for a better prospect, because he thinks it unlikely the tiger he sees will eat him. This will get his body parts in the right place so far as survival is concerned, without involving much by way of true belief.
The question reduces to this: is adaptive behavior orthogonal to knowledge?
As I like to say, “epistemology without psychophysics is dead”. If you want to understand the trustworthiness of human knowledge-acquisition systems, you cannot responsibly ignore the brain. Empirically literate epistemology is, regrettably, still in its infancy. And cognitive epistemology cannot declare victory at early perceptual areas: we want to know whether trust in our representations can be carried from the retina all the way up to beliefs more accessible to conscious awareness (i.e., beliefs grounded in objects).
Until next time.