Epistemic Topography

Related To: [Metaphor Is Narrative]
Content Summary: 1600 words, 16 min read.

Ambassadors Of Good Taste

I concluded my discussion of metaphor with three takeaways:

  • Metaphor relocates inference: we reason about abstract concepts using sensorimotor processes.
  • Metaphor imbues communication with affective flair or style.
  • Weaving metaphors together is narrative paint.

Let me build on such theses with the following aphorisms:

  • Metaphors which generate accurate empirical predictions are apt. Not all metaphors have this quality.
  • Metaphorical aptness is a continuous scale, with richer empirical predictions earning higher scores.
  • Improving metaphorical aptness is a design process.
  • Scientists who immerse their empirical results in this process are, in my language, ambassadors of good taste.

This post strives to develop a metaphor with high aptness. You are witnessing what I mean by a “design process”.

Anatomy Of A Metaphor

Concept-space is useful because it sheds light on the nature of learning. The central identifications are:

  • A World Model Is A Location.
  • The Reasoner Is A Vehicle.
  • Inference Is Travel.

Our unconscious selves already use this metaphor frequently (cf. phrases like “I’m way ahead of you”). We aren’t inventing something so much as refining it.

To these three pillars, another identification can be successfully bolted on:

  • Predictive Accuracy Is Height

As we will see, pursuing knowledge really is like climbing a mountain.

[Image: Epistemic Topology – Your Location]

Need For Cognition is Frequency Of Travel

Let’s talk about need for cognition: that personality trait that disposes some people towards critical thinking.

Those who know me know how deeply I am driven to interrogate reality. Why am I like this? My answer:

I pursue deep questions because I tell myself I am curious → I tell myself I am curious because I pursue deep questions.

Such identity bootstrapping appears in other contexts as well. For example:

I am generous with my time because I tell myself I am selfless → I tell myself I am selfless because I am generous with my time.

Curiosity is an itch; active curiosity is scratching it. In terms of our metaphor:

  • If inference is travel, actively curious people are those who travel more frequently.

Intelligence is Vehicular Speed

Where does intelligence – that mental ability linked to abstraction – fit? Consider the following:

  • Although our society tends to lionize IQ as a personal trait, intelligence is substantially heritable (estimates typically fall between 50% and 80%). High-IQ parents tend to have high-IQ children, and vice versa.
  • What’s more, intelligence is highly predictive of success in life. It is so important for intellectual pursuits that eminent scientists in some fields have average IQs around 150 to 160. Since scores that high appear in only about 1 in 10,000 people, it is hard to read this as anything other than a very strong filter for IQ (a back-of-the-envelope check follows this list).
  • In other words, Nature is not going to win any awards for egalitarianism any time soon.
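
Here is that back-of-the-envelope check (a quick sketch, assuming the conventional IQ scale: a normal distribution with mean 100 and standard deviation 15):

    from math import erfc, sqrt

    def rarity(iq, mean=100.0, sd=15.0):
        """Approximate 1-in-N rarity of scoring at or above `iq` on a normal IQ scale."""
        z = (iq - mean) / sd
        tail = 0.5 * erfc(z / sqrt(2))   # P(score >= iq) under the normal distribution
        return 1 / tail

    print(round(rarity(150)))   # ~1 in 2,300
    print(round(rarity(155)))   # ~1 in 8,100
    print(round(rarity(160)))   # ~1 in 31,600

On these assumptions, the “1 in 10,000” figure corresponds to an IQ in the mid-150s, so the numbers hang together.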

We interpret intelligence as follows:

  • If the reasoner is a vehicle, intelligence is the speed of her vehicle.

If this topic conjures up existential angst (“I’ll never study again!” :P), check out this post. Speaking from my own life, my need for cognition is stronger than my intelligence quotient; in the tortoise-vs-hare race, I am the tortoise.

On Education And Directional Calibration

One might reasonably complain that learning is not a solitary activity – our metaphor is too individualistic.

Let’s fix it. Consider the classroom. A teacher typically knows more than her students; in our metaphorical space, she is elevated above them. But the incomprehensible size of concept-space entails three uncomfortable facts:

  1. Every student resides in a different location.
  2. Knowing the precise location is computationally infeasible (even one’s own location).
  3. Without such knowledge, discovering that student’s optimal path up the mountain is also infeasible.

Fortunately, location approximations are possible. Imagine a calculus professor with five students. Three students are stuck on the mathematics of the chain rule; the other two don’t grok infinitesimals. We might place the first group to the SW and the second to the S-SE:

[Image: Epistemic Topology – Relative Location Groups]

Without knowing anyone’s precise location, the professor (white dot) can provide the red group with worked examples of the chain rule (directing them to the NE) and the blue group with stories that motivate the need for infinitesimals (directing them to the N-NW). While such directional calibration is imprecise, it nevertheless moves them closer to the professor’s knowledge (amplifying their predictive power).

[Image: Epistemic Topology – Directional Calibration]

Notice how each student travels at a different speed (intelligence) and a different frequency (work ethic).
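
To make directional calibration concrete, here is a minimal toy sketch (my illustration, not part of the original diagrams) that treats concept-space as a flat 2D plane: each student is a point, the professor is a point, and a lesson nudges a student a fraction of the way in the professor’s estimated direction. All coordinates and the step size are hypothetical.

    import math

    def nudge(student, professor, step=0.2):
        """Move a student a fraction of the way toward the professor's location.
        The direction is only approximately right, and the move is deliberately partial."""
        dx = professor[0] - student[0]
        dy = professor[1] - student[1]
        return (student[0] + step * dx, student[1] + step * dy)

    # Hypothetical coordinates: professor at the origin, red group to her SW, blue group to her S-SE.
    professor = (0.0, 0.0)
    students = {"red_1": (-3.0, -3.0), "red_2": (-2.5, -3.5), "blue_1": (0.5, -4.0)}

    for name, pos in students.items():
        new_pos = nudge(pos, professor)
        print(f"{name}: distance {math.dist(pos, professor):.2f} -> {math.dist(new_pos, professor):.2f}")

In this toy picture, intelligence (speed) and need for cognition (frequency of travel) would show up as a larger step size and more frequent nudges, respectively.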

On Inferential Distance

If the process of building World Models is a journey, the notion of inferential distance becomes relevant.

Imagine reading two essays and then being quizzed for comprehension. Both have the same word count; one is written by a theoretical physicist, the other by a journalist. The physicist’s writings would probably take longer to understand. But why is this so?

Surely there is a greater inferential distance between us and the theoretical physicist. Is it so surprising that traveling greater distances consumes more time?

This intuition sheds light on a common communication barrier, which Steven Pinker frames well:

Why is so much writing so bad?

The most popular explanation is that opaque prose is a deliberate choice. Bureaucrats insist on gibberish to cover their anatomy. Plaid-clad tech writers get their revenge on the jocks who kicked sand in their faces and the girls who turned them down for dates. Pseudo-intellectuals spout obscure verbiage to hide the fact that they have nothing to say, hoping to bamboozle their audiences with highfalutin gobbledygook.

But the bamboozlement theory makes it too easy to demonize other people while letting ourselves off the hook. In explaining any human shortcoming, the first tool I reach for is Hanlon’s Razor: Never attribute to malice that which is adequately explained by stupidity. The kind of stupidity I have in mind has nothing to do with ignorance or low IQ; in fact, it’s often the brightest and best informed who suffer the most from it.

The curse of knowledge is the single best explanation of why good people write bad prose. It simply doesn’t occur to the writer that her readers don’t know what she knows—that they haven’t mastered the argot of her guild, can’t divine the missing steps that seem too obvious to mention, have no way to visualize a scene that to her is as clear as day.

The curse of knowledge is, at bottom, an expectation of short inferential distances. Why does this particular bias, and not some other, live in our brains?

As we have seen, estimating location is expensive. So the brain takes a shortcut: it starts from a location it already knows (its own) and uses the differences it has observed between Self and Other to estimate distance. Call this self-anchoring. But the brain isn’t aware of all differences, only those it happens to observe. Hence the process of “pushing out” one’s estimate of the Other’s location typically doesn’t go far enough… the birthplace of the curse.
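
Here is a minimal toy sketch of self-anchoring (my illustration, with made-up numbers): locations are vectors of “beliefs”, and the estimator adjusts away from its own location only along the dimensions where a difference has actually been observed, so the estimated distance systematically undershoots the true one.

    def estimate_other(self_loc, true_other, observed_dims):
        """Self-anchoring: start at your own location and adjust only along
        the dimensions where a difference has actually been observed."""
        est = list(self_loc)
        for d in observed_dims:
            est[d] = true_other[d]
        return est

    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    self_loc   = [0, 0, 0, 0, 0]   # the writer's location in a 5-dimensional slice of concept-space
    true_other = [3, 2, 4, 1, 5]   # the reader's actual location
    observed   = [0, 2]            # the only differences the writer happened to notice

    est = estimate_other(self_loc, true_other, observed)
    print("estimated distance:", distance(self_loc, est))        # 7
    print("true distance:     ", distance(self_loc, true_other)) # 15

The shortfall (7 versus 15) is the curse of knowledge in miniature: the unobserved differences never enter the estimate.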

On Epistemic Frontiers, Fences, and Cliffs

It is tempting to view cognition as transcendent. Cognition transcendence plays a key role in debates over free will, for example. But I will argue that barriers to inference are possible. Not only that, but they come in three flavors.

Intelligence is speed, but is there a speed limit? There are physical reasons to answer “yes”; instantaneous learning is as absurd as physical teleportation. Just as a light cone constrains how a physical event can propagate through the universe, we might appeal to a cognition cone. Our first barrier to inference, then, is running out of gasoline: death represents an epistemic frontier, with intellectually gifted people enjoying wider frontiers. Arguably, the frontier of anterograde amnesiacs is far narrower, defined by the frequency at which their memories “reset”.

If most education eases inference, we might imagine other social devices that retard that very same movement. Examples abound of such malicious, man-made epistemic fences. While conspiracy theories typically rely on naive models of incentive structures, other forms of information concealment plague the world. Finally, people steeped in cognitive biases (e.g., cult members within a happy death spiral) cannot navigate concept-space normally.

Epistemic frontiers need not concern us overly much (e.g., educational inefficiencies inhibit progress more than short lifespans). Epistemic fences are more malicious, but we can still dream of moving away from tribalism. What about permanent barriers? Might naturally-occurring epistemic cliffs inhabit our intellectual landscape? Yes. Some of the more well-known cliffs include Gödel’s Incompleteness Theorems and the Heisenberg Uncertainty Principle.

We have seen three types of inferential stumbling blocks: finite frontiers, man-made fences, and natural cliffs.  But consider what it means to reject cognition transcendence. Two theses from Normative Therapy were:

  • Motivation: normative structures should point towards their ends in motivationally-optimal ways.
  • Despair: It is not motivationally-optimal to be held to a normative structure beyond one’s capacities.

If these principles seem agreeable, it may be time to reject arguments of the form “all people should believe X”. 

Takeaways

In this post, we developed a metaphor of epistemic topography, or concept-space:

  1. A World Model Is A Location.
  2. The Reasoner Is A Vehicle.
  3. Predictive Accuracy Is Height.
  4. Intelligence Is Vehicular Speed.
  5. Inference Is Travel.
  6. Need For Cognition Is Frequency Of Travel.

We then used this six-part metaphor to shed light on the following applications:

  • Education is the art of directing people whose locations you do not know towards higher peaks.
  • The Curse Of Knowledge can be explained as incomplete extrapolation from one’s own conceptual location.
  • The inferential journey can be blocked by three kinds of barriers: finite frontiers, man-made fences, and natural cliffs.
  • These facts render arguments of the form “all people should believe X” dubious.

Knowledge: An Empirical Sketch

Table Of Contents

  • Introduction
    • All The World Is Particle Soup
    • Soup Texture
  • Perceptual Tunnels
    • On Resolution
    • Sampling
    • Light Cones, Transduction Artifacts, Translation Proxies
  • The Lens-dependent Theorybuilding Triad
    • Step One: Conceptiation
    • Step Two: Graphicalization
    • Step Three: Annotation
    • Putting It All Together: The Triad
  • Conclusion
    • Going Meta
    • Takeaways

Introduction

All The World Is Particle Soup

Scientific realism holds that the entities scientists refer to are real things. Electrons are not figments of our imagination; they possess an existence independent of our minds. What does it mean for us to view particle physics with such a lens?

Here’s what it means: every single thing you see, smell, touch… every vacation, every distant star, every family member… it is all made of particles.

This is an account of how the nervous system (a collection of particles) came to understand the universe (a larger collection of particles). How could Particle Soup ever come to understand itself?

Soup Texture

Look at your hand. How many types of particles do you think you are staring at? A particle physicist might answer: nine. You have four first-generation fermions (roughly, the particles that comprise matter) and five bosons (roughly, the particles that carry force). Sure, you may get lucky and find a couple of exotic particles within your hand, but such a nuance would not detract from the moral of the story: in your hand, the domain (number of types) of particles is very small.

Look at your hand. How large a quantity of particles do you think you are staring at? The object of your gaze is a collection of about 700,000,000,000,000,000,000,000,000 (7.0 * 10^26) particles. Make a habit of thinking in this way, and you’ll find a new appreciation for the Matrix Trilogy. 🙂 In your hand, the cardinality (number of tokens) of particles is very large.
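
As a rough sanity check of that figure (my back-of-the-envelope estimate; the hand mass of 0.4 kg is an assumption), counting the quarks and electrons in ordinary matter lands in the same ballpark:

    # Back-of-the-envelope count of fermions in a ~0.4 kg hand (assumed mass).
    hand_mass_kg = 0.4
    nucleon_mass_kg = 1.67e-27                  # protons and neutrons carry almost all the mass

    nucleons = hand_mass_kg / nucleon_mass_kg   # ~2.4e26
    quarks = 3 * nucleons                       # three valence quarks per nucleon
    electrons = nucleons / 2                    # crude: roughly one electron per two nucleons

    print(f"quarks + electrons ~ {quarks + electrons:.1e}")   # ~8e26, the same order as 7e26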

These observations generalize. There aren’t many kinds of sand in God’s Sandbox, but there is a lot of it, with different consistencies across space.

Perceptual Tunnels

On Resolution

Consider the following image. What do you see?

[Image: Lincoln Resolution]

Your eyes filter images at particular frequencies. At this default human frequency, your “primitives” are the pixelated squares. However, imagine being able to perceive this same image at a lower resolution (sound complicated? move your face away from the screen :P). If you do this, the pixels fade, and a face emerges.

Here, we learn that lenses of different resolutions may complement one another, despite imaging the same underlying reality. In much the same way, we can enrich our cognitive toolkit by examining the same particle soup with different “lens settings”.
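
The “step back from the screen” trick is just resolution reduction. Here is a minimal sketch of it (my illustration; numpy is assumed, and a random array stands in for the Lincoln image), using block-averaging to lower an image’s resolution:

    import numpy as np

    def downsample(image, block=8):
        """Lower the resolution of a grayscale image by averaging block x block patches."""
        h, w = image.shape
        h, w = h - h % block, w - w % block     # trim so the blocks divide evenly
        trimmed = image[:h, :w]
        return trimmed.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

    image = np.random.rand(128, 128)            # placeholder pixels
    coarse = downsample(image, block=8)
    print(image.shape, "->", coarse.shape)      # (128, 128) -> (16, 16)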

Sampling

By default, the brain does not collect much useful information. It is only by way of sensory transducer cells – specialized cells that translate the particle soup into Mentalese – that the brain gains access to some small slice of physical reality. As the quantity and variety of these sensory organs increase, the perceptual tunnel burrowed into the soup becomes wide enough to support a lifeform.

Another term for the perceptual tunnel is the umwelt. Different biota experience different umwelts; for example, honeybees are able to perceive the Earth’s magnetic field as directly as we humans perceive the sunrise.

Perceptual tunneling may occur at different resolutions. For example, your proprioceptive cells create signals only in the event of a coordinated effort by trillions and trillions of particles (e.g., when the wind pushes against your arm). In contrast, your vision cells create signals at very fine resolutions (if even a single photon strikes a photoreceptor, it can fire).

[Image: Perceptual Tunneling]

Light Cones, Transduction Artifacts, Translation Proxies

Transduction is a physically-embedded computational process. As such, it is subject to several pervasive imperfections. Let me briefly point towards three.

First, nature precludes the brain from sampling the entirety of the particle soup. Because your nervous system is embedded within a particular spatial volume, it is subject to one particular light cone. Since particles cannot move faster than the speed of light, you cannot perceive any non-local particles. Speaking more generally: all information outside your light cone is closed to direct experience.

Second, the nervous system is an imperfect medium. It has difficulty, for example, representing negative numbers (ever try to get a neuron firing -10 times per second?). Another such transduction artifact is our penchant for representing information in a comparative, rather than absolute, format. Think of all those times you have driven on the highway with the radio on: when you turn onto a sidestreet, the music feels louder. This experience has nothing at all to do with an increased sound wave amplitude: it is an artifact of a comparison (music minus background noise). Practically all sensory information is stained by this compressive technique.
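
A minimal sketch of that comparative encoding (my illustration, with made-up loudness units): what gets represented is the difference against the background, so the same music “feels” louder when the background drops.

    def perceived(signal, background):
        """Comparative encoding: represent the difference, not the absolute level."""
        return signal - background

    music = 70.0
    print(perceived(music, background=60.0))   # on the highway: 10
    print(perceived(music, background=40.0))   # on the sidestreet: 30 -- same music, feels louder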

Third, perceptual data may not represent the actual slice of the particle soup we want. To take one colorful example, suppose we ask a person whether they perceived a dim flashing light, and they say “yes”. Such a self-report, of course, reaches us as sensory input (in this case, audio vibrations). But this kind of sensory information is merely a translation proxy for a different collection of particles we actually want to observe (e.g., the activity of the subject’s visual cortex).

This last point underscores an oft-neglected aspect of perception: it is an active process. Our bodies don’t just sample particles; they move particles around. Despite the static nature of our umwelt, our species has managed to learn ever more intricate scientific theories by virtue of sophisticated measurement technology; and measurement devices are nothing more than mechanized translation proxies.

The Lens-dependent Theorybuilding Triad

Step One: Conceptiation

Plato once described concept acquisition as “carving nature at its joints”. I will call this process (constructing Mentalese from the Soup) theory conceptiation.

[Image: TheoryBuilding – Conceptiation]

If you meditate on this diagram for a while, you will notice that theory conceptiation is a form of compression. According to Kolmogorov’s information theory, the efficacy of compression hinges on how many patterns exist within your data. This is why you’ll find leading researchers claiming that:

Compression and Artificial Intelligence are equivalent problems
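
A quick way to see the pattern-compression link (my illustration, using Python’s standard zlib): highly patterned data compresses far better than patternless data of the same length.

    import os
    import zlib

    patterned = b"the quick brown fox " * 500   # repetitive, pattern-rich data (10,000 bytes)
    random_ish = os.urandom(len(patterned))     # patternless data of the same length

    print(len(zlib.compress(patterned)))        # small: the repetition is exploited
    print(len(zlib.compress(random_ish)))       # roughly the original size: nothing to exploit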

A caveat: concepts are not carved solely from perception; as one’s bag of concepts expands, this pre-existing mindware exerts an influence on the further carving-up of percepts. This is what the postmoderns attribute to hermeneutics; this is the root of memetic theory; this is what is meant by the nature-vs-nurture dialogue.

Step Two: Graphicalization

Once the particle soup is compressed into a set of concepts, relations between these concepts are established. Call this process theory graphicalization.

[Image: TheoryBuilding – Graphicalization]

If I were to ask you to complete the word “s**p”, would you choose “soap” or “soup”? How would your answer change if we had just had a conversation about food-network television?

Even if I never once mention the word “soup”, you become significantly more likely to auto-complete that alternative after our conversation. Such priming is explained through concept graphs: our conversation about the food network activates food-proximate nodes like “soup” much more strongly than graphically distant nodes like “soap”.
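
A minimal sketch of that explanation (my toy model, over a made-up concept graph): activation spreads outward from the conversation’s topic, attenuating at each hop, and the food-proximate node ends up far more active than the distant one.

    # Toy spreading-activation model over a hypothetical concept graph.
    graph = {
        "food network": ["cooking", "recipe"],
        "cooking": ["soup", "recipe"],
        "recipe": ["soup"],
        "bathroom": ["soap"],
        "soup": [],
        "soap": [],
    }

    def spread(source, decay=0.5, steps=3):
        """Propagate activation outward from a source node, attenuating at each hop."""
        activation = {node: 0.0 for node in graph}
        activation[source] = 1.0
        frontier = {source}
        for _ in range(steps):
            next_frontier = set()
            for node in frontier:
                for neighbor in graph[node]:
                    activation[neighbor] += decay * activation[node]
                    next_frontier.add(neighbor)
            frontier = next_frontier
        return activation

    act = spread("food network")
    print(act["soup"], act["soap"])   # "soup" accumulates activation; "soap" stays at zero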

Step Three: Annotation

Once the graph structure is known, metagraph information (e.g., “this graph skeleton occurs frequently”) is appended. Such metagraph information is not bound to any particular graph. Call this process theory annotation.

[Image: TheoryBuilding – Annotation]

We can express a common complaint about metaphysics thusly: theoretical annotation is invariant to changes in the results of conceptiation and graphicalization. In my view (as hinted at by my discussion of normative therapy), theoretical annotation is fundamentally an accretive process – it is logically possible to generate an infinite annotative tree; we do not see this in practice because of a computational speed limit on cognition (or, to make a cute analogy, the cognition cone).

Putting It All Together: The Triad

Call the cumulative process of conceptiation, graphicalization, and annotation the lens-dependent theorybuilding triad.

[Image: TheoryBuilding – Lens-Dependent Triad]

Conclusion

Going Meta

One funny thing about theorybuilding is how amenable it is to recursion. Can we explain this article in terms of Kevin engaging in theorybuilding? Of course! For example, consider the On Resolution section above. Out of all possible adjectives used to describe theorybuilding, I deliberately chose to focus my attention on spatial resolution. What phase of the triad does that sound like to you?  Right: theory conceptiation.

Takeaways

This article does not represent serious research. In fact, its core model – the lens-dependent theorybuilding triad – cites almost no empirical results. It is a toy model designed to get us thinking about how a cognitive process can construct a representation of reality. Here is an executive summary of this toy model:

  1. Perception tunneling is how organisms begin to understand the particle soup of the universe.
    1. Tunneling only occurs by virtue of sensory organs, which transduce some subset of data (sampling) into Mentalese.
    2. Tunneling is a local effect: it discolors its target, and it sometimes merely represents data located elsewhere.
  2. The Lens-Dependent Theorybuilding Triad takes the perception tunnel as input, and builds models of the world. There are three phases:
    1. During conceptiation, perception contents are carved into isolable concepts.
    2. During graphicalization, concept interrelationships are inferred.
    3. During annotation, abstracted properties and metadata are attached to the conceptual graph.