Epistemic Topography

Related To: [Metaphor Is Narrative]
Content Summary: 1600 words, 16 min read.

Ambassadors Of Good Taste

I concluded my discussion of metaphor with three takeaways:

  • Metaphor relocates inference: we reason about abstract concepts using sensorimotor processes.
  • Metaphor imbues communication with affective flair or style.
  • Weaving metaphors together is narrative paint.

Let me build on such theses with the following aphorisms:

  • Metaphors which generate accurate empirical predictions are apt. Not all metaphors have this quality.
  • Metaphorical aptitude is a continuous scale, with complex empirical predictions generating higher scores.
  • Improving metaphorical aptitude is a design process.
  • Scientists who immerse their empirical results into this process are, in my language, ambassadors of good taste.

This post strives to develop a metaphor with high aptitude. You are witness to what I mean by “design process”.

Anatomy Of A Metaphor

Concept-space is useful because it sheds light on the nature of learning. The central identifications are:

  • A World Model Is A Location.
  • The Reasoner Is A Vehicle.
  • Inference Is Travel.

Our unconscious selves already use this metaphor frequently (cf. phrases like “I’m way ahead of you”). We aren’t inventing something so much as refining it.

To these three pillars, another identification can be successfully bolted on:

  • Predictive Accuracy Is Height

As we will see, pursuing knowledge really is like climbing a mountain.
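For the programmers in the audience, the four identifications can be rendered as a toy model. This is purely my own illustrative sketch; the landscape function, names, and numbers are all invented for the sake of concreteness.

```python
from dataclasses import dataclass

def predictive_accuracy(x: float, y: float) -> float:
    """Height of the epistemic landscape at a world-model location."""
    return 10.0 - (x - 3.0) ** 2 - (y - 3.0) ** 2  # a single peak at (3, 3)

@dataclass
class Reasoner:
    x: float = 0.0          # A World Model Is A Location
    y: float = 0.0
    speed: float = 1.0      # Intelligence Is Vehicular Speed
    trips_per_day: int = 1  # Need For Cognition Is Frequency Of Travel

    def infer(self, dx: float, dy: float) -> None:
        """Inference Is Travel: move `speed` units in direction (dx, dy)."""
        norm = (dx ** 2 + dy ** 2) ** 0.5 or 1.0
        self.x += self.speed * dx / norm
        self.y += self.speed * dy / norm

r = Reasoner()
before = predictive_accuracy(r.x, r.y)
r.infer(1.0, 1.0)  # one act of inference, aimed toward the peak
assert predictive_accuracy(r.x, r.y) > before  # Predictive Accuracy Is Height
```

The assertion at the bottom is the whole metaphor in one line: travel toward the peak raises your predictive accuracy.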

[Figure: Epistemic Topography - Your Location]

Need For Cognition is Frequency Of Travel

Let’s talk about need for cognition: that personality trait that disposes some people towards critical thinking.

Those who know me know how deeply I am driven to interrogate reality. Why am I like this? My answer:

I pursue deep questions because I tell myself I am curious → I tell myself I am curious because I pursue deep questions.

Such identity bootstrapping appears in other contexts as well. For example:

I am generous with my time because I tell myself I am selfless → I tell myself I am selfless because I am generous with my time.

Curiosity is an itch; active curiosity is scratching it. In terms of our metaphor:

  • If inference is travel, actively curious people are those who travel more frequently.

Intelligence is Vehicular Speed

Where does intelligence – that mental ability linked to abstraction – fit? Consider the following:

  • Although our society tends to lionize IQ as a personal trait, intelligence is largely heritable (estimates range from 50–80%). High-IQ parents tend to have high-IQ children, and vice versa.
  • What’s more, intelligence is highly predictive of success in life. It is so important for intellectual pursuits that eminent scientists in some fields have average IQs around 150 to 160. Since IQs this high appear in only about 1 in 10,000 people, it strains credulity to attribute this to anything but a very strong filter for IQ.
  • In other words, Nature is not going to win any awards for egalitarianism any time soon.

We interpret intelligence as follows:

  • If the reasoner is a vehicle, intelligence is the speed of her vehicle.

If this topic conjures up existential angst (“I’ll never study again!” :P) check out this post. Speaking from my own life, my need for cognition is stronger than my intelligence. In the tortoise-vs-hare race, I am the tortoise.

On Education And Directional Calibration

One might reasonably complain that learning is not a solitary activity – our metaphor is too individualistic.

Let’s fix it. Consider the classroom. A teacher typically knows more than her students; in our metaphorical space, she is elevated above them. But the incomprehensible size of concept-space entails three uncomfortable facts:

  1. Every student resides in a different location.
  2. Knowing the precise location is computationally infeasible (even one’s own location).
  3. Without such knowledge, discovering that student’s optimal path up the mountain is also infeasible.

Fortunately, location approximations are possible. Imagine a calculus professor with five students. Three students are stuck on the mathematics of the chain rule; the other two don’t grok infinitesimals. We might place the first group to the SW and the second to the S-SE:

[Figure: Epistemic Topography - Relative Location Groups]

Without knowing anyone’s precise location, the professor (white dot) can provide the red group with worked examples of the chain rule (directing them NE) and the blue group with stories motivating the need for infinitesimals (directing them N-NW). While such directional calibration is imprecise, it nevertheless brings them closer to the professor’s knowledge (amplifying their predictive power).

[Figure: Epistemic Topography - Directional Calibration]

Notice how each student travels at a different speed (intelligence) and frequency (work ethic).
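If it helps to be concrete, here is a hypothetical sketch of directional calibration: the professor knows only each group’s rough bearing, and simply points each group back along the opposite bearing. The compass numbers are invented for illustration.

```python
# Bearings in degrees: 0 = North, increasing clockwise (numbers invented).
groups = {"stuck on the chain rule": 225.0,   # SW of the professor
          "stuck on infinitesimals": 157.5}   # S-SE of the professor

def corrective_bearing(group_bearing: float) -> float:
    """Point the group back toward the professor: the opposite direction."""
    return (group_bearing + 180.0) % 360.0

advice = {name: corrective_bearing(b) for name, b in groups.items()}
# chain-rule group is sent NE (45.0); infinitesimals group N-NW (337.5)
```

The point of the sketch: no student’s precise coordinates appear anywhere, yet the advice still moves both groups uphill.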

On Inferential Distance

If the process of building World Models is a journey, the notion of inferential distance becomes relevant.

Imagine reading two essays and then being quizzed for comprehension. Both have the same word count; one is written by a theoretical physicist, the other by a journalist. The physicist’s essay would probably take longer to understand. But why is this so?

Surely there is a greater inferential distance between us and the theoretical physicist. Is it so surprising that traveling greater distances consumes more time?

This intuition sheds light on a common communication barrier, which Steven Pinker frames well:

Why is so much writing so bad?

The most popular explanation is that opaque prose is a deliberate choice. Bureaucrats insist on gibberish to cover their anatomy. Plaid-clad tech writers get their revenge on the jocks who kicked sand in their faces and the girls who turned them down for dates. Pseudo-intellectuals spout obscure verbiage to hide the fact that they have nothing to say, hoping to bamboozle their audiences with highfalutin gobbledygook.

But the bamboozlement theory makes it too easy to demonize other people while letting ourselves off the hook. In explaining any human shortcoming, the first tool I reach for is Hanlon’s Razor: Never attribute to malice that which is adequately explained by stupidity. The kind of stupidity I have in mind has nothing to do with ignorance or low IQ; in fact, it’s often the brightest and best informed who suffer the most from it.

The curse of knowledge is the single best explanation of why good people write bad prose. It simply doesn’t occur to the writer that her readers don’t know what she knows—that they haven’t mastered the argot of her guild, can’t divine the missing steps that seem too obvious to mention, have no way to visualize a scene that to her is as clear as day.

The curse of knowledge expects short inferential distances. Why does this particular bias (and not another) live in our brains?

As we have seen, estimating location is expensive. So the brain takes a shortcut: it uses a location it already knows (its own) and employs differences between the Self and the Other to estimate distance. Call this self-anchoring. But the brain isn’t aware of all differences, only those it observes. Hence the process of “pushing out” one’s estimate of the Other’s location typically doesn’t go far enough… the birthplace of the curse.
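Self-anchoring can be sketched in a few lines of toy arithmetic. All the numbers here, including the 40% “observed fraction”, are invented for illustration.

```python
# Toy arithmetic for self-anchoring; every number below is hypothetical.
self_location = 100.0    # the expert's own position along one conceptual axis
actual_other = 20.0      # the novice's true position
true_gap = self_location - actual_other       # 80 units of inferential distance

observed_fraction = 0.4  # only 40% of the differences are actually visible
estimated_other = self_location - observed_fraction * true_gap
estimated_gap = self_location - estimated_other

# The expert plans a 32-unit explanation for an 80-unit journey: the curse.
assert estimated_gap < true_gap
```

Since the adjustment is proportional to *observed* differences only, the estimate always undershoots; that systematic undershoot is the curse of knowledge in miniature.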

On Epistemic Frontiers, Fences, and Cliffs

It is tempting to view cognition as transcendent. Cognitive transcendence plays a key role in debates over free will, for example. But I will argue that barriers to inference are possible. Not only that: they come in three flavors.

Intelligence is speed, but is there a speed limit? There exist physical reasons to answer “yes”; instantaneous learning is as absurd as physical teleportation. Just as a light cone constrains how a physical event propagates through the universe, we might appeal to a cognition cone. Our first barrier to inference, then, is running out of gasoline. Death represents an epistemic frontier, with intellectually gifted people enjoying wider frontiers. Arguably, the frontier of anterograde amnesiacs is much narrower, defined by the frequency at which their memories “reset”.

If most education eases inference, we might imagine other social devices that retard that very same movement. Examples abound of such malicious, man-made epistemic fences. While conspiracy theories typically rely on naive models of incentive structures, other forms of information concealment plague the world. Finally, people steeped in cognitive biases (e.g., cult members within a happy death spiral) cannot navigate concept-space normally.

Epistemic frontiers need not concern us overly much (educational inefficiencies inhibit progress more than short lifespans do). Epistemic fences are more pernicious, but we can still dream of moving away from tribalism. What about permanent barriers? Might naturally-occurring epistemic cliffs inhabit our intellectual landscape? Yes. Some of the better-known cliffs include Gödel’s Incompleteness Theorems and the Heisenberg Uncertainty Principle.

We have seen three types of inferential stumbling blocks: finite frontiers, man-made fences, and natural cliffs.  But consider what it means to reject cognition transcendence. Two theses from Normative Therapy were:

  • Motivation: normative structures should point towards their ends in motivationally-optimal ways.
  • Despair: It is not motivationally-optimal to be held to a normative structure beyond one’s capacities.

If these principles seem agreeable, it may be time to reject arguments of the form “all people should believe X”. 


In this post, we developed a metaphor of epistemic topography, or concept-space:

  1. A World Model Is A Location.
  2. The Reasoner Is A Vehicle.
  3. Predictive Accuracy Is Height.
  4. Intelligence Is Vehicular Speed.
  5. Inference Is Travel.
  6. Need For Cognition Is Frequency Of Travel.

We then used this six-part metaphor to shed light on the following applications:

  • Education is the art of directing people whose locations you do not know towards higher peaks.
  • The Curse Of Knowledge can be explained as incomplete extrapolation from one’s own conceptual location.
  • The inferential journey can be blocked by three kinds of barriers: finite frontiers, man-made fences, and natural cliffs.
  • These facts render arguments of the form “all people should believe X” dubious.

Deserialization: Hazards & Control

Part Of: [Deserialized Cognition] sequence
Followup To: [Deserialized Cognition]


Two major differences exist between conceptiation and deserialization:

  1. Deserialization Delay: A time barrier exists between concept birth & use.
  2. Deserialization Reuse: The brain is able to “get more” out of its concepts.

Inference Deserialization: Obsolescence Hazard

Let’s consider the deserialization delay within inference cognition modes:

[Figure: Deserialization - Inference Cognition]

If you think of an idea, and a couple hours later deserialize & leverage it, risk will (presumably) be minimal. But what about ideas conceived decades ago?

Your inference engines change over time. Here’s a fun example: Santa Claus. It is easy to imagine even a very bright child believing in Santa, given a sufficiently persuasive parent. The cognitive sophistication to reject Santa Claus only comes with time. However, even after this ability is acquired, this belief may be loaded from semantic memory for months before it is actively re-evaluated.

The problem is that every time your inference engines are upgraded (“versioned”), their past creations are not tagged as obsolete. What’s worse, you are often ignorant of upgrades to the engine itself – you typically fail to notice them (cf. Curse Of Knowledge).
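Software engineers will recognize this as a cache-invalidation failure. Here is a sketch of the analogy (mine, not a claim about neural implementation; all names are invented):

```python
# Beliefs as cache entries that carry no version tag.
ENGINE_VERSION = 1
belief_cache = {}

def conceive(claim: str) -> None:
    # A careful serializer would stamp each entry with ENGINE_VERSION.
    # The brain, on this account, does not.
    belief_cache[claim] = {"content": claim}

conceive("Santa Claus exists")  # formed under engine v1 (childhood)

ENGINE_VERSION = 2              # the engine silently upgrades...

def deserialize(claim: str) -> str:
    # ...but nothing here checks which engine produced the entry,
    # so the stale belief loads without re-evaluation.
    return belief_cache[claim]["content"]

stale = deserialize("Santa Claus exists")  # no obsolescence warning raised
```

A versioned serializer would refuse (or re-evaluate) entries stamped with an older engine; the hazard is precisely that no such stamp exists.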

Potential Research Vector: The fact that deserialization decouples your beliefs from your belief-engines has interesting implications for psychotherapy, and the mind-hacking industries of the future. I can imagine moral fictionalism (moral talk is untrue, but useful to talk about) leveraging such a finding, for example.

Social Deserialization: Epistemic Bypass Hazard

Let’s now consider deserialization reuse within social cognition modes:

[Figure: Deserialization - Social Cognition]

Let me zoom in on how social conceptiation is actually implemented in your brain. Do people believe every claim they hear?

The answer turns out to be… yes. Of course, you may disbelieve a claim; but to do so requires a later, optional process to analyze, and make an erasure decision about, the original claim. If you interrupt a person immediately after exposure to a social claim, you interrupt this posterior process and thereby increase acceptance irrespective of the content of the claim!

Social conceptiation, therefore, is less epistemically robust than inference conceptiation. Deserialization simply compounds this problem, by allowing the reuse of concepts that fail to be truth-tracking.
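The acceptance-then-optional-erasure finding can be rendered as a toy model. This is a sketch of the logic only; `hear`, `evaluate`, and `credible` are invented names.

```python
# Toy model: comprehension entails acceptance; rejection is a second pass.
beliefs = set()

def hear(claim: str, evaluate: bool, credible: bool) -> None:
    beliefs.add(claim)             # step 1: acceptance is automatic
    if evaluate and not credible:  # step 2: optional and interruptible
        beliefs.discard(claim)

hear("the moon is cheese", evaluate=True, credible=False)
assert "the moon is cheese" not in beliefs  # the second pass erased it

hear("the moon is cheese", evaluate=False, credible=False)  # interrupted
assert "the moon is cheese" in beliefs      # acceptance survives by default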

Potential Research Vector: Memetic theory postulates that, because your belief-generation systems have a shape, certain properties of the beliefs themselves influence cognition. I imagine this distinction between concept-acquisition modes would have interesting implications for memetic theory.

How To Select Away From Hazardous Deserialization

Unfortunately, from the subjective/phenomenological perspective, there is precious little you can do to feel the difference between hazardous and truth-bearing deserializations. The brain simply fails to tag its beliefs in any way that would be helpful.

Before proceeding, I want to underscore one point: the process of selecting away from hazards cannot be usefully divided into a noticing step and a selection step. If you notice a hazard, you don’t need “tips” on how to select away from it: your brain is already hardwired with an action-guiding desire for truth-tracking beliefs. No, these steps remain together; your challenge is “merely” to learn how to raise hazardous patterns to your attention.

Let’s get specific. When I say “raise X to your attention”, what I mean is “when X is perceived, your analytic system (System 2) overrides your autonomic system (System 1) response”. If this does not make sense to you, I’d recommend reading about dual process theory.

How does one encourage a domain-general stance favorable to such overrides? It turns out that there exists an observable personality trait – the need for cognition – which facilitates an increased override rate. Three suggestions that may help:

  1. Reward yourself when you feel curiosity.
  2. Inculcate an attitude of distrust when you notice yourself experiencing familiarity.
  3. Take advantage of your social mirroring circuit by surrounding yourself with others who possess high needs for cognition.

How can you encourage a domain-specific stance favorable to such overrides? In other words: how can you trigger overrides in hazardous conditions – conditions where obsolescence or epistemic bypassing has occurred? So far, two approaches seem promising to me:

  1. Keep track of areas where you have been learning rapidly. Be more skeptical about deserializing concepts close to this domain.
  2. Train yourself to be skeptical of memes originating outside of yourself: whenever possible, try to reproduce the underlying logic yourself.

Of course, these suggestions won’t work exceptionally well, for the same reason self-help books aren’t particularly useful. In my language, your mind has a kind of volition resistance that tends to render such mind hacks temporary and/or ineffectual (“people don’t change”). But I’ll leave a discussion of why this might be so, and what can be done, for another day…


In this post, we explored how the brain recycles concepts in order to save time, via the deserialization technique discussed earlier. Such recycling brings with it two risks:

  1. Obsolescence: The concepts you resurrect may be inconsistent with your present beliefs.
  2. Epistemic Bypass: The concepts you resurrect may not have been evaluated at all.

We then identified two ways this mindware might enrich our lives:

  1. Getting precise about how concepts & conceptiation diverge will give us more control over our mental lives.
  2. Getting precise about how deserialization complements epistemic overrides will allow us to expand memetic accounts of culture.

Finally, we explored several ways in which we might encourage our minds to override hazardous deserialization patterns.