[Graphic] Mental Architecture v1.2

Today’s post is a bit experimental. In addition to producing the next revision of my architecture diagram, I will indulge in autobiographical musings.  I suspect this will become useful historical data (but, failing that, may constitute a source of humor :P)

I have recently completed a sequence on Theory Of Mind. This sequence is a synthesis of my own making. It marries the following mini-theories:

  1. Agent Detection arguments (Scott Atran’s arguments, among others)
  2. Intentional Stance (inspired by Daniel Dennett)
  3. Spinozan brain (Gilbert thesis).
  4. False beliefs blindness solved by counterfactuals (a small part of Leslie’s theory).

While I agree that pretense is the solution to false belief blindness, I currently reject Leslie’s broader theory of representation. This is despite my own synthesis not yet providing a mechanical explanation for the phenomenon of counterfactual decoupling.

With the completion of this sequence, it is time to update my vision of mental architecture.

Architecture v1.2

Recent musings on culture:

  • Introducing human culture as something that organisms inherit, following the dual inheritance model of Boyd and Richerson. This is provisional for now (it is my first truly serious foray into the science of culture).
  • When it comes time to investigate culture, I want to learn how dominance hierarchies fractionated into (1) religions, (2) marked social groups, (3) economies, (4) political affiliations, and (5) cultural knowledge. I want to explore which cultural mode manifested first in human evolution (a “phylogeny of cultural modes”), and why no branch is subsumed by the others (especially the different services provided by religions, marked social groups, and political affiliations).

Recent musings on identity systems:

  • I’ve added three modules representing various processes underlying identity management. Arguably, there are four identity management motives: https://en.wikipedia.org/wiki/Self-evaluation_motives. The three modules explore self-verification theory (cf. confirmation bias), self-confirmation theory (cf. self-esteem and its relation to expanding latitudes of belief), and self-evaluation (cf. Bandura’s self-efficacy and Ainslie’s personal rules). I don’t require an explicit module for self-improvement at this time, as I suspect it may be a comparatively more composite process.

Recent musings on decision systems:

  • A recent integrative “aha moment”, which I have not yet had time to explain: Motivation Is Utility Transduction. All of the explananda I had been reserving for motivation, I am hoping to reduce to utility production, later passed to neuroeconomic winner-take-all mechanisms.
  • I wonder whether attention needs to be exploded into several modules; perhaps a separate module to track perceptual attention (e.g., saccades). I also wonder whether one of these exploded modules will connect with the Central Executive of Working Memory.

Known omissions and “module folding”:

  • Arousal module remains omitted (for the same reason that Heart Rate and other brain stem functions aren’t captured here).
  • Metaphor remains omitted, as I suspect that it is an artifact of Hebbian learning (that is, more closely tied to neurophysiology than to dedicated computational modules).
  • Kin Detection (cf. filial imprinting) is folded into Agent Classifier.
  • Folk Geometry is folded into Folk Math.

Miscellaneous changes:

  • Adding an Environment -> Genome arrow to represent epigenetics. Mostly a bookmark, because I need to absorb more from this field.
  • Folk Psychology moved to Social Systems (as discussed in Intentional Stance)

Research Directions:

  • My current investigations have tended to center around social psychology (e.g., tribalism, morality, love, religion, culture). In the next 6 months, it would be good for me to swing back to the cognitive side (e.g., language, domain-specificity of memory).

Counterfactual Simulation: Presumed Similar Until Proven Different

Part Of: Sociality sequence
Content Summary: 1400 words, 14 min read

False Belief Blindness

Consider the Sally-Anne test:

A child is in a room, watching Sally and Anne who are also in the room. The room has two objects in it: a basket and a box.

Sally has a marble. Sally puts the marble in the basket and leaves the room. While she is gone, Anne moves the marble to the box.

Sally comes back, wanting to play with her marble. Where will she look for it?

To answer the question correctly, it is not enough to model Sally and Anne as having desires and beliefs (to deploy the intentional stance). The child must also be able to differentiate her own knowledge from the knowledge of others (the child is correct, but Sally is wrong). This is an instance of cognitive decoupling: building firewalls around the beliefs/desires of individual agents.

It turns out that 3yo children get the answer wrong, but 5yo children (except autistic ones) get it right. What happens at age four?

Why Belief Inference Is Blind …

At the end of the test, the child believes that the marble is in the box. Sally believes that the marble is in the basket. Their respective minds might look something like this:

ToM- Sally-Anne v1

In Awakening To A Social World, we learned that, at twelve months, children begin to think about other people as having beliefs and goals:

ToM- Sally-Anne v2

If the above picture were how your brain works, it would be hard to explain how a child could ever be tempted to conclude that Sally thinks the marble is in the box.

But this is not how your brain encodes second-order beliefs. Here’s what actually happens:

ToM- Sally-Anne v3

Crucially, relational second-order beliefs (“Sally thinks that”) point towards first-order beliefs (“Marble’s in basket”) which live in your world model. This mental library comes equipped with a librarian, who flags the incompatibility between “Marble’s in box” and “Marble’s in basket”, and removes the latter.

Architecturally, the reason why three year olds suffer from false belief blindness is that all beliefs funnel through one world model. There are simply no separate memory spaces to evaluate the world model of other people. In order to understand the beliefs of other people, they must be compatible with facts known by oneself.
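This single-world-model failure mode can be made concrete with a toy sketch. Everything here is illustrative (the class, the conflict rule, the tuple encoding of beliefs are my own hypothetical stand-ins, not anything from the post's architecture diagrams):

```python
class WorldModel:
    """Toy model of the three-year-old's single mental library."""

    def __init__(self):
        self.beliefs = set()   # first-order beliefs: the "mental library"
        self.pointers = {}     # second-order beliefs: holder -> first-order belief

    def incompatible(self, a, b):
        # Toy librarian rule: two different locations for the same object conflict.
        return a[0] == b[0] and a[1] != b[1]

    def add_belief(self, belief):
        # The librarian evicts any belief the newcomer contradicts...
        self.beliefs = {b for b in self.beliefs if not self.incompatible(b, belief)}
        self.beliefs.add(belief)
        # ...and re-routes orphaned second-order pointers to the survivor.
        for holder, target in self.pointers.items():
            if target not in self.beliefs:
                self.pointers[holder] = belief

    def ascribe(self, holder, belief):
        self.add_belief(belief)
        self.pointers[holder] = belief


wm = WorldModel()
wm.ascribe("Sally", ("marble", "basket"))  # Sally watches the marble placed
wm.add_belief(("marble", "box"))           # the child watches Anne move it
print(wm.pointers["Sally"])                # ('marble', 'box'): false-belief blindness
```

Because "Sally thinks that" can only point into the one shared library, evicting "Marble's in basket" forces the pointer onto the child's own true belief.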

… And Recognizing Falsehood Is No Cure

In Gullible By Default we discussed how negating beliefs is optional, effortful, and prone to failure. So you might think that the three year old child hasn’t yet developed the ability to negate Sally’s belief. But you’d be wrong. If you look carefully at the mechanics of negation, you will realize that negation cannot help.

Negating claims is implemented by adding an “It is false that” tag in front of the belief in question. We can negate Sally’s belief in two different ways:

ToM- Negation Cannot Model False Beliefs

Both negations fail. It is simply not true that Sally recognizes the marble isn’t in the basket. Likewise, we cannot say that Sally does not think that the marble is in the basket. Even exotic combinations (e.g., double negatives) are of no use.
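The two failing negations can be made concrete with a toy sketch, assuming a tuple encoding of beliefs (the encoding and the `negate` helper are hypothetical, introduced purely for illustration):

```python
# Negation as an "it is false that" prefix bolted onto a proposition.

def negate(proposition):
    return ("it is false that", proposition)

belief = "marble in basket"

# Option 1: negate the first-order content Sally points to.
option1 = ("Sally thinks that", negate(belief))
# Reads as "Sally recognizes the marble isn't in the basket" — untrue of Sally.

# Option 2: negate the entire second-order belief.
option2 = negate(("Sally thinks that", belief))
# Reads as "Sally does not think the marble is in the basket" — also untrue.
```

Whichever scope the tag takes, the resulting structure asserts something false about Sally; no placement of the prefix encodes "Sally sincerely but wrongly believes".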

But shouldn’t recognizing falsehood in one’s self enable us to recognize it in others?

No. It may help to notice that lie detection immediately corrects false beliefs (by introducing an “it is false that” mental prefix). This repair is essential to protect the mental ecosystem from contamination. Put simply, negation evolved as a protection against deception; it is simply not equipped to recognize honest mistakes.

The Birth Of Creativity

Quite independently of the evolution of lie detectors, the hominid line has also acquired a different kind of ability: the ability for pretense. Did you play the “floor is lava” game as a child? What is going on in the mind of a child when they pretend that couches, etc are the only refuge from a sea of molten rock?

You might be tempted to say that there exists a “floor is lava” belief in your mental library during such games. Or that the falsehood-detector is exploring the possibility of appending “It is false that…” to the traditional belief that “the floor is carpet”. But something more sophisticated is going on. As Leslie puts it:

If I jump up suddenly because I mistakenly think I see a spider on the table, I act as if a spider were there. But I certainly do not pretend a spider is there.

Instead of replacing belief, the child is alternating between competing beliefs. More specifically, the child is building a tiny little scaffold, which hovers over their actual belief that “the floor is carpet” and simulates a world in which that belief is replaced. We may call this counterfactual simulation.

ToM- Sally-Anne v4 Counterfactual Maps

To operate effectively, your counterfactual simulation must:

  • Retain the original belief and its relationships to the rest of your memory. Damage to any of this knowledge is irreversible.
  • Construct maps between the original belief and the counterfactual. It is not enough to imagine “Floor is Lava”, you must know which belief it overwrites.
  • Distance the prediction machine from the sensorimotor river. The floor’s perceptual signature doesn’t evoke visceral fear, for example.
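A minimal sketch of such a scaffold, under the assumption that beliefs are key-value pairs (the `Counterfactual` class and its names are hypothetical, not part of the architecture):

```python
class Counterfactual:
    """A scaffold hovering over the world model: originals stay untouched."""

    def __init__(self, world_model, overrides):
        self.world_model = world_model   # requirement 1: original beliefs retained
        self.overrides = overrides       # requirement 2: original -> replacement map

    def lookup(self, key):
        # Inside the simulation, overridden beliefs shadow the originals.
        actual = self.world_model[key]
        return self.overrides.get((key, actual), actual)


world = {"floor": "carpet", "couch": "furniture"}
lava_game = Counterfactual(world, {("floor", "carpet"): "lava"})

print(lava_game.lookup("floor"))   # 'lava' inside the simulation
print(world["floor"])              # 'carpet': the original belief is undamaged
```

Requirement 3 (decoupling from the sensorimotor river) has no analogue in so small a sketch; the point is only that the simulation reads through a map rather than editing the library itself.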

Why has a counterfactual simulator evolved in the hominid line?

Pretense originates in play, but is far more significant than that. With the ability to simulate different worlds, our minds are able to “try on” new beliefs as if they were hats. If we locate a belief that explains the world better than our world model, we upgrade our world model. Counterfactuals allow our prediction machines to upgrade themselves. They are the algorithmic bedrock of creativity.

How Counterfactuals Restored Our Sight …

A timeline of mind-relevant developmental milestones:

  • At 12 months, the intentional stance emerges
  • At 20 months, pretending behavior emerges.
  • At 48 months, false belief blindness is overcome.

Despite the large time gap between pretense and the blindness reversal, I claim that pretense is the cure. What gives?

The answer lies in the fallback mechanism for representing false beliefs. Recall that the 13 month old’s mental librarian rejects “Marble’s In Basket” as incompatible, and as a fallback, re-routes “Sally thinks that” towards the true belief “Marble’s In Box”. When the counterfactual simulator comes online at twenty months, it isn’t yet involved in this failure mode.

It is only slowly that the child’s brain discovers that these two technologies can be productively combined. Passing the Sally-Anne test requires a novel modification to the error processing algorithm:

ToM- Sally-Anne v5 Counterfactual Redemption

Not only are false beliefs encoded counterfactually, but the novel data are stored in the relationship model for later reuse. This is how we become aware of the fallibility of our peers.

… At The Cost Of Self-Anchoring

In Epistemic Topography, I said this:

The curse of knowledge expects short inferential distances. Why does this bias (not another) live in our brains?

As we have seen, estimating [epistemic] location is expensive. So the brain takes a shortcut: it uses a location it already knows about (its own) and employs differences between the Self and the Other to estimate distance. Call this self-anchoring. But the brain isn’t aware of all differences, only those it observes. Hence the process of “pushing out” one’s estimation of Other Locations typically doesn’t go far enough… the birthplace of the curse.

We now have a mechanical explanation for this mental shortcut. The more differences between our beliefs and another person’s, the more data we must encode counterfactually. But counterfactuals are not prediction machines in their own right; they only facilitate tinkering with our own machinery. Other people are thus presumed similar until proven different.
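The shortcut itself fits in a few lines, again assuming key-value beliefs (the function and variable names are hypothetical):

```python
def model_of_other(self_model, observed_differences):
    # Presumed similar until proven different: start from your own beliefs,
    # then overwrite only the deltas you have actually observed.
    other = dict(self_model)             # self-anchoring
    other.update(observed_differences)   # counterfactually encoded deltas
    return other


my_beliefs = {"marble": "box", "fish": "swim", "sky": "blue"}
sally = model_of_other(my_beliefs, {"marble": "basket"})

print(sally["marble"])   # 'basket': the one observed difference
print(sally["sky"])      # 'blue': the unobserved defaults to your own belief
```

Unobserved differences never make it into the model, which is exactly where the curse of knowledge enters: the "pushing out" from Self to Other stops at the deltas you happened to notice.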

Takeaways

Executive Summary:

  • Three year old children cannot conceive of other people being wrong. This is long after they become mind-aware, even after they become able to recognize deceit. What gives?
  • First, children are blind to false beliefs because all beliefs (mine and yours) are stored in the same location: the mental library, or world model.
  • Second, falsehood detection ultimately evolved as a deception-detector; why should we expect it to also function as a wrongness-detector?
  • The ability to pretend (e.g., “This Floor Is Lava”) allows our minds to test out new beliefs, rejecting the failures and integrating the successes.
  • The ability to simulate counterfactuals opens up a new pathway to encode false beliefs.
  • However, this pathway doesn’t let us imagine other people independently: other minds are always self-anchored; that is, imagined as deviations from your own mind.

References

  • [Leslie 1987] Pretense and Representation: The Origins of “Theory of Mind” [link]

Lie Detection: Gullible By Default

Part Of: Language sequence
Content Summary: 1300 words, 13 min read

Two Tagging Methods

Imagine a library of a few million volumes, a small number of which are fiction. There are at least two reasonable methods with which one could distinguish fiction from nonfiction at a glance:

  1. Paste a red tag on each volume of fiction and a blue tag on each volume of nonfiction.
  2. Tag the fiction and leave the nonfiction untagged.

Perhaps the most striking feature of these two different systems is how similar they are. Despite their somewhat different tagging schemes, both libraries ultimately accomplish the same end. If the labeling were done by a machine inside a tiny closet – if a library user could not see with her own eyes the method employed – is there any hope of her discovering which method was used?

This is exactly the problem faced by cognitive scientists trying to understand the nature of belief. Your brain is responsible for maintaining a collection of beliefs (the mental library). Some of these beliefs are marked true (e.g., “fish swim”); others are marked false (e.g., “Santa Claus is real”). As philosophers in the 17th century realized, your brain could distinguish truth from falsehood in two distinct ways:

  1. Rene Descartes thought the brain uses the red-blue system. That is, it first tries to comprehend an idea (import the book) and then evaluates its status (gives it the appropriate color).
  2. Baruch Spinoza thought the brain uses the tagged-untagged system. That is, it first tries to comprehend an idea (import the book) and then checks whether it is fiction (decides whether it needs a tag).

Here is a graphical representation of the two brains:

Default Gullibility- Two World Model Updating Systems

Would The Real Brain Please Stand Up?

Ideal mental systems have unlimited processing resources and an unstoppable tagging system. Real mental systems operate under imperfect conditions and a finite pool of resources, which causes these mental processes to sometimes fail. Sometimes, your brain isn’t able to assess beliefs at all.

What happens when a Cartesian red-blue brain is unable to fully assess incoming beliefs? Well, if the world model is left alone after comprehension (middle column), then the resultant beliefs are neither marked true nor false, making them easily distinguishable from more trustworthy beliefs.

What happens when a Spinozan tagged-untagged brain cannot assess an incoming belief? Well, if its World Model processing stops after comprehension (middle column), then the novel claims appear identical to true beliefs.

On the Cartesian system, comprehension is distinct from acceptance. On a Spinozan system, comprehension is acceptance, and an additional (optional!) effort is required to unaccept a belief. Cartesian brains are innately analytic; Spinozan brains are innately gullible.
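The contrast can be sketched as two update procedures, with an `interrupted` flag standing in for resource depletion. Everything here is an illustrative toy of Gilbert's library metaphor, not his formalism:

```python
def cartesian_update(library, claim, assessed_true, interrupted=False):
    # Red-blue system: comprehension imports the claim untagged;
    # a separate assessment step then colors it true or false.
    library[claim] = "untagged"                # comprehension
    if not interrupted:                        # assessment
        library[claim] = "true" if assessed_true else "false"


def spinozan_update(library, claim, assessed_true, interrupted=False):
    # Tagged-untagged system: comprehension IS acceptance;
    # an optional second pass tags falsehoods.
    library[claim] = "true"                    # comprehension = acceptance
    if not interrupted and not assessed_true:  # optional unacceptance
        library[claim] = "false"


cart, spin = {}, {}
cartesian_update(cart, "dragons exist", assessed_true=False, interrupted=True)
spinozan_update(spin, "dragons exist", assessed_true=False, interrupted=True)

print(cart["dragons exist"])   # 'untagged': distinguishable from trusted beliefs
print(spin["dragons exist"])   # 'true': indistinguishable — default gullibility
```

With full resources, both systems end up marking "dragons exist" false; only under interruption do their predictions diverge, which is what makes the distraction experiments below diagnostic.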

So which is it? Is your brain Cartesian or Spinozan?

Three Reasons Why Your Brain Is Spinozan

Three streams of evidence independently corroborate the existence of the Spinozan brain.

First, scientists have confirmed time and again that distraction amplifies gullibility.

[Festinger & Maccoby 1964] demonstrated that subjects who listened to an untrue communication while attending to an irrelevant stimulus were particularly likely to accept the propositions they comprehended (see [Baron & Miller 1973] for a review of such studies)

When resource-depleted persons are exposed to doubtful propositions (i.e., propositions that they normally would disbelieve), their ability to reject those propositions is markedly reduced (see [Petty & Cacioppo 1986] for a review).

This effect appears in more complex scenarios, too. Suppose your friend Clyde says that “dragons exist”. In this scenario, the brain may wish not simply to reject that (first-order) claim, but also to implement lie detection by rejecting the second-order proposition that “Clyde thinks that dragons exist”.

Default Gullibility- Two Types Of Negation (1)

In the context of second-order propositions, distraction causes an even stronger inability to reject claims:

After decades of research activity, both the lie-detection and attribution literatures have independently concluded that people are particularly prone to accept the second-order propositions implicit in others’ words and deeds (for reviews of these literatures see, respectively, [Zuckerman, Depaulo, & Rosenthal 1981] and [Jones 1979]. What makes this phenomenon so intriguing is that people accept these assertions even when they know full well that the assertions stand an excellent chance of being wrong. For example, if an authority asks someone to read aloud a prepared statement (e.g., “I am in favor of federal protection of armadillos”), people [still] assume that the speaker believes the words coming out of the speaker’s mouth. This robust tendency is precisely the sort that a resource-depleted Spinozan system should display.

Not only do dubious assertions become more believable amidst distraction; the opposite of reasonable denials is also likely to be affirmed. That is, resource depletion will cause statements like “Bob Talbert not linked to Mafia” to induce belief in “Bob Talbert linked to Mafia”. The Cartesian model predicts no such asymmetry in response to resource depletion during assessment.

Second, children develop the ability to believe long before the ability to disbelieve.

The ability to deny propositions is, in fact, one of the last linguistic abilities to emerge in childhood [Bloom 1970] [Pea 1980]. Although very young children may use the word no to reject, the denial function of the word is not mastered until quite a bit later.

Furthermore, young children are particularly prone to accept propositions uncritically (see [Ceci et al 1987]). Although such hypersuggestibility is surely exacerbated by the child’s inexperience and powerlessness, young children are more suggestible than older children even when such factors are taken into account [Ceci et al 1987].

Third, linguistic evidence shows that negative beliefs take longer to assess, and appear less frequently in practice.

A fundamental assumption of psycholinguistic research is that “complexity in thought tends to be reflected in complexity of expression”, and vice versa. The markedness of a word is usually considered the clearest index of linguistic complexity… The Spinozan hypothesis states that rejection is a more complex operation than is acceptance and, interestingly enough, the English words that indicate acceptance of ideas are generally unmarked. That is, our everyday language has us speaking of propositions as acceptable and unacceptable instead of rejectable and unrejectable. Indeed, people even speak of belief and disbelief more naturally than they speak of doubt and undoubt.

People are generally quicker to assess true statements than false statements [Gough 1965].

How Should We Then Think?

Frankly, this was a difficult article to post. Knowing about biases can hurt people; that is, learning about their own flaws can make people defensive and inflexible.

But this sobering post need not cause us to abandon curiosity and the pursuit of truth. It is the mark of an educated mind to embrace a thought without flinching, to explore its consequences without fear. It is possible to change your mind.

Takeaways

This article was inspired by [Gilbert 1991] How Mental Systems Believe. Points to remember:

  • How to tell truth from falsehood? You can either tag all beliefs true or false (Cartesian system) or only tag false beliefs (Spinozan system).
  • Beliefs aren’t always fully analyzed. But in a Spinozan system, unassessed beliefs appear true – the system is credulous by default.
  • Comprehension is belief: gullibility is innate. Only critical thinking is optional, effortful, and prone to failure. Your brain is Spinozan.
    • How do we know? Because distraction causes thinkers to become more gullible
    • How do we know? Because young children are very suggestible, only later acquiring the ability to be skeptical
    • How do we know? Because negative beliefs take longer to assess, have more complex words, and appear less frequently in practice.
  • The great master fallacy of the human mind is believing too much.

References

  • [Baron & Miller 1973] The relation between distraction and persuasion.
  • [Bloom 1970] Language development: Form and function in emerging grammars.
  • [Ceci et al 1987] Suggestibility of children’s memory: Psychological implications.
  • [Festinger & Maccoby 1964] On resistance to persuasive communications.
  • [Gough 1965] The verification of sentences: The effects of delay of evidence and sentence length.
  • [Jones 1979] The rocky road from acts to dispositions.
  • [Pea 1980] The development of negation in early child language.
  • [Petty & Cacioppo 1986] The elaboration likelihood model of persuasion.
  • [Zuckerman, Depaulo, & Rosenthal 1981] Verbal and nonverbal communication of deception.

Granite In Every Soul

Part Of: Awakening To A Social World sequence
Followup To: Intentional Stance: Awakening To A Social World
Content Summary: 400 words, 4 min read

The Origins Of Social Identity

At twelve months of age, children begin ascribing beliefs and desires to others. If beliefs represent the world, then beliefs-about-beliefs are meta-representational. We may call the former 1-beliefs, and the latter 2-beliefs.

Not all beliefs are about people… but some are! In Awakening To A Social World, we discussed how people-oriented knowledge is stored in relationship models. Specifically, 1-beliefs about significant people in your life are stored in primary models, and 2-beliefs about their impression of you are stored in secondary models.

The social mind can be visualized as follows:

Granite Self- Modeling All Relationships

Notice how every person has a color: Clyde, for example, is orange, and Bonnie is purple. Within Bonnie’s mind, her primary models of other people are large ovals, and secondary models are nested inside. Since the large ovals describe other people, they have a variety of colors. Since the nested ovals capture who Bonnie is to these people, they are all the same color (in Bonnie’s case, purple).

Importantly, every purple “impression” contributes to Bonnie’s social identity.

Granite Self- Relationships Produce External Self

The Sensorimotor River Requires Granite

What good is a social identity? Does it exist solely to give you warm fuzzies and/or social anxiety?

No, selfhood is inextricably linked to behavior. While your behavior may diverge from time to time, it is at least strongly guided by your sense of identity. Call this hypothesized connection the self-motivation thesis.

The flow from perception to action (the sensorimotor river) is the single most important service your mind provides. But imagine, if you will, a child’s relationship to her best friend, which plays a central role in her social identity. If her friend moves away, that part of her identity is lost. But this is unacceptable: grief is a healthy response to such an event, but damaging your sense of self (and thus crippling your motivation) is not.

Social uncertainties and change cause dramatic fluctuations within your social identity, but the sensorimotor river requires consistency. Your brain resolves this tension by injecting granite into your soul: by disconnecting social identity from self-concept.

Granite Self- Social Identity vs Self-Concept

The interplay between the fluctuating identity and the rigid self-concept will be taken up next time, when we discuss self-verification theory.

The Logic of Mindreading

Part Of: Sociality sequence
Followup To: Agent Detection: Life Recognizing Itself
Content Summary: 1500 words, 15 min read

David Hume once observed:

There is an universal tendency among mankind to conceive all beings like themselves, and to transfer to every object those qualities of which they are intimately conscious. We find faces in the moon, armies in the clouds; and, by a natural propensity, if not corrected by experience and reflection, ascribe malice or goodwill to every thing that hurts or pleases us … trees, mountains and streams are personified, and the inanimate parts of nature acquire sentiment and passion.

How did we acquire the ability to discover minds in the world? Let’s find out.

Equifinality: Awakening To Goals

Obviously enough, living things obey the laws of physics. But living things (which we called agents) also have properties unique to them: characteristic appearances (e.g., faces) and behaviors (e.g., self-propelled movement). In Life Recognizing Itself we saw how this allows the brain to build better prediction machines around these agents. We saw that differentiating agents does not require representing other minds. This fact allows us to appreciate the gradual maturation of the Agency Detection system as we travel across the Tree of Life.

Once an organism is detected, the Agent Classifier strains additional information out of its perceptual signature. But there are many other improvements we can make to our prediction machines. Consider, for example, the perspective of an infant who every night is picked up and placed in her crib.

On some nights she may be crawling around the kitchen, on others the family room, on others sitting in her parent’s lap on the couch. Despite these very diverse beginnings, the outcome of the putting-to-bed process is always the same. From the perspective of the neurons in her eyes, very different beginning states always result in the same end state. This property is known as equifinality, and it is not unique to nursery rooms: it is ubiquitous among living things (another example: the behavior of water buffalo around watering holes).

How might prediction machines anticipate equifinality? An efficient explanation of equifinality comes from ascribing goals to an agent.

Dominance Hierarchies: Awakening To Belief

Minds capable of computing situation-agnostic equifinality already have an advantage: perhaps the lion just ate and is not hungry enough to give chase, but the gazelle does just as well to model the lion as universally hungry. But there is room to inject a little contingency. Return to our nursery example: the infant knows that the putting-to-bed desire is only activated at night. How best to anticipate contingent desire? An efficient explanation for equifinality contingency is ascribing beliefs to an agent.

In fact, there are myriad benefits to possessing belief inference systems besides equifinal contingency. Another important reward gradient stems from the most important social structure of the animal kingdom: the dominance hierarchy. Wolf packs, for example, feature an alpha male, who is given special feeding & reproductive privileges. In species like the baboon, this hierarchy becomes much more detailed: every individual male knows who is above and who is below his place.

Dominance hierarchies can arguably exist without their members having theories of mind. But agility in climbing the hierarchy is under strong selective pressure, and effective navigation requires tremendous psychological prowess. For example, in his book A Primate’s Memoir, Robert Sapolsky recounts tales of his baboons forming alliances in their attempts to dethrone the sitting alpha. The ability to model the beliefs and desires of one’s alliance partner is surely a boon; hence discovering mind can be viewed as a selective consequence of the dominance hierarchy.

Intentional Stance

This tendency to ascribe beliefs and desires to other agents is together known as the intentional stance. Let us name the module responsible for this ability the Agent Mentalizer. The intentional stance appears in children between 6 and 12 months (Gergely et al, 1994).

Intentional Stance- Information Processing

As with any algorithm in the Agency Detection system, we might expect the intentional stance to be quite vulnerable to false positives. And that is exactly what we find. For an extreme demonstration, consider the following video (taken from Heider & Simmel (1944)):

While the Agent Mentalizer was designed to understand the minds of other animals, it had no trouble ascribing beliefs and goals to two dimensional shapes. This is roughly analogous to your email provider accepting a tennis ball as a login password.

Impression Management Via Secondary Models

In your brain, you have a set of beliefs about the world. These beliefs may be stored in many different memory systems, but all of them together improve the power of prediction that you wield over your environment. Your knowledge, your web of belief, I call a World Model. For every significant individual in your life, you have a collection of beliefs about them, provided by your relationship modeling system. Some of these beliefs are non-mental, obtained from systems like the Agent Classifier. Call these beliefs collectively your primary model; your World Model contains many primary models, one per significant individual.

The Agent Mentalizer, however, evolved to infer the mental states (beliefs, goals) of other individuals as well. But simulating mental states is not like simulating objects – mental states are about objects. That is, we require a new type of model – a secondary model – to simulate the primary model of another person.

Confused? An example should help.

Imagine a three year old child, Bonnie, and her best friend Clyde. Since she was very young, Bonnie has been accumulating shared experiences with Clyde, and is now able to recognize him by appearance. These memories and knowledge of Clyde are stored in a primary model.

At the twelve month mark, Bonnie acquired the ability to simulate beliefs/goals of other people. This awakening has greatly improved her understanding of Clyde. She began noticing when Clyde was not in the mood to play, and his opinions of various toys. She even became able to crudely simulate Clyde’s response to situation X, even if Clyde hadn’t encountered X in real life.

During this time, of course, Clyde went through the very same maturation process. We can model their co-understanding as follows:

Intentional Stance- Modeling Other Minds

The above graphic underscores the “egotistical” subset of secondary models: simulation of the impression they are making on one another. The intentional stance is the birthplace of impression management.

Against Ternary Models

If secondary models simulate primary models, can ternary models simulate secondary models? Well, let’s not close ourselves to the possibility. Here’s what such a thing looks like:

Intentional Stance- Difficulties With Ternary Modeling v1

So… ternary models are hard to understand! Let’s simplify:

  • A’s model of B = How B Appears To A
  • B’s model of A’s model of B = How B thinks B appears to A
Intentional Stance- Difficulties With Ternary Modeling v2

Make sure the construction logic of the above image is clear: you should have no trouble constructing a quaternary graphic, quinary graphic, etc. Doesn’t the raw ability to reason about such higher-order logics mean that your brain is capable of infinitely-nested meta-models?

A useful analogy here is the natural numbers (0, 1, 2, …). How can your brain hope to count arbitrarily large numbers of things? After all, folk mathematics is fueled by biochemistry; the number of representations it can store is finite. So… how many numbers can it count?

The correct answer is four.  That is, your subitizing module makes counting up to four almost instantaneous; for larger numbers, response times rise dramatically, with an extra 250-350 ms added for each additional item beyond about four. We see a similar time difference in recursive reasoning: most people are easily able to simulate other people (primary modeling) and conduct impression management (secondary modeling). But imagining the impression-management of other people takes conscious effort, and thus cannot be part of your default Theory Of Mind machinery.
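
The subitizing slope can be captured in a toy response-time model. The 300 ms/item figure is the midpoint of the 250-350 ms range above; the 400 ms base cost is an assumed, illustrative constant:

```python
# A toy response-time model of enumeration, after Trick & Pylyshyn (1994).
# PER_ITEM_MS is the midpoint of the reported 250-350 ms/item slope;
# BASE_MS is an assumed, illustrative fixed perceptual/response cost.

SUBITIZING_LIMIT = 4
BASE_MS = 400
PER_ITEM_MS = 300

def enumeration_time_ms(n_items: int) -> int:
    """Predicted time to report how many items are on display."""
    extra = max(0, n_items - SUBITIZING_LIMIT)
    return BASE_MS + PER_ITEM_MS * extra

# Flat within the subitizing range, then a steep linear climb:
# 1-4 items cost 400 ms each; 7 items cost 400 + 3*300 = 1300 ms.
```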

With these insights in mind, we see that ternary models ought not be admitted into our mental architecture simply because they are conceivable.  Such a solution would not be parsimonious, since these complex behaviors can be built from simpler atomic components. In fact, in the simplification above, you can even see hints of your two-level brain “translating” ternary concepts into a more direct language that it better understands.

Takeaways

Here are the ideas I want you to walk away from this post with:

  • Agents are able to steer different situations towards one outcome. Seeing a world of desire, a world of goals, is how children explain equifinality.
  • Social animals typically compete for resources in the shadow of a dominance hierarchy. Modeling the beliefs of their peers about them improves social acumen, and with it, fitness.
  • Ascribing beliefs and desires to other agents is known as the intentional stance. It appears very early in humans, between six and twelve months.
  • We can formalize the above notions in representation theory. Thoughts about other people reside in primary models; thoughts about other people’s thoughts go in secondary models.
  • Ternary models are unlikely to exist, for the same reason that computers don’t feel inclined to count to infinity. Our brains can get there by recursively invoking simpler components.

Relevant Resources

  • Gergely et al (1994) Taking the intentional stance at 12 months of age
  • Heider & Simmel. (1944) An experimental study of apparent behavior
  • Trick, L.M., & Pylyshyn, Z.W. (1994). Why are small and large numbers enumerated differently?

Agent Detection: Life Recognizing Itself

Part Of: Demystifying Sociality sequence
Content Summary: 1400 words, 14 min read

David Hume once observed:

There is an universal tendency among mankind to conceive all beings like themselves, and to transfer to every object those qualities of which they are intimately conscious. We find faces in the moon, armies in the clouds; and, by a natural propensity, if not corrected by experience and reflection, ascribe malice or goodwill to every thing that hurts or pleases us … trees, mountains and streams are personified, and the inanimate parts of nature acquire sentiment and passion.

Today, we will learn how we came by the ability to detect other animals in the world.

Response Patterns To Predation

We begin with behaviors induced by predation. Did you know that even bacteria engage in predation?

  • On detecting energy-laden chemicals, a bacterium will swim towards them via a process known as positive chemotaxis.
  • On detecting noxious chemicals, it will instead swim away via negative chemotaxis.

Chemotaxis doesn’t require a nervous system, which is nice because bacteria don’t have one. This nicely illustrates a key lesson in biology: competent behavior does not require comprehension. The only things required here are stimulus-response (SR) maps, which are just as mechanical as the button linking the entrance of your house to a doorbell.
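
A stimulus-response map really is this mechanical. Here is a minimal Python sketch (the stimulus labels are illustrative):

```python
# Chemotaxis as a bare stimulus-response (SR) map: a fixed lookup from
# stimulus to motor response, with no internal model of the world.
# The stimulus labels below are illustrative.

SR_MAP = {
    "energy_laden_chemical": "swim_toward",  # positive chemotaxis
    "noxious_chemical": "swim_away",         # negative chemotaxis
}

def respond(stimulus: str) -> str:
    """Competence without comprehension: as mechanical as a doorbell."""
    return SR_MAP.get(stimulus, "random_tumble")  # default: keep exploring

# respond("noxious_chemical") -> "swim_away"
```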

Contra B.F. Skinner and his school of radical behaviorism, mammals construct mental representations of their environment. But you can still find SR maps in reflexes (e.g., your knee recoiling from a doctor’s mallet) and fixed action patterns (e.g., the Sphex wasp building her nest).

Of course, SR maps are metabolically costly, and easy for social predators to outmaneuver. Mammals improved on this approach by re-purposing their endocrine system: the amygdala drives the hypothalamus into one of two modes:

  • Sympathetic nervous system, also known as the fight-or-flight response, prepares your body for action. Symptoms include heart rate increase, tunnel vision, dilated pupils, flushed skin, dry mouth, and slowed digestion.
  • Parasympathetic nervous system, also known as rest-and-digest, restores normal metabolic function (e.g.,  digestion).

The sympathetic nervous system prepares the body for intense activity (amusingly, it is used by both predator and prey).

Two Agency Detection Faculties

Animals with complex nervous systems possess a wide range of sense data. But perceptions don’t include “Lion Warning” labels; instead, sense-data is encoded in neuronal spike trains (roughly, a string of 1s and 0s):

Agency Detection- Interpreting Sense-Data (1)

Fortunately, just as machine learning algorithms feed on data to generate prediction machines, your brain ingests such sensory data to produce inferences about predators and prey. What kinds of algorithms might it use to this effect?

Consider, for a moment, the gazelle. Lion-detector algorithms would surely benefit this creature. However, the perceptual signature of a lion significantly overlaps with that of fellow gazelles: both have faces, four limbs, the ability to move at great speed, etc.

A gazelle perceives the outline of some as-yet-indeterminate animal concealed in the underbrush: should it simply try to resolve the ambiguity? By no means! While computing identity remains worthwhile, it also pays to immediately invoke the sympathetic system (“prepare for the worst, hope for the best”). We thus see a need for two distinct mental modules:

  • The Agent Detector module is responsible for detecting agents generally. It is informed by multiple algorithms that search for specific features of an environment.
  • The Agent Classifier module is responsible for differentiating between agents: predator from prey, friend from foe. It answers the question “so there’s an organism over there: what is it?”
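
A toy sketch of how these two modules might divide the labor (the feature names and identity labels are hypothetical; the point is that detection fires fast and triggers fight-or-flight, while classification resolves identity afterward):

```python
# Hypothetical two-stage pipeline: fast detection, slower classification.

def agent_detector(percept: dict) -> bool:
    """Cheap, fast test: does ANY agent-signature feature appear?"""
    return bool(percept.get("self_propelled") or percept.get("face_like_outline"))

def agent_classifier(percept: dict) -> str:
    """Slower differentiation: predator, prey, kin, or conspecific."""
    return percept.get("identity", "unknown")

def perceive(percept: dict) -> tuple:
    if agent_detector(percept):
        sympathetic_activated = True  # "prepare for the worst, hope for the best"
        return sympathetic_activated, agent_classifier(percept)
    return False, "no_agent"

# An ambiguous outline triggers arousal before identity is resolved:
# perceive({"face_like_outline": True}) -> (True, "unknown")
```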

Perceptual Fluency, Relationship Models, Affect Signature

Like all processes subject to natural selection, the Agent Classifier is not built in the service of truth. Tinkering with the existing software is preserved only when the changes maintain or promote the organism’s biological fitness. We can nevertheless see four classifications that would honor this harsh criterion:

  • Predator: noticing that an animal is a predator enables differential activation of fight-or-flight, which improves the chances of survival.
  • Prey: noticing that an animal is prey enables differential activation of fight-or-flight, which reduces the risk of starvation.
  • Kin: populations engaged in sexual reproduction are genetically motivated to help their kin (c.f., inclusive fitness). Noticing family members underwrites this ability.
  • Conspecific: social populations often engage in tasks which require coordination. Organisms able to recognize one another in such an environment stand to benefit politically.

The above labels are in fact reached by a wide variety of organisms. How did they arrive at these abilities? The first clue lies in the mere exposure effect: that which is familiar exudes warmth. Two examples:

  • In studies of interpersonal attraction, the more often a person is seen by someone, the more pleasing and likeable that person appears to be.
  • In another study, subjects were shown nonsense symbols that resembled Chinese characters.  Each character was shown from 0–25 times.  The subjects were then asked to rate how they felt about each character. Eleven out of twelve times, the character was liked better when it was in the high frequency category.

The more you encounter a certain perceptual signature that doesn’t attack you, the easier that signature is to decode (perceptual fluency), and the more “good vibes” you get from the experience.

Besides these foundational mechanisms, mammals have additional modules underwriting their social interactions. Significant relationships are implemented with relationship models: finite databases in your brain that track your interactions with significant individuals. But Capgras syndrome complicates this picture somewhat. An example:

Mrs. D, a 74-year-old married housewife, recently discharged from a local hospital after her first psychiatric admission, presented to our facility for a second opinion. At the time of her admission earlier in the year, she believed that her husband had been replaced by another unrelated man. She refused to sleep with the impostor, locked her bedroom and door at night, asked her son for a gun, and finally fought with the police when attempts were made to hospitalise her.

It turns out that people associate signs of normal, autonomic emotional arousal with the recognition of close relations. While Mrs. D’s relationship model produced the same memories, her affective response to her husband was damaged: he felt like a stranger to her. Your emotional encoding of significant individuals, their affect signature, is so powerful that your brain will privilege its information over your memories, should they ever contradict.

Here then is the information processing view of life recognizing itself:

Agency Detection- Information Processing v1

The Ability To See Faces

In 1976, NASA’s Viking 1 was orbiting Mars, exploring the surface for possible landing sites. Here’s one of its pictures, in the Cydonia region:

Agent Detection- Face On Mars

Striking, no?

While the popular reaction involved speculation about extraterrestrial intelligence, the scientists were, of course, a bit less credulous. Presented with such examples, it would be easy for us to get lost in the spooky feelings, or in dissecting superstitious tendencies. But the most fertile explanandum whispers to us only quietly: why are such false positives more common than false negatives? We will return to this question in a moment.

Richard Feynman once said:

What I cannot create, I do not understand.

Even by this exacting metric, face detection is a solved problem. The software operating your smartphone’s camera is able to detect faces using a machine learning algorithm. We even know which area of your brain (the fusiform face area) operates the wetware version of this algorithm.

In Defense Of False Positives

For any such binary classification task, four outcomes are possible:

Agency Detection- Binary Classification Outcome Matrix (1)

There are two ways to get face detection wrong. Why are false positives so much more common than false negatives?

This question can only be satisfactorily answered by the fitness landscape. In our environment of evolutionary adaptation (EEA), these two errors induce radically asymmetric costs:

Agency Detection- Binary Classification Outcome Costs (1)

The above cost asymmetry explains the predominance of false positives; it tells us why the Agent Detector so often sees armies in the clouds.
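
We can make the asymmetry arithmetic explicit. A small sketch, with made-up stand-in costs (a false negative priced at 1000x a false positive; all rates are illustrative):

```python
# Pricing the two errors of the outcome matrix. The costs are illustrative
# stand-ins: in the EEA, a missed predator is vastly worse than a false alarm.

COST_FP = 1      # flinching at wind in the grass: a little wasted energy
COST_FN = 1000   # failing to flinch at a lion: potentially fatal

def expected_cost(p_agent: float, hit_rate: float, false_alarm_rate: float) -> float:
    """Expected cost per encounter for a detector with the given rates."""
    misses = p_agent * (1 - hit_rate) * COST_FN
    false_alarms = (1 - p_agent) * false_alarm_rate * COST_FP
    return misses + false_alarms

# Even when agents are rare (1% of encounters), the trigger-happy detector
# wins: its many false positives cost less than the cautious one's rare misses.
cautious = expected_cost(0.01, hit_rate=0.80, false_alarm_rate=0.05)  # ~2.05
jumpy    = expected_cost(0.01, hit_rate=0.99, false_alarm_rate=0.30)  # ~0.40
```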

Takeaways

  • Simple lifeforms merely move away from noxious stimuli. More complex animals instead possess the fight-or-flight mechanism.
  • Activating fight-or-flight requires two separate abilities: the ability to detect, and the ability to classify, other animals.
  • Animals label their perceptions of one another by three mechanisms: perceptual fluency, affect signatures, and relationship models.
  • Many different algorithms exist in your brain for detecting agents. One particularly well-understood example is the ability to see faces.
  • False positives appear more frequently because they cost less than false negatives.
    • This explains why we find faces in the moon, and ascribe malice or goodwill to every thing that hurts or pleases us.
Agency Detection- Information Processing v2 (1)

Against Willpower

Part Of: [Breakdown of Will] sequence
Followup To: [Iterated Schizophrenic’s Dilemma]

This post is addressed to those of you who view personal rules as a Good Thing. If your only thought about willpower is “I wish I had more”, pay attention.

The Breath Of Science

What is human nature?

Suppose that I sat you in a room for fifteen minutes, and had you list as many distinctly human characteristics as you could. How long is your list?

Here’s a first stab at it:

Willpower- Human Explananda Only

Observation is the breath of science. But what’s next?

Most people don’t know how to think scientifically. This skill can only be learned by doing, but let me gesture at the tradecraft in passing.

The first step: give your observations names. A name is the sound your brain makes while hitting CTRL-S.

Willpower- Human Explananda and Jargon

Observations are not bald facts. Observations cry out for explanation (they are explananda).

Science is in the business of building prediction machines (also known as explanations, or theories).

Theory builders keep an implicit picture of their target explananda. The mechanization of science will bring with it explananda databases.

Those who forbid nothing live in ignorance. The secret of knowledge is expectation constraint.

Willpower- Human Explananda Pre-Willpower

The scientific lens, then, interprets observations as explanation-magnets. Theories explain observations indirectly, by painting the space of impossibility.

The mind of a scientist flows in this direction: bald observation → named patterns → explananda → prediction machine → theory integration.

A Tale Of Four Side-Effects

The above list features five explananda divorced from theory. This is, of course, to be expected.

Here are the first four observations. At first glance, these may not seem particularly related.

  • Modernity tends to suffer from people “living in their own heads”, unable to appreciate the subtleties of experience. Call this emotional detachment.
  • Rules tend to be all-or-nothing, erring on the side of memorability above reasonableness. Call this salience enslavement.
  • When people fail to meet some standard, that failure tends to repeat itself more than other failures. Call this lapse aggravation.
  • People are prone to self-deception. Call such events introspective failures.

By now, I have completed my sketch of Ainslie’s theory of willpower (willpower is preference bundling, implemented as personal rules).  One reason to take this theory seriously is that it explains why humans suffer from these syndromes.  The surprising nature of these connections is a virtue in science.

Willpower- Human Explananda Post-Willpower

All of this is not to say that willpower is undesirable.  Willpower is just not unequivocally beneficial. If you are in the business of authoring rules for yourself, you would be well-advised to account for the risks.

Let me now turn to how our theory of willpower entails these four uncomfortable “side effects”.

On Emotional Detachment

Let’s return to a central result of the Iterated Prisoner’s Dilemma (IPD). Recall that the IPD has both prisoners repeatedly tempted to “rat” on one another for a wide range of different charges, with no end in sight. Suppose the DA has 40 separate questions for which they will give less prison time to the prisoner who provides information. Is it in the government’s interest to tell the prisoners how long they will be playing this game?

The answer is yes! On round 40, each prisoner will know that there is no future round in which defection can be punished, and will defect. On round 39, each prisoner will anticipate the other’s betrayal on round 40, so they can do no better than defecting on that round too. This chain of reasoning threatens to contaminate the entire iterated game. Prisoners will be more likely to cooperate with one another if the length of their game is left unknown.
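
The unraveling argument can be written as a two-line recursion (a minimal sketch of backward induction, not a full game solver):

```python
# A minimal sketch of backward induction in a finitely repeated
# Prisoner's Dilemma. A purely forward-looking prisoner cooperates in a
# round only if defection could be punished in a later cooperative round.

def cooperates(round_no: int, total_rounds: int) -> bool:
    if round_no == total_rounds:
        return False  # final round: nothing to lose, so defect
    # Cooperation now is only safe if the NEXT round would be cooperative;
    # the final round's defection therefore unravels the entire game.
    return cooperates(round_no + 1, total_rounds)

# With a known 40-round horizon, no round sustains cooperation.
```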

Can known-termination collapse apply to games played between your intertemporal selves? Not only does it apply, but I know of no better explanation for the change that strikes many people when they are told they have X months to live. Consider the story of Stephen Hawking. When he was diagnosed at twenty-one years old, he was given two years to live. Fortunately, the diagnosis was inaccurate, but Stephen credits his diagnosis with adding urgency to his work, inspiring him to “seize the moment” before it was too late.

Why should personal rules run at odds with living in the moment?

Attention is a finite resource. When you spend it analyzing situations for rule-conformance, your choices become detached from the here-and-now. Perhaps this is the birthplace of loss of authenticity, that existential philosophers complain of in modern society generally.

On Salience Enslavement

If neuroeconomics has taught us anything, it is that decision-making is an algorithm. But consider the effect of injecting a Preference Bundler inside such an algorithm:

Personal Rule- Crude Decision Process (2)

Preferences are bundled according to personal rules. However, our memory systems deliver these rules in differing degrees of strength.

Personal rules operate most effectively on memorable goals. Let’s imagine for a moment that it is medically beneficial to consume half a glass of wine a night. Consider the case of a non-alcoholic who nevertheless consumes a medically unwise amount of alcohol. Would you advise him to adopt a rule of half a glass, or no alcohol at all?

While the absolute difference between a half-glass and no wine is small, the memorability of either differs dramatically. You can observe the same effect in three chocolate chips vs. no chocolate chips, or one white lie vs. total honesty.

But memorability (salience) is not always so innocent. The exacting personal rules of anorectics or misers are ultimately too strict to promise the greatest satisfaction in the long run, but their exactness makes them more enforceable than subtler rules that depend on judgment calls. In general, the mechanics of policing this cooperation may increase your efficiency at reward getting in the categories you have defined, but reduce your sensitivity to less well-marked kinds of rewards.

On Lapse Aggravation

In Epistemic Topography, I discuss the notion of identity bootstrapping:

I pursue deep questions because I tell myself I am curious → I tell myself I am curious because I pursue deep questions.

Personal rules share an analogous pattern of commitment bootstrapping:

If I renege on my commitment, I am unlikely to do the right thing next time → If I can’t count on my future self, it’s best to take things into my own hands now.

Both phenomena constitute positive feedback:

Personal Rule- Feedback

 

Have you ever heard a microphone squeal because it gets too close to the speakers? This is another artifact of positive feedback:

If a microphone detects noise (e.g., a singer) then the speakers will amplify it → if the speakers produce noise the microphone may detect it

The squeal of the microphone that makes you wince is literally the loudest sound a speaker can produce without blowing a fuse. This effect is not unique to acoustics: all systems that rely on positive feedback are both powerful and unstable.
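
The arithmetic of that instability is easy to sketch (the gain, ceiling, and step counts below are illustrative numbers):

```python
# Positive feedback in miniature: any loop gain above 1.0 amplifies a tiny
# disturbance until it slams into a hard ceiling (the squeal). The gain
# and ceiling values are illustrative.

def feedback(signal: float, loop_gain: float, ceiling: float, steps: int) -> float:
    for _ in range(steps):
        signal = min(signal * loop_gain, ceiling)  # amplified, then clipped
    return signal

# A whisper (0.01) with gain 2.0 reaches the ceiling within 20 cycles:
# feedback(0.01, 2.0, 100.0, steps=20) -> 100.0
# With gain below 1.0 (negative feedback), the same whisper dies away instead.
```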

Suppose I were to build a strong personal rule against eating cheese, but then I lapse. Without the rule, such a decision would have no bearing on my sense of identity. However, with the personal rule in place, I have just sent myself evidence of non-compliance: my present self loses confidence that my future self will sustain my best interests.

Consider the stories you have encountered of children raised in a very strict atmosphere, who have since matured and rebelled. Surely you can think of a case where the rebellion was not simply to adopt a new identity, but instead to fall towards an anti-identity. Malignant failure modes owe their roots to the fabric of rules themselves.

On Introspective Failures

Consider again our poor compatriot who has just lapsed in violation of her personal anti-cheese commitment. Her short-range interest is in concealing the lapse, to prevent detection and thus avoid inviting attempts to throw out the cheese.  What’s more, her long-range interest is in a similarly awkward position, where admitting defeat risks birthing a new malignant failure mode.

The individual’s long-range interest is in the awkward position of a country that has threatened to go to war under some scenario. The country wants to avoid war without destroying the credibility of its threats, and may therefore look for ways to be seen as not having detected the triggering circumstance.

Willpower thus creates perverse incentives. These incentives explain how money disappears despite a strict budget, or how people who “eat like a bird” mysteriously gain weight. To preserve the right to make promises to oneself, we fall victim to self-deception.

Takeaways

This post is addressed to those of you who view personal rules as a Good Thing. If your only thought about willpower is “I wish I had more”, pay attention.

Willpower is not an unmixed blessing. If willpower is preference bundling, implemented as personal rules, then willpower brings with it four uncomfortable side effects:

  • Emotional detachment: Attending to personal rules tends to induce “living in your own head” (an inability to appreciate the subtleties of experience).
  • Salience enslavement: Personal rules tend to be all-or-nothing, erring on the side of memorability even at the cost of reasonableness.
  • Lapse aggravation: Personal rules are powerful feedback loops. When rules are followed they get stronger, but lapses both repeat themselves and promote malignant failure modes.
  • Introspective failures: Decisions are made in the context of intertemporal warfare. In the event of a lapse, both of your selves are incentivized towards self-deception.

All of this is not to say that willpower is undesirable.  Willpower is simply not unequivocally beneficial. If you are in the business of authoring rules for yourself, it pays to be informed of the risks.

Ecology Embeds Gene-Space In The Biosphere

Part Of: Demystifying Life sequence
Followup To: An Introduction To Natural Selection

Motivations

In the past two posts, we have explored the landscape of gene-space.

  • A Genotype Is A Location
  • Organisms Are Unmoving Points
  • Birth Is Point Creation, Death Is Point Erasure
  • Genome Differences Are Distances
  • Biological Fitness Is Height

Within this topography, we identified the following features:

  • A Species Is A Cluster Of Points
  • Species Are Vehicles
  • Genetic Drift Is Random Travel
  • Natural Selection Is Uphill Locomotion

We can imagine millions of alien worlds with their own landscapes.  Today, we survey the complexities of our biosphere: the fitness landscape of Earth.

Fitness-As-Resource

Let a population be the members of a species that live in the same geographical area (such that they are able to interbreed). For example, we know of fourteen different populations of the humpback whale. Now, for every population, there exists some maximum number of individuals that the local environment can sustain. This number is known as the carrying capacity. For most populations this number is fairly large, especially in fertile environments. What does this concept entail for our fitness landscape?

We can define fundamental fitness to be the fitness that would be afforded to some pair of individual organisms in an empty world, with no other organisms competing for the same resources. As we introduce more organisms into a given population cluster, the realized fitness we can afford the original members decreases. When the number of points equals the carrying capacity, average fitness is 1.0 (such that the population as a whole will neither increase nor decrease).
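
This relationship can be sketched with the standard logistic assumption that realized fitness declines linearly with crowding (the linear form and the example numbers are illustrative, not derived from any real population):

```python
# Fitness-as-resource in miniature: realized fitness declines with crowding,
# hitting 1.0 (bare replacement) exactly at carrying capacity. The linear
# decline is the standard logistic assumption; the numbers are illustrative.

def realized_fitness(n: int, carrying_capacity: int, fundamental_fitness: float) -> float:
    """Average fitness afforded to each individual at population size n."""
    crowding = n / carrying_capacity
    return fundamental_fitness - (fundamental_fitness - 1.0) * crowding

# An empty world affords the full fundamental fitness; a full one, replacement:
# realized_fitness(0, 1000, 2.5)    -> 2.5
# realized_fitness(1000, 1000, 2.5) -> 1.0
```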

This perspective leads us to view realized fitness as a finite resource. This interpretation is entirely compatible with our physics-oriented view of life: life as a disentropy engine. There is only so much disentropy to go around in a given volume of spacetime! Call this the fitness-as-resource view.

Biotopic Landscapes

Organisms successful in the jungle are not necessarily well-equipped for the desert. Fitness landscapes change with location. We must embed gene-space into spacetime.

Does every cubic centimeter of Earth’s surface merit its own gene-space? Surely not! Not even the most life-friendly milliliter can sustain thousands of lifeforms.  We instead need to zoom out, and consider larger slices of land that can support a more meaningful amount of life. Will any collection of land work? No: we want to carve nature at the joints, and draw lines around ecologically uniform habitats (i.e., biotopes).

If we think of time as a dimension, then it is possible to view the entire universe stretched out throughout eternity as a four-dimensional block. But for our purposes, we only care about Earth across all time; call this smaller 4D rectangle the 4-biosphere. Ecological embedding is the art of finding maximally large 4-biotopes that maintain coherent landscapes.

Ecology- Biotopic Landscapes

Niches → Correlated Peaks

If organism fitness is a resource, what is resource competition? To answer, we must don our fitness-as-resource glasses. The idea here is that every organism comes equipped with disentropy vacuums; that is, machinery for extracting viable energy from its environment.

Let’s get specific. Imagine two different whale species in the same biotope, consuming the same kind of plankton. If the underlying plankton population started dying off, both whale populations would be jeopardized. Their disentropy vacuums would falter, and their collective fitness would plummet concurrently. Resource competition is fitness correlation.

Niches ultimately describe the resource-seeking strategies an organism uses to survive. Organisms with complete overlap of such strategies are said to occupy the same niche. While we could explore niches as a resource-space (the Hutchinsonian view, which has produced the discipline of niche modeling), let us instead view niches in terms of fitness landscapes. On this view, niches simply are correlated fitness peaks.

With this identification in hand, we are now in a position to understand more complex ecological phenomena:

  • The competitive exclusion principle holds that, other things being equal, two species competing for the same resource cannot coexist at constant population values. In our language: housing multiple populations on the same correlated fitness archipelago is an unstable zero-sum game.
  • Niche differentiation is a direct consequence of competitive exclusion. If another population invades and begins to take over your niche, you move to another niche (another resource profile). A classic example of niche differentiation comes from Robert MacArthur’s analyses of different warbler populations in the same biotope. He observed that, despite their similar lifestyles, each population evolved to live in a different cross-section of the trees: some living in the top branches, others making homes near the base.

With niches under our belt, we now turn to two other, central notions in population ecology: food webs and arms races.

Food Webs → Lossy Fitness Theft

Our fitness-as-resource view provides a natural understanding of food webs. What is the fundamental fitness of the antelope? Zero! Herbivores cannot sustain themselves without vegetation; their existence is contingent on consuming such resources. Antelopes become relatively more reproductively successful only when grass becomes relatively less successful. This is a destructive form of fitness relocation, which I will call fitness theft.

93% of all human consumption of meat comes from three animals (36% pigs, 33% chicken, and 24% cows). None are carnivores. Why?

Consider the energy budget of life. If some blade of grass absorbs X kilojoules worth of sunlight in its lifecycle, will the cow who ingests it absorb all X units of energy? No: most of the underlying energy was already spent maintaining the cellular structure of the grass itself. Similar analyses can be run at any level of the food web. Life spends energy merely to sustain itself; this is the meaning of metabolism. Therefore, predation is always inefficient, and fitness theft is always lossy.
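
This lossiness can be quantified with the ecologists’ rule of thumb that only about 10% of energy survives each trophic transfer (the starting sunlight budget below is an arbitrary example figure):

```python
# Lossy fitness theft, quantified with the ecologists' "ten percent law":
# roughly 10% of energy survives each trophic transfer. The starting
# sunlight budget is an arbitrary example figure.

TRANSFER_EFFICIENCY = 0.10  # ~90% lost to metabolism at each level

def energy_at_level(sunlight_kj: float, trophic_level: int) -> float:
    """Energy reaching a trophic level (0 = plants, 1 = herbivores, ...)."""
    return sunlight_kj * TRANSFER_EFFICIENCY ** trophic_level

# Of 10,000 kJ captured by grass, roughly 1,000 kJ reach the cow, and only
# about 100 kJ would reach a carnivore that ate the cow. Hence pigs,
# chickens, and cows: farming carnivores would cost ten times the land.
```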

We can encode food webs into the fitness landscape as follows, with diminishing peak size representing energy loss.

Ecology- Example Food Web

Perhaps unsurprisingly, food webs tend to create dynamical population patterns that fluctuate in sync with one another.  Systems theory plays a role in their analysis; here is a visual guide to the underlying differential equations.
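
Those differential equations are the classic Lotka-Volterra predator-prey pair. Here is a discrete-time sketch of them (all coefficients are illustrative, not fitted to any real populations):

```python
# A discrete-time Euler sketch of the classic Lotka-Volterra predator-prey
# equations behind those synchronized fluctuations. All coefficients are
# illustrative, not fitted to any real populations.

def step(prey: float, predators: float, dt: float = 0.01) -> tuple:
    a, b, c, d = 1.1, 0.4, 0.4, 0.1   # prey births, predation, predator deaths, conversion
    d_prey = a * prey - b * prey * predators       # growth minus predation losses
    d_pred = d * prey * predators - c * predators  # conversion gains minus deaths
    return prey + d_prey * dt, predators + d_pred * dt

prey, predators = 6.0, 3.0
history = []
for _ in range(5000):
    prey, predators = step(prey, predators)
    history.append((prey, predators))
# Both populations cycle around the equilibrium (prey = c/d, predators = a/b):
# prey booms are followed by predator booms, which crash the prey, and so on.
```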

Arms Races → Comoving Peaks

Meet the American rough-skinned newt.

Theoretical Ecology- Newt

Cute, right? Just remember, if you hold one, to wash your hands before eating. These things secrete enough poison to literally kill an elephant.

Why should this be? I mean, obviously poison is a good safeguard against predation, but none of the newt’s natural predators come close to being elephant-sized. What benefit could the newt possibly derive from developing a poison so gratuitous?  Genetic accident seems unlikely to explain the immense gap between the practical dose and the actual dose. What gives?

The clue lies in looking carefully at the newt’s predators. It turns out that a nearby species of garter snake has been developing a massively overpowered immunity to the newt’s poison.

If predators can be viewed as “fitness vacuums” in gene-space, then predation can induce natural selection towards novel defense mechanisms. As prey evolve towards more defensible peaks, predators with bolstered offensive capabilities are selected. In this way, the peaks of both species move alongside one another.

Takeaways

As hinted in my reference to systems theory above, theoretical ecology does not always leverage fitness landscape models.

We began this post by naturalizing our notion of gene-space:

  • Fitness is ultimately grounded in energy budgets. Fitness is thus a finite, fungible resource.
  • The fitness landscape of a desert diverges from that of the ocean. Fitness landscapes are most clearly defined in uniform habitats, or biotopes.

These theoretical additions let us model new ecological behavior:

  • Niches are correlated fitness peaks, where each peak “vacuums up” fitness from the same set of resources.
  • Food webs are fitness theft, where predators gain fitness by reducing fitness of their prey. However, predation is inefficient, which guarantees a finite number of predation levels.
  • Arms races are comoving peaks, which occur when predator and prey attempt to outmaneuver one another in the fitness landscape.

These phenomena suggest that the fitness landscape is better understood as a seascape, whose contours fluctuate & interrelate in subtle ways.