The Supernatural Punishment Hypothesis

Part Of: Religion sequence
Followup To: The cognitive basis of theism
Content Summary: 2300 words, 23 min read

Introduction

Last time, we discussed how various mindreading intuitions explain why religions overwhelmingly endorse thoughts of disembodied minds, the afterlife, etc. 

One noticeable absence in that discussion is morality. Religious dogma is highly concerned with morality, and religions often claim to have made their members better people. Is there some kind of necessary relationship between mind-body dualism and ethics? 

Today, I will present evidence suggesting that one particular aspect of theism, namely belief in supernatural monitoring & punishment, is responsible for most of the moral virtues seen in the religious. This is the supernatural punishment hypothesis.

Social Monitoring

Human beings possess norm psychology, enforced by gossip-mediated reputation and altruistic punishment. While we often internalize norms (especially the guilt response of the idiocentric), we are much more likely to engage in self-interested behavior if no one is watching. 

This is in fact the case. Anonymity significantly increases selfish behavior (Hoffman et al 1994). Playing economic games under low light also promotes selfishness (Zhong et al 2010); this darkness effect is statistically mediated by feelings of anonymity.

Self-awareness theory holds that people can oscillate between two different states of self-awareness. When people view themselves as a target of social scrutiny, they experience public self-awareness; in contrast, private self-awareness occurs when attention is directed inward. Reminders of social monitoring uniquely heighten public self-awareness.

These effects aren’t subtle. Monitoring detection is absolutely essential to human social life. Our subconscious response to eyes remains strong even if we consciously know the eyes aren’t connected to minds. In a classic experiment, Bateson et al (2006) measured voluntary donations to the office coffee fund. Simply swapping the picture next to the coffee maker (from flowers to a pair of eyes) substantially increased prosocial donations.

Watched people are nice people.

Supernatural Monitoring

Human social cognition produces intuitions about disembodied minds. Despite their immateriality, we relate to gods as we do other agents. Does the thought of God make us behave like we do when other humans watch us?

Yes. Supernatural monitoring is a special case of social monitoring. We can see this in e.g. the Dictator Game. In the game, one person is given $10, and must decide how much to give to the other player ($0 is maximally selfish, $5 is often considered more “fair”).
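For concreteness, here is a minimal sketch of the game’s payoff structure (the $10 stake comes from the description above; the function and variable names are mine):

```python
def dictator_game(offer: float, endowment: float = 10.0) -> tuple[float, float]:
    """One round of the Dictator Game: the dictator unilaterally splits the endowment.

    The recipient has no move at all, so any positive offer measures
    prosociality rather than strategy."""
    assert 0.0 <= offer <= endowment
    return endowment - offer, offer  # (dictator's payoff, recipient's payoff)

print(dictator_game(0.0))  # maximally selfish: (10.0, 0.0)
print(dictator_game(5.0))  # the "fair" split: (5.0, 5.0)
print(dictator_game(2.0))  # the typical unprimed offer reported below: (8.0, 2.0)
```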

Both believers and atheists offer about $2 of their own accord. But when thoughts of God are unconsciously primed, the offers of believers increase dramatically.

If God-primes simply activated stereotypes of prosocial behavior (an ideomotor response), we would expect atheists to respond too. But the effect only works for those who believe God is watching them.

We can also see this supernatural monitoring effect in children. Piazza et al (2011) had children play a game with an apparently easy way to cheat (cheating was detected via hidden camera). Children cheated much less when they were told that an invisible agent, “Princess Alice,” was sitting in a nearby chair.

Shariff et al (2015) performed a meta-analysis of 25 religious priming studies, and found that religious primes reliably elicit prosocial behavior.

Just like the presence of a video camera, experimental reminders of gods increase public self-awareness but not private self-awareness. For Christians reporting strong religious belief, the effect on public self-awareness of thinking about God is statistically indistinguishable from the effect of thinking about being judged by one’s peers (Gervais & Norenzayan 2012, Study 1). 

Feeling watched increases prosocial behavior, but it also leads people to put their best foot forward, even at the expense of honesty. Socially desirable responding is therefore a useful way to test whether priming religious concepts triggers mind perception or merely makes people act in accordance with prosocial norms and stereotypes. If priming gods merely makes prosocial stereotypes and norms salient, primes should increase honesty (which translates into fewer socially desirable responses). If, on the other hand, reminders of gods make people feel watched, people should respond in more socially desirable ways. Recent evidence supports the latter hypothesis: believers exhibit significantly more socially desirable, dishonest responding after being primed with god concepts (Gervais & Norenzayan, 2012, Study 3).

Supernatural Punishment

Supernatural monitoring promotes religious prosociality. Why? Which specific elements within our religious ideas create such an effect?

The stick looms larger than the carrot. Strong evidence suggests the threat of punishment is the theological element that does the heavy lifting. And this makes sense: in earthly settings, punishment is the lifeblood of intragroup cooperation.

Yilmaz & Bahcekapili (2016) and Shariff & Norenzayan (2011) found that, while belief in gods is largely irrelevant, priming belief in supernatural punishment lowers individual rates of cheating. DeBono et al (2017) found a related effect: reminders of God’s forgiveness actually increase cheating behavior, relative to control conditions.

At the national scale, after controlling for relevant variables such as GDP and education, belief in hell is strongly associated with lower crime rates (Shariff & Rhemtulla 2012). But such beliefs also come at a price: belief in hell is associated with lower life satisfaction and worse day-to-day wellbeing (Shariff & Aknin 2014).
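To unpack “controlling for”: the claim concerns the coefficient on hell belief in a multiple regression that includes GDP and education as covariates. Here is a toy sketch with synthetic data (this is not Shariff & Rhemtulla’s dataset; every number below is fabricated purely to illustrate the regression logic):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                   # number of hypothetical countries
gdp       = rng.normal(size=n)
education = 0.6 * gdp + rng.normal(scale=0.8, size=n)
hell      = -0.5 * gdp + rng.normal(size=n)               # hell belief tracks development
crime     = -0.8 * hell + 0.3 * gdp + rng.normal(scale=0.5, size=n)

# OLS with covariates: the coefficient on `hell` estimates its association
# with crime while holding GDP and education fixed.
X = np.column_stack([np.ones(n), hell, gdp, education])
beta, *_ = np.linalg.lstsq(X, crime, rcond=None)
print(f"adjusted association of hell belief with crime: {beta[1]:.2f}")  # ~ -0.8
```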

If hell is the necessary ingredient for good behavior, why believe in heaven? While hell might be better at getting people to be good, heaven is better at making them feel good. Heaven will get you in the door, hell will ensure the door is repaired in a timely fashion.

Facultative Prosociality

Most of the individual-level causal evidence above is based on priming studies. These studies are quite robust, as revealed in meta-analyses such as Shariff et al (2015).

But what about when believers aren’t reminded of their faith? Do they consistently act better than nonbelievers? Is prosociality a personality trait, strengthened by religious praxis?

The answer is complicated. But there is some reason to think that, at least in WEIRD nations, belief by itself doesn’t increase moral behavior.

First, consider cheating. Are religious people less likely to cheat? Smith et al (1975) describe how research has consistently failed to find any substantial difference in cheating based on religiosity.

Second, consider altruism. The Parable of the Good Samaritan describes people passing by an injured person in need; helping, even when it is inconvenient, seems a useful way to operationalize altruism. But when Darley & Batson (1973) recreated a modern-day Good Samaritan scenario, religious people were no more likely to help! In fact, the only two variables that predicted helping were sex (females showed a higher frequency of altruistic behavior) and situational factors (people told to hurry were much less likely to help).

Religion does produce prosocial behavior, as explained by the supernatural punishment hypothesis. But in WEIRD nations, it seems this prosociality is restricted to times when believers are reminded of their faith. For Christians, this explains the Sunday effect:

  • Malhotra (2010) found that Christians are much more likely to give to charity on Sundays.
  • Edelman (2009) found that, while Christians use as much porn as non-Christians, they use less on Sundays (compensating later in the week).

This effect generalizes to other faiths:

  • Duhaime (2015) found that Muslims are much more likely to give to charity when asked during a call to prayer.
  • Xygalatas (2013) found that Hindus are much less likely to act selfishly in an economic game conducted within a temple.

What good is facultative (situation-dependent) prosociality? Consider this quote from Henrich (2020):

A far more important source of divine punishment arose from the violation of sacred oaths taken in the name of particular gods while signing commercial contracts, making sales, or assuming public offices. In Athens, as in many parts of the Greek world, the marketplace was filled with altars to various gods. Merchants were required to swear sacred oaths before these altars to affirm the authenticity and quality of their goods. Athenians’ intense reliance on the gods, and on such oaths, may help explain their enduring reputation for trustworthiness in both business and treaty-making. 

Religion is more in the situation than in the person.

In Gods We Trust

Religion is a team sport. Enhanced individual prosociality should have group-level consequences. 

Recall our previous discussion of the Prisoner’s Dilemma. While that particular payoff structure may not be representative of human social dynamics (Hawk-Dove and Biological Markets are more incisive), the general point holds: cooperation is the most important challenge of human social life.

If religion promotes cooperation, that’s one thing. But if cooperators manage to find each other (costly and/or credible signals) and prefer interactions with one another (trust), then their groups will reap the rewards. You can see this principle illustrated in spatial evolutionary games:
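Here is a minimal sketch of such a game, in the Nowak & May tradition of spatial prisoner’s dilemmas (the grid size, temptation payoff, and imitate-the-best-neighbor update rule are standard illustrative choices, not parameters from any study cited here). Cooperators survive by clustering: interacting mostly with each other, they out-earn the defectors at their borders.

```python
import random

SIZE, ROUNDS, B = 20, 50, 1.6   # grid size, iterations, defection temptation (assumed values)

def payoff(me, other):
    # Nowak-May payoffs: R=1 (C meets C), T=B (D exploits C), P=S=0 otherwise
    if me == 'C' and other == 'C': return 1.0
    if me == 'D' and other == 'C': return B
    return 0.0

def neighbors(i, j):
    # von Neumann neighborhood on a wrapping (toroidal) grid
    return [((i-1) % SIZE, j), ((i+1) % SIZE, j), (i, (j-1) % SIZE), (i, (j+1) % SIZE)]

grid = [[random.choice('CD') for _ in range(SIZE)] for _ in range(SIZE)]

for _ in range(ROUNDS):
    score = {(i, j): sum(payoff(grid[i][j], grid[x][y]) for x, y in neighbors(i, j))
             for i in range(SIZE) for j in range(SIZE)}
    # Imitation dynamics: each cell adopts the strategy of its highest-scoring neighbor (or keeps its own)
    new = [[grid[i][j] for j in range(SIZE)] for i in range(SIZE)]
    for i in range(SIZE):
        for j in range(SIZE):
            best = max(neighbors(i, j) + [(i, j)], key=lambda c: score[c])
            new[i][j] = grid[best[0]][best[1]]
    grid = new

coop = sum(row.count('C') for row in grid) / SIZE**2
print(f"cooperator share after {ROUNDS} rounds: {coop:.0%}")  # clusters of C typically survive
```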

Note that cooperators can only seek each other out in relationally mobile (i.e., individualistic) societies. Using trust signals to engage with cooperators is less relevant in collectivist societies, which employ reputation instead (Sosis 2005).

There exist two ways to operationalize trust:

  1. Attitudinal trust is an attitude of confidence in the reliability of another person or institution. 
  2. Behavioral trust is a costly and risky investment in a person or entity, with the future expectation of cooperation. 

Attitudinal trust for religious participants is well attested. A worldwide survey of 81 countries (representing 85 percent of the world’s population), conducted between 1999 and 2002, found that almost two-thirds of all participants said they trust religion, compared to only half who trust their government, and only about one-third who trust political parties.

The religious also elicit strong levels of behavioral trust, as measured in the trust game (Tan & Vogel 2008; see the sketch after this list):

  1. More money was forwarded to responders perceived to be religious. 
  2. While believers strongly trusted their own kind, nonbelievers were, if anything, also mildly trusting of believers.
  3. Believers were in fact more likely to cooperate.
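Behavioral trust here was measured with variants of the investment (“trust”) game. A minimal sketch of its payoff logic, with illustrative stakes and the standard multiplier of 3 (assumed values, not Tan & Vogel’s exact parameters):

```python
def trust_game(sent: float, returned_fraction: float,
               endowment: float = 10.0, multiplier: float = 3.0):
    """The investor forwards `sent`; it is multiplied en route to the trustee,
    who voluntarily returns a fraction. Sending is risky, so the amount sent
    operationalizes behavioral trust."""
    pot = sent * multiplier
    back = pot * returned_fraction
    return endowment - sent + back, pot - back  # (investor, trustee) payoffs

print(trust_game(sent=10, returned_fraction=0.5))  # full trust, even split: (15.0, 15.0)
print(trust_game(sent=0,  returned_fraction=0.0))  # no trust: (10.0, 0.0)
```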

Anti-Atheist Prejudice

Consider the Traveling Salesman problem:

Max Weber was sitting next to a traveling salesman when the conversation turned to religion. In a now famous quote, the man said: “Sir, for my part everybody may believe or not believe as he pleases; but if I saw a farmer or a businessman not belonging to any church at all, I wouldn’t trust him with fifty cents. Why pay me, if he doesn’t believe in anything?”

Social scientists have long noted that Americans are less accepting of atheists than of any other group, and by a wide margin. This anti-atheist prejudice is expressed worldwide, though it of course varies by country. Consider that only one US legislator has come out as atheist:

On March 2, 2007, long-time congressman Pete Stark, Democrat from California, made history. He did not author far-reaching legislation that created jobs, cleaned the air, or shaped foreign policy. He was simply the first member of the US Congress to come out as an atheist. In 2012, Stark lost to fellow Democrat Eric Swalwell Jr., who publicly attacked Stark’s atheism.

My first impulse was to attribute anti-atheist prejudice to a generic stereotyping mechanism. But if you examine its characteristics in detail, it turns out to behave quite differently from other stigmas.

The supernatural punishment hypothesis is able to explain this rather peculiar psychological profile.

If sincere belief in a morally concerned deity serves as a reliable cooperative signal, it follows that those who explicitly deny the existence of God are inadvertently sending the wrong signal: they are perceived as subversive non-cooperators by the religious.

I’ll close by noting that there are three known ways to reduce anti-atheist prejudice:

  1. Exposure to or reminders of strong institutions that create prosocial norms
  2. Exposure to or reminders of atheists’ prevalence 
  3. The decline of religiosity in a given society

Conclusion

Today, we covered five topics:

  1. Social monitoring: Watched people are nice people.
  2. Supernatural monitoring: God is watching you.
  3. Supernatural punishment: Hell is stronger than heaven.
  4. Facultative prosociality: Religion is more in the situation than in the person.
  5. In Gods We Trust: Trust people who trust in God.

To be clear, supernatural monitoring is not the only source of prosociality in religion. Self-control (McCullough & Willoughby 2009) is an extremely important factor, for example. The supernatural punishment hypothesis merely posits that this intuitive mechanism is a) specific to believers, and b) intuitively held & deeply powerful. 

Until next time.

References

Social monitoring

  • Bateson et al (2006). Cues of being watched enhance cooperation in a real-world setting.
  • Hoffman et al (1994). Preferences, property rights, and anonymity in bargaining games.
  • Zhong et al (2010). A good lamp is the best police: Darkness increases dishonesty and self-interested behavior.

Supernatural Monitoring

  • Atkinson & Bourrat (2011). Beliefs about God, the afterlife and morality support the role of supernatural policing in human cooperation.
  • Gervais & Norenzayan (2012). Like a camera in the sky? Thinking about God increases public self-awareness and socially desirable responding
  • Piazza et al (2011). “Princess Alice is watching you”: Children’s belief in an invisible person inhibits cheating.
  • Shariff et al (2015). Religious Priming: A meta-analysis with a focus on religious prosociality.

Situational Prosociality & Sunday Effect

  • Darley & Batson (1973). “From Jerusalem to Jericho”: A study of situational and dispositional variables in helping behavior
  • Edelman (2009). Red light states: who buys online adult entertainment?
  • Henrich (2020). The WEIRDest People in the World: How the West Became Psychologically Peculiar and Particularly Prosperous
  • Malhotra (2010). (When) are religious people nicer? Religious salience and the “Sunday effect” on prosocial behavior.
  • Shariff (2015). Does religion increase moral behavior? 
  • Smith et al (1975). Faith Without Works: Jesus People, Resistance to Temptation, and Altruism 
  • Duhaime (2015). Is the call to prayer a call to cooperate? A field experiment on the impact of religious salience on prosocial behavior

Hell is Stronger than Heaven

  • DeBono et al (2017). Forgive us our trespasses: Priming a forgiving (but not a punishing) god increases unethical behavior
  • Shariff & Rhemtulla (2012). Divergent effects of beliefs in heaven and hell on national crime rates.
  • Shariff & Aknin (2014). The Emotional Toll of Hell: Cross-National and Experimental Evidence for the Negative Well-Being Effects of Hell Beliefs
  • Shariff & Norenzayan (2011). Mean gods make good people: Different views of god predict cheating behavior
  • Yilmaz & Bahcekapili (2016). Supernatural and secular monitors promote human cooperation only if they remind of punishment

Trust & Anti-Atheist Prejudice

  • Sosis (2005).  Does Religion Promote Trust? The Role of Signaling, Reputation, and Punishment
  • Shigaki et al (2012). Referring to the social performance promotes cooperation in spatial prisoner’s dilemma games
  • Tan & Vogel (2008). Religion and Trust: An Experimental Study

Other

  • McCullough & Willoughby (2009). Religion, Self-Regulation, and Self-Control: Associations, Explanations, and Implications

The cognitive basis of theism

Part Of: Religion sequence
Related To: The Logic of Mindreading
Content Summary: 3000 words, 30 min read

Religion as Natural

Most human beings in recorded history have participated in religion.

There is no consensus definition of religion; but we can still put our finger on a family resemblance. Theistic religions tend to…

  1. Include a belief in spiritual beings, such as ghosts, angels, ancestor spirits, and so on. These entities often have mental lives, but no physical form. 
  2. Posit an afterlife.
  3. Affirm the purposeful creation of the universe, including humans and other animals. 

You are not going to find a place, anywhere, where such notions feel absurd. Yet despite this ubiquity and family resemblance, the content of religious traditions varies immensely across cultures and across history.

Perhaps religion is similar to language. All societies have at least one language; all societies have at least one religion. Also like language, religion is not present at birth. It develops instead through immersion in a social environment. The specific language or religion that a child develops is determined by the culture in which the child is raised, not by genes or the physical environment. But there are universals of language. Every language has words and sentences, as well as principles of phonology, morphology, and syntax. And as we saw above, there are also universals of religion. 

In sum, the ubiquity and family resemblance of religious traditions point to a shared cognitive basis, but theodiversity reminds us these intuitions support constructs subject to cultural evolution.

Today, we will be exploring the cognitive basis of religion. We will learn that, unlike language (a unified faculty that directly promotes biological fitness), religious instincts are byproducts of our social machinery.

Mind perception enables God perception

There is a cartoon depicting an alarmed husband telling his visibly upset wife, “Of course I care about how you imagined I thought you perceived I wanted you to feel.” While most social interactions are fortunately not as convoluted, social life does require mindreading. You can think of the mindreading faculty as a little machine in your brain, which guesses at the beliefs and desires of others.

Across cultures, supernatural agents are described as having minds. Contrary to some theological doctrines that cast God as an abstract universal force, Ground of Being, or the totality of everything, God has a remarkably human-like mind in the natural religion of the faithful. God can get angry when His followers do not act in accordance with His desires. Ancestor spirits will be pleased when they receive the proper signals of fealty and allegiance. Zeus has plans for humans that often upset his wife Hera.

Representing the mind of a being that your sense organs don’t perceive may require particularly potent mindreading abilities. Americans, Czechs, and Slovaks with better mindreading abilities are more likely than others to believe in God (Willard et al 2020; but see Maij et al 2017).

Women are more religious than men. Ever wonder why? Well, women are on average better at mentalizing and empathy. Once we adjust for this difference in mentalizing, women and men don’t differ in their belief in supernatural agents (Norenzayan et al, 2012).

If religiosity relies on mindreading, we might expect variation in its circuitry to affect religiosity. And indeed, autistic individuals are on average much less religious (Norenzayan et al, 2012).

If religiosity relies on mindreading, we might expect theological reasoning to emerge at the same life stages as ordinary social reasoning. Three-year-olds’ reasoning falls prey to a reality bias: in verbal tasks, these children fail to distinguish between the state of reality and people’s (sometimes inaccurate) mental states. By about four and a half, children become capable of reasoning about the false beliefs of other agents.

Do children intuitively treat supernatural agents as earthly? Lane et al (2010) answers in the affirmative. As children begin to appreciate human beliefs as fallible, they initially attribute similar limitations to all agents, including agents with exceptional powers (in the experiment, God and Mr. Smart). Only later do children learn to overcome their intuitions and affirm depictions of these “super-agents”.

Adults show this same penchant for viewing God as having an essentially human mind. Barrett and Keil (1996) show that even adults, reading a story of God in transcendent terms, think of God as having human-like mental limitations. 

If religiosity relies on mindreading, we might expect thinking about, and praying to, God to activate brain regions associated with mindreading. That is precisely what was found in e.g. Schjoedt et al (2009).

Attachment to God

For obvious evolutionary reasons, babies don’t just care about their physical needs, but also the relationship with their caregivers. The attachment system is responsible for promoting these crucial relationships. The Strange Situation task nicely illustrates how it works:

An infant and its caregiver are taken to a room full of toys. A stranger enters the room and interacts with the caregiver. The caregiver then leaves the room for a few minutes. Finally, the caregiver returns. The infant’s behavior is carefully recorded throughout.

How would you expect children to respond? It turns out that responses fell into three categories:

  1. Some children did not acknowledge the returning caregiver, seemingly only interested in play. 23% of all children exhibited this avoidant attachment.
  2. Some children clung to the caregiver, refusing to let go & resume play for some time. 15% of all children exhibited this anxious attachment.
  3. Some children came to hug the caregiver, and then peacefully resumed play. 62% of all children exhibited this secure attachment. 

These attachment styles have been shown to correlate with the emotional availability of the caregivers. A parental style of consistent care promotes security; inconsistent care promotes anxiety; consistent lack of care promotes avoidance. 

The attachment system doesn’t just facilitate parent-child relationships. It is also the substrate of romantic love, and of friendships of all kinds. Which leads to a question: do believers use their attachment system to relate to God?

Granqvist et al (2010) adduces evidence for the affirmative. Consider this quote from Mother Teresa:

Since age 49 or 50 this terrible sense of loss—this untold darkness—this loneliness, this continual longing for God—which gives me that pain deep down in my heart—Darkness is such that I really do not see . . .—the place of God in my soul is blank—There is no God in me—when the pain of longing is so great—I just long & long for God—and then it is that I feel—He does not want me—He is not there—. . . God does not want me—sometimes I just hear my own heart cry out—“My God” and nothing else comes. 

It seems at least some believers experience relational attachment to gods, ancestors, etc. Just as people seek out other human minds in times of loneliness and stress, these are the times they are uniquely likely to affirm their religious beliefs:

  1. Many unpredictable negative events (including illness, injury, fatigue, frightening events, and separation from loved ones) activate the attachment system and trigger a search for minds. These same situations cause people to turn to God (Gray & Wegner, 2010).
  2. Subtly primed religious concepts buffer religious participants against the negative consequences of laboratory induced social isolation (Aydin et al 2010).  

Supernatural attachment is a bit unusual, however. Generally speaking, egocentrism describes the phenomenon whereby, when we don’t know much about other people’s views, we attribute our own values & beliefs to them. Epley et al (2009) show that this phenomenon occurs even more strongly in people’s relationships with gods.

Dualism, Disembodied Minds, and the Afterlife

Mind-body dualism represents the notion that minds and bodies are completely separate substances. Western thinkers often associate dualism with Rene Descartes. But cross-cultural research suggests that adult intuitions about disembodied minds are strikingly similar across societies (Cohen et al., 2011; Roazzi et al, 2013). Recent work has even found dualist thinking in ancient Chinese texts (Slingerland & Chudek, 2011). 

Further, children’s belief in the afterlife (at least in Western cultures) gets weaker with age, not stronger (Bering, 2006). While enculturation often does weaken these intuitions, mind-body dualism re-emerges in adults under cognitive load (Forstmann & Burgmer, 2015).

Dualistic intuitions are a human universal. But why? Why should movies like Freaky Friday feel intuitive?

Bloom (2007) posits the independent system hypothesis. On this view, our dualism is a natural by-product of the fact that we have two distinct cognitive systems, one for dealing with material objects, the other for social entities. These systems can operate independently, and have incommensurable outputs. Hence dualism emerges as an evolutionary accident.

  • For example, Kuhlmeier et al (2004) tested 5-month-olds’ ability to reason about the law of continuous motion as it applies to the human body. For inanimate objects, infants are surprised (i.e., look longer) when the object disappears from behind one barrier and then seems to reemerge from a separate barrier. Not so for humans! This suggests that babies interpret humans with radically different assumptions than they do other objects.
  • Indeed, our ability to imagine a disembodied mind often emerges early in childhood. Taylor’s (1999) research on children’s propensity to maintain social relationships with imaginary friends suggests that by age 3 to 4 years, children are already equipped to sustain vivid mental representations of the wants, opinions, actions, and personalities of such an agent. Of course, children do understand their imaginary friends as fictive (not so for adult beliefs about God). 

Most social interactions require physical perception and mindreading processes to execute in parallel. But mindreading supports offline processing; that is, we can think of others not in the room. On the independent systems hypothesis, offline processing makes sense – only one system is operational.

Sigmund Freud once said,

Our own death is indeed unimaginable and whenever we make an attempt to imagine it we can perceive that we really survive as spectators. 

The consensus view of mindreading is that it relies, at least in some ways, on simulation to attribute mental states to others (hence, the egocentric bias). This mechanism is uniquely poorly equipped to contemplate death. In line with this simulation constraint hypothesis, Bering (2006) found that most undergraduate students, even those who claimed to believe that consciousness stops at death, nevertheless stated that a dead person knew he was dead.

The most dramatic demonstration of dualism concerns the development of afterlife beliefs. Bering and Bjorklund (2004) told children of different ages stories about a mouse that died, and asked about the persistence of certain properties. When asked about the biological properties of the mouse, the children appreciated the effects of death, including that the brain no longer worked. But when asked about its psychological properties, most of the children said these would continue: the dead mouse can still think thoughts and hold desires. The body was gone, but the soul survived.

Death is thus processed biologically, but it does not terminate our social inference systems. This fact will be familiar to the bereaved.

Thus, the afterlife and immaterial agents are innately-endowed intuitions. But these disembodied minds are felt to have all-too-human mental lives. 

Mythology and MCI Theory

Innate dualism is explained by the independent system hypothesis. This in turn relates to the disjoint core knowledge hypothesis: there seem to be distinct ontologies built into the nervous system: objects vs organisms vs minds. Each ontology is associated with a distinct set of modules and the intuitions they generate. Most social interactions engage both biological and psychological modules; intuitions about disembodied minds express themselves when only the latter modules are active.

The disjoint core knowledge hypothesis allows us to better understand mythology. Artifacts of Paleolithic religion contain counterintuitive entities: stuff that blurs ontological boundaries.

Folklore like Grimm’s Fairy Tales is also rife with such counterintuitive entities (e.g., a talking wolf).

Too many counterintuitive entities produce a confusing story. But why have them at all? The simple answer is memorability: intuitive concepts are easy to grasp, but are quickly forgotten; a small number of counterintuitive entities increases the salience (and the memorability!) of the myth.

The disjoint core knowledge hypothesis explains why so many of our myths are seasoned with minimally counterintuitive (MCI) ingredients.

Two Teleological Instincts

American 4- and 5-year-olds differ from adults by finding the question “what’s this for?” appropriate not only to artifacts and body parts, but also to whole living things like lions (“to go in the zoo”) and nonliving phenomena like clouds (“for raining”). Kelemen (2004) shows how, around this age, children adopt a design-based teleological view of objects with increasing consistency. 

This trend seems to be innate rather than cultural. A study of the responses young children receive when asking questions about nature indicates that parents generally favor causal rather than teleological explanations (Kelemen et al 2002). While physical explanations are culturally favored, teleological thinking re-emerges in adults under cognitive load (Kelemen et al, 2009).

This form of object teleology (“what’s it for?”) is theorized to be a byproduct of artifact cognition. Great apes are toolmakers. Part of our ability to understand such artifacts is the capacity to reverse-engineer their purpose: to infer specific goals and motivations of their creators.

In contrast with object teleology, autobiographical teleology (“purpose in life”) ascribes purpose to significant life events. We know that counterfactual reasoning is used to construct a sense of purpose (Kray et al, 2010). The stories we tell ourselves about these events coalesce into our narrative identity during late adolescence, which exerts a strong influence on life satisfaction (McAdams & McLean, 2013). There is a correlation between autobiographical teleology and religiosity, but the nature of this relationship is unclear to me.

Anthropomorphism generates paranormal beliefs

In 1976, NASA’s Viking 1 was orbiting Mars, surveying the surface for possible landing sites. One of its pictures, taken of the Cydonia region, appeared to show a giant humanoid face staring back at the camera (the famous “Face on Mars”).

Anthropomorphism, the penchant for hallucinating agents in natural environments, was recognized by David Hume as a pervasive human weakness:

There is an universal tendency among mankind to conceive all beings like themselves, and to transfer to every object, those qualities, with which they are familiarly acquainted, and of which they are intimately conscious. We find human faces in the moon, armies in the clouds; and by a natural propensity, if not corrected by experience and reflection, ascribe malice or goodwill to everything, that hurts or pleases us.

Justin Barrett dubbed this instinct the hyperactive agency detection device (HADD); Daniel Dennett has since made much of it. But Willard et al (2020) found anthropomorphism had zero effect on belief in God. Indeed, the two were weakly negatively correlated, presumably because of the Abrahamic faiths’ emphasis on aniconism. However, a person’s anthropomorphic bias does predict that individual’s beliefs about the paranormal (ghosts, aliens, etc).

The Four Pillars of Religiosity

Mindreading plays a central role in religious instincts. Its influence is largely mediated by more specific intuitions about mind-body dualism, promiscuous teleology, and anthropomorphism. The specific relationships between these factors have been mapped (Willard & Norenzayan, 2013).

Dual-process theory posits that human cognition can be understood in terms of implicit intuition vs explicit reflection. Since religion is rooted in social intuitions, we might predict that people who rely more heavily on their intuitions are more religious. And that is precisely what we find.

This post locates the cognitive basis of religion in three mindreading-based intuitions, together with the penchant to take such intuitions at face value. But as we will see later, there are at least two other drivers of religiosity. First, religion is promoted by cultural transmission via CRedibility Enhancing Displays (CREDs). Second, religion tends to flourish under conditions of existential insecurity, as illustrated by the proverb “there are few atheists in foxholes”.

These four elements promote religious belief. If any of them are missing, weakened, or overridden, individuals will tend towards atheism (Norenzayan & Gervais, 2012).

But these factors are not created equal. The sizes of their contributions have also been extensively researched. While the cognitive factors discussed today inarguably matter, cultural factors have more clout. It is to these factors we will turn next time.

Until then!

References

Attachment

  1. Aydin et al (2010). Turning to god in the face of ostracism: Effects of social exclusion on religiousness
  2. Granqvist et al (2010). Religion as Attachment: Normative Processes and Individual Differences
  3. Gray & Wegner (2010). Blaming God for Our Pain: Human Suffering and the Divine Mind

Mentalizing

  1. Barrett & Keil (1996). Conceptualizing a Nonnatural Entity: Anthropomorphism in God Concepts
  2. Gervais (2013). Perceiving Minds and Gods: How Mind Perception Enables, Constrains, and Is Triggered by Belief in God
  3. Maij et al (2017). Mentalizing skills do not differentiate believers from non-believers, but credibility enhancing displays do 
  4. Norenzayan et al (2012). Mentalizing deficits constrain belief in a personal God
  5. Schjoedt et al (2009). Highly religious participants recruit areas of social cognition in personal prayer 
  6. Taylor (1999). Imaginary companions and the children who create them
  7. Willard & Norenzayan (2013). Cognitive biases explain religious belief, paranormal belief, and belief in life’s purpose
  8. Willard et al (2020). Cognitive Biases and Religious Belief: A Path Model Replication in the Czech Republic and Slovakia With a Focus on Anthropomorphism

Dualism

  1. Bering (2006). The folk psychology of souls 
  2. Bering and Bjorklund (2004) The natural emergence of reasoning about the afterlife as a developmental regularity.
  3. Bloom (2007). Religion is natural. 
  4. Carruthers (2020). How Mindreading Might Mislead Cognitive Science
  5. Cohen et al (2011). Cross-Cultural Similarities and Differences in Person-Body Reasoning: Experimental Evidence From the United Kingdom and Brazilian Amazon 
  6. Forstmann & Burgmer (2015). Adults Are Intuitive Mind-Body Dualists
  7. Kuhlmeier et al (2004). Do 5-month-old infants see humans as material objects? 
  8. Lane et al (2010). Children’s Understanding of Ordinary and Extraordinary Minds. 
  9. Roazzi et al (2013). Mind, Soul and Spirit: Conceptions of Immaterial Identity in Different Cultures 
  10. Slingerland & Chudek (2011). The Prevalence of Mind–Body Dualism in Early China 

MCI Theory

  1. Norenzayan et al (2006). Memory and Mystery: The Cultural Selection of Minimally Counterintuitive Narratives
  2. Lindeman et al (2015). Ontological confusions but not mentalizing abilities predict religious belief, paranormal belief, and belief in supernatural purpose

Teleology

  1. Kelemen et al (2002). Why things happen: teleological explanation in parent-child conversations. 
  2. Kelemen (2004). Are Children ‘‘Intuitive Theists’’? Reasoning About Purpose and Design in Nature 
  3. Kelemen et al (2009) The human function compunction: Teleological explanation in adults. 
  4. Kray et al (2010). From what might have been to what must have been: Counterfactual thinking creates meaning.
  5. McAdams & McLean (2013). Narrative Identity.

Other

  1. Norenzayan & Gervais (2012). The origins of religious disbelief
  2. Epley et al (2009). Believers’ estimates of God’s beliefs are more egocentric than estimates of other people’s beliefs 
  3. Riekki et al (2013). Conceptions about the mind-body problem and their relations to afterlife beliefs, paranormal beliefs, religiosity, and ontological confusions

Kinship explains the I/C Dimension

Part Of: Culture sequence
Followup To: Individualism vs Collectivism
Content Summary: 3000 words, 30 min read

Forager vs Farmer

Homo erectus was a forager. We know this from the fossils: our anatomy diverged from the apes precisely for those features that most support running and hunting (longer legs, a foot arch, a large gluteus maximus, etc). Our lineage began hunting & gathering some 1,800,000 years ago.

9,000 years ago we switched to farming. Why then? Climate measurements from ice cores shed light on the answer. During the Pleistocene, the climate was extremely variable, with dramatic ecological shifts about every century. In fact, several periods of intense change have been linked to speciation events within the hominin line. 

But the same Milankovitch cycles later introduced periods of climate stability: 12-11 kya, and 9-0 kya. The first stable period correlates with our species’ first attempt at proto-farming: the Natufian period. Variability returned for another two millennia, until the Neolithic revolution led to the true advent of agriculture.

Why farm? The less nutritious diets of farmers left them shorter, sicker, and more likely to die young. The surplus also generated tremendous inequality, which induced severe warfare between farming communities, whose impact is visible today in the Y-chromosome. 

But farmers did reproduce more quickly than hunter-gatherers. With the “right” set of institutions, farmers could spread across the landscape like an epidemic, driving out foragers in their path. Early farming spread not because it was a better lifestyle, but because farming communities with particular institutions beat mobile hunter-gatherer populations in intergroup competition.

Tight vs Loose Kinship

Ever since the Neolithic Revolution, human beings have been given in marriage to their cousins & in-laws. The small cost of genetic disorders was more than compensated for by the social benefits of marriage alliances, creating a “Goldilocks Zone” of optimal mates at intermediate levels of relatedness (second cousins).
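For calibration, the coefficient of relatedness falls off geometrically with genealogical distance; the “Goldilocks” second cousin shares only about 3% of genes by recent common descent. A quick sketch of the standard arithmetic (the function name is mine):

```python
def relatedness(meioses_per_path: int, shared_ancestors: int = 2) -> float:
    # r = (number of shared ancestors) x 0.5 ^ (meiotic links per path)
    return shared_ancestors * 0.5 ** meioses_per_path

for kin, links in [("siblings", 2), ("first cousins", 4), ("second cousins", 6)]:
    print(f"{kin}: r = {relatedness(links):.5f}")  # 0.5, 0.125, 0.03125
```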

These tight (intensive) kinship systems emphasize relatedness-increasing social norms of kin marriages, polygyny, endogamy, and lineal fissions because these behaviors create strongly overlapping networks of kin that often co-reside in the same community. Intensive systems often include marriage alliances between lineages leading to cross-cousin marriages and converging networks of kin (Levi-Strauss, 1949).

In contrast, loose (extensive) kinship lacks most, if not all, of the above relatedness-increasing social norms (Yellen and Harpending, 1972). Marrying unrelated or distantly related individuals increases the total possible number of kin by including a wider range of individuals, forming a more diverse kinship network. Extensive kinship systems include more geographically-distant marriages in other communities (Fix, 1999), forming alliances with a large number of affines in a diffuse kinship network (Bugos, 1985).

In contrast to agricultural societies, foraging societies tend to employ loose kinship.

Kinship systems are solutions to economic problems. Loose kinship is useful in unpredictable environments and for nomadic populations given that it provides a plethora of residential options. Hunter-gatherers that exploit a diversity of unevenly distributed food resources may need to hedge their bets by having kin in different places in times of need (Yellen and Harpending, 1972). In contrast, tight kinship reduces the dilution of inheritable family wealth, and may help with kin-based resource defense of such wealth, which is more important for many agropastoral societies as opposed to hunter-gatherer societies (Borgerhoff Mulder et al., 2009). 

Each kinship system brings a suite of correlates: residence patterns, descent rules, inheritance, and marriage norms. We explore several of these below.

Clans and Patriarchy

All pre-modern agricultural societies exhibited tight kinship. To motivate this claim, I want to explain the process by which states evolved.

In all societies, genealogy is the same. But genealogical concepts differ, drawing attention to certain aspects of your family tree, while eliding others. Societies either trace lineages through one parent (unilineal) or both (bilineal). Agriculturalists overwhelmingly use unilineal descent. Why? 

  • The agricultural lifestyle requires sedentism, which fosters community stability. This in turn promotes more elaborate kinship structures, and unilocality.
  • The agricultural surplus incentivizes resource competition (warfare). In warfare, it is advantageous to construct raiding coalitions with few conflicts of interest. In unilineal systems, every person belongs to just one kinship group. 

There are two forms of unilineal descent: through the mother (matrilineal) or the father (patrilineal). Patrilineal inheritance is reflected in e.g., traditional European practices of the wife taking on the husband’s last name. 

Of these two options, patrilineal social systems are by far the most common. Many explanations for matrilineality have been proposed; one recent proposal suggests matrilineal systems helped build cross-cutting male sodalities for external warfare across meta-ethnic frontiers (Jones 2011). In contrast, Ember et al (1974) found that internal warfare helps predict patrilineality (it is easier to defend property when all sons live under the same roof) and contiguous lineages (best to keep extended family nearby to help).

Patrilineal social systems are known as clans.

Incest aversion is an innate biological response that avoids genetic abnormalities. Known as the Westermarck effect, it is a form of reverse sexual imprinting. In contrast, incest taboos are human-specific norms instilled by culture, which extend the range of incest aversion. Nearly all clans have incest taboos forbidding within-clan marriage, for at least two reasons. First, this suppresses sexual competition among men of the same clan and focuses their mating efforts outward, on nearby clans. Second, such exogamy builds alliances with other clans (Walter 2000).

Arranged marriages empower patriarchs to strategically use their daughters’ marriages to nourish their clan’s network of alliances. These alliances are reinforced by norms such as levirate marriage (e.g., Deut 25:5-10), which specifies that when a husband dies, his surviving wife must marry one of his brothers. This sustains the marital alliance between clans.

Agriculture won, clans won, consanguineous marriage won. This is why all major world religions (with the exception of the Roman Catholic Church, for reasons we’ll explore next time) are permissive of these intensive kinship strategies including cousin marriage (and, in the Hebrew Bible, also affinal and uncle-niece marriages).

Scaling Up: The Evolution of Statecraft

Agricultural groups compete for resources (surplus). Since group size is a critical determinant of success in intergroup competition, this placed tremendous pressure on groups to scale up. But scaling up is hard. In the Sepik region of New Guinea, anthropologists have long noted that villages rarely exceed 300 individuals (3-5 clans, each usually consisting of several related lineages). If a village strove to grow much larger than this, tensions would erupt and the village would split, typically along clan lines. Within-clan cooperation is typically not too difficult (aided by the evolutionary logic of Hamilton’s rule).
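For reference, Hamilton’s rule states that a costly cooperative act is favored by selection when

$$ rB > C $$

where $r$ is the genetic relatedness between actor and recipient, $B$ the fitness benefit to the recipient, and $C$ the cost to the actor. Within a clan, $r$ is high (see the relatedness sketch above), so costly cooperation clears this bar far more easily than it does between strangers.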

However, the relentless logic of cultural group evolution did eventually find two ways to scale up. 

First, age-sets use rituals to better integrate clans. Psychologically potent initiation rituals (rites of terror; roughly, “hazing on steroids”) forge deep bonds between cohorts of males from different kin-groups. The phenomenology of these rituals is paralleled by the comradery experienced in the military (band of brothers). After an initiation, norms specify that this cohort is endowed with a new set of privileges, responsibilities, and obligations. Age-sets often work, play, and feast together as a unit. Failure to meet the cohort’s collective obligations could delay its next ritual promotion, which could preclude the entire cohort from marriage, among other privileges.

Second, segmented lineages use lineage myths to bind clans. Norms demand that more closely related clans, who usually control adjacent territories, ally themselves against more distantly related segments. If there is a conflict between brothers, it will be settled by all the brothers, and cousins will not take sides. If the conflict is between cousins, brothers on one side will align against brothers on the other side. However, if the conflict is between a member of a tribe and a non-member, the entire tribe, including distant cousins, could mobilize against the outsider and his or her allies. That tiered mobilization is traditionally expressed, for example, in the Bedouin saying:

Me and my brothers against my cousins, me and my cousins against the world.
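The mobilization rule itself is almost algorithmic: each bystander sides with whichever disputant shares the more recent common ancestor, and stays neutral on ties. A toy sketch (the names and the lineage tree are entirely hypothetical):

```python
# Each person maps to their chain of patrilineal ancestors, nearest first.
ancestors = {
    "Ali":   ["Ali", "Hasan", "Omar", "Founder"],
    "Badr":  ["Badr", "Hasan", "Omar", "Founder"],   # Ali's brother
    "Karim": ["Karim", "Yusuf", "Omar", "Founder"],  # their cousin
    "Zaid":  ["Zaid", "Tariq", "Salim", "Founder"],  # a distant clansman
}

def kinship_distance(a: str, b: str) -> int:
    # Walk up a's line until it meets b's line of ancestry
    for depth, anc in enumerate(ancestors[a]):
        if anc in ancestors[b]:
            return depth + ancestors[b].index(anc)
    raise ValueError("no common ancestor")

def side_taken(bystander: str, one: str, other: str) -> str:
    d1, d2 = kinship_distance(bystander, one), kinship_distance(bystander, other)
    if d1 == d2:
        return "neutral"  # equally related: stay out of it
    return one if d1 < d2 else other

print(side_taken("Karim", "Ali", "Badr"))   # brothers quarrel -> their cousin stays "neutral"
print(side_taken("Badr", "Karim", "Zaid"))  # cousin vs outsider -> backs "Karim"
```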

I’ll let Henrich (2020) describe the implications:

This descent-based institution is built around personal and corporate honor. A man’s safety, security, and status—and his family’s—are linked to his reputation. Acts of dishonor can dissolve the reputational shield that protects his property and family from thieves or avengers, and they can reduce his children’s marital prospects and affect the reputation of his entire clan. Hence, relatives closely monitor one another (out of self-interest) and will punish each other in order to restore the honor of their family or clan. Supporting one’s lineage allies is central to each man’s honor. One unfortunate consequence of this is that any particularly aggressive clan could drag the entire maximal lineage into an enduring conflict.

Even today, in a world dominated by territorial states, the impact of segmentary lineages can still be felt. In 21st-century Africa, tribal populations with segmentary lineages still experience significantly higher rates of violence and civil war than populations without these institutions. Many familiar cases of chronic conflict in Africa are associated with populations organized by segmentary lineages, for example the long-running conflict between the Dinka and the Nuer. On the other side of the world, the echoes of the culture of honor that was part of Scotland’s segmentary lineages still affect life and death: in counties of the U.S. South, the higher the percentage of Scottish or Scotch-Irish residents in the first U.S. census in 1790, the higher the murder rate is today. The cultural descendants of these migrants still tend to respond aggressively, guided by the honor psychology fostered in segmentary lineages. Boko Haram, Al Shabab, and Al Qaeda, for example, all recruit heavily from populations with segmentary lineages.

Age-sets and segmented lineages allow groups to scale from ~300 to ~3,000; but their success is constrained because they lack stable, hierarchical authorities.

Chiefdoms are formed when one clan gains ascendancy over sister clans. In the egalitarian multi-clan groups described above, each clan is typically in charge of conducting certain rituals. Ritual ownership often corresponds to real power; e.g., a clan that owns a fishing ritual can forbid other clans from fishing. Rights to perform these rituals are often grounded in claims about ancestral gods, and they can be challenged. Ritual takeovers are one way clans compete within a multi-clan system; given the lack of written language, changes become normalized within a few generations, and then the process may begin anew. In this way, certain clans can emerge as clearly dominant. It is not so much the chief that rules as the clan (from whose ranks chiefs are elected).

Recall that arranged marriages serve as an incredibly important social glue; the resultant alliances bind clans together and prevent fission. Stratified chiefdoms didn’t emerge until the upper strata (dominant clan) stopped intermarrying with the lower strata. This isolated the upper strata and allowed them to claim that they were fundamentally different from the lower strata: truly divine, superior, and deserving.

Stratified chiefdoms can evolve into premodern states (kingdoms) as the dominant clan injects new bureaucracies between themselves and the lower strata. These institutions collect taxes, conduct long-distance trade, orchestrate public rites, and marshal armies. They also adjudicate disputes between clans; but premodern states usually left it to the clans to police their own internal affairs, including theft, assault, and even murder. 

Intergroup competition selects for social institutions that scale up. Its relentless Darwinian logic caused societies to scale from 300-person villages to, in the case of the Achaemenid Empire, some 30 million people. But once competition wanes, which often happens when states eliminate their competition, things slowly fall apart. Without the looming threats posed by competing societies, the competition among ruling families within a society will intensify and gradually tear the state-level institutions apart.

Here, then, is how states evolved from clans: multi-clan villages were integrated by age-sets and segmentary lineages; chiefdoms emerged when one clan gained ascendancy; stratified chiefdoms emerged when the strata stopped intermarrying; and premodern states emerged when dominant clans built bureaucracies beneath themselves.

From Henrich (2020):

At the dawn of agriculture, all societies were built on institutions rooted in family ties, ritual bonds, and enduring interpersonal relationships. New institutional forms were always built on these ancient foundations by variously augmenting, extending, or reinforcing the inherited forms. Social norms related to family, marriage, ritual, and interpersonal relationships became more complex and intensive as societies began to scale up. Crucially, these institutions were always built atop a deep foundation of tight kinship. The fact that people couldn’t simply wipe away their ancient kin-based institutions when building institutions creates what researchers call a strong path-dependence. That is, given that new forms always build on older forms, and these older forms are anchored in our evolved primate psychology, there are a limited number of pathways along which these new institutions can develop. 

Individualism 2.0

Agriculture wins, so we would expect the entire world to employ tight kinship systems. Right?

For most of the period from 9,000 to 1,000 years ago, a world map of kinship systems would indeed have been uniformly tight. However, loose kinship recently made a resurgence in Western (European or European-descended) societies. We’ll explore why this happened next time.

The civilized world is no longer exclusively dominated by tight kinship. Premodern kin-based states compete with modern impersonal states. 

Kinship Drives the I/C dimension

Last time, we learned about individualism vs collectivism, i.e. the I/C dimension. We adduced lots of data suggesting that cognitive style, social orientation, and moral posture co-vary based on how relationally mobile a society is. 

But why? Why are there systematic differences in relational mobility across nations? To answer this, we turn to our conception of tight vs loose kinship. If you are a male growing up in a clan (with intensive kinship norms), you and your wife will live your entire lives in your father’s house. You will rely on your kin-group for affiliation, for sustenance, for defense. You will not choose your friends or your spouse. In clans, relational rigidity is the norm. We should thus expect tight kinship societies to score highly on the myriad dimensions that comprise the I/C dimension.

And so it is. Cousin marriage, and the kinship intensity index more broadly, are enormously predictive of holistic cognition and other-oriented sociality.

You can see how kinship impacts moral systems directly by investigating rates of blood donation. Tight kinship societies don’t donate blood nearly as frequently: why care for strangers (people outside your social world) when you can instead invest in the relationships you will keep your entire life? Similarly, they are more likely to trust their in-group than impersonal financial institutions like banks; cousin marriage is enormously predictive of reluctance to use checks.

We previously saw how individualistic societies have a dispositional psychology, which features intent as a morally relevant factor. Curtin et al (2020) have shown that kinship intensity explains much of this variance: in low-intent scenarios, looser kinship predicts less severe moral judgment.

More generally, and across a suite of features, Schulz et al (2019) show that consanguinity rates explain a dramatic proportion of moral variance, both within and between countries.

In sum, kinship is a very strong determinant of relational mobility, and hence explains most cross-cultural variance along the I/C dimension.

The Role of Pathogens

You may be wondering why sanctity intuitions rely so heavily on disgust, rather than other self-perceptions like dizziness. 

As noted by Schaller & Park (2011), the immune system is a necessary but insufficient protection against disease. The visceral immune system is energetically costly (a 13% increase in metabolic expenditure is required to raise human body temperature by just 1 °C), and temporarily debilitating (the syndrome is known as sickness behavior). Surely, in addition to this reactive system, animals co-evolved proactive defenses against disease: a behavioral immune system.

Disgust evolved to promote pathogen avoidance, and thus contributes to the behavioral immune system. But people sometimes represent others as contagion risks even when those others show no signs of disease. This is especially true of unfamiliar people travelling from distant groups with different immunological memory. Is xenophobia related to the behavioral immune system?

The answer appears to be yes. When primed to think about disease, people became much less tolerant of immigration from culturally-dissimilar countries (e.g., in US, immigrants from majority-Muslim countries). 

But the role of diseases extends beyond facultative dispositions. Consistent exposure to disease has been shown in Enke (2019) to increase kinship tightness in societies, along with its characteristic collectivism & ethnocentrism.

Thus, tight kinship is not only an economic response to resource defense, as we saw with agricultural surplus. It also serves to reduce disease transmission. 

The Role of Rice Paddies

We have already discussed how agriculture tends to produce surplus, the defense of which strongly incentivizes clan formation. But the specific kind of agriculture matters too! Wheat farming can be conducted by single families on small plots of land. Rice paddy farming, however, due to the economic characteristics of that crop, requires multiple families to coordinate. This particular cereal, due to the social structures necessary to farm it, provides an even stronger nudge towards allocentric psychology than wheat (Talhelm et al 2014).

Together with the discussion of pathogens, this suggests that ecological and economic factors are important antecedents of kinship structure. Ecology doesn’t determine kinship, but it does bias it.

Conclusion

We have seen that human societies tend to organize themselves using tight or loose kinship. Each kinship system creates its own social world, which in turn incentivizes its own suite of psychological adaptations (cognitive, social, and moral).

One of the virtues of this anthropological theory is that it explains the individualism/collectivism dimension as a function of kinship, and explains why so many moral, social, and cognitive factors covary along this single continuum. 

References

  1. Barrett et al (2016). Small scale societies exhibit fundamental variation in the role of intentions in moral judgment.
  2. Borgerhoff Mulder et al. (2009). Intergenerational Wealth Transmission and the Dynamics of Inequality in Small-Scale Societies
  3. Bowles & Choi (2013). Coevolution of farming and private property during the early Holocene
  4. Bugos (1985). An evolutionary ecological analysis of the social organization of the Ayoreo of the Northern Gran Chaco
  5. Curtin et al (2020). Kinship intensity and the use of mental states in moral judgment across societies
  6. Dohmen et al (2018). Patience and comparative development.
  7. Ember et al (2014). On the Development of Unilineal Descent
  8. Enke (2019). Kinship, Cooperation, and the Evolution of Moral Systems. 
  9. Fix & Shepherd (1999). Migration and colonization in human microevolution.
  10. Helgason et al (2008). An Association Between the Kinship and Fertility of Human Couples
  11. Henrich (2020). The WEIRDest people in the world: how the West became psychologically peculiar and particularly prosperous
  12. Jones (2011). The Matrilocal Tribe: An Organization of Demic Expansion
  13. Korotayev (2004). Unilocal Residence and Unilineal Descent: A Reconsideration
  14. Levi-Strauss (1949). The elementary structures of kinship.
  15. Schaller & Park (2011). The Behavioral Immune System (and Why It Matters)
  16. Talhelm et al (2014). Large-Scale Psychological Differences Within China Explained by Rice Versus Wheat Agriculture
  17. Walker & Bailey (2014). Marrying Kin in Small-Scale Societies 
  18. Walter (2000). From Westermarck’s Effect to Fox’s Law: paradox and principle in the relationship between incest taboos and exogamy.
  19. Yellen and Harpending (1972). Hunter‐gatherer populations and archaeological inference

Individualism vs Collectivism

Part Of: Culture sequence
Content Summary: 3000 words, 30 min read

Relational Mobility vs Fixedness

Since the Neolithic Revolution, the vast majority of human beings have spent their days as subsistence farmers. These people do not have many choices about which groups to join, or even whom to marry. One is more or less stuck with one's extended family and a few friends. If a farmer's relationship with these people fails, there is no recourse. Reinventing yourself is not an option; relational fixedness is your reality. Henrich (2020) describes the social worlds that emerge:

Throughout most of human history, people grew up enmeshed in dense family networks that knitted together distant cousins and in-laws. In these regulated-relational worlds, people’s survival, identity, security, marriages, and success depended on the health and prosperity of kin-based networks, which often formed discrete institutions known as clans, lineages, houses, or tribes.

Within these enduring networks, everyone is endowed with an extensive array of inherited obligations, responsibilities, and privileges in relation to others in a dense social web. The social norms that govern these relationships constrain people from shopping widely for new friends, business partners, or spouses. Instead, they channel people’s investments into a distinct and largely inherited in-group. Many kin-based institutions not only influence inheritance and the residence of newly married couples, they also create communal ownership of property (e.g., land is owned by the clan) and shared liability for criminal acts among members (e.g., fathers can be imprisoned for their sons’ crimes).

In contrast, some modern societies allow people to leave toxic relationships and groups, and join others. The new calculus involves not only “how do I gain status”, but also “where should I gain status”. 

Relational mobility is well illustrated by the WaitButWhy concept of the relationship mountain, whose underlying advice is a form of relationship economics: prioritize friendships with advantageous cost/benefit ratios.

But only individualistic people approach relationships in this way. Allocentric people, in contrast, have steeper mountains (acquaintances remain strangers), and less control over who inhabits their slopes. The contrast also scales up from individuals to societies: majority-idiocentric societies are individualist, and majority-allocentric societies are collectivist.

Your social environment plays a tremendous role in shaping your psychological development. Children enculturated in a relationally mobile context learn to listen to their "inner voice", to be analytic, and to adopt a universal morality. Children in relationally fixed worlds, in contrast, pay much more attention to the quality of their relationships, and their cognitive style and moral posture reflect the primacy of the in-group.

Two Social Orientations

In Granite In Every Soul, I sketched an important distinction between social identity and self-concept.

Back then I wrote, “Social change causes dramatic fluctuations within your social identity, but decision making requires consistency. Your brain can resolve this tension by relying more heavily on self-concept.” 

We can prove this! When given the prompt "I am…", socially static people answer in the language of social identity: a father, a husband, etc. But people in Western, Educated, Industrialized, Rich, Democratic (WEIRD) nations answer in the language of personal attributes: kind, a hard worker, etc.

Another example: in traditional parts of Indonesia, people do not use personal names; they instead use teknonyms, which identify a person through their relationships (e.g., "mother of so-and-so"). In other words, the person is treated not as an autonomous individual but as an appendage of the group, in this case the family.

Success in a socially rigid world means conforming to group opinion. The Japanese proverb "the nail that stands out gets pounded down" contrasts with the individualist proverb "be yourself". Indeed, as measured by the Asch conformity test (answering "how long is this line?" while confederates give an obviously incorrect answer), people in collectivist cultures are more likely to conform to the group.

More generally, WEIRD nations express what Hofstede (2003) calls the individualism syndrome. You think it's normal to emphasize autonomy and "individual rights" above harmony? Normal to prioritize individual achievement over one's group? Normal to leave a group if you do not enjoy being a member?

All of these national measurements co-vary: if people in a culture use the language of relationships to describe themselves, that culture will very often conform more too. Hofstede (2003) takes advantage of this fact to summarize all of the above measurements in a single individualism score.

Two Cognitive Patterns

Some cognitive content is fixed, or culturally invariant. Naive theories of mechanics (e.g., Aristotelian physics), naive theories of biology (e.g., essentialism), and naive theories of mind (e.g., dualism) appear so early, and are so widespread, that at least some aspects of them are likely innate and resistant to social modification.

But culture does exert an influence on many other cognitive traits. Given that human beings are social animals, is it really so surprising that our social worlds are projected onto our habits of thought?

People immersed in a socially static world rely on social identities, which are heavily sensitive to the norms embedded within a situation. They tend to express a tacit version of situationist psychology: the situation dictates behavior. In contrast, socially dynamic worlds incentivize the use of self-concept, where you ascribe attributes to yourself. Individualists use a dispositional framework, with significant downstream effects.

The fundamental attribution error, the tendency to explain behavior excessively by someone's intrinsic attributes rather than by the situation involved, was originally suspected to be a human universal. But it turns out that this bias is much weaker in collectivist cultures (Norenzayan et al 2002). You are much less prone to this mistake if you lack a robust concept of personality.

In collectivist cultures, with their emphasis on multifaceted social identity, social behavior tends to be more relationship-specific. In individualist cultures, these very same behaviors are typically condemned as hypocritical and contradictory. Indeed, cognitive dissonance (an aversion to discovering contradiction within yourself) is not a human universal; rather, it is primarily expressed in individualist cultures. This tolerance for relationship-specificity and contradiction is nicely illustrated by this Triandis vignette: "I had a friend from India who told me he was a meat-eating vegetarian. When I asked him how could this be, he replied 'well, I am a vegetarian, but when other people are eating meat, I do too.'"

When idiocentrics see social contradiction, they debate to see who is right. In contrast, allocentrics try to find elements of truth in opposing positions (a dialectical approach). This Middle Way also serves to promote harmony, and preclude relationship collapse.

Confusingly, individualism & collectivism are also associated with non-social effects. Consider the triad categorization task, where you are asked whether a glove is more related to a scarf or to a hand. People in individualist countries resonate with the former (using the analytical category CLOTHING); people in collectivist countries resonate with the latter (using affordance relations: what is done with the glove). This proclivity for parsing the world in terms of objects versus relations may explain why language learners in collectivist nations tend to learn verbs more quickly than nouns, while learners in individualist nations show the reverse pattern.

Allocentric people spend lots of time with the same individuals. Familiarity breeds communicative efficiency: you can say more with fewer words. Thus, collectivist cultures typically feature high-context communication. An example from Triandis (1994):

For example, in Indonesia, a lower-class man and an upper-class woman met secretly and got to the point where they wanted to marry. They informed their parents, and following protocol the man’s mother visited the woman’s mother. The latter served her tea and bananas. Since tea is never served with bananas, that was a “dissonant” stimulus that said “no,” without actually saying the word. Both women saved face. 

In contrast to high-context cultures, low-context cultures pay much less attention to nonverbal cues. Since many more interactions are unfamiliar, everything is spelled out more explicitly.

Surprisingly, this social emphasis on context manifests as differences in extremely low-level perceptual processes. For example, in the rod and frame test, people are asked to vertically align a rod pictured within a rectangular frame. Idiocentric, object-centric people perform noticeably better than context-aware allocentrics; this phenomenon is known as field dependence. Ji et al (2000) also show that allocentrics are much better at detecting covariation when different objects in a picture are simultaneously distorted.

San Martin et al (2019) show that relational mobility predicts these cognitive patterns. Taken together, the I/C dimension is associated with two distinct cognitive patterns: analytic versus holistic cognition.

Why Nonsocial Differences? The Locus of Control Hypothesis

It’s fairly easy to trace a link from relational mobility to differences in social orientation and moral posture. But what possible relationship could there be between relational mobility and field dependence?

Is the relationship a coincidence? We might hypothesize that linguistic, genetic, intellectual-historical, or other unrelated cultural forces drive the analytic vs holistic continuum.

But, per Varnum et al (2010), two streams of evidence suggest the answer is "no". First, social orientation and cognitive style don't just co-vary between the United States and East Asia. Covariation has also been found, for example, among European nations, and within countries: Hokkaido vs mainland Japanese, US working class vs middle class, Orthodox vs secular Jews. Second, when you prime a subject with a social orientation, the corresponding cognitive style is expressed.

So the social ecology we inherit from our parents shapes our cognitive style. But why?

One suggestion is the locus of control hypothesis. It turns out that desire for personal control is lower among East Asians than among Westerners. And when Zhou et al (2012) experimentally induced desire for control (via control deprivation), allocentrics started to think more analytically!

They also found that control deprivation made East Asians …

  1. … shift toward favoring logical arguments rather than dialectical ones.
  2. … categorize by rules rather than relations/similarities.
  3. … more open to and more convinced by arguments that were logically sound but employed unintuitive exemplars.
  4. … predict the future more by linearly extrapolating current trends rather than by expecting reversals toward holistically balanced opposites.

San Martin et al (2019) show that locus of control statistically mediates the relationship between relational mobility and analytic cognition. Loss of control also stimulates approach motivation (Greenaway et al 2015). Approach motivation is more frequently used by idiocentrics (Elliot et al 2001). And approach motivational processes are predominantly left-lateralized (Jonas et al 2014). This may explain why idiocentric psychology relies so heavily on the left hemisphere, and allocentric psychology on the right (Rozin et al 2016).
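To make the idea of statistical mediation concrete, here is a minimal Baron-Kenny-style sketch in Python with simulated data. The variable names and effect sizes are invented for illustration; this is not San Martin et al's actual analysis.

```python
# A minimal mediation sketch, assuming hypothetical columns for relational
# mobility (X), desire for personal control (M), and analytic cognition (Y).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
mobility = rng.normal(size=n)                                   # X
control = 0.6 * mobility + rng.normal(size=n)                   # M
analytic = 0.5 * control + 0.1 * mobility + rng.normal(size=n)  # Y

df = pd.DataFrame({"mobility": mobility, "control": control, "analytic": analytic})

total = smf.ols("analytic ~ mobility", df).fit()             # total effect c
direct = smf.ols("analytic ~ mobility + control", df).fit()  # direct effect c'
print(total.params["mobility"], direct.params["mobility"])
# If c' is much smaller than c, the data are consistent with the effect of
# mobility on analytic cognition being (partially) mediated by control.
```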

I have not yet found an explanation for why this cluster of mechanisms explains the features of analytic cognition (what does control have to do with field dependence?). But a complete account of control, power, and approach vs avoidance might establish a more complete picture. In the meantime, I will content myself with the knowledge that the mediation of these cognitive effects is at least partially identifiable.

Two Moral Systems

Consider the Passenger’s Dilemma:

You are riding in a car driven by a close friend. He hits a pedestrian. You know that he was going at least 35 mph in a speed zone of 20 mph. There are no witnesses, except for you. His lawyer says that if you testify under oath that he was driving only 20 mph, it may save him from serious legal consequences. Do you think 1) that your friend has a right to expect you (as his close friend) to testify that he was going 20mph; or 2) that your friend has no right to expect false testimony. 

In a socially rigid world, the "right thing to do" is to lie for your friend. In a socially mobile world, the "right thing to do" is to act on universal principles. People in individualistic nations are much more likely to refuse to give false testimony.

Jonathan Haidt's Moral Foundations Theory acknowledges this bifurcation between impersonal and interpersonal moralities. To quote Haidt (2012):

The ethic of autonomy is based on the idea that people are, first and foremost, autonomous individuals with wants, needs, and preferences. People should be free to satisfy these wants, needs, and preferences as they see fit, and so societies develop moral concepts such as rights, liberty, and justice, which allow people to coexist peacefully without interfering too much in each other’s projects. This is the dominant ethic in individualistic societies. You find it in the writings of utilitarians such as John Stuart Mill and Peter Singer (who value justice and rights only to the extent that they increase human welfare), and you find it in the writings of deontologists such as Kant and Kohlberg (who prize justice and rights even in cases where doing so may reduce overall welfare).

But as soon as you step outside of Western secular society, you hear people talking in two additional moral languages. The ethic of community is based on the idea that people are, first and foremost, members of larger entities such as families, teams, armies, companies, tribes, and nations. These larger entities are more than the sum of the people who compose them; they are real, they matter, and they must be protected. People have an obligation to play their assigned roles in these entities. Many societies therefore develop moral concepts such as duty, hierarchy, respect, reputation, and patriotism. In such societies, the Western insistence that people should design their own lives and pursue their own goals seems selfish and dangerous— a sure way to weaken the social fabric and destroy the institutions and collective entities upon which everyone depends.

The ethic of divinity is based on the idea that people are, first and foremost, temporary vessels within which a divine soul has been implanted. People are not just animals with an extra serving of consciousness; they are children of God and should behave accordingly. The body is a temple, not a playground. Even if it does no harm and violates nobody’s rights when a man has sex with a chicken carcass, he still shouldn’t do it because it degrades him, dishonors his creator, and violates the sacred order of the universe. Many societies therefore develop moral concepts such as sanctity and sin, purity and pollution, elevation and degradation. In such societies, the personal liberty of secular Western nations looks like libertinism, hedonism, and a celebration of humanity’s baser instincts

Indeed, Enke (2019) shows that individualist nations score higher in the universal moral sentiments (Harm & Fairness), whereas collectivist nations score higher in the parochial moral sentiments (Loyalty, Authority, and Sanctity).

Let's turn to the moral relevance of intention. All people possess the mindreading faculty, which generates guesses about the intentions of others. Because idiocentric people believe more strongly in personality, they are more likely to use these inferences when judging the actions of others. When asked to judge the badness of accidental versus intentional theft, people in WEIRD countries are much more likely to evaluate the two cases differently.

Relational mobility permits leaving behind damaged reputations, making reputations in individualistic societies a less reliable source of information. Ironically, trust and norm internalization may be most vital in individualistic environments (Sosis 2005). 

Finally, idiocentric people are much more likely to experience guilt (an internal motivation), rather than shame (an external-relational emotion).

Moral systems are not a human universal. There are two moralities, not one. Relational mobility determines which morality you internalized as a child.

Residential Mobility

A questionnaire from Lun et al (2012) asked participants whom they liked more:

  1. the egalitarian who splits her time between helping the friend and the stranger
  2. the loyal friend, who only helped her friend

One of your potential partners always preferred the egalitarian helper. The other always liked the loyal friend. Who do you want to work with? 

Did your family move residences while you were a child?

If your family never moved, there's a 90% chance you prefer the loyal (non-egalitarian) friend. If you moved once as a child, that percentage drops to 75. If you moved more than once, it drops to 62.

Experiments like this show that residential moves, and their concomitant need for new relationships, activate the idiocentric syndrome, including impersonal prosociality. These experiences strengthen egalitarian intuitions, flatten the in-group vs out-group distinction, and shift people away from relying too heavily on their long-enduring social networks.

The same effect can be produced experimentally. Subconscious primes of residential mobility increase people’s motivations to expand their social networks—to establish and nourish new relationships. 

Residential mobility produces relational mobility. Thus, it is not surprising that moving promotes idiocentrism.

Summary

When you encounter a culture truly different from your own, it is easy and natural to feel a sense of wonder and amazement.

There is a tendency to associate that reverence with a belief that every culture is unique, like a snowflake. But that conclusion need not follow.

To build a theory of cultural difference, it helps to remember concepts from dimensionality reduction algorithms like PCA. There are thousands of differences across cultures, true; but it is possible to identify which phenomena vary together.
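As a toy sketch of that idea: simulate a handful of country-level measurements that all load on one latent factor, and watch PCA recover a single dominant dimension. Every variable, loading, and data point below is invented for illustration; real analyses like Hofstede's are far more involved.

```python
# Toy sketch: correlated cultural measurements collapse onto one dimension.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
n_countries = 60
latent_ic = rng.normal(size=n_countries)   # hypothetical latent I/C factor

# Three observed measures, each loading on the same latent dimension
survey = np.column_stack([
     0.8 * latent_ic + 0.3 * rng.normal(size=n_countries),  # self-concept usage
    -0.7 * latent_ic + 0.4 * rng.normal(size=n_countries),  # Asch conformity
     0.9 * latent_ic + 0.3 * rng.normal(size=n_countries),  # guilt (vs shame)
])

pca = PCA().fit(survey)
print(pca.explained_variance_ratio_)  # first component dominates
```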

The individualism/collectivism continuum, or I/C dimension, was one of the first dimensions discovered by Hofstede while mining survey data from IBM offices across the continents. Across the voluminous research outlined above, it is worth noting that the mean differences are often very large, typically on the order of 2:1, 3:1, or higher (Nisbett et al 2001). This is why the I/C dimension is held to be the most significant dimension of human cultural variation (Triandis 1994).

Recall that cognitive, social, and moral differences covary along the I/C dimension, and are largely explained by a single factor: relational mobility.

A couple of reminders:

  • The I/C dimension is a continuous, not a binary, variable.
  • The I/C dimension, while originally studied as East Asia vs United States, manifests in all nations, and within all nations.

Is the I/C dimension the only significant driver of global psychological variation? By no means! Cross-cultural psychology has revealed (at least) three other dimensions that explain much cultural variation.

Until next time.

References

  1. Bond & Smith (1996). Culture and Conformity: A Meta-Analysis of Studies Using Asch's (1952b, 1956) Line Judgment Task
  2. Elliot et al (2001). A Cross-Cultural Analysis of Avoidance (Relative to Approach) Personal Goals.
  3. Enke (2017). Kinship Systems, Cooperation and the Evolution of Culture
  4. Greenaway et al (2015). Loss of control stimulates approach motivation.
  5. Haidt (2012). The Righteous Mind: why good people are divided by politics and religion
  6. Hofstede, G. (1980). Culture’s consequences: International differences in work-related values
  7. Hsu et al (2012). Critical Tests of Multiple Theories of Cultures’ Consequences 
  8. Inglehart & Baker (2000). Modernization, cultural change, and the persistence of traditional values
  9. Jonas et al (2014). Threat and Defense: From Anxiety to Approach
  10. Lun et al (2012). Residential mobility moderates preferences for egalitarian versus loyal helpers
  11. Ji et al (2000). Culture, Control, and Perception of Relationships in the Environment 
  12. Ma & Schoeneman (1997). Individualism vs Collectivism: a comparison of Kenyan and American self-concepts
  13. Marcus & Kitayama (1991). Culture and the Self: Implications for Cognition, Emotion, and Motivation 
  14. Nisbett et al (2001). Culture and systems of thought: holistic versus analytic cognition.
  15. Norenzayan & Nisbett (2000). Culture and Causal Cognition
  16. Rozin et al (2016). Right: Left:: East: West. Evidence that individuals from East Asian and South Asian cultures emphasize right hemisphere functions in comparison to Euro-American cultures
  17. San Martin et al (2019). Relational Mobility and Cultural Differences in Analytic and Holistic Thinking
  18. Schwartz, S. H. (2006). A theory of cultural value orientations: Explication and applications
  19. Sosis (2005). Does Religion Promote Trust? The Role of Signaling, Reputation, and Punishment
  20. Steenkamp (2001). The role of national culture in international marketing research.
  21. Triandis (1994). Culture and social behavior.
  22. Trompenaars & Hampden-Turner (1998). Riding the waves of culture: Understanding diversity in global business.
  23. Varnum et al (2010). The Origin of Cultural Differences in Cognition: Evidence for the Social Orientation Hypothesis
  24. Zhou et al (2012). Control Deprivation and Styles of Thinking

The Politics of Monolatrism

Part Of: History sequence
Content Summary: 3000 words, 30 min read

The Pan-Israelite Identity

Until 700 BCE, Judah was a much smaller political force than the biblical narrative suggests. One demonstration of the small scale of this society is the request, in a letter sent by Abdi-Heba, king of Jerusalem, that the pharaoh supply fifty men "to protect the land." Another letter asks the pharaoh for one hundred soldiers to guard Megiddo from an attack by his aggressive neighbor, the king of Shechem. These Amarna letters date to the 14th century BCE, but the population did not change much in the intervening period. Until 700 BCE, Judah comprised no more than twenty settlements, with a total population of roughly 40,000 and a handful of fortified cities (not including Jerusalem).1

Judah wasn't always beholden to the Assyrian empire. But when the Assyrian god-king Tiglath-Pileser III switched from a policy of remote domination to one of direct military control, the states of Canaan began looking for a way out. Israel and Aram-Damascus marched on Jerusalem to pressure Judah to join the independence movement. But the Judahite king Ahaz instead appealed for military assistance from Assyria, at the price of becoming a vassal to the superpower. Tiglath-Pileser III accepted the proposition, and utterly destroyed Aram-Damascus in short order (2 Kings 16:5-18). He also conquered Megiddo and Hazor in 732 BCE, crippling the Northern economy. Assyria "cancelled" the Israelite Kingdom entirely in 722 BCE.

These Iron Age nations lived on the edge of a knife. One political miscalculation, and atrocities ensue. 

The Assyrians were feared for their war crimes, and for their practice of exile: forcibly relocating thousands of people into a new region, until their national heritage was subsumed by Assyrian monoculture. Little wonder that archaeologists find evidence for a mass migration of southern Samarian Israelites into Judah, plausibly as a means to escape exile. Conservative estimates have the population of Judah doubling from 40,000 to 80,000 people. The immigration was particularly pronounced in Jerusalem, which grew 15-fold in less than a generation (Broshi, 1974).

Not only did Judah experience a population boom; sites in the Shephelah also show signs of a new olive oil industry. Beyond paying tribute, Ahaz also integrated Judah into the Assyrian world economy. This economic boom complemented the population boom. Together, they led Judah towards full statehood; this time period contains the first evidence of an advanced bureaucracy, complete with public works projects and scribal activity.

The population of Judah doubled. Imagine hundreds of millions of immigrants arriving in the United States, doubling its population within a generation. It's hard to fathom the myriad ways life would have to change to accommodate such an influx. Social stability would only be possible with heavy ideological emphasis on unity.

The South remembered a King David; the North remembered a King Saul. While these historical figures may have interacted with one another, the tales of their relationship – and how David ultimately earned the right to the unified throne – are surely relevant to the interests of Judahite scribes. These scribes compiled texts in an effort to reconcile the two peoples, to motivate a sense of Pan-Israelite identity. This era is when clearly-Northern (Judges, E, Saul) and clearly-Southern (J, David) traditions were first brought together in a unified series of texts.

Preparations For Revolt

2 Kings 18:14 reports that Sennacherib levied a tribute of 30 talents of gold on Judah. Assyrian records reveal that this was in fact an extremely steep sum: per Rothlin & Roux (2013), only two other vassals received greater demands. This suggests that Judah around this time was quite wealthy, a fact attested in 2 Chronicles 32:27-29. How did Judah manage to acquire so much wealth?

Judah's role in the international market was limited by her lack of a major sea port and natural resources. But Rothlin & Roux (2013) point out that Judah should have been able to extract taxes on traffic along two international trade routes, the King's Highway and the Via Maris. The two other cities adjacent to international trade routes, Tyre and Damascus, were subject to comparably steep tribute demands and frequent military action, testifying to the tremendous wealth-generating potential of such routes.

Given his immense wealth, King Hezekiah did not find vassal-hood acceptable. So, in a move that would ultimately doom his nation, he began preparing a revolt. His administration built the Siloam Tunnel, bringing freshwater to Jerusalem as a defense against siege. 

Archaeologists have also discovered vast numbers of storage jars produced during Hezekiah’s reign, decorated with “LMLK”, which roughly translates to “property of the King”. Many scholars think they were used for the distribution of supplies in preparation for the revolt.

Tithes as Taxes

Genesis is rife with stories of the patriarchs building altars to worship their god. The practice is codified by the Covenant Code: Exodus 20:24 endorses the construction of local altars, where all the people of Israel can participate in the Yahweh cult. 

Contrast this with Deut 12:5-6,11-14, which insists on cult centralization. Here Moses insists that there is only one legitimate place of worship – Jerusalem. This motif is a central fixture of the book of Deuteronomy. 

From a comparative perspective, centralization is an unusual policy. Recall that these reforms occurred centuries before the concepts of prayer, synagogue, and scripture even existed – there was only cultic ritual. By necessity, centralization deprived the worshiper of the direct and spontaneous religious experience to which he was accustomed at the local altars spread throughout the country.

So, why? Why was centralization such a vital issue to the Hebrew Bible? 

Many ex-Israelites presumably still worshiped in the Bethel temple, situated in the midst of their ancestral villages. Located just a few miles north of Jerusalem, this must have posed a serious religious challenge to Judahite authority. It seems that the solution was a ban on all sanctuaries – countryside shrines in Judah and the Bethel temple alike. 

Social reasons may not have been the only factor. Theocracies like ancient Israel feature strong interactions between politics, economics, and religion. Consider the taxation system of ancient Judah: as described in Oden (1984), a king had several ways of generating revenue.

Recall Hezekiah's predicament. Judah was starting to mature beyond its provincial chiefdom legacy. The state was actively strengthening its power, especially in the capital city of Jerusalem. Meanwhile, Hezekiah had hatched a desperate plan to revolt against the Assyrian superpower. This act would require massive funding: standing armies and city fortifications aren't exactly cheap.

Claburn (1973) put forward the fiscal hypothesis:

How does an ambitious king most efficiently get his hands on the largest possible proportion of the peasantry’s agricultural surplus? If he is smart, he does it not by raising the assessed level of taxes, but by reforming his fiscal system so that he brings into the capital a larger proportion of the taxes already being assessed. He does this by substituting for the semi-independent local dignitaries to whom the peasant had been paying the taxes (but who had been pocketing most of the proceeds locally) a hierarchically organized central internal revenue bureau of paid officials under his direct control. 

2 Chr 31:11-12 describes the construction of elaborate storehouses to store the new influx of wealth. Deut 14:24-26 provides helpful advice on how peasants can more efficiently transport their money to the Jerusalem coffers. Deut 16:16 offers another plain assertion (emphasis added): "three times a year all your males must appear before Yahweh in the place he chooses for the Feast of Unleavened Bread, the Feast of Weeks, and the Feast of Shelters; and they must not appear before Yahweh empty-handed."

Hezekiah also guaranteed financial protection to the now-impoverished Levites (2 Chr 31:19), which may have gone some way in quelling a potential source of civil unrest. By granting supplies to the local priests, Hezekiah assured them that his reforms did not intend to deprive them of their livelihood.

The Pious Lie

Since Martin Noth, scholars have recognized that the books of Deuteronomy, Joshua, Judges, 1-2 Samuel, and 1-2 Kings share the same author (the Deuteronomist, Dtr), or at least the same cadre of authors. Together, these books form the Deuteronomistic History (DtrH).

We have already seen how centralization pervades Deuteronomy. But critically, centralization also plays a pivotal role in 1 and 2 Kings. In the DtrH, all kings of Judah and Israel (!) are evaluated, in large part, on whether they enforced Jerusalem-only worship. All northern kings are given a bad evaluation – even Jehu, who destroyed the cult of Baal. Even the good kings of Judah after Solomon and prior to Hezekiah receive only qualified praise, because they permitted sanctuaries on the high places.

Next, we turn to 2 Kings 22:8-13:

Hilkiah the high priest said to Shaphan the secretary, “I have found the Book of the Law in the temple of the Lord.” He gave it to Shaphan, who informed the king, “Hilkiah the priest has given me a book.” When the king heard the words of the Book of the Law, he tore his robes. He gave these orders: “Go and inquire of the Lord about what is written in this book that has been found. Great is the Lord’s anger that burns against us because those who have gone before us have not obeyed the words of this book; they have not acted in accordance with all that is written there concerning us.”

What was this "book of the law" that Hilkiah found? Since the early nineteenth century, scholars have identified it as Deuteronomy. It is the only book in the Pentateuch to advocate centralization, and the details of Deuteronomy's prescriptions are reported as being specifically implemented by Josiah.

Let us put all of this together. 

  1. Deuteronomy contradicts Exodus’ endorsement of decentralized Yahweh worship (the ancestral form) with the (much later) idea of centralization.
  2. Centralization served to funnel wealth away from the local Levites, and channel those funds directly into royal coffers.
  3. In Kings, monarchs are judged good/bad on two criteria: exclusive worship of Yahweh, but also conformance to centralization.
  4. In Kings, Josiah is said to “discover” the book of Deuteronomy, and use it as the basis of his centralization reforms. 
  5. Per the DtrH hypothesis, the author of Kings most likely also authored Deuteronomy. 
  6. In Kings, the most textual space and the most full-throated praise are given to Hezekiah and Josiah – who revolted against Assyria.
  7. In Kings, the most vitriolic condemnations are reserved for Ahaz and Manasseh – who were deferent to Assyria. 

These data suggest three natural conclusions:

  1. Dtr is a scribe in Josiah’s court. In Kings, he combines history with an ideological argument against decentralization and idolatry. 
  2. Dtr also penned the book of Deuteronomy. Moses never advocated centralization; these ideas were instead placed on Moses’ lips. 
  3. Dtr was not only pro-centralization and pro-intolerance. He was also orchestrating a political independence movement.

In short, Dtr was a member of a Hardliner group in Jerusalem, violently opposed to another faction, whom we'll call the Internationalists.

The Hardliners needed centralization to fund their war efforts. It is less clear why they affiliated with Yahweh-only monolatrist prophets like Elijah. Couldn't the Yahwists have just as easily chosen the Internationalists' approach to geopolitics? Or is there some structural connection between religious fundamentalism and nationalistic fervor? I don't have an answer, but Akhenaten's eerily similar theopolitical reforms may suggest the latter.

Sennacherib’s Revenge

Hezekiah represented the Hardliner faction within Jerusalem. At first, he continued the posture pioneered by Ahaz: subservience to Assyria. This was appropriate given that Hezekiah was crowned during the reign of Sargon II of Assyria, who single-handedly transformed the neo-Assyrian state into a multinational empire. 

But Hezekiah planned his rebellion, using centralization as a new source of funding. And when Sargon II was killed in battle, and his untested son Sennacherib assumed the throne, Hezekiah took a gamble and declared independence.

The revolt did not go well. The Neo-Assyrian war machine of this era was absolutely devastating, and Sennacherib proved able to wield it. He laid siege to every significant Judean town, capturing and ransacking them. Hezekiah was subjugated as a vassal, and conceded an enormous tribute.

The Hebrew Bible spends just one sentence on this state-crippling result. But the archaeological record has revealed the extent of the damage. Sennacherib commissioned artwork depicting his victory over Lachish, the second most important city in Judah. The inscription of the Lachish Relief reads “Sennacherib, the mighty king of Assyria, sitting on the throne of judgment, at the entrance of the city of Lachish. I gave permission for its slaughter”.

Sennacherib laid siege to Jerusalem, but failed to capture it. The Hardliners made much of this fact, attributing the non-capture to a miracle. Yet despite their failure to destroy the monarchy of Judah (as they had those of Israel and Aram-Damascus), the Assyrians did cripple the state, and negotiated a very unfavorable peace treaty.

The Hardliners took the survival of Jerusalem as evidence of their own invulnerability. But the Internationalists looked to another outcome: the slaughter of their people. Little surprise the Internationalists took control of government. The next Judean king, Manasseh, reversed the unpopular doctrine of centralization, allowing Yahweh worship to continue on countryside altars.

The 55 year reign of Manasseh, with a conciliatory policy towards Assyria, surely facilitated the nation’s economic recovery. Internationalist scribes likely authored the Great Solomonic Empire tradition (archaeology suggests the historical Solomon was little more than a warlord1). From Finkelstein & Silberman (2006):

The stories of Solomon in the Bible are uniquely cosmopolitan. Foreign leaders are not enemies to be conquered or tyrants to be suffered; they are equals with whom to deal politely, if cleverly, to achieve commercial success. The Solomonic narratives were used to legitimize for all of Judah's people the aristocratic culture and commercial concerns of the court of Manasseh that promoted Judah's participation in the Assyrian world economy.

Return of the Hardliners

After Manasseh died, Amon inherited the throne… and was then murdered. The Hardliners emerged from the coup in control of the reins of government, with an 8-year-old boy named Josiah ultimately crowned king. During Josiah's childhood, Dtr authored Deuteronomy, and this text was later used by the king as an ideological justification for his renewed efforts at centralized taxation. Deuteronomy is also framed as a suzerain covenant treaty of submission of Israel to Yahweh, in the same template as was used by Assyria to assert dominance over its vassals (Romer, 2007). By declaring fealty to Yahweh, a political statement was made: Judah was no longer a vassal of Assyria.

The Hardliners were more skeptical of the Assyrian global economy. To the glories of Internationalist Solomon were added Hardliner allegations of moral depravity, stemming from corruption by his foreign wives.

In the middle of Josiah’s reign, and for external reasons, the Assyrian Empire was beginning to disintegrate, and the Neo-Babylonian Empire had not yet risen to replace it. It is possible that Egypt and Assyria reached some sort of an understanding, according to which Egypt inherited the Assyrian provinces to the west of the Euphrates in exchange for a commitment to provide Assyria with military support (Finkelstein, 2002). Yet in these uncertain times, many nations formerly under the yoke of Assyria were able to govern themselves independently. One imagines a waft of optimism during this time, hope tinged with zealous patriotism.

A Fateful Miscalculation

So when the Egyptians journeyed north to support Assyria against the “new kid on the block” (Babylon), the precocious Judah decided not to let them pass. The thought of their hated nemesis receiving military support was perhaps too much. 

Of course, this geopolitical read turned out to be mistaken. Babylon, not Assyria, was the empire to worry about. And more to the point, Egypt defeated Judah on the battlefield of Megiddo. Josiah was killed. His army was slaughtered.

Just as Israel had done, a dramatically weakened Judah went on to stubbornly rebel against Babylon, despite the suicidal imbalance of power. And they paid the price. Nebuchadnezzar sacked Jerusalem, and exiled the Jerusalem elite. All told, 5-15% of the Judahite population – the intelligentsia – was exiled to Babylonian provinces. This marks the beginning of the diaspora. Note, though, that most of the Judahite population remained on the land as rural subsistence farmers: their daily lives weren't affected much by the change of power in the capital.

The Politics of Monolatrism

Had the Bible been written in a modern context, you might see it bristling with geopolitical intrigue, moving appeals for independence, and the like. But since it was written in a vastly different cultural milieu, these very same sentiments manifest themselves as zealous ardor for centralization.

After exile, Dtr naturally didn’t throw away the exuberant texts of Kings and Deuteronomy. Instead, he reworked them to explain why the state of Judah had been destroyed. Unwilling to attribute blame to the Hardliner acts of rebellion, he instead attributed the collapse of the state to two factors:

  • The sins of past Internationalist administrations, which were blamed for the political failures of the Hardliners.
  • The sins of the peasants, who consistently failed to renounce their polytheism and worship Yahweh.

In this second edition of DtrH, the conditional Mosaic covenant (“you will keep the land if…”) was emphasized, as a way of reconciling history and the unconditional Davidic covenant (“David’s dynasty will never end”). 

Perhaps exile would have been the end of the story, had the processes of cultural assimilation not been interrupted by Cyrus and the Achaemenid Empire. But they did intervene. And within this timeframe, as we will see, a rival faction to the Deuteronomists was responsible for one of the most important ideological innovations of our modern world. Called the Priestly source in DH parlance, these temple-less priests sitting in exile invented monotheism.

Until next time.

Footnotes

1. This particular section relies on low chronology. Alternative chronologies exist; see Thomas (2016). Note that most of the conclusions reached in this article do not depend on low chronology, and are also held by “high chronology” scholars.

References

  • Borowski (1995). Hezekiah’s Reforms and the Revolt against Assyria 
  • Broshi (1974). The Expansion of Jerusalem in the Reigns of Hezekiah and Manasseh
  • Claburn (1973). The Fiscal Basis of Josiah’s Reforms
  • Finkelstein & Silberman (2002). The Bible Unearthed
  • Finkelstein & Silberman (2006). David and Solomon
  • Oden (1984). Taxation in Biblical Israel
  • Rothlin & Roux (2013). Hezekiah and the Assyrian tribute
  • Romer (2007). The So-Called Deuteronomistic History
  • Thomas (2016). Debating the United Monarchy: Let’s See How Far We’ve Come 

Epistemic vs Aleatory Uncertainty

Part Of: Bayesianism series
Content Summary: 2300 words, 23 min read
Epistemic Status: several of these ideas are not distillations, but rather products of my own mind. Recommend a grain of salt.

The Biology of Uncertainty

In the reinforcement learning literature, there exists a bedrock distinction of exploration vs exploitation. A rat can either search for a new food source, or continue mining calories from his current stash. There is risk in exploration (what if you don’t find anything better?), and often diminishing returns (if you’re confined to 2 miles from your sleeping grounds, there’s only so much territory that needs to be explored). But without exploration, you hazard large opportunity costs and your food supply becomes quite fragile. 

Exploitation can be conducted unconsciously. You simply need nonconscious modules to track the rate of returns provided by your food site. These devices will alarm if the food source degrades, but otherwise don’t bother you much. In contrast, exploration engages an enormous amount of cognitive resources: your cognitive map (neural GPS), action plans, world-beliefs, causal inference. Exploration is about learning, and as such requires consciousness. Exploration is paying attention to the details.

Exploration will tend to produce probability matching behaviors: your actions are in proportion to your action value estimates. Exploitation tends to produce maximizing behaviors: you always choose the action estimated to produce the most value. 
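As a toy illustration, here is a two-armed bandit sketch contrasting the two policies. The payoff rates and the Laplace-smoothed learner are invented for illustration, not a model of any particular organism.

```python
# Two-armed bandit: probability matching (explore) vs maximizing (exploit).
# All payoff rates and learner details are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_reward = np.array([0.3, 0.7])    # hypothetical arm payoff rates
successes = np.ones(2)                # Laplace pseudo-counts keep the
failures = np.ones(2)                 # estimates away from 0 and 1

def estimates():
    return successes / (successes + failures)

def probability_matching():
    """Explore: choose arms in proportion to their estimated value."""
    p = estimates()
    return rng.choice(2, p=p / p.sum())

def maximizing():
    """Exploit: always choose the arm with the highest estimated value."""
    return int(np.argmax(estimates()))

for _ in range(10_000):
    arm = probability_matching()      # swap in maximizing() to exploit
    if rng.random() < true_reward[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print(estimates())                    # approaches [0.3, 0.7]
```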

Statistics and Controversy

Everyone agrees that probability theory is a profoundly useful tool for understanding uncertainty. The problem is, statisticians cannot agree on what probability means. Frequentists insist on interpreting probability as relative frequency; Bayesians interpret probability as degree of confidence. Frequentists use random variables to describe data; Bayesians are comfortable also using them to describe model parameters. 

We can reformulate the debate as between two conceptions of uncertainty. Epistemic uncertainty is the subjective Bayesian interpretation, the kind of uncertainty that can be reduced by learning. Aleatory uncertainty is the objective Frequentist stuff, the kind of uncertainty you accept and work around.

Philosophical disagreements often have interesting implications. For example, you might approach deontological (rule-based) and consequentialist (outcome-based) ethical theories as a winner-take-all philosophical slugfest. But Joshua Greene has shown that both camps express unique circuitry in the human mind: every human being experiences both ethical intuitions during moral dilemmas (though at different intensities, and with different activation profiles).

The sociological fact of persistent philosophical disagreement sometimes reveals conflicting intuitions within human nature itself. Controversy reification is a thing. Is it possible this controversy within philosophy of statistics suggests a tension buried in human nature?

I submit that these rivaling definitions of uncertainty are grounded in the exploration and exploitation repertoires. Exploratory behavior treats unpredictability as ignorance to be overcome; exploitative behavior treats unpredictability as noise to be accommodated. All vertebrates possess these two ways of approaching uncertainty. Human philosophers and statisticians are rationalizing and formalizing truly ancient intuitions.

Cleaving Nature At Its Joints

Most disagreements are trivial. Nothing biologically significant hinges on the fact that some people prefer the color blue, and others green. Do frequentist/Bayesian intuitions resemble blue/green, or deontological/consequential? How would you tell?

Blue-preferring statements don’t seem systematically different from green-preferring statements. But intuitions about epistemic vs aleatory uncertainty do systematically differ. The psychological data presented in Brun et al (2011) is very strong on this point.

Statistical concepts are often introduced with ridiculously homogeneous events, like a coin flip. It is essentially impossible for a neurotypical human to perfectly predict the outcome of a coin flip (which is determined by the arcane minutiae of muscular spasms, atmospheric friction, and chaos theory). Coin flips are also perceived as interchangeable: the location of the flip, the atmosphere of the room, the force you apply – none seem to disturb the outcome of a fair coin. In contrast, epistemic uncertainty is perceived in single-case, heterogeneous events, such as the proposition "Is it true that Osama bin Laden is inside the compound?"

As mentioned previously, these uncertainties elicit different kinds of information search (causal mental models versus counting), linguistic markers (“plausible” vs “chance”), and even different behaviors (exploration vs exploitation). 

People experience epistemic uncertainty as more aversive. People prefer to guess the roll of a die, the sex of a child, and the outcome of a horse race before the event rather than after. Before a coin flip, we experience aleatory uncertainty; if you flip the coin and hide the result, our psychology switches to a more uncomfortable sense of epistemic uncertainty. We are often less willing to bet money when we experience significant epistemic uncertainty.

These epistemic discomforts make sense from a sociological perspective: when we sit under epistemic uncertainty, we are more vulnerable to being exploited – both materially by betting, and reputationally by appearing ignorant.

Several studies have found that although participants tend to be overconfident when assessing the probability that their specific answers are correct, they tend to be underconfident when later asked to estimate the proportion of items that they had answered correctly. While the particular mechanism driving this phenomenon is unclear, the pattern suggests that evaluations of epistemic vs aleatory uncertainty rely on distinct information, weights, and/or processes.

People can be primed to switch their representation. If you advise a person to "think like a statistician", they will adopt an aleatory framing. The reverse holds when drawing balls from an urn: if you draw a ball but don't show its color, people switch from the Outside View (extensional) to the Inside View (intensional).

Other Appearances of the Distinction

Perhaps the most famous expression of the distinction comes from Donald Rumsfeld in 2002:

As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

You can also find the distinction hovering in Barack Obama’s retrospective on the decision to raid a suspected OBL compound:

  • The question of whether Osama Bin Laden was within the compound is an unknown fact – an epistemic uncertainty.
  • The question of whether the raid would be successful is an outcome of a distribution – an aleatory uncertainty.

A related distinction, Knightian uncertainty, comes from the economist Frank Knight. "Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated…. The essential fact is that 'risk' means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating…. It will appear that a measurable uncertainty, or 'risk' proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all." The distinction is well illustrated by the Ellsberg Paradox: in one game, you bet on a draw from an urn with a known 50/50 mix of red and black balls; in another, you bet on a draw from an urn whose proportion of colors is unknown. Most people strongly prefer betting on the known urn.

As Hsu et al (2005) demonstrate, people literally use different brain systems to process these two games. When the game structure is known, the reward processing centers (the basal ganglia) are used. When the game structure is unknown, the fear processing centers (the amygdala nuclei) are employed instead.

Mousavi & Gigerenzer (2017) use Knightian uncertainty to defend the rationality of heuristics in decision making. Nassim Taleb's theory of "fat tailed distributions" is often interpreted as an affirmation of Knightian uncertainty, a view he rejects.

Towards a Formal Theory

For some, Knightian uncertainty has been a rallying cry driven by discontents with orthodox probability theory. It is associated with efforts at replacing its Kolmogorov foundations. Intuitionistic probability theory, replacing classical axioms with computationally tractable alternatives, is a classic example of this kind of work. But as Weatherson (2003) notes, other alternatives exist:

It is a standard claim of modern Bayesian epistemology that reasonable epistemic states should be representable by probability functions. There have been a number of authors who have opposed this claim. For example, it has been claimed that epistemic states should be representable by Zadeh’s fuzzy sets, Dempster and Shafer’s evidence functions, Shackle’s potential surprise functions, Cohen’s inductive probabilities or Schmeidler’s non-additive probabilities. A major motivation of these theorists has been that in cases where we have little or no evidence for or against p, it should be reasonable to have low degrees of belief in each of p and not-p, something apparently incompatible with the Bayesian approach. 

Evaluating the validity of these heterodoxies is beyond the scope of this article. For now, let me state that it may be possible to simply accommodate the epistemic/aleatory distinction within probability theory itself. As Andrew Gelman claims:

The distinction between different sources of uncertainty can in fact be encoded in the mathematics of conditional probability. So-called Knightian uncertainty can be modeled using the framework of probability theory.

You can arguably see the distinction in the statistical concept of Bayes optimality. For tasks with low aleatory uncertainty (e.g., classification of high-res images), model performance can approach 100%. But for tasks with higher aleatory uncertainty (e.g., predicting future stock prices), model performance asymptotically approaches a much lower ceiling.
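Here is a toy simulation of that ceiling. Labels are generated with irreducible noise, so even the Bayes-optimal rule hits a hard accuracy bound; the 20% noise rate is invented for illustration.

```python
# An aleatory performance ceiling: no classifier, however much data it
# sees, can exceed the Bayes-optimal accuracy when labels carry noise.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)
signal = x > 0                      # the learnable (epistemic) part
noise_rate = 0.2                    # aleatory: 20% of labels are flipped
flip = rng.random(n) < noise_rate
y = np.where(flip, ~signal, signal)

# Even the Bayes-optimal rule ("predict signal") hits a hard ceiling:
bayes_accuracy = np.mean((x > 0) == y)
print(bayes_accuracy)               # ~0.80, regardless of sample size
```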

Recall the Bayesian interpretation of learning:

Learning is a plausibility calculus, where new data pays down uncertainty. What is uncertainty? Uncertainty is how “loosely held” our beliefs are. The more data we have, the less uncertain we must be, and the sharper the peaks in our belief distribution.

We can interpret learning as asymptotic distribution refinement: our beliefs sharpen towards some raw noise profile beyond which we cannot reach.
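A minimal sketch of that refinement, using a Beta-Bernoulli coin model: as flips accumulate, the posterior over the coin's bias sharpens (epistemic uncertainty pays down), while each individual flip remains irreducibly random (aleatory uncertainty stays). The coin's bias is invented for illustration.

```python
# Epistemic uncertainty shrinks with data; aleatory uncertainty does not.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_p = 0.6                          # the coin's (unknown) bias
alpha, beta = 1.0, 1.0                # flat Beta(1, 1) prior

for batch in [10, 100, 1000]:
    flips = rng.random(batch) < true_p
    alpha += flips.sum()
    beta += batch - flips.sum()
    posterior = stats.beta(alpha, beta)
    print(f"n={alpha + beta - 2:.0f}  mean={posterior.mean():.3f}  "
          f"sd={posterior.std():.3f}")
# The posterior sd falls toward 0, but the next flip is still Bernoulli(0.6).
```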

Science qua cultural learning, then, is not about certainty, not about facts etched into stone tablets. Rather, science is about painstakingly paying down epistemic uncertainty: sharpening our hypotheses to be “as simple as possible, but no simpler”. 

Inside vs Outside View

The epistemic/aleatory distinction seems to play an underrated role in forecasting. Consider the inside vs outside view, first popularized by Kahneman & Lovallo (1993):

Two distinct modes of forecasting were applied to the same problem in this incident. The inside view of the problem is the one that all participants adopted. An inside view forecast is generated by focusing on the case at hand, by considering the plan and the obstacles to its completion, by constructing scenarios of future progress, and by extrapolating current trends. The outside view is the one that the curriculum expert was encouraged to adopt. It essentially ignores the details of the case at hand, and involves no attempt at detailed forecasting of the future history of the project. Instead, it focuses on the statistics of a class of cases chosen to be similar in relevant respects to the present one. The case at hand is also compared to other members of the class, in an attempt to assess its position in the distribution of outcomes for the class. …

Tetlock (2015) describes how superforecasters tend to start with the outside view:

It's natural to be drawn to the inside view. It's usually concrete and filled with engaging detail we can use to craft a story about what's going on. The outside view is typically abstract, bare, and doesn't lend itself so readily to storytelling. But superforecasters don't bother with any of that, at least not at first.

Suppose I pose to you the following question: "The Renzettis live in a small house at 84 Chestnut Avenue. Frank Renzetti is forty-five and works as a bookkeeper for a moving company. Mary Renzetti is thirty-five and works part-time at a day care. They have one child, Tommy, who is five. Frank's widowed mother, Camila, also lives with the family. Given all that information, how likely is it that the Renzettis have a pet?"

A superforecaster knows to start with the outside view; in this case, the base rates. The first thing they would do is find out what percentage of American households own a pet. Starting from this probability, you can then slowly incorporate the idiosyncrasies of the Renzettis into your answer.
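One way to formalize this two-step procedure is Bayesian updating in odds form: anchor on the base rate, then multiply in a likelihood ratio for each idiosyncratic detail. The base rate and ratios below are invented for illustration.

```python
# Outside view first (base rate), then inside view (case-specific evidence).
# All numbers here are invented for illustration.
def update(prob, likelihood_ratio):
    """Convert to odds, multiply by a likelihood ratio, convert back."""
    odds = prob / (1 - prob)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.62                  # assumed base rate of US pet ownership
p = update(p, 1.3)        # hypothetical LR: a five-year-old at home
p = update(p, 0.8)        # hypothetical LR: small house, busy parents
print(round(p, 2))        # a modest adjustment away from the base rate
```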

At first, it is very difficult to square this recommendation with how rats learn; their ordering is, in fact, precisely backwards.

Fortunately, the tension disappears when you remember the human faculty of social learning. In contrast with rats, we don’t merely form beliefs from experience; we also ingest mimetic beliefs – those which we directly download from the supermind of culture. The rivalry between personal epistemology and social epistemology is yet another example of controversy reification.

This, then, is why Tetlock’s advice tends to work well in practice¹:

On some occasions, for some topics, humans cannot afford to engage in individual epistemic learning (see the evolution of faith). But for important descriptive matters, it is often advisable to start with a socially accepted position and “mix in” your own personal insights and perspectives (developing the Inside View). 

When I read complaints about the blind outside view, what I hear is a simple defense of individual learning.

Footnotes

1. Even this individual/social distinction is not quite precise enough. There are, in fact, two forms of social learning: qualitative social learning is learning from speech generated by others; quantitative social learning is learning from maths and data curated by others. Figuring out how these two data-intake mechanisms work is left as an exercise to the reader 😉

References

  • Brun et al (2011). Two Dimensions of Uncertainty
  • Hsu et al (2005). Neural Systems Responding to Degrees of Uncertainty in Human Decision-Making
  • Kahneman & Lovallo (1993). Timid choices and bold forecasts: A cognitive perspective on risk taking
  • Mousavi & Gigerenzer (2017). Heuristics are Tools for Uncertainty
  • Tetlock (2015). Superforecasting
  • Weatherson (2003). From Classical to Intuitionistic Probability

GDP as Standard of Living

Part Of: Economics sequence
Content Summary: 2500 words, 25 min read

This will be an (embarrassingly high-level) overview of macroeconomics. This post is intended as a framework, a jumping-off point for more detailed analyses.

Introduction

During the Great Depression, Americans had a vague sense that it was harder to keep a job, and harder to pay your bills. But no one really knew how long it would last, or if it could be brought to a merciful end. Governments tried several policy solutions, but it was very hard to tell whether their policies were helping or hurting. Governments were making decisions on the basis of such sketchy data as stock price indices, freight car loading, and incomplete indices of industrial production. 

As the problem worsened, it weighed increasingly on the public mind. For the first time, the economy entered the public lexicon as a noun. And the situation prompted governments to get more serious about economic data collection. In order to forecast economic outcomes, it pays to get quantitative about the present. What is the state of the economy?

To answer this, we might endeavor to calculate the value of all the stuff in the United States. 

Imagine going through your living space, taking every possession and entering its value into a spreadsheet. Imagine doing this, but for all goods in every house, every apartment, every place of business, every square meter of pavement (and services too, like your last haircut). 

The calculation sounds daunting. So, why not just keep track of the stuff you bought this year? Rather than calculating wealth (net worth), it’s often simpler to calculate spending. Just as a wealthier person typically spends more, a wealthier nation (like the US) typically produces more every year.

The formal definition:

Gross Domestic Product (GDP) is the market value of all finished goods and services produced within a country in a year. 

Now, three facets of this definition are worth keeping in mind:

  • finished: A finished good is a good that will not be sold again as part of another good. Steel, engines, and flour often serve as examples of intermediate goods: raw materials that are repackaged into final goods like bicycles, cars, and bread. But if a customer buys eggs to make an omelette, those eggs still count as finished goods, since the omelette will not be put up for sale again.
  • produced: GDP only counts new goods and services. A used car sold this year does not count towards GDP; but a new car does. 
  • within a country: exports count toward GDP; imports count against it.

GDP is how economists measure three very important aspects of human societies:

  • Standard of living is GDP per capita. 
  • Productivity is GDP per hour worked.
  • Growth is GDP change over time.

Standard of living matters. In the DR Congo, people earn on average $500 per year. That’s only six baskets of stuff… for an entire year. In Mexico, $21,000 or 178 baskets. In the United States, it’s $67,000 or 545 baskets worth of stuff.

GDP Categories

In a personal budget, it often helps to group your spending habits by category – so too for nations. GDP is often decomposed into four components (a toy calculation follows the list):

  1. Consumption. Private expenditures, including durable goods, non-durable goods, and services.
  2. Investment. Does not include financial products (which are instead considered saving).
  3. Government Consumption. All government expenditures on final goods/services, and also its investments.
  4. Net Exports. Exports have outbound value, imports have inbound value. Imports detract from export receipts; Net Exports = Exports – Imports.
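Here is the toy calculation promised above, using the standard expenditure identity Y = C + I + G + (X – M); every figure is invented:

```python
def gdp_expenditure(consumption, investment, government, exports, imports):
    """GDP by the expenditure approach: Y = C + I + G + (X - M)."""
    net_exports = exports - imports
    return consumption + investment + government + net_exports

# Invented figures, in billions of dollars:
print(gdp_expenditure(consumption=14_000, investment=3_500,
                      government=3_800, exports=2_500, imports=3_100))
# -> 20700
```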

To understand what drives changes in GDP, other disaggregations are possible. For example,

  • Partitioning by State or Province is useful for interrogating geographic variation.
  • Partitioning by Industry is useful for flagging problematic industries.

There is a related notion of Gross National Income (GNI). The relationship between expenditures and income is something like Newton’s Third Law: “for every action, there is an equal and opposite reaction”. In theory, aggregate expenditure and aggregate income should be equivalent; in practice, GDP and GNI come apart slightly (most notably because GNI counts income residents earn abroad). GNI thus provides a complementary way of measuring changes in wealth.

There are many ways to disaggregate GNI; one of the more popular operationalizations is to consider four factors: { employee compensation, rent, interest, and profit }.

Towards Real GDP

During the hyperinflation era of Zimbabwe, the price of a sheet of toilet paper went to 417 Z-dollars. Surely, we don’t want to confuse the act of printing more money with producing more valuable goods and services.

We’ve all heard our grandparents say, “when I was a kid, that cost a quarter”. But such memories conflate nominal and real prices. If you control for inflation, some goods (e.g. movie tickets) have kept roughly the same price; other goods (e.g. electricity) have become easier to purchase. Yet inflation makes both feel more expensive.

In general, money illusion denotes our predisposition to focus on nominal rather than real prices. People positively revolt when their nominal salary is cut, but rarely notice if their real salary is cut (e.g., if inflation outpaces their raise).

To compute real GDP over time, simply fix your dollar value to a single year (e.g., 2020 dollars). This allows comparison of real versus nominal GDP. In the United States before 1980, nominal growth averaged about 7.5% per year, while real growth averaged about 3.5%.
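A minimal sketch of the deflation step, with invented numbers. Fixing dollars to a base year just means dividing nominal GDP by the price level relative to that year:

```python
def real_gdp(nominal_gdp, price_index, base_index=100):
    """Deflate nominal GDP into base-year dollars: real = nominal / (P / P_base)."""
    return nominal_gdp * base_index / price_index

# Nominal GDP doubles, but prices rose 60%: real growth is far smaller.
print(real_gdp(nominal_gdp=10_000, price_index=100))  # base year -> 10000.0
print(real_gdp(nominal_gdp=20_000, price_index=160))  # -> 12500.0
```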

Economic data like this also showcases the conventional definition of a recession: real GDP growth that is negative for two consecutive quarters (harder to spot in annual data, which aggregates growth by year).

The cost of an iPhone is $700 USD in the United States, and $700 in India. The cost of a haircut is $20 in the United States, and $1 in India. This is the Balassa-Samuelson effect. Why should it exist?

If iPhones were sold for less in India, more people would purchase iPhones from India & have them shipped to their house. This process is called arbitrage, and it guarantees the Law of One Price. However, the Law of One Price only applies to tradable goods: you cannot ship a haircut overseas.

Before adjusting for this effect, you might conclude that the average income of a person living in India is 33 times smaller than that of someone living in the United States. After the adjustment, the real gap becomes visible: only about 10 times less purchasing power.
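A sketch of how that adjustment works, assuming an invented two-good basket (one tradable iPhone plus thirty non-tradable haircuts). The exact gap you recover depends entirely on the basket you choose, so treat the numbers as illustrative only:

```python
def ppp_income(local_income, local_basket_price, us_basket_price):
    """Re-express income as 'US baskets purchasable', not raw dollars."""
    return local_income / local_basket_price * us_basket_price

# Invented basket: one iPhone (tradable) + 30 haircuts (non-tradable).
us_basket = 700 + 30 * 20    # = 1300
india_basket = 700 + 30 * 1  # = 730

us_income, india_income = 66_000, 2_000
print(us_income / india_income)                                       # naive: 33x
print(us_income / ppp_income(india_income, india_basket, us_basket))  # ~18.5x
```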

The Significance of Growth

For most developed nations, GDP doesn’t increase linearly (say, an additional $10B per year), but exponentially (e.g., 2% more per year). Just as exponential growth in epidemics can lead to surprisingly horrendous outcomes, exponential growth in economies can lead to surprisingly affluent outcomes.
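To feel the force of compounding, consider doubling times (the growth rates below are arbitrary):

```python
import math

def years_to_double(growth_rate):
    """Exact doubling time under compound growth; cf. the 'rule of 70' shortcut."""
    return math.log(2) / math.log(1 + growth_rate)

for g in [0.01, 0.02, 0.04]:
    print(f"{g:.0%} growth doubles GDP in ~{years_to_double(g):.0f} years "
          f"(rule of 70 says ~{70 / (g * 100):.0f})")
```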

The economically naive think to themselves, “previous lives were similar to mine, except with different ideas and older technologies.” But consider that, for the entirety of human history, our predecessors lived as close to starvation as the modern-day poorest nations. After controlling for inflation – with today’s dollars – almost everyone made less than $1 per day. 

Jesus once said, “The poor will always be with you”. And yes, a person living in the 1st century would have good reason to believe our species is eternally doomed to absolute poverty. But then the industrial revolution happened!

Take a moment to get your head around this. Extreme poverty has been the fate of 90% of the world’s population since our species emerged on the world scene some 270,000 years ago. Only two centuries ago did this state of affairs change.

Prior to the industrial revolution, all human beings were subject to the Malthusian trap, where resources were a zero-sum game. Per-capita wealth temporarily increased during the Black Death, simply because there were fewer people to “share the pie” with.

Another way to view this same data is by looking at land fertility (since agriculture used to be the only significant economic sector). Ever since the first agricultural revolution in 10,000 BCE, productivity has produced people, not prosperity. 

This was the state of affairs for 99.925% of human history. You are living in a very unusual time.

The Causes of Growth

So… how did our species escape the Malthusian trap?

Escape was not guaranteed. It didn’t happen before 1800. And it didn’t happen uniformly: it began as a phenomenon of the West.

Why is there “divergence, big time”? What causes growth to succeed or fail? To answer this, we need a theory of the causes of growth. 

As a first pass, people use cultural knowledge and physical tools to produce goods and services. The Solow Model is used to model these immediate causes of growth. But as we arguably learned from communism, bad institutions can impede incentives to produce. While harder to measure, institutional structure orchestrates economic production. Finally, institutions do not arise ex nihilo; rather, they too are (slowly) molded by the forces of history, geography, and so on. Our account of growth thus features three tiers of causes.
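Here is a minimal Solow-style simulation, with invented parameter values. Holding technology fixed, capital accumulation alone converges to a steady state; sustained growth has to come from the technology term (ideas, institutions):

```python
def solow_output(A=1.0, alpha=0.3, s=0.25, d=0.05, L=1.0, K=1.0, years=200):
    """Iterate Y = A * K^alpha * L^(1-alpha) with capital law K' = K + s*Y - d*K."""
    Y = A * K ** alpha * L ** (1 - alpha)
    for _ in range(years):
        K = K + s * Y - d * K
        Y = A * K ** alpha * L ** (1 - alpha)
    return Y

print(round(solow_output(A=1.0), 3))  # converges to a steady state (~1.99)
print(round(solow_output(A=1.5), 3))  # only better technology lifts the ceiling (~3.56)
```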

You can see the effect of institutions clearly in satellite photos of the Korean peninsula.

Most people see this picture and think, “wow, communism really made its citizenry poor”. But that is fuzzy thinking. In 1945, Korea was a single country, with the same (quite impoverished) economy. Sure, North Korea did become somewhat poorer, but the much larger effect was that South Korea became prosperous.

The field of development economics studies what causes some nations to catch the growth train, and others to miss it (and what can be done).

GDP vs Wealth

If you could only choose two economic measures to track, which would they be?

  • A person’s finances cannot be completely described by income; it also helps to know their net worth.
  • A company’s finances cannot be completely described by profits; it also helps to know its balance sheet.
  • A country’s finances cannot be completely described by GDP; it also helps to know its total wealth.

Imagine a partially full bathtub, with some water entering and some leaving. In the system-dynamics jargon of stocks and flows: GDP is an inflow; wealth is a stock.

After it has been bombed, a city’s GDP often increases. Why? The damage sustained during warfare is a destruction of wealth: a large outflow. And by the law of diminishing marginal utility, replacing lost capital is more valuable than making even more new stuff. GDP alone paints a rosy picture; if you also track wealth, the true cost of war is much easier to grasp.

It is often useful to extend our mental model to include the environment. GDP relies on the extraction of (often non-renewable) resources from the Earth. In this sense, GDP is not just an inflow to wealth, but also an outflow from the stock of natural resources.
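A stocks-and-flows sketch of the extended bathtub (every number invented): GDP flows into the wealth stock, while depreciation and resource extraction flow out.

```python
# Wealth is a stock; GDP is an inflow; depreciation and resource
# extraction are outflows. All numbers are invented for illustration.
wealth, resources = 100.0, 500.0
depreciation_rate, extraction_per_gdp = 0.04, 0.5

for year in range(3):
    gdp = 20.0
    wealth += gdp - depreciation_rate * wealth  # inflow minus capital decay
    resources -= extraction_per_gdp * gdp       # production depletes the Earth
    print(year, round(wealth, 1), round(resources, 1))
```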

Exactly how large is the stock of natural resources? Your answer will likely affect your judgment of the morality of the capitalistic enterprise. 

GDP vs Welfare

One way of interpreting policy decisions is that they ought to maximize a single variable: societal welfare. But what is this variable sensitive to?

Welfare is a multidimensional measure; dimensions beyond material consumption (e.g., health and subjective happiness, discussed below) arguably should be included in any final analysis.

Importantly, GDP tends to correlate with immaterial factors of welfare. As countries become more affluent, for example, they tend to invest more in health care (and vice versa). The correlation between GDP and life expectancy is very strong, with causal arrows plausibly running in both directions.

Positive psychology has been directly measuring subjective life satisfaction for many years now. Enduring low standards of living is unpleasant! 

In plots of life satisfaction against income, GDP per capita is typically log-transformed. When you are very poor, becoming wealthier matters a lot; when you are rich, less so.
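The log transform encodes exactly this diminishing-returns intuition: equal ratios of income add equal increments of log-income.

```python
import math

# Going from $1k to $2k and from $50k to $100k both add ~0.69 to
# log-income, which is why life-satisfaction plots use a log x-axis.
for income in [1_000, 2_000, 50_000, 100_000]:
    print(income, round(math.log(income), 2))
```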

I used to think consequentialist thinking was confined to 19th century philosophical traditions… and then I learned economics.

Five Concerns

I’ll mention five concerns often levied against free-market economics generally, and productivity specifically.

  1. Unsustainability. Exponential growth means exponential depletion. It cannot be sustained.
  2. Materialism. Developed nations produce much more than they need, so we lionize gratuitous consumption to increase demand.
  3. Specialization. Division of labor produces more wealth. Yet as this process intensifies, our mental lives become increasingly banal.
  4. Inequality. Capitalism is extinguishing absolute poverty, but at the same time exacerbating relative inequality. This is unfair, and socially toxic.
  5. Monoculture. The West got rich first, and abused its power first by direct colonialist enslavement, and later by sneakily-abstract trade deals.

I will defer an evaluation of these charges for now; I simply felt it useful to present this incomplete list. 

Takeaways

This post discussed eight topics:

  1. GDP is “the market value of all finished goods and services produced within a country in a year”
  2. There are many ways to disaggregate GDP, including GNI (its income-based counterpart)
  3. After you adjust for inflation, Nominal GDP becomes Real GDP. After you adjust for the Balassa-Samuelson effect, Real GDP can facilitate between-country comparisons.
  4. Before the industrial revolution, our species was stuck in a Malthusian trap, where productivity produced people not prosperity.
  5. Physical capital, human capital, and ideas conspire to create wealth. More distal influences include institutions: property rights, reliable courts, and so on.
  6. Growth is an inflow into a country’s wealth. It is important to recognize that growth depletes natural resources.  
  7. Welfare (aggregate life satisfaction) requires more than material comfort. But note! GDP strongly correlates with life expectancy and happiness.
  8. There are five concerns often voiced towards GDP talk. They are unsustainability, materialism, specialization, inequality, and monoculture.

[Excerpt] The Moral/Conventional Distinction

Part Of: Demystifying Ethics sequence
Excerpt From: Kelly et al (2007). Harm, affect, and the moral/conventional distinction.
Content Summary: 800 words, 8 min read.

Commonsense intuition seems to recognize a distinction between two quite different sorts of rules governing behavior, namely moral rules and conventional rules. Prototypical examples of moral rules include those prohibiting killing or injuring other people, stealing their property, or breaking promises. Prototypical examples of conventional rules include those prohibiting wearing gender-inappropriate clothing (e.g., men wearing dresses), licking one’s plate at the dinner table, and talking in a classroom when one has not been called on by the teacher.

Starting in the mid-1970s, a number of psychologists, following the lead of Elliott Turiel, have argued that the moral/conventional distinction is both psychologically real and psychologically important. 

Though the details have varied from one author to another, the core ideas about moral rules are as follows:

  • Moral rules have objective, prescriptive force; they are not dependent on the authority of any individual or institution.
  • Moral rules hold generally, not just locally; they not only proscribe behavior here and now, they also proscribe behavior in other countries and at other times in history.
  • Violations of moral rules typically involve a victim who has been harmed, whose rights have been violated, or who has been subject to an injustice.
  • Violations of moral rules are typically more serious than violations of conventional rules. 

By contrast, the following are offered as the core features of conventional rules:

  • Conventional rules are arbitrary, situation-dependent rules that facilitate social coordination and organization; they do not have an objective, prescriptive force, and they can be suspended or changed by an appropriate authoritative individual or institution. 
  • Conventional rules are often local; the conventional rules that are applicable in one community often will not apply in other communities or at other times in history.
  • Violations of conventional rules do not involve a victim who has been harmed, whose rights have been violated, or who has been subject to an injustice.
  • Violations of conventional rules are typically less serious than violations of moral rules. 

To make the case that the moral/conventional distinction is both psychologically real and important, Turiel and his associates developed an experimental paradigm in which subjects are presented with prototypical examples of moral and conventional rule transgressions and asked a series of questions aimed at eliciting their judgment of such actions. 

Early findings using this paradigm indicated that subjects’ responses to prototypical moral and conventional transgressions do indeed differ systematically. Transgressions of prototypical moral rules were judged to be more serious, the wrongness of the transgression was not ‘authority dependent’, the violated rule was judged to be general in scope, and the judgments were justified by appeal to harm, justice or rights. Transgressions of prototypical conventional rules, by contrast, were judged to be less serious, the rules themselves were authority dependent and not general in scope, and the judgments were not justified by appeal to harm, justice, and rights. 

During the last 25 years, much the same pattern has been found in an impressively diverse set of subjects ranging in age from toddlers (as young as 3.5yo) to adults, with a substantial array of different nationalities and religions. The pattern has also been found in children with a variety of cognitive and developmental abnormalities, including autism. Much has been made of the intriguing fact that the pattern is not found in psychopaths or in children exhibiting psychopathic tendencies. 

What conclusions have been drawn from this impressive array of findings? The clear majority of investigators in this research tradition would likely endorse something like the following collection of conclusions:

  1. In moral/conventional task experiments, subjects typically exhibit one of two signature response patterns. Moreover, these patterns are what philosophers of science call nomological clusters – there is a strong (‘lawlike’) tendency for the members of the cluster to occur together. 
  2. Transgressions involving harm, justice, or rights evoke the signature moral pattern. Transgressions that do not involve these things evoke the signature conventional pattern.
  3. The regularities described here are pan-cultural, and emerge quite early in development.

Kevin’s Addendum

The paper goes on to criticize the moral/conventional distinction as not well supported by the data. The above introduction is thus notable for the clarity of its steel-manning. Their two biggest complaints are:

  1. Experiments designed to measure the distinction are based on “schoolyard dilemmas”; experiments with more true-to-life moral scenarios manifest the effect less robustly.
  2. The theory is highly predicated on the progressive conceit that care/harm is the only moral dimension that matters; but cross-cultural analyses have revealed many moral taste buds.

My personal betting money is that the research tradition will survive these objections, as it responds and re-engineers itself in the coming decades.