# Epistemic vs Aleatory Uncertainty

Part Of: Bayesianism series
Content Summary: 2300 words, 23 min read
Epistemic Status: several of these ideas are not distillations, but rather products of my own mind. Recommend a grain of salt.

The Biology of Uncertainty

In the reinforcement learning literature, there exists a bedrock distinction of exploration vs exploitation. A rat can either search for a new food source, or continue mining calories from his current stash. There is risk in exploration (what if you don’t find anything better?), and often diminishing returns (if you’re confined to 2 miles from your sleeping grounds, there’s only so much territory that needs to be explored). But without exploration, you hazard large opportunity costs and your food supply becomes quite fragile.

Exploitation can be conducted unconsciously. You simply need nonconscious modules to track the rate of returns provided by your food site. These devices will alarm if the food source degrades, but otherwise don’t bother you much. In contrast, exploration engages an enormous amount of cognitive resources: your cognitive map (neural GPS), action plans, world-beliefs, causal inference. Exploration is about learning, and as such requires consciousness. Exploration is paying attention to the details.

Exploration will tend to produce probability matching behaviors: your actions are in proportion to your action value estimates. Exploitation tends to produce maximizing behaviors: you always choose the action estimated to produce the most value.
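
To make the two choice rules concrete, here is a minimal Python sketch; the action names and value estimates are invented for illustration, not drawn from any model of rat behavior.

```python
import random

# Illustrative action-value estimates (arbitrary numbers for this sketch)
action_values = {"forage_north": 2.0, "forage_south": 5.0, "stay_at_stash": 3.0}

def maximize(values):
    """Exploitation: always pick the action with the highest estimated value."""
    return max(values, key=values.get)

def probability_match(values):
    """Exploration-flavored choice: sample actions in proportion to their estimated value."""
    actions = list(values)
    weights = [values[a] for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

print(maximize(action_values))           # always 'forage_south'
print(probability_match(action_values))  # 'forage_south' about half the time, the rest in proportion
```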

Statistics and Controversy

Everyone agrees that probability theory is a profoundly useful tool for understanding uncertainty. The problem is, statisticians cannot agree on what probability means. Frequentists insist on interpreting probability as relative frequency; Bayesians interpret probability as degree of confidence. Frequentists use random variables to describe data; Bayesians are comfortable also using them to describe model parameters.

We can reformulate the debate as between two conceptions of uncertainty. Epistemic uncertainty is the subjective Bayesian interpretation, the kind of uncertainty that can be reduced by learning. Aleatory uncertainty is the objective Frequentist stuff, the kind of uncertainty you accept and work around.

Philosophical disagreements often have interesting implications. For example, you might approach deontological (rule-based) and consequentialist (outcome-based) ethical theories as a winner-take-all philosophical slugfest. But Joshua Greene has shown that both camps express unique circuitry in the human mind: every human being experiences both ethical intuitions during moral dilemmas (but at different intensities and with different activation profiles).

The sociological fact of persistent philosophical disagreement sometimes reveals conflicting intuitions within human nature itself. Controversy reification is a thing. Is it possible this controversy within philosophy of statistics suggests a tension buried in human nature?

I submit these rival definitions of uncertainty are grounded in the exploration and exploitation repertoires. Exploratory behavior treats unpredictability as ignorance to be overcome; exploitative behavior treats unpredictability as noise to be accommodated. All vertebrates possess these two ways of approaching uncertainty. Human philosophers and statisticians are rationalizing and formalizing truly ancient intuitions.

Cleaving Nature At Its Joints

Most disagreements are trivial. Nothing biologically significant hinges on the fact that some people prefer the color blue, and others green. Do frequentist/Bayesian intuitions resemble blue/green, or deontological/consequential? How would you tell?

Blue-preferring statements don’t seem systematically different from green-preferring statements. But intuitions about epistemic vs aleatory uncertainty do systematically differ. The psychological data presented in Brun et al (2011) is very strong on this point.

Statistical concepts are often introduced with ridiculously homogeneous events, like a coin flip. It is essentially impossible for a neurotypical human to perfectly predict the outcome of a coin flip (which is determined by the arcane minutiae of muscular spasms, atmospheric friction, and chaos theory). Coin flips are also perceived as interchangeable: the location of the flip, the atmosphere of the room, the force you apply – none of these seem to disturb the outcome of a fair coin. In contrast, epistemic uncertainty is perceived in single-case, heterogeneous events, such as the proposition "Is it true that Osama Bin Laden is inside the compound?"

As mentioned previously, these uncertainties elicit different kinds of information search (causal mental models versus counting), linguistic markers (“plausible” vs “chance”), and even different behaviors (exploration vs exploitation).

People experience epistemic uncertainty as more aversive. People prefer to guess the roll of a die, the sex of a child, and the outcome of a horse race before the event rather than after. Before a coin flip, we experience aleatory uncertainty; if you flip the coin and hide the result, our psychology switches to a more uncomfortable sense of epistemic uncertainty. We are often less willing to bet money when we experience significant epistemic uncertainty.

These epistemic discomforts of course make sense from a sociological perspective: if we sit under epistemic uncertainty, we are more vulnerable to being exploited – both materially by betting, and reputationally by appearing ignorant.

Several studies have found that although participants tend to be overconfident when assessing the probability that their specific answers are correct, they tend to be underconfident when later asked to estimate the proportion of items that they had answered correctly. While the particular mechanism driving this phenomenon is unclear, the pattern suggests that evaluations of epistemic vs aleatory uncertainty rely on distinct information, weights, and/or processes.

People can be primed to switch their representation. If you advise a person to "think like a statistician", they will invariably shift toward the aleatory, extensional framing. The reverse switch shows up when drawing balls from an urn: once a ball has been drawn but its color is not shown, people switch from the Outside View (extensional) to the Inside View (intensional).

Other Appearances of the Distinction

Perhaps the most famous expression of the distinction comes from Donald Rumsfeld in 2002:

As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

You can also find the distinction hovering in Barack Obama’s retrospective on the decision to raid a suspected OBL compound:

• The question of whether Osama Bin Laden was within the compound is an unknown fact – an epistemic uncertainty.
• The question of whether the raid would be successful is an outcome of a distribution – an aleatory uncertainty.

A related distinction, Knightian uncertainty, comes from the economist Frank Knight. “Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated…. The essential fact is that ‘risk’ means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomena depending on which of the two is really present and operating…. It will appear that a measurable uncertainty, or ‘risk’ proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all.” The distinction is well illustrated by the Ellsberg paradox: people prefer to bet on an urn with a known 50/50 mix of red and black balls rather than on an urn whose mix is unknown, even though there is no basis for expecting one urn to pay out more than the other.

As Hsu et al (2005) demonstrates, people literally use different systems in their brains to process these two gambles. When the gamble's structure is known, the reward processing centers (the basal ganglia) are used. When the structure is unknown, fear processing centers (the amygdala nuclei) are employed instead.

Mousavi & Gigerenzer (2017) use Knightian uncertainty to defend the rationality of heuristics in decision making. Nassim Taleb’s theory of “fat tailed distributions” is often interpreted as an affirmation of Knightian uncertainty, a view he rejects.

Towards a Formal Theory

For some, Knightian uncertainty has been a rallying cry driven by discontents with orthodox probability theory. It is associated with efforts at replacing its Kolmogorov foundations. Intuitionistic probability theory, replacing classical axioms with computationally tractable alternatives, is a classic example of this kind of work. But as Weatherson (2003) notes, other alternatives exist:

It is a standard claim of modern Bayesian epistemology that reasonable epistemic states should be representable by probability functions. There have been a number of authors who have opposed this claim. For example, it has been claimed that epistemic states should be representable by Zadeh’s fuzzy sets, Dempster and Shafer’s evidence functions, Shackle’s potential surprise functions, Cohen’s inductive probabilities or Schmeidler’s non-additive probabilities. A major motivation of these theorists has been that in cases where we have little or no evidence for or against p, it should be reasonable to have low degrees of belief in each of p and not-p, something apparently incompatible with the Bayesian approach.

Evaluating the validity of these heterodoxies is beyond the scope of this article. For now, let me state that it may be possible to simply accommodate the epistemic/aleatory distinction within probability theory itself. As Andrew Gelman claims:

The distinction between different sources of uncertainty can in fact be encoded in the mathematics of conditional probability. So-called Knightian uncertainty can be modeled using the framework of probability theory.

You can arguably see the distinction in the statistical concept of Bayesian optimality. For tasks with low aleatory uncertainty (e.g., classification on high-res images), classification performance can approach 100%. But other tasks with higher aleatory uncertainty (e.g., predicting future stock prices), model performance asymptotically approaches a much lower bound.

Recall the Bayesian interpretation of learning:

Learning is a plausibility calculus, where new data pays down uncertainty. What is uncertainty? Uncertainty is how “loosely held” our beliefs are. The more data we have, the less uncertain we must be, and the sharper the peaks in our belief distribution.

We can interpret learning as asymptotic distribution refinement: there is some raw noise profile beyond which our beliefs cannot sharpen.
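
Here is a minimal sketch of that idea using a Beta-Bernoulli coin model (the 60% bias and the sample sizes are arbitrary): epistemic uncertainty about the coin's bias shrinks as data accumulates, while the aleatory noise of any single flip stays fixed.

```python
import math

def beta_posterior_sd(heads, tails, prior_a=1.0, prior_b=1.0):
    """Posterior standard deviation of a coin's bias under a Beta(prior_a, prior_b) prior."""
    a, b = prior_a + heads, prior_b + tails
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# Epistemic uncertainty (about the bias) pays down roughly as 1/sqrt(n)...
for n in [10, 100, 1000]:
    heads = int(0.6 * n)  # pretend 60% of flips came up heads
    print(n, round(beta_posterior_sd(heads, n - heads), 4))

# ...but the aleatory uncertainty of the next single flip stays put:
p = 0.6
print("per-flip sd:", round(math.sqrt(p * (1 - p)), 4))  # ~0.49 no matter how much data we collect
```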

Science qua cultural learning, then, is not about certainty, not about facts etched into stone tablets. Rather, science is about painstakingly paying down epistemic uncertainty: sharpening our hypotheses to be “as simple as possible, but no simpler”.

Inside vs Outside View

The epistemic/aleatory distinction seems to play an underrated role in forecasting. Consider the inside vs outside view, first popularized by Kahneman & Lovallo (1993):

Two distinct modes of forecasting were applied to the same problem in this incident. The inside view of the problem is the one that all participants adopted. An inside view forecast is generated by focusing on the case at hand, by considering the plan and the obstacles to its completion, by constructing scenarios of future progress, and by extrapolating current trends. The outside view is the one that the curriculum expert was encouraged to adopt. It essentially ignores the details of the case at hand, and involves no attempt at detailed forecasting of the future history of the project. Instead, it focuses on the statistics of a class of cases chosen to be similar in relevant respects to the present one. The case at hand is also compared to other members of the class, in an attempt to assess its position in the distribution of outcomes for the class. …

Tetlock (2015) describes how superforecasters tend to start with the outside view:

It’s natural to be drawn to the inside view. It’s usually concrete and filled with engaging detail we can use to craft a story about what’s going on. The outside view is typically abstract, bare, and doesn’t lend itself so readily to storytelling. But superforecasters don’t bother with any of that, at least not at first.

Suppose I pose to you the following question. “The Renzettis live in a small house at 84 Chestnut Avenue. Frank Renzetti is forty-five and works as a bookkeeper for a moving company. Mary Renzetti is thirty-five and works part-time at a day care. They have one child, Tommy, who is five. Frank’s widowed mother, Camila, also lives with the family. Given all that information, how likely is it that the Renzettis have a pet?”

A superforecaster knows to start with the outside view – in this case, the base rate. The first thing they would do is find out what percentage of American households own a pet. Starting from that probability, you can then slowly incorporate the idiosyncrasies of the Renzettis into your answer.
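
A small sketch of that recipe in odds form; the 60% base rate and the likelihood ratios attached to the Renzetti details are invented placeholders, not real statistics.

```python
def update_odds(prior_prob, likelihood_ratio):
    """Bayes in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Outside view: an assumed base rate of pet ownership among American households.
base_rate = 0.6  # illustrative figure, not the real statistic

# Inside view: each case-specific detail nudges the odds by some likelihood ratio
# (invented ratios; e.g. "young child in the home" counts mildly in favor).
p = base_rate
for lr in [1.2, 0.9]:
    p = update_odds(p, lr)
print(round(p, 2))  # base rate, gently adjusted by the idiosyncrasies
```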

At first glance, it is difficult to square this recommendation with how rats learn; a rat's ordering is precisely backwards, starting from its own individual experience.

Fortunately, the tension disappears when you remember the human faculty of social learning. In contrast with rats, we don’t merely form beliefs from experience; we also ingest mimetic beliefs – those which we directly download from the supermind of culture. The rival fields of personal epistemology and social epistemology are yet another example of controversy reification.

This, then, is why Tetlock’s advice tends to work well in practice1:

On some occasions, for some topics, humans cannot afford to engage in individual epistemic learning (see the evolution of faith). But for important descriptive matters, it is often advisable to start with a socially accepted position and “mix in” your own personal insights and perspectives (developing the Inside View).

When I read complaints about the blind outside view, what I hear is a simple defense of individual learning.

Footnotes

1. Even this individual/social distinction is not quite precise enough. There are, in fact, two forms of social learning. Qualitative social learning is learning from speech generated by others; quantitative social learning is learning from maths and data curated by others. Figuring out how the quantitative/qualitative data intake mechanisms work is left as an exercise to the reader 😉

References

• Brun et al (2011). Two Dimensions of Uncertainty
• Hsu et al (2005). Neural Systems Responding to Degrees of Uncertainty in Human Decision-Making
• Kahneman & Lovallo (1993). Timid choices and bold forecasts: A cognitive perspective on risk taking
• Mousavi & Gigerenzer (2017). Heuristics are Tools for Uncertainty
• Tetlock (2015). Superforecasting
• Weatherson (2003). From Classical to Intuitionistic Probability.

# GDP as Standard of Living

Part Of: Economics sequence
Content Summary: 2500 words, 25 min read

This will be an (embarrassingly high-level) overview of macroeconomics. This post is intended as a framework, a jumping-off point for more detailed analyses.

Introduction

During the Great Depression, Americans had a vague sense that it was harder to keep a job, and harder to pay your bills. But no one really knew how long it would last, or if it could be brought to a merciful end. Governments tried several policy solutions, but it was very hard to tell whether their policies were helping or hurting. Governments were making decisions on the basis of such sketchy data as stock price indices, freight car loading, and incomplete indices of industrial production.

As the problem worsened, it weighed increasingly on the public mind. For the first time, the economy entered the public lexicon as a noun. And the situation prompted governments to get more serious about economic data collection. In order to forecast economic outcomes, it pays to get quantitative about the present. What is the state of the economy?

To answer this, we might endeavor to calculate the value of all the stuff in the United States.

Imagine going through your living space, taking every possession and entering its value into a spreadsheet. Imagine doing this, but for all goods in every house, every apartment, every place of business, every square meter of pavement (and services too, like your last haircut).

The calculation sounds daunting. So, why not just keep track of the stuff you bought this year? Rather than calculating wealth (net worth), it’s often simpler to calculate spending. Just as a wealthier person spends more, a wealthier nation (like the US) most plausibly produces more every year.

The formal definition:

Gross Domestic Product (GDP) is the market value of all finished goods and services produced within a country in a year.

Now, three facets of this definition are worth keeping in mind:

• finished: A finished good is a good that will not be sold again as part of another good. Steel, engines, and flour often serve as examples of intermediate goods: raw materials that are repackaged into final goods like bicycles, cars, and bread. But if a customer buys eggs to make an omelette, those eggs still count as final goods, since the omelette will not be put up for sale again.
• produced: GDP only counts new goods and services. A used car sold this year does not count towards GDP; but a new car does.
• within a country: exports count, imports count against.

GDP is how economists measure three very important aspects of human societies (see the sketch after this list):

• Standard of living is GDP per capita.
• Productivity is GDP per hour worked.
• Growth is GDP change over time.
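
A minimal sketch of these three measures, using invented figures rather than actual statistics for any country:

```python
# Illustrative (not actual) figures for a hypothetical country
gdp = 21_000_000_000_000        # market value of finished goods & services, in dollars
population = 330_000_000
hours_worked = 260_000_000_000
gdp_last_year = 20_500_000_000_000

standard_of_living = gdp / population           # GDP per capita
productivity = gdp / hours_worked               # GDP per hour worked
growth = (gdp - gdp_last_year) / gdp_last_year  # GDP change over time

print(f"standard of living: ${standard_of_living:,.0f} per person")
print(f"productivity:       ${productivity:.0f} per hour")
print(f"growth:             {growth:.1%}")
```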

Standard of living matters. In the DR Congo, people earn on average $500 per year. That’s only six baskets of stuff… for an entire year. In Mexico, it’s $21,000, or 178 baskets. In the United States, it’s $67,000, or 545 baskets worth of stuff.

GDP Categories

In a personal budget, it often helps to group your spending habits by categories – so too for nations. GDP is often decomposed into four components:

1. Consumption. Private expenditures, including durable goods, non-durable goods, and services.
2. Investment. Does not include financial products (which are instead considered saving).
3. Government Consumption. All government expenditures on final goods/services, plus its investments.
4. Net Exports. Exports have outbound value, imports have inbound value. Imports detract from export receipts; Net Exports = Exports – Imports.

To understand what drives changes in GDP, other disaggregations are possible. For example,

• Partitioning by State or Province is useful for interrogating geographical information.
• Partitioning by Industry is useful for flagging problematic industries.

There is a related notion of Gross National Income (GNI). The relationship between expenditures and income is something like Newton’s Third Law: “for every action, there is an equal and opposite reaction”. In theory, GDP and GNI should be equivalent; in practice they sometimes come slightly apart (for complicated reasons). GNI thus provides a complementary way of measuring changes in wealth. There are many ways to disaggregate GNI; one of the more popular operationalizations is to consider four factors: { employee compensation, rent, interest, and profit }.

Towards Real GDP

During the hyperinflation era of Zimbabwe, the price of a sheet of toilet paper went to 417 Z-dollars. Surely, we don’t want to confuse the act of printing more money with producing more valuable goods and services.

We’ve all heard our grandparents say, “when I was a kid, that cost a quarter”. But such memories conflate nominal versus real prices. If you control for inflation, some goods (e.g. movie tickets) have kept roughly the same price; other goods (e.g. electricity) have become easier to purchase. Yet inflation makes both feel more expensive. In general, money illusion denotes our predisposition to focus on nominal rather than real prices. People positively revolt when their nominal salary is cut, but rarely notice if their real salary is cut (e.g. if inflation outpaces their raise).

To compute real GDP over time, simply fix your dollar value to a single year (e.g. 2020 dollars). This allows for comparison between real GDP and nominal GDP. In the United States before 1980, nominal growth was about 7.5% per year, whereas real growth was about 3.5%. Economic data like this also showcases an important fact: when real GDP is negative for two consecutive quarters, that is the definition of a recession (you can’t see that as clearly in a dataset that aggregates growth by year).

The cost of an iPhone is $700 in the United States, and $700 in India. The cost of a haircut is $20 in the United States, and $1 in India. This is the Balassa-Samuelson effect. Why should it exist? If iPhones were sold for less in India, more people would purchase iPhones from India and have them shipped to their house. This process is called arbitrage, and it guarantees the Law of One Price. However, the Law of One Price only applies to tradable goods: you cannot ship a haircut overseas.

Before adjusting for this effect, you might conclude that the average income of a person living in India is 33 times smaller than that of someone living in the United States. After the adjustment, the real difference becomes visible: only 10 times less purchasing power.

The Significance of Growth

For most developed nations, GDP doesn’t increase linearly (say, an additional $10b per year), but exponentially (e.g. 2% more per year). Just as exponential growth in epidemics can lead to surprisingly horrendous outcomes, exponential growth in economies can lead to surprisingly affluent outcomes.
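
A small sketch of why that difference matters over long horizons; the starting GDP and the 2% rate are arbitrary.

```python
gdp = 1_000_000_000_000   # arbitrary starting GDP
linear_add = 0.02 * gdp   # add a fixed $20b every year
rate = 0.02               # versus compounding 2% per year

linear, exponential = gdp, gdp
for year in range(100):
    linear += linear_add
    exponential *= 1 + rate

print(f"after 100 years, linear:      {linear / gdp:.1f}x")       # 3.0x
print(f"after 100 years, exponential: {exponential / gdp:.1f}x")  # ~7.2x
```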

The economically naive think to themselves, “previous lives were similar to mine, except with different ideas and older technologies.” But consider that, for the entirety of human history, our predecessors lived as close to starvation as the modern-day poorest nations. After controlling for inflation – in today’s dollars – almost everyone made less than $1 per day.

Jesus once said, “The poor will always be with you.” And yes, a person living in the 1st century would have had good reason to believe our species is eternally doomed to absolute poverty. But then the industrial revolution happened!

Take a moment to get your head around this. Extreme poverty has been the fate of 90% of the world’s population since our species emerged on the world scene some 270,000 years ago. Only two centuries ago did this state of affairs change.

Prior to the industrial revolution, all human beings were subject to the Malthusian trap, where resources were a zero-sum game. Wealth temporarily increased during the Black Death, simply because there were fewer people to “share the pie” with. Another way to view this same data is by looking at land fertility (since agriculture used to be the only significant economic sector). Ever since the first agricultural revolution in 10,000 BCE, productivity has produced people, not prosperity. This was the state of affairs for 99.925% of human history. You are living in a very unusual time.

The Causes of Growth

So… how did our species escape the Malthusian trap? Escape was not guaranteed. It didn’t happen before 1800. And it also didn’t happen uniformly; it began as a phenomenon of the West. Why is there “divergence, big time”? What causes growth to succeed or fail?

To answer this, we need a theory of the causes of growth. As a first pass, people use cultural knowledge and physical tools to produce goods and services. The Solow Model is used to model these immediate causes of growth. But as we arguably learned from communism, bad institutions can impede incentives to produce. While harder to measure, institutional structure orchestrates economic production. Finally, institutions do not derive ex nihilo; rather, they too are (slowly) molded by the forces of history, geography, and so on. Our account of growth thus features three tiers of causes.

You can see the effect of institutions clearly in satellite photos of the Korean peninsula. Most people see this picture and think, “wow, communism really made its citizenry poor”. But that is fuzzy thinking. In 1945, Korea was a single country, with the same (quite impoverished) economy. Sure, North Korea did become somewhat poorer, but the much larger effect was that South Korea became prosperous. The field of development economics studies what causes some nations to catch the growth train, and others to miss it (and what can be done).

GDP vs Wealth

If you could only choose two economic measures to track, which would they be?

• A person’s finances cannot be completely described by income; it also helps to know your net worth.
• A company’s finances cannot be completely described by profits; it also helps to know your balance sheet.
• A country’s finances cannot be completely described by GDP; it also helps to know your total wealth.

Imagine a partially-full bathtub, with some water entering and some leaving. In the system dynamics jargon of stocks and flows: GDP is an inflow, wealth is a stock.

After it has been bombed, a city’s GDP often increases. Why? The damage sustained during warfare is destruction of wealth: a large outflow.

Yet by the law of diminishing marginal utility, it is often easier to replace destroyed capital than to make even more stuff. While GDP alone gives you a rosy picture, if you also track wealth you will have an easier time grasping the true cost of war.

It is often useful to extend our mental model to include the environment. GDP relies on the extraction of (often non-renewable) resources from the Earth. In this sense, GDP is not just an inflow to wealth, but also an outflow of natural resources. Exactly how large is the stock of natural resources? Your answer will likely affect your judgment of the morality of the capitalistic enterprise.

GDP vs Welfare

One way of interpreting policy decisions is that they ought to maximize a single variable: societal welfare. But what is this variable sensitive to? Welfare is a multidimensional measure; dimensions beyond material comfort arguably should be included in any final analysis.

Importantly, GDP tends to correlate with immaterial factors of welfare. As countries become more affluent, for example, they tend to invest more in health care (and vice versa). The correlation (bidirectional causal link) between GDP and life expectancy is very strong.

Positive psychology has been directly measuring subjective life satisfaction for many years now. Enduring low standards of living is unpleasant! In plots of life satisfaction against GDP per capita, GDP is typically log-transformed: when you are very poor, becoming more wealthy matters a lot; when you are rich, less so.

I used to think consequentialist thinking was confined to 19th century philosophical traditions… and then I learned economics.

Five Concerns

I’ll mention five concerns often levied against free-market economics generally, and productivity specifically.

1. Unsustainability. Exponential growth means exponential depletion. It cannot be sustained.
2. Materialism. Developed nations produce much more than they need; so we lionize gratuitous consumption to increase demand.
3. Specialization. Division of labor produces more wealth. Yet as this process intensifies, our mental lives become increasingly banal.
4. Inequality. Capitalism is extinguishing absolute poverty, but at the same time exacerbating relative inequality. This is unfair, and socially toxic.
5. Monoculture. The West got rich first, and abused its power first by direct colonialist enslavement, and later by sneakily-abstract trade deals.

I will defer an evaluation of these charges for now; I simply felt it useful to present this incomplete list.

Takeaways

This post discussed eight topics:

1. GDP is “the market value of all finished goods and services produced within a country in a year”.
2. There are many ways to disaggregate GDP, including GNI (the equivalent, income-based variant).
3. After you adjust for inflation, nominal GDP becomes real GDP. After you adjust for the Balassa-Samuelson effect, real GDP can facilitate between-country comparisons.
4. Before the industrial revolution, our species was stuck in a Malthusian trap, where productivity produced people, not prosperity.
5. Physical capital, human capital, and ideas conspire to create wealth. More distal influences include institutions, such as property rights and reliable courts.
6. Growth is an inflow into a country’s wealth. It is important to recognize that growth depletes natural resources.
7. Welfare (aggregate life satisfaction) requires more than material comfort. But note! GDP strongly correlates with life expectancy and happiness.
8. There are five concerns often voiced towards GDP talk: unsustainability, materialism, specialization, inequality, and monoculture.

# [Excerpt] The Moral/Conventional Distinction

Part Of: Demystifying Ethics sequence
Excerpt From: Kelly et al (2007). Harm, affect, and the moral/conventional distinction
Content Summary: 800 words, 8 min read

Commonsense intuition seems to recognize a distinction between two quite different sorts of rules governing behavior, namely moral rules and conventional rules. Prototypical examples of moral rules include those prohibiting killing or injuring other people, stealing their property, or breaking promises. Prototypical examples of conventional rules include those prohibiting wearing gender-inappropriate clothing (e.g., men wearing dresses), licking one’s plate at the dinner table, and talking in a classroom when one has not been called on by the teacher.

Starting in the mid-1970s, a number of psychologists, following the lead of Elliott Turiel, have argued that the moral/conventional distinction is both psychologically real and psychologically important. Though the details have varied from one author to another, the core ideas about moral rules are as follows:

• Moral rules have objective, prescriptive force; they are not dependent on the authority of any individual or institution.
• Moral rules hold generally, not just locally; they not only proscribe behavior here and now, they also proscribe behavior in other countries and at other times in history.
• Violations of moral rules typically involve a victim who has been harmed, whose rights have been violated, or who has been subject to an injustice.
• Violations of moral rules are typically more serious than violations of conventional rules.

By contrast, the following are offered as the core features of conventional rules:

• Conventional rules are arbitrary, situation-dependent rules that facilitate social coordination and organization; they do not have an objective, prescriptive force, and they can be suspended or changed by an appropriate authoritative individual or institution.
• Conventional rules are often local; the conventional rules that are applicable in one community often will not apply in other communities or at other times in history.
• Violations of conventional rules do not involve a victim who has been harmed, whose rights have been violated, or who has been subject to an injustice.
• Violations of conventional rules are typically less serious than violations of moral rules.

To make the case that the moral/conventional distinction is both psychologically real and important, Turiel and his associates developed an experimental paradigm in which subjects are presented with prototypical examples of moral and conventional rule transgressions and asked a series of questions aimed at eliciting their judgment of such actions. Early findings using this paradigm indicated that subjects’ responses to prototypical moral and conventional transgressions do indeed differ systematically. Transgressions of prototypical moral rules were judged to be more serious, the wrongness of the transgression was not ‘authority dependent’, the violated rule was judged to be general in scope, and the judgments were justified by appeal to harm, justice or rights. Transgressions of prototypical conventional rules, by contrast, were judged to be less serious, the rules themselves were authority dependent and not general in scope, and the judgments were not justified by appeal to harm, justice, and rights.

During the last 25 years, much the same pattern has been found in an impressively diverse set of subjects ranging in age from toddlers (as young as 3.5 years old) to adults, with a substantial array of different nationalities and religions. The pattern has also been found in children with a variety of cognitive and developmental abnormalities, including autism. Much has been made of the intriguing fact that the pattern is not found in psychopaths or in children exhibiting psychopathic tendencies.

What conclusions have been drawn from this impressive array of findings? The clear majority of investigators in this research tradition would likely endorse something like the following collection of conclusions:

1. In moral/conventional task experiments, subjects typically exhibit one of two signature response patterns. Moreover, these patterns are what philosophers of science call nomological clusters – there is a strong (‘lawlike’) tendency for the members of the cluster to occur together.
2. Transgressions involving harm, justice or rights evoke the signature moral pattern. Transgressions that do not involve these things evoke the signature conventional pattern.
3. The regularities described here are pan-cultural, and emerge quite early in development.

Kevin’s Addendum

The paper goes on to criticize the moral/conventional distinction as not well supported by the data. The above introduction is thus notable in its clarity of steel-manning. Their two biggest complaints are:

1. Experiments designed to measure the distinction are based on “schoolyard dilemmas”; those with more true-to-life moral scenarios manifest the effect less robustly.
2. The theory is highly predicated on the progressive conceit that care/harm is the only moral dimension that matters; but cross-cultural analyses have revealed many moral taste buds.

My personal betting money is that the research tradition will survive these objections, as it responds and re-engineers itself in the coming decades.

# Randomized Controlled Trials (RCTs)

Part Of: Causality sequence
See Also: Potential Outcomes model
Content Summary: 2300 words, 23 min read

Counterfactuals and the Control Group

If businesses were affected by one factor at a time, the notion of a control group would be unnecessary: just intervene and see what changes. But in real life, many causal factors can influence an outcome.

Consider click-through rates (CTR) for a website’s promotional campaign. Suppose we want to know how a website redesign will affect CTR. One naive approach would be to simply compare click-through rates before and after the change is deployed. However, even if the CTR did change, there are plenty of potential confounds: other processes that may better explain the change. Can we conclude the redesign decreased click-throughs by 2,000? Only if the other causal factors driving CTR were fixed. Call this assertion ceteris paribus: other things being equal. In practice, can we safely assert nothing else changed from Friday to Saturday? By no means! We have taken no action to ensure these factors are fixed, and any number of wrenches can be thrown at us.

The trick is to create an environment where other causal factors are held constant. The control group is the experimental group, except for the causal factor under investigation. So we create two servers, and ensure the product and its consumers are as similar as possible. So long as the two groups are in fact similar – if the (sometimes unmeasured) causal forces are equivalent – then we can safely make a causal conclusion.

From this data, we might conclude that the website redesign helped, despite the drop in CTR. It is imperative that the experimental group be as similar to the control group as possible. If the control group outcome had been measured on a different day, the weekend effect would have gone unnoticed.

To recap: how would things be different, if something else had occurred? Such counterfactual questions capture something important about causality. But we cannot access such parallel universes. The best you can do is create “clones” (maximally similar groups) in this universe. Counterfactuals are replaced with ceteris paribus; clones with the control group.

The Problem of Selection Bias

The above argument was qualitative. To get clearer on randomized controlled trials (RCTs), it helps to formalize the argument.

Consider two individuals: hearty Alice and frail Bob. We want to know whether or not some drug improves their health. Alice is assigned to the control group, Bob to the treatment group. Despite taking the drug, Bob has a worse health outcome than Alice. While the treatment group is performing worse than the control group, this is not due to drug inefficacy. Rather, the difference in outcome is caused by a difference in group demographics.

Let’s formalize this example. In the Potential Outcomes model, we can represent whether or not a person received the drug as $X = \{ 0, 1\}$, and their health outcome as $Y$. For each person, the individual causal effect (ICE) of the drug is:

$Y_{1,Alice} - Y_{0,Alice} = 5 - 5 = 0$

$Y_{1,Bob} - Y_{0,Bob} = 4 - 3 = 1$

But these potential outcomes are fundamentally unobservable. The only observation we can make is:

$Y_{treatment} - Y_{control} = Y_{1,Bob} - Y_{0,Alice} = 4 - 5 = -1$

Taken at face value, this suggests that Bob’s decision to take the drug is counterproductive. But this conclusion is erroneous. We can express this mathematically with the following device:

$Y_{1,Bob} - Y_{0,Alice} = (Y_{1,Bob} - Y_{0,Bob}) + (Y_{0,Bob} - Y_{0,Alice})$

In other words,

$\text{Difference} = \text{Average Causal Effect} + \text{Selection Bias}$

The difference in outcomes between experimental and control groups is a combination of the causal effect of the treatment, and the differences among the groups before the treatment is applied. To isolate the causal effect, you must minimize selection bias.

Randomization versus Selection Bias

Group differences contaminate causal analyses. How often is observational data contaminated in this way? Quite often. For example, here are a few comparisons between those who have health insurance versus those who do not. People with health insurance are 2.71 years older, have 2.74 more years of education, are 7% more likely to be employed, and have an annual income roughly $60,000 higher. With so many large differences in our data, we should suspect other differences in unobserved dimensions, too.

To minimize selection bias, we need our groups to be as similar as possible. We need to compare apples to apples.

Random allocation is a good way to promote between-group homogeneity, before the causal intervention. We can demonstrate this statistically. Let’s say that the causal effect of a treatment is the same across individuals, $\forall i, Y_{1,i} - Y_{0,i} = \kappa$. Then,

$E_{treatment}[Y_{1,i}] - E_{control}[Y_{0,i}]$

$= E_{treatment}[\kappa + Y_{0,i}] - E_{control}[Y_{0,i}]$

$= \kappa + E_{treatment}[Y_{0,i}] - E_{control}[Y_{0,i}]$

$= \kappa$

where the last step uses the fact that random allocation makes the two groups exchangeable: in expectation, $E_{treatment}[Y_{0,i}] = E_{control}[Y_{0,i}]$, so the selection bias term vanishes.
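
A minimal simulation of this argument, under the same assumption of a constant treatment effect $\kappa = 1$ (the baseline health distribution is invented): random allocation recovers $\kappa$, while letting the frail select into treatment contaminates the naive difference with selection bias.

```python
import random

random.seed(0)
kappa = 1.0       # constant treatment effect, as assumed above
n = 100_000

# Each individual has a baseline health Y0; by assumption, Y1 = Y0 + kappa.
baseline = [random.gauss(5, 1) for _ in range(n)]

# Random allocation: treatment status is independent of baseline health.
random_diff = (
    sum(b + kappa for b in baseline[: n // 2]) / (n // 2)   # treated half
    - sum(baseline[n // 2:]) / (n // 2)                     # control half
)

# Self-selection: the frailest half (lowest baseline) takes the treatment.
ordered = sorted(baseline)
selected_diff = (
    sum(b + kappa for b in ordered[: n // 2]) / (n // 2)
    - sum(ordered[n // 2:]) / (n // 2)
)

print(round(random_diff, 2))    # ≈ 1.0  (recovers kappa)
print(round(selected_diff, 2))  # ≈ -0.6 (kappa plus a negative selection bias)
```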

Consider, for example, the Health Insurance Experiment undertaken by RAND. They randomly divided their sample into four 1000-person groups: a catastrophic plan with essentially zero insurance, and then three treatment groups with variations of different forms of health insurance.

The left column shows means for each attribute (e.g. 56% of the catastrophic group are female). The other columns represent differences between the various treatment groups and the control group (e.g. 56 − 2 = 54% of the deductible group are female). How do we know if random allocation succeeded? We simply compare each group difference with its standard error: if the difference is more than twice the standard error, it is statistically significant.

In these data, only two group differences are statistically significant, and the differences don’t follow obvious patterns, so we can conclude that random allocation appears to have been executed successfully. But it’s worth underscoring that we didn’t perform randomization and then walk away; rather, we empirically validated that our group composition is homogeneous.
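
A sketch of that balance check for a single binary attribute; the proportions and group sizes below are invented in the spirit of the RAND table, not taken from it.

```python
import math

def balance_check(p_treat, n_treat, p_control, n_control):
    """Compare a proportion across groups: difference vs. its standard error."""
    diff = p_treat - p_control
    se = math.sqrt(p_treat * (1 - p_treat) / n_treat
                   + p_control * (1 - p_control) / n_control)
    return diff, se, abs(diff) > 2 * se

# Invented numbers: 54% vs 56% female, ~1000 subjects per group
diff, se, significant = balance_check(0.54, 1000, 0.56, 1000)
print(round(diff, 3), round(se, 3), significant)  # -0.02, ~0.022, False (difference within noise)
```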

(For those wondering, RCT studies like this consistently reveal that health insurance improves financial outcomes, but not health outcomes, for the poor. In general, medicine correlates weakly with health. In the aggregate, the US consumes 50% more medical services than it needs.)

This post doesn’t address null hypothesis significance testing (NHST) which is an analysis technology frequently paired with RCT methodology. There are also extensions of NHST such as factorial designs and repeated measures (within-subject tests) which merit future discussion.

External vs Internal Validity

Randomness is a proven way to minimize selection bias. It occurs in two stages:

1. Random sampling mitigates sampling bias, thereby ensuring that inferences drawn from the study generalize to the broader population. By the law of large numbers (LLN), with sufficiently large samples, the distribution of the sample approaches that of the population. Random sampling promotes external validity.
2. Random allocation mitigates selection bias, thereby ensuring that the groups have a comparable baseline. We can then safely give the study results a causal interpretation. Random allocation promotes internal validity.

RCTs were pioneered in the field of medicine. How do you test if a drug works? You might consider simply giving the pill to treatment subjects. But human beings are complicated. We often manifest the placebo effect, where even an empty pill can produce real physiological relief in the body. There is much debate about how the mere expectation of health can produce health; recent research points to top-down control signals that carry predictions to the body’s autonomic nervous system.

Remember our guiding principle: To minimize selection bias, we need our groups to be as similar as possible. If you want to isolate the medicinal properties of a drug, you need both groups to believe they are being treated. Giving the control group sugar-water pills is an example of blinding: your group similarity increases if subjects can’t see what group they are in.

Blinding can mitigate our psychological penchant for letting expectations structure our experience in other domains too. Experimenters may unconsciously measure trial outcomes differently if they are financially vested in the outcome (detection bias). The most careful RCTs are double-blind trials: both experimenters and participants are ignorant of their group status for the duration of the trial.

There are other complications to bear in mind:

• The Hawthorne effect: people behave differently if they are aware of being watched
• Meta-analyses have revealed high levels of unblinding in pharmacological trials.
• Often patients will fail to comply with experimental protocol. Compliance issues may not occur at random, effectively violating ceteris paribus.
• Often patients will drop out from the study. Just as before, attrition issues may not occur at random, effectively violating ceteris paribus.

How do you deal with noncompliance and attrition?

• Intention to treat studies will leave them in the analysis: more external validity, less internal validity
• Per protocol studies will exclude them from the analysis: less statistical power, more internal validity (both approaches are sketched below).
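
A toy sketch of the two analyses; the trial records below are invented.

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical records: (assigned_to_treatment, complied_with_protocol, outcome)
records = [
    (1, 1, 7.0), (1, 1, 6.5), (1, 0, 5.0),   # one assigned patient never took the drug
    (0, 1, 5.5), (0, 1, 6.0), (0, 1, 5.0),
]

# Intention to treat: compare groups as randomized, regardless of compliance.
itt = (mean([y for a, _, y in records if a == 1])
       - mean([y for a, _, y in records if a == 0]))

# Per protocol: drop the non-compliers before comparing.
pp = (mean([y for a, c, y in records if a == 1 and c == 1])
      - mean([y for a, c, y in records if a == 0 and c == 1]))

print(round(itt, 2), round(pp, 2))  # 0.67 vs 1.25: excluding non-compliers changes the estimate
```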

RCTs in Medical History

The field of medicine is a story of learning to trust experimental results over the opinions of the knowledgeable. Here’s an excerpt from Tetlock’s Superforecasting.

Consider Galen, the second-century physician to Roman emperors. No one has influenced more generations of physicians. Galen’s writings were the indisputable source of medical authority for more than a thousand years. “It is I, and I alone, who has revealed the true path to medicine,” Galen wrote with his usual modesty. And yet Galen never conducted anything resembling a modern experiment. Why should he? Experiments are what people do when they aren’t sure what the truth is. And Galen was untroubled by doubt. Each outcome confirmed he was right, no matter how equivocal the evidence might look to someone less wise than the master. “All who drink of this treatment recover in a short time, except those whom it does not help, who all die,” he wrote. “It is obvious, therefore, that it fails only in incurable cases.”

Galen is the sort of figure who pops up repeatedly in the history of medicine. They are men of strong conviction and a profound trust in their own judgment. They embrace treatments, develop bold theories for why they work, denounce rivals as quacks and charlatans, and spread their insights with evangelical passion. So it went from the ancient Greeks to Galen to Paracelsus to the German Samuel Hahnemann and the American Benjamin Rush. In the nineteenth century, American medicine saw pitched battles between orthodox physicians and a host of charismatic figures with curious new theories like Thomsonianism, which posited that most illness was due to an excess of cold in the body. Fringe or mainstream, almost all of it was wrong, with the treatments on offer ranging from the frivolous to the dangerous. Ignorance and confidence remained defining features of medicine. As the surgeon and historian Ira Rutkow observed, physicians who furiously debated the merits of various treatments and theories were like “blind men arguing over the colors of the rainbow.”

Not until the twentieth century did the idea of RCTs, careful measurement, and statistical inference take hold. “Is the application of the numerical method to medicine a trivial and time-wasting idea as some hold, or is it an important stage in the development of our art, as others proclaim it?”, the Lancet asked in 1921.

Unfortunately, this story doesn’t end with physicians suddenly realizing the virtues of doubt and rigor. The idea of RCTs was painfully slow to catch on and it was only after World War II that the first serious trials were attempted. They delivered excellent results. But still the physicians and scientists who promoted the modernization of medicine routinely found that the medical establishment wasn’t interested, or was even hostile to their efforts.

When hospitals created cardiac care units to treat patients recovering from heart attacks, Cochrane proposed an RCT to determine whether the new units delivered better results than the old treatment, which was to send the patients home for monitoring and bed rest. Physicians balked. It was obvious the cardiac care units were superior, they said, and denying patients the best care would be unethical. But Cochrane persisted in running a trial. Partway through the trial, Cochrane told a group of cardiologists preliminary results. The difference in outcomes between the two treatments was not statistically significant, he emphasized, but it appeared that patients might do slightly better in the cardiac care units. They were vociferous in their abuse: “Archie,” they said, “we always thought you were unethical. You must stop the trial at once.” But then Cochrane revealed that he had reversed the results: home care had done slightly better than the cardiac units. There was dead silence, and a palpable sense of nausea.

Today, evidence-based medicine (EBM) rightly privileges RCTs as more authoritative than expert opinion. This movement has put forward a hierarchy of evidence, to gesture at which sources of evidence deserve more weight (and which to take lightly).

I personally deny that evidence-based medicine is the best approach to evidence. It gets confused by how to interpret “absence of evidence”, as we have seen in the Covid-19 debate on mask efficacy. Yet EBM is undeniably a big improvement from the epistemic learned helplessness that was ancient medicine.

Limitations & Prospects

Everyone agrees that RCTs are the gold standard at drawing conclusions about cause and effect. It is worth seriously considering whether RCTs can be effectively deployed to answer questions besides medicine. Can we use RCTs to get better at policy making? Charity? Managerial science?

There are several important criticisms of RCTs that are worth mentioning:

• Ecological Sterility. The more rigorously you attempt to enforce ceteris paribus, the less your laboratory environment resembles the real world.
• Ethical Limitations of Scope. RCTs were never employed to test whether smoking causes cancer, because it is unethical to force someone to smoke.
• Expense. Pharmacological RCTs cost 12 million dollars to implement, on average.
• Statistical Power. Because of their expense, sample sizes for RCTs are often much lower than those of observational studies.

RCTs are the gold standard for causal inference, but they are not the only product on the market. As we will see later, there are other technologies in the Furious Five toolbox, which statistics and econometrics use to learn causal relationships. These are:

1. Randomized Controlled Trials (RCTs)
2. Regression
3. Instrumental Variables
4. Regression Discontinuity
5. Differences-in-Differences

Until next time.

# Seeing Through Calibrated Eyes

Part Of: Bayesianism sequence
See Also: [Excerpt] Fermi Estimates
Content Summary: 1500 words, 15 min read, 15 min exercise (optional)

The most important questions of life are indeed, for the most part, really only problems of probability.

Pierre Simon Laplace, 1812

Accessing One’s Own Predictive Machinery

Any analyst can describe the unnerving intimacy one develops while acclimating to a dataset. With data visualizations, we acclimate ourselves to the contours of the Manifold of Interest, one slice at a time. Human beings simply become more incisive, powerful thinkers when we choose to put aside the rhetoric and reason directly with quantitative data.

The Bayesian approach interprets learning as a plausibility calculus, where new data pays down uncertainty. What is uncertainty? Uncertainty is how “loosely held” our beliefs are. The more data we have, the less uncertain we must be, and the sharper the peaks in our belief distribution.

The Bayesian approach affirms that silicon and nervous tissue conform to the same principles. Machines learn from digital data; our brains do the same with perceptual data. The chamber of consciousness is small. Yet, could there be a way to directly tap into the sophisticated inference systems within our subconscious mind?

Quantifying Error Bars

How many hours per week do employees spend in meetings? Even if you don’t know the exact values to questions like these, you still know something. You know that some values would be impossible, or at least highly unlikely. Getting clear on what you already know is an absolutely crucial skill to develop as a thinker. To do that, we need to find a way to accurately report our own uncertainty.

One method of reporting our uncertainty is to use words of estimative probability. But these words are crude tools. A more sophisticated way to express uncertainty about a number is to think of it as a range of probable values. In statistics, a range that has a particular chance of containing the correct answer is called a confidence interval (CI). A 90% CI is a range that has a 90% chance of containing the correct answer. For example, if you are 90% sure the average number of hours spent in meetings is between 6 and 15 hours, then we can say you have a 90% CI of [6, 15]. You might have produced this range with sophisticated statistical inference methods, but you might also have just picked it out from your experience. Either way, the values should be a reflection of your uncertainty about this quantity.

When you say “I am 70% sure of X”, how do you know your stated uncertainty is correct? Suppose you make 10 such predictions. A calibrated estimator should get about 7 out of 10 predictions correct. An overconfident estimator will get fewer than 7 answers right (they knew less than they thought). An underconfident estimator will get more than 7 answers correct (they knew more than they thought).

You can be a better thinker if you learn to balance the scales between under- and over-confidence. Unfortunately, extensive research has shown that most people are systematically overconfident. For example, in the results from 972 estimation tests of 90% CIs, if people were naturally calibrated the most typical number of correct responses would be 9 out of 10; in practice, the actual mean is roughly 5.5. Here’s a real-life example of overconfidence: overly narrow error bars in expert forecasts of US COVID-19 case load.

From a psychological perspective, our ignorance of our own state of knowledge is not a particularly surprising fact. All animals are metacognitively incompetent – we are truly strangers to ourselves. Our bias towards overconfidence is readily explained by the argumentative theory of reasoning, and closely aligns with the Dunning-Kruger effect.

Bad news so far. However, with practice and some debiasing techniques, people can become much more reliably calibrated estimators. Consider the premise of superforecasting:

In Superforecasting, Tetlock and coauthor Dan Gardner offer a masterwork on prediction, drawing on decades of research and the results of a massive, government-funded forecasting tournament. The Good Judgment Project involves tens of thousands of ordinary people—including a Brooklyn filmmaker, a retired pipe installer, and a former ballroom dancer—who set out to forecast global events. Some of the volunteers have turned out to be astonishingly good. They’ve beaten other benchmarks, competitors, and prediction markets. They’ve even beaten the collective judgment of intelligence analysts with access to classified information. They are “superforecasters.”

Calibration is a foundational skill in the art of rationality. And it can be taught.

Try It Yourself

Like other skills, calibration emerges through practice. Let’s try it out!

Instructions:

• 90% CI. For each of the 90% CI questions, provide both an upper bound and a lower bound. Remember that the range should be wide enough that you believe there is a 90% chance that the answer will be between the bounds.
• Binary Questions. Answer whether each of the statements is true or false, then circle the probability that reflects how confident you are in your answer. If you are absolutely certain in your answer, you should say you have a 100% chance of getting the answer right. If you have no idea whatsoever, then your chance would be the same as a coin flip (50%). Otherwise (probably usually), it is one of the values between 50% and 100%.

Alright, good luck! 🙂

To evaluate your results, the answer key is an image at the end of this article. Go ahead and count how many answers you got correct.

• 90% CI. If you were fully calibrated, then you should have gotten 9 out of 10 answers right. Your test performance can be interpreted like this: if you got 7 to 10 within your range, you might be calibrated; if you got 6 right, you are very likely to be overconfident; if you got 5 or fewer right, you are almost certainly overconfident, and by a large margin.
• Binary Questions. To compute your expected score, convert each of the percentages you circled to a decimal (i.e., .5, .6, … 1.0) and add them up. Let’s say your confidence in your answers was 0.5, 0.7, 0.6, 1, 1, 0.8, 0.5, 0.6, 0.5, 0.7, totaling 6.9. This means your “expected” number correct is 6.9. For tests with 20 binary questions, most participants should get an expected score within 2.5 points of their actual score. (Both scoring rules are sketched below.)
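
A small sketch of both scoring rules; the intervals, answers, and confidences are invented.

```python
def score_interval_test(intervals, answers):
    """Count how many true answers fell inside the stated 90% CIs."""
    return sum(lo <= ans <= hi for (lo, hi), ans in zip(intervals, answers))

def score_binary_test(confidences, correct_flags):
    """Compare expected correct (sum of stated confidences) with actual correct."""
    return sum(confidences), sum(correct_flags)

# Invented examples
intervals = [(6, 15), (100, 300), (0.5, 3)]
answers   = [12, 410, 1.2]
print(score_interval_test(intervals, answers))        # 2 of 3 inside the stated ranges

confidences   = [0.5, 0.7, 0.6, 1, 1, 0.8, 0.5, 0.6, 0.5, 0.7]
correct_flags = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]
print(score_binary_test(confidences, correct_flags))  # (6.9, 7) -> roughly calibrated
```
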
Calibration Training is Possible

There are five tactics used in practice to improve one’s calibration. We will discuss them in order of descending efficacy, starting with the most significant.

First, the most important thing we can do to improve is to practice, and to go over our mistakes. This simple advice has deep roots in global workspace theory, where the primary function of consciousness is to serve as a learning device. As I wrote elsewhere:

Consider the radical simplicity of the act of learning itself. To learn anything new, we merely pay attention to it, and thereby become conscious of it.

For a public example of self-evaluation, see SlateStarCodex’s annual predictions and his calibration scores. If you would like to practice against more of these general trivia tests, three are provided in the book which inspired this article, How to Measure Anything.

Second, a particularly powerful tactic for becoming more calibrated is to pretend to bet money. Consider another 90% CI question: what is the average weight in tons of an adult male African elephant? As you did before, provide an upper and lower bound that are far enough apart that you think there is a 90% chance the true answer lies between them. Now consider the following two games:

• Game A. You win $1000 if the true answer turns out to be between your upper and lower bound. If not, you win nothing.
• Game B: You roll a 10-sided die. If the die lands on anything but 10, you win \$1000. Else you win nothing.

80% of subjects prefer Game B. This means that their “90% CI” is actually too narrow (they are unconsciously overconfident).

Give yourself a choice between betting on your answer being correct or rolling the dice. I call this the equivalent bet test. Research indicates that even just pretending to bet money significantly improves a person’s ability to assess odds (Kahneman & Tversky, 1972, 1973). In fact, actually betting money turns out to be only slightly better than pretending to bet.
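To see why preferring Game B signals overconfidence, it helps to compare the expected payoffs of the two games. Here is a minimal sketch, with the probability values assumed purely for illustration:

```python
# Equivalent bet test: compare the expected payoffs of the two games.
# p = your true (possibly unconscious) probability that the answer
#     falls inside the interval you labeled a "90% CI".
PRIZE = 1000

def ev_game_a(p: float) -> float:
    """Bet on your own interval: win PRIZE with probability p."""
    return p * PRIZE

def ev_game_b() -> float:
    """Roll a 10-sided die: win PRIZE unless it lands on 10 (probability 0.9)."""
    return 0.9 * PRIZE

for p in (0.95, 0.90, 0.70, 0.50):
    prefer = "A" if ev_game_a(p) > ev_game_b() else "B"
    print(f"p = {p:.2f}: EV(A) = {ev_game_a(p):.0f}, EV(B) = {ev_game_b():.0f} -> prefer {prefer}")

# Preferring Game B implies p < 0.9: the interval you called a "90% CI"
# actually carries less than 90% of your credence, i.e. it is too narrow.
```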

Third, people apply sophisticated scrutiny to claims made by other people, but these faculties are typically not employed on the stuff coming out of our own mouths. A simple technique can promote this behavior: the premortem. Imagine you got a question wrong, and in this hypothetical scenario, ask yourself why you got it wrong. This technique has also been shown to significantly improve performance (Koriat et al 1980).

Fourth, it's worth noting that the anchoring heuristic can contaminate bound estimation. (An example of anchoring: if I ask you whether Gandhi died at 120 years old, your subsequent estimate of his age at death will likely be higher than if I had not provided the anchor.) To avoid being unduly influenced by your initial guess, it can help to determine each bound separately. Instead of asking yourself "Is there a 90% chance the answer is between my LB and UB?", ask yourself "Is there a 95% chance the answer is above (below) my LB (UB)?"

Fifth, rather than approaching estimation by generating guesses, it can sometimes help to instead eliminate answers that seem absurd. Rather than guessing a single value for the elephant's weight, first explore which weights you consider absurd.

In practice, these techniques are fairly effective at improving calibration. In Hubbard's half-day training sessions (n=972), most people achieved nearly perfect calibration by the end of the session.

All of this training was done on general trivia. Does calibrative skill generalize to other domains? There is not much research on this question, but provisionally speaking, generalization seems plausible: forecasters who completed calibration training subsequently showed measurable improvements in their job performance.

Until next time.

References

• Kahneman & Tversky (1972). Subjective probability: A judgment of representativeness.
• Kahneman & Tversky (1973). On the psychology of prediction.
• Koriat et al (1980). Reasons for confidence.

# [Excerpt] Fermi Estimates

Excerpt From: Hubbard, How to Measure Anything
Part Of: Bayesianism sequence
Content Summary: 1200 words, 6 min read

Eratosthenes

Our first mentor of measurement did something that was probably thought by many in his day to be impossible. An ancient Greek named Eratosthenes (ca 276-194 BCE) made the first recorded measurement of the circumference of the Earth. If he sounds familiar, it might be because he is mentioned in many high school trigonometry and geometry textbooks.

Eratosthenes didn’t use accurate survey equipment and he certainly didn’t have lasers and satellites. He didn’t even embark on a risky and potentially lifelong attempt at circumnavigating the Earth. Instead, while in the Library of Alexandria, he read that a certain deep well in Syene (a city in southern Egypt) would have its bottom entirely lit by the noon sun one day a year. This meant the sun must be directly overhead at that point in time. He also observed that at the same time, vertical objects in Alexandria (almost directly north of Syene) cast a shadow. This meant Alexandria received sunlight at a slightly different angle at the same time. Eratosthenes recognized that he could use this information to assess the curvature of Earth.

He observed that the shadows in Alexandria at noon at that time of year made an angle of 7.2 degrees. Since 7.2 degrees is one-fiftieth of a full circle, he could then prove with simple geometry that the circumference of Earth must be 50 times the distance between Alexandria and Syene. Modern attempts to replicate Eratosthenes’ calculation put his answer within 3% of the actual value. Eratosthenes’s calculation was a huge improvement on previous knowledge, and his error was much less than the error modern scientists had just a few decades ago for the size and age of the universe. Even 1,700 years later, Columbus was apparently unaware of Eratosthenes’s result; his estimate was fully 25% short. (This is one of the reasons Columbus thought he might be in India, not another large, intervening landmass where I reside.) In fact, a more accurate measurement than Eratosthenes’s would not be available for another 300 years after Columbus. By then, two Frenchmen, armed with the finest survey equipment available in eighteenth-century France, numerous staff, and a significant grant, finally were able to do better than Eratosthenes.
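The arithmetic behind Eratosthenes' inference is short enough to sketch in a few lines. The modern figure of roughly 800 km for the Alexandria-Syene distance is an assumption added here for illustration (Eratosthenes worked in stadia):

```python
# Eratosthenes' circumference estimate from a shadow angle and a distance.
shadow_angle_deg = 7.2           # noon shadow angle in Alexandria
alexandria_to_syene_km = 800     # assumed modern distance, for illustration only

# 7.2 degrees is 1/50 of a full 360-degree circle, so the arc between the two
# cities spans 1/50 of the Earth's circumference.
fraction_of_circle = shadow_angle_deg / 360          # = 0.02
circumference_km = alexandria_to_syene_km / fraction_of_circle

print(f"Estimated circumference: {circumference_km:,.0f} km")   # ~40,000 km
```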

Here is the lesson: Eratosthenes made what might seem like an impossible measurement by making a clever calculation on some simple observations. When I ask participants in my seminars how they would make this estimate without modern tools, they usually identify one of the “hard ways” to do it (e.g., circumnavigation). But Eratosthenes, in fact, need not have even left the vicinity of the library to make this calculation. He wrung more information out of the few facts he could confirm instead of assuming the hard way was the only way.

Enrico Fermi

Consider Enrico Fermi (1901-1954 CE), a physicist who won the Nobel Prize in Physics in 1938.

One renowned example of his measurement skills was demonstrated at the first detonation of the atom bomb on July 16, 1945, where he was one of the atomic scientists observing the blast from base camp. While other scientists were making final adjustments to instruments used to measure the yield of the blast, Fermi was making confetti out of a page of notebook paper. As the wind from the initial blast began to blow through the camp, he slowly dribbled the confetti into the air, observing how far back it was scattered by the blast (taking the farthest scattered pieces as being the peak of the pressure wave). Simply put, Fermi knew that how far the confetti scattered in the time it would flutter down from a known height (his outstretched arm) gave him a rough approximation of wind speed which, together with knowing the distance from the point of detonation, provided an approximation of the energy of the blast.

Fermi concluded that the yield must be greater than 10 kilotons. This would have been news, since other initial observers of the blast did not know that lower limit. Could the observed blast be less than 5 kilotons? Less than 2? These answers were not obvious at first. (As it was the first atomic blast on the planet, nobody had much of an eye for these things.) After much analysis of the instrument readings, the final yield estimate was determined to be 18.6 kilotons. Like Eratosthenes, Fermi was aware of a rule relating one simple observation – the scattering of confetti in the wind – to a quantity he wanted to measure. The point of the story is not to teach you enough physics to estimate like Fermi, but rather that you should start thinking about measurements as a multistep chain of thought. Inferences can be made from highly indirect observations.

The value of quick estimates was something Fermi was known for throughout his career. He was famous for teaching his students how to approximate fanciful-sounding quantities that, at first glance, they might presume they knew nothing about. The best-known example of such a “Fermi question” was Fermi asking his students to estimate the number of piano tuners in Chicago. His students – science and engineering majors – would begin by saying that they could not possibly know anything about such a quantity. What Fermi was trying to teach them was that they already knew something about the quantity in question.

Fermi would start by asking them to estimate other things about pianos and piano tuners that, while still uncertain, might seem easier to estimate. These included the current population of Chicago (a little over 3 million in the 1930s), the average number of people per household (two or three), the share of households with regularly tuned pianos (not more than 1 in 10 but not less than 1 in 30), the required frequency of tuning (perhaps once a year, on average), how many pianos a tuner could tune in a day (four or five, including travel time), and how many days a year the tuner works (say, 250 or so). The result would be computed:

Tuners in Chicago = (population / people per household)
    * (share of households with tuned pianos)
    * (tunings per year per piano) / (tunings per tuner per day * workdays per year)

Depending on which specific values you chose, you would probably get answers in the range of 30 to 150, with something like 50 being fairly common. When this number was compared to the actual number (which Fermi would already have acquired from the phone directory or a guild list), it was always closer to the true value than the students would have expected. This may seem like a very wide range, but consider the improvement over the “How could we possibly even guess?” attitude his students often started with.
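Here is a minimal sketch of that chain of estimates in code. The specific input values are illustrative mid-range picks from the paragraph above; the point is the structure of the calculation, not the exact answer:

```python
# Fermi estimate: piano tuners in 1930s Chicago (illustrative values).
population = 3_000_000              # a little over 3 million
people_per_household = 2.5          # two or three
share_with_tuned_piano = 1 / 15     # between 1 in 10 and 1 in 30
tunings_per_piano_per_year = 1      # roughly once a year
tunings_per_tuner_per_day = 4.5     # four or five, including travel time
workdays_per_year = 250

households = population / people_per_household
tuned_pianos = households * share_with_tuned_piano
tunings_demanded = tuned_pianos * tunings_per_piano_per_year
tunings_supplied_per_tuner = tunings_per_tuner_per_day * workdays_per_year

tuners = tunings_demanded / tunings_supplied_per_tuner
print(f"Estimated piano tuners in Chicago: {tuners:.0f}")   # ~70 with these inputs
```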

Implications

Taken together, these examples show us something very different from what we are typically exposed to in business. Executives often say “We can’t even begin to guess at something like that.” They dwell ad infinitum on the overwhelming uncertainties. Instead of making any attempt at measurement, they sometimes prefer to be stunned into inactivity by the apparent difficulty in dealing with these uncertainties. Fermi might say, “Yes, there are a lot of things you don’t know, but what do you know?”

Viewing the world as these individuals do, through calibrated eyes that see things in a quantitative light, has been a historical force propelling both science and economic productivity. If you are prepared to rethink some assumptions and put in the time, you will see through calibrated eyes as well.

# [Excerpt] The Evolution of Infanticide

Excerpt From: Hrdy (2009) Mothers and Others. Pages 70-72, 99-100
Content Summary: 1300 words, 13 minute read

Child Abandonment in Nonhuman Primates

Many mammalian mothers can be surprisingly selective about the babies they care for. A mother mouse or prairie dog may cull her litter, shoving aside a runt; a lioness whose cubs are too weak to walk may abandon the entire litter “with no attempt to nudge them to their feet, carry them or otherwise help.” Some mammals (and this includes humans) even discriminate against healthy babies, if they happen to be born the “wrong” sex. But not great ape or most other primate mothers. No matter how deformed, scrawny, odd, or burdensome, there is no baby that a wild ape mother won’t keep. Babies born blind, limbless, or afflicted with cerebral palsy – newborns that a hunter-gatherer mother would likely abandon at birth – are picked up and held close. If her baby is too incapacitated to hold on, the mother may walk tripedally so as to support the baby with one hand.

Monkey and ape mothers rarely discriminate based on a baby’s particular attributes, as some human mothers do. Except perhaps those born very prematurely, babies are cared for (and carried) almost no matter what. Even if her baby dies, the mother will continue to carry the desiccated corpse around for days.

Child Abandonment in Humans

Maternal devotion in the human case is more complicated. A woman undergoes the same endocrinological transformations during pregnancy as other apes. At birth, her cortisol levels and heartbeat reflect just how sensitive to infant cues she has become. But whereas the nonhuman ape mother undiscriminatingly accepts any infant born to her without taking into account physical attributes, the human mother’s devotion is more conditional. A newborn perceived as defective may be drowned, buried alive, or simply wrapped in leaves and left in the bush within a few hours of birth. “Defective” may mean anything from having too many toes to too few. It may mean being born with a deformed limb or at a very low birthweight, coming too soon after the birth of an older sibling, or having some culturally arbitrary “affliction” such as having too much or too little hair, or being born the wrong sex.

Unlike any other ape, a mother in a hunter-gatherer society examines her baby right after birth and, depending on its specific attributes and her own social circumstances (especially how much social support she is likely to have), makes a conscious decision to either keep the baby or let it die. In most traditional hunter-gatherer societies, abandonment is rare, and almost always undertaken with regret. It is an act no woman wants to recall, a topic ethnographers must tiptoe around gingerly. Typically, interviewers will broach the subject indirectly, asking other women rather than the mother herself. Back when the !Kung still lived as nomadic foragers, the rate of abandonment was about one in one hundred live births. Higher rates were reported among peoples with strong sex preferences, as among the pre-missionized Eipo horticulturalists of highland New Guinea. Forty-one percent of live births in this group resulted in abandonment, and in the vast majority of cases the abandoned babies were newborn daughters whose mothers hoped to reduce the time until a son might be born.

Once a baby has nursed at his mother’s breast and lactation is under way, a woman’s hormonal and neurological responses to this stimulation, combined with visual, auditory, tactile, and olfactory cues, produce a powerful emotional attachment to her baby. Once she passes this tipping point, a mother’s passionate desire to keep her baby safe usually overrides other (including conscious) considerations. This is why, if a mother is going to abandon her infant, she usually does so immediately, before her milk comes in and before mother-infant bonding is past the point of no return.

Two Kinds of Parenting Style

There are two kinds of primate parenting styles:

• Continuous care and contact, where the mother’s hyper-possessive instincts rebuff offers of otherwise-interested babysitters
• Cooperative breeding, where relatives (“allomothers”) take turns carrying the young, and sometimes provisioning them with food.

About half of all primate species use cooperative breeding models. However, in only 20% of primate species do alloparents provision the young, and for the most part this provisioning does not amount to much. Let us call species that generously provision their young robust cooperative breeders. So far the only full alloparents belong to the family Callitrichidae – mostly marmosets and tamarins. Callitrichidae are famous for breeding fast and for their rapid colonization of new habitats.

More than 30 million years have passed since humans last shared a common ancestor with these tiny (rarely more than four pounds), clawed, squirrel-like arboreal creatures. New World monkeys literally inhabit a different world from that of their primate cousins who evolved in Africa. Theirs is a sensory world dominated by smell rather than sight. Yet in many respects callitrichids may provide better insight into early hominin family lives than do far more closely related species like chimpanzees or cercopithecine monkeys.

What humans have in common with the Callitrichidae is worth itemizing. In both types of primates, group members are unusually sensitive to the needs of others and are characterized by potent impulses to give. In both groups, a mother produces closely spaced offspring whose needs exceed her capacity to provide for them. Thus the mother must rely on others to help care for and provision her young. When prospects for support seem poor, mothers in both groups are more likely to bail out than other primates are. Human and callitrichid mothers stand out for their pronounced ambivalence toward newborns and their extremely contingent maternal commitment. Infants have adapted, as we will see later, with special traits for attracting the attention of potential caregivers. And finally, humans have a marmoset-like ability to colonize and thrive in novel environments.

What happens when you take a clever ape with incipient social intelligence, tool manufacturing, and robust mindreading, and then introduce cooperative breeding? This, we submit, is the recipe for producing a uniquely human cognitive system. Prosocial motivations transformed the mindreading system into a mindsharing system, which ultimately led to the development of norms, language, and cumulative culture.

This is the cooperative breeding hypothesis.

The Dark Side of Cooperative Breeding

As noted above, by far the most common exceptions to this general primate pattern are found in the family Callitrichidae. Like all cooperative breeders, tamarin and marmoset mothers depend on others to help rear their young. Shared care and provisioning clearly enhance maternal reproductive success, but there is also a dark side to such dependence. Tamarin mothers short on help may abandon their young, bailing out at birth by failing to pick up neonates when they fall to the ground or forcing clinging newborns off their bodies. Although infanticide is a hazard across the Primate order, observations almost always implicate either strange males or females other than the mother, not the mother herself.

The high rates of maternal abandonment seen among callitrichids and humans are almost unheard of elsewhere among primates. Cooperative breeding systems endowed humans with a deeply felt sense of cooperation and altruism… but increased rates of child abandonment are a corollary.

The Evolution of Abortion

Note: this section is my own; these are not Hrdy’s words.

It is possible to connect modern debates about abortion to the ancient primate instinct documented above. As humans became increasingly culturally sophisticated, the motivation to abandon a child could be acted upon prenatally.

This is not to make an appeal to nature (“X is good because it is natural”). Indeed, our normative systems (mindsharing writ large) allow us to push against human nature when we so choose. And I won’t speak towards a moral appraisal of abortion here.

But let’s imagine human parenting systems were instead inherited from the continuous care and contact model of the other great apes. In such a system, I submit the topic of abortion would be as foreign as meat-eating might be to a talking gorilla.

# The Domestication of Sapiens

Part Of: Anthropogeny sequence
Followup To: An Introduction to Domestication
Content Summary: 2000 words, 20 min read

Two Forms of Aggression

Aggression is not a natural kind. Rather, as described in, e.g., Siegel & Victoroff (2009), there are two kinds of aggression.

1. Reactive aggression is based on the RAGE subsystem. It is the biological basis of resource competition.
2. Proactive aggression is based on the SEEKING subsystem. It is the biological basis of predation, and sexual selection-driven infanticide.

These two systems have different behavioral signatures. Reactive aggression is associated with high arousal, sudden initiation, and functions to remove a threatening stimulus. As observers of a bar fight can tell you, you don’t want to get close to an enraged person at the wrong time – the aggressive behavior can easily switch its target. In contrast, proactive aggression is associated with low arousal, planned initiation, and functions to achieve some sort of goal.

These systems also feature different physiological signatures. Reactive aggression is caused by activation of the mediobasal hypothalamic nuclei and the dorsal nuclei of the periaqueductal gray (PAG). Amygdala activity promotes these behaviors, which are accompanied by low levels of prefrontal control. In contrast, proactive aggression is caused by activation of the lateral hypothalamic nucleus and ventral regions of the PAG; amygdala activity suppresses its expression, and it is accompanied by significant cortical activity.

Of course, these two systems can interact.

• When a beta chimpanzee challenges an alpha, he may convert predatory aggression (plotting a coup) to an escalating sequence of reactive violence.
• When a human being suffers intense personal injury and is unable to immediately retaliate, he may convert that reactive rage into the more proactive and delayed phenomenon known as vengeance.

The distinction also appears prominently in human legal codes: we tend to punish proactive aggression (premeditated murder) more severely than reactive aggression (a bar fight).

Anyone looking at homicide data will tell you that being male, and being young, render a person much more likely to kill. Violence-generating mechanisms differ by sex, because each sex is subject to diverging selective pressures.

Of course, homicide can be produced by either kind of aggression. It would be more useful to policy makers to analyze rates of reactive versus proactive aggression separately. Given its more cognitive basis, I suspect proactive violence is more amenable to cultural interventions, whereas reactive violence might be best treated with therapy and pharmaceuticals to strengthen one’s self-control.

And indeed, just these kinds of considerations are now being employed by social scientists seeking to better understand and mitigate phenomena such as domestic violence, and delinquency in children.

From a historical perspective, our species spent most of its history as foragers (i.e., hunter-gatherers), with statecraft a consequence of the agricultural revolution. There is a keen interest in understanding the natural tendencies of forager populations, since these are more representative of the “original social contract”. The Rousseau paradigm sees foraging humans as a naturally benign and unaggressive species; on this view, violence is promoted by the state. The Hobbes paradigm rejects the idea of the noble savage and holds that violence is part of our evolutionary inheritance; on this view, the state is an instrument to restrain violence.

The Evolution of War

Comparative biology data can resolve the Rousseau-Hobbes debate.

First, consider how chimpanzees use gangs of allied individuals to achieve political ends through aggressive means. These coalitions are very rare in the animal kingdom. They are only known to occur among social carnivores and primates. These acts of coalition-based aggression are proactive in nature.

Second, it is important to understand how chimpanzees express xenophobia. Chimp troops don’t wander haphazardly; they instead inhabit clearly demarcated territories. The troops of neighboring communities are treated with hostility, so much so that up to 75% of the time is spent in the central 35% of the range. Another expression of chimp territoriality is border patrols, conducted by groups of male chimps moving stealthily to enforce their territory’s boundaries.

Third, these factors coalesce in the phenomenon of chimpanzee commando raids, with large groups of males penetrating deep into enemy territory, stalking and killing members of competing troops. Why small-scale raids instead of large-scale brawls? Well, warfare is only adaptive when the potential benefits outweigh the risk of personal injury. Thus, these raids are governed by the logic of a local imbalance of power. Raids preferentially occur when the attacking party has gathered significantly more fighting power than the defender (Wrangham 1999).

Killing doesn’t directly increase one’s biological fitness. Why then has such behavior been selected? Because successful raids promote the possibility of territorial expansion (Mitani et al 2010), plausibly by weakening the other groups’ overall fighting power. In turn, territory size directly correlates with resource and mate availability.

Here is Wrangham (1999) explaining parallels with human warfare:

It is clear that intergroup aggression has occurred among many, possibly all, hunter-gatherer populations and follows a rather uniform pattern. From the most northern to the most southern latitudes, the most common pattern of intergroup aggression was for a party of men from one group to launch a surprise attack in circumstances in which the attackers were unlikely to be harmed. Attacks were sometimes unsuccessful but were, at other times, responsible for the deaths of one or many victims. Women and girls were sometimes captured.

Chimpanzees and hunter gatherers, we conclude, share a tendency to respond aggressively in encounters with members of other social groups; to avoid intensely aggressive confrontations in battle line (typically, by retreating); and to seek, or take advantage of, opportunities to use imbalances of power for males to kill members of neighboring groups.

Indeed, even the rate at which foraging humans and chimpanzees engage in between-group violence is quite similar.

These data suggest a common mechanism. It is not that humans evolved a unique thirst for warfare. Rather, this instinct long predates our species.

The Domestication of Bonobos

It is hard to imagine species with more dramatically different social lives than bonobos and chimps. Bonobos are renowned for their ultra-sexuality: sexual acts are used in lieu of grooming, as the primary vehicle for strengthening relationships. They also exhibit startlingly low rates of violence:

1. Killing of any kind (including coalition-based acts of violence) is literally unheard of.
2. Rape and infanticide have also never occurred.
3. Commando raids do not occur; bonobos do not even express hostility to neighboring troop “outgroups”.

The bonobo and chimp lineages diverged very recently (less than 1 mya); yet they lead entirely different social lives. How is this possible?

A clue comes from observations of unusually strong female coalitions in bonobos. Every time a male tries to coerce a female for food or sex, that female’s coalition vigorously rebuffs him. These female coalitions in effect give non-aggressive males an advantage. Over the generations, this selective pressure yields decreasing levels of (proactive) aggression in the bonobo species.

As we learned in An Introduction to Domestication: when aggression is downregulated in a species, a whole complex of unintended byproducts occurs. And we see precisely this domestication syndrome in bonobos. Bonobos have smaller crania, reduced pigmentation, increased sexual behaviors, and a general uptick in childlike mannerisms. Bonobos domesticated themselves! This is the model proposed by Hare et al (2012).

The Puzzle of Humanity

Chimpanzees and humans have comparably high rates of proactive (predatory) violence, and this proclivity underlies a shared love of warfare. In contrast, bonobos exhibit near-zero rates of proactive aggression.

Let’s turn our attention back to reactive violence. Bonobos exhibit moderate forms of reactive aggression, primarily expressed by female coalitions curtailing male domination behaviors. In contrast, chimps are notoriously short-tempered, reacting violently to even trivial “provocations”. How do rates of human reactive aggression compare in practice?

Even in comparatively violent forager groups, the difference is remarkably large: humans experience reactive violence at rates two orders of magnitude lower than our chimpanzee cousins.

With these data, the following picture has emerged:

Let’s assume chimpanzee aggression behaviors are representative of the last common ancestor (LCA). Bonobos became docile by a process of self-domestication. Why are humans less reactively violent? Did we self-domesticate too?

Another Case of Self-Domestication

The surprising answer is yes. A host of anatomical changes in H. Sapiens around 300 ka all support the self-domestication hypothesis (Leach 2003, Cieri et al 2014).

One symptom of domestication is paedomorphism: childlike features that extend into adulthood. Our adult cranium (especially the smooth, round skull) resembles the skull of chimpanzee children, in contrast with a chimpanzee adult’s prognathic face.

In domesticated species we see, among other changes, a reduction in face size and a feminization of the skull.

These changes look a lot like the transition between our mid-Pleistocene ancestor and anatomically modern H. Sapiens.

Other “domestication signatures” in modern humans include:

• Brain volume reduction (in last 30,000 years)
• Smaller teeth, small face-body ratio
• Reduced sexual dimorphism (differences between male vs female skeletons)
• More childlike features in adults (longer juvenile period, extended learning, adult play)
• Increased fertility rate (incl. hidden estrus)
• Increased rates of lifelong homosexuality

This anatomical evidence of self-domestication aligns nicely with our species’ unique relationship with violence.

Significance of Self-Domestication

Consider again that the domestication syndrome appears between 400 and 100 kya. The Heidelbergs were more violent than Sapiens.

Most primates don’t get enraged by acts of violence that don’t involve them personally. But humans do experience moral outrage at such acts, to the point of being willing to engage in so-called altruistic punishment (risking personal injury to punish a third-party offense).

Moral instincts are one of the couple dozen traits that are uniquely human. Evolutionary anthropology must explain when and why these uniquely human faculties were forged. Being willing to punish acts of reactive violence surely played a role in the self-domestication process. We can safely conclude that morality as a cognitive adaptation evolved late. Heidelbergs were amoral; Sapiens were increasingly subject to the moral sentiments.

I’ll speak towards why morality evolved another time. For now, let’s turn our attention from the causes to the effects of self-domestication. For it turns out that these data give us unique insights into what made our species ecologically dominant. Heidelbergs did not conquer the globe – Sapiens did. But how could a reduction in intra-group violence create the necessary conditions for our species’ success?

Our species was not successful because of its pacifism. Rather, the cultural intelligence hypothesis holds that our species’ unique gifts for coordinating with others to transmit cultural knowledge created the conditions for cultural ratcheting. Beyond our genetic legacy, we also inherit a body of cultural knowledge which (together with our innate endowments) gives us increasing power to control our environment.

Importantly, the ability for our cultural knowledge (or “super mind”) to accumulate information is not guaranteed. If a particular community of humans is too few in number, or too antagonistic towards one another, its net cultural know-how will not grow across the generations.

In this model, our cultural instincts evolved earlier in our lineage. But the advent of morality, and its concomitant reduction in reactive violence, was the event that unleashed the astonishing generative potential of human culture.

Until next time.

Inspiring Materials

Some of these views are articulated in more detail in Wrangham (2019a), as well as in his video lectures on this topic.

Works Cited

• Cieri et al (2014). Craniofacial Feminization, Social Tolerance, and the Origins of Behavioral Modernity.
• Hare et al (2012). The self-domestication hypothesis: evolution of bonobo psychology is due to selection against aggression.
• Leach (2003). Human Domestication Reconsidered.
• Marean (2015). An Evolutionary Anthropological Perspective on Modern Human Origins.
• Mitani et al (2010). Lethal intergroup aggression leads to territorial expansion in chimpanzees.
• Siegel & Victoroff (2009). Understanding human aggression: New insights from neuroscience.
• Wrangham (1999). Evolution of Coalitionary Killing.
• Wrangham (2003). Intergroup Relations in Chimpanzees.
• Wrangham (2018). Two types of aggression in human evolution.
• Wrangham (2019a). The Goodness Paradox: The Strange Relationship Between Virtue and Violence.
• Wrangham (2019b). Hypotheses for the Evolution of Reduced Reactive Aggression in the Context of Human Self-Domestication.