Part Of: Demystifying Language sequence
Content Summary: 1200 words, 12 min read.
The Structure of Reason
Learning is the construction of beliefs from experience. Conversely, inference predicts experience given those beliefs.
Reasoning refers to the linguistic production and evaluation of arguments. Learning and inference are ubiquitous across animal species. But only one species is capable of reasoning: human beings.
Arguments can be understood through the lens of deductive logic. Logical syllogisms form a calculus that maps premises to conclusions. An argument is valid if its conclusion follows from its premises. An argument is sound if it is valid and its premises are true.
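To make the distinction concrete, here is a small worked example in first-order notation (the predicates are illustrative):

$$
\frac{\forall x\,\big(\mathrm{Fish}(x) \rightarrow \mathrm{Flies}(x)\big) \qquad \mathrm{Fish}(\mathrm{trout})}{\mathrm{Flies}(\mathrm{trout})}
$$

This argument is valid: the conclusion follows from the premises by universal instantiation and modus ponens. But it is unsound, because the first premise (“all fish fly”) is false. By contrast, “All men are mortal; Socrates is a man; therefore Socrates is mortal” is both valid and sound.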
Premises can be evaluated directly via intuition. The relationship between argument structure and intuition parallels that between decision trees and evaluation functions.
Two Theories of Reason
Why did reasoning evolve? What is its biological purpose? Consider the following theories:
- Epistemic theory: reasoning is an extension of our individual cognitive powers.
- Argumentative theory: reasoning is a device for social communication.
One way to adjudicate these rival theories is to examine domain gradients. Roughly, a biological mechanism performs optimally when situated in the contexts for which it was originally designed. Our cravings for sugars and fats mislead us today, but they encouraged optimal foraging in the Pleistocene epoch.
Reasoning is used in both individual and social contexts. But our theories disagree on which is the original domain. Thus, they generate opposing predictions as to which context will elicit the most robust performance.
Here we see our first direct confirmation of the argumentative theory: in practice, people are terrible at reasoning in individual contexts. Their reasoning skills become vibrant only when placed in social contexts. It’s a bit like Kevin Malone doing mental math. 🙂
Structure of Argumentative Reason
All languages ever discovered contain both nouns and verbs. This universal distinction reflects the brain’s perception-action dichotomy. Nouns express perceptual concepts, and verbs express action concepts.
Recall that natural language has two processes: speech production & speech comprehension. These functions both accept nouns and verbs as arguments. Thus, we can express the cybernetics of language as follows:
Argumentative reasoning is a social extension of the faculty of language. It consists of two processes:
- Persuasion deals with arguments to support beliefs.
- Justification deals with reasons to justify our actions.
Persuasion and justification draw on perceptual and action concepts, respectively. Thus, the persuasion-justification distinction mirrors the noun-verb distinction, but at a higher level of abstraction. Here is our cybernetics of reasoning diagram.
We return to phylogeny. Why did reasoning-as-argumentation evolve?
For communication to persist, it must benefit both senders and receivers. But stability is often threatened by senders who seek to manipulate receivers. We know that humans are gullible by default. Nevertheless, our species does possess lie detection devices.
The evolution of argumentative reason was shaped by ecological pressures similar to those that shaped language. Let me cover these hypotheses in another post.
For now, it helps to think of beliefs as clothes, serving both pragmatic and social functions. A wide swathe of biases stems from persuasive arguments serving social rather than epistemic ends. This is not to say that truth is irrelevant to reasoning. It is simply not always the dominant factor.
On Persuasion
Persuasion involves arguments about beliefs. It has two subprocesses: argument production (persuading listeners) and argument evaluation (inspecting argument quality). These two processes are locked in an evolutionary arms race, each developing ever more sophisticated mechanisms to defeat the other.
Argument production is responsible for the two most damning biases in the human repertoire. There is extensive evidence that we are subject to confirmation bias: the attentional habit of preferentially examining evidence that helps our case. We are also victims of motivated reasoning, which biases our judgments towards our self-interest. We often describe instances of motivated reasoning as hypocrisy.
Consider the following example:
There are two tasks: one short & pleasant, the other long & unpleasant. Selectors are asked to choose their task, knowing that the other task will be given to another participant (the Receiver). Once the tasks are done, each participant states how fair the Selector has been. It is then possible to compare the fairness ratings of Selectors against those of Receivers.
On average, Selectors rate their decisions as fairer than Receivers do. However, if participants are distracted when asked for their fairness judgments, the ratings are identical and show no hint of hypocrisy. If reasoning were not the cause of motivated reasoning but the cure for it, the opposite would be expected.
In contrast to production, argument evaluation involves two subprocesses: trust calibration and coherence checking. The ability to distrust malevolent informants has been shown to develop in stages between the ages of 3 and 6.
Coherence checking is less self-serving than the production mechanism. In fact, it is responsible for the phenomenon of “truth wins”: in group puzzles, whoever stumbles on the solution tends to persuade her peers, regardless of her social standing. In practice, good arguments tend to be more persuasive than bad arguments.
On Justification
Justification involves reasons for behavior. These are not to be confused with the motivations for behavior, which operate at the subconscious level. In fact, there is evidence to suggest that the reasons we acquire by introspection are not true. It has been consistently observed that attitudes based on reasons are much less predictive of future behavior (and often not predictive at all) than attitudes stated without recourse to reasons.
The justification module produces reason-based choice; that is, we tend to choose behaviors that are easy to justify to our peers. Reason-based choice explains an impressive number of documented human biases. For example:
The sunk cost fallacy is the tendency to continue an endeavor simply because an investment has already been made. It doesn’t occur in children or non-human animals. If reasoning were not the cause of this phenomenon but the cure for it, the opposite would be expected.
The disjunction effect, endowment effect, and decoy effect can similarly be explained in terms of reason-based choice.
This is not to say that justification is insensitive to the truth. Better decisions are usually easier to justify. But when a more easily justifiable decision is not a good one, reasoning still drives us towards ease of justification.
Theory Evaluation
I was initially skeptical of the argumentative theory because it felt “fashionable” in precisely the wrong sense, underwritten by postmodern connotations of narrative-is-everything and epistemic nihilism. Another warning flag is that the theory draws from the field of social psychology, which has been quite vulnerable to the replication crisis.
However, the evidential weight in favor of the argumentative theory has recently persuaded me. For a comprehensive view of that evidence, see [MS11]. I no longer believe argumentative reason entails epistemic nihilism, and I predict its evidential basis will not erode substantially in coming decades.
I am also attracted to the theory because it helps tie together several other theories into a comprehensive meta-theory: The Tripartite Mind. Let me sketch just one example of this appeal.
The heuristics and biases literature has uncovered a bewildering variety of errors, shortcuts, and idiosyncrasies in human cognition. Responses to this literature vary widely. But too many voices take such biases as “conceptual atoms”, or fundamental facts of the human brain. Neuroscience can and must identify the mechanisms underlying these phenomena.
The argumentative theory is attractive in that it explains a wide swathe of this zoo of biases.
Takeaway
Reason is not a profoundly flawed general mechanism. Instead, it is an efficient linguistic device adapted to a certain type of social interaction.
References
[MS11] Mercier & Sperber (2011). Why do humans reason? Arguments for an argumentative theory.
Hi Kevin,
I would be interested in your thoughts on how you would partition the social vs epistemic influence on the development of our reasoning abilities. That is, if you had to assign percentages, would you say it’s 80% social and 20% epistemic?
While I appreciate that our communication of reasoning is semantic and thus based on a linguistic faculty that developed through social factors, I also recognize that logical connections are often realized before we can put them into words. Of course, that doesn’t fully disassociate the reasoning faculty from social and semantic origins, but I think it is well worth considering in the equation.
Additionally, I have been drawn to the hierarchal prediction models promoted by the likes of Andy Clark and Jeff Hawkins, wherein it seems to me that reasoning can be perceived as the process of traversing multiple levels of predictive inference. If we view semantic content as tokens that are associated with the sensory patterns (concepts) for the purpose of communication, then the semantic expression of reasoning is just a translation from the conceptual inferential hierarchy to the semantic associations for those concepts.
Long story short, I think I agree that there is something to be said for the social influence on the development of a reasoning faculty and this is seen in the biases you highlight, but I would tend to see it as a less dominant factor – maybe 25% social and 75% epistemic.
Hey Travis, appreciate the thought-provoking comments, as always. 🙂
So honestly, two years ago I could even see myself authoring parts of your reply. Let me try to sketch what changed my mind.
Point 1. The metaphor of brain as neural network has helped me rethink the evolution of language. Deep Neural Networks (DNNs) are astonishingly effective at all the things. But understanding these models is notoriously difficult. They work, but no one knows how. Researchers have thus been trying to create neural algorithms capable of explaining and justifying their conclusions.
But this is precisely the design problem addressed in the phylogeny of language. The more you think about getting a DNN to talk, the more you’ll understand our own speech faculties.
Point 2. You present a model of language as a general-purpose Mentalese -> English translation device (cf. Jerry Fodor & the Language of Thought). This model strikes me as incorrect for two reasons.
First, from an AI perspective, I don’t think you can train a neural network X to explain neural network Y. I think you must install grapheme and phoneme maps and figure out how to relate them to the original DNN. On the argumentative theory, these “language functors” are built only when it is socially advantageous to do so.
Second, from a behavioral perspective, the translation theory fails to predict that self-reports are uniformly wrong [NW77]. This is not a matter of introspective dishonesty; self-reports seem to be dominated by unintentional lying (aka confabulation).
It turns out that introspection isn’t epistemically privileged; it is invented out of whole cloth. When we produce language about facts our consciousness cannot access, we confabulate in socially useful ways. This is another perspective on the Justification module. For more on this, see the Interpretive Sensory-Access theory of self-knowledge: https://www.youtube.com/watch?v=DHzY-q2JPgU&t=364s
[NW77] Nisbett & Wilson (1977). Telling more than we can know: Verbal reports on mental processes.
We cannot say “I did X because Y” without confabulating. That is not to say that “I believe X” speech is similarly confused, just that Persuasion evolved for a social function.
Point 3. I don’t yet feel confident enough to answer your “social vs epistemic” percentages. My answer will of course be constrained by the fact that ultimately reasoning must have *some* epistemic success, to explain the pragmatic success of our science. This relates to Evolutionary Debunking Arguments [WG12] in philosophy of science.
But ultimately such a 1D distinction is unsatisfying: as explanations, such percentages are incomplete, in the same manner as “nature vs nurture” debates. I hope to nail down a more robust account of the ecological pressures that produced language in the first place. I plan to read this book next:
https://www.amazon.com/Why-We-Talk-Evolutionary-Evolution/dp/0199563462
Cheers. 🙂
[WG12] Wilkins & Griffiths (2012). Evolutionary debunking arguments in three domains.
Kevin,
Thanks for the follow-up. I admit that I previously had only a passing familiarity with the argumentative theory, and having now read a bit more I am finding that I see more of its merits, in large part because I think I have a better handle on the scope of the idea.
I found Dan Sperber’s interview at edge.org to be particularly illuminating, especially with respect to his points about the prevalence of misunderstandings of the theory (to which I think I was party) and the emphasis on epistemic convergence. These clarifications are helpful in seeing the selective advantage that goes with reasoning in the social domain. There are still pieces of the puzzle, however, that I find difficult to fit together.
Sperber makes the point that persuasion by itself wouldn’t do anything unless we also possess the faculty that can be persuaded. As I understand it, the theory posits a co-evolution of these faculties but it seems like it might be difficult for this to get off the ground. It seems more promising to me if there is a somewhat robust meta-inferential faculty that is then utilized by this social dynamic to accelerate the expansion to more complex reasoning and evaluation faculties. Perhaps this is even what is intended by the theory?
On my previous point about recognizing a sense of logical relation prior to semantic elaboration, I am still inclined to see that there is something informative here. I appreciate the degree to which we can confabulate in the process of trying to communicate internal states, but I also see that this is most prominent when experiments are set up to essentially require participants to explain something that was probably never anything but a single-layer intuition in the first place (e.g., the “attractiveness” example in the video you linked). If I infer the utility of a spear by combining inferences about the superiority of sharpened stones for cutting, sticks for acting at a distance, and the ability of wrapped vines to secure objects, this can all happen without language, and it seems that my passing on of this meta-inference has no reliance on confabulation. I don’t see why more abstract meta-inferential reasoning cannot employ the same types of relations.
Lastly, I just want to throw out something that occurred to me in thinking about the biases. In the predictive models of Clark and Hawkins, confirmation bias and motivated reasoning can also be theorized in relation to energy conservation (use the patterns and relations you already have) and, similarly, as an artifact of the way associative networks operate, wherein data that “tickles” existing associations is more readily grafted into the tree. So it seems plausible that the argumentative theory is helpful but not exclusive.
Travis,
Re pg3: I don’t know :(. I need to wrangle all the competing theories of the evolution of language, compare the competing narratives about selection pressures (group vs sexual vs individual).
Re pg4: One way of expressing the barrier between subconscious inference & conscious reasoning is the symbolic-subsymbolic debate. At its core, the brain is a subsymbolic processor: neural information processing has no native mechanism for deductive logic nor symbol manipulation. You might enjoy an earlier post of mine, which explores this issue: https://kevinbinz.com/2016/06/04/philosophical-implications-of-objects/
Suffice it to say that I haven’t solved this question yet. Keep your eyes peeled for future entries entitled “The Invention of Logic” 🙂 I just purchased Gary Marcus’ The Algebraic Mind, which I expect may help. You may enjoy this paper, which outlines the neural net – subsymbolic link more clearly than I could: https://stanford.edu/~jlmcc/papers/PDP/Chapter1.pdf
Re pg5: there are three categories of responses to the heuristics & biases literature.
1. Panglossian – There may be minor errors, but irrationality is in principle not possible
2. Apologist – Behavior is often in error, but people are doing the best they can, and are not irrational
3. Meliorist – Behavior is often irrational, but people can learn to do better
Your appeal to bounded rationality is a common theme in the Apologist camp (cf. the writings of Gigerenzer). While there is much to recommend the ecological focus of these kinds of arguments, I tend to sympathize with the Meliorist framework (cf. Kahneman and Stanovich). However, it is unclear to me how this debate (and energy conservation) bears on the theory of argumentative reasoning.
As an aside, I think I’ll break this post in half: cutting the “Interdisciplinary Integration” section and making it a standalone post.
I guess part of the issue is that I’m not sure why reasoning should be restricted to the linguistic domain. In your view, is the spear example not a form of reasoning?
My comments on biases were only intended to suggest that the argumentative theory has competition in explaining things like confirmation bias. Understanding prediction as a consequence of energy minimization (a la Karl Friston) leads to the recognition that a bias which causes us to favor existing patterns is explainable as an energetically motivated adaptation.
I elaborated on some of these ideas in a new post, The Tripartite Mind. There, I claim that the Linguistic Mind can only express information that appears in consciousness & working memory (WM). Your spear example was:
> If I infer the utility of a spear by combining inferences about the superiority of sharpened stones for cutting, sticks for acting at a distance and the ability of wrapped vines to secure objects, this can all happen without language and it seems that my passing on of this meta-inference has no reliance on confabulation.
On my view, I suspect all three predicates from semantic memory are available in WM, and thus accessible to the Linguistic Mind. Other predicates, such as “I am hunting because I am bored”, might qualify as confabulations.
I’m fond of Friston’s Free Energy Principle (FEP), with some qualifications. That said, could you elaborate how it might be used to explain e.g., confirmation bias? I’m a little fuzzy on how { attn given to both sides of debate } versus { attn given to one side of debate } may be resolved by an appeal to FEP and/or the shape of one’s pre-existing knowledge base. It seems to me that social factors (ingroup alliances) have more predictive power, no?
Kevin,
I had meant the spear to be an example of something which did not require semantic expression to reach a conclusion. Obviously I can only relay it to you via semantic content and it can be expressed as such, but I have a distinct sense that the conclusion can be reached without semantics and it looks to me to be a form of reasoning.
Regarding FEP, it’s certainly possible that I have extrapolated too much from an overgeneralization of the theory, but it seems to me that a bias which would influence us to minimize “surprise” in our data consumption and evaluation accords nicely with a principle wherein the minimization of surprise is a generalization of the central tenet. I don’t know how to gauge whether this is more or less predictive than the social theory (and I certainly don’t think that they’re mutually exclusive) but it makes sense to me.
I follow. 🙂 Thanks man, for the interesting exchange.