Glimcher: Neuroeconomic Analysis 2: Because vs. As-If

The lifeblood of the theoretician is constraint.

To understand why, one must look to the size of conceptspace. The cardinality of conceptspace is formidable: for any given fact, the candidate explanations are not merely many but unbounded. How many theories of physics could explain what it means for an apple to fall from a tree? Putting on our Church-Turing goggles, we know that every event can be at least approximated by a string of binary code that represents that data. The number of programs that can be fed to a Turing machine to generate that particular string is unbounded.
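A toy sketch can make this unboundedness vivid (using Python functions as stand-ins for Turing machine programs; the examples are mine, not Glimcher's): many syntactically distinct programs emit the same data string, and appending a no-op to any of them yields yet another.

```python
# Five distinct "programs" that all generate the same binary string.
# Appending a no-op statement to any of them yields a sixth, so the
# set of programs producing this one string is unbounded.
target = "10110"

programs = [
    lambda: "10110",                              # a literal
    lambda: "101" + "10",                         # concatenation
    lambda: "".join(["1", "0", "1", "1", "0"]),   # join of characters
    lambda: format(22, "05b"),                    # 22 rendered in 5-bit binary
    lambda: "01101"[::-1],                        # reversal of another string
]

print(all(p() == target for p in programs))  # True
```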

Constraint is how theoreticians slice away ambiguity, how they localize truth in conceptspace. To say “no”, to say “that is not possible”, is a creative and generative act. Constraint is a goal, not a limitation.

After summarizing each of the three fields he seeks to link, Glimcher spends an entire chapter responding to a particular claim of Milton Friedman's, one that still permeates modern economics. Friedman argued that it is enough for economics to model behavior *as if* it conformed to some specified mathematical structure. In his words:

“Now of course businessmen do not actually solve the system of simultaneous equations in terms of which the mathematical economist finds it convenient to express this hypothesis… The billiard player, if asked how he decides where to hit the ball, may say that he “just figures it out” then also rubs a rabbit’s foot just to make sure… the one statement is about as helpful as the other, and neither is a relevant test of the associated hypothesis”.

This, Glimcher argues, is precisely the wrong way to go about economics, and scientific inquiry in general. Because human beings are embodied, there exist physical, causal mechanisms that generate their economic behavior. To turn attention away from causal models is to throw away an enormously useful source of constraint. It is time for economics to move towards a Because Model, a Hard model, that is willing to speak of the causal web.

Despite this strong critique of economic neo-classicism, Glimcher is unwilling to abandon its traditions in favor of the emerging school of behavioral economics. Glimcher insists that the logical starting place of neoclassicism – the concise set of axioms – retains its fecundity. Instead, he calls for a causal revolution within theories such as expected utility; from Soft-EU to Hard-EU.

According to Glimcher, “the central linking hypothesis” of neuroeconomics is that, when choice behavior conforms to the neoclassical axioms (GARP, the Generalized Axiom of Revealed Preference), the new neurological natural kind of subjective value must obey the constraints imposed by GARP. Restated: within its predictive domain, classical economics constrains neuroscience, because utility IS a neural encoding known as subjective value.
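To make the constraint concrete, here is a minimal sketch of what a GARP consistency check looks like operationally (my illustration, not Glimcher's formalism): given observed prices and chosen bundles, flag any pair where transitive revealed preference conflicts with a strict direct preference in the other direction.

```python
import numpy as np

def garp_violations(prices, bundles):
    """GARP: if bundle i is (transitively) revealed preferred to j,
    then j must not be strictly directly revealed preferred to i.
    prices, bundles: (n_obs, n_goods) arrays of observed data."""
    p = np.asarray(prices, float)
    x = np.asarray(bundles, float)
    n = len(x)
    # cost[i, j] = expenditure on bundle j at the prices faced when i was chosen
    cost = p @ x.T
    own = np.diag(cost)
    weak = own[:, None] >= cost    # i directly revealed preferred to j
    strict = own[:, None] > cost   # ... strictly
    # transitive closure of the weak relation (Warshall's algorithm)
    R = weak.copy()
    for k in range(n):
        R |= R[:, [k]] & R[[k], :]
    return [(i, j) for i in range(n) for j in range(n)
            if R[i, j] and strict[j, i]]
```

On consistent data the returned list is empty; a non-empty list means no utility function can rationalize the choices, and, on Glimcher's linking hypothesis, no well-behaved subjective-value encoding could have generated them.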

Glimcher then goes on to establish more linkages, which he uses to constrain the data deluge currently experienced within neuroscience. Simultaneously, he employs known impossibilities of the human nervous system to close the door on physically non-realizable economic models. It is to neuroscience that we turn next.


Glimcher: Neuroeconomic Analysis 1: Intertheoretic Reduction

Neuroeconomic Integrative Research (1)

Can we describe the universe satisfactorily, armed with the language of physics alone? Such questions of inter-theoretic reduction lie at the heart of much scientific progress in recent decades. The field of quantum chemistry, for example, emerged as physicists recognized that quantum mechanics had much to say about the emerging shape of the periodic table.

Philosophical inquiry into the nature of inter-theoretic (literally, “between theories”) reduction has produced several recent results. In philosophy of mind, the question is the primary differentiator between reductive and non-reductive physicalism.

But what does it mean for a discipline to be linked, or reduced, to another? Philosophers imagine the discourse of a scientific field to leverage its own idiosyncratic vocabulary. For example, the natural kinds (the vocabulary) of biology include concepts such as “species”, “gene”, and “ecosystem”. To meaningfully state that biology reduces to chemistry, we must locate equivalencies between the natural kinds of the two disciplines. The chemical-biological identity between “deoxyribonucleic acid” and “gene” suggests that such a reduction might be possible. The seeming absence of a chemical analogue to the biological kind of “ecosystem” argues against such optimism.

Where does Glimcher stand in the midst of this conversation? The academic field he gave birth to, neuroeconomics, attempts to forge connecting ties between neuroscience, psychology, and economics. Glimcher is careful to disavow ambitions of a total reduction. Rather, he claims that failed inter-theoretic links are signals of disciplinary misdirection. For example, if the neuroscientific kind of “stochastic decision” has no analogue in the economic kind of “utility”, this would suggest that economics should reform towards a more probabilistic vision. This model of innovation-through-linking is, according to Glimcher, the core reason to bind neuroscience, psychology, and economics together: the act of reducing produces insight.

I sympathize with Glimcher’s vision. I would argue that the pull within the sciences towards specialization is well counter-balanced by interdisciplinary work of precisely the kind that he is championing.

That said, I would criticize Glimcher’s vision of intertheoretic reduction as being inflexible. His goal is, essentially, to chain the abstract field of economics to one particular piece of meat – the human brain. This seems too limited in scope: shouldn’t economics be able to say something about the decision-making capacities of non-mammalian species, or computing agents, or alien races? To shamelessly leverage a metaphor from computer science, reductive schemes should be refactored such that human cytoarchitecture is a function parameter, instead of a hard-coded constant.
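The refactoring metaphor can be made literal with a toy sketch (the names and numbers here are hypothetical, not drawn from Glimcher): a theory that takes the implementing substrate as a parameter applies unchanged to cortex, silicon, or anything else.

```python
# Hypothetical sketch: the substrate's physical limits are a parameter
# of the theory, not a constant baked into it.
from dataclasses import dataclass

@dataclass
class Substrate:
    """Physical limits of whatever implements the decision process."""
    value_precision: float   # smallest discriminable utility difference

HUMAN_CORTEX = Substrate(value_precision=0.1)     # illustrative value
SILICON_AGENT = Substrate(value_precision=1e-9)   # illustrative value

def prefers(u_a, u_b, substrate):
    """Choice is determinate only when the utility gap is resolvable
    by the given substrate; otherwise the theory predicts indifference."""
    gap = u_a - u_b
    if abs(gap) < substrate.value_precision:
        return "indifferent"
    return "A" if gap > 0 else "B"
```

A hard-coded theory evaluates `prefers(u_a, u_b, HUMAN_CORTEX)` everywhere; the parameterized version lets the same axioms speak about non-mammalian species, computing agents, or alien races simply by swapping the argument.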

An interesting link to David Marr's work underscores this point. Marr was one of the founders of computational neuroscience, and the most salient idea of his most popular book, *Vision*, was that causal systems can be evaluated at three different levels: the top level is abstract principle, the middle level is algorithm, and the bottom level is implementation. Marr strove to demonstrate how, for certain areas of vision like stereopsis, these three “levels of explanation”, and their inter-connections, were already essentially solved. It is interesting to link his idea of explanatory level with the present neuroeconomic proposal. Would Marr consider economics to correspond to abstract principle, psychology to algorithm, and neuroscience to implementation? If so, this would add another voice of support to my proposal for the “parameterization” of low-level details: Marr was very willing to detail multiple algorithms that interchangeably satisfy the same abstract specification.
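Marr's willingness to swap algorithms under a fixed computational-level specification can be sketched in a few lines (sorting as a stand-in for stereopsis; the example is mine, not Marr's): one statement of *what* is computed, two interchangeable statements of *how*.

```python
# One computational-level specification, two interchangeable algorithms.

def spec_satisfied(xs, ys):
    """Computational level: WHAT is computed -- a sorted copy of the input."""
    return sorted(xs) == ys

def insertion_sort(xs):
    """Algorithmic level, option 1: insert each item into place."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    """Algorithmic level, option 2: divide, recurse, and merge."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [3, 1, 2]
assert spec_satisfied(data, insertion_sort(data))
assert spec_satisfied(data, merge_sort(data))
```

The abstract principle (sortedness) says nothing about which algorithm realizes it, just as a parameterized economics would say nothing about which substrate realizes utility.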

[Sequence] Neuroeconomics

Neighboring Fields

Neurobiological Mechanisms

Classical Reinforcement Learning

Pulling It All Together

Philosophy of Decision Making