Baddeley: Working Memory Quotes

Representational Neglect
Of particular relevance to [the case for regarding the sketchpad as a workspace rather than a perceptual gateway] is the phenomenon of representational neglect. Bisiach and Luzzatti (1978) report the case of two patients who were asked to describe from memory the cathedral square in Milan, their native city. In both cases they gave a good description, except that the left side of the square was hardly mentioned. They were then told to imagine walking around the square, turning round, and again giving a description. This time the previously neglected part of the square was on their right, and was now described in detail; the side that was previously well-described was now ignored. Baddeley and Lieberman (1980) suggested that this might represent the impairment of a system for representing information within the visuospatial sketchpad.
(Baddeley, Working Memory, pp 93)

3D Image Rotation
Much of the research on imagery was initially stimulated by the classic demonstration by Shepard and Metzler (1971) who required subjects to judge whether two representations of a three-dimensional object were identical or whether one was the mirror image of the other. The two were presented in different relative orientations. Response time proved to be a linear function of the difference in orientation between the two, just as if the subjects were mentally rotating one of the objects until it lined up with the other, and then making the judgment…

[Description of Kosslyn’s (1978) computational model of the brain literally rotating a data structure.]

Intons-Peterson (1996) showed that the speed of ‘visual scanning’ varied depending on semantically relevant but non-visual factors such as whether the subject was imagining herself carrying a weight or not…

Equally problematic for Kosslyn’s interpretation was a study by Hinton and Parsons (1988) who asked their subjects to imagine a wire cube sitting on a shelf in front of them. They were then asked to take hold of the nearest lower right-hand corner, and the furthest [upper] left-hand corner, and then orient the cube such that their left hand was immediately above their right. The task then was to describe the location of the remaining corners. Almost everyone reports that they lie upon a horizontal line, like a cubic equator. In fact, they form a crown shape. Hinton and Parsons suggest that rather than actually manipulating the representation, as Shepard or Kosslyn might suggest, subjects attempt to simulate it. When a problem is complex, they simply fail.
(Baddeley, Working Memory, pp 94-95)
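
The cube claim is easy to check numerically. Below is a minimal sketch of my own (not from Baddeley; it assumes NumPy, and the variable names are invented) that holds a unit cube by opposite ends of a body diagonal, treats that diagonal as vertical, and prints the height of each remaining corner as one circles the axis. The heights alternate between two levels, tracing a crown rather than an equator.

```python
# Illustrative check of the Hinton & Parsons cube demonstration (my own sketch).
import numpy as np

# The eight corners of a unit cube.
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)

# Hold the cube by opposite ends of a body diagonal and call that diagonal "vertical".
axis = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)

# An orthonormal frame (u, v, axis) lets us read off the azimuth around the axis.
u = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
v = np.cross(axis, u)

rows = []
for c in corners:
    height = c @ axis                                   # height along the vertical diagonal
    if np.isclose(height, 0) or np.isclose(height, np.sqrt(3)):
        continue                                        # skip the two grasped corners
    azimuth = np.degrees(np.arctan2(c @ v, c @ u)) % 360
    rows.append((azimuth, height))

print("azimuth (deg)   height along axis")
for azimuth, height in sorted(rows):
    print(f"{azimuth:12.1f}   {height:.3f}")

# The six free corners alternate between heights 1/sqrt(3) and 2/sqrt(3) as you
# walk around the axis: a zigzag "crown", not a horizontal "equator".
```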

Plato: Phaedo Quotes

On The Virtues Of Asceticism:
The soul reasons best when none of the senses troubles it… when it is most by itself, taking leave of the body and as far as possible having no contact or association with it in its search for reality.
(Plato, Phaedo, pp 102)

The body keeps us busy in a thousand ways because of its need for nurture… It fills us with wants, desires, fears, all sorts of illusions and much nonsense, so that, as it is said, in truth and in fact no thought of any kind ever comes to us from the body. Only the body and its desires cause war, civil discord, and battles… all this makes us too busy to practice philosophy.
(Plato, Phaedo, pp 103)

Philosophy sees that the worst feature of this imprisonment [to the body] is that it is due to desires, so that the prisoner himself is contributing to his own incarceration most of all. As I say, the lovers of learning know that philosophy gets hold of their soul when it is in that state, then gently encourages it and tries to free it by showing them that investigation through the eyes is full of deceit…
(Plato, Phaedo, pp 121)

On Reasonable Discourse:
There is no greater evil one can suffer than to hate reasonable discourse.
(Plato, Phaedo, pp 127)

You know how those in particular who spend their time studying contradiction in the end believe themselves to be very wise and that they alone have understood that there is no soundness or reliability in any object or in any argument… it would be pitiable when there is a true and reliable argument, and one that can be understood, if a man who has dealt with such arguments as appear at one time true, at another time untrue, should not blame himself or his own lack of skill but, because of his distress, in the end gladly shift the blame away from himself to the arguments, and spend the rest of his life hating and reviling reasoned discussion and so be deprived of truth and knowledge of reality.
(Plato, Phaedo, pp 128)

Against Induction:
Arguments of which the proof is based on probability are pretentious…
(Plato, Phaedo, pp 130)

On Death:
No one knows whether death may not be the greatest of all blessings for a man, yet men fear it as if they knew that it is the greatest of evils.
(Plato, Phaedo, pp 33)

On The Mediocrity Of Man:
The very good and the very wicked are both quite rare… most men are between those extremes.
(Plato, Phaedo, pp 127)

Evans: In Two Minds


In Two Minds surveys state-of-the-art dual-process theories, which have become enormously influential within modern psychology.

Dual-process theory holds that our mental lives are the result of the activity of, and interactions between, two separate minds. The first mind – System 1 – is the mind that drives your car when you daydream. It is fast, implicit, subconscious, associative, and evolutionarily ancient. System 2, in contrast, is the mind that generates the directions. It is cognitively taxing, language-oriented, conscious, abstract, and a relative newcomer on the ecological scene.

Numerous flavors of dual-process theory have emerged over the years. The theory has arisen, relatively independently, within the following traditions: social psychology, cultural psychology, psychometrics, developmental psychology, behavioral economics, and artificial intelligence. While such independent convergence suggests a common biological substrate, little effort has been made to synthesize these different perspectives until now. This anthology, itself written to complement an interdisciplinary academic conference, represents a significant step towards such a harmonization.

One of my few complaints is that the lack of a canonical vocabulary made comparative analysis between chapters difficult. That said, the breadth of subjects treated was astonishing, and the writing quality was generally excellent. I should mention that most chapters have been made publicly available via their originating universities. The following chapters struck me as especially significant:

Ch 2: Evans: How many dual process theories do we need?
Book editor and leader of the dual-process synthesis movement, Jonathan Evans presents his vision for the development of dual-process theory. He begins by presenting the clusters of properties associated with each mind. Insights from a diverse set of traditions are then collected, with particular interest taken in how communication between the two systems is mediated. The chapter closes with a hybrid model of mental architecture. An ideal one-shot introduction to the field.

Ch 3: Stanovich: Distinguishing the reflective, algorithmic, and autonomous minds: Is it time for a tri-process theory?
Psychometric legend Keith Stanovich rocks the boat with his proposal to bifurcate System 2 into reflective and algorithmic types. The algorithmic mind is what IQ tests measure, and is correlated with working memory. It is also thought to be a measure of cognitive decoupling, echoing the famous dictum attributed to Aristotle: “It is the mark of an educated mind to be able to entertain a thought without accepting it.” The reflective mind is, in contrast, driven by thinking dispositions. It explains how otherwise extremely intelligent people flounder: smarts need to be complemented by work ethic, mental resource management, innovation, and other properties. The chapter closes with a stunning taxonomy of thinking errors, which explores in great detail how the heuristics-and-biases literature motivates the movement.

Ch 5: Carruthers: An architecture for dual reasoning
Philosopher of mind Peter Carruthers explores concepts developed in his acclaimed work, The Architecture of the Mind. His argument, inspired by the massive modularity thesis of evolutionary psychology, moves at a brisk pace. Breathtaking structural diagrams are presented, and grounded in wide swathes of empirical data. Carruthers’ main thesis is that this architecture is shared between System 1 and System 2: when consciousness takes over, it disconnects the modules from the action-production systems in order to simulate various outcomes.

Ch 8: Thompson: Dual-process theories: a metacognitive perspective
While theorists have much to say about the different roles of the two systems, little is known about how they interact. Thompson seeks to fill this gap with an account of the emotional payload people experience when, say, they solve a riddle. Such Feelings Of Rightness (FORs, also known as yedasentience) are transmitted from System 1 to System 2, which intervenes only when the FOR is insufficiently strong.

Ch 10: Buchtel, Norenzayan: Thinking across cultures: implications for dual processes
It is an unfortunate truth that many psychological studies generalize their conclusions even though their subject pools consist entirely of American psychology undergraduates. In this important chapter, Buchtel and Norenzayan explain why such a scope conceals the true breadth of human cognitive diversity. Cross-cultural studies are analyzed, with the conclusion that the System 2 characteristics of East Asian subjects consistently diverge from those of Western students. Subjects immersed in East Asian culture tend to focus more attention on contextual features of problems. The implications of this difference – for theory modification, and for an account of how culture shapes the ontogeny of cognition – are explored.

Ch 11: Sun, Lane, Matthews: The two systems of learning: architectural perspective
Artificial intelligence researcher Ron Sun reviews his architectural innovation, CLARION. Since this computational architecture is documented at length elsewhere (including Wikipedia), Sun zeroes in on its relationship with dual-process theorizing. Specifically, and in contrast with Carruthers above, his software posits two distinct computational entities, and is able to recreate human idiosyncrasies via an exploration of the systems’ interactions.

Ch 13: Lieberman: What zombies can’t do: a social cognitive neuroscience approach to the irreducibility of reflective consciousness
For centuries, academics have countenanced philosophical zombies: what would it mean if a human being could behave normally but lack conscious experience? Lieberman here harnesses dual-process theories and neuroimaging data to explore the more focused question of whether such a phenomenon is nomologically possible.

Ch 15: Saunders: Reason and intuition in the moral life: a dual-process account of moral justification
Saunders examines the phenomenon of moral dumbfounding, via one of its manifestations regarding incest. Most people, when asked whether a short story about incest represents something morally wrong, will answer affirmatively and provide their reasons. However, when the storyteller stipulates those reasons away (both parties are psychologically unharmed, there is no risk of pregnancy, etc.), subjects generally maintain that the behavior is wrong, yet they cannot explain why they think so. The author goes on to explain how such moral dumbfounding is the result of clashing moral conclusions between the still-outraged System 1 and the deprived-of-reasons System 2.

This is my favorite cognitive science text to date.

Jaworski: Philosophy Of Mind

Part Of: Philosophy of Mind sequence
Content Summary: 500 words, 5 min reading time

Let me now review this textbook, which I recently had the pleasure of reading.

My Overall Impressions

A thorough survey of philosophy of mind. I enjoyed the author’s style, particularly his accessible vocabulary and his propensity for explicitly laying out the premises of the more involved arguments.

This textbook is unusual insofar as it presents novel content, as opposed to only synthesizing current knowledge. In a surprising move, Jaworski devotes two of its eleven chapters to exploring an idiosyncratic version of hylomorphism. While this regrettably diverts attention from other under-explored areas, I found Jaworski’s blend of Aristotelian and embodied cognition traditions to be worth reading.

High-Level Picture

The book opens with a taxonomic bang, sporting a graphical enumeration of different theories of mind, along with their interrelationships and metaphysical assumptions. Here are the ten leading theories, along with their key beliefs.

[Figure: the ten leading theories of mind and their key beliefs]

We can organize these theories into the following taxonomy:

[Figure: taxonomy of theories of mind]

Chapters 1-2

Here Jaworski does the necessary work of painting the philosophical landscape within which all theories of mind reside. He begins by discussing three problems that theories of mind tend to gravitate towards:

  1. The Problem Of Psychophysical Emergence: “how did mental activity appear within the sparse, particulate sea of the universe?”
  2. The Problem Of Other Minds: “how do people infer facts about the private mental lives of others?”
  3. The Problem Of Mental Causation: “how do mental phenomena affect physical phenomena?”

While these problems motivate mental theories, they do not prepare the reader for the breadth of discussions within the literature. In this light, the mental-physical distinction is explored, as are questions of first-person authority, subjectivity, qualia, mental representation, intentionality, and other topics.

Chapter 3

Substance dualism is discussed. This chapter was unusually well structured, perhaps on account of the length of time that society has countenanced its subject. Supporting arguments, grounded in modal conceivability-possibility links, proved inconclusive. Counterarguments (the problems of other minds, of interaction, and of explanatory impotence) extract the following concessions from the dualist:

  1. Denying knowledge of the mental states of others.
  2. Discarding conservation of energy *or* mental-physical causation.
  3. Rejecting the need to explain mental-physical correlations.

Chapter 4

The physicalist worldview is introduced, with some overlap with philosophy of science giants like Hempel. Only eleven pages are devoted to exploring theories of consciousness, including first-order-representation, higher-order-perception, higher-order-thought, and sensorimotor ideas.

Chapters 5-7

Reductive, non-reductive, and other specific physicalist theories are treated. The author was not shy about marshalling arguments against the current philosophical consensus, realization physicalism. Multiple-realizability arguments featured prominently in the discussion; I especially enjoyed the typology-based reductivist responses to the MRT.

Chapters 8-9

Dual-attribute theory and other specific non-physicalist theories are treated. An interesting discussion of Dennett’s and Wittgenstein’s arguments against qualia spiced up the presentation.

Chapters 10-11

Jaworski’s brand of hylomorphism is presented, along with a related hylomorphic theory of mind. While Aristotelian approaches are becoming more popular within philosophy – notably philosophy of biology – there exists an uncomfortable lack of exposition of their tenets, which these chapters help to fill. I found the connections with Merleau-Ponty’s empirical phenomenology, and with modern embodied cognition theorists like Noë and O’Regan, to be a helpful inspiration for future research.

Despite the few targeted criticisms above, I highly recommend this book to anyone looking to efficiently absorb philosophy of mind material.

Fodor: Modularity Of Mind

Part Of: Cognitive Modularity sequence
Content Summary: 1100 words, 11 min reading time

Let me now review this text, which is widely held to be one of the most influential works in the cognitive psychology tradition.

Motivations

A milestone within the cognitive psychology tradition. This extended argument for the modularity of input systems reoriented the field back when it was published in 1983, and responses continue to emerge to this day.

Modularity Of Mind is one of those rare books that combine a formidable vocabulary with a concise communicative style. Fodor’s dry humor and deep familiarity with relevant empirical results redeemed the occasionally abstruse discussion. The author’s penchant for polemics was not apparent in this essay. Five sections divide the work:

Part 1: Four Accounts Of Mental Structure

To Fodor, the four competing theories of mental structure are:

  1. Neo-Cartesianism
  2. Horizontal faculties
  3. Vertical faculties
  4. Associationism

While discussing Neo-Cartesianism, Fodor draws a distinction between two kinds of innate faculties: architectural and propositional. Specifically, there are two kinds of reactions to the tabula rasa. The first (architectural) is to propose that the mind does not begin life completely undifferentiated; rather, infants come into the world already possessing “cognitive furniture”, such as image rendering engines. The second (propositional) is to claim that humans are born with a certain body of pre-installed knowledge (e.g., Chomskyan universal grammar).

After the discussion regarding innate faculties, Fodor treats the horizontal/vertical distinction within architectural theories of cognition. Horizontal modular theories are those that would have cognitive furniture be domain-general. Such ideas go back to ancient Greece; a good current exemplar is what modern psychology believes about long-term memory. Vertical modular theories hold cognitive furniture to be domain-specific. Rather than fractionating the mind into perception, memory, and motivational modules, vertical theorists such as Franz Gall (father of phrenology) would insist on different modules for mathematics, music, poetry, etc. Gall would go on to say that there is no such thing as domain-general memory. If there are similarities between musical memory and mathematical memory, that is merely a coincidental similarity across module implementations.

Finally, Associationism (incl. Behaviorism) is treated. Unsurprisingly, given the author’s functionalist credentials, arguments are presented that purport to demonstrate the inadequacy of the movement.

Part 2: A Functional Taxonomy Of Cognitive Mechanisms

Fodor outlines a three-tier mental architecture: transducers, input systems, and central systems. The brain is thought to transduce signals via sensory organs, and to feed such raw data to input processing systems. These iteratively raise the level of abstraction, saving intermediate results into states known as interlevels. The final results of the input systems are then presented to the central systems, which are responsible for binding them into coherent beliefs with the help of background knowledge. Interestingly, Fodor holds that language processing is its own sensory system, distinct from acoustic processing, and that this system encapsulates the entire lexicon. Organism output (behavior) was not considered.
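
To make the direction of information flow concrete, here is a toy sketch of the three tiers (entirely my own illustration, not Fodor’s formalism; the class names and thresholds are invented). A transducer hands raw data to an encapsulated input module, which iteratively abstracts it while keeping its interlevels hidden, and a central system then fixes a belief using background knowledge the module never sees.

```python
# Toy sketch of Fodor's three-tier picture (my own illustration, not Fodor's):
# transducer -> input module -> central system.
from dataclasses import dataclass, field


@dataclass
class Transducer:
    """Converts an environmental signal into raw, uninterpreted data."""

    def run(self, signal: str) -> list:
        return [float(ord(ch)) for ch in signal]        # stand-in for sensory transduction


@dataclass
class InputModule:
    """Domain-specific and encapsulated: sees only its own input, never central beliefs."""

    interlevels: list = field(default_factory=list)     # hidden intermediate representations

    def run(self, raw: list) -> str:
        level1 = [x / 255 for x in raw]                 # iteratively raise the abstraction level...
        level2 = sum(level1) / len(level1)
        self.interlevels = [level1, level2]             # ...but keep the interlevels inside
        return "bright" if level2 > 0.3 else "dark"     # shallow, non-conceptual output


@dataclass
class CentralSystem:
    """Unencapsulated: fixes beliefs from shallow outputs plus background knowledge."""

    background: dict = field(default_factory=lambda: {"lights_usually_on": True})

    def fixate(self, shallow: str) -> str:
        if shallow == "dark" and self.background["lights_usually_on"]:
            return "belief: the power is probably out"
        return f"belief: the scene is {shallow}"


percept = Transducer().run("hello world")
shallow = InputModule().run(percept)
print(CentralSystem().fixate(shallow))                  # belief: the scene is bright
```

The only point of the sketch is the one-way flow: the input module never consults the central system’s background knowledge, which is the informational encapsulation discussed below.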

Part 3: Input Systems As Modules

The most empirically rich and impactful section. I will briefly sketch each subsection.

  1. Domain specificity. There appear to be separate mechanisms to process distinct stimuli. While several systems may share select resources, they never share information.
  2. Mandatory operation. While human beings can ignore their phenomenological experiences, they cannot consciously repress them.
  3. Hidden interlevels. Introspection cannot unearth the intermediate states of visual stimuli transformation, only the finished product.
  4. Fast processing. Driven by evolutionary pressures, sensory processing is very rapid. For example, many people are able to produce a mirrored language stream that trails the original by an astonishing one-quarter of a second.
  5. Informational encapsulation. In principle, input processing can never access the organism’s broader knowledge base. There are few, if any, feedback loops that inform sensory processing.
  6. Shallow outputs. Input systems do not issue beliefs, but rather non-conceptual (“shallow”) information. Other systems are responsible for subsequent conceptual fixation.
  7. Fixed neural architecture. In contrast with central processes, input systems appear to be localized to specific neural locations (e.g., Wernicke’s Area for language processing).
  8. Idiosyncratic breakdown patterns. Brain damage is associated with selective, severe failures of input processing, not with a general, across-the-board deficit.
  9. Characteristic ontogeny. Cognitive structural maturation occurs in an innately specified way.

Informational encapsulation is singled out as the most important element of the thesis. This feature explains how an organism protects its raw percepts from contamination by its own biases. Constraining information flow is essential to human beings, and this feature goes a long way toward motivating the existence of the others.

During his discussion of shallow outputs, Fodor makes an interesting observation about conceptual fixation. Human concepts are organized hierarchically: “a poodle is a dog is a mammal is a physical organism is a thing”. Central non-modular systems must locate their conclusions at a specific level within this hierarchy. Interestingly, beliefs tend to fixate at a particular level (e.g., “dog” in the above example).

What makes the “dog” level so special? It tends to be: (a) a high-frequency descriptor; (b) learned earliest within development; (c) the least abstract member that is monomorphemically lexicalized; (d) the easiest to define without reference to other items in the hierarchy; (e) the most informationally dense, in the sense of being the most productive item when one asks for the properties of each item in the hierarchy from most to least abstract; (f) used most frequently in everyday descriptions; (g) used most frequently in subvocal descriptions; (h) the most abstract member that lends itself to visual representation. These facts call out for explanation and further research.

Part 4: Central Systems

Fodor perceives little direct evidence with which to explicate central processes, so he resorts to analogy. Scientific confirmation is presented as an analogue of psychological belief fixation. An enthusiast of Quinean naturalized epistemology, Fodor is also sympathetic to Quinean holism: the view that any belief can in principle affect any other. But requiring unconstrained information transfer is a recipe for intractable computation. This is the deep trouble underlying the frame problem of artificial intelligence. According to Fodor, intractability is precisely why academic journals tend to avoid topics of general intelligence.
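
Fodor’s tractability worry can be made concrete with a back-of-the-envelope count (my own illustration, not Fodor’s arithmetic): if any stored belief may in principle bear on the fixation of a new one, a fully holistic reasoner faces 2^n candidate evidence sets.

```python
# Back-of-the-envelope illustration of why unconstrained relevance is intractable:
# with n stored beliefs, there are 2**n possible evidence sets to consider.
for n in (10, 20, 40, 80):
    print(f"{n:3d} beliefs -> {2**n:,} possible evidence sets")
```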

I found the previous section on input modules to be of greater import. Fodor’s arguments here are empirically impoverished, and his vague notions of networked learning leave much to be desired. If this section characterized the entirety of the text, the reader would be better advised to research modern probabilistic graphical models, and attempts within the AI community to approximate universal induction.

Part 5: Caveats and Conclusions

The essay concludes with a few comments regarding modularity and epistemic boundedness (“are there truths that we are not capable of grasping?”). After reviewing the historical discussion surrounding bounded cognition, Fodor ultimately has little to say on the matter, arguing that this conversation should proceed with little appeal to concepts of modularity. He closes with self-styled gloomy remarks about how our best thinkers have consistently evaluated local phenomena more effectively than global phenomena (cf. deduction vs. confirmation theory), and about how this sociological reality is unlikely to change in the near future.

An incisive, important text that helps to place modern cognitive science debates in sharper focus.

Glimcher: Neuroeconomic Analysis 2: Because vs. As-If

The lifeblood of the theoretician is constraint.

To understand why, one must look to the size of conceptspace. The cardinality of conceptspace is formidable. For any given fact, how many explanations of that fact can be constructed? How many theories of physics could explain what it means for an apple to fall from a tree? Putting on our Church-Turing goggles, we know that every event can be at least approximated by a string of binary code that represents the relevant data.
The number of programs that can be fed to a Turing Machine to generate that particular string is unbounded.
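
One way to see the point (a toy illustration of my own, not Glimcher’s): take any program that prints a given observation string, and padding it with inert instructions yields an unbounded family of syntactically distinct programs with identical output.

```python
# Toy illustration: unboundedly many distinct "programs" generate the same data string.
observation = "0110"            # a stand-in for some encoded observation

def make_program(padding: int) -> str:
    # Each value of `padding` yields a syntactically distinct program with the same output.
    return ("x = 0\n" * padding) + f"print({observation!r})"

for k in range(3):
    program = make_program(k)
    print(f"--- program {k} ({len(program)} chars) ---")
    exec(program)               # every one prints '0110'
```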

Constraint is how theoreticians slice away ambiguity, how they localize truth in conceptspace. To say “no”, to say “that is not possible”, is a creative and generative act. Constraint is a goal, not a limitation.

After summarizing each of the three fields he seeks to link, Glimcher spends an entire chapter responding to a particular claim of Milton Friedman’s, one that permeates modern economics. Friedman argued that it is enough for economics to model behavior *as if* it were congruent with some specified mathematical structure. In his words:

“Now of course businessmen do not actually solve the system of simultaneous equations in terms of which the mathematical economist finds it convenient to express this hypothesis… The billiard player, if asked how he decides where to hit the ball, may say that he “just figures it out” then also rubs a rabbit’s foot just to make sure… the one statement is about as helpful as the other, and neither is a relevant test of the associated hypothesis”.

This, Glimcher argues, is precisely the wrong way to go about economics, and about scientific inquiry in general. Because human beings are embodied, there exist physical, causal mechanisms that generate their economic behavior. To turn attention away from causal models is to throw away an enormously useful source of constraint. It is time for economics to move towards a Because model, a Hard model that is willing to speak of the causal web.

Despite this strong critique of economic neo-classicism, Glimcher is unwilling to abandon its traditions in favor of the emerging school of behavioral economics. Glimcher insists that the logical starting place of neoclassicism – the concise set of axioms – retains its fecundity. Instead, he calls for a causal revolution within theories such as expected utility; from Soft-EU to Hard-EU.

According to Glimcher, "the central linking hypothesis" of neuroeconomics is that, when choice behavior conforms to the neoclassical axioms (GARP), the new neurological natural kind, subjective value, must obey the constraints imposed by GARP. Restated: within its predictive domain, classical economics constrains neuroscience, because utility IS a neural encoding known as subjective value.
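
To see what "obeying GARP" amounts to operationally, here is a compact sketch of a revealed-preference consistency check (my own toy code, with invented price and choice data; it is not Glimcher’s implementation). Build the revealed-preference relation from observed choices, take its transitive closure, and look for a cycle that includes a strict preference. The linking hypothesis says that whatever such behavioral tests certify, the neural subjective-value signal must respect as well.

```python
# Sketch of a GARP consistency check on observed choice data (illustrative only).
import numpy as np

prices  = np.array([[1.0, 2.0], [2.0, 1.0]])   # prices faced on each observation
choices = np.array([[2.0, 1.0], [1.0, 2.0]])   # bundles actually chosen
n = len(prices)

spent   = np.einsum("ij,ij->i", prices, choices)   # cost of the chosen bundle
cost_of = prices @ choices.T                        # cost of every bundle at every price
direct  = spent[:, None] >= cost_of                 # x_i weakly revealed preferred to x_j
strict  = spent[:, None] >  cost_of                 # x_i strictly revealed preferred to x_j

# Transitive closure of the weak relation (Floyd-Warshall style).
revealed = direct.copy()
for k in range(n):
    revealed |= revealed[:, [k]] & revealed[[k], :]

# GARP violation: x_i (indirectly) revealed preferred to x_j, yet x_j strictly preferred to x_i.
violated = bool((revealed & strict.T).any())
print("GARP violated!" if violated else "Choices are GARP-consistent.")
```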

Glimcher then goes on to establish more linkages, which he uses to constrain the data deluge currently experienced within neuroscience. Simultaneously, he employs known impossibilities of the human nervous system to close the door on physically non-realizable economic models. It is to neuroscience that we turn next.

Glimcher: Neuroeconomic Analysis 1: Intertheoretic Reduction

Neuroeconomic Integrative Research (1)

Can we describe the universe satisfactorily, armed with the language of physics alone? Such questions of inter-theoretic reduction lie at the heart of much scientific progress in the past decades. The field of quantum chemistry, for example, emerged as physicists recognized that quantum mechanics had much to say about the emerging shape of the periodic table.

Philosophical inquiry into the nature of inter-theoretic (literally, "between theories") reduction has produced several results in recent years. In philosophy of mind, the question is the primary differentiator between reductive and non-reductive physicalism.

But what does it mean for one discipline to be linked, or reduced, to another? Philosophers imagine the discourse of a scientific field to leverage its own idiosyncratic vocabulary. For example, the natural kinds (the vocabulary) of biology include concepts such as "species", "gene", and "ecosystem". In order to meaningfully state that biology reduces to chemistry, we must locate equivalencies between the natural kinds of the two disciplines. The chemical-biological identity between "deoxyribonucleic acid" and "gene" suggests that such a vision of reduction might be possible. The seeming absence of a chemical analogue to the biological kind "ecosystem" would argue against such optimism.

Where does Glimcher stand in the midst of this conversation? The academic field he gave birth to, neuroeconomics, attempts to forge connecting ties between neuroscience, psychology, and economics. Glimcher is careful to disavow ambitions of a total reduction. Rather, he claims that failed inter-theoretic links are signals of disciplinary misdirection. For example, if the neuroscientific kind of "stochastic decision" has no analogue in the economic kind of "utility", this would suggest that economics should reform towards a more probabilistic vision. This model of innovation-through-linking is, according to Glimcher, the core reason to pursue links between neuroscience, psychology, and economics: the act of reducing produces insight.

I sympathize with Glimcher’s vision. I would argue that the pull within the sciences towards specialization is well counter-balanced by interdisciplinary work of precisely the kind that he is championing.

That said, I would criticize Glimcher’s vision of intertheoretic reduction as being inflexible. His goal is, essentially, to chain the abstract field of economics to one particular piece of meat – the human brain. This seems too limited in scope: shouldn’t economics be able to say something about the decision-making capacities of non-mammalian species, or computing agents, or alien races? To shamelessly leverage a metaphor from computer science, reductive schemes should be refactored such that human cytoarchitecture is a function parameter, instead of a hard-coded constant.
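
The refactoring metaphor can be made literal with a toy sketch (entirely my own, with hypothetical names throughout): write the abstract choice theory against an interface, and let the human brain, a silicon agent, or anything else be passed in as the implementation.

```python
# Toy sketch of "parameterizing the implementation" (my own illustration).
from typing import Protocol


class ValueEncoder(Protocol):
    def subjective_value(self, option: str) -> float: ...


class HumanBrain:
    def subjective_value(self, option: str) -> float:
        return {"apple": 2.0, "banana": 1.0}.get(option, 0.0)   # stand-in for a neural encoding


class SiliconAgent:
    def subjective_value(self, option: str) -> float:
        return float(len(option))                               # some other implementation


def choose(options: list, encoder: ValueEncoder) -> str:
    """The abstract-principle level: maximize value, whoever computes it."""
    return max(options, key=encoder.subjective_value)


print(choose(["apple", "banana"], HumanBrain()))     # apple
print(choose(["apple", "banana"], SiliconAgent()))   # banana
```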

An interesting link to David Marr’s work underscores this point. Marr, one of the founders of computational neuroscience, is best remembered for the idea, from his most popular book (Vision), that causal systems can be evaluated at three different levels. The top level is that of abstract principle, the middle level is algorithmic, and the bottom is implementation. Marr strove to demonstrate how, for certain areas of vision like stereopsis, these three "levels of explanation", and their inter-connections, were already essentially solved. It is interesting to link his idea of explanatory level with the present neuroeconomic proposal. Would Marr consider economics to be isomorphic to abstract principle, psychology to algorithm, and neuroscience to implementation? If so, this would add another voice of support to my proposal for the "parameterization" of low-level details: Marr was very willing to detail multiple algorithms that interchangeably satisfy the same abstract specification.

[Sequence] Neuroeconomics

Neighboring Fields

Neurobiological Mechanisms

Classical Reinforcement Learning

Pulling It All Together

Philosophy of Decision Making

Deprecated