Knowledge: An Empirical Sketch

Table Of Contents

  • Introduction
    • All The World Is Particle Soup
    • Soup Texture
  • Perceptual Tunnels
    • On Resolution
    • Sampling
    • Light Cones, Transduction Artifacts, Translation Proxies
  • The Lens-dependent Theorybuilding Triad
    • Step One: Conceptiation
    • Step Two: Graphicalization
    • Step Three: Annotation
    • Putting It All Together: The Triad
  • Conclusion
    • Going Meta
    • Takeaways

Introduction

All The World Is Particle Soup

Scientific realism holds that the entities scientists refer to are real things. Electrons are not figments of our imagination; they exist independently of our minds. What does it mean for us to view particle physics with such a lens?

Here’s what it means: every single thing you see, smell, touch… every vacation, every distant star, every family member… it is all made of particles.

This is an account of how the nervous system (a collection of particles) came to understand the universe (a larger collection of particles). How could Particle Soup ever come to understand itself?

Soup Texture

Look at your hand. How many types of particles do you think you are staring at? A particle physicist might answer: nine. You have four first-generation fermions (roughly, the particles that comprise matter) and five bosons (roughly, the particles that carry force). Sure, you may get lucky and find a couple of exotic particles within your hand, but such a nuance would not detract from the moral of the story: in your hand, the domain (number of types) of particles is very small.

Look at your hand. How large a quantity of particles do you think you are staring at? The object of your gaze is a collection of about 700,000,000,000,000,000,000,000,000 (7.0 * 10^26) particles. Make a habit of thinking this way, and you’ll find a new appreciation for the Matrix Trilogy. 🙂 In your hand, the cardinality (number of tokens) of particles is very large.
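
Where does a figure like 7.0 * 10^26 come from? Here is a back-of-the-envelope sketch, assuming a hand mass of roughly 0.4 kg (an illustrative assumption) and counting quarks and electrons as the particle tokens:

```python
# Back-of-the-envelope particle count for a human hand (a sketch).
# Assumptions, for illustration only: hand mass ~0.4 kg, made almost
# entirely of nucleons; 3 valence quarks per nucleon; and (for light
# elements) roughly 1 electron per 2 nucleons.

HAND_MASS_KG = 0.4          # assumed hand mass
NUCLEON_MASS_KG = 1.67e-27  # mass of a proton or neutron

nucleons = HAND_MASS_KG / NUCLEON_MASS_KG  # ~2.4e26
quarks = 3 * nucleons                      # ~7.2e26
electrons = nucleons / 2                   # ~1.2e26

print(f"quarks:    {quarks:.1e}")
print(f"electrons: {electrons:.1e}")
print(f"total:     {quarks + electrons:.1e}")  # order of 10^26, as advertised
```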

These observations generalize. There aren’t many kinds of sand in God’s Sandbox, but there is a lot of it, with different consistencies across space.

Perceptual Tunnels

On Resolution

Consider the following image. What do you see?

[Figure: Lincoln Resolution]

Your eyes filter images at particular spatial frequencies. At this default human frequency, your “primitives” are the pixelated squares. However, imagine being able to perceive this same image at a lower resolution (sound complicated? move your face away from the screen :P). If you do this, the pixels fade, and a face emerges.

Here, we learn that lenses of different resolution may complement one another, despite their imaging the same underlying reality. In much the same way, we can enrich our cognitive toolkit by examining the same particle soup with different “lens settings”.
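
If you want to play with this effect programmatically, here is a minimal sketch, assuming the Pillow library is installed and that the image above is saved locally as lincoln.png (a hypothetical filename):

```python
# Minimal sketch: simulate "stepping back from the screen" by low-pass
# filtering a block-pixelated image. Assumes Pillow (pip install Pillow)
# and a local file "lincoln.png" (hypothetical filename).
from PIL import Image, ImageFilter

img = Image.open("lincoln.png").convert("L")

for radius in (0, 2, 8):
    # A Gaussian blur discards high spatial frequencies; at larger radii
    # the pixel edges fade and the face emerges.
    img.filter(ImageFilter.GaussianBlur(radius=radius)).save(
        f"lincoln_blur_{radius}.png"
    )
```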

Sampling

By default, the brain collects no useful information. It is only by way of sensory transducer cells – specialized cells that translate the particle soup into Mentalese – that the brain gains access to some small slice of physical reality. With increasing quantity and type of these sensory organs, the perceptual tunnel burrowed into the soup becomes wide enough to support a lifeform.

Another term for the perceptual tunnel is the umwelt. Different biota experience different umwelts; for example, honeybees are able to perceive the Earth’s magnetic field as directly as we humans perceive the sunrise.

Perceptual tunneling may occur at different resolutions. For example, your proprioceptive cells create signals only on the event of coordinated effort of trillions and trillions of particles (e.g., the wind pushes against your arm). In contrast, your vision cells create signals at very fine resolutions (e.g., if a single photon strikes your photoreceptor, it will fire).

[Figure: Perceptual Tunneling]

Light Cones, Transduction Artifacts, Translation Proxies

Transduction is a physically embedded computational process. As such, it is subject to several pervasive imperfections. Let me briefly point towards three.

First, nature precludes the brain from sampling the entirety of the particle soup. Because your nervous system is embedded within a particular spatial volume, it is subject to one particular light cone. Since particles cannot move faster than the speed of light, you cannot perceive any non-local particles. Speaking more generally: all information outside of your light cone is closed to direct experience.

Second, the nervous system is an imperfect medium. It has difficulty, for example, representing negative numbers (ever try to get a neuron firing -10 times per second?). Another such transduction artifact is our penchant for representing information in a comparative, rather than absolute, format. Think of all those times you have driven on the highway with the radio on: when you turn onto a sidestreet, the music feels louder. This experience has nothing at all to do with an increased sound wave amplitude: it is an artifact of a comparison (music minus background noise). Practically all sensory information is stained by this compressive technique.
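
As a toy sketch of this comparative format (not a model of any actual neuron), imagine encoding a stimulus as its contrast against the background rather than as an absolute level:

```python
# Toy sketch of comparative (contrast) coding: the same absolute music
# level "feels" louder once the background noise drops.

def perceived_loudness(signal_db: float, background_db: float) -> float:
    """Encode the stimulus relative to its background, not absolutely."""
    return signal_db - background_db

MUSIC_DB = 70.0
print(perceived_loudness(MUSIC_DB, background_db=65.0))  # highway: 5.0
print(perceived_loudness(MUSIC_DB, background_db=40.0))  # sidestreet: 30.0
```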

Third, perceptual data may not represent the actual slice of the particle soup we want. To take one colorful example, suppose we ask a person whether they perceived a dim flashing light, and they say “yes”. Such self-reporting, of course, represents sensory input (in this case, audio vibrations). But this kind of sensory information is a kind of translation proxy to a different collection of particles we are interested in observing (e.g., the activity of the subject’s visual cortex).

This last point underscores an oft-neglected aspect of perception: it is an active process. Our bodies don’t just sample particles; they move particles around. Despite the static nature of our umwelt, our species has managed to learn ever more intricate scientific theories by virtue of sophisticated measurement technology; and measurement devices are nothing more than mechanized translation proxies.

The Lens-dependent Theorybuilding Triad

Step One: Conceptiation

Plato once described concept acquisition as “carving nature at its joints”. I will call this process (constructing Mentalese from the Soup) theory conceptiation.

[Figure: Theorybuilding: Conceptiation]

If you meditate on this diagram for a while, you will notice that theory conceptiation is a form of compression. According to algorithmic (Kolmogorov) information theory, the efficacy of compression hinges on how many patterns exist within your data. This is why you’ll find leading researchers claiming that:

Compression and Artificial Intelligence are equivalent problems

A caveat: concepts are not carved solely from perception; as one’s bag of concepts expands, such pre-existing mindware exerts an influence on the further carving up of percepts. This is what the postmoderns attribute to hermeneutics; this is the root of memetic theory; this is what is meant by the nature vs. nurture dialogue.
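
To make the compression framing concrete, here is a minimal sketch that uses an off-the-shelf compressor (zlib) as a crude, computable stand-in for Kolmogorov complexity, which is itself uncomputable:

```python
# Crude sketch: compressed size as a proxy for Kolmogorov complexity.
# Patterned data compresses well; noise barely compresses at all.
import os
import zlib

patterned = b"soup " * 200   # 1000 bytes, highly patterned
noisy = os.urandom(1000)     # 1000 bytes of noise

for name, data in [("patterned", patterned), ("noisy", noisy)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name}: compressed to {ratio:.0%} of original size")
```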

Step Two: Graphicalization

Once the particle soup is compressed into a set of concepts, relations between these concepts are established. Call this process theory graphicalization.

[Figure: Theorybuilding: Graphicalization]

If I were to ask you to complete the word “s**p”, would you choose “soap” or “soup”? How would your answer change if we were to have a conversation about food network television?

Even if I never once mention the word “soup”, you become significantly more likely to auto-complete that alternative after our conversation. Such priming is explained through concept graphs: our conversation about the food network activates food-proximate nodes like “soup” much more strongly than graphically distant nodes like “soap”.
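
Here is a minimal sketch of that spreading-activation story, on a made-up concept graph (the nodes and edge weights are illustrative assumptions, not measured quantities):

```python
# Toy spreading activation: priming "food network" boosts "soup",
# while the graphically distant "soap" stays dormant.
# Edge weights are made up for illustration.

graph = {
    "food network": {"food": 0.9},
    "food":         {"soup": 0.8},
    "cleaning":     {"soap": 0.8},
}

def spread(activation: dict, steps: int = 2) -> dict:
    """Propagate activation along weighted edges for a few steps."""
    for _ in range(steps):
        nxt = dict(activation)
        for node, level in activation.items():
            for neighbor, weight in graph.get(node, {}).items():
                nxt[neighbor] = nxt.get(neighbor, 0.0) + level * weight
        activation = nxt
    return activation

primed = spread({"food network": 1.0})
print(primed.get("soup", 0.0))  # ~0.72: strongly activated by the conversation
print(primed.get("soap", 0.0))  # 0.0: no activation reaches it
```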

Step Three: Annotation

Once the graph structure is known, metagraph information (e.g., “this graph skeleton occurs frequently”) is appended. Such metagraph information is not bound to any particular graph. Call this process theory annotation.

[Figure: Theorybuilding: Annotation]

We can express a common complaint about metaphysics as follows: theoretical annotation is invariant to changes in conceptiation and graphicalization results. In my view (as hinted at by my discussion of normative therapy), theoretical annotation is fundamentally an accretive process – it is logically possible to generate an infinite annotative tree; this is not seen in practice because of the computational principle of a cognitive speed limit (or, to make a cute analogy, the cognition cone).
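
As a toy rendering, annotation can be pictured as literally attaching metadata on top of the concept graph from the previous sketch (the annotation keys here are made up for illustration):

```python
# Toy sketch of annotation: meta-level observations accrete on top of
# a concept graph without altering the graph itself. Annotation keys
# are invented for illustration.

concept_graph = {
    "food network": {"food": 0.9},
    "food":         {"soup": 0.8},
    "cleaning":     {"soap": 0.8},
}

annotations = {
    "skeleton": "chain",            # "this graph skeleton occurs frequently"
    "skeleton_frequency": "high",
    "meta_notes": [
        "annotations can themselves be annotated",
        "in principle this tree could grow without bound",
    ],
}

theory = {"graph": concept_graph, "annotations": annotations}
print(theory["annotations"]["skeleton"])
```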

Putting It All Together: The Triad

Call the cumulative process of conceptiation, graphicalization, and annotation the lens-dependent theorybuilding triad.

[Figure: Theorybuilding: Lens-Dependent Triad]

Conclusion

Going Meta

One funny thing about theorybuilding is how amenable it is to recursion. Can we explain this article in terms of Kevin engaging in theorybuilding? Of course! For example, consider the On Resolution section above. Out of all possible attributes of theorybuilding, I deliberately chose to focus my attention on spatial resolution. What phase of the triad does that sound like to you? Right: theory conceptiation.

Takeaways

This article does not represent serious research. In fact, its core model – the lens-dependent theorybuilding triad – cites almost no empirical results. It is a toy model designed to get us thinking about how a cognitive process can construct a representation of reality. Here is an executive summary of this toy model:

  1. Perceptual tunneling is how organisms begin to understand the particle soup of the universe.
    1. Tunneling only occurs by virtue of sensory organs, which transduce some subset of data (sampling) into Mentalese.
    2. Tunneling is a local effect; it discolors its target, and it sometimes merely represents data located elsewhere.
  2. The Lens-Dependent Theorybuilding Triad takes the perception tunnel as input, and builds models of the world. There are three phases:
    1. During conceptiation, perception contents are carved into isolable concepts.
    2. During graphicalization, concept interrelationships are inferred.
    3. During annotation, abstracted properties and metadata are attached to the conceptual graph.

Modularity & The Argument From Design

Part Of: Cognitive Modularity sequence
See Also: Fodor: Modularity of Mind
Content Summary: 1600 words, 16min read

Introduction

This post presents an argument for a particular thesis, known as massive modularity. This thesis, particularly popular among evolutionary psychologists, states that the mind is rife with mental modules, and that cognitive life is the interplay between them.

What is a mental module? If you don’t have a clear grasp on what that means, I recommend just glancing at my summary of Fodorian modularity. Bear in mind, though, that here the term is used somewhat differently: modules here may possess only some subset of the listed properties.

The following argument is not my own; it is rather an interpretation of Carruthers’s argument, which is presented in this text, under Section 1.3.

Motivators From Biology

Carruthers starts by surveying the biological literature for instances of modularity. And he finds it, by the truckload:

There is a great deal of evidence from across many levels in biology to the effect that complex functional systems are built up out of assemblies of sub-components. This is true for the operations of genes, of cells, of cellular assemblies, of whole organs, of whole organisms, and of multi-organism units like a bee colony. And by extension, we should expect it to be true of cognition also, provided that it is appropriate to think of cognitive systems as biological ones, which have been subject to natural selection.

Amongst other sources, he cites the following research:

  • West-Eberhard, 2003. Developmental Plasticity and Evolution.
  • Seeley, 1995. The Wisdom of the Hive: The Social Physiology of Honey Bee Colonies.

We thus possess considerable biological reason to believe that:

(3) Natural selection selects for modularity at a variety of different levels.

A Role For Evolvability

It’s one thing to observe natural selection promoting modularity; it is another to understand why it is doing so. To do this, we must appeal to the concept of evolvability.

Biological populations tend to conform themselves to ecological niches. That is, a species tends to adopt a particular survival strategy that exploits a certain subset of the local biosphere. Let me here introduce a concept I like to call niche distance: two species are said to be in direct competition in virtue of a short niche distance between them. Thus, we could say that the niche distance between two types of weeds in your backyard is small, and the niche distance between a weed and the bald eagle is large.
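
One toy way to cash out niche distance: represent each species as a vector of resource-use fractions and take the Euclidean distance between vectors. The species and numbers below are made up for illustration:

```python
# Toy niche distance: each species is a (made-up) vector of resource-use
# fractions, here (sunlight, soil nutrients, small prey). A short distance
# between vectors means direct competition over the same resources.
import math

niche = {
    "weed A":     (0.8, 0.7, 0.0),
    "weed B":     (0.7, 0.8, 0.0),
    "bald eagle": (0.0, 0.0, 0.9),
}

def niche_distance(a: str, b: str) -> float:
    return math.dist(niche[a], niche[b])

print(niche_distance("weed A", "weed B"))      # ~0.14: direct competitors
print(niche_distance("weed A", "bald eagle"))  # ~1.39: barely competing
```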

The fact that niches change is one of the drivers for biological evolution. For example, as the earth warms in the coming centuries, mammalian species will need to acclimate to a different climate, which entails a changed vegetative response, which entails a need for change in eating patterns, etc. Such niche fluctuations are ubiquitous.

We know that evolution is driven by the engine of mutation. But mutation is simply a stochastic, quantum mechanical phenomenon: there is no way to “speed it up”. Species typically cannot keep pace with niche fluctuations by directly modulating the rate of mutation. Rather, the genetic infrastructure of a species must be able to harness mutations to keep pace with niche fluctuations. To put this concept of evolvability very crudely: natural selection selects not only for the number of muscles, but also for the ability to grow new ones.

(1) Evolvability is selected to allow for fluctuations within an ecological niche.

This video is a cute exploration of how evolvability may be supported in microorganisms by direct tampering with the genetic replication engine. But for larger organisms, the locus of behavior is trans-cellular. The sheer geometry of size compelled cells to become heterogeneous, to constitute interdependent systems. The question of mutation containment, then, becomes central: is it possible for evolution to improve upon one function of an organism without simultaneously affecting other functions?

Here, finally, is where modularity comes into play. One of the most important features of modularity is encapsulation: the hiding of information within specific containers. Rather than all functions affecting all other functions, computational processes erect walls around themselves, and communicate through them in a controlled fashion. Modular encapsulation is thus seen as a prerequisite for mutation containment:

(2) Modular subsystems are a necessary ingredient for evolvability.

Taken together, premises (1) and (2) support (3) in the following way:

[Figure: Massive Modularity, Argument From Design: Evolvability]

Motivators From Computer Science

In the above section, we were given a nice intuition regarding Premise 2: that modularity affords mutation containment. But perhaps this intuition can be buttressed with evidence from somewhere else entirely:

The basic reason why biological systems are organized hierarchically in modular fashion is a constraint of evolvability. Evolution needs to be able to add new functions without disrupting those that already exist; and it needs to be able to tinker with the operations of a given functional sub-system – either debugging it, or altering its processing in response to changes in external circumstances – without affecting the functionality of the remainder. Human software engineers have hit upon the same problem, and the same solution.

Two of the most widely used languages nowadays are C++ and Java. Languages in this class are often described as ‘object-oriented’. Many programming languages now require a total processing system to treat some of its parts as ‘objects’ which can be queried and informed, but where the processing that takes place within those objects isn’t accessible elsewhere. This enables the code within the ‘objects’ to be altered without having to make alterations in code elsewhere, with all the attendant risks that this would bring; and it likewise allows new ‘objects’ to be added to the system without necessitating wholesale re-writings of code elsewhere. And the resulting architecture is regarded as well nigh inevitable (irrespective of the programming language used) once a certain threshold in the overall degree of complexity of the system gets passed.

Interestingly, since the need for modular organization increases with increasing complexity, we can predict that the human mind will be the most modular amongst animal minds. This is the reverse of the intuition shared by many philosophers and social scientists, who would be prepared to allow that animal minds might be organized along modular lines, while believing that with the appearance of the human mind most of that organization was somehow superseded and swept away.

We extract the following argument from the above appeal to object-oriented programming (OOP):

(4) Software engineering suggests that OOP (modularization) is necessary to manage increasing complexity.
(5) Biological systems are very complex.

These premises buttress our Premise 2.

(2) Modular subsystems are a necessary ingredient for evolvability.

[Figure: Massive Modularity, Argument From Design: OOP]

I particularly enjoyed the originality of this argument. Even though software engineering is notoriously bad at quantifying its practices, its trajectory surely sheds some light on other disciplines. As a computer scientist, I found myself speculating about what other trends, current or future, could be brought to bear on such questions. The interchange between computer science and cognitive neuroscience is broad… with things like neuromorphic computing flowing in one direction, and information theory flowing in the other…
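
To ground the encapsulation point in actual code, here is a minimal Python sketch (standing in for the C++/Java “objects” the quoted passage mentions): the internals of a module can be rewritten freely, so long as its narrow public interface stays fixed.

```python
# Minimal sketch of encapsulation as mutation containment: callers touch
# a module only through its public interface, so the hidden internals can
# change ("mutate") without disrupting the rest of the system.

class EdgeDetector:
    """A perceptual 'module' with one narrow channel in and one out."""

    def detect(self, signal: list) -> list:
        # Public interface: this signature is the module's contract.
        return self._contrast(signal)

    def _contrast(self, signal: list) -> list:
        # Hidden internals (the leading underscore marks them private by
        # convention). Swapping in a different algorithm here is invisible
        # to, and cannot break, code outside this class.
        return [abs(b - a) for a, b in zip(signal, signal[1:])]

retina = [0.1, 0.1, 0.9, 0.9, 0.2]
print(EdgeDetector().detect(retina))  # ~[0.0, 0.8, 0.0, 0.7]
```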

Is Mind Subject To Natural Selection?

This phase of the argument is the most philosophical. The question is whether mental processes are subject to the forces of natural selection.

Carruthers begins with a fairly uncontroversial premise:

(6) The central nervous system is subject to natural selection.

So much, so obvious. But the crux of the issue is how to relate mind and brain. Carruthers wants to argue that:

(7) The central nervous system underwrites the mind.

However, this premise falls squarely into a philosophy of mind morass. Carruthers suggests a way forward is to notice that most mainstream approaches (“anyone who is neither an epiphenomenalist nor an eliminativist about the mind”) support such a premise (see this post for some definitions).

If we find ourselves sympathetic to 7, we are led by the nose to Proposition 8:

(8) Mental processes are subject to natural selection.

[Figure: Massive Modularity, Argument From Design: Mental Evolution]

How Many Minds?

While the weight of this argument labors to support the reality of computational modules, we must also spare some words to motivate massive modularity. Carruthers, leveraging Herbert Simon’s 1962 paper The Architecture of Complexity, points out that the question is one of degrees. Let us try to imagine a modularity thesis that is non-massive:

[Figure: Moderate Modularity]

The x-axis captures the number of modules; the y-axis leverages David Marr’s concept of Tri-Level Analysis. The concave shape of the curve represents the claim that, while the number of neurological functions may be large, the number of computational processes (e.g., belief, desire, motivation) is small.

In contrast, the shape of the massive modularity thesis is convex:

[Figure: Massive Modularity]

While Carruthers elsewhere motivates massive modularity by way of task analysis and ethological surveys, he here defends this latter thesis by appealing to the empirically robust observation that the brain appears to process its algorithms in parallel, which would be impossible without a relatively plentiful number of processing units. So we have stumbled upon our last premise:

(9) In the mind, massive modularity is computationally superior to moderate modularity.

Putting It All Together

All that remains is to glue together the sub-conclusions of the above arguments. Specifically, take the following propositions:

(3) Natural selection selects for modularity at a variety of different levels.
(8) Mental processes are subject to natural selection.
(9) In the mind, massive modularity is computationally superior to moderate modularity.

From these, it is clear we have successfully motivated our thesis:

(10) Natural selection selects for massive modularity in the mind.

The entire argument, then, is pictured below.

[Figure: Massive Modularity, Argument From Design: Summary]

Concluding Thoughts

While I happen to affirm Premise 8, I feel that Carruthers – and even more so I – do a poor job of motivating it. This observation is particularly painful because it is arguably the central thesis of evolutionary psychology. Mental note-to-self: revisit that section of the argument.

All told, I find this argument fairly compelling, although I would like to get clearer on several of its distinctions.