Intellectual History (2011-2014)

An incomplete list, covering only books and courses (not articles) that I have fully consumed (as opposed to merely started).

2011

  • Your Inner Fish [Shubin (2008)]
  • Structure Of Scientific Revolutions [Kuhn]
  • Open Society and its Enemies [Popper]
  • Who Wrote The Bible? [Friedman]
  • Don’t Sleep There Are Snakes [Everett]
  • Cows, Pigs, Wars, Witches [Harris]
  • A History Of God [Armstrong]
  • Witchcraft, Oracles, Magic among Azande [Evans-Pritchard]
  • Why Zebras Don’t Get Ulcers [Sapolsky]
  • The Trouble With Testosterone [Sapolsky]
  • The Myth Of Sisyphus [Camus]
  • Dialogues Concerning Natural Religion [Hume]
  • [Lecture Series] Philosophy Of Death [Kagan]
  • [Lecture Series] Human Behavioral Biology [Sapolsky]
  • [Lecture Series] Yale: New Testament Literature & History [Martin]
  • [Lecture Series] Philosophy Of Science [Kasser]
  • [MOOC] Intro To AI

2012

  • Influence [Cialdini]
  • The Origin Of Consciousness in the Breakdown of the Bicameral Mind [Jaynes]
  • Hero With A Thousand Faces [Campbell]
  • Beyond Good and Evil [Nietzsche]
  • Genealogy Of Morals [Nietzsche]
  • Lost Christianities [Ehrman]
  • The Modularity Of Mind [Fodor]
  • Five Dialogues: Euthyphro, Apology, Crito, Meno, Phaedo [Plato]
  • The Mind’s I [Dennett]
  • The Protestant Ethic and the Spirit Of Capitalism  [Weber]
  • Interpretation Of Dreams [Freud]
  • Good and Real [Drescher]
  • In Two Minds [Evans, Frankish]
  • Thinking Fast and Slow [Kahneman (2011)]
  • Working Memory: Thought and Action [Baddeley]
  • Philosophy Of Mind [Jaworski]
  • [Lecture Series] Brain Structure And Its Origins [Schneider]
  • [Lecture Series] Justice [Sandel]
  • [MOOC] Machine Learning [Ng]
  • [MOOC] Health Policy & The ACA
  • [MOOC] Networked Life

2013

  • Evolutionary Psychology, 4th edition [Buss (2011)]
  • Vision [Marr (1982)]
  • The Visual Brain in Action [Milner, Goodale (2006)]
  • Foundations Of Neuroeconomic Analysis [Glimcher]
  • Flow: The Psychology Of Optimal Experience [Csikszentmihalyi]
  • Architecture Of Mind [Carruthers (2006)]
  • [UW Course] CSEP524 Parallel Computation [Chamberlain]
  • [UW Course] CSEP514 Natural Language Processing [Zettlemoyer]
  • [UW Course] CSEP576 Computer Vision [Farhadi]

2014

  • The Conservative Mind [Kirk]
  • Guns, Germs, and Steel [Diamond]
  • Semiotics For Beginners [Chandler]
  • Rationality and the Reflective Mind [Stanovich]
  • The Robot’s Rebellion [Stanovich]
  • The Righteous Mind [Haidt]
  • The Selfish Gene [Dawkins]
  • The Better Angels Of Our Nature [Pinker]
  • The Illusion Of Conscious Will [Wegner (2003)]
  • [UW Course] CSEP590 Molecular and Neural Computation [Seelig]
  • [UW Course] CSEP573 Artificial Intelligence [Farhadi]
  • [UW Course] EE512A Advanced Inference In Graphical Models [Bilmes]

Tunneling Into The Soup

Tunneling- Colorless Sky

Table Of Contents

  • Context
  • Bathing In Radiation
  • Our Photoreceptive Hardware
  • The Birthplace Of Human-Visibility
  • Takeaways

Context

In Knowledge: An Empirical Sketch, I left you with the following image of perceptual tunneling:

Perceptual Tunneling

Today, we will explore this idea in more depth.

Bathing In Radiation

Recall what you know about the nature of light:

E = \frac{hc}{\lambda}

Since h and c are just constants, the relation becomes very simple: energy is inversely proportional to wavelength. Rather than identifying a photon by its energy, then, let’s identify it by its wavelength. We will do this because wavelength is easier to measure (in my language, we have selected a measurement-affine independent variable).

So we can describe one photon by its wavelength. How about billions? In such a case, it would be useful to draw a map, on which we can locate photon distributions. Such a photon map is called an electromagnetic spectrum.

With this background knowledge in place, we can explore a photon map of solar radiation: what types of photons strike the Earth from the Sun?

Tunneling- Solar Radiation

This image is rather information-dense. Let’s break it down.

Compare the black curve to the yellow distribution. The former represents an idealized energy radiator (a black body), whereas the latter represents the Sun’s actual emission. As you can see, while the black-body abstraction does not perfectly model the idiosyncrasies of our Sun, it does a pretty good job of “summarizing” the energy output of our star.

Next, compare the yellow distribution to the red distribution. The former represents solar radiation before it hits our atmosphere; the latter represents solar radiation after it has passed through the atmosphere (when it strikes the Earth). As you can see, at some wavelengths light passes through the atmosphere easily (red ~= yellow; call this atmospheric lucidity), whereas at other wavelengths the atmosphere absorbs most of the photons (red << yellow; call this atmospheric opacity).

These different responses to light of different energies do not occur at random, of course. Rather, the chemical composition of the atmosphere causes atmospheric opacity. Ever hear the meme “the ozone layer protects us from UV light”? Well, here is the data underlying that meme (see the “O3” marker near the 300 nm mark?). Other, more powerful but less well-known, shielding effects can be seen in the above spectrum: those of water vapor, carbon dioxide, and oxygen.

Our Photoreceptive Hardware

Your eyes house two types of photoreceptive cell: the rod and the cone.

Rods are tuned to perform well in low-light conditions. After roughly 30 minutes in the dark, they reach full sensitivity. In this “dark-adapted” state, the visual system is amazingly sensitive: a single photon can cause a rod to trigger, and you will see a flash of light if as few as seven rods absorb photons at one time.

Cones, on the other hand, excel in daylight. They also underwrite the experience of color (a phenomenon we will discuss next time).  Now, unless you are a tetrachromat mutant, your eyes contain three kinds of cone:

  • 16% blue cones
  • 10% green cones
  • 74% red cones

Below are the absorption spectra. Please note that, while not shown, rods manifest a similar spectrum: they reside between the blue and green curves, with a peak receptivity of 498 nm.

Tunneling- Cone Spectrum

It is important to take the above in the context of the cone’s broader function. By virtue of phototransductive chemical processes, sense organs like the cone accept photons matching the above spectrum as input, and thereby facilitate the production of spike trains (the neural code) as system output.
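
To make the absorption-spectrum idea concrete, here is a toy sketch that models each photoreceptor’s receptivity as a Gaussian bump over wavelength. Only the rod peak (498 nm) appears in the text above; the cone peaks and the shared bandwidth are rough assumptions, included purely for illustration.

import math

# Toy model: receptivity of each photoreceptor as a Gaussian bump over wavelength.
# Only the rod peak (498 nm) comes from the text; cone peaks and bandwidth are
# approximate assumptions, for illustration only.
PEAKS_NM = {"blue cone": 420, "green cone": 534, "red cone": 564, "rod": 498}
WIDTH_NM = 40  # assumed bandwidth

def receptivity(cell: str, wavelength_nm: float) -> float:
    """Relative absorption of `cell` at the given wavelength (0..1)."""
    peak = PEAKS_NM[cell]
    return math.exp(-((wavelength_nm - peak) ** 2) / (2 * WIDTH_NM ** 2))

for cell in PEAKS_NM:
    print(cell, round(receptivity(cell, 500), 2))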

The Birthplace Of Human-Visibility

We now possess two spectra: one for solar radiation, and one for the photoreceptor response profiles. Time to combine them! After resizing the images to achieve scale consistency, we arrive at the following:

Tunneling- Solar Radiation With Tunneling

Peak solar radiation corresponds to the cone spectrum! Why should this be? Well, recall the purpose of vision in the first place. Vision is an evolutionary adaptation that extracts information from the environment and makes it available to the nervous system of its host. If most photon-mediated information arrives in the 450-700 nm range, should we really be so surprised to learn that our eyes have adapted to this particular band?

Notice that we drew two dotted lines around the intersection boundaries. We have now earned the right to use a name. Let us call photons that reside within this interval visible light. Then,

  • Let “ultraviolet light” represent photons to the left of the interval (shorter wavelengths, higher energy)
  • Let “infrared light” represent photons to the right of the interval (longer wavelengths, lower energy)

We have thus stumbled on the definition of visibility. Visibility is not an intrinsic physical property, like charge. Rather, it is a human invention: the boundary at which our idiosyncratic photoreceptors carve into the larger particle soup.
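
Here is a minimal sketch of that carving, in Python. The 450-700 nm cutoffs are the ones drawn in the figure above; they are a convention of human physiology, not a constant of nature.

def classify_photon(wavelength_nm: float) -> str:
    """Label a photon relative to the human-visible band (450-700 nm)."""
    if wavelength_nm < 450:
        return "ultraviolet (shorter wavelength, higher energy)"
    elif wavelength_nm <= 700:
        return "visible"
    else:
        return "infrared (longer wavelength, lower energy)"

print(classify_photon(300))   # ultraviolet
print(classify_photon(550))   # visible
print(classify_photon(1100))  # infrared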

Takeaways

We have now familiarized ourselves with the mechanism of tunneling. Perceptual tunneling occurs when a sense organ transduces some slice of the particle soup of reality. In vision, photoreceptor cells transduce photons within the 450-700 nm band into the neural code.

With this precise understanding of transduction, we begin to develop a sense of radical contingency. For example,

  • Ever wonder what would happen if the human eye also contained photoreceptors tuned to the 1100 nm range? The human umwelt would expand. As you heat a fire, for example, you would see tendrils of flame brighten, then vanish, then reappear. I suspect everyday language would feature words like “bright-visible” and “dim-visible”.
  • Consider what would have happened if, during our evolution, the Sun had been colder than 5250 degrees Celsius. The idealized black-body spectrum of the Sun would shift, its peak moving towards the right. The actual radiation signature of the Sun (the yellow distribution) would follow. Given how precisely the rods in our eyes “found” the band of peak emitted energy in this universe, it seems likely that, in that world, we would be wearing different photoreceptors, with an absorption signature better calibrated to the new information. Thus, we have a causal link between the temperature of the Sun and the composition of our eyeballs.

I began this post with a partial quote from Metzinger. Here is the complete quote:

The evening sky is colorless. The world is not inhabited by colored objects at all. It is just as your physics teacher in high school told you: Out there, in front of your eyes, there is just an ocean of electromagnetic radiation, a wild and raging mixture of different wavelengths. Most of them are invisible to you and can never become part of your conscious model of reality. What is really happening is that your brain is drilling a tunnel through this inconceivably rich physical environment and in the process painting the tunnel walls in various shades of color. Phenomenal color. Appearance.

Next time, we’ll explore the other half of Metzinger’s quote: “painting the tunnel walls in various shades of color, phenomenal color”…

Movement Forecast: Effective Availabilism

Table Of Contents

  • The Availability Cascade
  • Attentional Budget Ethics
  • Effective Availabilism
  • Why Quantification Matters
  • Cascade Reform Technologies
  • Takeaways

The Availability Cascade

The following questions pop up in my Facebook feed all the time.

Why are mental illness, addiction, and suicide only talked about when somebody famous succumbs to their demons?

Why do we only talk about gun control when there is a school shooting?

What is the shape of your answer? Mine begins with a hard look at the nature of attention.

Attention is a lens through which we perceive the world. The experience of attention is conscious. However, the control of attention – where it lands, how long it persists – is preconscious. People rarely think to themselves: “now seems an optimal time to think about gun control”. No, the topic of gun control simply appears.

When we pay attention to attention, its flaws become visible. Let me sketch two.

  1. The preconscious control of attention is famously vulnerable to a wide suite of dysrationalia. Like transposons parasitizing your DNA, beliefs parasitize your semantic memory by latching onto your preconscious attention-control software. This is why Evans-Pritchard was so astonished in his anthropological survey of Zande mysticism. This is why your typical cult follower is pathologically unable to pay attention to a certain set of considerations. The first flaw of the attentional lens is that it is a biasing attractor.
  2. Your unconscious mind is subject to the following computational principle: what you see is all there is. This brings us to the availability heuristic, the cognitive shortcut your brain uses to travel from “this was brought to mind easily” to “this must be important”. In the attentional lens, in other words, the medium distorts its contents. This is nicely summed up in the proverb, “nothing in life is as important as you think it is, while you are thinking about it.” The second flaw of the attentional lens is that it is bound in a positive feedback loop with memory (“that which I can recall easily must be important, which leads me to discuss it more, which makes it something I recall even more easily”).

My treatment of this positive feedback loop was at the level of the individual. But that same mechanism must also promote failures at the level of the social network. The second flaw writ large – the rippling eddies of attentional currents (as captured by services like Google News) – is known as the availability cascade. And thus we have provided a cognitive reason why our social atmosphere disproportionately discusses gun control when school shootings appear in the news.

In electrical engineering, positive feedback typically produces runaway effects: a circuit “hits the rails” (draws maximum current from its power source). What prevents human cognition from doing likewise, from becoming so fixated on one particular memory-attention loop that it cannot escape? Why don’t we spend our days and our nights dreaming of soft drinks, fast food, pharmaceuticals? I would appeal to human boredom as a natural barrier to such a runaway effect.

Attentional Budget Ethics

We have managed to rise above the minutiae and construct a model of political discourse. Turn now to ethics. How should attention be distributed? When is the right time to discuss gun control, to study health care reform, to get clear on border control priorities?

The response profile of such a question is too diverse to treat here, but I would venture that most approaches share two principles of attentional budgeting:

  1. The Systemic Failure Principle. If a system’s performance fails to meet some criterion of success, that is an argument for increasing its attentional budget. For example, the skyrocketing costs of health care would seem to call for more attention than other, relatively healthier, sectors of public life.
  2. The Low-Hanging Fruit Principle. If attention is likely to produce meaningful results, that is an argument for increasing a topic’s attentional budget. For example, perhaps not much benefit would come from a national conversation about improving our cryptographic approaches to e-commerce.

Agreeable as these principles may seem, I have a feeling that different political parties would still disagree. In a two-party system, for example, you can imagine competing attentional budgets as follows:

Attentional Budgets

Interpret “attentional resources” in a straightforward (measurement-affine) way: let it represent the number of hours devoted to public discussion.

This model of attentional budgets requires a bit more TLC. Research-guiding questions might include:

  • How ought we model overlapping topics?
  • Should budget space be afforded for future topics, topics not yet conceived?
  • Could there be circumstances to justify zero attention allocation?
  • Is it advisable to leave “attentional budget creation” topics out of the budget?
  • How might this model be extended to accommodate time-dependent, diachronic budgeting?

Effective Availabilism

Let us now pull together a vision of how to transcend the attentional cascade.

In our present condition, even very intelligent commentators must resort to the following excuse of a thought: “I have a vague sense that our society is spending too much time on X. Perhaps we shouldn’t talk about it anymore”.

In our envisioned condition, our best political minds would be able to construct the following chain of reasoning: “This year, our society has spent three times more time discussing gun control than discussing energy independence. My attentional budget prescribes this ratio to be closer to 1:1. Let us think of ways to constrain these incessant gun-control availability cascades.”
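
Here is a minimal sketch of the bookkeeping that this chain of reasoning presupposes. The topics, hour counts, and normative budget are invented for illustration; the point is only that the comparison becomes mechanical once the psychometric data exists.

# Hypothetical observed attention vs. a normative attentional budget
# (hours of public discussion). All numbers are invented for illustration.
observed_hours  = {"gun control": 300, "energy independence": 100}
normative_hours = {"gun control": 200, "energy independence": 200}

for topic in observed_hours:
    ratio = observed_hours[topic] / normative_hours[topic]
    status = "over-attended" if ratio > 1 else "under-attended"
    print(f"{topic}: observed/budgeted = {ratio:.2f} ({status})")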

In other words, I am prophesying the emergence of an effective availabilism movement, in ways analogous to effective altruism. Effective availabilist groups would, I presume, primarily draw from neuropolitical movements more generally.

Notice how effective availabilism relies on, and comes after, publicly available psychometric data. And this is typical: normative movements often follow innovations in descriptive technology.

Why Quantification Matters

Policy discussions influence votes, which affect lives. Despite the obvious need for constructive discourse, a frustrating number of political exchanges are content-starved. I perceive two potential remedies for this failure of our democracy:

  1. Politics is a mind-killer. By dint of our evolutionary origins, our brains do not natively excel at political reasoning. Group boundaries matter more than analyses; arguments are soldiers. But these are epistemic failure modes. Policy debates should not appear one-sided. Movements to establish the cognitive redemption of politics are already underway. See, for example, Jonathan Haidt’s http://www.civilpolitics.org/ (“educating the public on evidence-based methods for improving inter-group civility”).
  2. Greasing policy discussions with data would facilitate progress. One of my favorite illustrations of this effect is biofeedback: if you give a human being a graphical representation of her pulse, the additional data augments the brain’s ability to reason – biofeedback patients are even able to catch their breath faster. In the same way, improving our data streams gives hope of transcending formerly-intractable social debates.

The effective availabilism movement could, in my view, accelerate this second pathway.

Cascade Reform Technologies

It seems clear that availability cascades are susceptible to abuse. Many advertisers and political campaigns don’t execute an aggregated optimization across our national attentional profile. Instead, they simply run a maximization algorithm on their topic of interest (“think about my opponent’s scandal!”).

Noticing attentional budget failures used to be tricky. With modern-day technology (polls, trending Twitter tags, motive abduction, self-monitoring) in place, even subtle attentional budget failures become easily detectable. We have increased our supply of detected failures; but how might effective availabilists increase demand (open vectors of reform against availability-cascade failure modes)?

The first, obvious, pathway is to use the same tool – attentional cascades – to counterbalance. If gun control is getting too much attention, effective availabilists will strive to push social media towards a discussion of e.g., campaign finance reform. They could, further, use psychometric data to evaluate whether they have overshot (SuperPACs are now too interesting), and to adjust as necessary.

Other pathways towards reform might involve empirically-precise amplification of boredom circuits. Recruiting the influential to promote the message that “this topic has been talked to death” could work, as could the targeted use of satire.

Takeaways

  • Pay more attention to the quiet whispers of your mind. “Haven’t I heard about this enough” represents an undiscovered political movement.
  • Social discourse is laced with the rippling tides of availability cascades, and is at present left to their mercy.
  • As hard psychometric data makes its way towards public accessibility, a market of normative attentional budgets will arise.
  • The business of pushing current attentional profiles towards normative budgets will become the impetus of effective availabilism movements.
  • A cottage industry of cognitive technologies to achieve these ends will thereafter crystallize and mature.


Fermions: Meat In Our Particle Soup

Part Of: Demystifying Physics sequence
Prerequisite Post: An Introduction To Energy
Content Summary: 2100 words, 21 min reading time.

Prerequisite Mindware

Today, we’re going to go spelunking into the fabric of the cosmos! But first, some tools to make this a safe journey.

Energy Quanta

As we saw in An Introduction to Energy,

Energy is the hypothesis of a hidden commonality behind every single physical process. There are many forms of energy: kinetic, electric, chemical, gravitational, magnetic, radiant. But these forms are expressions of a single underlying phenomenon.

 

Consider the analogy between { electrons spinning around protons } and { planets spinning around stars }. In the case of planets, the dominant force is gravitational. In the case of the atom, the dominant force is electromagnetic.

But this analogy is weak. Unlike a mass accelerating under gravity, an accelerating electric charge emits electromagnetic waves. Thus, we would expect an orbiting charge to steadily lose energy and spiral into the nucleus, colliding with it within a fraction of a second. Why have atoms not gone extinct?

To solve this problem, physicists came to believe that in some situations energy cannot be lost. Indeed, they abandoned the intuitive idea that energy is continuous. On this new theory, energy at the atomic level must come in discrete levels, never in between. Further, at the lowest level, which we will call the ground state, an electron can lose no further energy.

energy_levels
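
The quantized picture is easiest to see in hydrogen. As a hedged illustration (the Bohr formula below is textbook physics, but it is not derived in this post), the allowed electron energies are E_n = -13.6 eV / n^2, so the levels are discrete and the n=1 ground state has nowhere lower to fall:

# Discrete energy levels of hydrogen via the standard Bohr formula
# E_n = -13.6 eV / n^2 (not derived in this post; shown for illustration).
def hydrogen_energy_ev(n: int) -> float:
    return -13.6 / n**2

for n in range(1, 5):
    print(f"n={n}: {hydrogen_energy_ev(n):+.2f} eV")
# n=1 is the ground state (-13.60 eV); n=2 sits at -3.40 eV, and so on.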

Antiparticles

Let’s talk about antiparticles. It’s time to throw out your “science fiction” interpretive lens: antiparticles are very real, and well-understood. In fact, they are exactly the same as normal particles, except charge is reversed. So, for example, an antielectron has the same mass and spin as an electron, but instead carries a positive charge.

Why does the universe contain more particles than antiparticles? Good question. 😛

Meet The Fermions

Nature Up Close

Consider this thing. What would you name it?

Atomic Structure

One name I wouldn’t select is “indivisible”. But that is what “atom” means (from the Greek “ἄτομος”). Could you have predicted the existence of this misnomer?

As I have discussed before, human vision can capture only a small subset of physical reality. Measurement technology is a suite of approaches that exploit translation proxies, the ability to translate extrasensory phenomena into a format amenable to perception. Our eyes cannot perceive atoms, but the scanning tunneling microscope translates atomic structures to scales our nervous systems are equipped to handle.

Let viable translation distance represent the difference in scale between human perceptual foci and the translation proxy target. Since translation proxies are facilitated through measurement technology, which is in turn driven by scientific advances, it follows that we ought to expect viable translation distance to increase over time.

We now possess a straightforward explanation of our misnomer. When “atom” was coined, its referent was the product of that time’s maximum viable translation distance. But technology has since moved on, and we have discovered even smaller elements. Let’s now turn to the state of the art.

Beyond The Atom

Reconsider our diagram of the atom. Do you remember the names of its constituents? That’s right: protons, neutrons, and electrons. Protons and neutrons “stick together” in the nucleus, electrons “circle around”.

Our building blocks of the universe so far: { protons, neutrons, electrons }. By combining these ingredients in all possible ways, we can reconstruct the periodic table – all of chemistry. Our building blocks are – and must be – backwards compatible. But are these particles true “indivisibles”? Can we go smaller?

Consider the behavior of the electrons orbiting the nucleus. After fixing one theoretical problem (c.f. the Energy Quanta section above), we can now explain why electrons orbit the nucleus: electromagnetic attraction (“opposites attract”). But here is a problem: we have no such explanation for the nucleus. If “like charges repel”, then holding protons together in the nucleus must be something like holding the same poles of two magnets together: you can do it, but it takes a lot of force. What could possibly be keeping the protons in the nucleus together?

Precisely this question motivated a subsequent discovery: electrons may well be indivisible, but protons and neutrons are not. Protons and neutrons are instead composite particles made out of quarks. Quarks glue themselves together via a new force, known as the strong force. This new force not only explains why we don’t see quarks by themselves; it also explains the persistence of the nucleus.

The following diagram (source) explains how quarks comprise protons and neutrons:

atom_baryons
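
The charge bookkeeping in that diagram can be checked directly. A proton is two up quarks and one down quark (uud); a neutron is one up and two downs (udd); the up quark carries +2/3 of the elementary charge and the down quark carries -1/3. A small sketch:

from fractions import Fraction

# Quark charges, in units of the elementary charge e.
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def composite_charge(quarks):
    """Total charge of a composite particle, given its quark content."""
    return sum(CHARGE[q] for q in quarks)

print("proton (uud): ", composite_charge(["up", "up", "down"]))    # 1
print("neutron (udd):", composite_charge(["up", "down", "down"]))  # 0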

Okay, so our new set of building blocks is: { electron, up, down }. With a little help from some new mathematics – quantum chromodynamics – we can again reconstitute chemistry, biology, and beyond.

Please notice that some of our building blocks are more similar than others: the up and down quarks carry fractional charges (in thirds of the elementary charge), whereas the electron carries an integer charge. Let us group like particles together.

  • Call up and down particles part of the quark family.
  • Call electrons part of the lepton family.

Neutrinos

So far in this article, we’ve gestured towards gravitation and electromagnetism. We’ve also introduced the strong force. Now is the time to discuss Nature’s last muscle group, the weak force.

A simple way to bind the weak force to your experience: consider what you know about radioactive material. The atoms generated by, to pick one source, nuclear power do not behave like other atoms. They emit radiation; they decay. Ever heard of the “half-life” of a material? That term describes how long it takes for half of an unstable radioactive material to decay into a more stable form. For example, { magnesium-23 → sodium-23 + antielectron }.

Conservation of energy dictates that such decay reactions must preserve energy. However, when you actually calculate the energetic content of the decay process given above, you find a mismatch. And so scientists were confronted with the following dilemma: either reject conservation of energy, or posit the existence of an unknown particle that “balances the books”. Which would you choose?

The scientific community began to speculate that a fourth type of fermion existed, even in the absence of physical evidence. And they found it 26 years later, in 1956.

Why did it take so much longer to discover this fourth particle? Well, these hypothesized neutrinos carry neither electric charge nor color charge. As such, they interact with other particles only via the weak force (which has a very short range) and gravity (which is some 10^36 times weaker than the electromagnetic force). Due to these factors, neutrinos such as those generated by the Sun pass through the Earth undetected. In fact, in the time it takes you to read this sentence, hundreds of billions of neutrinos have passed through every cubic centimeter of your body without incident. Such weak interactivity explains the measurement-technology lag.

Are you sufficiently creeped out by how many particles pass through you undetected? 🙂 If not, consider neutrino detectors. Because of this weak interactivity, our neutrino detectors must be large, and buried deep inside the earth (to shield them from “noise” – more common particle interactions). Here we see a typical detector, with scientists inspecting their instruments in the center, for scale:

neutrino_detector

The Legos Of Nature

Here, then, is our picture of reality:

Fermions- One Generation

Notice that all fermions have spin ½; we’ll return to this fact later.

A Generational Divide

Conservation of energy is a thing, but conservation of particles is not. Just as particles spontaneously “jump” energy levels, particles sometimes morph into different types of particles, in a way akin to chemical reactions. What would happen if we were to pump a very large amount of energy into the system, say by striking an up quark with a high-energy photon? Must the output energy be expressed as hundreds of up quarks? Or does nature have a way to spend its energy budget “more efficiently”?

It turns out that it does: there exist particles identical to these four fermions with one exception: they are more massive. And we can pull this magic trick once more, and find fermions even heavier than those. To date, physicists have discovered three generations of fermions:

Fermions- Three Generations

 

The later generations took a long time to “fill in” because you only see them in high-energy situations. Physicists had to close the translation-distance gap by building bigger and bigger particle accelerators. The fermion with the highest mass – the top quark – was only discovered in 1995. Will there be a fourth generation, or will we discover some upper bound on the number of fermion generations?

Good question.

Even though we know of three generations, in practice only the first generation “matters much”. Why? Because the higher-energy particles that comprise the second and third generations tend to be unstable: give them time (fractions of a second, usually), and they will spontaneously decay – via the weak force – back into first generation forms. This is the only reason why we don’t find atomic nuclei orbited by tau particles.

Towards A Mathematical Lens

General & Individual Descriptors

The first phase of my lens-dependent theorybuilding triad I call conceptiation: the art of carving concepts out of a rich dataset. Such carving must be heavily dependent on descriptive dimensions: quantifiable ways in which entities may differ from one another.

For perceptual intake, the number of irreducible dimensions may be very large. However, for particles, this set is surprisingly small. There is something distressingly accurate in the phrase “all particles are the same”.

Each type of fermion is associated with one unique value for the following properties (particle-generic properties):

  • mass (m)
  • electric charge (e)
  • spin (s)

Fermions may differ according to their quantum numbers (particle-specific properties). For an electron, these numbers are:

  • principal. This corresponds to the energy level of the electron (c.f., energy level discussion)
  • azimuthal. This corresponds to the orbital version of angular momentum (e.g., the Earth revolving around the Sun). These numbers correspond to the orbitals of quantum chemistry, (0, 1, 2, 3, 4, …) ⇔ (s, p, d, f, g, …), which helps explain the orbital organization of the periodic table.
  • magnetic. This corresponds to the orientation of the orbital.
  • spin projection. This corresponds to the “spin” version of angular momentum (e.g., the Earth rotating about its axis). Not to be confused with spin itself, this value can vary across electrons.

Quantum numbers are not independent; their ranges hinge on one another in the following way:

Quantum Numbers
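
As a minimal sketch of those interlocking ranges (the standard constraints – n ≥ 1; l from 0 to n-1; m_l from -l to +l; m_s = ±1/2 – are assumed here rather than taken from the figure), the following enumeration also shows why each principal level holds 2n^2 distinct electron states:

from fractions import Fraction

def electron_states(n: int):
    """Enumerate the allowed (n, l, m_l, m_s) tuples at principal level n."""
    half = Fraction(1, 2)
    return [(n, l, m_l, m_s)
            for l in range(0, n)               # azimuthal: 0 .. n-1
            for m_l in range(-l, l + 1)        # magnetic: -l .. +l
            for m_s in (-half, half)]          # spin projection: -1/2, +1/2

for n in (1, 2, 3):
    print(f"n={n}: {len(electron_states(n))} distinct states")  # 2, 8, 18 = 2n^2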

Statistical Basis

With our fourth building block in place, we are in a position to answer the question: what do the particles that make up matter have in common?

All elementary particles of matter we have seen have spin ½. By the spin-statistics theorem, we must associate all such particles with Fermi-Dirac statistics. Let us name all particles governed by these statistics – all particles we have seen so far – “fermions”. It turns out that this statistical behavior generates a very interesting property known as the Pauli Exclusion Principle, which states, roughly, that two such particles cannot share the same quantum state.

Let’s take an example: consider a hydrogen atom with two electrons. Give this atom enough time, and both electrons will settle into its ground state, n=1. What happens if the hydrogen picks up an extra electron in some chemical process? Can this third electron also enter the ground state?

No, it cannot. Consider the quantum numbers of our first two electrons: { n=1, l=0, m_l=0, m_s=1/2 } and { n=1, l=0, m_l=0, m_s=-1/2 }. Given the range constraints above, there are no other unique descriptors for an electron with n=1. Since we cannot have two electrons with the same quantum numbers, the third electron must come to rest at the next-highest energy level, n=2.

The Pauli Exclusion Principle has several interesting philosophical implications:

  • Philosophically, this means that if two things have the same description, then they cannot be two things. This has interesting parallels to the axiom of choice in ZFC, which accommodates “duplicate” entries in a set by conjuring some arbitrary way to choose between them.
  • Practically, the Pauli Exclusion Principle is the only thing keeping your feet from sinking into the floor right now. If that isn’t a compelling demonstration of why math matters, then I don’t know what is.

Composite Fermions

In this post, we have motivated the fermion particulate class by appealing to discoveries of elementary particles. But then, when we stepped back, we discovered that the most fundamental attribute of this class of particles was its subjugation to Fermi-Dirac statistics.

Can composite particles, as well as these elementary particles, have spin ½? Yes. While all fermions considered in this post are elementary particles, that does not preclude composite particles from membership in the fermion family.

What Fermions Mean

In this post, we have done nothing less than describe the basis of matter.

But are fermions the final resolution of nature? Our measurement technology continues to march on. Will our ability to “zoom in” fail to produce newer, deeper levels of reality?

Good questions.

Knowledge: An Empirical Sketch

Table Of Contents

  • Introduction
    • All The World Is Particle Soup
    • Soup Texture
  • Perceptual Tunnels
    • On Resolution
    • Sampling
    • Light Cones, Transduction Artifacts, Translation Proxies
  • The Lens-dependent Theorybuilding Triad
    • Step One: Conceptiation
    • Step Two: Graphicalization
    • Step Three: Annotation
    • Putting It All Together: The Triad
  • Conclusion
    • Going Meta
    • Takeaways

Introduction

All The World Is Particle Soup

Scientific realism holds that the entities scientists refer to are real things. Electrons are not figments of our imagination; they possess an existence independent of your mind. What does it mean for us to view particle physics through such a lens?

Here’s what it means: every single thing you see, smell, touch… every vacation, every distant star, every family member… it is all made of particles.

This is an account of how the nervous system (a collection of particles) came to understand the universe (a larger collection of particles). How could Particle Soup ever come to understand itself?

Soup Texture

Look at your hand. How many types of particles do you think you are staring at? A particle physicist might answer: nine. You have four first-generation fermions (roughly, particles that comprise matter) and five bosons (roughly, particles that carry force). Sure, you may get lucky and find a couple of exotic particles within your hand, but such a nuance would not detract from the moral of the story: in your hand, the domain (number of types) of particles is very small.

Look at your hand. How large a quantity of particles do you think you are staring at? The object of your gaze is a collection of about 700,000,000,000,000,000,000,000,000 (7.0 * 10^26) particles. Make a habit of thinking in this way, and you’ll find a new appreciation for the Matrix trilogy. 🙂 In your hand, the cardinality (number of tokens) of particles is very large.
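
A rough sanity check on that cardinality (the hand mass and water-only composition below are assumptions for illustration, not figures from the post):

AVOGADRO = 6.022e23
hand_kg = 0.5                          # assumed hand mass
molar_mass_water_kg = 0.018            # treat the hand as if it were pure water
molecules = (hand_kg / molar_mass_water_kg) * AVOGADRO
particles_per_molecule = 10 + 8 + 10   # protons + neutrons + electrons in H2O
print(f"{molecules * particles_per_molecule:.1e} particles")  # ~5e26: same order of magnitude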

These observations generalize. There aren’t many kinds of sand in God’s Sandbox, but there is a lot of it, with different consistencies across space.

Perceptual Tunnels

On Resolution

Consider the following image. What do you see?

Lincoln Resolution

Your eyes filter images at particular frequencies. At this default human frequency, your “primitives” are the pixelated squares. However, imagine being able to perceive this same image at a lower resolution (sound complicated? move your face away from the screen :P). If you do this, the pixels fade, and a face emerges.

Here, we learn that lenses of different resolution may complement one another, despite their imaging the same underlying reality. In much the same way, we can enrich our cognitive toolkit by examining the same particle soup with different “lens settings”.
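
“Lowering the resolution” is just spatial averaging: each coarse pixel is the mean of a block of fine pixels. A minimal sketch on a toy 4x4 image (the pixel values are invented):

# 2x2 average pooling over a toy 4x4 "image".
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [9, 9, 0, 0],
    [9, 9, 0, 0],
]

def downsample(img, block=2):
    """Replace each block x block patch with its mean value."""
    size = len(img) // block
    return [[sum(img[block * r + i][block * c + j]
                 for i in range(block) for j in range(block)) / block**2
             for c in range(size)]
            for r in range(size)]

print(downsample(image))  # [[0.0, 9.0], [9.0, 0.0]] -- the coarse structure survives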

Sampling

By default, the brain does not really collect useful information. It is only by way of sensory transducer cells – specialized cells that translate the particle soup into Mentalese – that the brain gains access to some small slice of physical reality. With increasing quantity and variety of these sensory organs, the perceptual tunnel burrowed into the soup becomes wide enough to support a lifeform.

Another term for the perceptual tunnel is the umwelt. Different biota experience different umwelts; for example, honeybees are able to perceive the Earth’s magnetic field as directly as we humans perceive the sunrise.

Perceptual tunneling may occur at different resolutions. For example, your proprioceptive cells create signals only in the event of a coordinated effort by trillions and trillions of particles (e.g., the wind pushing against your arm). In contrast, your vision cells create signals at very fine resolutions (e.g., if a single photon strikes your photoreceptor, it can fire).

Perceptual Tunneling

Light Cones, Transduction Artifacts, Translation Proxies

Transduction is a physically-embedded computational process. As such, it is subject to several pervasive imperfections. Let me briefly point towards three.

First, nature precludes the brain from sampling the entirety of the particle soup. Because your nervous system is embedded within a particular spatial volume, it is subject to one particular light cone. Since particles cannot move faster than the speed of light, you cannot perceive any non-local particles. Speaking more generally: all information outside of your light cone is closed to direct experience.

Second, the nervous system is an imperfect medium. It has difficulty, for example, representing negative numbers (ever try to get a neuron to fire -10 times per second?). Another such transduction artifact is our penchant for representing information in a comparative, rather than absolute, format. Think of all those times you have driven on the highway with the radio on: when you turn onto a side street, the music feels louder. This experience has nothing at all to do with an increased sound-wave amplitude: it is an artifact of a comparison (music minus background noise). Practically all sensory information is stained by this compressive technique.

Third, perceptual data may not represent the actual slice of the particle soup we want. To take one colorful example, suppose we ask a person whether they perceived a dim flashing light, and they say “yes”. Such self-reporting, of course, arrives as sensory input (in this case, audio vibrations). But this kind of sensory information is a translation proxy for a different collection of particles we actually want to observe (e.g., the activity of the subject’s visual cortex).

This last point underscores an oft-neglected aspect of perception: it is an active process. Our bodies don’t just sample particles, they move particles around. Despite the static nature of our umwelt, our species has managed to learn ever more intricate scientific theories in virtue of sophisticated measurement technology; and measurement devices are nothing more than mechanized translation proxies.

The Lens-dependent Theorybuilding Triad

Step One: Conceptiation

Plato once described concept acquisition as “carving nature at its joints”. I will call this process (constructing Mentalese from the Soup) theory conceptiation.

TheoryBuilding- Conceptiation

If you meditate on this diagram for a while, you will notice that theory conceptiation is a form of compression. According to Kolmogorov information theory, the efficacy of compression hinges on how many patterns exist within your data. This is why you’ll find leading researchers claiming that:

Compression and Artificial Intelligence are equivalent problems
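
A quick way to feel this claim: patterned data compresses well, patternless data does not. Here zlib stands in for the idealized Kolmogorov compressor (which is uncomputable), and the inputs are invented:

import os
import zlib

patterned = b"soup" * 2500           # 10,000 bytes with an obvious regularity
patternless = os.urandom(10000)      # 10,000 bytes of noise

print(len(zlib.compress(patterned)))    # a few dozen bytes
print(len(zlib.compress(patternless)))  # roughly 10,000 bytes: nothing to exploit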

A caveat: concepts are not carved solely from perception; as one’s bag of concepts expands, such pre-existing mindware exerts an influence on the further carving-up of percepts. This is what the postmoderns attribute to hermeneutics; this is the root of memetic theory; this is what is meant by the nature vs. nurture dialogue.

Step Two: Graphicalization

Once the particle soup is compressed into a set of concepts, relations between these concepts are established. Call this process theory graphicalization.

TheoryBuilding- Graphicalization

If I were to ask you to complete the word “s**p”, would you choose “soap” or “soup”? How would your answer change if we were to have a conversation about food-network television?

Even if I never once mention the word “soup”, you become significantly more likely to auto-complete to that alternative after our conversation. Such priming is explained through concept graphs: our conversation about the food network activates food-proximate nodes like “soup” much more strongly than graphically distant nodes like “soap”.
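
A toy spreading-activation sketch of that explanation follows; the graph and its edge weights are invented stand-ins for learned associative strength:

# Toy spreading activation over an invented concept graph.
graph = {
    "food network": {"soup": 0.9, "kitchen": 0.8},
    "kitchen": {"soap": 0.3, "soup": 0.5},
}

def activate(seed, strength=1.0, depth=2):
    """Propagate activation outward from a seed concept."""
    activation = {seed: strength}
    frontier = [(seed, strength)]
    for _ in range(depth):
        next_frontier = []
        for node, a in frontier:
            for neighbor, weight in graph.get(node, {}).items():
                gained = a * weight
                if gained > activation.get(neighbor, 0.0):
                    activation[neighbor] = gained
                    next_frontier.append((neighbor, gained))
        frontier = next_frontier
    return activation

print(activate("food network"))  # "soup" ends up far more active than "soap"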

Step Three: Annotation

Once the graph structure is known, metagraph information (e.g., “this graph skeleton occurs frequently”) is appended. Such metagraph information is not bound to any one graph. Call this process theory annotation.

TheoryBuilding- Annotation

We can express a common complaint about metaphysics thusly: theoretical annotation is invariant to changes in conceptiation and graphicalization results. In my view (as hinted at by my discussion of normative therapy), theoretical annotation is fundamentally an accretive process – it is logically possible to generate an infinite annotative tree; this is not seen in practice because of a computational principle, the cognitive speed limit (or, to make a cute analogy, the cognition cone).

Putting It All Together: The Triad

Call the cumulative process of conceptiation, graphicalization, and annotation the lens-dependent theorybuilding triad.

TheoryBuilding- Lens-Dependent Triad

Conclusion

Going Meta

One funny thing about theorybuilding is how amenable it is to recursion. Can we explain this article in terms of Kevin engaging in theorybuilding? Of course! For example, consider the On Resolution section above. Out of all possible adjectives used to describe theorybuilding, I deliberately chose to focus my attention on spatial resolution. What phase of the triad does that sound like to you?  Right: theory conceptiation.

Takeaways

This article does not represent serious research. In fact, its core model – the lens-dependent theorybuilding triad – cites almost no empirical results. It is a toy model designed to get us thinking about how a cognitive process can construct a representation of reality. Here is an executive summary of this toy model:

  1. Perception tunneling is how organisms begin to understand the particle soup of the universe.
    1. Tunneling only occurs by virtue of sensory organs, which transduce some subset of data (sampling) into Mentalese.
    2. Tunneling is a local effect, it discolors its target, and it sometimes merely represents data located elsewhere.
  2. The Lens-Dependent Theorybuilding Triad takes the perception tunnel as input, and builds models of the world. There are three phases:
    1. During conceptiation, perception contents are carved into isolable concepts.
    2. During graphicalization, concept interrelationships are inferred.
    3. During annotation, abstracted properties and metadata are attached to the conceptual graph.

An Introduction To Electromagnetic Spectra

Part Of: Demystifying Physics sequence
Content Summary: 1200 words, 12 min read

Motivations

Consider the following puzzle. Can you tell me the answer?

We see an object O. Under white light, O appears blue. How would O appear, if it is placed under a red light?

As with many things in human discourse, your simple vocabulary (color) is masking a richer reality (quantum electrodynamics). These simplifications generate correct answers most of the time, and make our mental lives less cluttered. But sometimes they block us from reaching insights that would otherwise reward us. Let me “pull back the curtain” a bit, and show you what I mean.

The Humble Photon

In the beginning was the photon. But what is a photon?

Photons are just one type of particle, in this particle zoo we call the universe. Photons have no mass and no charge. This is not to say that all photons are the same, however: they are differentiated by how much energy they possess.

Do you remember that famous equation of Einstein’s, E = mc^2? It is justly famous for demonstrating mass-energy interchangeability. If you set up a situation to facilitate a “trade”, you can purchase energy by selling mass (and vice versa). Not only that, but you can purchase a LOT of energy with very little mass (the exchange rate is about 90,000,000,000,000,000 to 1). This kind of lopsided interchangeability helps us understand why things like nuclear weapons are theoretically possible. (In nuclear weapons, a small amount of uranium mass is translated into considerable energy.) Anyway, given E = mc^2, can you find the problem with my statement above?

Well, if photons have zero mass, then plugging m=0 into E = mc^2 tells us that all photons have the same energy: zero! This falsifies my claim that photons are differentiated by energy.

Fortunately, I have a retort: E = mc^2 is not the whole truth; it is only a special case. The actual law of nature goes like this (p stands for momentum):

E = \sqrt{\left( (mc^2)^2 + (pc)^2 \right) }

Since m=0 for photons, the first term under the square root vanishes. This leaves E = pc (“energy equals momentum times the speed of light”). We also know that p = \frac{h}{\lambda} (“momentum equals Planck’s constant divided by wavelength”). Putting these together yields the energy of a photon:

E = \frac{hc}{\lambda}

Since h and c are just constants, the relation becomes very simple: energy is inversely proportional to wavelength. Rather than identifying a photon by its energy, then, let’s identify it by its wavelength. We will do this because wavelength is easier to measure (in my language, we have selected a measurement-affine independent variable).
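
A short numerical check of that relation: using the standard approximation h*c ≈ 1240 eV·nm (a textbook constant, not derived in this post), halving the wavelength doubles the photon energy.

HC_EV_NM = 1240.0  # approximate value of h*c in eV*nm (standard constant)

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = h*c / wavelength, in electron-volts."""
    return HC_EV_NM / wavelength_nm

for wavelength in (250, 500, 1000):
    print(f"{wavelength} nm -> {photon_energy_ev(wavelength):.2f} eV")
# 250 nm -> 4.96 eV, 500 nm -> 2.48 eV, 1000 nm -> 1.24 eV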

Meet The Spectrum

So we can describe one photon by its wavelength. How about billions? In such a case, it would be useful to draw a map, on which we can locate photon distributions.  Such a photon map is called an electromagnetic spectrum. It looks like this:

spectrum

Pay no attention to the colorful band in the middle labelled “visible light”. There is no such distinction in the laws of nature; it is just there to make you comfortable.

Model Building

We see an object O.

Let’s start by constructing a physical model of our problem. How does seeing even work?

Once upon a time, the emission theory of vision was in vogue. Plato, and many other renowned philosophers, believed that perception occurs in virtue of light emitted from our eyes. This theory has since been proven wrong. The intromission theory of vision has been vindicated: we see in virtue of the fact that light (barrages of photons) emitted by some light source, arrives at our retinae. The process goes like this:

Spectrum Puzzle Physical Setup

If you understood the above diagram, you’re apparently doing better than half of all American college students… who still affirm the emission theory. Moving on.

Casting The Puzzle To Spectra

Under white light, O appears blue.

White is associated with activation across the entire spectrum (this is why prisms work). Blue is associated with high-energy light (this is why flames are bluer at the base). We are ready to cast our first sentence. To the spectrum-ifier!

Spectrum Puzzle Setup

Building A Prediction Machine

Here comes the key to solving the puzzle. We are given two data points: photon behavior at the light source, and photon behavior at the eye. What third location do we know is relevant, based on our intromission-theory discussion above? Right: what is the photon behavior at the object?

It is not enough to describe the object’s response to photons of energy X. We ought to make our description of the object’s response independent of details about the light source. If we could find the reflection spectrum (“reflection signature”) of the object, this would do the trick: we could anticipate its response to any wavelength. But how do we infer such a thing?

We know that light-source photons must interact with the reflection signature to produce the observed photon response. Some light-source photons may always be absorbed; others may always be reflected. What sort of mathematical operation might support such a description? Multiplication should work. 🙂 Pure reflection can be represented as multiplication by one, pure absorption as multiplication by zero.

At this point, in a math class, you’d do that work. Here, I’ll just give you the answer.

Spectrum Puzzle Object Characteristics

For all that “math talk”, this doesn’t feel very intimidating anymore, does it? The reflection signature is high for low-wavelength photons, and low for high-wavelength light. For a very generous light source, we would expect to see the signature in the perception.

Another neat thing about this signature: it is rooted in the properties of the object’s atomic structure! Once we know it, you can play with your light source all day: the reflection signature won’t change. Further, if you combine this mathematical object with the light-source spectrum, you produce a prediction machine – a device capable of anticipating futures. Let’s see our prediction machine in action.
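
Here is a minimal sketch of that machine, using coarse three-bin spectra. The bins and values are invented for illustration; the logic is just the bin-wise multiplication described above.

# Coarse three-bin spectra (short / medium / long wavelength); 1.0 means full
# emission or full reflection. Values are invented for illustration.
white_light = {"short": 1.0, "medium": 1.0, "long": 1.0}
red_light   = {"short": 0.0, "medium": 0.0, "long": 1.0}
blue_object = {"short": 1.0, "medium": 0.1, "long": 0.0}  # reflection signature

def perceived(source, signature):
    """Bin-wise product: the spectrum that actually reaches the eye."""
    return {band: source[band] * signature[band] for band in source}

print(perceived(white_light, blue_object))  # short band dominates -> looks blue
print(perceived(red_light, blue_object))    # every bin near zero -> looks black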

And The Answer Is…

How would O appear, if it is placed under a red light?

We have all of the tools we need:

  • We know how to cast “red light” into an emission spectrum.
  • We have already built a reflection signature, which is unique to the object O.
  • We know how to multiply spectra.
  • We have an intuition of how to translate spectra into color.

The solution, then, takes a clockwise path:

Spectrum Puzzle Solution

The puzzle, again:

We see an object O. Under white light, O appears blue. How would O appear, if it is placed under a red light?

Our answer:

O would appear black.

Takeaways

At the beginning of this article, your response to this question was most likely “I’d have to try it to find out”.

To move beyond this, I installed three requisite ideas:

  1. A cursory sketch of the nature of photons (massless bosons),
  2. Intromission theory (photons enter the retinae),
  3. The language of spectra (map of possible photon wavelengths)

With these mindware applets installed, we learned how to:

  1. Crystallize the problem by casting English descriptions into spectra.
  2. Discover a hidden variable (object spectrum) and solve for it.
  3. Build a prediction machine, that we might predict phenomena never before seen.

With these competencies, we were able to solve our puzzle.

Why Serialization?

Part Of: [Deserialized Cognition] sequence

Introduction

Nietzsche once said:

My time has not yet come; some men are born posthumously.

Well, this post is “born posthumously” too: its purpose will become apparent by its successor. Today, we will be taking a rather brisk stroll through computer science, to introduce serialization. We will be guided by the following concept graph:

Concept Map To Serialization

On a personal note, I’m trying to make these posts shorter, based on feedback I’ve received recently. 🙂

Let’s begin.

Object-Oriented Programming (OOP)

In the long long ago, most software was cleanly divided between data structures and the code that manipulated them. Nowadays, software tends to bundle these two computational elements into smaller packages called objects. This new practice is typically labelled object-oriented programming (OOP).

OOP- Comparison to imperative style (1)

The new style, OOP, has three basic principles:

  1. Encapsulation. Functions and data that pertain to the same logical unit should be kept together.
  2. Inheritance. Objects may be arranged hierarchically; they may inherit information in more basic objects.
  3. Polymorphism. The same inter-object interface can be satisfied by more than one object.

Of these three principles, the first is most paradigmatic: programming is now conceived as a conversation between multiple actors. The other two simply elaborate the rules of this new playground.
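
Here is a compact sketch of all three principles in Python, loosely based on the taxi-company example from the figure; the class and method names are invented for illustration.

# Encapsulation: data and the code that manipulates it live in one object.
# Inheritance: CardBilling reuses and extends what Billing already defines.
# Polymorphism: anything exposing `charge` satisfies the same interface.
class Billing:
    def __init__(self):
        self.invoices = []                     # encapsulated state

    def charge(self, rider, amount):
        self.invoices.append((rider, amount))
        return f"billed {rider} ${amount}"

class CardBilling(Billing):                    # inheritance
    def charge(self, rider, amount):
        return super().charge(rider, amount) + " (via card)"

def settle_fare(billing, rider, amount):
    return billing.charge(rider, amount)       # polymorphism: either object works

print(settle_fare(Billing(), "alice", 12))
print(settle_fare(CardBilling(), "bob", 15))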

None of this is particularly novel to software engineers. In fact, the ability to conjure up conversational ecosystems – e.g., the taxi company OOP system above – is a skill expected in practically all software engineering interviews.

CogSci Connection: Some argue that conversational ecosystems are not an arbitrary invention, but necessary to mitigate complexity.

State Transitions

Definition: Let state represent a complete description of the current situation. If I were to give you full knowledge of the state of an object, you could (in principle) reconstitute it.

During a program’s lifecycle, the state of an object may change over time. Suppose you are submitting data to the taxi software from the above illustration. When you give your address to the billing system, that object updates its state. Object state transitions, then, look something like this:

OOP- Object State Transitions

Memory Hierarchy

Ultimately, of course, both code and data are 1s and 0s. And information has to be physically embedded somewhere. You can do this in switches, gears, vacuum tubes, DNA, and entangled quantum particles: there is nothing sacred about the medium. Computer engineers tend to favor magnetic disks and silicon chips, for economic reasons. Now, regardless of the medium, what properties do we want out of an information vehicle? Here’s a tentative list:

  • Error resistant.
  • Inexpensive.
  • Non-volatile (preserve state even if power is lost).
  • Fast.

Engineers, never with a deficit of creativity, have invented dozens of such information vehicle technologies. Let’s evaluate four separate candidates, courtesy of Tableau. 🙂

memory technology comparison

Are any of these technologies dominant (superior to all other candidates, in every dimension)?

No. We are forced to make tradeoffs. Which technology do you choose? Or, to put it more realistically, what would you predict computer manufacturers have built, guided by our collective preferences?

The universe called. It says my question is misleading. Economic pressures have caused manufacturers to choose… several different vehicles. And no, I don’t mean embedding different programs into different mediums. Rather, we embed our programs into multiple vehicles at the same time. The memory hierarchy is a case study in redundancy.

CogSci Connection: I cannot say why economics has gravitated towards this highly counter-intuitive solution. But it is important to realize that the brain does the same thing! It houses a hierarchy of trace memory, working memory, and long-term memory. Why is duplication required here as well? So many unanswered questions…

Serialization

It is time to combine OOP and the memory hierarchy. We now imagine multiple programs, duplicated across several vehicles, living in your computer:

OOP- Memory Hierarchy

In the above illustration, we have two programs duplicated across two different information vehicles (main memory and the hard drive). Main memory is faster, so state transitions (changes made by the user, etc.) land there first. This is represented by the mutating color within the objects in main memory. But what happens if someone trips on your power cord, cutting power before main memory can be copied to the hard drive? All changes to the objects are lost! How do we fix this?

One solution is serialization (known in some circles as marshalling). If we simply write down the entire state of an object, we can re-create it later. Many serialization formats (competing conventions for how best to record state) exist. Here is an example in the JavaScript Object Notation (.json) format:

{"menu": {
  "id": "file",
  "value": "File",
  "popup": {
    "menuitem": [
      {"value": "New", "onclick": "CreateNewDoc()"},
      {"value": "Open", "onclick": "OpenDoc()"},
      {"value": "Close", "onclick": "CloseDoc()"}
    ]
  }
}}
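
And here is a minimal Python sketch of the same idea end-to-end: mutate an object’s state, serialize it, and “resurrect” it later. The class and field names are invented, echoing the taxi billing example.

import json

class BillingRecord:
    """Hypothetical billing record from the taxi example."""
    def __init__(self, rider, address=None):
        self.rider = rider
        self.address = address                 # state that may change over time

    def to_json(self):
        return json.dumps(self.__dict__)       # serialize: state -> text

    @classmethod
    def from_json(cls, text):
        return cls(**json.loads(text))         # deserialize: text -> object

record = BillingRecord("alice")
record.address = "123 Main St"                 # a state transition
saved = record.to_json()                       # this text survives a power loss
restored = BillingRecord.from_json(saved)
print(saved)
print(restored.address)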

Applications

So far, we’ve motivated serialization by appealing to a computer losing power. Why else would we use this technique?

Let’s return to our taxi software example. If the software becomes very popular, perhaps too many people will want to use it at the same time. In such a scenario, it is typical for engineers to load balance: distribute the same software on multiple different CPUs. How could you copy the same objects across different computers? By serialization!

CogSci Connection: Let’s pretend for a moment that computers are people, and objects are concepts. … Notice anything similar to interpersonal communication? 🙂

Conclusion

In this post, we’ve been introduced to object-oriented programming, and to how it changed software into something more like a conversation between agents. We also learned a surprising fact about memory: duplicate hierarchies are economically superior to single solutions. Finally, we connected these ideas in our model of serialization: how the entire state of an object can be transcribed to enable future “resurrections”.

Along the way, we noted three parallels between computer science and psychology:

  1. It is possible that object-oriented programming was first discovered by natural selection, as it invented nervous systems.
  2. For mysterious reasons, your brain also implements a duplication-heavy memory hierarchy.
  3. Inter-process serialization closely resembles inter-personal communication.