An Introduction to Set Theory

Part Of: Algebra sequence
Content Summary: 1800 words, 18 min read

Fundamentals of Sets

Definition 1. A set is a collection of distinct objects.

A few examples to kick us off:

  • MyFavoriteFruits = \left\{ apples, persimmon, pineapple \right\} represents the set of fruits which I prefer.
  • A = \left\{ 1, 2, 3, 4, 5 \right\} can represent, among other things, the fingers on my left hand.
  • The set of natural numbers \mathbb{N} = \left\{ 0, 1, 2, 3, 4, ... \right\}.
  • The set of integers \mathbb{Z} = \left\{ ..., -2, -1, 0, 1, 2, ... \right\}.

Sets are subject to the following properties:

  • Order blindness: \left\{2, 1, 3\right\} and \left\{1, 2, 3\right\} express the very same set.
  • Duplicate blindness: \left\{ 1, 1, 2, 3, 3 \right\} = \left\{1, 2, 3 \right\}. We will prefer the latter, more compact, notation.
  • Recursion: \left\{ 1, \left\{2, 3\right\} \right\} is a perfectly valid two-element set, quite distinct from the three-element set \left\{ 1, 2, 3 \right\}.
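
These properties carry over directly to Python's built-in set type. A quick sketch (nested sets require frozenset, since set elements must be hashable):

    # Order blindness and duplicate blindness:
    assert {2, 1, 3} == {1, 2, 3}
    assert {1, 1, 2, 3, 3} == {1, 2, 3}

    # Recursion: a two-element set containing 1 and the set {2, 3}.
    nested = {1, frozenset({2, 3})}
    assert nested != {1, 2, 3}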

Definition 2. Two sets A and B are said to be equal (A = B) if A and B contain exactly the same elements.

  • Let A = \left\{ 1, 2, 3 \right\} and B = \left\{ 3, 1, 2 \right\}. Then, A = B.
  • Let A = \left\{ 1, 2, 3 \right\} and B = \left\{ 1, \left\{2, 3\right\} \right\}. Then, A \neq B.

Definition 3. If an object x is an element of (i.e., member of) set S, we write x \in S. Otherwise, we write x \notin S.

  • Let PrimaryColors = \left\{ red, yellow, blue \right\}. Then yellow \in PrimaryColors means “yellow is an element of the set of primary colors”.
  • -1 \notin \mathbb{N} means “-1 is not an element of the natural numbers”.
  • 1 \notin B = \left\{ 0, \left\{ 1 \right\}, \left\{ \left\{ 2 \right\} \right\} \right\}. The element 1 is not in B: only the set \left\{ 1 \right\} is.

Definition 4. For some set X, its cardinality (i.e., size), written |X|, is the number of elements in that set.

  • Let A = \left\{ 1, 2, 3, 4, 5 \right\}. Then |A| = 5.
  • Let B = \left\{ 1, \left\{2, 3 \right\} \right\}. Then |B| = 2. Note that cardinality only looks at “the outer layer”.

Definition 5. The empty set (i.e., the null set) \varnothing is the set containing no elements.

  • \varnothing = \left\{ \right\}.
  • | \varnothing | = 0.
  • | \left\{ \varnothing \right\} | = 1.
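
A quick check of Definitions 4 and 5 in Python (again using frozenset for nesting):

    assert len(set()) == 0                     # |∅| = 0
    assert len({frozenset()}) == 1             # |{∅}| = 1
    assert len({1, frozenset({2, 3})}) == 2    # cardinality sees only the outer layer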

Definition 6. Instead of listing all elements, set builder notation specifies rules or properties that govern the construction of a set. The notation takes the following form: \left\{ x \mid \text{property of } x \right\}, where the \mid symbol is pronounced “such that”.

  • A = \left\{ 1, 2, 3, 4, 5 \right\} = \left\{ x \in \mathbb{Z} \mid x > 0, x < 6 \right\}. In words: “let A be the set of integers x such that x is greater than zero and less than six.”
  • The set of rational numbers \mathbb{Q} = \left\{ x / y \mid x \in \mathbb{Z}, y \in \mathbb{Z}, y \neq 0 \right\}.

Defining a set by its properties is often called set comprehension. The properties or rules are called the intension; the resulting set is called the extension.

  • Let A = \left\{ y | x \in \mathbb{R}, y = (x+1)(x-1) \right\} and let B = \left\{ y | x \in \mathbb{R}, y = x^{2} -1 \right\}. Here A = B, despite their use of different rules. We say that A and B have the same extension, but different intensions.
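
Python's set comprehensions mirror set-builder notation closely. Here is a sketch of both examples above, with a finite range standing in for the infinite sets \mathbb{Z} and \mathbb{R} (a computer cannot enumerate those):

    A = {x for x in range(-10, 11) if 0 < x < 6}
    assert A == {1, 2, 3, 4, 5}

    # Two different intensions, one extension:
    xs = range(-100, 101)
    assert {(x + 1) * (x - 1) for x in xs} == {x**2 - 1 for x in xs}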

The intension-extension tradeoff denotes an inverse relationship between the number of intensional rules and the size of the set those rules pick out. Let’s consider two examples to motivate this tradeoff.

Consider hierarchical addressing in computer architecture. Suppose we have 2^{6} = 64 bits of computer memory, each bit of which is uniquely identified with a 6-bit address. Suppose further that our memory has been allocated to data structures of varying size. To promote addressing efficiency, a computer can adopt the following strategy: assign shorter addresses to larger variables.

Naive Set Theory- Hierarchical Addressing

We can also see the intension vs. extension tradeoff in the memory systems of the brain. Specifically, semantic memory is organized into a concept hierarchy. We might classify a Valentine’s day gift according to the following tree:

Naive Set Theory- Concept Hierarchy

The number of objects classified as a RED_ROSE is clearly less than the number of objects classified as LIVING_THING. But as our extensional size decreases, the size of our intension (the number of properties needed to classify a RED_ROSE) increases.

Subsets and Power set

Definition 7. A set A is a subset of another set B, written A \subseteq B, if every element of A is also an element of B.

  • Let A = \left\{ 2, 3, 9 \right\} and B = \left\{1,9,3,6, 2\right\}. Then A \subseteq B (recall that element order is irrelevant).
  • Let A = \left\{ 2, 3, 9 \right\} and C = \left\{1,7, 3, 6, 2 \right\}. Then A \nsubseteq C (C does not contain 9).
  • Is \varnothing a subset of \left\{2,3,9\right\}? Yes. For all A, \varnothing \subseteq A is true (vacuously: the empty set has no elements that could fail to belong to A).
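
Python's <= and < operators implement subset and proper subset, respectively. A sketch of the examples above:

    A = {2, 3, 9}
    B = {1, 9, 3, 6, 2}
    C = {1, 7, 3, 6, 2}

    assert A <= B            # A ⊆ B
    assert A < B             # in fact a proper subset, since A ≠ B
    assert not (A <= C)      # A ⊈ C: C does not contain 9
    assert set() <= A        # ∅ ⊆ A holds for every A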

Definition 8. A set A is a proper subset of another set B, written A \subset B, if A \subseteq B and A \neq B.

Definition 9. For a given set A, its power set \mathbb{P} (A) is the set of all subsets of A.

  • Let A = \left\{ 0, 1 \right\}. Then \mathbb{P}(A) = \left\{ \varnothing ,  \left\{ 0 \right\} , \left\{ 1 \right\}, \left\{ 0, 1 \right\} \right\}.

A power set can be constructed by the use of a binary tree, as follows:

Naive Set Theory- Building Powersets (1)

As can be seen above, the total number of subsets must be a power of two. Specifically, if |A| = n, then |\mathbb{P}(A)| = 2^n.
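
One compact way to generate power sets in Python uses itertools (a sketch; note it enumerates subsets by size rather than via the binary tree above, and returns them as tuples):

    from itertools import chain, combinations

    def power_set(s):
        """Return all subsets of s, from the empty tuple up to s itself."""
        items = list(s)
        return list(chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1)))

    assert power_set([0, 1]) == [(), (0,), (1,), (0, 1)]
    assert len(power_set({1, 2, 3})) == 2 ** 3   # |P(A)| = 2^n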

It is important to get clear on the differences between the element-of ( \in ) and subset-of ( \subseteq ) relations. Consider again A = \left\{ 0, 1 \right\} and its power set \mathbb{P}(A) = \left\{ \varnothing , \left\{ 0 \right\} , \left\{ 1 \right\}, \left\{ 0, 1 \right\} \right\}.

  • A \in \mathbb{P}(A). But \left\{A\right\} \not\in \mathbb{P}(A). The \in relation requires that the brackets match exactly.
  • \left\{A\right\} \subseteq \mathbb{P}(A). But A \nsubseteq \mathbb{P}(A). The \subseteq relation requires the “extra bracket”.

Probability theory is intimately connected with the notion of power set. Why? Because many discrete probability distributions have \sigma-algebras drawn from the power set of the natural numbers \mathbb{P} ( \mathbb{N} ).

Cartesian Product, Tuples

Definition 10. Given two sets A and B, their Cartesian product A \times B is the set of all ordered pairs (a, b) such that a \in A and b \in B. Note that, unlike the elements in a set, the elements of an ordered pair cannot be reordered.

  • Let A = \left\{1, 2, 3 \right\} and B = \left\{4, 5, 6\right\}. Then A \times B = \left\{ (1, 4) , (1, 5), (1,6), (2, 4), (2, 5), (2, 6),  (3, 4), (3, 5), (3, 6) \right\}.

We can represent this same example visually, as follows:

Naive Set Theory- Cartesian Product (1)

  • Contrast this with B \times A = \left\{ (4, 1), (4, 2), (4, 3), (5, 1), (5, 2), (5, 3), (6, 1), (6, 2), (6, 3) \right\}. Thus, A \times B \neq B \times A. This is because elements within ordered pairs cannot be rearranged.
  • Note that |A \times B| = 9 = |A| \times |B|. In combinatorics, this observation generalizes to the multiplication principle.
  • The real plane \mathbb{R}^2 = \mathbb{R} \times \mathbb{R} is a well-known example of a Cartesian product.

Definition 11. Given n sets A_{1}, A_{2}, \ldots , A_{n}, their Cartesian product A_{1} \times A_{2} \times \cdots \times A_{n} is the set of all n-tuples (a_{1}, a_{2}, \ldots, a_{n}) where a_{i} \in A_{i} for each i.

  • Let A = \left\{1, 2\right\}, B = \left\{a, b\right\} and C = \left\{ 100 \right\}. Now A \times B \times C = \left\{ (1, a, 100) , (1, b, 100), (2, a, 100), (2, b, 100) \right\}.
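
itertools.product computes Cartesian products of any arity. A sketch of both examples:

    from itertools import product

    A, B = {1, 2, 3}, {4, 5, 6}
    AxB = set(product(A, B))

    assert (1, 4) in AxB and (4, 1) not in AxB   # pair order matters
    assert AxB != set(product(B, A))             # A × B ≠ B × A
    assert len(AxB) == len(A) * len(B)           # multiplication principle

    A2, C = {1, 2}, {100}
    assert len(set(product(A2, {'a', 'b'}, C))) == 4   # the n-ary example above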

Linear algebra is intimately connected with the Cartesian product operation. Why? Because n-tuples are strongly related to n-dimensional vectors. 

Intersection and Union

Definition 12. The intersection of two sets A and B, written A \cap B, is the set of elements common to both sets.

  • Let A = \left\{ 2, 4, 9 \right\} and B = \left\{1, 2, 3, 6, 9 \right\}. Then A \cap B = \left\{ 2, 9 \right\}.
  • Let A = \left\{ 2, 4, 9 \right\} and B = \left\{1, 3, 6, 8, 10\right\}. Then A \cap B = \varnothing.
  • Let A = \left\{ 2, 4, 9 \right\}. Then \varnothing \cap A = A \cap \varnothing = \varnothing.

Definition 13. The union of two sets A and B, written A \cup B, is the set of all elements that are in A or B, or both.

  • Let A = \left\{ 2, 4, 9 \right\} and B = \left\{1, 2, 3, 6, 9 \right\}. Then A \cup B = \left\{ 1, 2, 3, 4, 6, 9 \right\}.
  • Let A = \left\{ 2, 4, 9 \right\}. Then \varnothing \cup A = A \cup \varnothing = A.
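
In Python, & and | implement intersection and union. A sketch of the examples above:

    A = {2, 4, 9}
    B = {1, 2, 3, 6, 9}

    assert A & B == {2, 9}               # intersection
    assert A | B == {1, 2, 3, 4, 6, 9}   # union
    assert A & set() == set()            # ∅ ∩ A = ∅
    assert A | set() == A                # ∅ ∪ A = A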

Venn diagrams represent sets as enclosed areas in a 2D plane. If two (or more!) sets have shared elements, their areas overlap. We can use this technique to visualize sets and their overlap:

Naive Set Theory- Elements vs Venn Diagram

We can also use Venn diagrams to represent our intersection and union relations:

Naive Set Theory- Union vs Intersection Venn (1)

Note that |A \cup B| = |A| + |B| - |A \cap B|. This makes sense in light of the Venn diagram. Adding the cardinality of both sets counts the elements that exist in the middle section twice. To avoid this, we subtract the cardinality of the intersection. In combinatorics, this formula is generalized by the inclusion-exclusion principle.
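
A quick numeric check of the identity:

    A = {2, 4, 9}
    B = {1, 2, 3, 6, 9}
    assert len(A | B) == len(A) + len(B) - len(A & B)   # 6 = 3 + 5 - 2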

Difference and Complement

Definition 14. Given two sets A and B, their difference A \setminus B is the set of elements in A but not also in B.

  • Let A = \left\{ 1, 2, 3, 4, 5, 6 \right\} and B = \left\{0, 2, 4, 6, 8 \right\}. Then A \setminus B = \left\{ 1, 3, 5 \right\} and B \setminus A = \left\{ 0, 8 \right\}.
  • Let A = \left\{ 1, 2, 3, 4, 5, 6 \right\} and B = \left\{7, 8, 9, 10 \right\}. Then A \setminus B = A and B \setminus A = B.

Definition 15. Given two sets A and B, the symmetric difference A \triangle B is the set of elements in A or B, but not both.

  • Let A = \left\{ 1, 2, 3, 4, 5, 6 \right\} and B = \left\{0, 2, 4, 6, 8 \right\}. Then A \triangle B = \left\{ 0, 1, 3, 5, 8 \right\}.

Definition 16. In many set problems, all sets are defined as subsets of some reference set. This reference set is called the universe U.

  • Let A = \left\{ 1+i, 12-8i, 3+0i \right\} and let its universe be the set of complex numbers \mathbb{C}. It is true that A \subseteq \mathbb{C}.

Definition 17. Relative to a universe U, the complement of A, written \overline{A}, is the set of all elements of U not contained in A.

  • Let U be the set of positive integers less than 10: U = \left\{ x | x \in \mathbb{Z}^{+}, x < 10 \right\} and A = \left\{ 1, 2, 3, 4, 5 \right\}. Then \overline{A} = \left\{ 6, 7, 8, 9 \right\}.
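
Python's - and ^ operators implement difference and symmetric difference, and the complement is simply the difference from the universe. A sketch of the examples above:

    A = {1, 2, 3, 4, 5, 6}
    B = {0, 2, 4, 6, 8}
    assert A - B == {1, 3, 5}         # A \ B
    assert B - A == {0, 8}            # B \ A
    assert A ^ B == {0, 1, 3, 5, 8}   # symmetric difference

    U = set(range(1, 10))             # universe: positive integers below 10
    A = {1, 2, 3, 4, 5}
    assert U - A == {6, 7, 8, 9}      # complement of A relative to U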

We can again represent these relations graphically, via Venn diagrams:

Naive Set Theory- Complement and Difference Venn (2)

Takeaways

Let me summarize this post in terms of our 17 definitions 🙂

  • Def 1-5 introduced the notion of set, set equality, the element-of operator, cardinality (set size), and empty set.
  • Def 6 introduced set builder notation, and the intension-extension tradeoff.
  • Def 7-9 introduced subset, proper subset, and power set.
  • Def 10-11 introduced Cartesian product, ordered pairs, and n-tuples.
  • Def 12-13 introduced intersection (“and”) and union (“or”), as well as Venn diagrams.
  • Def 14-17 introduced difference, symmetric difference (“xor”), and complement (“not”).

This introductory article focused on promoting intuitions through worked examples. Next time, we’ll look at these same operations more carefully, and examine the relationship between set theory and classical predicate logic.


New Foundations: Towards Tribal Unity

Part Of: Principles of Machine Learning sequence
Followup To: Five Tribes of Machine Learning
Content Summary: 1700 words, 17 min read

Overview

In Five Tribes of Machine Learning, I reviewed Pedro Domingos’ account of tribes within machine learning. These were the Symbolists, Connectionists, Bayesians, Evolutionaries, and Analogizers. Domingos thinks the future of machine learning lies in unifying these five tribes into a single algorithm. This master algorithm would weld together the different focal points of the various tribes (cf. the parable of the blind men and the elephant).

Today, I will argue that Domingos’ goal is worthy, but his approach too confined. Integrating theories of learning surely constitutes a constructive line of inquiry. But direct attempts to unify the tribes (e.g., Markov logic) are inadequate. Instead, we need to turn our gaze towards pure mathematics: the bedrock of machine learning theory. Just as there are tribes within machine learning, mathematical research has its own tribes (image credit Axel Sarlin):

New Foundations- Mapthematics

The tribes described by Domingos draw from the math of the 1950s. Attempting mergers based on these antiquated foundations is foolhardy. Instead, I will argue that updating towards modern foundational mathematics is a more productive way to pursue the master algorithm. Specifically, I submit that machine learning tribes should strive to incorporate constructive mathematics, category theory, and algebraic topology.

Classical Foundations

Domingos argues for five machine learning tribes. I argue for four. I agree that his Symbolists, Connectionists, and Bayesians are worthy of attention. But I will not consider his Evolutionaries and Analogizers: these tribes have been much less conceptually coherent, and also less influential. Finally, I submit Frequentists as a fourth tribe. While this discipline tends to self-identify as “predictive statistics” instead of “machine learning”, its technology is sufficiently similar to merit consideration.

The mathematical foundations of the Symbolists rest on predicate logic, invented by Gottlob Frege and C.S. Peirce. This calculus in turn forms the roots of set theory, invented by Georg Cantor and elaborated by Bertrand Russell. Note that 3 out of 4 of these names come from analytic philosophy. Alan Turing’s invention of his eponymous machine marked the birth of computer science. The twin pillars of computer science are computability theory and complexity theory, both of which rest on top of set theory. Finally, algorithm design connects with the mathematical discipline of combinatorics.

The foundation of the Statisticians (both Bayesian and Frequentist) is measure theory (which, coincidentally, borrows from set theory). The field of information theory gave probability distributions the concept of uncertainty: see entropy as belief uncertainty. Finally, formal theories of learning draw heavily from optimization: where model parameters are tuned to optimize against miscellaneous objective functions.

Mathematical research can largely be decomposed into two flavors: algebraic and analytic. Algebra focuses on mathematical objects and structures: group theory, for example, falls under its umbrella. Analysis, alternatively, focuses on continuity, and includes fields like measure theory and calculus. Notice that the mathematical foundations of the Symbolists are fundamentally algebraic, whereas those of the Statisticians are analytic. This gets at the root of why machine learning tribes often have difficulty communicating with one another.

New Foundations- Foundations

Classical Applications

We have already noted that the Symbolists, Connectionists, and Bayesians have all created applications in machine learning (decision trees, neural networks, and graphical models, respectively). These tribes are also expressed in neuroscience (language of thought, Hebbian learning, and the Bayesian Brain, respectively). They have also all developed their own flavors of cognitive architecture (production rule systems, attractor networks, and predictive coding, respectively).

Frequentist Statisticians have no real presence in machine learning, neuroscience, or cognitive architecture. But they are the dominant force in the social sciences; e.g., econometrics.

I should also note that, in addition to the fields already mentioned, Symbolists have a unique presence in linguistics (especially Chomskyan universal grammar) and analytic philosophy (cf. that field’s heavy reliance on predicate logic, and the linguistic turn in the early twentieth century).

Finally, causal inference exists only in the Bayesian (Pearlean d-separation) and Frequentist (Rubin potential outcomes model) traditions. To my knowledge, this technology has not yet been robustly integrated into the Symbolist or Connectionist tribes.

New Foundations- Applications

These four tribes largely draw from early twentieth century mathematics. Let us now turn to what mathematicians have been up to, in the past century.

Towards New Foundations

Let me now introduce you to the three developments in modern mathematics: constructive mathematics, category theory, and algebraic topology.

In classical logic, truth is interpreted ontologically: a fact about the world. But truth can also be interpreted epistemically: a true proposition is one that we can prove. But epistemic logic (aka intuitionistic logic) has us reject the Law of Excluded Middle (LEM): failing to prove a theorem is not the same thing as disproving it.

Removing LEM from mathematics makes proof by contradiction impossible. While this may seem limiting, it also opens the doors for constructive mathematics: mathematics that can be input, and verified, by a computer. Erdős’ Book of God will be supplanted by the Github of God.

In recent years, category theory has emerged as the lingua franca of theoretical mathematics. It is built on the observation that all mathematical disciplines (algebraic and analytic) fundamentally describe mathematical objects and their relationships. Importantly, category theory allows theorems proved in one category to be translated into entirely novel disciplines.

Finally, since Alexander Grothendieck’s work on sheaf and topos theory, algebraic topology (and algebraic geometry) have come to occupy an increasingly central role in mathematics. This trend has only intensified in the 21st century. As John Baez puts it,

These are just the first steps in the ‘homotopification’ of mathematics, a trend in which algebra more and more comes to resemble topology, and ultimately abstract ‘spaces’ (for example, homotopy types) are considered as fundamental as sets.

New Foundations- Three Pillars Overview

These three “pillars” are perhaps best motivated by the technology that rests on them.

Computational trinitarianism is built on deep symmetries between proof theory, type theory, and category theory. The movement is encapsulated in the slogans “Proofs are Programs” and “Propositions are Types”. This realization led to the development of Martin-Löf dependent type theory, which in turn has led to theorem-proving software packages such as Coq.

In metamathematics, researchers investigate whether a single formal language can form the basis of the rest of mathematics. Historically, the candidates have been Zermelo-Fraenkel (ZF) set theory and, more recently, the Elementary Theory of the Category of Sets (ETCS). Homotopy type theory (HoTT) is a third, new entry into the arena, and extends computational trinitarianism with the Univalence Axiom, an entirely new interpretation of logical equality. Under the hood, the univalence axiom relies on a topological interpretation of the equality type. Suffice it to say, this particular theory has recently inspired a torrent of novel research. Time will tell how things develop.

Thermodynamics is built on the idea of Gibbs entropy (or, more formally, free energy). The basic intuition, which stems from statistical physics, is that disorder tends to increase over time. And thermodynamics does appear to be relevant in a truly diverse set of physical phenomena.

  • In physics, entropy is the reason behind the arrow of time (its “forward directionality”).
  • In chemistry, entropy forms the basis for spontaneous (asymmetric) reactions.
  • In paleoclimatology, there is increasing reason to think that abiogenesis occurred via a thermodynamic process.
  • In anatomy, entropy is the organizing principle underlying cellular metabolism.
  • In ecology, entropy explains emergent phenomena related to biodiversity.

If I were to point at one candidate for the master algorithm, entropy minimization would be my first pick. It turns out, strangely enough, that thermodynamic (Gibbs) entropy has the same functional form as information-theoretic (Shannon) entropy, which measures uncertainty in probability distributions. This is no accident. Information geometry extends this notion of “thermodynamic information” by treating families of probability distributions as statistical manifolds.

In physics, of course, the two dominant theories of nature (general relativity and QFT) are mutually incompatible. It is increasingly becoming apparent that quantum topology is the most viable way to achieve a Grand Unified Theory. From this paper:

Feynman diagrams are used to reason about quantum processes. In the 1980s, it became clear that underlying these diagrams is a powerful analogy between quantum physics and topology. Namely, a linear operator behaves very much like a ‘cobordism’: a manifold representing spacetime, going between two manifolds representing space. This led to a burst of work on topological quantum field theory and ‘quantum topology’

New Foundations- Three Pillars Complete (1)

Searching For Unity

That was a lot of content. Let’s zoom out. What is the point of being introduced to these new foundations? To give a more detailed intuition about which ML research is worthy of your attention (and participation!).

Most attempts to unify machine learning draw from merely classical foundations. For example, consider fuzzy logic, Markov logic networks, Dempster-Shafer theory, and Bayesian Neural Networks. While these ideas may be worth learning (particularly the last two), as candidates for unification they are necessarily incomplete; doomed by their unimaginative foundations.

New Foundations- Old Research Strategies (2)

In contrast, I submit you should funnel more enthusiasm towards ideas that draw from our new foundations. These may be active research concepts:

  • In linguistics, categorical compositionality is the marriage of category theory and traditional syntax. It blends nicely with probabilistic approaches of meaning (e.g., word2vec). See this 2015 paper, for example.
  • In statistics, topological data analysis is a rapidly expanding discipline. Rather than limiting oneself to probabilistic distribution theory (exponential families), this approach to statistics incorporates structural notions from algebraic topology. See this introductory tutorial, for example.
  • In neuroscience, the most recent Blue Brain experiment suggests that Hebbian-style learning is not the whole story. Instead, the brain seems to rely on connectome topology: dynamically summoning and dispersing cliques of neurons, whose cooperation subsequently disappears like a tower of sand.
  • In macroeconomics, neoclassical models (based on partial differential equations) are being challenged by a new kind of model, econophysics, which views the market as a kind of heat engine.

New Foundations- New Research Strategies (1)

Or they may be entirely unexplored questions that dawn on you by contemplating conceptual lacunae.

  • What would happen if I were to re-imagine probability theory from intuitionistic principles?
  • How might I formalize production rule cognitive architectures like ACT-R in category theory?
  • Is there a way to understand neural network behavior and the information bottleneck from a topological perspective?

Until next time.

An Introduction to Topology

Part Of: Analysis sequence
Content Summary: 1000 words, 10 min read

Motivating Example

Can you draw three lines connecting A to A, B to B, and C to C?  The catch: the lines must stay on the disc, and they cannot intersect.

Topology- Motivating Problem (2)

Here are two attempts at a solution:

Topology- Potential Solutions (1)

Both attempts fail. In the first, there is no way for the Bs and Cs to cross the A line. In the second, we have made more progress… but connecting C is impossible.

Does any solution exist? It is hard to see how…

Consider a simplified puzzle. Let’s swap the inner points B and C.

Topology- Original vs Easy Puzzle (2)

In the new puzzle, the solution is easy: just draw straight lines between the pairs!

To understand where this solution breaks down, let’s use continuous deformation (i.e., homeomorphism) to transform this easier puzzle back into the original. In other words, let’s drag B and C back to their original positions, without dropping the “strings” of our solution lines:

topology

Deformation has led us to the solution! Note what just happened: we solved an easy problem, and then “pulled” that solution to give us insight into a harder problem.

As we will see, the power of continuous deformation extends far beyond puzzle-solving. It resides at the heart of topology, one of mathematics’ most important disciplines.

Manifolds: Balls vs Surfaces

The subject of arithmetic is the number. Analogously, in topology, manifolds are our objects. We can distinguish two kinds of primitive manifold: balls and surfaces.

Topology- Balls and Surfaces (1)

These categories generalize ideas from elementary school:

  • A 1-ball B^1 is a line segment.
  • A 2-ball B^2 is a disc.
  • A 1-sphere S^1 is a circle.
  • A 2-sphere S^2 is a sphere.

Note the difference between volumes and their surfaces. Do not confuse e.g., a disc with a circle. The boundary operation \partial makes the volume-surface relationship explicit. For example, we say that \partial B^2 = S^1.

Note that surfaces are one dimension below their corresponding volume. For example, a disc resides on a plane, but a circle can be unrolled to fit within a line.

Importantly, an m-ball and an m-cube are considered equivalent! After all, they can be deformed into one another. This is the reason for the old joke:

A topologist cannot tell the difference between a coffee cup and a donut. Why? Because both objects are equivalent under homeomorphism:

coffeetodonut

If numbers are the objects of arithmetic, operations like multiplication act on these numbers. Topological operations include product, division, and connected sum. Let us address each in turn.

On Product

The product (\times) operation takes two manifolds of dimension m and n, and returns a manifold of dimension m+n. A couple of examples to whet your appetite:

Topology- Examples of Product (1)

These formulae only show manifolds of small dimension. But the product operation can just as easily construct, e.g., a 39-ball (writing I^1 for the unit interval, and recalling that an m-cube is equivalent to an m-ball):

B^{39} = \prod_{i=1}^{39} I^1

How does product relate to our boundary operator? By the following formula:

\partial (M \times N) = ( \partial M \times N) \cup (M \times \partial N )

This equation, deeply analogous to the product rule in calculus, becomes much clearer by inspection of an example:
Topology- Product vs Boundary (1)
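
As a quick check (my own example, in the same spirit): apply the formula to the cylinder S^1 \times I^1. Since the circle has no boundary,

\partial (S^1 \times I^1) = (\partial S^1 \times I^1) \cup (S^1 \times \partial I^1) = \varnothing \cup (S^1 \times \left\{0, 1\right\}) = S^1 \sqcup S^1

the boundary of a cylinder is exactly its two rim circles.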

On Division

Division ( / ) glues together the boundaries of a single manifold. For example, a torus can be created from the rectangle I^{2}:

torus_by_division

We will use arrows to specify which edges are to be identified. Arrows with the same color and shape must be glued together (in whatever order you see fit).

Topology- Division Simple Examples (2)

Alternatively, we can specify division algebraically. In the following equation, x=0 means “left side of cylinder” and x=1 means right side:

S^1 \times I^1 = \text{Cylinder} = \frac{I^2}{(0,y) \sim (1, y) \ \forall y}

The Möbius strip is rather famous for being non-orientable: it neither has an inside nor an outside. As M.C. Escher once observed, an ant walking on its surface would have to travel two revolutions before returning to its original orientation.

More manifolds can be created by division on I^{2}. To construct a Klein bottle by division, take a cylinder, twist it, and fold it back on itself:

Topology- Klein Bottle Construction (5)

In our illustration, there is a circle boundary denoting the location of self-intersection. Topologically, however, the Klein bottle need not intersect itself. It is only immersion in 3-space that causes this paradox.

Our last example of I^{2} division is the real projective plane RP^{2}. This is even more difficult to visualize in 3-space, but there is a trick: cut I^{2} again. As long as we glue both pieces together along the blue line, we haven’t changed the object. 

Topology- Deriving Real Projective Plane First Part (1)

The top portion becomes a Möbius strip; the bottom becomes a disc. We can deform a disc into a sphere with a hole in it. Normally, we would want to fill in this hole with another disc. However, we only have a Möbius strip available.

But Möbius strips are similar to discs, in that its boundary is a single loop. Because we can’t visualize this “Möbius disc” directly, I will represent it with a wheel-like symbol.  Let us call this special disc by a new name: the cross cap.

The real projective plane, then, is a cross cap glued into the hole of a sphere.  It is like a torus; except instead of a handle, it has an “anomaly” on its surface.

Topology- Deriving Real Projective Plane Second Part

These, then, are our five “fundamental examples” of division:

Topology- Division Overview (3)

On Connected Sum

Division involves gluing together parts of a single manifold. Connected sum (#), also called surgery, involves gluing two m-dimensional manifolds together. To accomplish this, take both manifolds, remove an m-ball from each, and identify (glue together) the boundaries of the holes. In other words:

\frac{ ( M_1 \setminus B_1 ) \cup ( M_2 \setminus B_2 ) }{ \partial ( M_1 \setminus B_1 ) \sim \partial ( M_2 \setminus B_2 )} = M_1 \# M_2

Let’s now see a couple of examples. If we glue tori together, we can increase the number of holes in our manifold. If we attach a torus to a real projective plane, we acquire a manifold with holes and cross caps.
Topology- Connected Sum examples (3)
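
A useful sanity check on surgery (not from the lecture series, but standard) is the Euler characteristic \chi. Removing a disc from each surface and gluing along the resulting boundary circles gives

\chi(M_1 \# M_2) = \chi(M_1) + \chi(M_2) - 2

Since \chi(T^2) = 0, the double torus has \chi(T^2 \# T^2) = 0 + 0 - 2 = -2, which matches the formula \chi = 2 - 2g for a surface of genus g = 2.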

Takeaways

  • Topology, aka. “rubber sheet geometry”, is the study of malleable objects & spaces.
  • In topology, manifolds represent objects in n-dimensional space.
    • Manifolds represent either volumes (e.g., discs) or surfaces (e.g., circles).
  • Manifolds are considered equivalent if a homeomorphism connects them.
  • There are three basic topological operations:
    • Product (\times) is a dimension-raising operation (e.g., a square can become a cube).
    • Division (/) is a gluing operation, binding together parts of a single manifold.
    • Connected sum (#) i.e., surgery describes how to glue two manifolds together.

Related Materials

This post is based on Dr. Tadashi Tokieda’s excellent lecture series, Topology & Geometry. For more details, check it out!

The X-Bar Theory of Phrase Structure

Part Of: Language sequence
Followup To: An Introduction to Generative Syntax
Content Summary: 800 words, 8 min read

Explaining Substitution

Consider the sentence “I bought this big book of poems with the red cover”.

XBar- Flat Noun Phrase (1)

In everyday language, we often replace words and phrases with indexing words like “one”. Call this indexing replacement. The meaning of these words can be obtained from the context.

At first glance, indexing replacement seems to target a branch in the syntax tree. For example:

  • I bought that big one of poems with the red cover (“one” replaces the noun)
  • I bought one (“one” replaces the entire noun phrase)

But several other substitutions don’t follow from branch replacement:

  • I bought that big one.
  • I bought that small one.
  • I bought that big one of poems with the blue cover.

Perhaps our notion of noun phrases is too flat. Perhaps we need additional nodes to describe structure within the noun phrase. We will call these intermediate nodes N' (where N → N' → N'' = NP):

towards_noun-bar

This new tree successfully predicts all substitution phenomena, by modeling “one” as replacing various “N-bar” nodes:

xbar_noun_substitution
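
To make the analysis concrete, here is a minimal Python sketch (the tuple encoding and function names are mine, not standard linguistics tooling). It builds the X-bar tree for “this big book of poems with the red cover”, then enumerates every way of replacing a single N’ node with “one”:

    # X-bar nodes as nested tuples: (label, child, ...); leaves are plain strings.
    NP = ("NP", ("D", "this"),
                ("N'", ("A", "big"),
                       ("N'", ("N'", ("N", "book"), ("PP", "of poems")),
                              ("PP", "with the red cover"))))

    def leaves(tree):
        """Flatten a tree back into its words."""
        if isinstance(tree, str):
            return [tree]
        return [word for child in tree[1:] for word in leaves(child)]

    def one_substitutions(tree):
        """Yield copies of tree with exactly one N' node replaced by 'one'."""
        if isinstance(tree, str):
            return
        if tree[0] == "N'":
            yield ("N'", "one")
        for i, child in enumerate(tree[1:], start=1):
            for replaced in one_substitutions(child):
                yield tree[:i] + (replaced,) + tree[i + 1:]

    for variant in one_substitutions(NP):
        print(" ".join(leaves(variant)))
    # this one
    # this big one
    # this big one with the red cover

Each printed phrase is a grammatical “one”-substitution that branch replacement on a flat NP tree cannot generate.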

We can similarly introduce depth to our verb phrases (VPs), by using intermediate V’ (“V-bar”) nodes:

XBar- Verb Substitution (2)

The X-Bar syntax tree provides a simple explanation of the “do so” substitution effects:

  • I will do so in the office before the party.
  • I will do so before the party.
  • I will do so.

A General Theory of Phrases

We can revise our original NP and VP rules to reflect our intermediate N’ and V’ nodes:

Xbar Theory- Towards XBar Rules

What if noun and verb phrases are instantiations of a more general phrase structure? Just as group theory identifies overlap in the axioms of addition and subtraction, X-bar theory explores the similarity between NP and VP rules.  

Xbar Theory- XBar Parameterization (1)

There are only four kinds of phrase constituents:

  1. The head carries the central meaning of the phrase. Consider the sentence “The tall student who is wearing the red shirt asked questions of her professor, after the lecture.” The central meaning is retained if we remove all non-head words: “student asked questions”.
  2. The specifier points to the head. For nouns, specifiers include determiners (“the”) and possessives (“her”). For verbs, adverbs occasionally fill this role (“quickly”).
  3. The complement tends to feel intimately related to the head of a phrase (e.g., “of poems” in “a book of poems”).
  4. Adjuncts, on the other hand, tend to feel more optional (e.g., “big” in “big book”).

Xbar Theory- Phrase Structure

Adjuncts vs Complement

Given that adjuncts and complements both often inhabit prepositional phrases, it is perhaps surprising that they should behave differently. Yet they do. Let us look at three behavioral differences:

Difference #1. Adjuncts can be reordered freely. 

Consider our example verb phrase:

Xbar Theory- VBar and NBar Example

This rule means that our two adjuncts can be shuffled, but the complement NP must retain its original position:

  • I will read the letter in the office before the party (Original order: valid)
  • I will read the letter before the party in the office (Adj reorder: valid)
  • *I will read in the office before the party the letter (Compl reorder: invalid)

Difference #2. Indexing replacement cannot strand the complement.

For example,

  • I will do so in the office before the party (Adj is stranded: valid)
  • *I will do so the letter before the party (Compl is stranded: invalid)

Now consider a part of speech we have not yet examined: conjunction words like “and” and “or”.

Difference #3. Conjunction words bind adjuncts together, and complements together. But adjunct-complement bindings are ungrammatical.

Consider our example noun phrase:

Xbar Theory- NBar Example (1)

Three examples to illustrate how conjunction works:

  • I bought the book of poems and of short stories. (Compl-compl conjunction: valid)
  • The book with the red cover and the black spine. (Adj-adj conjunction: valid)
  • *The book of poems and with the red cover. (Compl-adj conjunction: invalid)

What X-Bar Theory Tells Us About Memory

Earlier, I introduced the distinction between episodic and semantic memory:

  • Semantic: ability to remember facts and concepts (e.g., hands have five fingers)
  • Episodic: ability to remember events or episodes (e.g., dinner last Tuesday night)

Concepts are learned by extracting commonalities from episodic memories. If you see enough metallic blocks moving around on four wheels, you’ll eventually consolidate these objects into the CAR concept:

XBar Theory- Semantic vs Episodic Memory

In philosophy, I suspect the concepts of necessity and contingency relate to semantic and episodic memory, respectively.

In linguistics, I suspect complements help locate concepts in semantic memory, whereas adjuncts assist episodic localization. In the sentence “I bought the book of poems with the red cover”, the complement helps us activate the concept POEM-BOOK, whereas the adjunct creates sense-predictions that locate it within our episodic memory.

Takeaways

  • With flat syntax trees, it is difficult to explain indexing substitution (e.g., “bought a book” → “bought one”)
  • If we make syntax trees binary, by introducing intermediate X’ (“X-bar”) nodes, substitution becomes more straightforward.
  • Noun and verb phrases thus parameterize a more general phrase structure.
  • Phrases have four kinds of constituents: head, specifier, complement, and adjuncts.
  • The differences between complements and adjuncts are instructive:
    • Only adjuncts can be reordered.
    • Indexing replacement cannot strand the complement.
    • Conjunction cannot bind across categories.
  • In human cognition, complements and adjuncts may correspond to semantic and episodic memory, respectively.

An Introduction to Generative Syntax

Part Of: Language sequence
Content Summary: 900 words, 9 min read

Syntax vs Semantics

In language, we distinguish between syntax (structure) and semantics (meaning).

Compare the following:

  • “Colorless green ideas sleep furiously”
  • “Sleep ideas colorless green furiously”

Both sentences are nonsensical (a semantic transgression). But the first is grammatically correct, whereas the second is malformed.

The brain responds differently to errors of syntax and semantics, as measured by an EEG machine. Semantic errors produce a negative voltage after 400 milliseconds (“N400”); syntactic errors produce a positive voltage after 600 milliseconds (“P600”):

Syntax- Linguistic ERPs (1)

Parts of Speech

To understand syntax more precisely, we must differentiate parts of speech. Consider the following categories:

  • Noun (N). cat, book, computer, peace, …
  • Verb (V). jump, chase, eat, sleep, …
  • Adjective (A). long, purple, young, old, …
  • Determiner (D). the, this, many, all, …
  • Preposition (P). in, on, to, for, with, …

Nouns and verbs correspond to perception- and action- representations, respectively. They are an expression of the perception-action cycle. But to study syntax, it helps to put aside semantic context, and explore how parts of speech relate to one another.

Phrases as Color Patterns

To understand syntax intuitively, start by adding color to sentences.  Then try to find patterns of color unique to well-formed sentences.

Let’s get started!

Syntax- Noun Phrase Abstraction (3)

“Noun-like” groups of words appear on either side of the verb. Let noun phrase (NP) denote such a group. Optional parts of speech are indicated by the parentheses. Thus, our grammar contains the following rules:

  1. S → NP V NP
  2. NP → (D) (A) N

These rules explain why the following sentences feel malformed:

  • “Chase dogs cats” (violates rule 1)
  • “Old some dogs chase cats” (violates rule 2)

But these rules don’t capture regularities in how verbs are expressed. Consider the following sentences:

Syntax- Verb Phrase Abstraction (1)

A verb phrase contains a verb, optionally followed by a noun phrase and/or a preposition plus noun phrase.

  1. S → NP VP
  2. NP → (D) (A) N
  3. VP → V (NP) (P NP)

This is better. Did you notice how we improved our sentence (S) rule? 🙂 Sentences without an object (e.g., “She ran”) are now recognized as legal.

Prepositions are not limited to verb phrases, though. They also occur in noun phrases. Consider the following:

Syntax- Prepositional Phrase Abstraction

Prepositions are sometimes “attached to” a noun phrase. We express these as a prepositional phrase, which includes a preposition (e.g. “on”) and an optional noun phrase (e.g. “the table”).

  1. S → NP VP
  2. NP → (D) (A) N (PP)
  3. VP → V (NP) (PP)
  4. PP → P (NP)

Notice how we cleaned up the VP rule, and improved the NP rule.

Congratulations! You have discovered the rules of English. Of course, a perfectly complete grammar must also include pronouns (e.g., “yours”), conjunctions (e.g., “and”), and interjections (e.g., “wow!”). But these are fairly straightforward extensions to the above system.
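
As a sketch of how these rules can be run (using the NLTK library, assuming it is installed; the tiny lexicon is my own, and optional constituents are expanded into explicit alternatives, since CFGs cannot mark optionality directly):

    import nltk

    grammar = nltk.CFG.fromstring("""
        S  -> NP VP
        NP -> N | D N | A N | D A N | N PP | D N PP | A N PP | D A N PP
        VP -> V | V NP | V PP | V NP PP
        PP -> P NP
        D  -> 'the' | 'some'
        A  -> 'old'
        N  -> 'dogs' | 'cats' | 'park'
        V  -> 'chase'
        P  -> 'in'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("the dogs chase cats in the park".split()):
        print(tree)   # one tree per parse; ambiguity shows up as multiple trees

The sentence parses twice: once with the PP attached to the verb (the chasing happens in the park) and once with it attached to “cats”, exactly the kind of ambiguity discussed below.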

These grammatical rules need not only interest English speakers. As we will see later, a variant of these rules appear in all known human languages. This remarkable finding is known as universal grammar. Language acquisition is not about reconstructing syntax rules from scratch. Rather, it is about learning the parameters by which your particular natural language (English, Chinese, Egyptian) varies from the universal script.

From Rules to Trees

Our four rules are polymorphic: they permit more than one kind of structure. Unique rule sets are easier to analyze, so let’s translate our rules into this format:

Syntax- Compressed vs Unique Ruleset (1)

Importantly, we can conceive of these unique rules as directions to construct a tree. The sentence “Dogs chase cats” becomes:

Syntax- Simple Tree (1)

Sentences are trees. These trees are not merely used to verify grammatical correctness. They also play a role in speech production, which transforms the language of thought (Mentalese) into natural language (e.g., English). For more on this, see my discussion of the Tripartite Mind.

How can (massively parallel) conscious thought be made into (painfully serial) speech utterances? With syntax! Simply take the concepts you desire to communicate, and construct a tree based on (a common set of) syntactical rules.

syntax_tree_construction

Tree construction provides much more clarity on the phenomenon of wordplay (linguistic ambiguity). Consider the sentence “I shot a wolf in my pajamas”. Was the gun fired while you were wearing pajamas? Or was the wolf dressed in pajamas?

Syntax- Multiple Interpretation Ambiguity

Both interpretations agree on parts of speech (colors). It is the higher-order structure that admits multiple choices. In practice, semantics constrains syntax: we tend to select the interpretation that feels the most intuitive.

The Sociology of Linguistics

The above presentation uses a simple grammar, for pedagogic reasons. I will at some point explain the popular X’ theory (pronounced “X bar”), which explores similarities between different phrase structures (e.g., NP vs PP). Indeed, there is a wide swathe of possible grammars that we will explore.

Syntax- Sociology of Linguistic Research

Generative grammar is part of the Symbolist tribe of machine learning. As such, this field has rich connections with algebra, production systems, and logic. For example, propositional logic was designed as the logic of sentences; predicate logic is the logic of phrases.

Other tribes besides the Symbolists care about language and grammar, of course. Natural Language Processing (NLP) and computational linguistics have been heavily influenced by the Bayesian tribe, and use probabilistic grammars (e.g., PCFGs).

More recently, the Connectionist tribe (and its deep learning technologies) is taking a swing at producing language. In fact, I suspect neural network interpretability will only be achieved once a Connectionist account of language production has matured.

Takeaways

  • Language can be understood via syntax (structure) and semantics (meaning).
  • Syntax requires delineating parts of speech (e.g., nouns vs verbs).
  • Parts of speech occur in patterns called phrases. We can express these patterns as the rules of syntax.
  • Sentences are trees. Syntax rules are instructions for tree construction.
  • Sentence-trees provide insight into problems like sentence ambiguity.

For more resources on syntax trees, I recommend this lecture, this website, and this Youtube channel.

Until next time.

Logic Inference: Sequent Calculus

Part Of: Logic sequence
Followup To: Natural Deduction
Content Summary: 600 words, 6 min read

Motivating Sequent Calculus

Last time, we labelled propositions in the language of verification.

  • ↑ represents conjecture: propositions that require verification
  • ↓ represents assumption: propositions that can be used for verification.

Two of our connective rules (⊃I and ∨E) expanded our set of assumptions, which we could use at any later time. Logical acumen lies in invoking the right assumption at the right time.

In contrast to natural deduction, sequent calculus explicitly tracks the set of assumptions as it varies across different branches of the proof tree.

We will use the turnstile to distinguish assumptions from conjecture: { assumptions } ⊢ { conjectures }

In natural deduction, progress is bidirectional: we are done when we have found a connection between assumptions and conjecture. In sequent calculus, progress is unidirectional: we start with no assumptions, and finish when we have no conjectures left to demonstrate.

Sequence Calculus- Different Schematics (1)

Both logical systems rely on two sets of five rules. They bear the following relationships:

  • R = I. Right rules are very similar to Introduction rules.
  • L = E^{-1}. Left rules are Elimination rules turned “upside down”.

Right and Left rules

We here define capital gamma Γ to represent the context, or current set of assumptions.

Right rules simply preface Introduction rules with  “Γ ⊢”. The exception ⊃R is instructive. There, A is added to the context, and our “target” conjecture shrinks to just B.
Sequent Calculus- Right vs Introduction (2)

Left rules are less transparently related to Elimination. They are more easily understood by an English explanation:

Sequent Calculus- Left Rule English Interpretation (2)

The entire structure of sequent calculus, then, looks like this:

Sequent Calculus- Left and Right Rules (1)
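
Before putting the rules to work, here is a minimal Python sketch of the implicational fragment (the encoding is mine; each rule transforms a (context, goal) pair, with the context playing the role of Γ):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Atom:
        name: str

    @dataclass(frozen=True)
    class Implies:
        antecedent: object
        consequent: object

    def hyp(context, goal):
        """hyp: Γ ⊢ A holds immediately when A ∈ Γ."""
        return goal in context

    def right_implies(context, goal):
        """⊃R: to prove Γ ⊢ A ⊃ B, prove Γ, A ⊢ B."""
        return context | {goal.antecedent}, goal.consequent

    def left_implies(context, impl, goal):
        """⊃L: given (A ⊃ B) ∈ Γ, prove Γ ⊢ A and Γ, B ⊢ goal."""
        return (context, impl.antecedent), (context | {impl.consequent}, goal)

    # Peeling off the three ⊃R steps of Example 1 below:
    A, B, C = Atom("A"), Atom("B"), Atom("C")
    theorem = Implies(Implies(A, Implies(B, C)),
                      Implies(Implies(A, B), Implies(A, C)))
    ctx, goal = frozenset(), theorem
    for _ in range(3):
        ctx, goal = right_implies(ctx, goal)
    assert goal == C and hyp(ctx, A)   # Γ = {A⊃(B⊃C), A⊃B, A}; A follows by hyp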

Enough theory! Let’s use sequent calculus to prove stuff.

Example 1: Implication

Show that (A ⊃ (B ⊃ C)) ⊃ ((A ⊃ B) ⊃ (A ⊃ C)).

Here, ⊃R serves us well:

Sequent Calculus- Implication Step0 (1)

We have parsed the jungle of connectives, and arrived at a clear goal. We need to prove C. How?

Recall what ⊃L means: “if you have assumed A ⊃ B, you may also assume B (right branch) if you can prove A with your current assumptions (left branch)”.

Let’s apply ⊃L to the A ⊃ B proposition sitting in our context. To save space, let us here define Γ as the following three elements: { A⊃(B⊃C), A⊃B, A }.

Sequent Calculus- Implication Step1.png

We can solve the left branch immediately. Since A ∈ Γ, we can invoke the hyp rule.

Unfortunately, assuming B is not enough to prove C. We must invoke ⊃L again, this time against our A⊃(B⊃C) assumption.

Sequent Calculus- Implication Step2

And again, on our newfound B⊃C assumption.

Sequent Calculus- Implication Step3.png

Wait! Our context by now contains A, B, and C. Each leaf of the proof tree is provable by hyp.

Sequent Calculus- Implication Step4

QED. It is instructive to compare this sequent calculus proof with the analogous natural deduction proof (which we solved together last time).

Sequent Calculus- Implication Comparing Proofs

Example 2: Distributivity

Show that (A ∨ (B ∧ C)) ⊃ ((A ∨ B)  ∧ (A ∨ C)).

The first two steps here are straightforward. Simplify the conjecture string!

Sequent Calculus- Distributivity Step0 (1)

Note that Γ = { A ∨ (B ∧ C) }. Here, we use ∨L to split this assumption into two components:

Sequent Calculus- Distributivity Step1 (2)

We now have four conjectures to prove. Fortunately, each proof has become trivial:

Sequent Calculus- Distributivity StepF

QED.

Takeaways

In this post, we introduced sequent calculus (SC) as an alternative deductive calculus. Sequent calculus makes the notion of context (assumption set) explicit, which tends to make its proofs bulkier but more linear than the natural deduction (ND) style. The two approaches share several symmetries: SC right rules correspond fairly rigidly to ND introduction rules, for example.

If you want to learn sequent calculus for yourself, I recommend solving the converse problems to the two examples above. Specifically,

  • Given (A ⊃ B) ⊃ (A ⊃ C), show that A ⊃ (B ⊃ C).
  • Given (A ∨ B) ∧ (A ∨ C), show that A ∨ (B ∧ C).

Until next time!