An Introduction to Generative Syntax

Part Of: Language sequence
Content Summary: 900 words, 9 min read

Syntax vs Semantics

In language, we distinguish between syntax (structure) and semantics (meaning).

Compare the following:

  • “Colorless green ideas sleep furiously”
  • “Sleep ideas colorless green furiously”

Both sentences are nonsensical (a semantic transgression). But the first is grammatically correct, whereas the second is malformed.

The brain responds differently to errors of syntax and semantics, as measured by an EEG machine. Semantic errors produce a negative voltage after 400 milliseconds (“N400”); syntactic errors produce a positive voltage after 600 milliseconds (“P600”):

[Image: Linguistic ERPs]

Parts of Speech

To understand syntax more precisely, we must differentiate parts of speech. Consider the following categories:

  • Noun (N).  cat, book, computer, peace, …
  • Verb (V). jump, chase, eat, sleep, …
  • Adjective (A). long, purple, young, old, …
  • Determiner (D). the, this, many, all, …
  • Preposition (P). in, on, to, for, with, …

Nouns and verbs correspond to perception- and action-representations, respectively. They are an expression of the perception-action cycle. But to study syntax, it helps to put aside semantic context, and explore how parts of speech relate to one another.

Phrases as Color Patterns

To understand syntax intuitively, start by adding color to sentences.  Then try to find patterns of color unique to well-formed sentences.

Let’s get started!

[Image: Noun Phrase Abstraction]

“Noun-like” groups of words appear on either side of the verb. Let noun phrase (NP) denote such a group. Optional parts of speech are indicated by the parentheses. Thus, our grammar contains the following rules:

  1. S → NP V NP
  2. NP → (D) (A) N

These rules explain why the following sentences feel malformed:

  • “Chase dogs cats” (violates rule 1)
  • “Old some dogs chase cats” (violates rule 2)

But these rules don’t capture regularities in how verbs are expressed. Consider the following sentences:

[Image: Verb Phrase Abstraction]

A verb phrase contains a verb, optionally followed by a noun phrase and/or a preposition with its own noun phrase.

  1. S → NP VP
  2. NP → (D) (A) N
  3. VP → V (NP) (P NP)

This is better. Did you notice how we improved our sentence (S) rule? 🙂 Subject-only sentences (e.g. “She ran”) are now recognized as legal.

Prepositions are not limited to verb phrases, though. They also occur in noun phrases. Consider the following:

[Image: Prepositional Phrase Abstraction]

Prepositions are sometimes “attached to” a noun phrase. We express these as a prepositional phrase, which includes a preposition (e.g. “on”) and an optional noun phrase (e.g. “the table”).

  1. S → NP VP
  2. NP → (D) (A) N (PP)
  3. VP → V (NP) (PP)
  4. PP → P (NP)

Notice how we cleaned up the VP rule, and improved the NP rule.
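
These four rules are concrete enough to execute. Below is a minimal sketch using the NLTK library (my own illustration; the choice of NLTK and the toy lexicon are assumptions, not part of the original post). Optional constituents like (D) and (PP) are expanded into explicit alternatives:

```python
# A sketch of rules 1-4 as an executable grammar. Assumes NLTK is installed
# (pip install nltk); the lexicon is a toy illustration.
import nltk

grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> D A N | D N | A N | N | D A N PP | D N PP | A N PP | N PP
    VP -> V | V NP | V PP | V NP PP
    PP -> P NP | P
    D  -> 'the' | 'some'
    A  -> 'old'
    N  -> 'dogs' | 'cats'
    V  -> 'chase'
    P  -> 'with'
""")
parser = nltk.ChartParser(grammar)

# "Dogs chase cats" is well-formed: exactly one tree comes back.
for tree in parser.parse("dogs chase cats".split()):
    tree.pretty_print()

# "Chase dogs cats" lacks an initial noun phrase: no tree comes back.
assert not list(parser.parse("chase dogs cats".split()))
```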

Congratulations! You have discovered the core rules of English. Of course, a perfectly complete grammar must also include pronouns (e.g., “yours”), conjunctions (e.g., “and”), and interjections (e.g., “wow!”). But these are fairly straightforward extensions to the above system.

These grammatical rules should interest more than just English speakers. As we will see later, variants of these rules appear in all known human languages. This remarkable finding is known as universal grammar. Language acquisition is not about reconstructing syntax rules from scratch. Rather, it is about learning the parameters by which your particular natural language (English, Chinese, Egyptian) varies from the universal script.

From Rules to Trees

Our four rules are polymorphic: they permit more than one kind of structure. Unique rule sets are easier to analyze, so let’s translate our rules into this format:

[Image: Compressed vs Unique Ruleset]

Importantly, we can read these unique rules as directions for constructing a tree. On this view, the sentence “Dogs chase cats” becomes:

[Image: Simple Tree]

Sentences are trees. These trees are not merely used to verify grammatical correctness. They also play a role in speech production, which transforms the language of thought (Mentalese) into natural language (e.g., English). For more on this, see my discussion of the Tripartite Mind.

How can (massively parallel) conscious thought be made into (painfully serial) speech utterances? With syntax! Simply take the concepts you desire to communicate, and construct a tree based on (a common set of) syntactical rules.

[Image: Syntax tree construction]

Tree construction provides much more clarity on the phenomenon of wordplay (linguistic ambiguity). Consider the sentence “I shot a wolf in my pajamas”. Was the gun fired while you were wearing pajamas? Or was the wolf dressed in pajamas?

[Image: Multiple Interpretation Ambiguity]

Both interpretations agree on parts of speech (colors). It is the higher-order structure that admits multiple choices. In practice, semantics constrain syntax: we tend to select the interpretation that feels most intuitive.
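
We can watch this ambiguity fall out of the rules mechanically. Continuing the NLTK sketch from earlier (again my own illustration, with a hypothetical toy lexicon in which “I” is crudely listed as a noun):

```python
# Counting the parses our four rules admit for the pajama sentence.
import nltk

grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> N | D N | D N PP
    VP -> V NP | V NP PP
    PP -> P NP
    D  -> 'a' | 'my'
    N  -> 'I' | 'wolf' | 'pajamas'
    V  -> 'shot'
    P  -> 'in'
""")
trees = list(nltk.ChartParser(grammar).parse("I shot a wolf in my pajamas".split()))
print(len(trees))  # 2: the PP attaches to the VP (I wore them) or the NP (the wolf did)
```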

The Sociology of Linguistics

The above presentation uses a simple grammar, for pedagogic reasons. I will at some point explain the popular X’ theory (pronounced “X bar”), which explores similarities between different phrase structures (e.g., NP vs PP). Indeed, there is a wide swathe of possible grammars that we will explore.

[Image: Sociology of Linguistic Research]

Generative grammar is part of the Symbolist tribe of machine learning. As such, this field has rich connections with algebra, production systems, and logic. For example, propositional logic was designed as the logic of sentences; predicate logic is the logic of phrases.

Other tribes besides the Symbolists care about language and grammar, of course. Natural Language Processing (NLP) and computational linguistics have been heavily influenced by the Bayesian tribe, and use probabilistic grammars (e.g., PCFGs).

More recently, the Connectionist tribe (and its deep learning technologies) is taking a swing at producing language. In fact, I suspect neural network interpretability will only be achieved once a Connectionist account of language production has matured.

Takeaways

  • Language can be understood via syntax (structure) and semantics (meaning).
  • Syntax requires delineating parts of speech (e.g., nouns vs verbs).
  • Parts of speech occur in patterns called phrases. We can express these patterns as the rules of syntax.
  • Sentences are trees. Syntax rules are instructions for tree construction.
  • Sentence-trees provide insight into problems like sentence ambiguity.

For more resources on syntax trees, I recommend this lecture, this website, and this Youtube channel.

Until next time.


Logic Inference: Sequent Calculus

Part Of: Logic sequence
Followup To: Natural Deduction
Content Summary: 600 words, 6 min read

Motivating Sequent Calculus

Last time, we labelled propositions in the language of verification.

  • ↑ represents conjecture: propositions that require verification
  • ↓ represents assumption: propositions that can be used for verification.

Two of our connective rules (⊃I and ∨E) expanded our set of assumptions, which we could use at any later time. Logical acumen lies in invoking the right assumption at the right time.

In contrast to natural deduction, sequent calculus explicitly tracks the set of assumptions as it varies across different branches of the proof tree.

We will use the turnstile to distinguish assumptions from conjecture: { assumptions } ⊢ { conjectures }

In natural deduction, progress is bidirectional: we are done when we have found a connection between assumptions and conjecture. In sequent calculus, progress is unidirectional: we start with no assumptions, and finish when we have no conjectures left to demonstrate.

[Image: Sequent Calculus vs Natural Deduction Schematics]

Both logical systems rely on two sets of five rules. They bear the following relationships:

  • R = I. Right rules are very similar to Introduction rules.
  • L = E⁻¹. Left rules are Elimination rules turned “upside down”.

Right and Left rules

We here define capital gamma Γ to represent the context, or current set of assumptions.

Right rules simply preface Introduction rules with “Γ ⊢”. The exception, ⊃R, is instructive: there, A is added to the context, and our “target” conjecture shrinks to just B.
[Image: Right vs Introduction Rules]

Left rules are less transparently related to Elimination. They are more easily understood by an English explanation:

[Image: Left Rule English Interpretation]

The entire structure of sequent calculus, then, looks like this:

[Image: Left and Right Rules]

Enough theory! Let’s use sequent calculus to prove stuff.

Example 1: Implication

Show that (A ⊃ (B ⊃ C)) ⊃ ((A ⊃ B) ⊃ (A ⊃ C)).

Here, ⊃R serves us well:

[Image: Implication Step 0]

We have parsed the jungle of connectives, and arrived at a clear goal. We need to prove C. How?

Recall what ⊃L means: “if you have assumed A ⊃ B, you may also assume B (right branch) if you can prove A with your current assumptions (left branch).”

Let’s apply ⊃L to the A ⊃ B proposition sitting in our context. To save space, let us define Γ as the three assumptions { A⊃(B⊃C), A⊃B, A }.

[Image: Implication Step 1]

We can solve the left branch immediately. Since A ∈ Γ, we can invoke the hyp rule.

Unfortunately, assuming B is not enough to prove C. We must invoke ⊃L again, this time against our A⊃(B⊃C) assumption.

[Image: Implication Step 2]

And again, on our newfound B⊃C assumption.

[Image: Implication Step 3]

Wait! By now our context contains A, B, and C. Each leaf of the proof tree is provable by hyp.

[Image: Implication Step 4]

QED. It is instructive to compare this sequent calculus proof with the analogous natural deduction (which we solved together, last time).

[Image: Comparing the Sequent Calculus and Natural Deduction Proofs]
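
If you would like to check such proofs mechanically, here is a toy prover for the implicational fragment (my own sketch, not part of the original post). Propositions are strings (atoms) or ("imp", A, B) pairs; hyp, ⊃R, and ⊃L become a recursive search:

```python
# A toy sequent-calculus prover for the ⊃ fragment (a sketch, with one
# simplification: each ⊃L consumes its assumption, which keeps the search
# terminating and suffices for this post's examples).

def provable(ctx, goal):
    """Is the sequent ctx ⊢ goal derivable via hyp, ⊃R, and ⊃L?"""
    if goal in ctx:                    # hyp: the goal is already assumed
        return True
    if isinstance(goal, tuple):        # ⊃R: to prove A ⊃ B,
        _, a, b = goal                 #     assume A and prove B
        return provable(ctx | {a}, b)
    for p in ctx:                      # ⊃L: from an assumed A ⊃ B, prove A
        if isinstance(p, tuple):       #     (left branch), then assume B
            _, a, b = p                #     (right branch)
            rest = ctx - {p}
            if provable(rest, a) and provable(rest | {b}, goal):
                return True
    return False

imp = lambda a, b: ("imp", a, b)
goal = imp(imp("A", imp("B", "C")), imp(imp("A", "B"), imp("A", "C")))
print(provable(frozenset(), goal))  # True: Example 1 checks out
```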

Example 2: Distributivity

Show that (A ∨ (B ∧ C)) ⊃ ((A ∨ B)  ∧ (A ∨ C)).

The first two steps here are straightforward. Simplify the conjecture string!

[Image: Distributivity Step 0]

Note that Γ = { A ∨ (B ∧ C) }. Here, we use ∨L to split this assumption into two components:

[Image: Distributivity Step 1]

We now have four conjectures to prove. Fortunately, each proof has become trivial:

[Image: Distributivity Final Step]

QED.

Takeaways

In this post, we introduced sequent calculus (SC) as an alternative deductive calculus. Sequent calculus makes the notion of context (assumption set) explicit, which tends to make its proofs bulkier but more linear than the natural deduction (ND) style. The two approaches share several symmetries: SC right rules correspond fairly rigidly to ND introduction rules, for example.

If you want to learn sequent calculus for yourself, I recommend solving the converse problems to the two examples above. Specifically,

  • Given (A ⊃ B) ⊃ (A ⊃ C), show that A ⊃ (B ⊃ C).
  • Given (A ∨ B) ∧ (A ∨ C), show that A ∨ (B ∧ C).

Until next time!


Logic Inference: Natural Deduction

Part Of: Logic sequence
Content Summary: 500 words, 5 min read

Introduction

Logical systems like IPL have the following ingredients:

  • A proposition is an atomic statement that can acquire a truth value.
  • A connective takes propositions and melds them into a composite.

We can label propositions in the language of verification.

  • ↑ represents conjecture: propositions that require verification
  • ↓ represents knowledge: propositions that can be used for verification.

Introduction and elimination rules can be expressed in this language:

[Image: Original Introduction and Elimination Rules]

Elimination rules tend to “point down”; introduction rules point up. Roughly, deduction involves applying such rules until the paths meet:

[Image: IPL Inference Schematic]

Enough theory! Let’s see how this works in practice.

Exercise One: Implication Exploration

Given A ⊃ (B ⊃ C), show that (A ⊃ B) ⊃ (A ⊃ C).

We can visualize the challenge as follows. The red line indicates common knowledge.

[Image: Implication Exploration Step 0]

First, let’s apply elimination on the premises:

[Image: Implication Exploration Step 1]

Next, let’s apply introduction on the conclusion:

[Image: Implication Exploration Step 2]

Are we done? No: we have not verified A↑ and B↑. If we had, they would have a red line over them.

To finish the proof, we need to invoke our introduction-rule assumptions.

[Image: Implication Exploration Step 3]

Proving A↑ is trivial. Proving B↑ requires combining assumptions via elimination.

[Image: Implication Exploration Step 4]

Done. 🙂 Good work!

Exercise for the Reader

Prove the converse. Given (A ⊃ B) ⊃ (A ⊃ C), show that A ⊃ (B ⊃ C).

Example 2: Distributivity

In arithmetic, distributivity refers to how addition and multiplication interleave with one another. It requires that a * (b + c) = (a * b) + (a * c). For example:

  • 2 * (4+5) = 2 * 9 = 18
  • (2*4) + (2*5) = 8 + 10 = 18

Are logical conjunction and disjunction distributive? Let’s find out!

[Image: Distributivity Exploration Step 0]

First, let’s introduce conjunction on the conclusion.

[Image: Distributivity Exploration Step 1]

Here we reach an impasse. We need to apply disjunction elimination to the premise. But what should we choose for C?

Let’s set C = A ∨ B.

[Image: Distributivity Exploration Step 2]

Filling in the gaps is straightforward. On the right, we eliminate conjunction and retain B. Then we introduce disjunction on both sides.

[Image: Distributivity Exploration Step 3]

Here is where I originally got stuck. How can we use disjunction elimination?

The way forward becomes easier to grasp when you remember:

  • We can use knowledge as many times as we like.
  • The symbols in the rule schematics are arbitrary.

Let’s set the arbitrary elimination symbol “C” equal to A ∨ C:

[Image: Distributivity Exploration Step 4]

From here, the solution is straightforward.

[Image: Distributivity Exploration Step 5]
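
As a quick sanity check (my own aside, not in the original post), we can confirm the classical version of this entailment by brute force. A truth table verifies classical validity; it does not replace the constructive proof above:

```python
# Brute-force all valuations: A ∨ (B ∧ C) classically implies (A ∨ B) ∧ (A ∨ C).
from itertools import product

for a, b, c in product([False, True], repeat=3):
    premise = a or (b and c)
    conclusion = (a or b) and (a or c)
    assert (not premise) or conclusion
print("valid on all 8 valuations")
```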

Exercise for the Reader

Prove the converse. Given (A ∨ B) ∧ (A ∨ C), show that A ∨ (B ∧ C).

Takeaways

In this post, we saw worked examples of deduction. Specifically:

  • Given A ⊃ (B ⊃ C), show that (A ⊃ B) ⊃ (A ⊃ C).
  • Given A ∨ (B ∧ C), show that (A ∨ B) ∧ (A ∨ C).

The best way to learn is practice. For the interested reader, I recommend these exercises:

  • Given (A ⊃ B) ⊃ (A ⊃ C), show that A ⊃ (B ⊃ C).
  • Given (A ∨ B) ∧ (A ∨ C), show that A ∨ (B ∧ C).

In the latter exercise, you must also “get creative” with disjunction elimination. Instead of leaving the rule’s conclusion C arbitrary, you must set it to a useful value.

… still stuck? Okay, see solution here. 🙂

Until next time.


Five Tribes of Machine Learning

Part Of: Machine Learning sequence
Content Summary: 900 words, 9 min read

ML is tribal, not monolithic

Research in artificial intelligence (AI) and machine learning (ML) has been going on for decades. Indeed, the textbook Artificial Intelligence: A Modern Approach reveals a dizzying variety of learning algorithms and inference schemes. How can we make sense of all the technologies on offer?

As argued in Domingos’ book The Master Algorithm, the discipline is not monolithic. Instead, five tribes have progressed relatively independently. What are these tribes?

  1. Symbolists use formal systems. They are influenced by computer science, linguistics, and analytic philosophy.
  2. Connectionists use neural networks. They are influenced by neuroscience.
  3. Bayesians use probabilistic inference. They are influenced by statistics.
  4. Evolutionaries are interested in evolving structure. They are influenced by biology.
  5. Analogizers are interested in mapping to new situations. They are influenced by psychology.

Expert readers may better recognize these tribes by their signature technologies:

  • Symbolists use decision trees, production rule systems, and inductive logic programming.
  • Connectionists rely on deep learning technologies, including RNN, CNN, and deep reinforcement learning.
  • Bayesians use Hidden Markov Models, graphical models, and causal inference.
  • Evolutionaries use genetic algorithms, evolutionary programming, and evolutionary game theory.
  • Analogizers use k-nearest neighbor, and support vector machines.

[Image: Strengths and Technologies of the Five Tribes]

In fact, my blog can be meaningfully organized under this research landscape.

History of Influence

Here are some historical highlights in the development of artificial intelligence.

Symbolist highlights:

  • 1950: Alan Turing proposes the Turing Test in Computing Machinery & Intelligence.
  • 1974-80: Frame problem & combinatorial explosion caused First AI Winter.
  • 1980: Expert systems & production rules re-animate the field. 
  • 1987-93: Expert systems too fragile & expensive, causing the Second AI Winter.
  • 1997: Deep Blue defeated reigning chess world champion Garry Kasparov.

Connectionist highlights:

  • 1957: Perceptron invented by Frank Rosenblatt.
  • 1969: Minsky and Papert publish the book Perceptrons, criticizing single-layer perceptrons. This puts the entire field to sleep, until…
  • 1986: Backpropagation invented, and connectionist research restarts.
  • 2006: Hinton et al publish A fast learning algorithm for deep belief nets, which rejuvenates interest in Deep Learning.
  • 2017: AlphaGo defeats reigning Go world champion, using DRL.

Bayesian highlights:

  • 1953: Markov chain Monte Carlo (MCMC) invented. Bayesian inference finally becomes tractable on real problems.
  • 1968: Hidden Markov Model (HMM) invented.
  • 1988: Judea Pearl authors Probabilistic Reasoning in Intelligent Systems, and creates the discipline of probabilistic graphical models (PGMs).
  • 2000: Judea Pearl authors Causality: Models, Reasoning, and Inference, and creates the discipline of causal inference on PGMs.

Evolutionary highlights:

  • 1975: Holland invents genetic algorithms.

Analogizer highlights:

  • 1968: k-nearest neighbor algorithm increases in popularity.
  • 1979: Douglas Hofstadter publishes Gödel, Escher, Bach.
  • 1992: support vector machines (SVMs) invented.

We can summarize this information visually, by creating an AI version of the Histomap:

[Image: Historical Size and Competition of the Five Tribes]

These data are my own impression of AI history. It would be interesting to replace it with real funding & paper volume data.

Efforts Towards Unification

Will there be more or fewer tribes, twenty years from now? And which sociological outcome is best for AI research overall?

Theory pluralism and cognitive diversity are underappreciated assets to the sciences. But scientific progress is often heralded by unification. Unification comes in two flavors:

  • Reduction: identifying isomorphisms between competing languages,
  • Generalization: creating a supertheory that yields its antecedent theories as special cases.

Perhaps AI progress will mirror revolutions in physics, like when Maxwell unified theories of electricity and magnetism.

Symbolists, Connectionists, and Bayesians suffer from a lack of stability, generality, and creativity, respectively. But one tribe’s weakness is another tribe’s strength. This is a big reason why unification seems worthwhile.

What’s more, our tribes possess “killer apps” that other tribes would benefit from. For example, only Bayesians are able to do causal inference. Learning causal relations in logical structures, or in neural networks, is an important unsolved problem. Similarly, only Connectionists are able to explain modularity (function localization). The Symbolist and Bayesian tribes are more normative than Connectionism, which makes their technologies tend towards (overly?) universal mechanisms.

Symbolic vs Subsymbolic

You’ve heard of the symbolic-subsymbolic debate? It is about reconciling Symbolist and Connectionist interpretations of neuroscience. Some (e.g., [M01]) claim that both theories might be correct, but at different levels of abstraction. Marr [M82] once outlined a hierarchy of explanation, as follows:

  • Computational: what is the structure of the task, and viable solutions?
  • Algorithmic: what procedure should be carried out, in producing a solution?
  • Implementation: what biological mechanism in the brain performs the work?

One theory, supported by [FP88], is that Symbolist architectures (e.g., ACT-R) may be valid explanations, but ones somehow “carried out” by Connectionist algorithms & representations.

[Image: Tribes vs Levels of Analysis]

I have put forward my own theory: that Symbolist representations are properties of the Algorithmic Mind, whereas Connectionism is more relevant to the Autonomic Mind.

This distinction may help us make sense of why [D15] proposes Markov Logic Networks (MLNs) as a bridge between Symbolist logic and Bayesian graphical models. He is seeking to generalize these technologies into a single construct, in the hope that he can later find a reduction of MLNs in the Connectionist paradigm. Time will tell.

Takeaways

Today we discussed five tribes within ML research: Symbolists, Connectionists, Bayesians, Evolutionaries, and Analogizers. Each tribe has different strengths, technologies, and developmental trajectories. These categories help to parse technical disputes, and locate promising research vectors.

The most significant problem facing ML research today is, how do we unify these tribes?

References

  • [D15] Domingos (2015). The Master Algorithm.
  • [M01] Marcus (2001). The Algebraic Mind.
  • [M82] Marr (1982). Vision.
  • [FP88] Fodor & Pylyshyn (1988). Connectionism and cognitive architecture: A critical analysis.

Logic Design: Harmony in IPL

Followup To: Logic Structure: Connectives in IPL
Part Of: Logic sequence
Content Summary: 300 words, 3 min read

Motivations

Last time, we looked at Intuitionistic Propositional Logic (IPL). In IPL, there are five connectives, and hence five introduction-elimination pairs:

[Image: All IPL Rules]

What if you had to design a new logic from scratch? Suppose you were to invent five new connective symbols. Would you start by defining their introduction rules, and use those to infer the eliminations? Or would you instead define the eliminations first?

This choice reflects different ways to interpret the semantics of logic:

  • The verificationist starts with introduction. For them, the meaning of a connective lies in its constructors (introduction rules).
  • The pragmatist starts with elimination. For them, the meaning of a proposition is how you use it.

But if introduction and elimination rules agree, then a logical system has harmony.

How do we evaluate harmony in practice? Harmony is defined as two properties:

  • Local soundness: if I introduce and then eliminate a connective, do I gain information? If so, the elimination rule is too strong.
  • Local completeness: if I eliminate and then re-introduce a connective, do I lose information? If so, the elimination rule is too weak.

Demonstrating Harmony in IPL

We can show that conjunction rules exhibit harmony.

[Image: Harmony of the Conjunction Connective]

Note that we have only shown soundness for left-elimination. But demonstrating soundness for right-elimination is highly analogous.
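
If it helps to see this computationally (my own aside, previewing the correspondence mentioned at the end of this post): conjunction behaves like a pair type, introduction builds the pair, and elimination projects a component.

```python
# ∧I builds a pair; ∧E projects out of it.
pair = ("A", "B")                  # introduce, then eliminate:
assert pair[0] == "A"              # soundness: projection yields nothing new
assert (pair[0], pair[1]) == pair  # completeness: projections rebuild the pair
```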

Implication rules also exhibit harmony.

[Image: Harmony of the Implication Connective]

So does disjunction.

[Image: Harmony of the Disjunction Connective]

It is trivial to demonstrate the harmony of truth and falsity. Thus, we can say that IPL, as a formal system, has harmony.

Takeaways

In this article, we have discussed harmony, which helps us evaluate how useful a given formal system is. This notion may seem straightforward in IPL; however, it will prove useful in designing new logics, such as linear logic.

Another, more subtle point: the soundness demonstrations also seem to reflect a logic of simplification. This point will return when we discuss the Curry-Howard-Lambek correspondence, and the deep symmetries between logic and computation.
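
As a teaser (again my own illustration, under that correspondence): local soundness for ⊃ is exactly β-reduction, and local completeness is η-expansion.

```python
# ⊃I builds a function; ⊃E applies it.
detour = (lambda x: x + 1)(41)  # introduce then eliminate: a β-redex
assert detour == 41 + 1         # soundness: the detour simplifies away

f = lambda y: y * 2
eta = lambda y: f(y)            # eliminate then re-introduce: η-expansion
assert eta(21) == f(21)         # completeness: no information is lost
```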

Until next time.

Constraint Satisfiability: Zebra Puzzle

Part Of: Logic sequence
Content Summary: 1000 words, 10 min read.

Today, we look at the Zebra Puzzle (aka Einstein Puzzle). According to legend, Albert Einstein invented this as a child, and claimed that 98% of the human population cannot solve it.

Let’s see if we are in the 2%.

The Puzzle

Five men of different nationalities and with different jobs live in consecutive houses on a street. These houses are painted different colors. The men have different pets and have different favorite drinks. The following rules are provided:

  1. The English man lives in a red house
  2. The Spaniard owns a dog
  3. The Japanese man is a painter
  4. The Italian drinks tea
  5. The Norwegian lives in the first house on the left
  6. The green house is immediately to the right of the white one
  7. The photographer breeds snails
  8. The diplomat lives in the yellow house
  9. Milk is drunk in the middle house
  10. The owner of the green house drinks coffee
  11. The Norwegian’s house is next to the blue one
  12. The violinist drinks orange juice
  13. The fox is in a house that is next to that of the physician
  14. The horse is in a house next to that of the diplomat

Who owns a zebra? And whose favorite drink is mineral water?

To answer this problem, we must learn 5 house-nation-color-drink-pet-job combinations. A solution might look like this:

  • Yellow far-left house has Norwegian diplomat who drinks water and owns a fox
  • White left house has Italian photographer who drinks tea and owns a zebra.
  • Red middle house has English photographer who drinks milk and owns snails.
  • Green right house has Spanish physician who drinks OJ and owns a dog
  • Blue far-right house has Japanese painter who drinks coffee and owns a horse.

But this solution is incorrect: it violates Rule 6: “The green house immediately to the right of the white one.”

How do we find a solution that doesn’t violate any of our constraints? Does one even exist? Or is this set of constraints not satisfiable?

Formalizing Logical Structure

Words are distracting. Let’s use symbols instead.

[Image: Symbol Code]

With this code, we can write the above solution as a matrix.

[Image: Solution Matrix]

We can also formalize our constraints.

[Image: Constraint Formalization]

These constraints are ugly. Let’s write them in matrix form instead!

[Image: Constraint Matrix]

Constraint Satisfaction as a Jigsaw Puzzle

We can use the above constraints to visually check satisfiability. Whereas before you had to parse the meaning of Rule 6 verbally, now you can just inspect whether there is a visual match between rule and solution.

[Image: Visual Satisfiability Check]

One way to determine satisfiability is to perform these checks until you find a viable solution. But this is computationally expensive: with five attribute groups of five values each, there are (5!)^5 = 120^5 ≈ 25 billion candidate solutions. Instead of inspecting every possible solution, why don’t we generate one?

How? Since our rules are already used for solution-checking, why can’t we also use them for solution-building?

On this view, solution building takes on the flavor of a jigsaw puzzle. Each constraint is a puzzle piece; from these pieces, we construct the solution.

[Image: Constraints as jigsaw pieces]

Unfortunately, there is more than one way to solve a 5×5 jigsaw puzzle. Let me show you one way to solve this one. We will use choice minimization to simplify our lives: always try to play the move with the fewest degrees of freedom.

Solution: Path A

Rules 5 and 9 relate to the houses, so they are easy to apply.

After these, the Rule 11 puzzle piece fits unambiguously.

[Image: Path A, opening moves]

Let’s apply Rule 6 next. That jigsaw piece can fit in two locations: the M+R columns, or the R+FR columns. Gotta choose one: let’s select the former. After that move, Rule 10 fits unambiguously.

The FR column is the only place that has an unclaimed nation and color: Rule 1 must go there. Similarly, the FL column is the only available spot for Rule 8.

[Image: Path A, step 1]

Here we can apply Rule 14 (the original clue’s wording “The horse is in a house next to that of the diplomat” means that the puzzle piece can be flipped horizontally).

After that, only column L can accommodate Rule 4. Then FR must accept Rule 12. 

[Image: Path A, step 2]

Disaster! Consider Rules 2, 3, and 7. These rules are mutually exclusive (they share at least one row with one another), and have overlapping domains (none of them can fit in FL, so all must fit in either M or R).

[Image: Path A paradox]

This is the pigeonhole principle: just as three pigeons cannot fit into two holes, these three rules cannot fit into two columns. There is no way to reach a solution from here.

Does that mean the puzzle is unsolvable? No, it means we explore other choices.

Solution: Path B

Let’s return to the other possible placement of Rule 6. Instead of putting it in M+R columns, we’ll put it in R+FR. Then, Rules 10, 1, 8, and 14 follow inevitably (each has precisely one choice).

[Image: Path B, step 1]

Here we face another choice: do we put puzzle piece 4 in the left or right house? Let’s choose the right house. Then, Rule 12 and Rule 3 follow logically.

[Image: Path B, step 2]

Alas! Another disaster. Rule 2 doesn’t fit. 😦

Solution: Path C

Retrace our steps! The last choice we made was Place(4, R). What if we place it in the left house instead?

[Image: Path C, step 1]

To our delight, this path works: we have found the only correct logical journey through our puzzle. The concluding steps are given below, and the desired quantities are shown in the “missing” tiles.

[Image: Path C, step 2]

Recall the original questions:

Who owns a zebra (P5)? Whose favorite drink is mineral water (D5)?

Our symbol table can translate our answer:

The Japanese man (N3) owns the zebra, and the Norwegian (N5) drinks mineral water.

Implications

The above solution is nothing more than solving a 5×5 jigsaw puzzle. I suspect this technique will only become clear with practice. Go solve Einstein’s Riddle on your own, or one of these variants!

For the solution above, it is helpful to review our search history. Remarkably, we only faced two choices in our solution. When one branch failed, we turned our attention to the other branch. This search strategy is known as backtracking, and it will be the subject of another blog post.

[Image: Search History]

Many programming solutions exist for these kinds of problems. In practice, libraries can be used to write more concise solvers.
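
For instance, here is a minimal hand-rolled sketch (my own illustration; a real CP/SAT library would be far more concise). Houses are numbered 0 (far left) through 4 (far right), each attribute group is a permutation of house indices, and each rule becomes a pruning test:

```python
# Brute force with pruning: nested permutations, one per attribute group.
from itertools import permutations

houses = range(5)

def next_to(a, b):
    return abs(a - b) == 1

for red, green, white, yellow, blue in permutations(houses):
    if green != white + 1: continue                    # rule 6
    for english, spaniard, japanese, italian, norwegian in permutations(houses):
        if english != red: continue                    # rule 1
        if norwegian != 0: continue                    # rule 5
        if not next_to(norwegian, blue): continue      # rule 11
        for tea, milk, coffee, juice, water in permutations(houses):
            if italian != tea: continue                # rule 4
            if milk != 2: continue                     # rule 9
            if coffee != green: continue               # rule 10
            for painter, photographer, diplomat, violinist, physician in permutations(houses):
                if japanese != painter: continue       # rule 3
                if diplomat != yellow: continue        # rule 8
                if violinist != juice: continue        # rule 12
                for dog, snails, fox, horse, zebra in permutations(houses):
                    if spaniard != dog: continue               # rule 2
                    if photographer != snails: continue        # rule 7
                    if not next_to(fox, physician): continue   # rule 13
                    if not next_to(horse, diplomat): continue  # rule 14
                    print("zebra owner in house", zebra)
                    print("water drinker in house", water)
```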

This kind of problem is called propositional satisfiability (SAT), or constraint programming (CP), although these two disciplines differ in subtle ways.

As we will see next time, SAT problems are at the root of complexity theory and artificial intelligence. Until then.