An Introduction To Markov Chains

Part of: Reinforcement Learning sequence
Related to: An Introduction to Linear Algebra
Content Summary: 700 words, 7 min read

Motivation

We begin with an example.  

Suppose a credit union classifies automobile loans into four categories: Paid in Full (F), Good Standing (G), Behind Schedule (B), Collections (C).

Past records indicate that each month, accounts in good standing change as follows: 10% pay the loan in full, 10% fall behind on payments, 80% remain in good standing.

Similarly, accounts behind schedule historically change each month as follows: 10% are paid in full, 40% return to good standing, 40% remain behind schedule, and 10% are sent to collections.

A Markov Chain allows us to express such situations graphically:

[Figure: Markov chain for the automobile loan example]

Loan statuses are nodes, transition probabilities are arrows.
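
For readers who like to tinker, the same transition model can be written down in a few lines of Python. The problem statement does not say what happens to accounts that are Paid in Full or sent to Collections, so treating those two statuses as absorbing (once there, an account stays there) is an assumption of this sketch:

```python
# Transition model for the loan example: P[s][s2] = probability that an
# account in status s moves to status s2 over one month.
P = {
    "F": {"F": 1.0},                                # Paid in Full (assumed absorbing)
    "G": {"F": 0.1, "G": 0.8, "B": 0.1},            # Good Standing
    "B": {"F": 0.1, "G": 0.4, "B": 0.4, "C": 0.1},  # Behind Schedule
    "C": {"C": 1.0},                                # Collections (assumed absorbing)
}

# Sanity check: each row of probabilities should sum to 1.
assert all(abs(sum(row.values()) - 1.0) < 1e-9 for row in P.values())
```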

Markov Property

Formally, a Markov Chain is a tuple (S, P):

  • S is a set of states s ∈ S
  • P is a transition model, giving the probability P(s' | s) of moving to state s' from state s.

At the core of Markov Chains is the Markov Property, which states (for time t = n):

P(s_n | s_{n-1}, s_{n-2}, \dots, s_0) = P(s_n | s_{n-1})

This is a statement of conditional independence. If I tell you the history of all prior states, and ask you to predict the next time step, you can forget everything except the present state. Informally, a complete description of the present screens off any influence of the past. Thus, the Markov Property ensures a kind of “forgetful” system.
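
One way to internalize this forgetfulness is to simulate the chain: a step function only ever needs the current state, never the history. Here is a minimal sketch, reusing the transition dictionary P from the loan example above:

```python
import random

def step(state, P):
    """Sample the next state using only the current state -- the Markov
    Property in executable form: the history is never consulted."""
    next_states = list(P[state])
    weights = [P[state][s] for s in next_states]
    return random.choices(next_states, weights=weights)[0]

# Simulate one loan account for 12 months, starting in Good Standing.
state, history = "G", ["G"]
for _ in range(12):
    state = step(state, P)
    history.append(state)
print(history)   # e.g. ['G', 'G', 'B', 'G', ..., 'F']
```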

Any model that relies on the Markov Property is a Markov Model. Markov Models represent an important pillar of the field of artificial intelligence. Three extensions of Markov Chains are particularly important:

[Figure: extensions of Markov Chains used in reinforcement learning]

State Expectations by Unweaving

Let’s imagine a Markov Chain with three states: A, B, and C.  If you begin at A, where should you expect to reside in the future?

An intuitive way to approach this question is to "unweave" the Markov Chain, as follows:

[Figure: computing expected location by unweaving the Markov chain into a tree of possible worlds]

Each branch in this tree represents a possible world. For example, at t1 there is a 20% chance the state will be A, and an 80% chance the state will be B. Computing expected locations at subsequent timesteps is then straightforward. At t2, we see that:

  • There is a (0.2)(0.2) = 4% chance of residing in A.
  • There is a (0.8)(0.2) + (0.2)(0.8) = 32% chance of residing in B.
  • There is a (0.8)(0.8) = 64% chance of residing in C.

The above computations can be expressed with a simple formula:

S_t(s) = \sum_{paths}\prod_{edges} P(s|s')

However, these computations become tedious rather quickly. Consider, for example  S3(C):

[Figure: worked example of computing S3(C) by unweaving]
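
If you want to check the arithmetic, the path-sum formula can be brute-forced in a few lines of Python. The transition probabilities below are read off the tree above: A stays at A with probability 0.2 or moves to B with probability 0.8, B stays at B with probability 0.2 or moves to C with probability 0.8, and C is absorbing.

```python
from itertools import product

# Three-state chain inferred from the unweaving tree.
P = {
    "A": {"A": 0.2, "B": 0.8},
    "B": {"B": 0.2, "C": 0.8},
    "C": {"C": 1.0},
}

def state_probability(target, t, start="A"):
    """Sum over all t-step paths from `start`, multiplying the probability of
    each edge along the way -- a brute-force version of the unweaving formula."""
    total = 0.0
    for path in product(P, repeat=t):      # every length-t sequence of states
        prob, prev = 1.0, start
        for s in path:
            prob *= P[prev].get(s, 0.0)    # 0 if no such edge exists
            prev = s
        if path[-1] == target:
            total += prob
    return total

print(state_probability("C", 3))   # 0.896
```

Even for this tiny chain, the number of paths grows exponentially with t, which is exactly why the computation becomes tedious by hand.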

State Expectations By Linear Algebra

Is there a way to simplify the maths of expectation?

Yes, by approaching  Markov Chains through the lens of linear algebra. Conditional probabilities are encoded as transition matrices, as follows:

[Figure: transition matrix encoding of the three-state chain]

This representation enables computation of expected location by matrix multiplication:

S_{t+1} = S_t * T

We compute expected locations one timestep at a time. With a base case (the initial distribution S_0) and an inductive step (multiplication by T), this process qualifies as mathematical induction.

[Figure: computing expected location via linear algebra]

As you can see, the two approaches are equivalent: S3(C) = 0.896 in both cases.
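
The same number falls out of a couple of lines of NumPy. Here is a sketch with rows and columns ordered A, B, C:

```python
import numpy as np

# T[i, j] = probability of moving from state i to state j in one timestep.
T = np.array([
    [0.2, 0.8, 0.0],   # from A
    [0.0, 0.2, 0.8],   # from B
    [0.0, 0.0, 1.0],   # from C (absorbing)
])

S0 = np.array([1.0, 0.0, 0.0])            # start in A with certainty
S3 = S0 @ np.linalg.matrix_power(T, 3)    # apply S_{t+1} = S_t * T three times
print(S3)                                 # [0.008 0.096 0.896]
```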

Steady-State Analysis

In the above example, C is called an absorbing state. As time goes to infinity, the agent becomes increasingly likely to reside in state C. That is, S_n → [0 0 1] as n → ∞. This finding generalizes: every Markov Chain in which an absorbing state is reachable from every other state converges on a distribution in the limit, its limiting distribution.

Can we discover the limiting distribution?

Yes, with the following recipe. First, convert the transition matrix into standard form. Second, apply matrix multiplication and inversion to derive the fundamental and limiting matrices. Last, use these matrices to answer real-world questions about our data:

[Figure: recipe for computing the limiting matrix]

Let me illustrate with our automobile loan example. First, we prepare our data.

[Figure: rearranging the loan transition matrix into standard form]

With T in standard form, we compute the fundamental matrix F = (I - Q)^{-1} and the limiting matrix T'.

[Figure: computing the fundamental matrix F and the limiting matrix T']

Now that we know F and T’, we are in a position to answer questions with our data.

[Figure: answering long-run questions with F and T']
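
For the curious, here is a NumPy sketch of the same recipe applied to the loan data, again treating Paid in Full and Collections as absorbing. A standard fact about absorbing chains: the row sums of F give the expected number of months until an account is absorbed, and F·R (the transient-to-absorbing block of the limiting matrix) gives the probability of ending up in each absorbing state.

```python
import numpy as np

# Transient states ordered (G, B); absorbing states ordered (F, C).
Q = np.array([[0.8, 0.1],    # from G: stay in G 0.8, fall behind 0.1
              [0.4, 0.4]])   # from B: recover to G 0.4, stay in B 0.4
R = np.array([[0.1, 0.0],    # from G: paid in full 0.1, collections 0.0
              [0.1, 0.1]])   # from B: paid in full 0.1, collections 0.1

F = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix F = (I - Q)^{-1}

print(F)              # [[7.5  1.25]
                      #  [5.   2.5 ]]
print(F.sum(axis=1))  # expected months until absorption: [8.75 7.5]
print(F @ R)          # [[0.875 0.125]
                      #  [0.75  0.25 ]]
```

In other words, a loan currently in good standing is eventually paid in full with probability 0.875 and sent to collections with probability 0.125.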

Thus, we are able to predict how Markov Chains will behave over the long run.

Takeaways

  • Markov Chains are convenient ways of expressing conditional probabilities graphically
  • But they require the Markov Property: knowledge of the present screens off any influence of the past.
  • We can compute expected locations by reasoning graphically.
  • However, it is simpler to compute expected locations by linear algebra techniques.
  • Linear algebra also enables us to discover the limiting distribution that (some) Markov Chains approach in the long run.

Further Resources

  • To gain more intuition with linear algebra, see here.
  • To see Markov Chains applied to real-world situations, see here.
  • To see steady-state computations worked out in more detail, see here.