**Part Of**: Algebra sequence

**Followup To**: An Introduction to Linear Algebra

**Next Up**: Singular Value Decomposition

**Content Summary**: 1300 words, 13 min read

Geometries of Eigenvectors

Matrices are functions that act on vectors, mapping input vectors to output vectors. Consider two examples:

- Reflection matrices, which reflect vectors across some line or plane.
- Rotation matrices, which rotate vectors by some angle $\theta$.

The set of **eigenvectors** of a matrix is a special set of input vectors for which the matrix behaves as a scaling transformation. In other words, we desire the set of vectors whose output vectors differ from their inputs by only a scaling factor.

Eigenvectors have a straightforward geometric interpretation:

- Reflection eigenvectors are orthogonal or parallel to the reflecting surface. In the left image above, that is the top two pairs of vectors.
- Rotation eigenvectors do not exist (more formally, they cannot be visualized in $\mathbb{R}^2$).
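These geometric claims are easy to check numerically. A minimal NumPy sketch (the specific reflection and rotation matrices below are illustrative choices, not taken from the text):

```python
import numpy as np

# Reflection across the x-axis (an illustrative reflecting surface):
# it flips the y-component of every vector.
reflect = np.array([[1.0, 0.0],
                    [0.0, -1.0]])

# Rotation by 90 degrees.
rotate = np.array([[0.0, -1.0],
                   [1.0, 0.0]])

# Reflection eigenvectors are real: (1, 0) lies parallel to the surface
# (eigenvalue +1) and (0, 1) lies orthogonal to it (eigenvalue -1).
reflect_vals, reflect_vecs = np.linalg.eig(reflect)
print(np.sort(reflect_vals.real))   # [-1.  1.]

# Rotation eigenvalues are complex, so no real eigenvector can be drawn.
rotate_vals = np.linalg.eigvals(rotate)
print(np.iscomplex(rotate_vals))    # [ True  True]
```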

Algebra of Eigenvectors

We can express our “parallel output” property as:

$$A x = \lambda x$$

Thus $Ax$ and $x$ point in the same direction, but differ by scaling factor $\lambda$.

Scaling factor $\lambda$ is the **eigenvalue**. There can be many $(\lambda, x)$ pairs that satisfy the above equality.

For an $n \times n$ matrix, there are $n$ eigenvalues. These eigenvalues can be difficult to find. However, two facts aid our search:

- The sum of eigenvalues equals the trace (sum of values along the diagonal).
- The product of eigenvalues equals the determinant.

To solve, subtract $\lambda x$ from both sides:

$$(A - \lambda I)x = 0$$

We would like to identify $n$ unique eigenvectors. But if the new matrix $(A - \lambda I)$ has a trivial nullspace (containing only the zero vector), it contains no eigenvectors. So we desire this new matrix to be singular.

How to accomplish this? By finding eigenvalues $\lambda$ that satisfy the **characteristic equation** $\det(A - \lambda I) = 0$. Matrices are singular iff their determinants equal zero.

Let’s work through an example! What is the **eigendecomposition** for matrix $A$:

$$A = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix}$$

We need to find eigenvalues $\lambda$ that solve the characteristic equation:

$$\det(A - \lambda I) = \begin{vmatrix} 3-\lambda & 1 \\ 1 & 3-\lambda \end{vmatrix} = (3-\lambda)^2 - 1 = \lambda^2 - 6\lambda + 8 = 0$$

$$\lambda_1 = 4, \qquad \lambda_2 = 2$$

Are these eigenvalues correct? Let’s check our work:

$$\lambda_1 + \lambda_2 = 6 = \operatorname{tr}(A), \qquad \lambda_1 \lambda_2 = 8 = \det(A)$$

How to find our eigenvectors? By solving for the nullspace given each eigenvalue.

For $\lambda_1 = 4$:

$$(A - 4I)x_1 = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} x_1 = 0 \implies x_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

For $\lambda_2 = 2$:

$$(A - 2I)x_2 = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} x_2 = 0 \implies x_2 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$
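Hand computations like this are easy to verify numerically. A minimal NumPy sketch, using the symmetric matrix $\begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix}$ as the example:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# The eigenvalues are 4 and 2 (returned in no guaranteed order).
print(np.sort(eigenvalues))   # [2. 4.]

# Check: sum of eigenvalues equals the trace, product equals the determinant.
assert np.isclose(eigenvalues.sum(), np.trace(A))
assert np.isclose(eigenvalues.prod(), np.linalg.det(A))

# Each column of `eigenvectors` satisfies A x = lambda x.
for lam, x in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ x, lam * x)
```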

Desirable Matrix Properties

The above example was fairly straightforward. But eigendecomposition can “go awry”, as we shall see. Consider a rotation matrix, which in two dimensions has the following form:

$$R = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

What are the eigenvalues for rotation $\theta = 90°$?

$$R = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, \qquad \det(R - \lambda I) = \lambda^2 + 1 = 0 \implies \lambda_1 = i, \quad \lambda_2 = -i$$

We can check our work:

$$\lambda_1 + \lambda_2 = 0 = \operatorname{tr}(R), \qquad \lambda_1 \lambda_2 = 1 = \det(R)$$

We saw earlier that rotation eigenvectors have no geometric interpretation. Here, we have shown algebraically why: the eigenvalues are complex.

$A$ has real eigenvalues, but $R$ has less-desirable complex eigenvalues.
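This is easy to confirm with NumPy (a sketch; `eigvals` returns the eigenvalues without computing eigenvectors):

```python
import numpy as np

# A 90-degree rotation matrix.
R = np.array([[0.0, -1.0],
              [1.0, 0.0]])

eigenvalues = np.linalg.eigvals(R)

# Sorted by imaginary part, the eigenvalues are the complex pair -i, +i.
assert np.allclose(np.sort_complex(eigenvalues), [-1j, 1j])

# The trace and determinant checks still hold over the complex numbers:
# i + (-i) = 0 = tr(R), and i * (-i) = 1 = det(R).
assert np.isclose(eigenvalues.sum(), np.trace(R))
assert np.isclose(eigenvalues.prod(), np.linalg.det(R))
```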

We can generalize the distinction between $A$ and $R$ as follows:

**Spectral Theorem**. Any matrix that is symmetric ($A = A^{T}$) is guaranteed to have real eigenvalues. The corresponding $n$ eigenvectors are guaranteed to be orthogonal.

In other words, eigendecomposition works best against symmetric matrices.

Diagonalization

Let us place each eigenvector in the columns of a matrix $S$. What happens when we multiply the original matrix $A$ by this new matrix? Since $S$ contains eigenvectors, multiplication by $A$ reduces to multiplication by the associated eigenvalues:

$$AS = A \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} = \begin{bmatrix} \lambda_1 x_1 & \cdots & \lambda_n x_n \end{bmatrix}$$

We see the product contains a mixture of eigenvalues and eigenvectors. We can separate these by “pulling out” the eigenvalues into a diagonal matrix. Call this matrix $\Lambda$ (“capital lambda”):

$$\begin{bmatrix} \lambda_1 x_1 & \cdots & \lambda_n x_n \end{bmatrix} = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix} \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{bmatrix} = S \Lambda$$

Most matrices have the property that their eigenvectors are linearly independent. For such matrices, $S$ is invertible. Given this fact, we can solve for $A$:

$$AS = S \Lambda \implies A = S \Lambda S^{-1}$$

Matrices that can be factorized in this way are said to be **diagonalizable**. We can see that both elimination and eigendecomposition are performing the same type of work: factorizing matrices into their component parts.

If $A$ is symmetric, then we know $\Lambda$ is real, and the eigenvectors in $S$ are orthogonal (and can be normalized to unit length). Let us rename $S$ to $Q$, to reflect this additional property. But orthogonal matrices have the property that transposition equals inversion: $Q^{T} = Q^{-1}$. Thus, if $A$ is symmetric, we can simplify the diagonalization formula to:

$$A = Q \Lambda Q^{T}$$
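A sketch of both factorizations in NumPy, using a symmetric example matrix so that the spectral theorem applies:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])   # symmetric, so the spectral theorem applies

eigenvalues, S = np.linalg.eig(A)
Lam = np.diag(eigenvalues)

# General diagonalization: A = S Lam S^{-1}.
assert np.allclose(A, S @ Lam @ np.linalg.inv(S))

# Symmetric case: the eigenvectors are orthonormal, so S is an orthogonal
# matrix Q, and the inverse is just the transpose: A = Q Lam Q^T.
Q = S
assert np.allclose(Q.T @ Q, np.eye(2))   # transposition equals inversion
assert np.allclose(A, Q @ Lam @ Q.T)
```

For symmetric matrices, NumPy also offers the specialized routine `np.linalg.eigh`, which guarantees real eigenvalues and orthonormal eigenvectors.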

Asymptotic Interpretations

This diagonalization approach illustrates an important use case of eigenvectors: matrix powers. What happens when $A$ is applied arbitrarily many times? What does the output look like in the limit?

We can use the diagonalization equation to represent $A^k$:

$$A^k = (S \Lambda S^{-1})(S \Lambda S^{-1}) \cdots (S \Lambda S^{-1})$$

We can simplify by canceling the inner terms $S^{-1} S = I$:

$$A^k = S \Lambda^k S^{-1}$$

This equation tells us that the eigenvector matrix $S$ is invariant to how many times $A$ is applied. In contrast, the eigenvalue matrix $\Lambda^k$ has important implications for ongoing processes:

- If each eigenvalue has magnitude less than one, the output will trend towards zero.
- If each eigenvalue has magnitude greater than one, the output will trend to infinity.
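A sketch of this power behavior in NumPy; the matrix below is an illustrative choice with eigenvalues 0.9 and 0.5, both of magnitude less than one:

```python
import numpy as np

# Symmetric matrix with eigenvalues 0.9 and 0.5 (both |lambda| < 1).
A = np.array([[0.7, 0.2],
              [0.2, 0.7]])

eigenvalues, S = np.linalg.eig(A)
k = 50

# A^k two ways: repeated multiplication vs. the S Lam^k S^{-1} formula.
direct = np.linalg.matrix_power(A, k)
via_eig = S @ np.diag(eigenvalues ** k) @ np.linalg.inv(S)
assert np.allclose(direct, via_eig)

# Since every eigenvalue magnitude is below one, A^k trends toward zero.
print(np.abs(direct).max() < 1e-2)   # True
```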

Fibonacci Eigenvalues

The powers interpretation of eigenvalues sheds light on the behavior of linear recurrent processes. This includes number sequences such as the **Fibonacci numbers**, where each number is the sum of the previous two numbers.

Recall the Fibonacci numbers are $0, 1, 1, 2, 3, 5, 8, 13, \ldots$ What is $F_{100}$?

Eigenvalues can answer this question. We must first express the Fibonacci generator as a linear equation:

$$F_{k+2} = F_{k+1} + F_{k}$$

In order to translate this into a meaningful matrix, we must add a “redundant” equation:

$$F_{k+1} = F_{k+1}$$

With these equations, we can create a 2×2 **Fibonacci matrix** $A$:

$$\begin{bmatrix} F_{k+2} \\ F_{k+1} \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} F_{k+1} \\ F_{k} \end{bmatrix}, \qquad A = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$$

This matrix uniquely generates Fibonacci numbers.

To discover the rate at which Fibonacci numbers grow, we decompose $A$ into its eigenvalues:

$$\det(A - \lambda I) = \lambda^2 - \lambda - 1 = 0 \implies \lambda_1 = \frac{1 + \sqrt{5}}{2} \approx 1.61803, \quad \lambda_2 = \frac{1 - \sqrt{5}}{2} \approx -0.61803$$

We can go on to discover eigenvectors $x_1$ and $x_2$. We can then express powers of the Fibonacci matrix as

$$A^k = S \Lambda^k S^{-1}, \qquad \Lambda^k = \begin{bmatrix} \lambda_1^k & 0 \\ 0 & \lambda_2^k \end{bmatrix}$$

As $k$ goes to infinity, the second term $\lambda_2^k$ goes to zero, since $|\lambda_2| < 1$. Thus, the growth rate is dominated by the larger eigenvalue, $\lambda_1 \approx 1.61803$.

Mathematicians in the audience will recognize this number as the **golden ratio**.

We have long known that the ratio of successive Fibonacci numbers converges to 1.61803. Eigenvalues provide a mechanism to derive this value analytically.
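The whole argument can be confirmed numerically; a sketch that generates Fibonacci numbers with the 2×2 matrix and compares their growth to the dominant eigenvalue:

```python
import numpy as np

# The Fibonacci matrix: [F(k+2), F(k+1)] = A @ [F(k+1), F(k)].
A = np.array([[1, 1],
              [1, 0]])

# Generate Fibonacci numbers by repeatedly applying A to (F(1), F(0)) = (1, 0).
v = np.array([1, 0])
fibs = [0, 1]
for _ in range(20):
    v = A @ v
    fibs.append(int(v[0]))
print(fibs[:8])   # [0, 1, 1, 2, 3, 5, 8, 13]

# The dominant eigenvalue of A is the golden ratio.
golden = float(max(np.linalg.eigvals(A)))
print(round(golden, 5))   # 1.61803

# Ratios of successive Fibonacci numbers converge to that eigenvalue.
assert abs(fibs[-1] / fibs[-2] - golden) < 1e-6
```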

Until next time.