MAT225 Section Summary: 7.1

Diagonalization of Symmetric Matrices

Summary

As we begin Chapter 7, we should keep our specific objectives in mind; we're interested in two goals:

  1. we're examining the actions of symmetric matrices as linear transformations, and
  2. we're interested in analyzing the structure of general matrices of information (like images, say, as described in the opening pages of the chapter, p. 447).

Great things happen when you find yourself working with symmetric matrices. Their special structure leads to some seemingly magical properties, as we see here. Symmetric matrices are an important special case, as we found in working with least-squares problems (where the coefficient matrix of the normal equations was $A^TA$, a symmetric matrix!).

Theorem 1: If A is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.
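Theorem 1 is easy to check numerically. Here is a minimal sketch using NumPy (the matrix is my own illustrative example, not one from the text):

```python
import numpy as np

# A small symmetric matrix (an illustrative example, not from the text)
A = np.array([[ 6.0, -2.0, -1.0],
              [-2.0,  6.0, -1.0],
              [-1.0, -1.0,  5.0]])

# eigh is NumPy's routine for symmetric matrices: it returns real
# eigenvalues and eigenvectors as the columns of its second output
eigenvalues, eigenvectors = np.linalg.eigh(A)

# Eigenvectors from different eigenspaces are orthogonal:
v1, v2 = eigenvectors[:, 0], eigenvectors[:, 1]
print(np.dot(v1, v2))  # essentially 0, up to rounding
```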

Example: #13, p. 454

orthogonally diagonalizable: A matrix $A$ is orthogonally diagonalizable if there is an orthogonal matrix $P$ (so $P^{-1} = P^T$) and a diagonal matrix $D$ such that

$$A = PDP^T = PDP^{-1}.$$
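A quick numerical sketch of the factorization $A = PDP^T$ (again with a made-up symmetric matrix):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])   # symmetric, so orthogonally diagonalizable

# eigh returns the eigenvalues (for D) and orthonormal eigenvectors
# as the columns of P
d, P = np.linalg.eigh(A)
D = np.diag(d)

# P is orthogonal: P^T P = I, so P^{-1} = P^T
print(np.allclose(P.T @ P, np.eye(2)))   # True
# and A = P D P^T
print(np.allclose(P @ D @ P.T, A))       # True
```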

Example: #22, p. 454

Theorem 2: An $n \times n$ matrix $A$ is orthogonally diagonalizable if and only if $A$ is a symmetric matrix.

The Spectral Theorem: A symmetric $n \times n$ matrix $A$ has the following properties:

  1. $A$ has $n$ real eigenvalues, counting multiplicities (no complex eigenvalues!).
  2. The dimension of the eigenspace for each eigenvalue $\lambda$ equals the multiplicity of $\lambda$ as a root of the characteristic equation (no "missing" dimensions).
  3. The eigenspaces are mutually orthogonal: eigenvectors corresponding to different eigenvalues are orthogonal.
  4. $A$ is orthogonally diagonalizable.
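The properties above can be illustrated with a symmetric matrix that has a repeated eigenvalue (my own example, not from the text):

```python
import numpy as np

# Symmetric, with eigenvalue 1 of multiplicity 2 and eigenvalue 4
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

lam, P = np.linalg.eigh(A)
print(np.round(lam, 6))   # real eigenvalues: 1 (twice) and 4

# Even with the repeated eigenvalue, eigh delivers a full orthonormal
# set of eigenvectors: the eigenspace for 1 has dimension 2,
# so there are no "missing" dimensions
print(np.allclose(P.T @ P, np.eye(3)))          # True
print(np.allclose(P @ np.diag(lam) @ P.T, A))   # True
```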

Example: #31, p. 455

Since $A = PDP^T$, where $P = [\,u_1 \; u_2 \; \cdots \; u_n\,]$ is an orthogonal matrix, we can write

$$A = \lambda_1 u_1 u_1^T + \lambda_2 u_2 u_2^T + \cdots + \lambda_n u_n u_n^T,$$

the spectral decomposition of $A$. Each matrix $u_i u_i^T$ is a projection matrix: the projection of a vector $x$ onto the subspace spanned by $u_i$ is given by

$$(u_i u_i^T)\,x = (u_i^T x)\,u_i = (x \cdot u_i)\,u_i$$

(the last part of the equation is one way of thinking of the projection that I've emphasized).
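A sketch of the spectral decomposition and the projection matrices $u_i u_i^T$, once more with NumPy and a made-up symmetric matrix:

```python
import numpy as np

A = np.array([[7.0, 2.0],
              [2.0, 4.0]])

lam, P = np.linalg.eigh(A)   # orthonormal eigenvectors u_i = P[:, i]

# Rebuild A as the sum of lambda_i * u_i u_i^T
A_rebuilt = sum(lam[i] * np.outer(P[:, i], P[:, i]) for i in range(2))
print(np.allclose(A_rebuilt, A))   # True

# Each u_i u_i^T projects onto span{u_i}: (u_i u_i^T) x = (x . u_i) u_i
x = np.array([1.0, -2.0])
u0 = P[:, 0]
proj = np.outer(u0, u0) @ x
print(np.allclose(proj, np.dot(x, u0) * u0))   # True
```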

Example: #34, p. 455

Therefore, the action of $A$ as a linear transformation is well understood:

$$Ax = \lambda_1 (u_1 u_1^T)x + \lambda_2 (u_2 u_2^T)x + \cdots + \lambda_n (u_n u_n^T)x,$$

or

$$Ax = \lambda_1 (x \cdot u_1)\,u_1 + \lambda_2 (x \cdot u_2)\,u_2 + \cdots + \lambda_n (x \cdot u_n)\,u_n.$$

That is, we project $x$ onto each basis vector $u_i$, and then multiply each of these projections by the corresponding eigenvalue. Alternatively, if

$$x = Pc = c_1 u_1 + c_2 u_2 + \cdots + c_n u_n,$$

where the columns of $P$ form the basis, then

$$Ax = PDc = \lambda_1 c_1 u_1 + \lambda_2 c_2 u_2 + \cdots + \lambda_n c_n u_n.$$

Neat!
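The action described above can be checked numerically, computing $Ax$ both via the projections and via eigenvector coordinates (a sketch; matrix and vector are my own):

```python
import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 5.0]])
x = np.array([3.0, 1.0])

lam, P = np.linalg.eigh(A)

# Project x onto each eigenvector, scale by the eigenvalue, and sum:
Ax_via_projections = sum(lam[i] * np.dot(x, P[:, i]) * P[:, i]
                         for i in range(2))
print(np.allclose(Ax_via_projections, A @ x))   # True

# Equivalently, in eigenvector coordinates c = P^T x, we get Ax = P(Dc):
c = P.T @ x
print(np.allclose(P @ (lam * c), A @ x))        # True
```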


LONG ANDREW E
Sat Jan 29 21:08:22 EST 2011