
Linear Methods of AI

Basic Procedure for Diagonalization

General Matrix Diagonalization Procedure

Imagine you are trying to rearrange a messy room to make it orderly. Matrix diagonalization is similar: we transform a complex matrix into a diagonal form that is much simpler to analyze.

To diagonalize a matrix $A \in \mathbb{K}^{n \times n}$, we use a systematic procedure that will determine whether the matrix can be simplified and how to do it.

Diagonalization Steps

The diagonalization procedure consists of three main steps that must be performed sequentially:

  1. Compute the characteristic polynomial to find all eigenvalues $\lambda_1, \ldots, \lambda_k \in \mathbb{K}$ along with their algebraic multiplicities $\mu_A(\lambda_1), \ldots, \mu_A(\lambda_k)$.

    This step is like finding the "keys" that will unlock the hidden structure of the matrix. The requirement that must be satisfied is that the characteristic polynomial $\chi_A(t)$ factors completely into linear factors, meaning:

    $$\sum_{i=1}^{k} \mu_A(\lambda_i) = n$$

    If not, then the matrix cannot be diagonalized at all.

  2. Compute the eigenspaces for each eigenvalue by solving the homogeneous linear system:

    $$(A - \lambda_i \cdot I) \cdot v = 0$$

    Here we look for all vectors that "survive" when the matrix $A$ acts on them, only changing their length by a factor of $\lambda_i$ without changing their direction.

  3. Check the diagonalization conditions by verifying whether the algebraic multiplicity equals the geometric multiplicity for all eigenvalues. Mathematically, for all $i = 1, \ldots, k$ we must have $\mu_A(\lambda_i) = \dim \text{Eig}_A(\lambda_i)$.

    This condition ensures that we have enough independent eigenvectors to form a complete basis. If it is satisfied, the basis vectors from all eigenspaces form the columns of the transformation matrix $S$ (see the code sketch after this list), yielding:

    $$\Lambda = S^{-1} \cdot A \cdot S$$
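
For small matrices the three steps can be mirrored directly in code. The following Python sketch uses sympy (an assumed tool choice, not part of the course material); note that sympy computes eigenvalues over the algebraic closure, so if $\mathbb{K} = \mathbb{R}$ or $\mathbb{Q}$ the splitting requirement of step 1 would additionally need a check that all eigenvalues lie in $\mathbb{K}$.

```python
import sympy as sp

def try_diagonalize(A: sp.Matrix):
    """Sketch of the three-step procedure; returns (S, Lambda) or None."""
    n = A.rows
    t = sp.symbols('t')
    # Step 1: characteristic polynomial chi_A(t) = det(A - t*I), printed in factored form.
    print("chi_A(t) =", sp.factor((A - t * sp.eye(n)).det()))
    # Step 2: eigenvects() returns triples (eigenvalue, algebraic multiplicity, eigenspace basis).
    eigendata = A.eigenvects()
    # Step 3: algebraic multiplicity must equal geometric multiplicity for every eigenvalue.
    if any(alg_mult != len(basis) for _, alg_mult, basis in eigendata):
        return None
    # The eigenvectors from all eigenspaces become the columns of S; Lambda = S^{-1} A S.
    S = sp.Matrix.hstack(*[v for _, _, basis in eigendata for v in basis])
    return S, S.inv() * A * S
```

For a non-diagonalizable matrix such as the Jordan block $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$, the multiplicity check in step 3 fails and the sketch returns None.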

Example Application of the Procedure

Consider the matrix:

$$A = \begin{pmatrix} 0 & -1 & 1 \\ -3 & -2 & 3 \\ -2 & -2 & 3 \end{pmatrix}$$

Let us apply the diagonalization procedure to this concrete example:

  1. We compute the characteristic polynomial to find the eigenvalue "keys":

    $$\chi_A(t) = \det(A - t \cdot I) = -t^3 + t^2 + t - 1$$

    After factoring, we obtain:

    $$\chi_A(t) = (1 - t)^2 \cdot (-1 - t)$$

    From this we see that the eigenvalues are $\lambda_1 = 1$ with algebraic multiplicity $\mu_A(1) = 2$ and $\lambda_2 = -1$ with algebraic multiplicity $\mu_A(-1) = 1$. Since $2 + 1 = 3$ equals the matrix dimension, the initial requirement is satisfied.

  2. We find all vectors that "survive" the transformation for each eigenvalue:

    $$\text{Eig}_A(1) = \text{Kern}(A - I) = \text{Span}\left\{\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 1 \end{pmatrix}\right\}$$
    $$\text{Eig}_A(-1) = \text{Kern}(A + I) = \text{Span}\left\{\begin{pmatrix} 1 \\ 3 \\ 2 \end{pmatrix}\right\}$$
  3. We check whether the diagonalization conditions are satisfied. For the eigenvalue $\lambda_1 = 1$, the algebraic multiplicity is 2 and the eigenspace has dimension 2 (two independent basis vectors). For the eigenvalue $\lambda_2 = -1$, the algebraic multiplicity is 1 and the eigenspace has dimension 1.

    Since $\mu_A(1) = 2 = \dim \text{Eig}_A(1)$ and $\mu_A(-1) = 1 = \dim \text{Eig}_A(-1)$, the diagonalization conditions are completely satisfied; the cross-check after this list reproduces these multiplicities.
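
As a cross-check of the hand computation, the factored characteristic polynomial and the multiplicities can be reproduced with sympy (the eigenspace bases sympy reports may differ from the ones above, but they span the same spaces):

```python
import sympy as sp

A = sp.Matrix([[ 0, -1, 1],
               [-3, -2, 3],
               [-2, -2, 3]])

t = sp.symbols('t')
print(sp.factor((A - t * sp.eye(3)).det()))   # -(t - 1)**2*(t + 1), i.e. (1 - t)^2 (-1 - t)

for lam, alg_mult, basis in A.eigenvects():
    print(lam, alg_mult, len(basis))          # -1: multiplicities 1, 1   and   1: multiplicities 2, 2
```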

Now we can form the transformation matrix $S$ by arranging all eigenvectors as columns, and the diagonal matrix $\Lambda$ with the eigenvalues on the main diagonal:

$$S = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 3 \\ 1 & 1 & 2 \end{pmatrix}$$
$$\Lambda = \begin{pmatrix} 1 & & \\ & 1 & \\ & & -1 \end{pmatrix}$$

Thus, the matrix $A$ is successfully diagonalized to $\Lambda$ through the transformation $\Lambda = S^{-1} \cdot A \cdot S$.
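
The result can be verified directly by carrying out the change of basis, for example with sympy for exact arithmetic (again an assumed tool choice):

```python
import sympy as sp

A = sp.Matrix([[ 0, -1, 1], [-3, -2, 3], [-2, -2, 3]])
S = sp.Matrix([[ 1,  0, 1], [ 0,  1, 3], [ 1,  1, 2]])   # eigenvectors as columns

Lam = S.inv() * A * S
assert Lam == sp.diag(1, 1, -1)   # Lambda = S^{-1} A S is indeed diag(1, 1, -1)
```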

Diagonalizability means that there exists a basis of eigenvectors, for which the basis transformation matrix $S$ is invertible. This result shows that the systematic procedure we use can definitively determine whether a matrix can be diagonalized and, if so, how to do it.