Linear Methods of AI

Individual Eigenvalue Calculation

Inverse Iteration Method with Shift

The inverse iteration method with shift is designed to compute specific eigenvalues that lie close to an initial guess. The algorithm uses a shift parameter $\mu$ to steer the search toward a particular eigenvalue.

Imagine searching for a radio station among many frequencies. Without a shift, you hear all stations at once as an unclear mix; with a shift, you tune the dial to a specific frequency and receive one clear station.

The algorithm starts with a matrix $A \in \mathbb{R}^{n \times n}$ and a shift parameter $\mu \in \mathbb{R}$. The initial vector $w_0 \in \mathbb{R}^n$ is normalized to $\hat{w}_0 := w_0 / \|w_0\|$, and the iteration starts at $k := 0$.

LU Decomposition and Iteration

The first step is to compute the LU decomposition of the matrix $A - \mu \cdot I$. This decomposition is performed once at the beginning and reused in every iteration for computational efficiency.

$$w_{k+1} := (A - \mu \cdot I)^{-1} \cdot \hat{w}_k$$

In practice, instead of computing the matrix inverse directly, we solve the linear system using the LU decomposition:

$$(A - \mu \cdot I) \cdot w_{k+1} = \hat{w}_k$$
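
The following is a minimal sketch of this step in Python with NumPy/SciPy; the example matrix, shift, and start vector are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Illustrative example: matrix, shift, and start vector are assumed.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # eigenvalues 5 and 2
mu = 1.8                                   # shift near the targeted eigenvalue 2

# Factor (A - mu*I) once: the O(n^3) work is done up front.
lu, piv = lu_factor(A - mu * np.eye(A.shape[0]))

w_hat = np.array([1.0, 0.0])               # some start vector
w_hat /= np.linalg.norm(w_hat)

# Each later iteration then needs only an O(n^2) triangular solve.
w_next = lu_solve((lu, piv), w_hat)        # solves (A - mu*I) w_next = w_hat
```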

Normalization and Convergence

After obtaining $w_{k+1}$, perform a normalization to prevent uncontrolled growth:

$$\hat{w}_{k+1} := w_{k+1} / \|w_{k+1}\|$$

Eigenvalue estimation uses the ratio of vector components. For an index $j$ with $(\hat{w}_k)_j \neq 0$:

$$\lambda := (w_{k+1})_j / (\hat{w}_k)_j$$

Iteration continues until the convergence criterion $|1/\lambda - 1/\lambda_{\text{old}}| \leq \text{tolerance}$ is met. The tolerance parameter is a threshold that determines how accurate the result must be, for example $10^{-6}$ for roughly six decimal digits of accuracy.

Imagine measuring height with a ruler: the tolerance determines how precise a measurement you accept, whether centimeters suffice or millimeters are needed. The smaller the tolerance, the more accurate the result, but more iterations are required.

The final result is the eigenvalue $1/\lambda + \mu$ and the eigenvector $\hat{w}_k$.
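
Putting the pieces together, here is a minimal sketch of the whole method in Python; the function name, random start vector, and iteration cap are illustrative choices rather than part of the original description:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_iteration_with_shift(A, mu, tol=1e-6, max_iter=500):
    """Sketch: returns the eigenvalue of A closest to the shift mu
    and an associated (normalized) eigenvector."""
    n = A.shape[0]
    lu, piv = lu_factor(A - mu * np.eye(n))     # LU of (A - mu*I), computed once
    w_hat = np.random.default_rng(0).standard_normal(n)
    w_hat /= np.linalg.norm(w_hat)              # normalized start vector
    lam_old = None
    for _ in range(max_iter):
        w = lu_solve((lu, piv), w_hat)          # solve (A - mu*I) w = w_hat
        j = np.argmax(np.abs(w_hat))            # index with (w_hat)_j != 0
        lam = w[j] / w_hat[j]                   # estimate converging to 1/(lambda - mu)
        w_hat = w / np.linalg.norm(w)           # normalization step
        if lam_old is not None and abs(1 / lam - 1 / lam_old) <= tol:
            break
        lam_old = lam
    return 1 / lam + mu, w_hat                  # eigenvalue 1/lambda + mu, eigenvector
```

For the 2x2 example above, `inverse_iteration_with_shift(A, 1.8)` converges to the eigenvalue 2, the one closest to the shift.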

von Mises Method for Dominant Eigenvalue

The von Mises vector iteration method finds the eigenvalue with the largest magnitude (the dominant eigenvalue). The algorithm uses a simple iterative process built on repeated matrix-vector multiplication.

The algorithm starts with an initial vector $w_0 \in \mathbb{R}^n$, normalized to $\hat{w}_0 := w_0 / \|w_0\|$, and the iteration starts at $k := 0$.

Iteration and Convergence

Each iteration performs two main operations:

$$w_{k+1} := A \cdot \hat{w}_k$$
$$\hat{w}_{k+1} := w_{k+1} / \|w_{k+1}\|$$

Eigenvalue estimation uses the component ratio for an index $j$ with $(\hat{w}_k)_j \neq 0$:

$$\lambda := (w_{k+1})_j / (\hat{w}_k)_j$$

Iteration continues until $|\lambda - \lambda_{\text{old}}| \leq \text{tolerance}$. The tolerance value determines the required precision, typically ranging from $10^{-8}$ to $10^{-12}$ for high-precision calculations. The final result is the dominant eigenvalue $\lambda$ and the eigenvector $\hat{w}_k$.

This method succeeds if $|\lambda_1| > |\lambda_2|$ and the initial vector $w_0$ has a non-zero component in the direction of the dominant eigenvector. Under these assumptions, the iteration converges to the dominant eigenvalue $\lambda_1$ and an associated eigenvector.
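
Under these assumptions, a minimal sketch in Python follows; the random start vector and iteration cap are again assumed choices:

```python
import numpy as np

def von_mises_iteration(A, tol=1e-8, max_iter=1000):
    """Sketch: returns the dominant (largest-magnitude) eigenvalue of A
    and an associated (normalized) eigenvector."""
    n = A.shape[0]
    w_hat = np.random.default_rng(0).standard_normal(n)
    w_hat /= np.linalg.norm(w_hat)          # normalized start vector
    lam_old = None
    for _ in range(max_iter):
        w = A @ w_hat                       # w_{k+1} = A * w_hat_k
        j = np.argmax(np.abs(w_hat))        # index with (w_hat)_j != 0
        lam = w[j] / w_hat[j]               # eigenvalue estimate
        w_hat = w / np.linalg.norm(w)       # normalization step
        if lam_old is not None and abs(lam - lam_old) <= tol:
            break
        lam_old = lam
    return lam, w_hat
```

For `A = [[4, 1], [2, 3]]`, for example, the iteration converges to the dominant eigenvalue 5.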

Technique for Finding Smallest Eigenvalue

For invertible matrices, there is an important relationship between the eigenvalues of a matrix and those of its inverse. If $A \cdot v_n = \lambda_n \cdot v_n$, then

$$A^{-1} \cdot v_n = \frac{1}{\lambda_n} \cdot v_n$$

This follows by multiplying both sides of the eigenvalue equation by $A^{-1}$ and dividing by $\lambda_n$, which is non-zero because $A$ is invertible. The smallest-magnitude eigenvalue of $A$ thus becomes the largest-magnitude eigenvalue of $A^{-1}$. By applying vector iteration to $A^{-1}$, we obtain the smallest eigenvalue of the original matrix.
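
A quick numerical check of this relationship (a throwaway sketch; the matrix is an arbitrary invertible example):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])                 # arbitrary invertible example

# The eigenvalues of A^{-1} are the reciprocals of those of A,
# so both lines print (approximately) [2. 5.].
print(np.sort(np.linalg.eigvals(A)))
print(np.sort(1 / np.linalg.eigvals(np.linalg.inv(A))))
```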

In practice, each iteration solves the linear system $A \cdot w_{k+1} = \hat{w}_k$ via LU decomposition, avoiding the inefficient explicit computation of the inverse.
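
Combining both ideas gives the following minimal sketch of inverse vector iteration for the smallest-magnitude eigenvalue; as before, the function name, random start vector, and iteration cap are assumed:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def smallest_eigenvalue(A, tol=1e-8, max_iter=1000):
    """Sketch: vector iteration applied to A^{-1} via LU solves; returns
    the smallest-magnitude eigenvalue of A and an associated eigenvector."""
    n = A.shape[0]
    lu, piv = lu_factor(A)                  # factor A once
    w_hat = np.random.default_rng(0).standard_normal(n)
    w_hat /= np.linalg.norm(w_hat)          # normalized start vector
    lam_old = None
    for _ in range(max_iter):
        w = lu_solve((lu, piv), w_hat)      # solve A w = w_hat, i.e. w = A^{-1} w_hat
        j = np.argmax(np.abs(w_hat))        # index with (w_hat)_j != 0
        lam = w[j] / w_hat[j]               # estimate converging to 1/lambda_min
        w_hat = w / np.linalg.norm(w)       # normalization step
        if lam_old is not None and abs(lam - lam_old) <= tol:
            break
        lam_old = lam
    return 1 / lam, w_hat                   # smallest-magnitude eigenvalue of A
```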