Linear Methods of AI

Matrix Similarity

Definition of Matrix Similarity

In linear algebra, the concept of matrix similarity is central to understanding how two different matrices can represent the same linear transformation with respect to different bases. Imagine two portraits of the same object, taken from different perspectives.

Two matrices $A, B \in \mathbb{K}^{n \times n}$ are said to be similar if there exists an invertible matrix $S \in \mathbb{K}^{n \times n}$ such that:

$$B = S^{-1} \cdot A \cdot S$$

The matrix $S$ in this case is called the similarity transformation matrix.
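
To make this concrete, here is a minimal NumPy sketch; the matrices $A$ and $S$ are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

# Arbitrary example matrices (chosen for illustration only).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])  # invertible: det(S) = 1

# B = S^{-1} A S is similar to A by construction.
B = np.linalg.inv(S) @ A @ S

# Equivalent formulation without the explicit inverse: S B = A S.
assert np.allclose(S @ B, A @ S)
print(B)
```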

Basis Transformation and Coordinate Representation

To understand why matrix similarity is so important, we need to look at its relationship with basis transformation. Let $e_1, \ldots, e_n \in \mathbb{K}^n$ be the canonical basis and $v_1, \ldots, v_n \in \mathbb{K}^n$ be another basis of $\mathbb{K}^n$.

If $S$ is an invertible matrix with columns $v_k$:

$$S = (v_1 \quad \ldots \quad v_n) \in \mathbb{K}^{n \times n}$$

Then we have $v_k = S \cdot e_k$ or $e_k = S^{-1} \cdot v_k$ for $k = 1, \ldots, n$. The matrix $S$ represents the basis transformation.
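
In code, $S \cdot e_k$ simply selects the $k$-th column of $S$; a tiny sketch with a made-up basis:

```python
import numpy as np

# Arbitrary example basis v_1 = (1, 0), v_2 = (1, 1) as columns of S.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])

e1, e2 = np.eye(2)                    # canonical basis vectors

assert np.allclose(S @ e1, S[:, 0])   # v_1 = S e_1
assert np.allclose(S @ e2, S[:, 1])   # v_2 = S e_2
```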

A vector $x \in \mathbb{K}^n$ can be expressed in the canonical basis through coordinates $x_k$ and in the basis $v_1, \ldots, v_n$ through coordinates $\xi_k$:

$$x = \sum_{k=1}^n x_k \cdot e_k = \sum_{k=1}^n \xi_k \cdot v_k = S \cdot \xi$$

$$\xi = \begin{pmatrix} \xi_1 \\ \vdots \\ \xi_n \end{pmatrix} = S^{-1} \cdot x$$

The matrix $S^{-1}$ represents the coordinate transformation.
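
A short sketch of this coordinate change, reusing the made-up basis from above:

```python
import numpy as np

# Columns of S are the new basis vectors v_1, v_2 (arbitrary example basis).
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])

x = np.array([3.0, 2.0])        # coordinates in the canonical basis

xi = np.linalg.solve(S, x)      # xi = S^{-1} x, coordinates in the basis v_1, v_2

# Reconstructing x from the new coordinates: x = S xi = sum_k xi_k v_k.
assert np.allclose(S @ xi, x)
print(xi)                       # [1. 2.]
```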

Linear Transformation in Different Bases

Now consider the linear transformation $y = A \cdot x$. In the canonical basis, $y$ is expressed through coordinates $y_k$, while in the basis $v_1, \ldots, v_n$ through coordinates $\eta_k$:

$$y = \sum_{k=1}^n y_k \cdot e_k = \sum_{k=1}^n \eta_k \cdot v_k = S \cdot \eta$$

$$\eta = \begin{pmatrix} \eta_1 \\ \vdots \\ \eta_n \end{pmatrix} = S^{-1} \cdot y$$

Therefore:

$$S \cdot \eta = y = A \cdot x = A \cdot S \cdot \xi$$

or in other words:

$$\eta = S^{-1} \cdot A \cdot S \cdot \xi$$

In the basis $v_1, \ldots, v_n$, the linear transformation $y = A \cdot x$ is represented by $\eta = B \cdot \xi$ with the matrix:

$$B = S^{-1} \cdot A \cdot S$$

This is why similar matrices represent the same linear transformation, merely viewed with respect to different bases of $\mathbb{K}^n$.
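
Continuing the sketch with the same illustrative $A$ and $S$: applying $B$ to the new-basis coordinates $\xi$ yields exactly the new-basis coordinates $\eta$ of $y$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.linalg.inv(S) @ A @ S

x = np.array([3.0, 2.0])
y = A @ x                        # transformation in canonical coordinates

xi  = np.linalg.solve(S, x)      # coordinates of x in the basis v_1, ..., v_n
eta = np.linalg.solve(S, y)      # coordinates of y in the basis v_1, ..., v_n

# The same transformation, expressed in the new basis: eta = B xi.
assert np.allclose(B @ xi, eta)
```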

Invariant Properties of Similar Matrices

Similar matrices have several fundamental properties that are very useful. Since they represent the same linear transformation in different bases, similar matrices preserve the same intrinsic characteristics.

Based on the theorem about similar matrices, if the matrices $A$ and $B = S^{-1} \cdot A \cdot S$ are similar, then they both have (as the sketch after the list verifies numerically):

  1. The same determinant
  2. The same characteristic polynomial
  3. The same eigenvalues
  4. The same trace
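
All four invariants can be checked numerically; a sketch with the same illustrative matrices as above (np.poly returns the coefficients of a square matrix's characteristic polynomial):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.linalg.inv(S) @ A @ S

# 1. Same determinant
assert np.isclose(np.linalg.det(A), np.linalg.det(B))

# 2. Same characteristic polynomial
assert np.allclose(np.poly(A), np.poly(B))

# 3. Same eigenvalues (compared as sorted multisets)
assert np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))

# 4. Same trace
assert np.isclose(np.trace(A), np.trace(B))
```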

Proof of Determinant Equality

For the determinant, the multiplicativity of the determinant, $\det(M \cdot N) = \det(M) \cdot \det(N)$, gives:

$$\det(B) = \det(S^{-1} \cdot A \cdot S) = \det(S^{-1}) \cdot \det(A) \cdot \det(S)$$

Since $\det(S^{-1}) = \frac{1}{\det(S)}$, then:

$$\det(B) = \frac{1}{\det(S)} \cdot \det(A) \cdot \det(S) = \det(A)$$
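
The two proof steps, multiplicativity of the determinant and $\det(S^{-1}) = 1/\det(S)$, can be mirrored numerically with the same illustrative matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
S_inv = np.linalg.inv(S)

# Multiplicativity: det(S^{-1} A S) = det(S^{-1}) det(A) det(S)
lhs = np.linalg.det(S_inv @ A @ S)
rhs = np.linalg.det(S_inv) * np.linalg.det(A) * np.linalg.det(S)
assert np.isclose(lhs, rhs)

# det(S^{-1}) = 1 / det(S), so the S-factors cancel and det(B) = det(A).
assert np.isclose(np.linalg.det(S_inv), 1.0 / np.linalg.det(S))
assert np.isclose(lhs, np.linalg.det(A))
```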

Eigenvalue Equality

If $v \in \mathbb{K}^n$ is an eigenvector of $A$ with eigenvalue $\lambda \in \mathbb{K}$, such that $A \cdot v = \lambda \cdot v$, then $w = S^{-1} \cdot v$ is an eigenvector of $B$ with the same eigenvalue:

$$B \cdot w = S^{-1} \cdot A \cdot S \cdot w = S^{-1} \cdot A \cdot v = S^{-1} \cdot \lambda \cdot v = \lambda \cdot w$$

This shows that matrix similarity preserves the spectrum, that is, the set of eigenvalues, which is a fundamental characteristic of the underlying linear transformation.
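
A final numerical sketch of this argument, again with the illustrative matrices from above: an eigenvector $v$ of $A$ is mapped by $S^{-1}$ to an eigenvector $w$ of $B$ for the same eigenvalue:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.linalg.inv(S) @ A @ S

# Take one eigenpair (lambda, v) of A.
eigvals, eigvecs = np.linalg.eig(A)
lam = eigvals[0]
v = eigvecs[:, 0]
assert np.allclose(A @ v, lam * v)

# w = S^{-1} v is an eigenvector of B for the same eigenvalue lambda.
w = np.linalg.solve(S, v)
assert np.allclose(B @ w, lam * w)
```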