Matrix Norm from Vector Norm
Have you ever wondered how we can measure the "size" of a matrix? Just as vectors have a length, matrices also need a notion of "size", called a matrix norm. What is interesting is that we can build matrix norms directly from the vector norms we already know.
If we have a vector norm $\|\cdot\|$ on the space $\mathbb{R}^n$, then we can define a corresponding matrix norm on $\mathbb{R}^{n \times n}$ through the formula:

$$\|A\| := \max_{x \neq 0} \frac{\|Ax\|}{\|x\|} = \max_{\|x\| = 1} \|Ax\|$$
The norm produced this way is called the natural matrix norm that is induced by the vector norm. This norm has two important properties that make it very useful in numerical analysis.
- Compatibility Property: For all matrices $A \in \mathbb{R}^{n \times n}$ and vectors $x \in \mathbb{R}^n$, the following holds:

  $$\|Ax\| \leq \|A\| \, \|x\|$$
- Multiplicative Property: For all matrices $A, B \in \mathbb{R}^{n \times n}$, the following holds:

  $$\|AB\| \leq \|A\| \, \|B\|$$
Both properties are very fundamental because they ensure that matrix norms behave consistently with matrix and vector multiplication operations.
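As a quick sanity check, both properties can be verified numerically. The sketch below (assuming NumPy is available) uses the induced 2-norm on random matrices and vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# Induced 2-norm (largest singular value) as computed by NumPy.
def mat_norm(M):
    return np.linalg.norm(M, 2)

# Compatibility: ||A x|| <= ||A|| ||x||
assert np.linalg.norm(A @ x) <= mat_norm(A) * np.linalg.norm(x) + 1e-12

# Multiplicative property: ||A B|| <= ||A|| ||B||
assert mat_norm(A @ B) <= mat_norm(A) * mat_norm(B) + 1e-12
```

The small tolerance only guards against floating-point rounding; mathematically both inequalities hold exactly.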
Examples of Special Matrix Norms
Let's look at some concrete examples of matrix norms that are often used in practice.
- Maximum Column Sum Norm: If we use the $1$-norm on vectors, then the induced matrix norm is:

  $$\|A\|_1 = \max_{1 \leq j \leq n} \sum_{i=1}^{n} |a_{ij}|$$

  This means we look for the column with the largest sum of absolute values.
- Maximum Row Sum Norm: If we use the maximum norm $\|\cdot\|_\infty$ on vectors, then the induced matrix norm is:

  $$\|A\|_\infty = \max_{1 \leq i \leq n} \sum_{j=1}^{n} |a_{ij}|$$

  This means we look for the row with the largest sum of absolute values.
Both norms are very easy to compute and provide good estimates for numerical algorithm stability analysis.
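Both formulas are one-liners in code. A minimal sketch (assuming NumPy), comparing the hand-computed sums against NumPy's built-in `np.linalg.norm`:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# Maximum column sum norm (induced by the 1-norm on vectors).
col_norm = np.abs(A).sum(axis=0).max()   # max over columns: max(4, 6) = 6
# Maximum row sum norm (induced by the infinity norm on vectors).
row_norm = np.abs(A).sum(axis=1).max()   # max over rows: max(3, 7) = 7

assert col_norm == np.linalg.norm(A, 1)       # 6.0
assert row_norm == np.linalg.norm(A, np.inf)  # 7.0
```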
Linear System Stability
Why do we need to understand matrix condition? The answer lies in the problem of numerical stability. When we solve linear equation systems using computers, there is always the possibility of small errors in data or calculations.
Imagine we have a slightly perturbed system. Instead of solving $Ax = b$, we actually solve the perturbed system $(A + \delta A)\,\tilde{x} = b + \delta b$, where $\delta A \in \mathbb{R}^{n \times n}$ and $\delta b \in \mathbb{R}^n$ are small perturbations.
The crucial question is: how much influence do the small perturbations $\delta A$ and $\delta b$ have on the solution $\tilde{x}$?
If the matrix $A$ is regular and the perturbation is small enough such that $\|A^{-1}\| \, \|\delta A\| < 1$, then the perturbed matrix $A + \delta A$ is also regular.
For the relative error in the solution, we obtain the estimate:

$$\frac{\|\tilde{x} - x\|}{\|x\|} \leq \frac{\operatorname{cond}(A)}{1 - \operatorname{cond}(A)\,\frac{\|\delta A\|}{\|A\|}} \left( \frac{\|\delta A\|}{\|A\|} + \frac{\|\delta b\|}{\|b\|} \right)$$

where $\operatorname{cond}(A) = \|A\| \, \|A^{-1}\|$ is the condition number of the matrix $A$.
The condition number measures the sensitivity of linear system solutions to small perturbations in input data.
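The sketch below (assuming NumPy) illustrates this sensitivity on a nearly singular $2 \times 2$ system: a perturbation of $b$ in the fourth decimal place changes the solution completely, and the amplification stays within the bound set by the condition number:

```python
import numpy as np

# A nearly singular (ill-conditioned) matrix: tiny changes in b
# can cause large relative changes in the solution x.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

x = np.linalg.solve(A, b)        # exact solution: [1, 1]
db = np.array([0.0, 1e-4])       # tiny perturbation of the right-hand side
x_pert = np.linalg.solve(A, b + db)

rel_err_b = np.linalg.norm(db) / np.linalg.norm(b)
rel_err_x = np.linalg.norm(x_pert - x) / np.linalg.norm(x)

# With delta_A = 0, the amplification factor is bounded by cond(A).
assert rel_err_x <= np.linalg.cond(A) * rel_err_b
```

Here the relative change in $b$ is about $10^{-5}$, yet the relative change in the solution is of order one, exactly the kind of blow-up a large condition number warns about.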
Spectral Radius and Eigenvalues
Before discussing condition numbers further, we need to understand the concept of the spectral radius. The spectral radius of a matrix $A \in \mathbb{R}^{n \times n}$ is defined as:

$$\rho(A) := \max \{ |\lambda| : \lambda \text{ is an eigenvalue of } A \}$$
The spectral radius is thus the largest magnitude among the eigenvalues of the matrix.
There is an interesting relationship between the spectral radius and matrix norms. For every eigenvalue $\lambda$ of a matrix $A$ and every induced matrix norm, the following holds:

$$|\lambda| \leq \|A\|, \qquad \text{and hence} \qquad \rho(A) \leq \|A\|$$

This means that matrix norms provide an upper bound for all eigenvalues.
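A quick numerical check of this bound (assuming NumPy), for the induced 1-, 2-, and $\infty$-norms of a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

# Spectral radius: largest eigenvalue magnitude.
rho = np.abs(np.linalg.eigvals(A)).max()

# Every induced norm bounds the spectral radius from above.
for p in (1, 2, np.inf):
    assert rho <= np.linalg.norm(A, p) + 1e-12
```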
A more specific result applies to the spectral norm or 2-norm of matrices. For symmetric matrices $A \in \mathbb{R}^{n \times n}$, the spectral norm equals the spectral radius:

$$\|A\|_2 = \rho(A)$$

For general matrices, the spectral norm is computed as:

$$\|A\|_2 = \sqrt{\rho(A^{T} A)}$$
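Both characterizations are easy to confirm numerically; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

# General case: ||A||_2 = sqrt(rho(A^T A)) = largest singular value.
spectral = np.sqrt(np.abs(np.linalg.eigvals(A.T @ A)).max())
assert np.isclose(spectral, np.linalg.norm(A, 2))

# Symmetric case: ||S||_2 = rho(S).
S = (A + A.T) / 2
assert np.isclose(np.linalg.norm(S, 2), np.abs(np.linalg.eigvalsh(S)).max())
```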
Condition Number
Now we arrive at the central concept in numerical analysis, namely the condition number. For an invertible matrix $A \in \mathbb{R}^{n \times n}$, the condition number is defined as:

$$\operatorname{cond}(A) := \|A\| \, \|A^{-1}\|$$
The condition number measures how "bad" a matrix is in the context of numerical stability. The larger the condition number, the more sensitive the system is to small perturbations.
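The definition translates directly into code. A sketch (assuming NumPy) on a diagonal matrix, where the 2-norm condition number is simply the ratio of the largest to the smallest diagonal entry in magnitude:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 0.5]])

# cond(A) = ||A|| * ||A^{-1}|| in the chosen norm (here, the 2-norm).
cond = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# NumPy's built-in uses the 2-norm by default: 2 / 0.5 = 4.
assert np.isclose(cond, np.linalg.cond(A))
```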
Spectral Condition
For symmetric matrices, we can compute the condition number in the 2-norm explicitly using eigenvalues. The spectral condition of a symmetric matrix $A$ is:

$$\operatorname{cond}_2(A) = \frac{|\lambda_{\max}|}{|\lambda_{\min}|}$$

where $\lambda_{\max}$ and $\lambda_{\min}$ are the eigenvalues with the largest and smallest magnitudes.
The spectral condition provides a very clear interpretation. A matrix has bad condition if:
- Its eigenvalues are very different in magnitude (the ratio $|\lambda_{\max}| / |\lambda_{\min}|$ is large)
- There are eigenvalues that are very small (approaching singular)
Conversely, matrices with good condition have eigenvalues that are relatively uniform in magnitude.
The condition number provides a quantitative measure of how sensitive linear system solutions are to small perturbations in input data.
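As a concrete example of a badly conditioned symmetric matrix, the sketch below (assuming NumPy) builds the classic Hilbert matrix and computes its spectral condition from its eigenvalues:

```python
import numpy as np

# Hilbert matrix H[i, j] = 1 / (i + j + 1): symmetric and
# notoriously ill-conditioned even for small sizes.
n = 6
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)

eig = np.linalg.eigvalsh(H)   # real eigenvalues, ascending order
spectral_cond = abs(eig[-1]) / abs(eig[0])

# For symmetric matrices the 2-norm condition number equals the
# ratio of the largest to the smallest eigenvalue magnitude.
assert np.isclose(spectral_cond, np.linalg.cond(H, 2), rtol=1e-6)
```

Already at $n = 6$ the spectral condition exceeds $10^7$, so roughly seven decimal digits of accuracy can be lost when solving a linear system with this matrix.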