Problem Definition
Imagine trying to assemble a puzzle whose pieces don't fit perfectly. That's the situation we face in the linear fitting problem. We have a matrix $A \in \mathbb{R}^{m \times n}$ and a vector $b \in \mathbb{R}^m$, but the system of equations $Ax = b$ may not have an exact solution.
In cases like this, we search for the best solution we can obtain by minimizing the error function

$$f(x) = \|Ax - b\|_2^2.$$

This function measures how far the product $Ax$ is from our desired target $b$.
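As a concrete illustration, here is a minimal sketch in Python (the matrix and vector are made-up example data) that evaluates this error function for an overdetermined system with no exact solution:

```python
import numpy as np

# An overdetermined system: 3 equations, 2 unknowns (example data).
# The equations x1 = 1, x2 = 1, x1 + x2 = 0 are mutually inconsistent.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

x = np.array([0.5, 0.5])      # some candidate solution

residual = A @ x - b          # component-wise errors Ax - b
error = np.sum(residual**2)   # f(x) = ||Ax - b||_2^2
print(error)                  # 1.5 for this candidate
```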
Optimization Problem
Now let's understand what we actually want to achieve. The minimization problem

$$\min_{x \in \mathbb{R}^n} \|Ax - b\|_2^2$$

is the core of the linear fitting problem, better known as the linear least squares problem. Imagine looking for the best position from which to throw a ball so that it lands as close as possible to a target you cannot hit exactly.
Why do we use this approach? Because in many real situations, the data we have contains noise or disturbances that make finding an exact solution impossible.
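As a small sketch of that situation (synthetic noisy data, solved with NumPy's built-in least squares routine), consider fitting a line to measurements that no line passes through exactly:

```python
import numpy as np

rng = np.random.default_rng(42)

# Noisy samples of the line y = 2t + 1; the noise makes Ax = y unsolvable.
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * t + 1.0 + 0.1 * rng.normal(size=t.size)

# Columns of A are [t, 1], so A @ [slope, intercept] approximates y.
A = np.column_stack([t, np.ones_like(t)])

# lstsq minimizes ||Ax - y||_2 and returns the least squares solution.
x, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)
print("slope, intercept:", x)   # close to (2, 1), but not exact
```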
Formula Expansion
Now let's break down this formula to see what actually happens inside it:

$$\|Ax - b\|_2^2 = \sum_{i=1}^{m} \left( \sum_{j=1}^{n} a_{ij} x_j - b_i \right)^2$$

This expanded form shows that we're summing the squares of the individual error components, much like computing the total squared distance between several prediction points and their actual targets. By squaring each error, we penalize large errors far more heavily than small ones.
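A quick numerical check (a sketch with arbitrary example data) confirms that the squared norm and the explicit double sum agree:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))
b = rng.normal(size=4)
x = rng.normal(size=3)

# Left side: squared Euclidean norm of the residual vector.
lhs = np.linalg.norm(A @ x - b) ** 2

# Right side: the expanded double sum over all components.
rhs = sum((sum(A[i, j] * x[j] for j in range(3)) - b[i]) ** 2
          for i in range(4))

print(np.isclose(lhs, rhs))   # True
```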
Alternative Norms
It turns out there are several different ways to measure error, depending on the characteristics of the problem we're dealing with.
The $\ell_1$-norm sums the absolute values of the errors:

$$\|Ax - b\|_1 = \sum_{i=1}^{m} |(Ax - b)_i|$$

This approach is like measuring distance in a city with grid-shaped streets: you can only move horizontally and vertically, never diagonally. This norm is more robust to extreme outliers in the data.
The $\ell_\infty$-norm focuses on the largest single error:

$$\|Ax - b\|_\infty = \max_{1 \le i \le m} |(Ax - b)_i|$$

Imagine leveling a wobbly table: this method concentrates on the leg that is most out of line, ensuring that no single error becomes too extreme.
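To see how differently these norms weigh the same errors, here is a small sketch (with a made-up residual vector containing one outlier):

```python
import numpy as np

# A residual vector with mostly small errors and one large outlier.
r = np.array([0.1, -0.2, 0.1, 5.0])

print(np.sum(np.abs(r)))    # l1-norm: 5.4, the outlier enters linearly
print(np.sum(r**2))         # squared l2-norm: 25.06, the outlier dominates
print(np.max(np.abs(r)))    # l_inf-norm: 5.0, only the outlier matters
```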
The right choice of norm depends on the type of error we most want to avoid and the characteristics of the data we have.
Geometric Interpretation
This geometric visualization helps us understand what's actually happening. The flat plane in the diagram represents the column space of the matrix $A$ (denoted $\operatorname{Im}(A)$), which is the set of all possible products $Ax$.
The vector $b$ is the target we want to reach, but it may not lie within the column space of $A$. The optimal solution $x^*$ produces $Ax^*$, the point in that column space closest to $b$.
The vector $r = b - Ax^*$ is the error (residual) vector connecting the best approximation with the original target. Like the shadow an object casts on the floor, $Ax^*$ is the closest "shadow" of $b$ in the available space.
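This picture can be verified numerically: at the least squares solution, the residual is orthogonal to every column of $A$, i.e. $A^\top r = 0$. A minimal sketch with example data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(10, 3))
b = rng.normal(size=10)

# Least squares solution x*; A @ x* is the projection of b onto Im(A).
x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
r = b - A @ x_star            # residual vector

# The residual is orthogonal to the column space: A^T r = 0.
print(np.allclose(A.T @ r, 0.0, atol=1e-10))   # True
```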
Solution Methods
Each type of norm requires a different solution approach. Problems with the $\ell_1$-norm and the $\ell_\infty$-norm can be solved using linear optimization techniques: we transform the problem into a form that linear programming algorithms can handle.
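As an illustration of that transformation, here is a sketch of the $\ell_\infty$ case as a linear program (using SciPy's linprog on made-up data): introduce a bound $t$ and minimize it subject to $-t \le (Ax - b)_i \le t$ for every component.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m, n = 20, 3
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

# Variables are [x, t]; minimize t subject to
#   A x - t*1 <= b   and   -A x - t*1 <= -b,
# which together say |(Ax - b)_i| <= t for all i.
c = np.zeros(n + 1)
c[-1] = 1.0
ones = np.ones((m, 1))
A_ub = np.vstack([np.hstack([A, -ones]),
                  np.hstack([-A, -ones])])
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * n + [(0, None)]   # x free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_inf = res.x[:n]                 # minimizer of ||Ax - b||_inf
print("smallest maximum error:", res.x[-1])
```

The $\ell_1$ case works the same way, except that one bound variable is introduced per component of the residual.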
Meanwhile, the least squares problem with the Euclidean norm has a special advantage: its solution is characterized by the normal equations $A^\top A x = A^\top b$, a plain linear system. Moreover, when the errors in the data follow a normal (bell-shaped) distribution, the solution we obtain is the best estimate in a statistical sense, namely the maximum likelihood estimator.
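Under those assumptions, the solution can be computed directly from the normal equations. A minimal sketch with example data (np.linalg.solve is used here for clarity; QR- or SVD-based solvers are numerically safer in practice):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.normal(size=(30, 4))
b = rng.normal(size=30)

# Normal equations: A^T A x = A^T b.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# The same solution from NumPy's dedicated least squares routine.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_ne, x_ls))   # True
```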