# Nakafa Framework: LLM

URL: /en/subject/university/bachelor/ai-ds/linear-methods/linear-equilibrium-problem
Source: https://raw.githubusercontent.com/nakafaai/nakafa.com/refs/heads/main/packages/contents/subject/university/bachelor/ai-ds/linear-methods/linear-equilibrium-problem/en.mdx

Output docs content for large language models.

---

export const metadata = {
  title: "Linear Equilibrium Problem",
  description:
    "Solve linear equilibrium problems: least squares optimization for over-determined systems with L1, L2, L∞ norms and geometric projection methods.",
  authors: [{ name: "Nabil Akbarazzima Fatih" }],
  date: "07/15/2025",
  subject: "Linear Methods of AI",
};

import { getColor } from "@repo/design-system/lib/color";
import { LineEquation } from "@repo/design-system/components/contents/line-equation";

## Problem Definition

Imagine you're trying to complete a puzzle with imperfect pieces. That's the situation we face in the linear equilibrium problem. We have a matrix $A \in \mathbb{R}^{m \times n}$ and a vector $b \in \mathbb{R}^m$, but the equation system

$$Ax = b$$

may not have an exact solution. In cases like this, we search for the best solution we can obtain by minimizing the error function

$$\|Ax - b\|_2^2$$

This function measures how far the product $Ax$ is from our desired target $b$.

## Optimization Problem

Now let's understand what we actually want to achieve. The minimization problem

$$\min_{x \in \mathbb{R}^n} \|Ax - b\|_2^2$$

is the core of the **linear equilibrium problem**, often called the *linear least squares problem*. Imagine looking for the best position to throw a ball so it lands as close as possible to the target, even though you can't hit it exactly.

Why do we use this approach? Because in many real situations, the data we have contains noise or disturbances that make an exact solution impossible. A numerical sketch appears at the end of this section.

## Formula Expansion

Let's break the formula down to see what actually happens inside it:

$$\|Ax - b\|_2^2 = \sum_{i=1}^{m} \left( \sum_{j=1}^{n} a_{ij} x_j - b_i \right)^2$$

This expanded form shows that we're summing the squares of the individual error components, similar to calculating the total squared distance between several prediction points and their actual targets. By squaring each error, we give large errors a much bigger penalty than small ones.

## Alternative Norms

There are several different ways to measure the error, depending on the characteristics of the problem we're dealing with.

The **$\ell_1$-norm** sums the absolute values of the errors:

$$\|Ax - b\|_1 = \sum_{i=1}^{m} \left| \sum_{j=1}^{n} a_{ij} x_j - b_i \right|$$

This approach is like measuring distance in a city with grid-shaped streets: you can only move horizontally and vertically, never diagonally. It is more resistant to extreme outliers in the data.

The **$\ell_\infty$-norm** focuses on the largest error:

$$\|Ax - b\|_\infty = \max_{i=1,\dots,m} \left| \sum_{j=1}^{n} a_{ij} x_j - b_i \right|$$

Imagine adjusting the height of an uneven table: this method focuses on the highest or lowest leg, ensuring no single part is too extreme.

The right choice of norm depends on the kind of error we most want to avoid and on the characteristics of the data; the sketches below compare the three norms numerically.
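As a first concrete illustration, here is a minimal sketch that solves a small over-determined system with NumPy's `np.linalg.lstsq`, which minimizes the squared error function directly. The matrix and vector are made-up example data, not values from the text above.

```python
import numpy as np

# Hypothetical over-determined system: 4 equations, 2 unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([2.1, 2.9, 4.2, 4.8])

# np.linalg.lstsq returns the minimizer of ||Ax - b||_2^2.
x_hat, res_sq, rank, sv = np.linalg.lstsq(A, b, rcond=None)

print("x_hat =", x_hat)                  # best-fit coefficients
print("||A x_hat - b||_2^2 =", res_sq)   # sum of squared residuals
```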
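And to see how the choice of norm changes what counts as a "large" error, here is a small comparison of the three norms on a hypothetical residual vector containing one outlier:

```python
import numpy as np

# Hypothetical residual vector r = Ax - b with one outlier (3.0).
r = np.array([0.5, -0.1, 3.0, -0.2])

print("l1  :", np.linalg.norm(r, 1))       # sum of absolute errors: 3.8
print("l2  :", np.linalg.norm(r, 2))       # Euclidean norm: ~3.05
print("linf:", np.linalg.norm(r, np.inf))  # largest single error: 3.0
```

The outlier determines the $\ell_\infty$ value entirely, is amplified by squaring in the $\ell_2$ case, and contributes only proportionally to the $\ell_1$ sum, which is why the $\ell_1$-norm is the more outlier-resistant choice.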
## Geometric Interpretation

<LineEquation
  description="Geometric illustration showing the relationship between vector b, its projection A·x̂ onto the column space, and the error vector A·x̂ - b."
  data={[
    {
      points: [
        { x: 0, y: 0, z: 0 },
        { x: 4, y: 2, z: 1 },
        { x: 6, y: 3, z: 1.5 },
      ],
      color: getColor("SKY"),
      cone: { position: "end" },
      showPoints: false,
      labels: [{ text: "A·x̂", at: 2, offset: [0.5, 0.5, 0] }],
    },
    {
      points: [
        { x: 0, y: 0, z: 0 },
        { x: 2.5, y: 2, z: 1.5 },
        { x: 5, y: 4, z: 3 },
      ],
      color: getColor("AMBER"),
      cone: { position: "end" },
      showPoints: false,
      labels: [{ text: "b", at: 2, offset: [0.5, 0, 0.5] }],
    },
    {
      points: [
        { x: 5, y: 4, z: 3 },
        { x: 5.5, y: 3.5, z: 2.25 },
        { x: 6, y: 3, z: 1.5 },
      ],
      color: getColor("PURPLE"),
      cone: { position: "end" },
      showPoints: false,
      labels: [{ text: "A·x̂ - b", at: 1, offset: [0, -0.5, 0] }],
    },
  ]}
/>

This geometric visualization helps us understand what's actually happening. The flat plane in the diagram represents the column space of matrix $A$ (called the *image* of $A$), which contains all possible results of the multiplication $Ax$. The vector $b$ is the target we want to reach, but it may not lie within the column space of $A$.

The optimal solution $\hat{x}$ produces $A\hat{x}$, the closest point to $b$ in that column space. The vector $A\hat{x} - b$ is the error vector connecting the best projection with the original target. Like the shadow of an object falling on the floor, $A\hat{x}$ is the closest "shadow" of $b$ in the available space. A computational sketch of this projection follows at the end of the page.

## Solution Methods

Each type of norm requires a different solution approach. Problems with the $\ell_1$-norm and the $\ell_\infty$-norm can be solved using linear optimization techniques: we transform the problem into a form that linear programming algorithms can handle (see the sketch below).

Meanwhile, the least squares problem with the Euclidean norm has a special advantage. When the errors in the data follow a normal (bell-shaped) distribution, the solution we obtain is the best estimate in a statistical sense: the maximum likelihood estimator.
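To connect the geometry with computation: a standard way to obtain the projection $A\hat{x}$, assuming $A$ has full column rank, is to solve the normal equations $A^T A \hat{x} = A^T b$ (not derived above, but standard for least squares). The following sketch, with made-up data, also verifies the geometric claim that the error vector is orthogonal to the column space of $A$:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 0.0])

# Normal equations: A^T A x_hat = A^T b (valid for full column rank).
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

p = A @ x_hat   # projection of b onto the column space of A
r = p - b       # error vector A x_hat - b

# The error vector is orthogonal to every column of A
# (up to floating-point rounding).
print(A.T @ r)  # ~ [0, 0]
```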
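For the $\ell_1$ case, one common linear-programming reformulation, sketched here with hypothetical data, introduces auxiliary variables $t_i \geq |(Ax - b)_i|$ and minimizes $\sum_i t_i$ using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([2.1, 2.9, 4.2, 10.0])  # last entry is an outlier

m, n = A.shape

# Minimize sum(t) subject to  A x - t <= b  and  -A x - t <= -b,
# which encodes |A x - b| <= t componentwise. Variables are [x, t].
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[A, -np.eye(m)],
                 [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * m)
print("l1 solution:", res.x[:n])
```

The $\ell_\infty$ case is analogous: a single auxiliary variable $t$ bounds every component of $|Ax - b|$, and the objective is simply $t$.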