Chapter 28: Matrix Operations: Linear Equations, Inversion, and Least Squares

When A is nonsingular, the system Ax = b has the closed-form solution x = A⁻¹b, but computing the inverse directly is numerically unstable. The chapter therefore centers on LUP decomposition: factoring A into a permutation matrix P, a unit lower-triangular matrix L, and an upper-triangular matrix U such that PA = LU. Solving Ax = b then reduces to two stable substitution steps, forward substitution on Ly = Pb followed by back substitution on Ux = y, each taking Θ(n²) time after the Θ(n³) decomposition.

LU decomposition without pivoting is valid only when no zero pivot is encountered. LUP decomposition improves numerical robustness by always selecting the entry of largest absolute value in the current column as the pivot; its recursive formulation is built on Schur complements, and both L and U are stored in place within A.

When an explicit inverse is required, each column of A⁻¹ is obtained by solving AXᵢ = eᵢ with LUP-SOLVE, so the full inversion across n columns takes Θ(n³) time.

The chapter also establishes a theoretical equivalence between matrix multiplication and inversion: Theorems 28.1 and 28.2 show that each can be computed in asymptotically the same time as the other using Schur complement–based divide-and-conquer. It then introduces symmetric positive-definite matrices, for which every LU pivot is positive and whose Schur complements inherit the same property.

Finally, least-squares approximation handles overdetermined systems by minimizing the squared error through the normal equations AᵀAc = Aᵀy, where the pseudoinverse A⁺ = (AᵀA)⁻¹Aᵀ yields the best-fit coefficients, with applications in polynomial fitting, data modeling, and curve fitting.
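The decompose-then-substitute pipeline can be sketched in pure Python. This is an illustrative sketch, not the book's pseudocode: the names `lup_decompose` and `lup_solve` are assumptions, though the in-place storage of L and U, the largest-absolute-value pivot rule, and the Schur complement update follow the description above.

```python
def lup_decompose(A):
    """Factor A in place so that PA = LU; returns pi, where row i of PA
    is row pi[i] of the original A. L (unit diagonal implied) occupies
    the strictly lower triangle of A, U the upper triangle."""
    n = len(A)
    pi = list(range(n))
    for k in range(n):
        # Partial pivoting: pick the row with the largest |A[i][k]|.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            raise ValueError("singular matrix")
        pi[k], pi[p] = pi[p], pi[k]
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                    # multiplier: entry of L
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]      # Schur complement update

    return pi

def lup_solve(A, pi, b):
    """Solve Ax = b given the in-place factors and permutation pi."""
    n = len(A)
    # Forward substitution on Ly = Pb (L has a unit diagonal).
    y = [0.0] * n
    for i in range(n):
        y[i] = b[pi[i]] - sum(A[i][j] * y[j] for j in range(i))
    # Back substitution on Ux = y.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x
```

Each of the two substitution passes touches each matrix entry at most once, matching the Θ(n²) cost quoted above; the triple loop in the decomposition gives the Θ(n³) term.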
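Column-by-column inversion can be illustrated by augmenting A with the identity matrix, whose columns are exactly the unit vectors eᵢ, and reducing. Note this sketch uses Gauss-Jordan elimination as a compact stand-in for the chapter's LUP-SOLVE-per-column approach (the total work is still Θ(n³)); the function name `invert` is hypothetical.

```python
def invert(A):
    """Invert A by solving A x_i = e_i for every unit vector e_i at once,
    via Gauss-Jordan reduction of the augmented matrix [A | I]."""
    n = len(A)
    # Augment with the identity; its i-th column is e_i.
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        # Partial pivoting for numerical robustness, as in LUP.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if M[p][k] == 0:
            raise ValueError("singular matrix")
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]            # scale pivot row to 1
        for i in range(n):
            if i != k and M[i][k] != 0:
                f = M[i][k]
                M[i] = [v - f * w for v, w in zip(M[i], M[k])]
    # The right half now holds the solutions x_1, ..., x_n: the inverse.
    return [row[n:] for row in M]
```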
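The normal-equations route to polynomial fitting can be sketched as follows. The names `least_squares_fit` and `gauss_solve` are hypothetical; forming AᵀA explicitly matches the summary's formulation, though in practice it is less numerically robust than orthogonal-factorization methods.

```python
def gauss_solve(M, b):
    """Tiny Gaussian-elimination helper (with partial pivoting) for the
    small normal-equations system; illustrative, not from the text."""
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]   # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

def least_squares_fit(xs, ys, degree):
    """Fit c[0] + c[1]*x + ... + c[degree]*x**degree to the data points
    by solving the normal equations (A^T A) c = A^T y."""
    n = degree + 1
    m = len(xs)
    # Vandermonde-style design matrix: A[i][j] = xs[i] ** j.
    A = [[x ** j for j in range(n)] for x in xs]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    Aty = [sum(A[k][i] * ys[k] for k in range(m)) for i in range(n)]
    return gauss_solve(AtA, Aty)
```

For data lying exactly on a line, such as (0, 1), (1, 3), (2, 5), a degree-1 fit recovers the intercept 1 and slope 2, since the minimum squared error is zero there.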