Is the Product of Two Invertible Matrices Invertible?
In linear algebra, the concept of invertibility (or nonsingularity) is central to understanding how matrices behave under multiplication. A square matrix (A) is called invertible if there exists another matrix (A^{-1}) such that
[ AA^{-1}=A^{-1}A=I, ]
where (I) is the identity matrix of the same size. The question “is the product of two invertible matrices invertible?” appears frequently in textbooks and exams because it touches on fundamental properties of matrix groups and determinants. The short answer is yes: the product of two invertible matrices is itself invertible, and its inverse can be expressed neatly in terms of the inverses of the factors. Below we explore why this holds, provide a rigorous proof, discuss related consequences, and answer common questions that arise when studying this topic.
Why Invertibility Matters
Before diving into the proof, it helps to recall what invertibility guarantees:
- Existence of a two‑sided inverse – the matrix can be “undone” by multiplication from either side.
- Non‑zero determinant – (\det(A)\neq0) for an invertible matrix (A).
- Full rank – the matrix has rank equal to its dimension, meaning its columns (and rows) are linearly independent.
- Bijective linear transformation – the associated map (x\mapsto Ax) is one‑to‑one and onto.
These equivalent characterizations make it possible to approach the product question from several angles: determinant properties, rank arguments, or direct construction of an inverse.
Proof Using Determinants
One of the quickest ways to show that the product of two invertible matrices is invertible relies on the multiplicative property of the determinant:
[ \det(AB)=\det(A)\det(B). ]
- If (A) and (B) are invertible, then (\det(A)\neq0) and (\det(B)\neq0).
- This means (\det(AB)=\det(A)\det(B)\neq0).
- A matrix with a non‑zero determinant is invertible.
Thus, (AB) must be invertible. Beyond that, we can write its inverse explicitly:
[ (AB)^{-1}=B^{-1}A^{-1}. ]
Notice the reversal of order, a crucial detail that stems from the non‑commutative nature of matrix multiplication.
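The identity ((AB)^{-1}=B^{-1}A^{-1}), and the fact that the order matters, are easy to confirm numerically. A minimal sketch using NumPy, with two arbitrarily chosen invertible 3×3 matrices (the specific entries are illustrative only):

```python
import numpy as np

# Two invertible 3x3 matrices (det(A) = 7, det(B) = -5, both nonzero).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0],
              [3.0, 0.0, 1.0]])

# Inverse of the product vs. product of the inverses in reversed order.
inv_AB = np.linalg.inv(A @ B)
B_inv_A_inv = np.linalg.inv(B) @ np.linalg.inv(A)

# The two agree up to floating-point round-off.
assert np.allclose(inv_AB, B_inv_A_inv)

# The "wrong" order A^{-1}B^{-1} equals (BA)^{-1}, which differs from
# (AB)^{-1} whenever A and B do not commute (as here).
wrong_order = np.linalg.inv(A) @ np.linalg.inv(B)
order_matters = not np.allclose(inv_AB, wrong_order)
assert order_matters
```

Forgetting to reverse the factors is one of the most common mistakes when manipulating matrix inverses, which is why the last check is worth keeping in mind.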
Direct Construction of the Inverse
Another approach avoids determinants altogether and constructs the inverse directly. Suppose (A) and (B) are invertible (n\times n) matrices with known inverses (A^{-1}) and (B^{-1}). Consider the matrix (C = B^{-1}A^{-1}). We test whether (C) serves as the inverse of (AB):
[ \begin{aligned} (AB)(B^{-1}A^{-1}) &= A\bigl(BB^{-1}\bigr)A^{-1} \\ &= AIA^{-1} \\ &= AA^{-1} \\ &= I. \end{aligned} ]
Similarly,
[ \begin{aligned} (B^{-1}A^{-1})(AB) &= B^{-1}\bigl(A^{-1}A\bigr)B \\ &= B^{-1}IB \\ &= B^{-1}B \\ &= I. \end{aligned} ]
Since both products give the identity matrix, (C) is indeed the two‑sided inverse of (AB). Therefore, (AB) is invertible and ((AB)^{-1}=B^{-1}A^{-1}).
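The same two-sided verification can be run numerically. A small sketch with NumPy (assumed available), using two simple 2×2 invertible matrices chosen for illustration:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.array([[3.0, 0.0],
              [1.0, 4.0]])

C = np.linalg.inv(B) @ np.linalg.inv(A)   # candidate inverse B^{-1} A^{-1}
I = np.eye(2)

# Both one-sided products give the identity, so C is a two-sided inverse.
left_check = np.allclose((A @ B) @ C, I)
right_check = np.allclose(C @ (A @ B), I)
assert left_check and right_check
```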
Implications and Related Results
Understanding that the product of invertible matrices stays invertible leads to several useful corollaries:
1. The Set of Invertible Matrices Forms a Group
The collection of all (n\times n) invertible matrices over a field (commonly (\mathbb{R}) or (\mathbb{C})) is denoted (GL(n,\mathbb{F})) (the general linear group). The proof above shows:
- Closure – if (A,B\in GL(n,\mathbb{F})) then (AB\in GL(n,\mathbb{F})).
- Associativity – inherited from matrix multiplication.
- Identity – the identity matrix (I) is its own inverse.
- Inverses – each element has an inverse by definition.
Thus, (GL(n,\mathbb{F})) satisfies the group axioms.
2. Determinant of a Product
Because (\det(AB)=\det(A)\det(B)), the determinant of a product is the product of determinants. This property is frequently used to compute determinants of complicated matrices by breaking them into simpler invertible factors (e.g., LU decomposition).
3. Preservation of Rank
If (A) and (B) are invertible, they each have full rank (n). Multiplying by an invertible matrix does not change rank, so (\operatorname{rank}(AB)=n). In this sense, invertible matrices act as rank‑preserving transformations.
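Rank preservation holds even when the other factor is not invertible, which makes it easy to demonstrate. A minimal NumPy sketch (matrices chosen for illustration: (M) has rank 2 by construction, (P) is invertible):

```python
import numpy as np

# A 3x3 matrix of rank 2 (third row = first row + second row).
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 3.0]])
assert np.linalg.matrix_rank(M) == 2

# An invertible matrix (det(P) = -5, nonzero).
P = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0],
              [3.0, 0.0, 1.0]])
assert abs(np.linalg.det(P)) > 1e-12

# Multiplying by an invertible matrix, on either side, preserves rank.
assert np.linalg.matrix_rank(P @ M) == 2
assert np.linalg.matrix_rank(M @ P) == 2
```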
4. Solving Linear Systems
When solving (Ax=b) with an invertible (A), we can multiply both sides by (A^{-1}) to obtain (x=A^{-1}b). If we instead have a factorization (A=LU) where (L) and (U) are invertible (lower and upper triangular), the same logic applies:
[ x = U^{-1}L^{-1}b. ]
This underlies many numerical algorithms.
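In practice, numerical libraries solve (Ax=b) via exactly this factor-and-triangular-solve route rather than forming (A^{-1}) explicitly; NumPy's `np.linalg.solve` uses an LU factorization internally. A small sketch (matrix and right-hand side chosen for illustration):

```python
import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])   # det(A) = -6, so A is invertible
b = np.array([10.0, 12.0])

# Conceptually x = A^{-1} b; np.linalg.solve instead factors A (LU under
# the hood) and performs two triangular solves, which is cheaper and more
# numerically stable than forming the inverse.
x_inv = np.linalg.inv(A) @ b
x_solve = np.linalg.solve(A, b)

assert np.allclose(x_inv, x_solve)
assert np.allclose(A @ x_solve, b)
assert np.allclose(x_solve, [1.0, 2.0])   # 4x+3y=10, 6x+3y=12 gives x=1, y=2
```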
Examples
Example 1: 2×2 Matrices
Let
[ A=\begin{pmatrix}1&2\\0&1\end{pmatrix},\qquad B=\begin{pmatrix}3&0\\1&4\end{pmatrix}. ]
The determinants are (\det(A)=1) and (\det(B)=12), so both matrices are invertible. Their product is
[ AB=\begin{pmatrix}1&2\\0&1\end{pmatrix} \begin{pmatrix}3&0\\1&4\end{pmatrix} =\begin{pmatrix}5&8\\1&4\end{pmatrix}. ]
[ \det(AB)=5\cdot4-8\cdot1=20-8=12\neq0, ]
confirming invertibility. The inverse can be computed as [ (AB)^{-1}=B^{-1}A^{-1} =\begin{pmatrix}\tfrac13&0\\-\tfrac1{12}&\tfrac14\end{pmatrix} \begin{pmatrix}1&-2\\0&1\end{pmatrix} =\begin{pmatrix}\tfrac13&-\tfrac23\\-\tfrac1{12}&\tfrac5{12}\end{pmatrix}. ]
Multiplying (AB) by this matrix yields the identity, as expected.
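These hand computations are easy to double-check with NumPy; the sketch below reproduces the determinant and the inverse of Example 1:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
B = np.array([[3.0, 0.0],
              [1.0, 4.0]])
AB = A @ B

# det(AB) = det(A) * det(B) = 1 * 12 = 12.
assert np.isclose(np.linalg.det(AB), 12.0)

# (AB)^{-1} = B^{-1} A^{-1}, computed two independent ways.
inv_AB = np.linalg.inv(B) @ np.linalg.inv(A)
assert np.allclose(inv_AB, np.linalg.inv(AB))
assert np.allclose(inv_AB, [[1/3, -2/3], [-1/12, 5/12]])

# Both one-sided products with AB give the identity.
assert np.allclose(AB @ inv_AB, np.eye(2))
assert np.allclose(inv_AB @ AB, np.eye(2))
```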
Example 2: Using Elementary Matrices
Elementary matrices (those representing a single row operation) are always invertible. If we perform a sequence of row operations represented by elementary matrices (E_1,E_2,\dots,E_k) on a matrix (A), the resulting matrix is
[ E_k\cdots E_2E_1A. ]
Since each (E_i) is invertible, the product (E_k\cdots E_2E_1) is invertible as well, with inverse ((E_k\cdots E_2E_1)^{-1}=E_1^{-1}E_2^{-1}\cdots E_k^{-1}). This means applying a sequence of elementary row operations to an invertible matrix yields another invertible matrix, and the whole process can be reversed by applying the inverse operations in the opposite order. This observation is the foundation of the LU decomposition: any invertible matrix (A) can be written (possibly after a row permutation) as (A=LU), where (L) is a product of lower‑triangular elementary matrices (hence unit lower triangular) and (U) is upper triangular. Because both (L) and (U) are invertible, the determinant of (A) equals the product of the diagonal entries of (U), and solving (Ax=b) reduces to two triangular solves, which is computationally cheap.
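Elementary matrices are straightforward to build and compose in code. A sketch with NumPy, using three arbitrarily chosen row operations (a swap, a scaling, and a row addition):

```python
import numpy as np

# Elementary matrices for row operations on 3x3 matrices:
# E1 swaps rows 0 and 1; E2 scales row 2 by 5; E3 adds 2*row0 to row 1.
E1 = np.eye(3); E1[[0, 1]] = E1[[1, 0]]
E2 = np.eye(3); E2[2, 2] = 5.0
E3 = np.eye(3); E3[1, 0] = 2.0

E = E3 @ E2 @ E1                        # composite of the three operations
assert abs(np.linalg.det(E)) > 1e-12    # the product is invertible

# Its inverse undoes the operations in the opposite order:
# (E3 E2 E1)^{-1} = E1^{-1} E2^{-1} E3^{-1}.
E_inv = np.linalg.inv(E1) @ np.linalg.inv(E2) @ np.linalg.inv(E3)
assert np.allclose(E @ E_inv, np.eye(3))
assert np.allclose(E_inv @ E, np.eye(3))
```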
Summary
The set of invertible (n\times n) matrices over a field (\mathbb{F}) forms the general linear group (GL(n,\mathbb{F})), satisfying closure, associativity, identity, and inverses. The multiplicative property of the determinant provides a quick test for invertibility and underpins techniques such as LU decomposition. Invertible matrices preserve rank, enabling them to act as rank‑preserving transformations, and they allow linear systems to be solved efficiently by isolating the unknown via multiplication with the inverse (or its factors). Elementary matrices, each representing a single reversible row operation, are themselves invertible; any product of them remains invertible, and any invertible matrix can be expressed as such a product. Together, these facts illustrate why invertible matrices are central both to theoretical linear algebra and to practical numerical algorithms.
Beyond the algebraic properties highlighted earlier, invertible matrices play an important role in understanding the geometry of linear transformations. When a matrix (A) is invertible, it maps the unit (n)-cube onto a parallelotope whose volume equals (|\det A|); the non‑zero determinant guarantees that no dimension is collapsed, preserving the full dimensionality of the space. This volume‑scaling interpretation underlies the change‑of‑variables formula in multivariable calculus, where the Jacobian determinant must be non‑zero for the transformation to be locally invertible.
In numerical linear algebra, the condition number (\kappa(A)=|A||A^{-1}|) quantifies how sensitively the solution of (Ax=b) reacts to perturbations in (b) or in the entries of (A). An invertible matrix with a modest condition number yields stable computations, whereas a large (\kappa(A)) signals near‑singularity, prompting the use of preconditioning or iterative refinement techniques. The LU decomposition discussed previously not only provides a fast solver but also furnishes an easy way to estimate (\kappa(A)) from the factors (L) and (U).
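The difference between a well-conditioned and a nearly singular matrix is easy to see numerically; `np.linalg.cond` computes (\kappa(A)) (in the 2-norm by default). A sketch with two illustrative matrices:

```python
import numpy as np

well = np.array([[2.0, 0.0],
                 [0.0, 1.0]])           # diagonal, clearly well-conditioned
ill = np.array([[1.0, 1.0],
                [1.0, 1.0001]])         # nearly singular: rows almost parallel

# kappa(A) = ||A|| * ||A^{-1}|| in the 2-norm.
kappa_well = np.linalg.cond(well)       # equals 2 for this diagonal matrix
kappa_ill = np.linalg.cond(ill)         # large: solutions are sensitive

assert kappa_well < 10
assert kappa_ill > 1e3
```

Both matrices are invertible, but only the first yields reliably stable solves; the second amplifies small perturbations in (b) by a factor on the order of (\kappa).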
Invertibility also appears prominently in applied disciplines. In computer graphics, homogeneous transformation matrices that are invertible enable seamless switching between world, camera, and object coordinates, allowing both forward rendering and inverse operations such as ray tracing. In control theory, the controllability and observability Gramians are invertible precisely when the corresponding linear system is controllable or observable, a condition that guarantees the existence of state‑feedback controllers and observers. Cryptographic schemes based on linear transformations over finite fields—such as the Hill cipher—rely on the invertibility of the key matrix to confirm that encryption can be undone by legitimate parties while thwarting attackers who lack the matrix’s inverse.
From a theoretical standpoint, the collection of all invertible matrices forms a Lie group, (GL(n,\mathbb{F})), whose smooth structure enables the study of continuous symmetries via its Lie algebra (\mathfrak{gl}(n,\mathbb{F})). Exponential maps from this algebra to the group link solutions of linear differential equations (\dot X = AX) to one‑parameter subgroups (e^{tA}), which are invertible for all real (t) with inverse (e^{-tA}).
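The invertibility of (e^{tA}) can be checked directly using (\det(e^{tA})=e^{t\,\operatorname{tr}A}). The sketch below implements the matrix exponential as a truncated Taylor series, which is adequate only for small (\|tA\|); production code would use a library routine such as `scipy.linalg.expm` instead.

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via a truncated Taylor series (small ||M|| only)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k          # M^k / k!
        result = result + term
    return result

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # trace 0; generates plane rotations

for t in (0.5, 1.0, 2.0):
    Et = expm_series(t * A)
    # det(e^{tA}) = e^{t * trace(A)} = e^0 = 1, so e^{tA} is always invertible.
    assert np.isclose(np.linalg.det(Et), 1.0)
    # One-parameter subgroup property: the inverse of e^{tA} is e^{-tA}.
    assert np.allclose(Et @ expm_series(-t * A), np.eye(2))
```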
Together, these perspectives illustrate that invertibility is far more than a mere algebraic curiosity; it is a linchpin that connects geometric intuition, numerical stability, practical algorithms, and deep structural theory across mathematics and its applications. Embracing the properties and constructions associated with invertible matrices equips both theorists and practitioners with a powerful toolkit for analyzing and solving a wide array of problems.
Conclusion
Invertible matrices are central to linear algebra because they preserve dimensionality, admit a rich factorization theory, and underpin both exact and approximate solution methods. Their determinant offers a swift invertibility test, while elementary‑matrix decompositions reveal the underlying sequence of reversible row operations. Beyond the algebra, invertibility informs geometric volume changes, governs numerical conditioning, and appears in diverse fields ranging from graphics and control to cryptography and differential equations. Mastery of these concepts enables a deeper comprehension of linear systems and equips one with efficient, reliable tools for both theoretical exploration and practical computation.