In the previous sections, we have learned how matrices relate to linear transformations and how different types of linear transformations affect their inputs. So here is an interesting question to ask: can linear transformations be undone or reversed? That is, for a given matrix $A$, can we find some matrix $A^{-1}$ that brings vectors transformed by $A$ back to their original place?
Yes, if two conditions are satisfied:

1. $A$ is a square ($n$-by-$n$) matrix.
2. The determinant of $A$ is nonzero, i.e. $\det(A) \neq 0$.
Under these conditions, it is possible to find a matrix $A^{-1}$ such that:

$$A A^{-1} = I_n$$

which is equivalent to writing

$$A^{-1} A = I_n$$

where $I_n$ represents the $n$-by-$n$ identity matrix. The matrix $A^{-1}$ is called the inverse of $A$.
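As a quick sanity check, we can verify these identities numerically. The sketch below uses NumPy and a small invertible matrix chosen for illustration; the specific matrix `A` and vector `v` are arbitrary examples, not taken from the text above.

```python
import numpy as np

# An example 2x2 matrix; det(A) = 2*2 - 1*1 = 3, which is nonzero,
# so A satisfies both invertibility conditions.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

A_inv = np.linalg.inv(A)

# Both products recover the 2x2 identity matrix (up to floating-point error).
print(np.allclose(A @ A_inv, np.eye(2)))  # True
print(np.allclose(A_inv @ A, np.eye(2)))  # True

# Applying A and then A^{-1} brings a vector back to its original place.
v = np.array([3.0, -1.0])
print(np.allclose(A_inv @ (A @ v), v))    # True
```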
If a square matrix $A$ is not invertible (i.e. $\det(A) = 0$), it is called singular or degenerate. An invertible matrix is called non-singular or nondegenerate.
But why can we not invert a transformation matrix whose determinant is 0? We have seen that a zero determinant corresponds to a rank-deficient matrix, which maps the input space onto a lower-dimensional space. Once in that lower-dimensional space, all information about the collapsed dimensions is lost, so the original input can no longer be reconstructed.
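We can also watch this failure happen concretely. The matrix `S` below is a made-up rank-deficient example (its second row is twice the first, so it squashes the plane onto a line): two different inputs collide onto the same output, which is exactly why no inverse can exist.

```python
import numpy as np

# A rank-deficient example: row 2 is twice row 1, so every input
# lands on the line spanned by [1, 2] and one dimension is lost.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.det(S))  # ~0.0: the determinant vanishes

# Two different inputs map to the same output, so the transformation
# cannot be undone: given the output, the input is ambiguous.
u = np.array([2.0, 0.0])
w = np.array([0.0, 1.0])
print(np.allclose(S @ u, S @ w))  # True: both map to [2, 4]

# NumPy accordingly refuses to invert S.
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError as err:
    print("inversion fails:", err)
```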
Let's now go over some properties of inverse matrices and add some intuition whenever we can: