Why does numpy.linalg.solve() offer more precise matrix inversions than numpy.linalg.inv()?

np.linalg.solve(A, b) does not compute the inverse of A. Instead it calls one of the gesv LAPACK routines, which first factorizes A using LU decomposition, then solves for x using forward and backward substitution (see here).
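For illustration, here is a minimal sketch of that two-step process using SciPy's LU helpers (`scipy.linalg.lu_factor` and `lu_solve`, which wrap the same family of LAPACK routines). This is just to make the factorize-then-substitute steps visible, not a claim about NumPy's exact call path:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

lu, piv = lu_factor(A)        # LU factorization of A with partial pivoting
x = lu_solve((lu, piv), b)    # forward + backward substitution for this b

print(np.allclose(A @ x, b))  # True: x solves A·x = b without forming the inverse
```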

np.linalg.inv uses the same method to compute the inverse of A, by solving for A⁻¹ in A·A⁻¹ = I where I is the identity*. The factorization step is exactly the same as above, but it takes more floating point operations to solve for A⁻¹ (an n×n matrix) than for x (an n-long vector). Additionally, if you then wanted to obtain x via the identity x = A⁻¹·b, the extra matrix multiplication would incur yet more floating point operations, and therefore slower performance and more numerical error.
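Conceptually (again, a sketch rather than NumPy's literal code path), computing the inverse amounts to solving A·X = I for all n columns of X, which is why it costs more than a single right-hand side:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

# Solving against the identity gives the inverse, one column per right-hand side.
X = np.linalg.solve(A, np.eye(5))
print(np.allclose(X, np.linalg.inv(A)))  # True, up to rounding
```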

There’s no need for the intermediate step of computing A⁻¹: it is faster and more accurate to obtain x directly.
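A quick way to see the accuracy difference is to compare residuals on an ill-conditioned system. The exact numbers depend on the matrix, but solve() typically leaves a smaller residual than going through the explicit inverse:

```python
import numpy as np

n = 25
A = np.vander(np.linspace(1, 2, n))   # Vandermonde matrices are badly conditioned
x_true = np.ones(n)
b = A @ x_true

x_solve = np.linalg.solve(A, b)       # factorize + substitute, no explicit inverse
x_inv = np.linalg.inv(A) @ b          # explicit inverse, then a matrix-vector product

print(np.linalg.norm(A @ x_solve - b))  # residual via solve, typically smaller
print(np.linalg.norm(A @ x_inv - b))    # residual via inv, typically larger
```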


* The relevant bit of source for inv is here. Unfortunately it’s a bit tricky to understand since it’s templated C. The important thing to note is that an identity matrix is being passed to the LAPACK solver as parameter B.
