Linalg (linear algebra)
NumPy provides linear algebra (matrix calculus) in its sub-library numpy.linalg. For good performance it is important that NumPy is linked against the LAPACK and BLAS libraries (Intel's implementation is MKL). These libraries are hard to beat: NumPy linked against them will be much faster than a hand-written program in any other language that does not use them (feel free to take up the challenge).
The SciPy library also has a linalg sub-library which is very similar. If you can't find what you're looking for in NumPy's version, it may be worth checking whether SciPy has it.
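For example, NumPy's linalg has no standalone LU factorization, while scipy.linalg does. A minimal sketch (assuming SciPy is installed):

```python
import numpy as np
import scipy.linalg as slin

A = np.array([[1., 2.], [3., 4.]])
# LU factorization with partial pivoting: A = P @ L @ U
P, L, U = slin.lu(A)
print(np.allclose(P @ L @ U, A))  # the factors reproduce A
```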
import numpy as np
import numpy.linalg as lin
np.set_printoptions(precision=3)
np.show_config()
blas_mkl_info:
  NOT AVAILABLE
blis_info:
  NOT AVAILABLE
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
    runtime_library_dirs = ['/usr/local/lib']
blas_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
    runtime_library_dirs = ['/usr/local/lib']
lapack_mkl_info:
  NOT AVAILABLE
openblas_lapack_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
    runtime_library_dirs = ['/usr/local/lib']
lapack_opt_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/usr/local/lib']
    language = c
    define_macros = [('HAVE_CBLAS', None)]
    runtime_library_dirs = ['/usr/local/lib']
Supported SIMD extensions in this NumPy install:
    baseline = SSE,SSE2,SSE3
    found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2
    not found = AVX512F,AVX512CD,AVX512_KNL,AVX512_KNM,AVX512_SKX,AVX512_CLX,AVX512_CNL,AVX512_ICL
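The speed claim above can be checked with a quick benchmark: compare a naive triple-loop matrix product with the BLAS-backed @ operator (timings are machine-dependent; this is only a sketch):

```python
import time
import numpy as np

def naive_matmul(A, B):
    """Textbook triple-loop matrix product, no BLAS."""
    n, m = A.shape[0], B.shape[1]
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for k in range(A.shape[1]):
                C[i, j] += A[i, k] * B[k, j]
    return C

rng = np.random.default_rng(0)
A = rng.random((100, 100))
B = rng.random((100, 100))

t0 = time.perf_counter(); C1 = naive_matmul(A, B); t1 = time.perf_counter()
t2 = time.perf_counter(); C2 = A @ B; t3 = time.perf_counter()
print(f"loops: {t1 - t0:.3f}s   BLAS @: {t3 - t2:.5f}s")
```

On a typical machine the BLAS version is several orders of magnitude faster.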
Basic operations
We have seen that the operators +, -, * and / are applied term by term. This is the correct matrix behaviour for + and -, but not for * and /.
- the dot product uses the dot method or the @ operator
- the "division", i.e. solving a matrix system, uses the solve function
- computing the inverse of a matrix uses the inv function
A = np.array([[1,2],[3,4]])
print("element-wise multiplication: \n", A * A)  # all operators are applied term by term
print("matrix product: \n", A.dot(A))            # A @ A can be used too
element-wise multiplication: 
 [[ 1  4]
 [ 9 16]]
matrix product: 
 [[ 7 10]
 [15 22]]
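For 2-D arrays, A.dot(A), A @ A and np.matmul(A, A) all compute the same matrix product, while * is np.multiply; a quick check:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
# three equivalent spellings of the matrix product
assert np.array_equal(A @ A, A.dot(A))
assert np.array_equal(A @ A, np.matmul(A, A))
# * is element-wise multiplication, i.e. np.multiply
assert np.array_equal(A * A, np.multiply(A, A))
print(A @ A)  # [[ 7 10]
              #  [15 22]]
```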
Solving a matrix system:
b = np.array([17,37])
x = lin.solve(A, b) # much better than computing the inverse matrix (faster and more stable)
print("x = ", x)
print("verification : ", A @ x)
x =  [3. 7.]
verification :  [17. 37.]
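solve also accepts several right-hand sides at once: stack them as the columns of a matrix (a sketch; the column layout is the key point):

```python
import numpy as np
import numpy.linalg as lin

A = np.array([[1., 2.], [3., 4.]])
B = np.column_stack([[17., 37.], [1., 1.]])  # each column is one right-hand side
X = lin.solve(A, B)                          # solves both systems in one call
print(X[:, 0])                               # solution for the first column: [3. 7.]
```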
If you really want to, you can compute the inverse matrix (but it is slower):
print("A⁻¹ :\n", lin.inv(A)) # the inverse matrix
print("x = ", lin.inv(A).dot(b))
A⁻¹ :
 [[-2.   1. ]
 [ 1.5 -0.5]]
x =  [3. 7.]
Finally, the transpose is simply the T attribute:
A.T
array([[1, 3],
       [2, 4]])
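Note that T does nothing on a 1-D array; to get a genuine column vector you must add an axis, e.g. with np.newaxis:

```python
import numpy as np

v = np.array([1, 2, 3])
assert v.T.shape == (3,)       # .T is a no-op on 1-D arrays
col = v[:, np.newaxis]         # explicit column vector, shape (3, 1)
assert col.shape == (3, 1)
print(col.T @ col)             # 1x1 matrix: [[14]]
```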
Extraction
We can extract
- the diagonal of a matrix with the diag function (note: the result is a vector if the argument is a matrix, and a matrix if the argument is a vector)
- the lower triangular part with the tril function (l for lower) and the upper part with triu (u for upper); the diagonal of the triangle can be shifted, see the doc
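Applying diag twice illustrates this vector/matrix duality: a matrix gives its diagonal as a vector, and that vector gives back a diagonal matrix:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
d = np.diag(A)   # matrix -> vector of diagonal entries
print(d)         # [1 4]
D = np.diag(d)   # vector -> diagonal matrix
print(D)         # [[1 0]
                 #  [0 4]]
```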
We can also construct a triangular matrix of 1s and 0s with tri, and therefore also any triangular matrix:
np.tri(3,3) * np.random.randint(1, 10, size=(3,3))
array([[4., 0., 0.],
       [8., 9., 0.],
       [5., 5., 4.]])
And here is how to extract a tridiagonal matrix:
A = np.random.randint(1, 10, size=(5,5))
np.tril(np.triu(A, k=-1), k=1)
array([[3, 7, 0, 0, 0],
       [6, 1, 8, 0, 0],
       [0, 3, 5, 9, 0],
       [0, 0, 4, 4, 7],
       [0, 0, 0, 6, 3]])
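The k argument of diag also works in the construction direction, so a tridiagonal matrix can be built from its three diagonals (here the classic 1-D Laplacian, as an illustration):

```python
import numpy as np

main = np.array([2., 2., 2., 2.])   # main diagonal
off = np.array([-1., -1., -1.])     # sub- and super-diagonals
T = np.diag(main) + np.diag(off, k=1) + np.diag(off, k=-1)
print(T)
```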
Matrix operations
The library provides functions for
- decompositions (LU, Cholesky, QR, SVD...)
- eigenvalues and eigenvectors
- norm, determinant, conditioning and rank
Q,R = lin.qr(A) # Q is orthogonal, R is upper triangular
print(Q, '\n\n', R)
print("\nVerification :\n", Q.dot(R))
[[-0.349  0.585  0.132  0.359  0.624]
 [-0.697 -0.557  0.221 -0.298  0.257]
 [-0.465 -0.061 -0.052  0.668 -0.576]
 [-0.349  0.585  0.132 -0.564 -0.447]
 [-0.232  0.036 -0.956 -0.134  0.115]] 

 [[ -8.602  -7.44  -10.927 -13.252 -10.23 ]
 [  0.      7.527  -0.039   3.907   7.559]
 [  0.      0.      1.61   -3.513  -0.301]
 [  0.      0.      0.      4.333   0.656]
 [  0.      0.      0.      0.      1.302]]

Verification :
 [[3. 7. 4. 8. 9.]
 [6. 1. 8. 5. 3.]
 [4. 3. 5. 9. 4.]
 [3. 7. 4. 4. 7.]
 [2. 2. 1. 6. 3.]]
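Among the other decompositions, here is a Cholesky sketch on a symmetric positive-definite matrix (built as M.T @ M plus a diagonal shift to guarantee positivity; the construction is just for illustration):

```python
import numpy as np
import numpy.linalg as lin

rng = np.random.default_rng(0)
M = rng.random((4, 4))
S = M.T @ M + 4 * np.eye(4)   # symmetric positive definite by construction
L = lin.cholesky(S)           # lower triangular factor: S = L @ L.T
print(L)
```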
Eigenvalues and eigenvectors
lin.eig(A) # returns the eigenvalues and eigenvectors of A
(array([23.099+0.j   , -3.442+1.822j, -3.442-1.822j, -1.408+0.j   ,
         1.192+0.j   ]),
 array([[-0.547+0.j   , -0.191+0.047j, -0.191-0.047j, -0.51 +0.j   ,  0.487+0.j   ],
        [-0.457+0.j   , -0.035-0.466j, -0.035+0.466j, -0.462+0.j   , -0.423+0.j   ],
        [-0.476+0.j   ,  0.441+0.128j,  0.441-0.128j,  0.41 +0.j   , -0.519+0.j   ],
        [-0.447+0.j   , -0.536+0.j   , -0.536-0.j   , -0.167+0.j   , -0.101+0.j   ],
        [-0.257+0.j   ,  0.435+0.233j,  0.435-0.233j,  0.575+0.j   ,  0.552+0.j   ]]))
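Each pair satisfies A v = λ v, and the eigenvectors are the columns (not the rows) of the second array; this can be checked column by column on a small example:

```python
import numpy as np
import numpy.linalg as lin

A = np.array([[2., 0.], [0., 3.]])
vals, vecs = lin.eig(A)
for i in range(len(vals)):
    v = vecs[:, i]                        # i-th eigenvector is the i-th COLUMN
    assert np.allclose(A @ v, vals[i] * v)
print(vals)
```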
Determinant, norm etc.
print("Determinant :", lin.det(A))
print("Frobenius norm :", lin.norm(A), "\nNorm 1 :", lin.norm(A, 1) ) # lin.norm with no ord is the Frobenius norm, not norm 2
print("Condition number :", lin.cond(A,2)) # I choose norm 2 to compute the condition number
Determinant : -587.9999999999999
Frobenius norm : 26.343879744638983 
Norm 1 : 32.0
Condition number : 35.929347867977604
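The condition number in norm 2 is norm(A) times norm(inv(A)) in that same norm, and this identity can be checked directly on any invertible matrix:

```python
import numpy as np
import numpy.linalg as lin

A = np.array([[1., 2.], [3., 4.]])
kappa = lin.cond(A, 2)                             # condition number in norm 2
check = lin.norm(A, 2) * lin.norm(lin.inv(A), 2)   # ||A|| * ||A^-1||
print(kappa)
```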