Vector Programming¶

The aim of the exercises is

  • to have a program that gives the correct answer
  • that is as fast as possible (and for this we make heavy use of NumPy)

Generally, if you have nested for loops, it is a bad sign.

In [1]:
import numpy as np

np.set_printoptions(precision=10, linewidth=150, suppress=True)
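To make the "no nested for loops" advice concrete, here is a minimal sketch comparing a Python-loop computation with its vectorized NumPy equivalent (the example and variable names are my own, not from the exercises):

```python
import numpy as np

# Example: row sums of a matrix, loop version vs. vectorized version
A = np.arange(12.0).reshape(3, 4)

# Loop version: explicit Python loops, slow on large arrays
row_sums_loop = np.array([sum(A[i, j] for j in range(A.shape[1]))
                          for i in range(A.shape[0])])

# Vectorized version: a single NumPy call, no explicit Python loop
row_sums_vec = A.sum(axis=1)

print(np.allclose(row_sums_loop, row_sums_vec))  # True
```

The two versions give the same result, but the vectorized one runs in compiled code instead of the Python interpreter.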

Partial Gaussian pivot method¶

The problem statement is in the course notes. We will check it on the case that generates rounding errors.

In [ ]:
 
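The course's exact statement is not reproduced here; as a reference point, here is one common way to implement elimination with partial pivoting (the function name `gauss_solve` is my choice). The final test case with a tiny pivot is the classic example where pivoting avoids a large rounding error:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    Sketch: at each step we swap in the row whose entry in the pivot
    column has the largest absolute value, which limits rounding errors.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # partial pivot: row with the largest |entry| in column k, rows k..n-1
        p = k + np.argmax(np.abs(A[k:, k]))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # eliminate below the pivot with one vectorized rank-1 update
        m = A[k+1:, k] / A[k, k]
        A[k+1:, k:] -= np.outer(m, A[k, k:])
        b[k+1:] -= m * b[k]
    # back substitution, one row at a time but vectorized within each row
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

# The rounding-error case: a near-zero pivot in the first row
A = np.array([[1e-20, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])
print(gauss_solve(A, b))  # close to [1., 1.]; without pivoting x[0] is wrong
```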

Choleski factorization¶

The aim is to decompose A into $A = B\, B^T$ with B a lower triangular matrix. This is only possible if the matrix A is symmetric and positive definite (in fact, this is one way of verifying that a matrix is positive definite).

Write Choleski's algorithm, which takes A and returns B (to guess the algorithm, try to express the coefficients of B in terms of the coefficients of A on a 4x4 matrix A).

In [ ]:
 
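A possible sketch of the algorithm, assuming the column-by-column formulation (the function name `choleski` is my choice); note there is a single loop over columns, and everything inside it is a vector or matrix operation:

```python
import numpy as np

def choleski(A):
    """Return a lower-triangular B such that A = B @ B.T.

    Sketch: one loop over columns j; the inner computations are
    vectorized, with no nested for loops.
    """
    n = A.shape[0]
    B = np.zeros_like(A, dtype=float)
    for j in range(n):
        # diagonal entry: subtract the squares already placed in row j
        B[j, j] = np.sqrt(A[j, j] - B[j, :j] @ B[j, :j])
        # whole column below the diagonal in one matrix-vector product
        B[j+1:, j] = (A[j+1:, j] - B[j+1:, :j] @ B[j, :j]) / B[j, j]
    return B
```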

Reminder: no nested for loops, but vector and matrix operations (on sub-matrices) instead.

Create a symmetric positive definite matrix and verify that your program works.

In [ ]:
 
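One standard way to build a test matrix: $M M^T$ is always symmetric positive semidefinite, and adding a multiple of the identity makes it safely positive definite. The sketch below checks NumPy's built-in factorization; to test your own function, replace `np.linalg.cholesky` with it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.random((n, n))
# M @ M.T is symmetric positive semidefinite; adding n * I on the
# diagonal makes it safely positive definite
A = M @ M.T + n * np.eye(n)

# check with NumPy's built-in lower-triangular Choleski factorization;
# swap in your own function here to test it
B = np.linalg.cholesky(A)
print(np.allclose(B @ B.T, A))    # True: the factorization reconstructs A
print(np.allclose(B, np.tril(B))) # True: B is lower triangular
```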

Improve Jacobi¶

When we write an iteration of the Jacobi method with all the coefficients, we see that if we compute the new value of x element by element, then by the time we update x[1] we already know the new x[0]. Likewise, when we update x[2] we already know x[0] and x[1], and so on.

The idea of the improved version of Jacobi is to use the new value of x[0], and not the old one as in the algorithm from the course. By using the updated values we can expect to converge faster (this variant is known as the Gauss-Seidel method).

In this case, each iteration requires a row-by-row computation, and therefore a for loop inside the for loop over the iterations.

In [ ]:
 
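A minimal sketch of this improved iteration (the function name `gauss_seidel` and the fixed iteration count are my choices): each row i reuses the components x[0..i-1] already updated during the current sweep, which is exactly the inner for loop described above.

```python
import numpy as np

def gauss_seidel(A, b, n_iter=100):
    """Improved Jacobi: update x in place so each row uses the
    components already recomputed during the current sweep."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_iter):        # outer loop over iterations
        for i in range(n):         # inner loop: row-by-row update
            # A[i, :] @ x mixes new x[0..i-1] and old x[i+1..]; add back
            # the diagonal term so only the off-diagonal sum remains
            x[i] = (b[i] - A[i, :] @ x + A[i, i] * x[i]) / A[i, i]
    return x
```

On a diagonally dominant matrix this converges to the solution of A x = b; on a general matrix convergence is not guaranteed.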

Test stop¶

We will add an error argument to the function, indicating the desired precision of the result. Unfortunately, to know the accuracy of our computation we would need to know the solution. We could also compute ${\bf A \, x}^t - {\bf b}$, but this is not the gap between ${\bf x}^t$ and the solution either (besides, it is an $n^2$ computation, hence costly).

Instead, we watch for the convergence slowing down, and therefore stop when $||{\bf x}^{t+1} - {\bf x}^t|| < \texttt{error}$.

In [ ]:
 
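A sketch of the same iteration with this stopping test added (function and parameter names `gauss_seidel`, `error`, `max_iter` are my choices; `max_iter` is a safety cap in case the method does not converge):

```python
import numpy as np

def gauss_seidel(A, b, error=1e-10, max_iter=1000):
    """Improved Jacobi with a stopping test: stop as soon as the step
    ||x_{t+1} - x_t|| drops below `error`."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            x[i] = (b[i] - A[i, :] @ x + A[i, i] * x[i]) / A[i, i]
        # convergence has slowed down enough: the step is below error
        if np.linalg.norm(x - x_old) < error:
            break
    return x
```

Note that this tests the size of the step between two iterates, not the true distance to the solution, exactly as discussed above.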

Bonuses¶

Rewrite the method without the inner for loop, but still using the new values of x. To do this, we will split the matrix A into two triangular matrices.

In [2]:
# the NumPy functions triu and tril will be useful
In [ ]:
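A possible sketch of this matrix form (the function name `gauss_seidel_mat` is my choice): split A = L + U with L = tril(A) (lower triangle, diagonal included) and U = triu(A, 1) (strict upper triangle), then iterate by solving $L\,{\bf x}^{t+1} = {\bf b} - U\,{\bf x}^t$. Here the triangular system is solved with `np.linalg.solve` for simplicity; a dedicated triangular solver would be cheaper.

```python
import numpy as np

def gauss_seidel_mat(A, b, error=1e-10, max_iter=1000):
    """Matrix form of the improved Jacobi method.

    Split A = L + U, with L the lower triangle of A (diagonal included)
    and U the strict upper triangle, then iterate
    L x_{t+1} = b - U x_t until the step is smaller than `error`.
    """
    L = np.tril(A)       # lower triangle, diagonal included
    U = np.triu(A, 1)    # strict upper triangle
    x = np.zeros(len(b))
    for _ in range(max_iter):
        x_new = np.linalg.solve(L, b - U @ x)  # triangular system
        if np.linalg.norm(x_new - x) < error:
            return x_new
        x = x_new
    return x
```

Solving the triangular system with the new values on the left-hand side is exactly what the row-by-row loop did, so this version produces the same iterates without the inner for loop.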