Exercise ma21¶

In [ ]:
import numpy as np
import numpy.linalg as lin
import matplotlib.pyplot as plt

%matplotlib inline

np.set_printoptions(precision=3, linewidth=150, suppress=True)

We will increase the radius of convergence of the improved Jacobi method built during the TD (tutorial), namely the Gauss-Seidel method.

We will study its convergence in different cases.

Gauss-Seidel¶

When we compute the next ${\bf x}$ with Jacobi we do not take advantage of the fact that the updates are done in order, so that the new value of $x_{n-1}$ is already known when we compute $x_n$. With Gauss-Seidel the most recently computed value is always used, which accelerates convergence.

To summarize Gauss-Seidel from a matrix point of view, we have:

  • D = the diagonal matrix extracted from A: D = np.diag(np.diag(A))
  • L = the strictly lower triangular part of A: L = np.tril(A, -1)
  • U = the strictly upper triangular part of A: U = np.triu(A, 1)

and an iteration is given by the following formula:

$$ (D + L)\, {\bf x}^{k+1} = -U\, {\bf x}^k + {\bf b} $$

from which

$$ {\bf x}^{k+1} = D^{-1} \left( -L\, {\bf x}^{k+1} - U\, {\bf x}^k + {\bf b} \right) $$

i.e.

$$
\begin{bmatrix} x_{1}^{k+1} \\ x_{2}^{k+1} \\ \vdots \\ x_{n}^{k+1} \end{bmatrix}
=
\begin{bmatrix} 1/a_{11} & 0 & \ldots & 0 \\ 0 & 1/a_{22} & \ldots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \ldots & 1/a_{nn} \end{bmatrix}
\left( -
\begin{bmatrix} 0 & 0 & \ldots & 0 \\ a_{21} & 0 & \ldots & 0 \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \ldots & 0 \end{bmatrix}
\begin{bmatrix} x_{1}^{k+1} \\ x_{2}^{k+1} \\ \vdots \\ x_{n}^{k+1} \end{bmatrix}
-
\begin{bmatrix} 0 & a_{12} & \ldots & a_{1n} \\ 0 & 0 & \ldots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \ldots & 0 \end{bmatrix}
\begin{bmatrix} x_{1}^{k} \\ x_{2}^{k} \\ \vdots \\ x_{n}^{k} \end{bmatrix}
+
\begin{bmatrix} b_{1} \\ b_{2} \\ \vdots \\ b_{n} \end{bmatrix}
\right)
$$

Note that we can put $L\, {\bf x}^{k+1}$ on the right-hand side of the equal sign if we solve the system line by line starting from the top, since in this case the components of ${\bf x}^{k+1}$ that are used are already known. This is what we did during the last lab.
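To make this line-by-line reading concrete, here is a minimal sketch (assuming the imports of the first cell) of a single unrelaxed Gauss-Seidel sweep that overwrites x in place, so that each row uses the newest available values:

In [ ]:
def gauss_seidel_sweep(A, b, x):
    # one in-place sweep: x must be a float array
    n = len(b)
    for i in range(n):
        # x[:i] already holds x^{k+1}, x[i+1:] still holds x^k
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x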

Gauss-Seidel overrelaxation¶

As we did with Jacobi, we introduce inertia with $w$:

$$ {\bf x}^{k+1} = w \, D^{-1} \, ( -L\, {\bf x}^{k+1} - U\; {\bf x}^k + {\bf b}) + (1-w) \; {\bf x}^k $$

Check that you arrive at the following matrix form:

$$ \left(\frac{D}{w} + L\right)\, {\bf x}^{k+1} = \left(\frac{1-w}{w} \, D - U\right)\; {\bf x}^k + {\bf b} $$

Written this way, we see that this method amounts to splitting the diagonal elements between the two sides of the equality. This can be interpreted as an advantage linked to a better distribution of the information contained in our matrix (this remains to be tested).
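For reference, one way to do the check: multiply the relaxed update by $D/w$ and move the ${\bf x}^{k+1}$ term to the left:

$$ \frac{D}{w}\, {\bf x}^{k+1} = -L\, {\bf x}^{k+1} - U\, {\bf x}^k + {\bf b} + \frac{1-w}{w}\, D\, {\bf x}^k \quad \Longrightarrow \quad \left(\frac{D}{w} + L\right) {\bf x}^{k+1} = \left(\frac{1-w}{w}\, D - U\right) {\bf x}^k + {\bf b} $$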

Let's program overrelaxed Gauss-Seidel¶

We will write two functions:

  • solve_triangular(L, b), which returns the solution of L x = b when L is lower triangular
  • gauss_seidel_r(A, b, x0, w, n), which performs n Gauss-Seidel iterations starting from x0, with w the relaxation coefficient given as an argument. This function returns the array of computed x values (thus a 2D array).

As always, be careful to limit the number of for loops and to do as many vector and matrix operations as possible.
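Here is a minimal sketch of one possible implementation (not necessarily the intended one); it reuses the D, L, U decomposition above and the relaxed matrix form, and keeps a single for loop in the forward substitution:

In [ ]:
def solve_triangular(L, b):
    """Solve L x = b by forward substitution (L lower triangular)."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # the components x[:i] are already known
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x


def gauss_seidel_r(A, b, x0, w, n):
    """Perform n relaxed Gauss-Seidel iterations starting from x0.

    Returns the iterates x0, x1, ..., xn stacked in a 2D array."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    M = D / w + L                  # left-hand side:  (D/w + L) x^{k+1}
    N = (1 - w) / w * D - U        # right-hand side: ((1-w)/w D - U) x^k + b
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n):
        xs.append(solve_triangular(M, N @ xs[-1] + b))
    return np.array(xs)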

In [ ]:
 
In [ ]:
 
In [ ]:
np.random.seed(123)
A = np.random.randint(10, size=(4,4))
b = A.sum(axis=1)   # b = A @ ones, so the exact solution is x = (1, 1, 1, 1)
x0 = np.random.random(4)

res = gauss_seidel_r(A, b, x0, w=0.2, n=100)
print(res[-1])
In [ ]:
def plot_convergences(values, result):
    # normalized absolute error: ||x^k - x*||^2 / ||x*||^2
    error = np.square(values - result).sum(axis=-1) / np.square(result).sum(axis=-1)
    # normalized relative error between successive iterates: ||x^{k+1} - x^k||^2 / ||x^k||^2
    error2 = np.square(np.diff(values, axis=0)).sum(axis=-1) / np.square(values[:-1]).sum(axis=-1)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4))
    ax1.plot(range(len(error)), error)
    ax1.set_title('Normalized absolute error')
    ax1.semilogy()
    ax2.plot(range(len(error2)), error2)
    ax2.set_title('Normalized relative error')
    ax2.semilogy()
    print("Iteration of the minimum:", np.argmin(error), np.argmin(error2))
In [ ]:
plot_convergences(res, np.ones(4))

Does the unrelaxed Gauss-Seidel method converge in this case?
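A quick way to check, a sketch reusing the system above with $w=1$, i.e. without relaxation:

In [ ]:
res_plain = gauss_seidel_r(A, b, x0, w=1, n=100)  # w=1 is plain Gauss-Seidel
plot_convergences(res_plain, np.ones(4))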

In [ ]:
 

The good case¶

Find a seed that generates a case which does not converge with basic Gauss-Seidel but which does converge with relaxation ($w=0.2$).
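One possible brute-force search, a sketch: the convergence test (comparing the last two iterates) and the seed range are assumptions, to be adapted:

In [ ]:
def seems_to_converge(A, b, x0, w, n=100, tol=1e-8):
    # hypothetical helper: converged if the iterates stayed finite
    # and the last update is small relative to the last iterate
    with np.errstate(all='ignore'):
        xs = gauss_seidel_r(A, b, x0, w, n)
    return (np.all(np.isfinite(xs)) and
            lin.norm(xs[-1] - xs[-2]) < tol * lin.norm(xs[-1]))

for seed in range(200):                    # arbitrary search range
    np.random.seed(seed)
    A = np.random.randint(10, size=(4, 4))
    b = A.sum(axis=1)
    x0 = np.random.random(4)
    if not seems_to_converge(A, b, x0, w=1) and seems_to_converge(A, b, x0, w=0.2):
        print("candidate seed:", seed)
        break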

In [ ]:
 

Plot the convergence curves for the selected case with and without relaxation.
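A sketch, assuming the seed found above is stored in seed_found (hypothetical name):

In [ ]:
np.random.seed(seed_found)                 # hypothetical: the seed found above
A = np.random.randint(10, size=(4, 4))
b = A.sum(axis=1)
x0 = np.random.random(4)

res_plain = gauss_seidel_r(A, b, x0, w=1, n=100)    # without relaxation
res_relax = gauss_seidel_r(A, b, x0, w=0.2, n=100)  # with relaxation
plot_convergences(res_plain, np.ones(4))
plot_convergences(res_relax, np.ones(4))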

In [ ]:
 
In [ ]:
 
In [ ]:
 

Study as a function of $w$¶

Still in our chosen case, determine the interval of values of $w$ that guarantees convergence for our matrix system A x = b, always with the same x0 and with a number of iterations to be determined.

Find the optimal value of $w$ to converge fastest for this case.

The requested precision for the interval and the optimal value is $10^{-2}$.
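A possible brute-force scan over $w$ with the requested step of $10^{-2}$, a sketch: the number of iterations (n=200) and the error threshold are assumptions to be tuned:

In [ ]:
ws = np.arange(0.01, 2.0, 0.01)            # step of 1e-2 as requested
errors = []
for w in ws:
    with np.errstate(all='ignore'):
        xs = gauss_seidel_r(A, b, x0, w, n=200)     # n to be adjusted
    e = np.square(xs[-1] - 1).sum()                 # exact solution is all ones
    errors.append(e if np.isfinite(e) else np.inf)
errors = np.array(errors)

good = ws[errors < 1e-6]                   # values of w that converged
if len(good) > 0:
    print("convergence interval: about [%.2f, %.2f]" % (good[0], good[-1]))
# proxy for the fastest w: smallest final error after a fixed number of iterations
print("fastest w: about %.2f" % ws[np.argmin(errors)])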

In [ ]:
 
In [ ]: