Iterative Methods
itMet: Iterative Methods
This module contains iterative solvers for linear systems.
Moreover, it contains a routine to generate the discretized Laplace operator in 1, 2 and 3 dimensions, which can be used to generate large test matrices.
Here we can use either stationary methods like
- Jacobi
- Gauß-Seidel
- SOR
or Krylov subspace methods like
- Steepest Descent
- CG
- GMRES
The stationary methods are based on splittings of the form
\[A = D + L + U,\]
where D corresponds to the diagonal of A, L to the strictly lower triangular part of A and U to the strictly upper triangular part of A.
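As a minimal illustration (a sketch with dense NumPy matrices, not the oppy implementation itself), the decomposition and one step of an induced stationary iteration could look like this:

import numpy as np

def splitting(A):
    # Decompose a dense matrix A into A = D + L + U with D the diagonal,
    # L the strictly lower and U the strictly upper triangular part.
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    U = np.triu(A, k=1)
    return D, L, U

def stationary_step(M, N, f, x):
    # One step of the stationary iteration x_{k+1} = M^{-1} (N x_k + f)
    # induced by the splitting A = M - N, e.g. Jacobi: M = D, N = -(L + U).
    return np.linalg.solve(M, N @ x + f)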
For a future release we are planning to add preconditioning to the Krylov methods. There, of course, you will be able to use the stationary methods as preconditioners.
Documentation is available in the docstrings and online here.
Methods
oppy.itMet.cg module
Conjugate gradient method.
This module contains the function cg() to solve a system
\[Ax = f\]
of linear equations using the conjugate gradient (CG) method.
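For orientation, a minimal sketch of the classical CG recursion for a symmetric positive definite matrix is shown below. This is only an illustration of the method, not the oppy implementation, which additionally supports operators and the Options/Results interface documented here.

import numpy as np

def cg_sketch(A, f, x0=None, tol=1e-7, max_iter=1000):
    # Minimal conjugate gradient recursion for a symmetric positive definite A
    # (dense or sparse, anything supporting the @ operator).
    x = np.zeros(f.shape) if x0 is None else x0.copy()
    r = f - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x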
oppy.itMet.cg.cg(A, f, options=None)
Solve the linear system \(Ax=f\) with the conjugate gradient method.
The input matrix A can be a 2-D array (numpy.ndarray), a scipy.sparse matrix or a linear operator, e.g. via a scipy.sparse.linalg.LinearOperator object (matrix-free is possible).
Parameters:
- A (numpy.ndarray, scipy.sparse matrix or operator, shape (n,n) or output of shape (n,)) – The square matrix A or operator for solving the linear system \(Ax = f\).
- f (numpy.ndarray, shape (n,)) – The vector representing the right-hand side of the equation.
- options (Options, optional) – Options for the cg solver, represented as an Options object. The default is None. Possible options are
  - u_0 (numpy.ndarray, shape (n,)) – Initial guess, represented as an ndarray with shape (n,). The default is None, in which case it is initialized with the zero vector.
  - tol_abs (float) – Absolute tolerance for the termination condition, initialized with \(10^{-7}\).
  - tol_rel (float) – Relative tolerance for the termination condition, initialized with \(10^{-7}\).
Note
The algorithm stops if the norm of the current residual res[k] satisfies the inequality
\[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]
where stopping_criteria is given by
\[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]
and res[0] is the norm of the first residual. If the value of stopping_criteria is bigger than \(0.1\), e.g. if the norm of the first residual is large and the relative tolerance is small, then this value is set to \(0.1\).
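In code, this stopping rule amounts to something like the following sketch (the variable names are illustrative, not necessarily those used inside oppy):

tol_abs, tol_rel = 1e-7, 1e-7                              # default tolerances
res = [2.5, 3.1e-2, 4.2e-8]                                # example residual norms, res[0] is the first one
stopping_criteria = min(tol_rel * res[0] + tol_abs, 0.1)   # capped at 0.1
converged = res[-1] < stopping_criteria                    # True for this example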
Returns: results – Instance of the class Results. Represents the results of the solver. Attributes of the object depend on the choice of the options.results parameter. With the default choice the object has
- x (numpy.ndarray, shape (n,)) – Solution vector of the linear system.
- iterations (int) – Number of iterations the algorithm needed.
- res (list) – List with the norms of the residuals.
Return type: Results
Warns:
- default warning – If the input A is neither a scipy.sparse matrix nor a linear operator (e.g. via scipy.sparse.linalg.LinearOperator), a warning is raised and in this case None is returned.
- default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.
Examples
Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().
>>> from oppy.tests.itMet import FDLaplacian
As solution method we choose the conjugate gradient method
>>> from oppy.itMet import cg
To construct a test we use numpy arrays (numpy.ndarray)
>>> import numpy as np
Set up the test case. The following computes the 2-dimensional discrete Laplacian
>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m, d)
Notice that in this case A is a scipy.sparse matrix. Choosing a random solution and computing the right-hand side
>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)
Now, we can solve our system
>>> cg_results = cg(A, f)
and compare the solution to the exact one
>>> print(np.linalg.norm(cg_results.x - x_sol))
0.00015858787538540118
Notice, depending on your system, this number could be a bit different but with default options the error should be in the order of \(10^{-4}\).
oppy.itMet.gauss_seidel module
Gauss-Seidel method.
This module contains the stationary gauss_seidel() solver to solve
a linear system
\[Ax = f\]
with the stationary iteration induced by the splitting
\[A = M - N\]
where \(M = \mathrm{diag}(A) + \mathrm{LowerDiag}(A)\), i.e. the lower triangular part of \(A\).
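A minimal dense sketch of one such Gauss-Seidel sweep (for illustration only, assuming a dense NumPy matrix; the oppy solver also handles scipy.sparse matrices) could look like this:

import numpy as np
from scipy.linalg import solve_triangular

def gauss_seidel_step(A, f, x):
    # One Gauss-Seidel sweep: with M the lower triangular part of A
    # (diagonal included) and N = M - A, solve M x_{k+1} = N x_k + f
    # by forward substitution.
    M = np.tril(A)
    N = M - A                       # N = -(strictly upper part of A)
    return solve_triangular(M, N @ x + f, lower=True)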
oppy.itMet.gauss_seidel.gauss_seidel(A, f, options=None)
Solve the linear system \(Ax = f\) with the Gauss-Seidel method.
The input matrix A can be a 2-D array (numpy.ndarray object) or a scipy.sparse matrix.
Parameters:
- A (numpy.ndarray or scipy.sparse matrix, shape (n,n)) – The square matrix A for solving the linear system \(Ax = f\).
- f (numpy.ndarray, shape (n,)) – The vector representing the right-hand side of the equation.
- options (Options, optional) – Options for the gauss_seidel solver, represented as an Options object. The default is None. Possible options are
  - u_0 (numpy.ndarray, shape (n,)) – Initial guess, represented as an ndarray with shape (n,). The default is None, in which case it is initialized with the zero vector.
  - tol_abs (float) – Absolute tolerance for the termination condition, initialized with \(10^{-7}\).
    Note: The absolute tolerance tol_abs is also used to check whether the diagonal elements of the matrix A are greater than this tolerance. If this is not the case, a warning is raised and None is returned (see the section Warns for more information).
  - tol_rel (float) – Relative tolerance for the termination condition, initialized with \(10^{-7}\).
Note
The algorithm stops if the norm of the current residual res[k] satisfies the inequality
\[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]
where stopping_criteria is given by
\[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]
and res[0] is the norm of the first residual. If the value of stopping_criteria is bigger than \(0.1\), e.g. if the norm of the first residual is large and the relative tolerance is small, then this value is set to \(0.1\).
Returns: results – Instance of the class Results. Represents the results of the solver. Attributes of the object depend on the choice of the options.results parameter. With the default choice the object has
- x (numpy.ndarray, shape (n,)) – Solution vector of the linear system.
- iterations (int) – Number of iterations the algorithm needed.
- res (list) – List with the norms of the residuals.
Return type: Results
Warns:
- default warning – If one of the diagonal entries of the matrix A is smaller in absolute value than the absolute tolerance tol_abs, None is returned and a warning is raised.
- default warning – If A is not a square matrix, a warning is raised and None is returned.
- default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.
Examples
Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().
>>> from oppy.tests.itMet import FDLaplacian
As solution method we choose the gauss_seidel() method
>>> from oppy.itMet import gauss_seidel
To construct a test we use numpy arrays (numpy.ndarray)
>>> import numpy as np
Set up the test case. The following computes the 2-dimensional discrete Laplacian
>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m, d)
Notice that in this case A is a scipy.sparse matrix. Choosing a random solution and computing the right-hand side
>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)
Now, we can solve our system
>>> gs_results = gauss_seidel(A, f)
and compare the solution to the exact one
>>> print(np.linalg.norm(gs_results.x - x_sol))
0.00667744566323901
Notice, depending on your system, this number could be a bit different but the error should be something around \(10^{-3}\) with default options.
oppy.itMet.gmres module
Generalized minimal residual method.
This module contains the function gmres() to solve a system
\[Ax = f\]
of linear equations using the generalized minimal residual (gmres) method.
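For orientation, a rough sketch of unrestarted GMRES built on the Arnoldi process is given below. It is only an illustration of the idea, not the oppy implementation, and omits details such as Givens rotations, restarts and the Options/Results interface.

import numpy as np

def gmres_sketch(A, f, tol=1e-7, max_iter=200):
    # Build an orthonormal Krylov basis Q and a Hessenberg matrix H with
    # A Q[:, :k+1] = Q[:, :k+2] H[:k+2, :k+1], then minimize the residual
    # ||beta * e1 - H y|| over the current Krylov subspace.
    n = f.shape[0]
    x0 = np.zeros(n)
    r0 = f - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    Q = np.zeros((n, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    Q[:, 0] = r0 / beta
    y = np.zeros(1)
    for k in range(max_iter):
        w = A @ Q[:, k]
        for j in range(k + 1):                    # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ w
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        if H[k + 1, k] > 1e-14:
            Q[:, k + 1] = w / H[k + 1, k]
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        if np.linalg.norm(e1 - H[:k + 2, :k + 1] @ y) < tol:
            break
    return x0 + Q[:, :y.size] @ y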
oppy.itMet.gmres.gmres(A, f, options=None)
Solve the linear system \(Ax=f\) with the gmres() method.
The input matrix A can be a 2-D array (numpy.ndarray object), a scipy.sparse matrix or a linear operator, e.g. via a scipy.sparse.linalg.LinearOperator object (matrix-free is possible).
Parameters:
- A (numpy.ndarray, scipy.sparse matrix or operator, shape (n,n) or output of shape (n,)) – The square matrix A or operator for solving the linear system \(Ax = f\).
- f (numpy.ndarray, shape (n,)) – The vector representing the right-hand side of the equation.
- options (Options, optional) – Options for the gmres solver, represented as an Options object. The default is None. Possible options are
  - u_0 (numpy.ndarray, shape (n,)) – Initial guess, represented as an ndarray with shape (n,). The default is None, in which case it is initialized with the zero vector.
  - tol_abs (float) – Absolute tolerance for the termination condition, initialized with \(10^{-7}\).
  - tol_rel (float) – Relative tolerance for the termination condition, initialized with \(10^{-7}\).
Note
The algorithm stops if the norm of the current residual res[k] satisfies the inequality
\[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]
where stopping_criteria is given by
\[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]
and res[0] is the norm of the first residual. If the value of stopping_criteria is bigger than \(0.1\), e.g. if the norm of the first residual is large and the relative tolerance is small, then this value is set to \(0.1\).
Returns: results – Instance of the class Results. Represents the results of the solver. Attributes of the object depend on the choice of the options.results parameter. With the default choice the object has
- x (numpy.ndarray, shape (n,)) – Solution vector of the linear system.
- iterations (int) – Number of iterations the algorithm needed.
- res (list) – List with the norms of the residuals.
Return type: Results
Warns:
- default warning – If the input A is neither a scipy.sparse matrix nor a linear operator (e.g. via scipy.sparse.linalg.LinearOperator), a warning is raised and in this case None is returned.
- default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.
Examples
Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().
>>> from oppy.tests.itMet import FDLaplacian
As solution method we choose the gmres() method
>>> from oppy.itMet import gmres
To construct a test we use numpy arrays (numpy.ndarray)
>>> import numpy as np
Set up the test case. The following computes the 2-dimensional discrete Laplacian
>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m, d)
Notice that in this case A is a scipy.sparse matrix. Choosing a random solution and computing the right-hand side
>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)
Now, we can solve our system
>>> gmres_results = gmres(A, f)
and compare the solution to the exact one
>>> print(np.linalg.norm(gmres_results.x - x_sol))
0.00155800676948457
Notice, depending on your system, this number could be a bit different but with default options the error should be in the order of \(10^{-3}\).
oppy.itMet.jacobi module
Jacobi method.
This module contains the stationary jacobi() solver to solve
a linear system
\[Ax = f\]
with the stationary iteration induced by the splitting
\[A = M - N\]
where \(M = \mathrm{diag}(A)\).
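A minimal dense sketch of one Jacobi step with this splitting (for illustration only; the oppy solver also handles scipy.sparse matrices) could look like this:

import numpy as np

def jacobi_step(A, f, x):
    # One Jacobi step: with M = diag(A) and N = M - A, compute
    # x_{k+1} = M^{-1} (N x_k + f), i.e. divide componentwise by the diagonal.
    d = np.diag(A)              # diagonal entries of A
    N = np.diag(d) - A
    return (N @ x + f) / d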
oppy.itMet.jacobi.jacobi(A, f, options=None)
Solve the linear system \(Ax = f\) with the jacobi() method.
The input matrix A can be a 2-D array (numpy.ndarray object) or a scipy.sparse matrix.
Parameters:
- A (numpy.ndarray or scipy.sparse matrix, shape (n,n)) – The square matrix A for solving the linear system \(Ax = f\).
- f (numpy.ndarray, shape (n,)) – The vector representing the right-hand side of the equation.
- options (Options, optional) – Options for the jacobi solver, represented as an Options object. The default is None. Possible options are
  - u_0 (numpy.ndarray, shape (n,)) – Initial guess, represented as an ndarray with shape (n,). The default is None, in which case it is initialized with the zero vector.
  - tol_abs (float) – Absolute tolerance for the termination condition, initialized with \(10^{-7}\).
    Note: The absolute tolerance tol_abs is also used to check whether the diagonal elements of the matrix A are greater than this tolerance. If this is not the case, a warning is raised and None is returned (see the section Warns for more information).
  - tol_rel (float) – Relative tolerance for the termination condition, initialized with \(10^{-7}\).
Note
The algorithm stops if the norm of the current residual res[k] satisfies the inequality
\[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]
where stopping_criteria is given by
\[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]
and res[0] is the norm of the first residual. If the value of stopping_criteria is bigger than \(0.1\), e.g. if the norm of the first residual is large and the relative tolerance is small, then this value is set to \(0.1\).
Returns: results – Instance of the class Results. Represents the results of the solver. Attributes of the object depend on the choice of the options.results parameter. With the default choice the object has
- x (numpy.ndarray, shape (n,)) – Solution vector of the linear system.
- iterations (int) – Number of iterations the algorithm needed.
- res (list) – List with the norms of the residuals.
Return type: Results
Warns:
- default warning – If one of the diagonal entries of the matrix A is smaller in absolute value than the absolute tolerance tol_abs, None is returned and a warning is raised.
- default warning – If A is not a square matrix, a warning is raised and None is returned.
- default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.
Examples
Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().
>>> from oppy.tests.itMet import FDLaplacian
As solution method we choose the jacobi() method
>>> from oppy.itMet import jacobi
To construct a test we use numpy arrays (numpy.ndarray)
>>> import numpy as np
Set up the test case. The following computes the 2-dimensional discrete Laplacian
>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m, d)
Notice that in this case A is a scipy.sparse matrix. Choosing a random solution and computing the right-hand side
>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)
Now, we can solve our system
>>> jacobi_results = jacobi(A, f)
and compare the solution to the exact one
>>> print(np.linalg.norm(jacobi_results.x - x_sol))
0.006659729108731856
Notice, depending on your system, this number could be a bit different but the error should be something around \(10^{-3}\).
oppy.itMet.sor module
Successive Over-relaxation (SOR) method.
This module contains the stationary sor() solver to solve
a linear system
\[Ax = f\]
with the stationary iteration induced by the splitting
\[A = M - N\]
where \(M = \frac{1}{\omega}\cdot \mathrm{diag}(A) + \mathrm{LowerDiag}(A)\).
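A minimal dense sketch of one SOR step with this splitting (for illustration only; the oppy solver also handles scipy.sparse matrices) could look like this:

import numpy as np
from scipy.linalg import solve_triangular

def sor_step(A, f, x, omega):
    # One SOR step: with M = (1/omega) * diag(A) + strictly lower part of A
    # and N = M - A, solve M x_{k+1} = N x_k + f by forward substitution.
    # For omega = 1 this reduces to a Gauss-Seidel sweep.
    M = np.diag(np.diag(A)) / omega + np.tril(A, k=-1)
    N = M - A
    return solve_triangular(M, N @ x + f, lower=True)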
oppy.itMet.sor.sor(A, f, options=None)
Solve the linear system \(Ax = f\) with the sor() method.
The input matrix A can be a 2-D array (numpy.ndarray object) or a scipy.sparse matrix.
Parameters:
- A (numpy.ndarray or scipy.sparse matrix, shape (n,n)) – The square matrix A for solving the linear system \(Ax = f\).
- f (numpy.ndarray, shape (n,)) – The vector representing the right-hand side of the equation.
- options (Options, optional) – Options for the sor solver, represented as an Options object. The default is None. Possible options are
  - u_0 (numpy.ndarray, shape (n,)) – Initial guess, represented as an ndarray with shape (n,). The default is None, in which case it is initialized with the zero vector.
  - tol_abs (float) – Absolute tolerance for the termination condition, initialized with \(10^{-7}\).
    Note: The absolute tolerance tol_abs is also used to check whether the diagonal elements of the matrix A are greater than this tolerance. If this is not the case, a warning is raised and None is returned (see the section Warns for more information).
  - tol_rel (float) – Relative tolerance for the termination condition, initialized with \(10^{-7}\).
Note
The algorithm stops if the norm of the current residual res[k] satisfies the inequality
\[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]
where stopping_criteria is given by
\[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]
and res[0] is the norm of the first residual. If the value of stopping_criteria is bigger than \(0.1\), e.g. if the norm of the first residual is large and the relative tolerance is small, then this value is set to \(0.1\).
Returns: results – Instance of the class Results. Represents the results of the solver. Attributes of the object depend on the choice of the options.results parameter. With the default choice the object has
- x (numpy.ndarray, shape (n,)) – Solution vector of the linear system.
- iterations (int) – Number of iterations the algorithm needed.
- res (list) – List with the norms of the residuals.
Return type: Results
Warns:
- default warning – If one of the diagonal entries of the matrix A is smaller in absolute value than the absolute tolerance tol_abs, None is returned and a warning is raised.
- default warning – If A is not a square matrix, a warning is raised and None is returned.
- default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.
Examples
Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().
>>> from oppy.tests.itMet import FDLaplacian
As solution method we choose the sor() method
>>> from oppy.itMet import sor
To construct a test we use numpy arrays (numpy.ndarray)
>>> import numpy as np
Set up the test case. The following computes the 2-dimensional discrete Laplacian
>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m, d)
Notice that in this case A is a scipy.sparse matrix. Choosing a random solution and computing the right-hand side
>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)
Now, we can solve our system. First we need a parameter \(\omega\). For this matrix one can show that
>>> omega = 2/(1+np.sin(np.pi/(m+1)))
is the best choice. We import the Options class to set the value for \(\omega\) manually.
>>> from oppy.options import Options
>>> sor_options = Options(omega=omega)
>>> sor_results = sor(A, f, sor_options)
and compare the solution to the exact one
>>> print(np.linalg.norm(sor_results.x - x_sol))
3.329043927513322e-05
Notice, depending on your system, this number could be a bit different but the error should be something around \(10^{-5}\).
oppy.itMet.steepest_descent module
Steepest descent method.
This module contains the function steepest_descent() to solve a system
\[Ax = f\]
of linear equations using the steepest_descent() method.
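For orientation, a minimal sketch of the steepest descent iteration for a symmetric positive definite matrix is shown below. It is only an illustration of the method, not the oppy implementation with its Options/Results interface.

import numpy as np

def steepest_descent_sketch(A, f, x0=None, tol=1e-7, max_iter=10000):
    # Steepest descent for symmetric positive definite A: move along the
    # residual r = f - A x with the exact line-search step
    # alpha = (r @ r) / (r @ (A @ r)).
    x = np.zeros(f.shape) if x0 is None else x0.copy()
    r = f - A @ x
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        alpha = (r @ r) / (r @ Ar)
        x = x + alpha * r
        r = r - alpha * Ar
    return x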
oppy.itMet.steepest_descent.steepest_descent(A, f, options=None)
Solve the linear system \(Ax = f\) with the steepest_descent() method.
The input matrix A can be a 2-D array (numpy.ndarray object) or a scipy.sparse matrix.
Parameters:
- A (numpy.ndarray or scipy.sparse matrix, shape (n,n)) – The square matrix A for solving the linear system \(Ax = f\).
- f (numpy.ndarray, shape (n,)) – The vector representing the right-hand side of the equation.
- options (Options, optional) – Options for the steepest_descent solver, represented as an Options object. The default is None. Possible options are
  - u_0 (numpy.ndarray, shape (n,)) – Initial guess, represented as an ndarray with shape (n,). The default is None, in which case it is initialized with the zero vector.
  - tol_abs (float) – Absolute tolerance for the termination condition, initialized with \(10^{-7}\).
  - tol_rel (float) – Relative tolerance for the termination condition, initialized with \(10^{-7}\).
Note
The algorithm stops if the norm of the current residual res[k] satisfies the inequality
\[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]
where stopping_criteria is given by
\[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]
and res[0] is the norm of the first residual. If the value of stopping_criteria is bigger than \(0.1\), e.g. if the norm of the first residual is large and the relative tolerance is small, then this value is set to \(0.1\).
Returns: results – Instance of the class Results. Represents the results of the solver. Attributes of the object depend on the choice of the options.results parameter. With the default choice the object has
- x (numpy.ndarray, shape (n,)) – Solution vector of the linear system.
- iterations (int) – Number of iterations the algorithm needed.
- res (list) – List with the norms of the residuals.
Return type: Results
Warns:
- default warning – If A is not a square matrix, a warning is raised and None is returned.
- default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.
Examples
Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().
>>> from oppy.tests.itMet import FDLaplacian
As solution method we choose the steepest_descent() method
>>> from oppy.itMet import steepest_descent
To construct a test we use numpy arrays (numpy.ndarray)
>>> import numpy as np
Set up the test case. The following computes the 2-dimensional discrete Laplacian
>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m, d)
Notice that in this case A is a scipy.sparse matrix. Choosing a random solution and computing the right-hand side
>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)
Now, we can solve our system
>>> sd_results = steepest_descent(A, f)
and compare the solution to the exact one
>>> print(np.linalg.norm(sd_results.x - x_sol))
0.0013979296735099711
Notice, depending on your system, this number could be a bit different but the error should be something around \(10^{-3}\).