Resolution of large-scale linear equations

Computing steady-state solutions and analysing their stability requires solving large-scale linear systems, characterized by a very large number of degrees of freedom. As an illustration, the following figure displays the block structure of the fluid-solid Jacobian obtained after using an Arbitrary Lagrangian Eulerian approach to couple the fluid and solid equations, and a finite element method for the spatial discretization of these equations.

The fluid variables are depicted in blue, the solid variables in red, and the extension variables in orange. The latter are numerical variables introduced to handle the deformation of the mesh used to discretize the fluid equations.

For two-dimensional configurations, the number of degrees of freedom remains limited and direct strategies can be applied. For three-dimensional configurations, the number of degrees of freedom strongly increases, and the Jacobian matrix contains more non-zero entries per row. A direct strategy then requires a huge amount of memory, only accessible using a very large number of CPUs on a cluster. We have tested this strategy on a three-dimensional problem to determine its limits. To overcome this difficulty, we have developed and implemented iterative strategies specifically adapted to the numerical resolution of steady incompressible flows and their coupling with elastic structures. We have also extended these methods for Linear Stability Analysis purposes, making them an alternative to the widely used Laplacian preconditioning [Tuckerman].

L. S. Tuckerman. Laplacian preconditioning for the inverse Arnoldi method. Commun. Comput. Phys., 18:1336–1351, 2015.
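As a minimal illustration of the shift-invert strategy underlying such eigenvalue computations, the sketch below applies shift-invert Arnoldi to a toy matrix. The tridiagonal matrix is a hypothetical stand-in for the (much larger) fluid-solid Jacobian; all sizes and values are illustrative.

```python
# Minimal sketch of shift-invert Arnoldi for stability analysis.
# The tridiagonal matrix is a hypothetical stand-in for the actual
# fluid-solid Jacobian; sizes and values are illustrative only.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
off = np.ones(n - 1)
A = sp.diags([1.2 * off, -2.0 * np.ones(n), 0.8 * off], [-1, 0, 1],
             format="csc")

# Arnoldi applied to (A - sigma I)^{-1} converges fastest to the
# eigenvalues of A closest to the shift sigma; every Arnoldi step
# requires one sparse linear solve, hence the need for good solvers.
sigma = 0.0
vals = spla.eigs(A, k=4, sigma=sigma, return_eigenvectors=False)
print(np.sort(vals.real))
```

For stability analysis the shift is typically placed near the imaginary axis, where the leading (least damped) eigenvalues of the Jacobian are sought.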

  • Direct LU factorization

Direct sparse LU solvers (MUMPS, SuperLU, etc.) are very popular due to their robustness and because they can be used as a "black box".
For 2D problems, they usually perform well due to the low number of degrees of freedom and high sparsity of the matrices at play.
For 3D problems, the memory required to perform the LU factorization explodes and may rapidly become prohibitive. As an example, we tested the direct approach on a purely fluid problem, consisting of a laminar flow described by the Navier-Stokes equations and discretized with finite elements. Unsustainable memory requirements are quickly reached (50 TB for about 100M unknowns!).
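The memory blow-up comes from fill-in: the LU factors contain many more non-zeros than the original matrix. A minimal sketch on a toy 2D Laplacian (SciPy's SuperLU interface standing in for a parallel direct solver; the 3D growth is far worse):

```python
# Minimal sketch of a sparse direct LU solve with SciPy's SuperLU
# interface. The 2D Laplacian below is a toy stand-in for a
# discretized flow operator; in 3D the fill-in grows much faster.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

m = 40                                        # m x m grid
I = sp.identity(m)
T = sp.diags([np.ones(m - 1), -4.0 * np.ones(m), np.ones(m - 1)],
             [-1, 0, 1])
E = sp.diags([np.ones(m - 1), np.ones(m - 1)], [-1, 1])
A = (sp.kron(I, T) + sp.kron(E, I)).tocsc()   # 5-point Laplacian stencil

lu = spla.splu(A)                 # factorization: the memory-hungry step
b = np.ones(A.shape[0])
x = lu.solve(b)                   # back-substitution: cheap once factored

# Fill-in: L and U together hold far more non-zeros than A itself,
# which is exactly what makes direct methods prohibitive in 3D.
fill = lu.L.nnz + lu.U.nnz
print(A.nnz, fill)
```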


  • Iterative algorithms for solving the fluid block

In contrast to direct LU solvers, iterative methods (GMRES, BiCGSTAB, etc.) have very low memory requirements, allowing one to tackle large-scale 3D problems. However, due to the poor spectral properties of the operators at play in FSI computations, adequate preconditioning is required for these methods to converge in a reasonable amount of time.
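A minimal sketch of this effect, with a generic incomplete-LU preconditioner standing in for the problem-specific preconditioners discussed in the text (matrix and sizes are illustrative):

```python
# Minimal sketch: GMRES with and without a preconditioner.
# The ILU preconditioner (spilu) is a generic stand-in for the
# problem-specific preconditioners discussed in the text.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 400
rng = np.random.default_rng(0)
# Nonsymmetric tridiagonal: diffusion plus a convection-like bias.
A = sp.diags([np.ones(n - 1), -3.0 * np.ones(n), 1.5 * np.ones(n - 1)],
             [-1, 0, 1]).tocsc()
b = rng.standard_normal(n)

# Wrap an incomplete LU factorization as a LinearOperator for GMRES.
ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

counts = {"plain": 0, "prec": 0}
def counter(key):
    def cb(res_norm):                  # called once per GMRES iteration
        counts[key] += 1
    return cb

x0, info0 = spla.gmres(A, b, restart=50,
                       callback=counter("plain"), callback_type="pr_norm")
x1, info1 = spla.gmres(A, b, M=M, restart=50,
                       callback=counter("prec"), callback_type="pr_norm")
print(info0, info1, counts)            # preconditioning cuts iterations
```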

Given the large predominance of fluid unknowns in our configurations, we first considered the resolution of the fluid sub-problem. Various preconditioners proposed in the literature for the iterative resolution of the incompressible steady Navier-Stokes equations have been implemented, such as the SIMPLE algorithm [Patankar], the PCD preconditioner [Kay et al.] and the Augmented Lagrangian preconditioner [Benzi & Olshanskii]. Their performance has been assessed by varying the mesh refinement and the Reynolds number.

The modified Augmented Lagrangian (mAL) preconditioner has been selected due to its robustness with respect to mesh refinement and its mild dependence on the Reynolds number.
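The Augmented Lagrangian idea can be sketched on a toy saddle-point system [[A, Bᵀ], [B, 0]] (A standing in for the velocity block, B for the divergence constraint): the velocity block is augmented with γ Bᵀ B, and the resulting Schur complement is well approximated by -(1/γ) I. This is only an illustrative sketch, not the authors' mAL implementation (which involves, e.g., the pressure mass matrix); γ and all blocks are hypothetical.

```python
# Minimal sketch of the Augmented Lagrangian preconditioning idea on a
# toy saddle-point system [[A, B^T], [B, 0]].  Illustrative only: not
# the authors' mAL preconditioner; gamma and the blocks are made up.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nu, np_ = 120, 40
rng = np.random.default_rng(1)
A = sp.diags([np.ones(nu - 1), 2.1 * np.ones(nu), np.ones(nu - 1)],
             [-1, 0, 1]).tocsc()                 # SPD "velocity" block
B = sp.csc_matrix(rng.standard_normal((np_, nu)) / np.sqrt(nu))

gamma = 100.0
Ag = (A + gamma * (B.T @ B)).tocsc()             # augmented velocity block
K = sp.bmat([[Ag, B.T], [B, None]], format="csc")

# Block upper-triangular preconditioner: exact solve with Ag plus the
# classical AL Schur-complement approximation  S^{-1} ~ -gamma I.
Ag_lu = spla.splu(Ag)
def apply_prec(r):
    ru, rp = r[:nu], r[nu:]
    p = -gamma * rp
    u = Ag_lu.solve(ru - B.T @ p)
    return np.concatenate([u, p])

M = spla.LinearOperator(K.shape, matvec=apply_prec)
rhs = np.concatenate([rng.standard_normal(nu), np.zeros(np_)])
x, info = spla.gmres(K, rhs, M=M, restart=60)
print(info)
```

Since the constraint right-hand side is zero here, augmenting the first block row leaves the solution unchanged; the benefit is a Schur complement that becomes trivial to approximate as γ grows.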

S. V. Patankar. Numerical heat transfer and fluid flow. McGraw-Hill, New York, 1980
D. Kay, D. Loghin, and A. Wathen. A Preconditioner for the Steady-State Navier-Stokes Equations. SIAM J. Sci. Comput., 24(1):237–256, 2002.
M. Benzi and M. A. Olshanskii. An Augmented Lagrangian-Based Approach to the Oseen Problem.SIAM J. Sci. Comput., 28(6):2095–2113, 2006.


  • Iterative algorithms for solving the coupled fluid/solid problems

The Augmented Lagrangian method is incorporated into an FSI preconditioner based on a block Gauss-Seidel strategy, inspired by the work of [Deparis et al.].
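The block Gauss-Seidel idea can be sketched on a toy coupled system [[F, Cfs], [Csf, S]], with F standing in for the fluid block, S for the solid block and Cfs/Csf for the coupling terms: one sweep solves the fluid block first, then the solid block with the fluid update propagated through the coupling. Toy matrices only; this is not the FaCSI preconditioner itself.

```python
# Minimal sketch of a block Gauss-Seidel preconditioner for a coupled
# system [[F, Cfs], [Csf, S]].  Toy, illustrative matrices: F ~ fluid
# block, S ~ solid block, Cfs/Csf ~ coupling terms.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nf, ns = 150, 30
rng = np.random.default_rng(2)
F = sp.diags([np.ones(nf - 1), 3.0 * np.ones(nf), 1.2 * np.ones(nf - 1)],
             [-1, 0, 1]).tocsc()
S = sp.diags([np.ones(ns - 1), 4.0 * np.ones(ns), np.ones(ns - 1)],
             [-1, 0, 1]).tocsc()
Cfs = sp.csc_matrix(0.02 * rng.standard_normal((nf, ns)))
Csf = sp.csc_matrix(0.02 * rng.standard_normal((ns, nf)))
J = sp.bmat([[F, Cfs], [Csf, S]], format="csc")

# One Gauss-Seidel sweep over the blocks: fluid solve first, then the
# solid solve with the fluid update propagated through Csf.
F_lu, S_lu = spla.splu(F), spla.splu(S)
def sweep(r):
    rf, rs = r[:nf], r[nf:]
    uf = F_lu.solve(rf)
    us = S_lu.solve(rs - Csf @ uf)
    return np.concatenate([uf, us])

M = spla.LinearOperator(J.shape, matvec=sweep)
b = rng.standard_normal(nf + ns)
x, info = spla.gmres(J, b, M=M, restart=40)
print(info)
```

In practice the exact factorization of F would be replaced by the mAL-preconditioned iterative fluid solver described above.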


S. Deparis, D. Forti, G. Grandperrin, A. Quarteroni. FaCSI: A block parallel preconditioner for fluid–structure interaction in hemodynamics. Journal of Computational Physics, 327:700–718, 2016.



  • Parallel implementation and performance

In order to tackle large-scale 3D FSI problems, we are currently developing a fully parallel solver to perform Linear Stability Analysis. We use the finite element language FreeFem++ for spatial discretization and the parallel libraries PETSc/SLEPc for linear system resolution and eigenvalue computations. The interface between these tools has been realized through joint work with P. Jolivet (CNRS/IRIT).

The GMRES method, preconditioned by the modified Augmented Lagrangian approach, has been implemented and compared to a direct LU solver. We present below some strong scalability results, obtained on the SATOR cluster, for a small 3D fluid problem (5 million unknowns). The case of Newton iterations is shown on the left, whereas the Linear Stability Analysis eigenproblem is shown on the right.