In the implicit scheme, one ends up with a large sparse block matrix system $Ax = b$. For a typical mesh size of ~2 million cells, with the coupled unknowns per cell it adds up to roughly a 10 million $\times$ 10 million system. One can choose to either solve the system directly (through Gaussian elimination) or iteratively. Direct solvers solve the system in a fixed number of steps and are very robust. However, the number of operations scales as $O(N^3)$ for an $N \times N$ system, which can be extremely costly. Typically one wants to keep $N$ small for direct solvers. Hence, iterative methods are the preferred choice in typical CFD applications. In addition, preconditioners are commonly used to accelerate the solution. We will first go through one of the more modern families of iterative methods – Krylov methods – before talking about algebraic multigrid (AMG) preconditioners.
It took me a while to wrap my head around the Krylov-type linear solvers. This is because most sources start by saying that the method searches for a solution in the Krylov subspace:

$$\mathcal{K}_m(A, r_0) = \operatorname{span}\left\{r_0, Ar_0, A^2r_0, \dots, A^{m-1}r_0\right\}$$
where $r_0 = b - Ax_0$, with $x_0$ being the first guess for $x$. Now I do not have a mathematics degree, so looking at the notation above gives me a headache. Only after further reading did I learn that any vector which can be expressed as a linear combination of a set of vectors is said to be in the span of said set. Basically, if you can express your vector as a linear combination of $\{r_0, Ar_0, \dots, A^{m-1}r_0\}$, your vector is said to belong to the $m$-th order Krylov subspace of $A$ and $r_0$. As a further illustration, take:

$$v = 2r_0 - Ar_0 + 3A^2r_0$$

So we can say that $v$ belongs to the third-order Krylov subspace $\mathcal{K}_3(A, r_0)$.
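As a quick numerical sanity check (with an arbitrary seeded matrix and vector of my own choosing), we can build a Krylov basis explicitly and confirm that a vector assembled from its members is recovered exactly by a least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
r0 = rng.standard_normal(5)

# Basis of the 3rd-order Krylov subspace K_3(A, r0): {r0, A r0, A^2 r0}
K3 = np.column_stack([r0, A @ r0, A @ A @ r0])

# A vector built as a linear combination of the basis...
v = 2 * r0 - 1 * (A @ r0) + 3 * (A @ A @ r0)

# ...lies in span(K3): the least-squares fit reproduces it exactly
coeffs, *_ = np.linalg.lstsq(K3, v, rcond=None)
print(np.allclose(K3 @ coeffs, v))  # True
print(np.round(coeffs, 6))          # recovers the coefficients [2, -1, 3]
```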
Okay, so now the next question is: why, of all possible places, do we want to search for $x$ in the Krylov subspace? One has to look at the minimal polynomial of a matrix to understand this. Suppose we were to keep multiplying by $A$ to form its powers, and combine them as:

$$c_0I + c_1A + c_2A^2 + \dots + c_mA^m$$
When we reach the minimum $m$ for which a certain combination of coefficients $c_i$ makes the sum go to zero, $m$ is said to be the degree of the minimal polynomial of $A$. As an example, for a minimal polynomial of degree $m$:

$$c_0I + c_1A + c_2A^2 + \dots + c_mA^m = 0$$
Multiplying throughout by $A^{-1}$, then by $-\frac{1}{c_0}$ (note that $c_0 \neq 0$ whenever $A$ is invertible):

$$A^{-1} = -\frac{1}{c_0}\left(c_1I + c_2A + \dots + c_mA^{m-1}\right)$$
Aha! So $A^{-1}$ can be expressed in terms of the powers $I, A, \dots, A^{m-1}$, where $m$ is the degree of the minimal polynomial of $A$. Now there is a theorem that for an $n \times n$ matrix, $m \leq n$. Hence the Krylov subspace is not a bad place to start looking for a solution, since $x = A^{-1}b$ is then a combination of $b, Ab, \dots, A^{m-1}b$. What it also means is that if we expand the space until its order reaches $m$, we will arrive at an exact solution of $Ax = b$. Of course, the point of Krylov methods is to avoid that scenario, and hopefully get a satisfactory answer with as small a subspace as possible.
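We can verify this claim numerically on a small seeded system of my own choosing (taking $x_0 = 0$ so that $r_0 = b$): the exact solution lies in the span of the full-order Krylov basis $\{b, Ab, \dots, A^{n-1}b\}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

x_exact = np.linalg.solve(A, b)

# Full Krylov basis K_n(A, b) = {b, Ab, ..., A^{n-1} b}
K = np.empty((n, n))
v = b.copy()
for j in range(n):
    K[:, j] = v
    v = A @ v

# x_exact lies in the span of K: a least-squares fit reproduces it
coeffs, *_ = np.linalg.lstsq(K, x_exact, rcond=None)
print(np.allclose(K @ coeffs, x_exact))  # True
```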
As the subspace grows, we run into another issue: the new basis vectors become increasingly close to each other. If one were familiar with the power iteration method for finding eigenvectors, one would recognise that $A^kr_0$ approaches the dominant eigenvector of $A$ as $k \to \infty$. To circumvent this problem, we have to orthogonalize the vectors in $\mathcal{K}_m$.
To get a feel of what that means, imagine we have 2 vectors at a $45°$ angle to each other (i.e. $v_1 = (1, 0)^T$ and $v_2 = (\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}})^T$). We can describe any point in $\mathbb{R}^2$ as a linear combination of $v_1$ and $v_2$. For example, the vector $(3, 2)^T$ can be expressed as $c_1v_1 + c_2v_2$ with $c_1 = 1$ and $c_2 = 2\sqrt{2}$. The coefficients were found by solving a system of equations:

$$\begin{pmatrix} 1 & \tfrac{1}{\sqrt{2}} \\ 0 & \tfrac{1}{\sqrt{2}} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$$
So we see that the coefficients are sensitive to the concatenated matrix. As you can imagine, the closer $v_1$ and $v_2$ are to each other, the worse the condition number of the matrix. If we lived in a world with unlimited floating-point precision, this would not be a problem at all (and many other problems which keep mathematicians up at night would disappear too). However, we still live in a world where unicorns are nonexistent, so we have to think of ways to circumvent this. One way would be to orthogonalize the vectors (i.e. let $v_1 = (1, 0)^T$ and $v_2 = (0, 1)^T$), since the condition number of an orthonormal matrix is $1$. The new vectors are still able to describe any point in the plane, so we do not lose any territory with the change in basis.
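The effect is easy to demonstrate numerically: compare the condition number of a basis of two nearly parallel vectors (an angle of my own choosing) against the same basis after orthonormalization, here done via NumPy's QR factorization:

```python
import numpy as np

# Two nearly parallel unit vectors, ~0.0057 degrees apart
v1 = np.array([1.0, 0.0])
v2 = np.array([np.cos(1e-4), np.sin(1e-4)])
V = np.column_stack([v1, v2])

# Orthonormalize the basis (Gram-Schmidt in exact arithmetic; QR here)
Q, _ = np.linalg.qr(V)

print(f"cond(V) = {np.linalg.cond(V):.2e}")  # very large (~2e4)
print(f"cond(Q) = {np.linalg.cond(Q):.2e}")  # ~1
```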
Alright! Now that we have covered the gist of Krylov methods, we can take a closer look at one of the algorithms: the Generalized Minimum Residual (GMRES) method. In this method, the basis vectors for the Krylov subspace are orthogonalized by the Arnoldi iteration. Stacking these orthogonalized basis vectors side by side results in the matrix $Q_m$, which is $n \times m$ in size for the $m$-th order Krylov space. The new basis vector for the space is calculated from the Arnoldi relation:

$$AQ_m = Q_{m+1}\tilde{H}_m$$
$\tilde{H}_m$ is known as the Hessenberg matrix, which is $(m+1) \times m$ in size. We look for a solution of the form $x = x_0 + Q_my$. Remember that for an orthonormal matrix $Q^TQ = I$, so the residual of our system can be rewritten as:

$$b - Ax = r_0 - AQ_my = Q_{m+1}\left(\beta e_1 - \tilde{H}_my\right)$$
with $\beta = \|r_0\|_2$ and $e_1$ the first column of the identity matrix. Note that since $\tilde{H}_m$ is a $(m+1) \times m$ matrix, we have an overdetermined system. We now solve a least squares problem for $y$:

$$y = \operatorname*{arg\,min}_y \left\| \beta e_1 - \tilde{H}_my \right\|_2$$
Hence the name Minimum Residual. The estimated $x$ is recovered from $y$ by $x = x_0 + Q_my$, so we see that $y$ holds the coefficients to the vectors in the orthogonalized $\mathcal{K}_m$. In the algorithm, one starts from $m = 1$, then expands the space with every iteration. As $m$ increases, the sizes of $Q_m$ and $\tilde{H}_m$ increase, potentially overtaking the memory requirements of the original sparse matrix. Hence, most algorithms restart the process once $m$ grows large enough, with the latest estimate as the new initial guess (usually after a few tens of iterations).
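The whole procedure can be sketched in a few lines of NumPy. This is a bare-bones illustration, not a production implementation: real codes use Givens rotations to update the least-squares solve incrementally and add restarting, and the test matrix here is an arbitrary well-conditioned one of my own choosing.

```python
import numpy as np

def gmres_sketch(A, b, x0, m, tol=1e-10):
    """Minimal GMRES: Arnoldi iteration + dense least squares (illustrative)."""
    n = len(b)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    y = np.zeros(0)
    for j in range(m):
        w = A @ Q[:, j]                      # expand the Krylov space
        for i in range(j + 1):               # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:              # guard against "happy breakdown"
            Q[:, j + 1] = w / H[j + 1, j]
        # Solve the small least-squares problem  min_y || beta*e1 - H_j y ||
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        residual = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y)
        if residual < tol or H[j + 1, j] <= 1e-14:
            break
    return x0 + Q[:, :y.size] @ y            # x = x_0 + Q_m y

# Usage: solve a small well-conditioned system and compare with a direct solve
rng = np.random.default_rng(2)
n = 50
A = 4 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = gmres_sketch(A, b, np.zeros(n), m=n)
print(np.allclose(x, np.linalg.solve(A, b)))  # True
```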
Algebraic multigrid preconditioners
As the name suggests, there are two main concepts here: algebraic multigrid (AMG) methods and preconditioners. We will go through each concept individually before piecing them together.
We know that iterative linear solvers take an initial guessed solution and improve on it to get the final answer. As a convenient guess, we can set $x_0 = 0$ or $x_0 = b$. This would result in a noisy residual vector $r_0 = b - Ax_0$. With fixed-point iteration methods like Jacobi or Gauss–Seidel, the noise in the residual gets damped down really quickly (usually within 2 to 3 iterations). After the first few iterations, the error becomes smooth and subsequent iterations drive it down to zero extremely slowly. Such methods are really effective at smoothing out spikes in the error, but perform terribly thereafter.
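This smoothing property is easy to see on the 1D Poisson matrix (a standard model problem, my choice for illustration): run weighted Jacobi on a pure high-frequency error mode and a pure low-frequency mode and compare the damping. With a zero right-hand side the exact solution is zero, so the iterate is the error itself.

```python
import numpy as np

n = 64
# 1D Poisson matrix (tridiagonal: 2 on the diagonal, -1 off-diagonal)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def weighted_jacobi(A, b, x, sweeps=3, omega=2/3):
    Dinv = 1.0 / np.diag(A)
    for _ in range(sweeps):
        x = x + omega * Dinv * (b - A @ x)
    return x

i = np.arange(1, n + 1)
modes = {
    "smooth": np.sin(1 * np.pi * i / (n + 1)),  # lowest-frequency error mode
    "spiky":  np.sin(n * np.pi * i / (n + 1)),  # highest-frequency error mode
}

# With b = 0 the iterate IS the error; measure how much 3 sweeps shrink it
ratios = {}
for name, e in modes.items():
    e3 = weighted_jacobi(A, np.zeros(n), e.copy())
    ratios[name] = np.linalg.norm(e3) / np.linalg.norm(e)
    print(name, ratios[name])  # spiky shrinks ~25x; smooth barely changes
```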
Hence, the idea of multigrid methods is to transfer the smoothed error from one grid to another such that it becomes spiky again (albeit with reduced amplitude) in the other grid. The same smoothing operation is carried out in the new grid and the process continues recursively. After a few repetitions, the process reverses and we get a final small error vector in relatively few iterations. So how do we make the smooth error become spiky again?
There! As we can see from the diagram I took from Saad’s book, by removing every alternate point in the original grid, we get an oscillatory error in the new grid. The error is then smoothed (usually with only 2 to 3 Jacobi or Gauss–Seidel iterations) in this new grid and transferred to another coarser grid, and so on. The diagram shows the original geometric multigrid method, where the grids were defined physically. For the unstructured meshes commonly used, it becomes challenging to create physically coarser and coarser meshes. Hence, methods to do this algebraically (without considering the physical manifestation of the mesh) were heavily explored, resulting in algebraic multigrid methods.
To understand this a little more, we define the error term $e$ in terms of the actual solution $x$ and estimated solution $\tilde{x}$:

$$e = x - \tilde{x} \quad \Rightarrow \quad Ae = Ax - A\tilde{x} = b - A\tilde{x} = r$$
We see that by solving the linear system $Ae = r$ with the residual vector on the RHS, we get the error term $e$, which leads us to the actual solution from $x = \tilde{x} + e$. As we cycle through the grids, we get a more refined estimate of $e$. The trick in algebraic multigrid methods is in manipulating the system without the need of any geometric information, allowing us to treat the process as a “blackbox”, which is an engineer’s favorite approach. In the algebraic method, we transfer results from the fine mesh to the coarse one (restriction operation) by using a short and fat interpolation matrix known as the restriction matrix ($I_f^c$):

$$r_c = I_f^c\, r_f$$
In the reverse direction (prolongation operation), we use a tall and skinny interpolation matrix termed the prolongation matrix. The restriction and prolongation matrices are just transposes of each other: $I_c^f = (I_f^c)^T$. Hence, when we want to cycle to the next coarse mesh, after dropping the fine-to-coarse notation for the short and fat matrix $I$, we perform:

$$A_c = IAI^T, \qquad A_ce_c = I\,r_f$$
This is known as a Galerkin projection. We see that after solving the coarse system of equations, we get the error in the fine mesh by using the interpolation matrix again: $e_f = I^Te_c$. The interpolation is thus a key ingredient in AMG methods. Here is a pseudo code that shows how AMG works between 2 grids:
```
x_f = smooth(A, b, x_0);            // Initial guess, 2-3 iterations on x_0
r_f = b - A*x_f;                    // Find residual
r_c = I*r_f;                        // Interpolate residual to coarse mesh
e_c = solve(I*A*I_transpose, r_c);  // Solve for error (or recursion)
e_f = I_transpose*e_c;              // Prolongate error back onto fine mesh
x_f += e_f;                         // Update guess
x_f = smooth(A, b, x_f);            // Additional smoothing, 2-3 iterations on x_f
```
At the finest grid, we perform a smoothing on the initial guess x_0. This means we “solve” A*x_f = b using a fixed-point iteration like damped Jacobi or Gauss–Seidel with x_0 as the initial guess. By “solve”, I mean we stop after only 2 to 3 iterations. Then, we interpolate the residual to the next mesh, before solving the projected system (I*A*I_transpose)*e_c = r_c. In a two-grid setup, we solve the system for real. Once that is done, we go in the reverse direction: transfer the error term back to the fine mesh, update the solution, then perform another smoothing process with the updated solution as the initial guess.
With more grids, we simply do a recursion at the solve part until we reach the coarsest grid. A direct solver is then used. At the end of the process, if the residuals are not satisfactory, the obtained x_f is used as the new x_0, and the process loops until convergence. This process is known as a V-cycle.
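Putting the pseudo code above into runnable form, here is a sketch of a two-grid cycle on the 1D Poisson model problem. As an assumption for illustration, I use the geometric every-other-point full-weighting operator as the restriction matrix $I$, standing in for an algebraically constructed one; the coarse system is solved directly, as in the two-grid case.

```python
import numpy as np

def poisson1d(n):
    """1D Poisson matrix: tridiagonal (-1, 2, -1)."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def jacobi(A, b, x, sweeps=3, omega=2/3):
    """Damped Jacobi smoother: 2-3 sweeps as in the pseudo code."""
    Dinv = 1.0 / np.diag(A)
    for _ in range(sweeps):
        x = x + omega * Dinv * (b - A @ x)
    return x

def restriction(n_f):
    """Every-other-point full-weighting restriction (short and fat, n_c x n_f)."""
    n_c = (n_f - 1) // 2
    R = np.zeros((n_c, n_f))
    for i in range(n_c):
        R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]
    return R

def two_grid(A, b, x, n_cycles=10):
    I = restriction(A.shape[0])
    A_c = I @ A @ I.T                          # Galerkin projection
    for _ in range(n_cycles):
        x = jacobi(A, b, x)                    # pre-smooth
        r_c = I @ (b - A @ x)                  # restrict residual to coarse grid
        e_c = np.linalg.solve(A_c, r_c)        # solve coarse system directly
        x = x + I.T @ e_c                      # prolongate error, update guess
        x = jacobi(A, b, x)                    # post-smooth
    return x

n = 63
A = poisson1d(n)
b = np.ones(n)
x = two_grid(A, b, np.zeros(n))
rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
print(rel_res)  # tiny relative residual after a few cycles
```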
The algebraic part of the process comes in the form of constructing $I$. When solving a PDE with finite difference/volume/element methods, a node’s neighbor will cause a nonzero entry in its own row of the coefficient matrix. This can be used heuristically to group nodes together for the coarse mesh. A classical method is the Ruge–Stüben method, which uses the strength of connection between each node and the off-diagonal entries in its own row as a grouping criterion. A very detailed set of notes by Falgout from the Lawrence Livermore National Laboratory explaining the method and AMG in general can be found here.
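To give a flavor of the "algebraic" part, here is a simplified sketch of a strength-of-connection test: node $i$ strongly depends on node $j$ if $|a_{ij}| \geq \theta \max_{k \neq i} |a_{ik}|$. Note this is a simplification of my own for illustration; classical Ruge–Stüben works with signed (negative) off-diagonal entries for M-matrices, and the threshold $\theta$ varies between implementations.

```python
import numpy as np

def strong_connections(A, theta=0.25):
    """Boolean matrix S: S[i, j] is True if node i strongly depends on node j.

    Simplified criterion: |a_ij| >= theta * max_{k != i} |a_ik|.
    (Classical Ruge-Stuben uses the signed version for M-matrices.)
    """
    n = A.shape[0]
    S = np.zeros((n, n), dtype=bool)
    for i in range(n):
        off = np.abs(A[i].copy())
        off[i] = 0.0                  # ignore the diagonal entry
        thresh = theta * off.max()
        if thresh > 0:
            S[i] = off >= thresh
    return S

# On a 1D Poisson row, both neighbors count as strong connections
A = 2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
S = strong_connections(A)
print(S[2])  # [False  True False  True False]
```

Nodes linked by strong connections are the candidates for grouping when building the coarse grid and the interpolation matrix.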