Linear solvers

In the implicit scheme, one ends up with a large sparse block system \mathbf{A} \vec{x} = \vec{b}. For a typical mesh size of ~2 million cells, this adds up to a 10 million \times 10 million system. One can choose to solve the system either directly (through Gaussian elimination) or iteratively. Direct solvers solve the system in a fixed number of steps and are very robust. However, the number of operations scales with \mathcal{O} (n^3), which can be extremely costly. Typically one wants to keep n<10^{4} for direct solvers. Hence, iterative methods are the preferred choice in typical CFD applications. In addition, preconditioners are commonly used to accelerate the solution. We will first go through one of the more modern families of iterative methods – Krylov methods – before talking about algebraic multigrid (AMG) preconditioners.
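To make this concrete, here is a minimal sketch (not part of my solver, just an illustration with Eigen) of setting up a Krylov solver with a preconditioner; the 1D Laplacian-like matrix and the tolerance are placeholders standing in for the real CFD system:

#include <iostream>
#include <vector>
#include <Eigen/Sparse>
#include <Eigen/IterativeLinearSolvers>

int main() {
  // Hypothetical small sparse system standing in for the huge CFD system.
  const int n = 1000;
  Eigen::SparseMatrix<double> A(n, n);
  Eigen::VectorXd b = Eigen::VectorXd::Ones(n);

  // Simple 1D Laplacian-like stencil as a placeholder coefficient matrix.
  std::vector<Eigen::Triplet<double>> triplets;
  for (int i = 0; i < n; ++i) {
    triplets.emplace_back(i, i, 2.0);
    if (i > 0)     triplets.emplace_back(i, i - 1, -1.0);
    if (i < n - 1) triplets.emplace_back(i, i + 1, -1.0);
  }
  A.setFromTriplets(triplets.begin(), triplets.end());

  // Krylov solver (BiCGSTAB) with an incomplete-LU preconditioner.
  Eigen::BiCGSTAB<Eigen::SparseMatrix<double>, Eigen::IncompleteLUT<double>> solver;
  solver.setTolerance(1e-8);
  solver.compute(A);
  Eigen::VectorXd x = solver.solve(b);

  std::cout << "iterations: " << solver.iterations()
            << ", estimated error: " << solver.error() << std::endl;
  return 0;
}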

Krylov methods

It took me a while to wrap my head around Krylov-type linear solvers. This is because most sources start by saying that the method searches for a solution in the Krylov subspace:

    \begin{align*} \mathcal{K}_m(\mathbf{A}, \vec{r}_0) = span \{ \vec{r}_0, \mathbf{A}\vec{r}_0 , \mathbf{A}^2\vec{r}_0 , \dots , \mathbf{A}^{m-1}\vec{r}_0 \} \end{align*}

where \vec{r}_0 = \mathbf{A} \vec{x}_0 - \vec{b} with \vec{x}_0 being the first guess for \vec{x}. Now I do not have a mathematics degree, so looking at the notation above gives me a headache. Only after further reading did I learn that any vector which can be expressed as a linear combination of a set of vectors is said to be in the span of that set. Basically, if you can express your vector as a linear combination of \{ \vec{r}_0, \mathbf{A}\vec{r}_0, \dots, \mathbf{A}^{m-1}\vec{r}_0 \}, your vector is said to belong to the order m Krylov subspace of \mathbf{A} and \vec{r}_0. As a further illustration:

    \begin{align*} &\mathbf{A} = \begin{bmatrix} 2 & 1 \\ 1 & 0 \end{bmatrix} \quad , \quad \vec{r} = \begin{bmatrix} 1 \\ 2 \end{bmatrix} \quad , \quad \Rightarrow \mathcal{K}_3(\mathbf{A}, \vec{r}) = span \left\{ \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \begin{bmatrix} 4 \\ 1 \end{bmatrix}, \begin{bmatrix} 9 \\ 4 \end{bmatrix} \right\},\\ &Let\ \vec{q} = 2\begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 4 \\ 1 \end{bmatrix} - \begin{bmatrix} 9 \\ 4 \end{bmatrix} = \begin{bmatrix} -3 \\ 1 \end{bmatrix} \in \mathcal{K}_3(\mathbf{A}, \vec{r}) . \end{align*}

So we can say that \vec{q} belongs to the order 3 Krylov subspace.
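If you prefer code over notation, here is the same 2 \times 2 example written out with Eigen (just an illustrative sketch):

#include <iostream>
#include <Eigen/Dense>

int main() {
  // The 2x2 example from above.
  Eigen::Matrix2d A;
  A << 2, 1,
       1, 0;
  Eigen::Vector2d r(1, 2);

  // Basis vectors of the order-3 Krylov subspace: r, A*r, A^2*r.
  Eigen::Vector2d v0 = r;
  Eigen::Vector2d v1 = A * v0;   // [4, 1]
  Eigen::Vector2d v2 = A * v1;   // [9, 4]

  // q is a linear combination of the basis vectors, so q lies in K_3(A, r).
  Eigen::Vector2d q = 2 * v0 + v1 - v2;  // [-3, 1]
  std::cout << q.transpose() << std::endl;
  return 0;
}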

Okay, so now the next question is: why, of all possible places, do we want to search for \vec{x} in the Krylov subspace? One has to look at the minimal polynomial of an n\times n matrix \mathbf{A} to understand this. Suppose we were to keep multiplying \mathbf{A} by itself and form the polynomial:

    \begin{align*} p = \alpha_0 \mathbf{I} + \alpha_1 \mathbf{A} + \alpha_2 \mathbf{A}^2 + \alpha_3 \mathbf{A}^3 + \dots + \alpha_k \mathbf{A}^k. \end{align*}

When we reach the minimum k for which some combination of \alpha (with \alpha_k \neq 0) makes the polynomial equal to zero, k is said to be the degree of the minimal polynomial of \mathbf{A}. As an example, for a minimal polynomial of degree 3:

    \begin{align*} 0 = \alpha_0 \mathbf{I} + \alpha_1 \mathbf{A} + \alpha_2 \mathbf{A}^2 + \alpha_3 \mathbf{A}^3. \end{align*}

Multiplying throughout by \mathbf{A}^{-1}, then by \vec{b},

    \begin{align*} 0 = \alpha_0 \mathbf{A}^{-1} + \alpha_1 \mathbf{I} + \alpha_2 \mathbf{A} + \alpha_3 \mathbf{A}^2 \\ \Rightarrow \mathbf{A}^{-1} = -\frac{1}{\alpha_0} \left( \alpha_1 \mathbf{I} + \alpha_2 \mathbf{A} + \alpha_3 \mathbf{A}^2 \right) \\ \mathbf{A}^{-1}\vec{b} = -\frac{1}{\alpha_0} \left( \alpha_1 \vec{b} + \alpha_2 \mathbf{A}\vec{b} + \alpha_3 \mathbf{A}^2\vec{b} \right) \in \mathcal{K}_3(\mathbf{A}, \vec{b}). \end{align*}

Aha! So \vec{x} can be expressed in terms of \mathcal{K}_k(\mathbf{A}, \vec{b}), where k is the degree of the minimal polynomial of \mathbf{A}. Now there is a theorem that for an n \times n matrix, k \leq n. Hence the Krylov subspace is not a bad place to start looking for a solution. What it also means is that if we expand the space all the way to \mathcal{K}_n(\mathbf{A}, \vec{b}), we will arrive at the exact solution for \vec{x}. Of course, the point of Krylov methods is to avoid that scenario, and hopefully get a satisfactory answer with as small a subspace as possible.
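As a quick sanity check with the 2 \times 2 matrix from the earlier example, its minimal polynomial has degree 2, so \mathbf{A}^{-1}\vec{b} already lives in the order 2 subspace:

    \begin{align*} \mathbf{A}^2 - 2\mathbf{A} - \mathbf{I} = \begin{bmatrix} 5 & 2 \\ 2 & 1 \end{bmatrix} - \begin{bmatrix} 4 & 2 \\ 2 & 0 \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = 0 \quad \Rightarrow \quad \mathbf{A}^{-1}\vec{b} = \left(\mathbf{A} - 2\mathbf{I}\right)\vec{b} = \mathbf{A}\vec{b} - 2\vec{b} \in \mathcal{K}_2(\mathbf{A}, \vec{b}). \end{align*}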

As the subspace \mathcal{K}_m grows, we run into another issue: the new vectors become increasingly close to each other. If one were familiar with the power iteration method for finding eigenvectors, one would recognise that \mathbf{A}^m \vec{b} approaches the dominant eigenvector of \mathbf{A} as m \rightarrow \infty. To circumvent this problem, we have to orthogonalize the vectors in \mathcal{K}_m.

To get a feel for what that means, imagine we have 2 vectors at a 0.1^{\circ} angle to each other (i.e. \vec{e}_1 = [1, 0]^T and \vec{e}_2 = [\cos(0.1^{\circ}), \sin(0.1^{\circ})]^T). We can describe any point in (x, y) as a linear combination of \vec{e}_1 and \vec{e}_2. For example, the vector \vec{q} = [0,1]^T can be expressed as -572.9572\vec{e}_1 + 572.9581\vec{e}_2, with \alpha_1 = -572.9572 and \alpha_2 = 572.9581. The coefficients were found by solving a system of equations:

    \begin{align*} \begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix} = \begin{bmatrix} \vec{e}_1,\vec{e}_2 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 1 \end{bmatrix}. \end{align*}

So we see that the coefficients are sensitive to the concatenated \vec{e} matrix. As you can imagine, the closer \vec{e}_1 and \vec{e}_2 are to each other, the worse the condition number of the matrix. If we lived in a world with unlimited floating-point precision, this would not be a problem at all (and many other problems which keep mathematicians up at night would disappear too). However, we still live in a world where unicorns are nonexistent, so we have to think of ways to circumvent this. One way would be to orthogonalize the vectors (i.e. let \vec{e}_1 = [1, 0]^T and \vec{e}_2 = [0, 1]^T), since the condition number of an orthonormal matrix is 1. The new vectors are still able to describe any point in the (x,y) plane, so we do not lose any territory with the change in basis.
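Here is a small Eigen sketch of that sensitivity (purely illustrative; the numbers are the ones from the example above):

#include <iostream>
#include <cmath>
#include <Eigen/Dense>

int main() {
  const double pi = 3.14159265358979323846;
  const double theta = 0.1 * pi / 180.0;  // 0.1 degrees in radians

  // Nearly parallel basis from the example above, stacked as columns.
  Eigen::Matrix2d E;
  E << 1.0, std::cos(theta),
       0.0, std::sin(theta);

  Eigen::Vector2d q(0.0, 1.0);
  Eigen::Vector2d alpha = E.inverse() * q;  // ~[-572.96, 572.96]

  // Condition number from the singular values.
  Eigen::JacobiSVD<Eigen::Matrix2d> svd(E);
  double cond = svd.singularValues()(0) / svd.singularValues()(1);

  std::cout << "coefficients: " << alpha.transpose() << std::endl;
  std::cout << "condition number: " << cond << std::endl;  // ~1e3, vs 1 for an orthonormal basis
  return 0;
}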

Alright! Now that we have covered the gist of Krylov methods, we can take a closer look at one of the algorithms: the Generalized Minimal Residual (GMRES) method. In this method, the basis vectors for the Krylov subspace are orthogonalized by the Arnoldi iteration. Stacking these orthogonalized basis vectors side by side results in the \mathbf{Q} matrix, which is n \times m in size for the order m Krylov subspace. The new basis vector for the order m+1 space is calculated from:

    \begin{align*} \mathbf{A} \mathbf{Q}_{m} = \mathbf{Q}_{m+1} \mathbf{\tilde{H}}_{m}. \end{align*}

\mathbf{\tilde{H}}_m is known as the Hessenberg matrix, which is (m+1) \times m in size. Remember that for a matrix with orthonormal columns, \mathbf{Q}^T\mathbf{Q} = \mathbf{I}, so our \mathbf{A}\vec{x} = \vec{b} system, restricted to the Krylov subspace, can be rewritten as:

    \begin{align*} \mathbf{A} &= \mathbf{Q}_{m+1} \mathbf{\tilde{H}}_{m} \mathbf{Q}_{m}^T \\ \Rightarrow \mathbf{\tilde{H}}_{m} \mathbf{Q}_{m}^T \vec{x} &= \mathbf{Q}_{m+1}^T \vec{b} \\ \mathbf{\tilde{H}}_{m} \vec{y} &= \mathbf{Q}_{m+1}^T \vec{b}. \end{align*}

with \vec{y} = \mathbf{Q}_{m}^T \vec{x}. Note that since \mathbf{\tilde{H}}_m is an (m+1) \times m matrix, we have an overdetermined system. We now solve a least squares problem for \vec{y}:

    \begin{align*} min \left\Vert \mathbf{\tilde{H}}_{m} \vec{y} - \mathbf{Q}_{m+1}^T \vec{b} \right\Vert = min \left\Vert \vec{r}_m \right\Vert \end{align*}

Hence the name Minimal Residual. The estimated \vec{x}_m is recovered from \vec{y} by \vec{x}_m = \mathbf{Q}_{m} \vec{y}, so we see that \vec{y} holds the coefficients of the orthogonalized versions of \{ \vec{b}, \mathbf{A}\vec{b}, \mathbf{A}^2\vec{b}, \dots , \mathbf{A}^{m-1}\vec{b} \}. In the algorithm, one starts from m=1, then expands the space with every iteration. As m increases, the size of \mathbf{Q} and \mathbf{\tilde{H}} increases, potentially exceeding the memory requirements of the original sparse \mathbf{A} matrix. Hence, most implementations restart the process once m grows large enough (usually around m = 30), using the latest \vec{x} estimate as the new initial guess.
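To tie the pieces together, here is a bare-bones, unrestarted GMRES sketch with dense Eigen matrices (purely illustrative; a real implementation would work on the sparse matrix, use Givens rotations for the least squares problem, and restart as described above). Note that the right hand side \mathbf{Q}_{m+1}^T \vec{r}_0 works out to be \beta \vec{e}_1 with \beta = \Vert \vec{r}_0 \Vert, since \vec{q}_1 = \vec{r}_0 / \beta.

#include <iostream>
#include <Eigen/Dense>

// Minimal (unrestarted) GMRES sketch: Arnoldi builds the orthonormal basis Q and
// the Hessenberg matrix H, then a small least-squares problem is solved for y
// and x is recovered as x0 + Q*y.
Eigen::VectorXd gmres(const Eigen::MatrixXd& A, const Eigen::VectorXd& b,
                      const Eigen::VectorXd& x0, int m) {
  const int n = A.rows();
  Eigen::VectorXd r0 = b - A * x0;
  double beta = r0.norm();

  Eigen::MatrixXd Q = Eigen::MatrixXd::Zero(n, m + 1);
  Eigen::MatrixXd H = Eigen::MatrixXd::Zero(m + 1, m);
  Q.col(0) = r0 / beta;

  for (int j = 0; j < m; ++j) {
    // Arnoldi step: orthogonalize A*q_j against the previous basis vectors.
    Eigen::VectorXd w = A * Q.col(j);
    for (int i = 0; i <= j; ++i) {
      H(i, j) = Q.col(i).dot(w);
      w -= H(i, j) * Q.col(i);
    }
    H(j + 1, j) = w.norm();
    if (H(j + 1, j) < 1e-14) { m = j + 1; break; }  // happy breakdown: exact solution found
    Q.col(j + 1) = w / H(j + 1, j);
  }

  // Least-squares problem: min || H y - beta*e1 ||.
  Eigen::VectorXd e1 = Eigen::VectorXd::Zero(m + 1);
  e1(0) = beta;
  Eigen::VectorXd y = H.topLeftCorner(m + 1, m).colPivHouseholderQr().solve(e1);

  return x0 + Q.leftCols(m) * y;
}

int main() {
  // Small dense test problem (placeholder for the large sparse CFD system).
  const int n = 50;
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n) + 10.0 * Eigen::MatrixXd::Identity(n, n);
  Eigen::VectorXd b = Eigen::VectorXd::Ones(n);

  Eigen::VectorXd x = gmres(A, b, Eigen::VectorXd::Zero(n), 30);
  std::cout << "residual norm: " << (b - A * x).norm() << std::endl;
  return 0;
}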

Algebraic multigrid preconditioners

As the name suggests, there are 2 main concepts to this: algebraic multigrid methods (AMG) and preconditioners. We will go through each concept individually before piecing them together.

We know that iterative linear solvers take an initial guessed solution \vec{x}_0 and improve on it to get the final answer. As a convenient guess, we can set \vec{x}_0 = \vec{0} or \vec{x}_0 = \vec{b}. This results in a noisy residual vector \vec{r}_0 = \mathbf{A}\vec{x}_0 - \vec{b}. With fixed-point iteration methods like Jacobi or Gauss-Seidel, the noise in the residual gets damped down really quickly (usually within 2 to 3 iterations). After the first few iterations, the error becomes smooth and subsequent iterations drive it towards zero extremely slowly. Such methods are really effective for smoothing out spikes in the error, but perform terribly thereafter.
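To make the idea of a smoother concrete, here is a minimal damped Jacobi sweep in Eigen (a sketch only; a dense matrix stands in for the real sparse system and the 2/3 damping factor is just a common textbook choice):

#include <Eigen/Dense>

// A damped (weighted) Jacobi sweep: x <- x + w * D^{-1} * (b - A*x).
// A few of these sweeps knock out the high-frequency (spiky) part of the
// error very effectively, but converge slowly on the smooth part.
Eigen::VectorXd jacobiSmooth(const Eigen::MatrixXd& A, const Eigen::VectorXd& b,
                             Eigen::VectorXd x, int sweeps, double w = 2.0 / 3.0) {
  Eigen::VectorXd dinv = A.diagonal().cwiseInverse();
  for (int s = 0; s < sweeps; ++s)
    x += w * dinv.cwiseProduct(b - A * x);
  return x;
}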

Hence, the idea of multigrid methods is to transfer the smoothed error from one grid to another such that it becomes spiky again (albeit with reduced amplitude) on the other grid. The same smoothing operation is carried out on the new grid and the process continues recursively. After a few repetitions, the process reverses and we end up with a small final error vector in relatively few iterations. So how do we make the smooth error become spiky again?

There! As we can see from the diagram I took from Saad's book, by removing every alternate point in the original grid, we get an oscillatory error on the new grid. The error is then smoothed (usually with only 2 to 3 Jacobi or Gauss-Seidel iterations) on this new grid and transferred to another, coarser grid, and so on. The diagram shows the original geometric multigrid method, where the grids are defined physically. For the unstructured meshes commonly used, it becomes challenging to create physically coarser and coarser meshes. Hence, methods to do this algebraically (without considering the physical manifestation of the mesh) were heavily explored, resulting in algebraic multigrid methods.

To understand this a little more, we define the error term \vec{e} in terms of the actual solution \vec{x} and estimated solution \hat{\vec{x}}:

    \begin{gather*} \vec{e} = \vec{x} - \hat{\vec{x}} , \\ \mathbf{A}\left(\vec{x}-\hat{\vec{x}}\right) + \mathbf{A}\hat{\vec{x}} = \vec{b} \\ \Rightarrow \mathbf{A}\vec{e} = \vec{b} - \mathbf{A}\hat{\vec{x}} = \vec{r}. \end{gather*}

We see that by solving the linear system with the residual vector \vec{r} on the RHS, we get the error term, which leads us from \hat{\vec{x}} to the actual solution. As we cycle through the grids, we get a more refined estimate of \vec{e}. The trick in algebraic multigrid methods is in manipulating the system without the need for any geometric information, allowing us to treat the process as a “black box”, which is an engineer’s favorite approach. In the algebraic method, we transfer results from the fine mesh to the coarse one (restriction operation) by using a short and fat interpolation matrix known as the restriction matrix (\mathbf{I}_f^c):

    \begin{align*} \vec{r}^{c} = \mathbf{I}_f^c  \vec{r}^{f} . \end{align*}

In the reverse direction (prolongation operation), we use a tall and skinny interpolation matrix \mathbf{I}_c^f = \left(\mathbf{I}_f^c\right)^T termed the prolongation matrix. The restriction and prolongation matrices are just transposes of each other. Hence, when we want to cycle to the next coarse mesh, after dropping the fine-to-coarse notation for the short and fat matrix \mathbf{I}_f^c, we perform:

    \begin{align*} \left( \mathbf{I} \mathbf{A} \mathbf{I}^T \right) \vec{e}^{c} = \mathbf{I}  \vec{r}^{f} \\ \vec{e}^{f} = \mathbf{I}^T \vec{e}^{c} . \end{align*}

This is known as a Galerkin projection. We see that after solving the coarse system of equations, we get the error on the fine mesh by using the interpolation matrix again. The interpolation matrix is thus a key ingredient in AMG methods. Here is some pseudocode that shows how AMG works between 2 grids:

x_f = smooth(A, b, x_0);              // Initial guess, 2-3 iterations on x_0
r_f = b - A*x_f;                      // Find residual
r_c = I*r_f;                          // Interpolate residual to coarse mesh
e_c = solve(I*A*I_transpose, r_c);    // Solve for error (or recursion)
e_f = I_transpose*e_c;                // Prolongate error back onto fine mesh
x_f += e_f;                           // Update guess
x_f = smooth(A, b, x_f);              // Additional smoothing, 2-3 iterations on x_f

At the finest grid, we perform a smoothing on the initial guess x_0. This means we “solve” A*x_f = b using a fixed-point iteration like damped Jacobi or Gauss-Seidel with x_0 as the initial guess. By “solve”, I mean we stop after only 2 to 3 iterations. Then, we interpolate the residual to the next mesh, before solving the projected system (I*A*I_transpose)*e_c = r_c. In a 2-grid setup, we solve that system for real. Once that is done, we go in the reverse direction: transfer the error term back to the fine mesh, update the solution, then perform another smoothing pass with the updated solution as the initial guess.

With more grids, we simply recurse at the solve step until we reach the coarsest grid, where a direct solver is used. At the end of the process, if the residuals are not satisfactory, the obtained x_f is used as the new x_0, and the process loops until convergence. This process is known as a V-cycle.
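Here is a runnable two-grid sketch of the pseudocode above for a 1D model problem (purely illustrative: the matrix, the [1/4, 1/2, 1/4] restriction stencil and the damped Jacobi smoother are textbook placeholders, not what a production AMG would construct):

#include <iostream>
#include <Eigen/Dense>

int main() {
  const int nf = 64;                 // fine grid size
  const int nc = nf / 2;             // coarse grid size

  // 1D Poisson-like matrix as a stand-in for the CFD system.
  Eigen::MatrixXd A = Eigen::MatrixXd::Zero(nf, nf);
  for (int i = 0; i < nf; ++i) {
    A(i, i) = 2.0;
    if (i > 0)      A(i, i - 1) = -1.0;
    if (i < nf - 1) A(i, i + 1) = -1.0;
  }
  Eigen::VectorXd b = Eigen::VectorXd::Ones(nf);

  // Restriction matrix I_fc: full-weighting of neighbouring fine points.
  Eigen::MatrixXd Ifc = Eigen::MatrixXd::Zero(nc, nf);
  for (int i = 0; i < nc; ++i) {
    Ifc(i, 2 * i) = 0.5;
    if (2 * i - 1 >= 0) Ifc(i, 2 * i - 1) = 0.25;
    if (2 * i + 1 < nf) Ifc(i, 2 * i + 1) = 0.25;
  }

  // Damped Jacobi smoother on A*x = b.
  auto smooth = [&](Eigen::VectorXd x, int sweeps) {
    Eigen::VectorXd dinv = A.diagonal().cwiseInverse();
    for (int s = 0; s < sweeps; ++s)
      x += (2.0 / 3.0) * dinv.cwiseProduct(b - A * x);
    return x;
  };

  Eigen::VectorXd xf = smooth(Eigen::VectorXd::Zero(nf), 3);   // pre-smoothing
  Eigen::VectorXd rf = b - A * xf;                             // fine residual
  Eigen::VectorXd rc = Ifc * rf;                               // restrict
  Eigen::MatrixXd Ac = Ifc * A * Ifc.transpose();              // Galerkin coarse operator
  Eigen::VectorXd ec = Ac.colPivHouseholderQr().solve(rc);     // coarse solve
  xf += Ifc.transpose() * ec;                                  // prolongate and correct
  xf = smooth(xf, 3);                                          // post-smoothing

  std::cout << "residual norm after one V-cycle: " << (b - A * xf).norm() << std::endl;
  return 0;
}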

The algebraic part of the process comes in the form of constructing \mathbf{I}. When solving a PDE with finite difference/volume/element methods, a node’s neighbor causes a nonzero entry in that node’s row of the coefficient matrix. This can be used heuristically to group nodes together for the coarse mesh. A classical method is the Ruge-Stuben method, which uses the strength of connection between each node and the off-diagonal nodes in its row as a grouping criterion. A very detailed set of notes by Falgout from the Lawrence Livermore National Laboratory explaining the method and AMG in general can be found here.
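For reference, the classical Ruge-Stuben criterion declares node i to be strongly connected to node j when the corresponding off-diagonal entry is large relative to the largest one in its row, with a threshold \theta (commonly quoted as around 0.25):

    \begin{align*} -a_{ij} \geq \theta \max_{k \neq i} \left( -a_{ik} \right) \quad , \quad 0 < \theta \leq 1 . \end{align*}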

Automatic differentiation

Yay! I finally get to talk about one of my favourite topics today: Automatic Differentiation (AD). I was horrendous at calculus in school, and learning about this was akin to a life-changing experience for me – I never have to worry about differentiating long and complicated mathematical expressions again.

Sample code

All you need is some programming knowledge about template classes and poof, you can find even the Jacobian of your vector functions without any laborious manual differentiation and the subsequent programming effort. For a function that maps a 3-element input vector to a 3-element output vector, you save yourself from the misery 9 times over!

We’ll jump straight into an example:

#include <iostream>
#include <Eigen/Dense>
#include <unsupported/Eigen/AutoDiff>

using namespace std;

template<typename T>
T scalarFunctionOne(T const & x) {
  return 2*x*x + 3*x + 1;
};

void checkFunctionOne(double & x, double & dfdx) {
  dfdx = 4*x + 3;
}

template<typename T>
T scalarFunctionTwo(T const & x, T const & y) {
  return 2*x*x + 3*x + 3*x*y*y + 2*y + 1;
};

void checkFunctionTwo(double & x, double & y, double & dfdx, double & dfdy ) {
  dfdx = 4*x + 3 + 3*y*y;
  dfdy = 6*x*y + 2;
}

int main () {
  
  double x, y, z, f, g, dfdx, dgdy, dgdz;
  Eigen::AutoDiffScalar<Eigen::VectorXd> xA, yA, zA, fA, gA;
  
  cout << endl << "Testing scalar function with 1 input..." << endl;
  
  xA.value() = 1;
  xA.derivatives() = Eigen::VectorXd::Unit(1, 0);
  
  fA = scalarFunctionOne(xA);
  
  cout << "  AutoDiff:" << endl;
  cout << "    Function output: " << fA.value() << endl;
  cout << "    Derivative: " << fA.derivatives() << endl;
  
  x = 1;
  checkFunctionOne(x, dfdx);
  
  cout << "  Hand differentiation:" << endl;
  cout << "    Derivative: " << dfdx << endl << endl;
  
  cout << "Testing scalar function with 2 inputs..." << endl;
  
  yA.value() = 1;
  zA.value() = 2;
  
  yA.derivatives() = Eigen::VectorXd::Unit(2, 0);
  zA.derivatives() = Eigen::VectorXd::Unit(2, 1);
  
  gA = scalarFunctionTwo(yA, zA);
  
  cout << "  AutoDiff:" << endl;
  cout << "    Function output: " << gA.value() << endl;
  cout << "    Derivative: " << gA.derivatives()[0] << ", " 
    << gA.derivatives()[1] << endl;
  
  y = 1;
  z = 2;
  checkFunctionTwo(y, z, dgdy, dgdz);
  
  cout << "  Hand differentiation:" << endl;
  cout << "    Derivative: " << dgdy << ", " 
    << dgdz << endl;
  
  return EXIT_SUCCESS;
  
}

And the output:

Testing scalar function with 1 input...
  AutoDiff:
    Function output: 6
    Derivative: 7
  Hand differentiation:
    Derivative: 7

Testing scalar function with 2 inputs...
  AutoDiff:
    Function output: 22
    Derivative: 19, 14
  Hand differentiation:
    Derivative: 19, 14

First off, I created 4 functions: 2 to test the automatic differentiation method, and 2 for checking the results. scalarFunctionOne has 1 input while scalarFunctionTwo has 2 inputs. Both have only one output. Template programming is key to AD. We have to make our functions able to accept a generic input type other than the usual double. This is really not an issue at all for most problems. If you have already written the function, all you need to do is change the type declaration and voila, you are now able to tap into the automatic differentiation package.

We are using Eigen’s automatic differentiation package here. Eigen is an amazing C++ math library. Being a header-only library, you don’t have to worry about portability and distribution issues. The AutoDiffScalar type will handle the dual number component for us. I will explain a little more about dual numbers later.

We see that before passing the AutoDiffScalar types into our function, we need to set their values and derivative output indices. For example, yA.derivatives() = Eigen::VectorXd::Unit(2, 0) means that we want the derivative of the function with respect to y to be written to the first index of the output derivative vector. We can now pass the AutoDiffScalars into our function and let Eigen do the magic.

Dual numbers

As advertised, we see that AutoDiff gives us the expected results without the need to create new functions. So why does this work? We start with complex numbers. Let’s take the square of (1+2i):

    \begin{align*} (1+2i)^2 = 1 + 4i +4i^2 = -3 + 4i . \end{align*}

Complex number arithmetic works like algebra, except for the additional rule that i^2 = -1. Dual numbers work in the same way, except that \epsilon^2 = 0, where i is replaced by \epsilon. Applying dual numbers to scalarFunctionOne:

    \begin{align*} f\left(x+\epsilon\right) &= 2\left(x+\epsilon\right)^2 + 3\left(x+\epsilon\right) + 1 \\ &= 2\left(x^2 + 2x\epsilon + \epsilon ^2\right) + 3\left(x+\epsilon\right) + 1 \\ &= 2\left(x^2 + 2x\epsilon\right) + 3\left(x+\epsilon\right) + 1 \\ &= 2 x^2 + 3x + 1 + \left(4x+3\right)\epsilon . \end{align*}

We see that the coefficient of \epsilon is equal to the derivative of the function! One way dual numbers can be implemented in programming is to define a dual number class with 2 components: a real component and a dual component. We then overload the operators on dual numbers (i.e. addition, subtraction, multiplication, etc.) such that we enforce \epsilon^2 = 0. Hence, one way to look at AD is to appreciate that no matter how complex a function is, it can be broken down into a collection of simpler operations that we can overload with dual numbers.
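To make this concrete, here is a bare-bones dual number class (a sketch only, not how Eigen implements AutoDiffScalar internally) applied to the same polynomial as scalarFunctionOne:

#include <iostream>

// Minimal dual number: the dual component carries the derivative, and the
// overloaded operators enforce epsilon^2 = 0 through the product rule.
struct Dual {
  double real;  // function value
  double dual;  // derivative (coefficient of epsilon)
};

Dual operator+(Dual a, Dual b) { return {a.real + b.real, a.dual + b.dual}; }
Dual operator*(Dual a, Dual b) {
  // (a_r + a_d*eps)(b_r + b_d*eps) = a_r*b_r + (a_r*b_d + a_d*b_r)*eps,
  // since the eps^2 term vanishes.
  return {a.real * b.real, a.real * b.dual + a.dual * b.real};
}
Dual operator*(double c, Dual a) { return {c * a.real, c * a.dual}; }
Dual operator+(Dual a, double c) { return {a.real + c, a.dual}; }

// Same polynomial as scalarFunctionOne: f(x) = 2x^2 + 3x + 1.
Dual f(Dual x) { return 2.0 * x * x + 3.0 * x + 1.0; }

int main() {
  Dual x{1.0, 1.0};  // seed the dual part with 1 to differentiate w.r.t. x
  Dual y = f(x);
  std::cout << "value: " << y.real << ", derivative: " << y.dual << std::endl;  // 6 and 7
  return 0;
}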

Templates supercharged

The toughest challenge I faced with AD was structuring the code to reduce repetition. I had created a base class for the various flux calculations of the different boundary conditions (BC), and their input and output array sizes are determined at runtime depending on whether the problem is 2D or 3D. These flux calculators are also dependent on a Gas class which I created to handle the thermodynamic relations (i.e. perfect gas). An example is:

class FluxRoe : public Flux {
  
  public:
  
    void initialize(Config * config, Gas * gas, Mesh * mesh) 
    {
      // Save pointers to required information. Gas class used in
      // vector function, so it also needs to be a template class
      // for AD.
    }
    
    // Vector function
    void computeFlux
    (
      double const * leftVariable, 
      double const * rightVariable, 
      double const * normal,
      double * flux
    ) 
    {
      // Array sizes determined at runtime. Output to flux.
    }
    
  protected:
  
    Gas * gas_;
      
};

Eigen provides a convenient AutoDiffJacobian class, which calculates the Jacobian of your vector function without the need to manually define the AutoDiffScalar like we did previously. The AutoDiffJacobian class required me to structure my vector function in a certain way. Namely, I needed the vector function to be called via the () operator, and several types and constants had to be defined. To add to the woes, I would definitely need a base class for the flux Jacobians. An example of a vector function to be used with AutoDiffJacobian is:

template <short N_IN, short N_OUT>
class VectorFunction {
  
  public:
  
    // Eigen needs this to determine no. of variables at compile time
    enum 
    {
      InputsAtCompileTime = N_IN,
      ValuesAtCompileTime = N_OUT
    };
    
    // Also needed by Eigen
    typedef Eigen::Matrix<double, N_IN, 1> InputType;
    typedef Eigen::Matrix<double, N_OUT, 1> ValueType;
  
    // Vector function
    template <typename T>
    void operator()
    (
      const Eigen::Matrix<T, N_IN, 1>& vIn, 
      Eigen::Matrix<T, N_OUT, 1>* vOut
    ) const 
    {
      // Compute function with vIn and output to vOut.
    }
    
};

int main () {

  Eigen::Matrix<double, 3, 1> vIn;
  Eigen::Matrix<double, 3, 1> vOut;
  Eigen::Matrix<double, 3, 3> mJacobian;

  Eigen::AutoDiffJacobian< VectorFunction<3, 3> > vectorFunAD;
  
  // Set values in vIn, vOut and mJacobian...
  
  vectorFunAD(vIn, &vOut, &mJacobian); // Voila! jacobian is in mJacobian.
  
  return EXIT_SUCCESS;
  
}

With the 2 code snippets above, we see the disconnect between what we have and what is required. Just thinking back on these issues makes my head feel like it’s about to explode. I spent a good 3 weeks coming up with a satisfactory solution (at least to me). The initial thought was to copy and paste code into new Jacobian classes for each individual BC and number of dimensions, but that would be a clumsy solution since many chunks of code would be repeated. For a more acceptable solution, I had to turn to static polymorphism and make use of template template parameters (yes, the word template is used twice).

The first major step was to make the Flux and Gas classes into template classes, i.e.

template<typename T>
class Flux {
  public:
    void computeFlux
    (
      T const * leftVariable, 
      T const * rightVariable, 
      double const * normal,
      T * flux
    ) 
    { 
      //... 
    }
  protected:
    Gas<T> * gas_;
};

The next major step was to apply static polymorphism to a wrapper class for Eigen:

template< template<typename> typename F, short N_LEFT, short N_RIGHT, short N_OUT >
class FluxWrapper {
  
  public:
    
    enum 
    {
      InputsAtCompileTime = N_LEFT+N_RIGHT,
      ValuesAtCompileTime = N_OUT
    };
    typedef Eigen::Matrix<double, InputsAtCompileTime, 1> InputType;
    typedef Eigen::Matrix<double, ValuesAtCompileTime, 1> ValueType;
    
    void setNormal (double const * normal) {
      normal_ = normal;
    }
    
    template <typename T>
    void operator()
    (
      const Eigen::Matrix<T, InputsAtCompileTime, 1>& vIn, 
      Eigen::Matrix<T, N_OUT, 1>* vOut
    ) const 
    {
      F<T> f; // Flux calculator instantiated with AutoDiff variable
      
      f.computeFlux(
        &( vIn[0] ),
        &( vIn[N_LEFT] ),
        normal_,
        &( (*vOut)[0]) );
      
    }
    
  protected:
  
    double const * normal_;
    
};

The various wrapped flux calculators are then created with:

typedef FluxWrapper<FluxRoe, 4, 4, 4> FluxRoeWrappedTwoDim;
typedef FluxWrapper<FluxRoe, 5, 5, 5> FluxRoeWrappedThreeDim;

Finally, I use static polymorphism again to create my individual jacobian classes which are derived from a base class:

template <typename F> 
class FluxJacobianWrapper : public FluxJacobian {
  public:
    void computeJacobian
    (
      double const * leftVariable,
      double const * rightVariable,
      double const * normal,
      double * leftJacobian,
      double * rightJacobian
    )
    {
      Eigen::Matrix<double, F::InputsAtCompileTime, 1> vIn;
      Eigen::Matrix<double, F::ValuesAtCompileTime, 1> vOut;
      Eigen::Matrix<double, F::ValuesAtCompileTime, F::InputsAtCompileTime> mJacobian;
    
      Eigen::AutoDiffJacobian<F> fluxAD; // Create flux class for AD
      
      fluxAD.setNormal(normal);
      
      // Map double array to Eigen storage type (MatrixXd etc.)...
      
      fluxAD(vIn, &vOut, &mJacobian);
      
      // Map jacobian values to double array...
      
    }
};

The flux jacobians are used by:

typedef FluxJacobianWrapper<FluxRoeWrappedTwoDim> FluxJacobianRoeTwoDim;
typedef FluxJacobianWrapper<FluxRoeWrappedThreeDim> FluxJacobianRoeThreeDim;

Great! Now I just need to code each combination that may arise once, and the appropriate code will be generated for me.

Characteristic form of the Euler equations

When I first learnt about eigenvalues and eigenvectors, I never thought that they would actually be useful. Nowadays I can’t seem to avoid them even if I wanted to – Linear solvers? Don’t forget about the spectral radius! Control systems? Can’t go without the natural frequencies! Euler equations? Guess where the wavespeeds come from!

Eigenvalues and eigenvectors intuitively

In order to understand the role of eigenvalues and eigenvectors in fluid dynamics, let’s turn to its sibling, structural dynamics, for a while. The canonical introductory example in structural dynamics is the problem of 2 weights connected by springs, as shown in the diagram I pulled from here.

The system can be described with constant mass and stiffness matrices by \mathbf{M}\ddot{\vec{x}} + \mathbf{K} \vec{x} = 0 when no forces are applied. \mathbf{K} has off-diagonal terms, so the system is coupled. Rearranging, we get \ddot{\vec{x}} + \mathbf{M}^{-1}\mathbf{K} \vec{x} = 0. By diagonalizing the matrix, one gets:

    \begin{align*} \ddot{\vec{x}} + \mathbf{Q} \mathbf{\Lambda} \mathbf{Q}^{-1} \vec{x} &= 0 \\ \mathbf{Q}^{-1} \ddot{\vec{x}} + \mathbf{\Lambda} \mathbf{Q}^{-1} \vec{x} &= 0 \\ \ddot{\vec{y}} + \mathbf{\Lambda} \vec{y} &= 0 . \end{align*}

We have made the transformation \vec{y} = \mathbf{Q}^{-1}\vec{x}. Since \mathbf{\Lambda} is a diagonal matrix, we get an uncoupled system of equations which is now trivial to solve. By projecting the state vector onto the eigenvectors, we are working with components that are independent of each other. Collectively, these independent components give the system its characteristics, like mode shapes in beams or principal components of a reduced system. Note that I am using the term characteristic very loosely here, since we are just trying to get an intuitive feel.
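As a small numerical sketch (the two equal masses and three equal springs here are placeholders, not the exact system in the diagram), the eigendecomposition of \mathbf{M}^{-1}\mathbf{K} hands us the squared natural frequencies and mode shapes directly:

#include <iostream>
#include <Eigen/Dense>

int main() {
  // Hypothetical two-mass, three-spring system: equal masses m, equal springs k.
  const double m = 1.0, k = 1.0;
  Eigen::Matrix2d M, K;
  M << m, 0,
       0, m;
  K << 2 * k,    -k,
          -k, 2 * k;

  // Uncoupling x'' + M^{-1}K x = 0: the eigenvalues of M^{-1}K are the squared
  // natural frequencies, the eigenvectors are the mode shapes.
  Eigen::EigenSolver<Eigen::Matrix2d> es(M.inverse() * K);
  std::cout << "omega^2 = " << es.eigenvalues().real().transpose() << std::endl;  // k/m and 3k/m
  std::cout << "mode shapes:\n" << es.eigenvectors().real() << std::endl;
  return 0;
}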

Fluid dynamics

So now that we’ve seen how to uncouple a system of equations, we can simply apply the same method to the fluid equations and we’re done! Simple, no? Okay, let’s see where that leads us with the Euler equations. Consider the one-dimensional case with conserved variables \vec{W}:

    \begin{gather*} \frac{\partial \vec{W}}{\partial t} + \frac{d \vec{F}}{d \vec{W}} \frac{\partial \vec{W}}{\partial x} = \frac{\partial \vec{W}}{\partial t} + \mathbf{A} \frac{\partial \vec{W}}{\partial x} = 0 \quad , \quad \mathbf{A} = \mathbf{T}\mathbf{\Lambda}\mathbf{T}^{-1} \\ \Rightarrow \frac{ \mathbf{T}^{-1} \partial \vec{W} }{\partial t} + \mathbf{\Lambda} \frac{ \mathbf{T}^{-1} \partial \vec{W} }{\partial x} = 0. \end{gather*}

This leads us to the characteristic form with d\vec{C} = \mathbf{T}^{-1} d\vec{W}. Note that \mathbf{T} is not a constant matrix, so we cannot simply bring it inside the differential operator. The characteristic form then looks like:

    \begin{align*} \frac{\partial \vec{C} }{\partial t} + \mathbf{\Lambda} \frac{\partial \vec{C} }{\partial x} = 0. \end{align*}

Despite the simple-looking form of the equation, the set of equations in full is:

    \begin{align*} \left(\frac{\partial \rho}{\partial t} - \frac{1}{a^2}\frac{\partial P}{\partial t}\right) + u \left( \frac{\partial \rho}{\partial x} - \frac{1}{a^2}\frac{\partial P}{\partial x} \right) &= 0 \\ \left( \frac{\partial u}{\partial t} + \frac{1}{\rho a}\frac{\partial P}{\partial t} \right) + \left(u+a\right)\left(\frac{\partial u}{\partial x} + \frac{1}{\rho a}\frac{\partial P}{\partial x} \right)&= 0 \\ \left( \frac{\partial u}{\partial t} - \frac{1}{\rho a}\frac{\partial P}{\partial t} \right) + \left(u-a\right)\left(\frac{\partial u}{\partial x} - \frac{1}{\rho a}\frac{\partial P}{\partial x} \right)&= 0 . \end{align*}

It’s still not too bad: we now have 3 advection equations which obey:

    \begin{align*} d\rho - \frac{dP}{a^2} = 0 \quad &for \quad \frac{dx}{dt} = u \\ du + \frac{dP}{\rho a} = 0 \quad &for \quad \frac{dx}{dt} = u+a \\ du - \frac{dP}{\rho a} = 0 \quad &for \quad \frac{dx}{dt} = u-a . \end{align*}

Integrating the left hand side of each equation gives us the characteristic variables, which remain constant along the curve dx/dt = \lambda, where \lambda is either u, (u+a) or (u-a). These curves are called wave fronts since they are the paths along which the variables propagate. Let’s try to get the characteristic variable for the middle equation, for example:

    \begin{align*} \int du + \int \frac{dP}{\rho a} = const \quad &for \quad \frac{dx}{dt} = u+a \\ u + \int \frac{dP}{\rho a} = const \quad &for \quad \frac{dx}{dt} = u+a . \end{align*}

It seems like we’ve run into some trouble with the integration of the second term on the left. What about applying the perfect gas relation?

    \begin{align*} \int \frac{dP}{\rho a} = \sqrt{\frac{R}{\gamma}} \int \frac{dP}{P / \sqrt{T}}. \end{align*}

No dice. In fact, for the characteristic form of the Euler equations, only the first equation can be integrated, and it relates to the entropy. The other characteristic variables do not have a physical manifestation. In addition, the wave speeds u, (u+a) and (u-a) are themselves dependent on the solution, which does not lend itself to the simple solution we had previously hoped for.

So we’ve gone through all this effort for nothing? Well, not really. The characteristic variables may not be integrable into something physical, but at least the wave fronts tell us something. For this one-dimensional problem, we see that information in the flow moves with wave speeds u, (u+a) and (u-a).

Now imagine for a moment that you are swimming alone in a large pool. As you pull your strokes, you generate ripples in the water. Looking ahead, you have a wave moving forward away from you, and if you were to look back, you would also see a wave moving away behind you. Outside of the ripple, the water is still, but within the ripple, the water has been influenced by your presence. Suppose a friend were to join you a little later and he has caught up with your backward-propagating ripple. You can imagine that the shape of his ripples will be affected by your presence, since they depend on the state of the water surrounding him before he pulls his strokes.

This is very roughly what the characteristics tell us about the fluid. The characteristic variables are like the ripples, and they are broadcast at the speed of sound relative to the flow (i.e. (u+a) and (u-a)), as well as at the actual flow speed u. For a subsonic forward flow (u < a), we have 2 characteristics moving forward and one moving back. For example, when there is a change in area in a 1D flow, the air particles where the change occurs send 2 characteristics downwind and one upwind. It is this one characteristic being sent at (u-a) that allows the flow upwind of the area change to adjust gradually, accommodating the change in area.

Another point to note here, as was already mentioned, is that \mathbf{T} is not constant. Hence we run into difficulties integrating the characteristic variables. If we can approximate it with a constant \mathbf{T}_0, we have, for the second component of \vec{C}:

    \begin{gather*} c_2 = \int du + \frac{1}{\rho_0 a_0} \int dP = u + \frac{P}{\rho_0 a_0} = const \\ \Rightarrow \Delta c_2 = \Delta u + \frac{\Delta P}{\rho_0 a_0} . \end{gather*}

This will come in handy when we look at flux calculations later.

The Euler equations

    \begin{align*} \frac{\partial \rho \vec{u}}{\partial t} + \nabla\cdot\left(\rho\vec{u}\otimes\vec{u}\right) = -\nabla P + \nabla \cdot \overline{\overline{\tau}} \end{align*}

This is the momentum equation of the Navier-Stokes equations. Having been around since the early 1800s, this simple one-liner is the seed of all that we do in CFD. What is this equation telling us?

Very roughly, the first term on the left governs the time evolution of the flow for a box fixed in space. The next term to its right is the convective term; it makes the equations nonlinear, which is also what causes shock waves to form in supersonic flows. This term is responsible for the very elaborate schemes unique to CFD, which can be a nightmare for the coder to implement. However, it is also where we see some beautiful solutions being introduced.

On the right hand side, the first term is the hydrostatic (pressure) stress term, which is related to volume change. The next term is the deviatoric stress tensor, which is related to shape change. So for a unit cube, the pressure term will cause it to be squashed in directions normal to its faces, whereas the deviatoric stress tensor will cause it to change its shape, for example into a parallelepiped. This rightmost term also relates the motion of the fluid particles to diffusion.

Dimensionally speaking, for a Newtonian fluid, these equations boil down to 2 parameters: the Reynolds number and the Mach number. The dependence on the Reynolds number stems from the deviatoric stress tensor, which translates to the shear stresses in a viscous fluid. In the transitional or turbulent regime, one would have to employ some kind of stochastic measure due to the chaotic nature of the flow. The challenges faced in the field of turbulence modelling need no introduction, and it is one area in CFD where active research is still ongoing.

As such, I chose to work with the Euler equations for my own CFD code. By cutting out the empiricism involved in turbulence modelling, we are working with something a bit more “clean” mathematically. We can still get to the interesting shock waves and some useful engineering figures (like the lift coefficient) from the Euler equations. The program architecture for such a code would be the same as one solving the full NS equations, with the handling of the source term being the only difference.