The Transition Series: Part I – Mathematical Perspective

MOTIVATION

MY FIRST POST IN A LONG SERIES ABOUT THE PHENOMENON OF TRANSITION – SO EXCITED…

This series will certainly be the most important set of posts I’ve ever written (I’m focused, but it will take time…). It is based on my own messy scribbled notes and on the very recent advances published by the Stanford University Center for Turbulence Research: novel computations of the Spatial Perturbation Equation (SPE) with Mean Flow Distortion (MFD) forcing terms added. The aim is to move from modes of the boundary-layer base flow to modes of the flow distorted by the MFD, obtained through a streamwise marching procedure – or, in other words, to find pseudomodes of the original undisturbed base flow, which create bypass transition at a much higher disturbance level and a much lower Reynolds number than we would expect from natural transition. What I mean by “in other words” is adding to the SPE+MFD picture the exceptionally formal mathematical point of view of the phenomenon developed in the work of Trefethen and Embree.
I’m also planning to introduce the “non-linear normality vs. non-normal linearity” debate, and why it arises given that the Reynolds–Orr equation implies the growth mechanism must essentially be linear, with a toy problem showing the non-linear “bootstrapping” procedure that produces growth through redistribution of energy… And then a whole lot more…

SO… Let’s dive right in…

 

PART I: Nonmodal instability of PDE discretizations – Transient growth from a mathematical standpoint

 

All cool, but let’s start with some simple, well-known linear algebra background of the modal (eigenvalue) kind:

What exactly do eigenvalues offer that makes them useful for so many  problems? We believe there are three principal answers to this question,  more than one of which may be important in a particular application.

  1. Diagonalization and separation of variables: use of the eigenfunctions as a basis. One thing eigenvalues may accomplish is the decoupling of a problem involving vectors or functions into a collection of problems involving scalars, which may make subsequent computations easier. For example, in Fourier’s problem of heat conduction in a solid bar with zero temperature at both ends, the eigenmodes are sine waves that decay independently as functions of time. If an arbitrary initial temperature distribution is expanded as a sum of these sine waves, then the solution at a later time can be calculated by summing the components of the expansion (see the sketch right after this list).

 

  2. Resonance: heightened response to selected inputs. Diagonalization is an algorithmic idea; the other uses of eigenvalues are more physical. One is the analysis of the phenomenon of resonance, perhaps most familiar in the context of vibrating strings, drums, and mechanical structures. Any visitor to a science museum has seen demonstrations showing that certain systems respond preferentially to vibrations at special frequencies. These frequencies are the eigenvalues of the linear or linearized operator that governs the system in question, and the form of the response is associated with the corresponding eigenfunctions. Examples of resonance are familiar: one thinks of soldiers breaking step as they cross bridges; of the less fortunate Tacoma Narrows Bridge in the 1940s, whose collapse was initiated by a wind-induced flow oscillation too close to a structural eigenfrequency; of buildings and their response to the vibrations of earthquakes; of the tuning of a radio receiver; and of the ear. The last two also illustrate the wide range of complexity in applications of eigenvalue ideas, for the radio problem is straightforward and almost perfectly linear, whereas the ear is a complicated nonlinear system, not yet fully understood, for which eigenmodes are only a crude first step.

 

  3. Asymptotics and stability: dominant response to general inputs. A related application of eigenvalues is to questions of the form: what will happen as time elapses (or, in the extreme, t → ∞) to a system that has experienced some more or less random disturbance? Fourier’s heat problem is an awesome example: whatever the shape of the initial temperature distribution, the higher sine waves decay faster than the lowest one, and therefore almost any initial distribution will eventually come to look like the half-wavelength sine with zeros just at the two ends of the interval.
    Sometimes the crucial issue is a question of stability: are there modes that grow rather than decay with time?
    For example, in fluid mechanics a standard technique to determine whether small perturbations to a laminar flow will be amplified into large ones – which may then trigger the onset of turbulence – is to calculate whether the eigenvalues of the system all lie in the left half of the complex plane. (This is of course Orr–Sommerfeld/Squire analysis.)


We may think of a fourth reason, too, for the success of eigenvalues:

  4. They give a matrix a personality. We humans like images; our brains are specially adapted to interpret them. Eigenvalues enable us to take the abstraction of a matrix or linear operator, for whose analysis we possess no hardwired talent, and portray it as a picture.
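
To make point 1 (and the dominant-mode behaviour of point 3) concrete, here is a minimal Python sketch of Fourier’s bar, assuming a unit-length bar with unit diffusivity and an arbitrary initial profile of my own choosing: expand the initial temperature in the sine eigenmodes, let each mode decay at its own rate, and sum the components back up.

```python
import numpy as np

# Heat conduction u_t = u_xx on [0, 1] with u(0, t) = u(1, t) = 0.
# Eigenmodes: sin(m*pi*x), each decaying like exp(-(m*pi)^2 * t).

x = np.linspace(0.0, 1.0, 201)
u0 = np.where((x > 0.2) & (x < 0.4), 1.0, 0.0)      # arbitrary initial temperature

M = 50                                               # number of modes kept
m = np.arange(1, M + 1)
phi = np.sin(np.outer(m, np.pi * x))                 # eigenfunctions, shape (M, len(x))

# Fourier sine coefficients b_m = 2 * integral of u0(x) sin(m*pi*x) dx (simple Riemann sum)
b = 2.0 * (phi @ u0) * (x[1] - x[0])

def u(t):
    """Temperature at time t: every mode decays independently, then sum them up."""
    return (b * np.exp(-(m * np.pi) ** 2 * t)) @ phi

# As t grows, the slowest mode sin(pi*x) dominates, whatever u0 looked like:
for t in (0.0, 0.01, 0.1):
    print(f"t = {t}:  max |u| = {np.max(np.abs(u(t))):.4f}")
```

By t = 0.1 the higher modes have essentially died away and the profile is just the slowly decaying half-wavelength sine, whatever shape we started from.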

 

Nevertheless, there is a class of problems for which eigenvalue analysis fails miserably. These are mostly problems involving matrices (or, as we more often meet them in our fluid dynamics domain, operators) whose matrix of eigenvectors V – if it exists at all – has an inverse of very large norm, ‖V⁻¹‖ ≫ 1 (with the eigenvectors normalized); on a more formal mathematical note, we prefer to say that V has a large “condition number” κ(V) = ‖V‖‖V⁻¹‖ in the chosen norm.

Easily put, if we work in the 2-norm of A, i.e. ‖A‖₂ = max_{x≠0} ‖Ax‖₂/‖x‖₂, there are two extremes:

  1. The condition κ₂(V) ≫ 1 is equivalent to the condition that the eigenvectors of A (if they even exist) are far from orthogonal.
  2. At the other end of the spectrum are the normal matrices – Hermitian matrices in particular – which possess a complete set of orthogonal eigenvectors, so that κ₂(V) = 1.
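
A tiny numerical illustration of the two extremes above – the matrices are made-up 2×2 toys, not taken from any flow problem: both have the same stable eigenvalues, but the non-normal one has far-from-orthogonal eigenvectors, a large κ(V), and a transient hump in ‖exp(tA)‖ that the eigenvalues alone never hint at.

```python
import numpy as np
from scipy.linalg import expm, svdvals

def kappa(V):
    """2-norm condition number of the eigenvector matrix V."""
    s = svdvals(V)
    return s[0] / s[-1]

A_normal    = np.array([[-1.0,  0.0], [0.0, -2.0]])   # Hermitian: orthogonal eigenvectors
A_nonnormal = np.array([[-1.0, 50.0], [0.0, -2.0]])   # same eigenvalues, nearly parallel eigenvectors

for name, A in [("normal", A_normal), ("non-normal", A_nonnormal)]:
    lam, V = np.linalg.eig(A)
    hump = max(np.linalg.norm(expm(t * A), 2) for t in np.linspace(0.0, 5.0, 501))
    print(f"{name:11s} eigenvalues = {lam},  kappa(V) = {kappa(V):9.2f},  max_t ||exp(tA)|| = {hump:6.2f}")
```

Both matrices are “eigenvalue-stable”, yet the non-normal one amplifies a suitable initial condition by an order of magnitude before the eventual decay – exactly the kind of behaviour we are about to meet in shear flows.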

In the first case, where the eigenvectors are far from orthogonal, an eigenvalue analysis may well fail. And although there are many examples in which eigenvalue analysis is a productive means of investigation, it is in our own fluid mechanics that this failure becomes apparent in some of the most important problems (among my favourite references: Böberg & Brosa 1988; Reddy & Henningson 1993; Trefethen et al. 1993).

 




In this section we describe the phenomenon of nonmodal instability of certain discretizations of time-dependent partial differential equations. In the next part, we will present theorems that characterize discretizations free of this effect.


The standard technique for explaining the instability of finite difference formulas was developed by von Neumann and others and described in a 1951 paper of O’Brien, Hyman, and Kaplan. ‘Von Neumann analysis’ is another term for discrete Fourier analysis. One begins by noting that if we ignore the complication of boundary conditions and imagine that the domain is unbounded, then any initial condition for the finite difference formula can be written as a superposition of waves, each of which evolves under the scheme as

uⱼⁿ = λⁿ e^(i k j Δx),

for real wave numbers k and corresponding amplification factors (i.e., eigenvalues) λ = λ(k). If |λ(k)| > 1 for some k, we have exponential growth and instability.
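
As a concrete, entirely standard example (my own choice of scheme, not the one behind the figures further down), here is the von Neumann calculation for the forward-time centred-space (FTCS) discretization of the heat equation uₜ = uₓₓ: substituting the wave ansatz into the scheme gives λ(k) = 1 − 4r sin²(kΔx/2) with r = Δt/Δx², so |λ(k)| ≤ 1 for every wave number exactly when r ≤ 1/2.

```python
import numpy as np

# Von Neumann analysis of FTCS for u_t = u_xx:
# substituting u_j^n = lam^n * exp(i*k*j*dx) into
#   u_j^{n+1} = u_j^n + r * (u_{j+1}^n - 2*u_j^n + u_{j-1}^n),   r = dt/dx**2,
# gives the amplification factor lam(k) = 1 - 4*r*sin(k*dx/2)**2.

def amplification(r, k_dx):
    return 1.0 - 4.0 * r * np.sin(k_dx / 2.0) ** 2

k_dx = np.linspace(-np.pi, np.pi, 401)               # resolvable wave numbers k*dx
for r in (0.25, 0.5, 0.6):
    worst = np.max(np.abs(amplification(r, k_dx)))
    print(f"r = {r}:  max_k |lambda(k)| = {worst:.3f}  ->  {'stable' if worst <= 1.0 else 'unstable'}")
```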

One thing should be very clear, though: convergence of PDE discretizations depends on norm-boundedness of families of matrices.

Let’s explain how von Neumann analysis fits into the general theory of Lax stability:

The famous Lax Equivalence Theorem states that if the discrete approximation is consistent – meaning that it approximates the right PDE as Δx → 0 and Δt → 0 – then convergence ⇐⇒ stability. Here ‘stability’ means that the solution operators are uniformly bounded as the time and space grid sizes approach zero.

 

A priori, the question of stability requires the analysis of families of matrices, and eigenvalue analysis alone could never give bounds on norms of powers of arbitrary families of matrices. In the special case of constant-coefficient problems on regular grids, however, the Fourier transform takes what would be families of matrices of unbounded dimensions in space into families of matrices of a fixed dimension, indexed over wave numbers. The transformation is unitary, and as a consequence, eigenvalue analysis of the resulting matrices is enough to ensure stability. For practical problems involving boundaries or variable coefficients, further theorems have been proved to show that von Neumann analysis still gives the correct results provided certain additional assumptions are satisfied, such as smoothness of the coefficients.
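
To see what “norm-boundedness of a family of matrices” means in practice, here is a sketch that builds the FTCS iteration matrix S_N with zero boundary values for several N (again a toy of my own choosing, and a symmetric, hence normal, matrix, so here the eigenvalues really do tell the whole story) and checks whether maxₙ ‖S_Nⁿ‖ stays uniformly bounded as the grid is refined with r = Δt/Δx² held fixed.

```python
import numpy as np

def ftcs_matrix(N, r):
    """FTCS iteration matrix for u_t = u_xx on N interior grid points,
    zero boundary values, with r = dt/dx**2 held fixed as N grows."""
    return (1.0 - 2.0 * r) * np.eye(N) + r * (np.eye(N, k=1) + np.eye(N, k=-1))

def max_power_norm(S, nmax=200):
    """max over 0 <= n <= nmax of ||S^n||_2, a crude proxy for sup_n ||S^n||."""
    best, P = 1.0, np.eye(len(S))
    for _ in range(nmax):
        P = P @ S
        best = max(best, np.linalg.norm(P, 2))
    return best

for r in (0.4, 0.6):                                  # stable / unstable refinement paths
    norms = [max_power_norm(ftcs_matrix(N, r)) for N in (10, 20, 40)]
    print(f"r = {r}:  max_n ||S_N^n|| for N = 10, 20, 40  ->", np.round(norms, 2))
```

With r = 0.4 the norms stay at 1 for every N (a Lax-stable family); with r = 0.6 they blow up faster and faster as N grows, which is exactly the family-of-matrices failure that the word ‘stability’ refers to.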

On the other hand, there are some discretizations of PDEs that are  fundamentally not translation-invariant. For these, von Neumann analysis is inapplicable, and instabilities may appear that are nonmodal in nature.

 

Let’s have a look at an example:

[Figures: the maximal norm of the powers of S as a function of N for various choices of Δt, and the pseudospectra of S for N = 20, Δt = 0.4N⁻¹]

The first figure reveals that if Δt = O(N⁻²) as N → ∞, then the maximal norm, though possibly large, is uniformly bounded for all N. The discretization is Lax-stable, and the numerical solution will converge to the exact solution in the absence of rounding errors. If, on the other hand, Δt decreases more slowly than O(N⁻²) as N → ∞ (i.e. ΔtN² → ∞), there will be Lax-instability and no convergence. In particular, a choice such as Δt = 0.4N⁻¹ will be catastrophic, even though the eigenvalues in that case remain inside the unit disk for all N. The second figure shows the pseudospectra of S for the particular choice N = 20 and Δt = 0.4N⁻¹ = 8N⁻² (thus S has dimension 60). Around most of the unit circle, the resolvent norm is of modest size, but in the region z ≈ −1 it takes values beyond 10⁶, making it clear that there must be large transient growth. Since the boundary of the 10⁻⁶-pseudospectrum crosses the unit circle, the resolvent bound behind the Kreiss matrix theorem tells us that ‖Sⁿ‖ must reach very large values for some n, even though every eigenvalue lies strictly inside the unit disk.
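
The actual matrix S behind these figures is not reproduced here, so as a stand-in here is how one computes the two quantities the figures show – the norms of the powers ‖Sⁿ‖, and the resolvent norm ‖(zI − S)⁻¹‖ whose level curves are the pseudospectra – for a small toy non-normal matrix whose eigenvalues sit safely inside the unit disk.

```python
import numpy as np

# Toy stand-in for S: eigenvalues 0.9 and 0.8 (inside the unit disk),
# strongly non-normal because of the large off-diagonal coupling.
S = np.array([[0.9, 100.0],
              [0.0,   0.8]])

# (a) Powers of S: large transient growth followed by eventual decay.
P, norms = np.eye(2), []
for n in range(1, 201):
    P = P @ S
    norms.append(np.linalg.norm(P, 2))
print(f"max_n ||S^n|| = {max(norms):.1f} at n = {int(np.argmax(norms)) + 1},   ||S^200|| = {norms[-1]:.2e}")

# (b) Resolvent norm ||(zI - S)^{-1}|| = 1 / sigma_min(zI - S); the
#     eps-pseudospectrum is the region of the z-plane where this exceeds 1/eps.
for z in (1.05, -1.05, 1.0 + 0.5j):
    smin = np.linalg.svd(z * np.eye(2) - S, compute_uv=False)[-1]
    print(f"z = {z}:  ||(zI - S)^-1|| = {1.0 / smin:.2e}")
```

Even this 2×2 toy shows the signature of the real thing: the resolvent norm is modest around most of the unit circle but reaches the thousands near one particular region outside it (here near z ≈ 1 rather than z ≈ −1), and correspondingly ‖Sⁿ‖ climbs to a few hundred before it finally decays, even though the eigenvalues alone would have suggested nothing but decay.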

 

The main question that should be asked is: could one use a discretization of this kind for large-t simulations, since the instability is transient and dies away eventually? At a glance it might seem so, but the instability can only be expected to be transient for a purely constant-coefficient linear problem in the absence of rounding errors. In other words, as soon as variable coefficients, nonlinearities, or other perturbations are introduced, the loss of convergence is likely to become global.



“All About CFD” – Tom’s Blog: The All Inclusive Turbulence Guide – Call to join 🤍🤍🤍