# Large Eddy Simulation – The Challenge of Filters That Commute With Differentiation – PART I – HOPE

### Background

Computational Fluid Dynamics (CFD) has made tremendous progress over the past half century. Moore's Law vision of exponential growth in computational resources has lived up to its expectation, and it is predicted to keep doing so (at least) for the next 20 years.

*(Figure: Moore's Law applied to CFD)*

Scientists and engineers have developed flowfield models of many levels of fidelity. On the ladder of CFD one may find many rungs:

- Lifting-Surface Methods, which model only the camber lines of lifting surfaces, not the thickness; the vortex wakes must of course be paneled.
- Linear Panel Methods, which solve either the incompressible potential-flow equation or one of the versions applicable to compressible flow with small disturbances.
- Nonlinear Potential Methods, where the velocity is represented as the gradient of a potential, as in incompressible potential flow, with nonlinearity entering through an isentropic relation for the density as a function of the local Mach number.
- Euler Methods, solving the Navier-Stokes equations with the viscous and heat-conduction terms omitted.
- Coupled Viscous/Inviscid Methods, solving the boundary-layer equations in the inner, near-wall region, matched to an inviscid flow calculation in the outer region.

One huge leap forward was achieved through the ability to run Navier-Stokes methods such as Reynolds-Averaged Navier-Stokes (RANS).

### Reynolds Averaged Navier-Stokes Equations (RANS)

RANS is based on the Reynolds decomposition, according to which a flow variable is decomposed into mean and fluctuating quantities. When the decomposition is applied to the Navier-Stokes equations, an extra term known as the Reynolds stress tensor arises, and a modelling methodology is needed to close the equations. The "closure problem" is apparent: as higher and higher moments of the set of equations are taken, more unknown terms arise, and the number of equations never suffices.

*(Figure: The Reynolds stress tensor)*

Levels of RANS turbulence modelling are related to the number of differential equations added to Reynolds Averaged Navier-Stokes equations in order to “close” them.

0-equation (algebraic) models are the simplest form of turbulence model: a turbulence length scale is specified in advance, based on experiment. 0-equation models are very limited in application as they fail to take history effects into account, assuming turbulence is dissipated where it is generated, a direct consequence of their algebraic nature.
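The algebraic nature of a 0-equation model can be made concrete with a minimal sketch of the Prandtl mixing-length idea (my own toy construction; the constant and the shear values are illustrative, not taken from any specific code):

```python
import numpy as np

# Prandtl mixing-length (0-equation) model: the eddy viscosity is built
# from an algebraically prescribed length scale l_m and the local mean
# shear, with no transport equation and hence no history effects at all.
KAPPA = 0.41  # von Karman constant

def mixing_length_nu_t(y, dudy):
    """Eddy viscosity nu_t = l_m^2 * |dU/dy| with l_m = kappa * y."""
    l_m = KAPPA * y
    return l_m**2 * np.abs(dudy)

# Example: the same shear dU/dy = 100 1/s at two wall distances.
nu_t_near = mixing_length_nu_t(0.001, 100.0)  # close to the wall
nu_t_far = mixing_length_nu_t(0.01, 100.0)    # farther out
```

The eddy viscosity responds only to the local, instantaneous mean shear, which is exactly why such models cannot transport turbulence away from where it is generated.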

1-equation and 2-equation models incorporate a differential transport equation for the turbulent velocity scale (or the related turbulent kinetic energy) and, in the case of 2-equation models, another transport equation for the length scale. They then invoke the "Boussinesq hypothesis", introducing an eddy viscosity analogous to its counterpart derived from the kinetic theory of gases (albeit flow-dependent rather than a fluid property) and relating it to the Reynolds stress through the mean strain.
In this sense 2-equation models can be viewed as "closed" because, unlike 0-equation and 1-equation models (with the possible exception of 1-equation models that transport the eddy viscosity itself), they possess sufficient equations for constructing the eddy viscosity with no direct use of experimental results.
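As a sketch of how the pieces fit together, here is the Boussinesq closure as used by a standard k-epsilon model (my illustration; the k, epsilon, and strain values are placeholders, and the stress is written per unit density):

```python
import numpy as np

# Boussinesq hypothesis in a 2-equation (k-epsilon) setting: once k and
# epsilon are available from their transport equations, the eddy viscosity
# is nu_t = C_mu * k^2 / epsilon, and the Reynolds stress is tied to the
# mean strain-rate tensor S_ij.
C_MU = 0.09  # standard k-epsilon model constant

def eddy_viscosity(k, eps):
    return C_MU * k**2 / eps

def reynolds_stress(k, eps, S):
    """tau_ij = 2 nu_t S_ij - (2/3) k delta_ij (per unit density)."""
    nu_t = eddy_viscosity(k, eps)
    return 2.0 * nu_t * S - (2.0 / 3.0) * k * np.eye(3)

# Example: pure shear, S_12 = S_21 = 0.5 * dU/dy with dU/dy = 10 1/s.
S = np.zeros((3, 3))
S[0, 1] = S[1, 0] = 0.5 * 10.0
tau = reynolds_stress(k=1.0, eps=2.0, S=S)
```

Note how the entire Reynolds stress tensor is forced to be aligned with the mean strain, which is precisely the limitation discussed next.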

A drawback evident in almost all eddy-viscosity models is the inability to inherently account for rotation and curvature. This drawback results from relating the Reynolds stress to the mean flow strain, and it is in fact the major difference between such a modeling approach and a full Reynolds-stress model (RSM). The RSM approach accounts for the important effect of the transport of the principal turbulent shear stress. On the other hand, RSM simulations are rarely cost-effective: the improvement in physical fidelity is seldom worth the time and computational resources consumed, and, not only that, they often do not converge.

The strength of the RANS methodology has proven itself for wall-bounded attached flows, thanks to calibration according to the law of the wall. For free shear flows, however, especially those featuring a high level of unsteadiness and massive separation, RANS has shown poor performance. This follows from its inherent limitation as a one-point closure, which cannot incorporate strong non-local effects or the long correlation distances characterizing many flows of engineering importance.
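The law-of-the-wall calibration mentioned above amounts to matching the model to the classical mean-velocity profile. A minimal sketch (my own; the constants are the commonly quoted values, which vary slightly across references):

```python
import numpy as np

# Law of the wall used to calibrate RANS models for attached wall-bounded
# flow: u+ = (1/kappa) * ln(y+) + B in the log layer, u+ = y+ in the
# viscous sublayer (y+ below roughly 5).
KAPPA = 0.41
B = 5.0

def u_plus_log_law(y_plus):
    """Non-dimensional velocity in the logarithmic layer."""
    return np.log(y_plus) / KAPPA + B

def u_plus_sublayer(y_plus):
    """Linear profile in the viscous sublayer."""
    return y_plus
```

Because the calibration target is this universal attached-flow profile, nothing in it speaks for massively separated or strongly unsteady flow, which is where RANS falls short.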

### Large Eddy Simulation

In LES the large, energetic scales are resolved, while the effect of the small, unresolved scales is modeled using a subgrid-scale (SGS) model tuned for the generally universal character of these scales. LES has severe limitations in near-wall regions, as the computational effort required to reliably resolve the innermost portion of the boundary layer (sometimes constituting more than 90% of the mesh), where the turbulence length scale becomes very small, is far beyond the resources available to industry. Anecdotally, best estimates speculate that a full LES of a complete airborne vehicle at a reasonably high Reynolds number will not be possible until approximately 2050…

Formally, LES is described by the application of a spatial filter to the Navier-Stokes equations. An explicit approach would apply a filter of some shape (be it spectral cutoff, top-hat, etc.); subsequently, a model is devised to capture the effect of the under-resolved length scales. The most common representation is a linear stress-strain relation relying on the Boussinesq hypothesis and the eddy-viscosity concept. The first, and possibly still the most popular, is the Smagorinsky model. Applying the Smagorinsky model to flows other than those it was tuned for will prove it out of its range of applicability, a consequence of its many shortcomings, fully explained in my former post That's a Big W(H)ALE, along with the remedies to overcome these shortcomings from a purely physical perspective.

Another route for modelling the effect of unresolved scales is the utilization of higher-order numerical schemes that take the role of the explicit filter, with the aim of adding dissipation only in the high-wavenumber range (small, unresolved scales) – termed Implicit LES (ILES). The first such method was MILES (F. Grinstein, who also co-authored a good book on the subject of ILES).
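For concreteness, the Smagorinsky eddy viscosity can be sketched in a few lines (my illustration; the constant and filter width are representative values, not tied to any particular solver):

```python
import numpy as np

# Smagorinsky SGS model: nu_sgs = (C_s * Delta)^2 * |S|, where
# |S| = sqrt(2 S_ij S_ij) is the magnitude of the resolved strain rate.
C_S = 0.17     # typical Smagorinsky constant for isotropic turbulence
DELTA = 0.01   # filter width, e.g. the cube root of the cell volume

def smagorinsky_nu(S):
    """S is the 3x3 resolved strain-rate tensor."""
    S_mag = np.sqrt(2.0 * np.sum(S * S))
    return (C_S * DELTA) ** 2 * S_mag

# Example: pure resolved shear.
S = np.zeros((3, 3))
S[0, 1] = S[1, 0] = 5.0
nu_sgs = smagorinsky_nu(S)
```

The fixed constant is the source of much of the trouble: any resolved strain, laminar or turbulent, produces subgrid dissipation, which is one of the shortcomings the dynamic and WALE variants address.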

Returning to Moore's law prediction, it can be assumed that LES will take an ever more vital role in the engineering design process, attractive as its level of fidelity combines the advantages of simulation with the reliability features of experiment. This allows the engineer to build confidence while extracting high-fidelity, realizable results, such that safety margins can be tightened for the few extra percentage points of optimality that are the hardest to achieve.

## Commutative Filters for LES in Complex Geometries and Unstructured Grids of Engineering Value

Application of large eddy simulation (LES) to flows with increasingly complex geometry is becoming, and shall keep becoming, an issue if LES is to be used for problems of engineering importance.

### Problem outline and motivation

Traditionally it was customary to study specific issues in LES on structured grids. This is of course a very limiting feature, and the extension of LES to unstructured meshes is unavoidable.

As it happens, there is a feature of LES on unstructured meshes which could, and should, be considered desirable: that the filtering operation used to remove small-scale perturbations from the flow commutes with the differentiation operator.
It is interesting to note that if this commutation requirement is satisfied, the LES equations have the same structure as the unfiltered Navier-Stokes equations. It is also important to note that satisfying commutation generally requires the filter to have a constant width.
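The constant-width requirement can be seen numerically in a toy 1D setting (my own construction, not from any reference implementation): on a uniform periodic grid, a three-point filter with constant weights commutes exactly with central differencing, while letting the weights, and hence the effective width, vary in space breaks commutation.

```python
import numpy as np

# Uniform periodic grid and a smooth test field.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
f = np.sin(3.0 * x)

def ddx(g):
    """Second-order central difference on the periodic grid."""
    return (np.roll(g, -1) - np.roll(g, 1)) / (2.0 * h)

def tophat(g, w):
    """Three-point filter; w is the neighbour weight (constant w = constant width)."""
    return (1.0 - 2.0 * w) * g + w * (np.roll(g, -1) + np.roll(g, 1))

w_const = np.full(n, 0.25)
w_var = 0.25 + 0.15 * np.sin(x)  # spatially varying filter "width"

err_const = np.max(np.abs(ddx(tophat(f, w_const)) - tophat(ddx(f), w_const)))
err_var = np.max(np.abs(ddx(tophat(f, w_var)) - tophat(ddx(f), w_var)))
```

With constant weights both operations are circular convolutions and commute to machine precision; with variable weights the commutation error is orders of magnitude larger.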

This immediately raises a difficulty: for inhomogeneous flows the minimum size of the eddies that need to be resolved naturally varies across the flow, demanding that the filter width follow and change accordingly.

All of the above leads to a suggestion: construct discrete, variable-width, commutative filters for LES on unstructured meshes.

### Variable Filters

Construction of families of continuous filters which commute with differentiation up to arbitrary order in the filter width has been pursued since the early seventies of the previous century. However, this set of filters always seemed to apply to infinite domains, perhaps addressing the investigation of types of such filters, but without addressing the issue of boundary conditions in a finite domain, hence not really relevant, in the larger view, for engineering applications.
In the late nineties of the last century such filters were introduced as a class of discrete filters that could be used on non-uniform but structured meshes. Understanding the meaning of structured vs. unstructured meshes, as introduced in Know Thy Mesh – Mesh Quality – Part I, means the formulation must use a mapping function to perform the filtering in the computational domain, as a structured mesh demands.

In order to generalize such a starting point for constructing discrete commutative filters for unstructured meshes, which could be utilized in the future for engineering applications in three dimensions, issues such as control of the filter width and, of course, its shape in wavenumber space must be considered in addition to the commutation issue.

## Initial groundings – mapping in structured meshes for “arbitrary” geometries

To develop a somewhat general theory for "arbitrarily complex geometries" (of course these cannot be arbitrary in as wide a sense as is possible with unstructured meshes for complex geometries, but I follow the term nonetheless), the use of mapping functions carries the filtering over to the physical domain.

Some math makes the issue more solid. We define an operator measuring the commutation error between differentiation and filtering (the overbar denotes filtering):

$$\left[\frac{d}{dx},\,\overline{(\cdot)}\right] f \;=\; \frac{d\bar f}{dx} \;-\; \overline{\frac{df}{dx}}$$

Continuous filtering itself is defined as

$$\bar f(x) \;=\; \int G\!\left(\frac{x - x'}{\Delta(x)}\right) f(x')\, \frac{dx'}{\Delta(x)}$$

where of course Δ(x) is the filter width.

Essentially, the differential equations describing the evolution of the large-scale structures are obtained from the Navier-Stokes equations by applying a low-pass filter. As the objective of LES is to retain the same structure as the NSE, we should strive for the differentiation and filtering operations to commute.
In inhomogeneous flows the wavenumber of the smallest resolved eddies can of course be higher in one region than in another, so it is immediately understood that the filter width cannot stay constant either; in other words, we must use a variable filter. Furthermore, in the general case, filtering and differentiation do not commute when the filter width is not constant in space.

One possibility to overcome this predicament is to add second-order terms in the filter width and take the leading term as the correction for the commutation error. This sounds nice, except that adding higher-order terms always introduces numerical difficulties that must be taken into account. This accounting should be as general as possible when considering a family of filters with a high-order correction of the commutation error, so that it carries over from one problem to another (for example, it would not do if the chosen family of filters can overcome the numerical difficulty added by a correction of the commutation error to an arbitrary given order, yet fails to address the specific issues of an engineering-oriented problem with additional boundary terms and a finite domain).
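A standard Taylor-expansion argument (my sketch, assuming a symmetric kernel $G$ of unit mass with vanishing first moment) shows why the correction is second order in the filter width. Writing the filter in stretched form,

$$\bar f(x) = \int G(\xi)\, f\bigl(x - \Delta(x)\,\xi\bigr)\, d\xi,$$

differentiating under the integral gives

$$\frac{d\bar f}{dx} = \overline{\frac{df}{dx}} \;-\; \Delta'(x)\int \xi\, G(\xi)\, f'\bigl(x - \Delta(x)\,\xi\bigr)\, d\xi,$$

and expanding $f'$ about $x$ yields the leading commutation error

$$\frac{d\bar f}{dx} - \overline{\frac{df}{dx}} \;\approx\; \Delta'(x)\,\Delta(x)\, M_2\, f''(x), \qquad M_2 = \int \xi^2 G(\xi)\, d\xi,$$

which indeed vanishes for a constant-width filter ($\Delta' = 0$) and motivates corrections of second order in the filter width elsewhere.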

The above description led to a dominant tendency to argue that, when complex geometries are at hand, the accommodation of the computational mesh, together with the low-pass characteristics of the discrete differencing operators, effectively acts as a filter. Simply put, what I have just described is implicit filtering, since what I defined above as an explicit filter appears nowhere in the actual solution procedure.

#### Consistency of Sub-Grid Scale Models

It is highly desirable, and possibly a step towards increased physical fidelity, for SGS models to be consistent with the Navier-Stokes equations from a mathematical and physical standpoint. Properties like symmetry requirements, near-wall scaling (the eddy viscosity falling off with the cube of the wall distance), realizability, production of turbulence kinetic energy, zero subgrid dissipation for laminar flow, consistency with the second law of thermodynamics, and others should be explored while developing new or revised SGS methodologies. The issue of consistency is raised because, although ILES has been used, and is still in use, for instances it can be valued for (and there are such), as a non-systematic filtering approach it raises problematic issues, first and foremost that of consistency. The concern prevails especially because the low-pass filtering effect of a discrete derivative operator does not act in any direction other than the specific direction in which the derivative is taken. In other words, there is no way to derive the discrete equations from a single three-dimensional filter; in actuality, the NSE are influenced by one-dimensional filters. Such indeterminateness in the foundational construction of the filter has two very problematic outcomes. The first is that validation of the results against filtered experimental data becomes impossible to a large degree. The second is that the interactions between resolved scales that produce subgrid-scale contributions, known as the Leonard stress, which should appear as the computable portion of the SGS stress, are also impossible to calculate, for the exact same reason.

Explicit filters, in contrast, are intrinsically designed, especially when derived with certain features, to attain direct control of the energy content. Without them, significant energy in the high end of the spectrum couples, without scale separation, to the nonlinear redistribution of energy in the NSE, resulting in momentous aliasing error. Moreover, the implicit filtering approach is known for its very limited ability to control numerical error. As a consequence, discrete operators become very inaccurate as far as the high-frequency content is concerned, and the deviation is especially pronounced in the dynamics of the high-wavenumber eddies. The first casualty of this behaviour is dynamic models, where the whole idea is to rely on the information contained in the highest resolved wavenumbers, since the ratio of the test filter to the primary filter serves as an input to the dynamic procedure.

### Some Possible Remedies

It is possible to add a step whose sole purpose is to damp the energy in the high-frequency end of the spectrum, which can be done by performing an explicit filtering as part of the solution process. This is by no means a perfect alleviation of the problem, since applying this explicit filter will of course reduce the resolution of the simulation. On the other hand, it relieves the dependency of the filter size on the mesh spacing. It could also be claimed that several of the numerical errors mentioned above as hampering the use of a dynamic filter through the sampled stresses can be controlled (it should be noted that such control would need to be aligned with the other filter objectives, and of course verified).
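Such an explicit filtering step can be sketched in 1D with a sharp spectral cutoff (my toy example; the field and cutoff wavenumber are arbitrary choices), illustrating how the effective filter width is decoupled from the mesh spacing:

```python
import numpy as np

# A 1D periodic field with resolved (k = 2) and high-wavenumber (k = 40)
# content on a 128-point grid.
n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(2.0 * x) + 0.3 * np.sin(40.0 * x)

def spectral_cutoff(u, k_cut):
    """Explicit filter: zero all Fourier modes with k > k_cut."""
    u_hat = np.fft.rfft(u)
    k = np.arange(u_hat.size)
    u_hat[k > k_cut] = 0.0
    return np.fft.irfft(u_hat, n=u.size)

# Damp everything above k = 10: the k = 40 component is removed,
# the k = 2 component survives intact.
u_filt = spectral_cutoff(u, k_cut=10)
```

The cutoff wavenumber is a free parameter independent of the grid, which is exactly the property that gives back control over the filter size.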

What is certain about the use of an explicit filter is that computing the interactions between resolved scales that result in subgrid-scale contributions (e.g. the Leonard stresses), and, by the same token, comparison with experimental data, become achievable, due to the fact that the shape of the filter is known exactly.
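With a known discrete filter, the Leonard term really is directly computable. A 1D scalar sketch (my own construction, using the Germano-style resolved form of the term with a three-point top-hat filter):

```python
import numpy as np

# A 1D periodic field with a large-scale and a smaller-scale component.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(8.0 * x)

def bar(g):
    """Explicit three-point top-hat filter with weights (1/4, 1/2, 1/4)."""
    return 0.5 * g + 0.25 * (np.roll(g, -1) + np.roll(g, 1))

# Leonard-type term (1D scalar analogue): interactions among resolved
# scales, L = bar(ubar*ubar) - bar(ubar)*bar(ubar). Computable exactly
# because the filter shape is known.
u_bar = bar(u)
leonard = bar(u_bar * u_bar) - bar(u_bar) * bar(u_bar)
```

Because every quantity on the right-hand side is resolved and the filter is explicit, nothing here requires modelling, in contrast to the implicit-filtering case where this term cannot even be defined.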

## In Sum

It looks like we have some hope in the idea of realizing the benefits of discrete explicit filtering, while at the same time developing discrete filtering operators that commute with numerical differentiation (to any desired order), and furthermore, as I stated in the first paragraph, discrete filtering of engineering value (meaning such that can be realized on finite, complex geometries).