*“With four parameters I can fit an elephant, and with five I can make him wiggle his trunk”* – John von Neumann

Most of today’s CFD simulations of engineering applications are conducted via the Reynolds-averaging approach. Reynolds-Averaged Navier-Stokes (RANS) simulation is based on the Reynolds decomposition, according to which a flow variable is decomposed into mean and fluctuating quantities. When the decomposition is applied to the Navier-Stokes equations, an extra term known as the *Reynolds stress tensor* arises, and a modeling methodology is needed to close the equations. The “closure problem” is apparent: as higher and higher moments of the set of equations are taken, more unknown terms arise, and the number of equations never suffices. This is of course an obvious consequence of the fact that taking these higher moments is simply a mathematical endeavor and has no physical contribution whatsoever.
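Written out for an incompressible flow (the standard textbook form, not specific to any one model), the decomposition and the resulting averaged momentum equation read:

```latex
u_i = \bar{u}_i + u'_i, \qquad
\frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j}
= -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i}
+ \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}
- \frac{\partial \overline{u'_i u'_j}}{\partial x_j}
```

The last term contains the Reynolds stress tensor, $-\rho\,\overline{u'_i u'_j}$: six new unknowns with no new equations to match.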

Levels of modeling are related to the number of differential equations added to the Reynolds-Averaged Navier-Stokes equations in order to *“close”* them.

2-equation models incorporate a differential transport equation for the turbulent velocity scale (or the related turbulent kinetic energy) and another transport equation for the length scale (or time scale).

These quantities are related to the Reynolds stress through the mean strain with the aid of the *“Boussinesq hypothesis”*, which hypothesizes an *eddy viscosity* analogous to its counterpart derived from the kinetic theory of gases (albeit flow dependent and not a fluid property), relating it to the Reynolds stress through the mean strain.
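In its standard incompressible form (again textbook, not model-specific), the hypothesis reads:

```latex
-\overline{u'_i u'_j} = \nu_t\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right) - \frac{2}{3}\, k\, \delta_{ij}, \qquad k = \tfrac{1}{2}\,\overline{u'_i u'_i}
```

so that the six unknown stresses collapse onto a single scalar field, the eddy viscosity $\nu_t$.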

In this sense, 2-equation models can be viewed as “closed”, because unlike 0-equation and 1-equation models (with the possible exception of 1-equation transport models for the eddy viscosity itself, such as the Spalart-Allmaras (SA) turbulence model), these models possess sufficient equations for constructing the eddy viscosity with no **direct** use of experimental results.

All of the models in the above figure are based on statistics constructed at a single point (thereby termed 1-point closures), meaning that a specific component of the Reynolds stress Tenzor (ha ha… I know 🤓), such as:

$$\overline{u'v'}(\mathbf{x}) = \frac{1}{T}\int_0^T u'(\mathbf{x},t)\,v'(\mathbf{x},t)\,dt$$

is a time average of data collected at a single point. Of course, this correlation must be constructed at every point in a computational flow field, but at each such point only data from that point are used to construct it.

There is yet a different way to view modeling and, correspondingly, construct models, which we have not previously considered. It involves two-point (or, in general, multi-point) closures. This kind of modeling also makes much sense, since besides erratic temporal fluctuations, turbulence involves spatial fluctuations as well, and the latter cannot be accounted for in a one-point closure. In a two-point correlation such as:

$$\overline{u'(\mathbf{x},t)\,u'(\mathbf{x}+\mathbf{r},t)} = \frac{1}{T}\int_0^T u'(\mathbf{x},t)\,u'(\mathbf{x}+\mathbf{r},t)\,dt$$

**x** and **x**+**r** are two distinct points where data might be collected. Furthermore, this could be generalized by first averaging in space over all points a distance r from the chosen point **x**, and then performing the time average.
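The distinction between the one-point and two-point statistics just described can be made concrete with a short sketch on synthetic data (the array shapes, sample counts, and smoothing kernel below are all hypothetical, standing in for real measured fluctuations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "turbulent" velocity fluctuations sampled on a 1-D line of
# points over many time steps: u[t, x] (illustration only, not real data).
nt, nx = 20000, 64
u = rng.standard_normal((nt, nx))

# Introduce spatial correlation by smoothing along x, so that the
# two-point correlation decays with the separation r.
kernel = np.ones(5) / 5.0
u = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, u)

# One-point closure statistic: time average of u'(x) * u'(x) at one point.
x0 = nx // 2
one_point = np.mean(u[:, x0] * u[:, x0])

# Two-point statistic: time average of u'(x) * u'(x + r) for separation r.
def two_point(u, x, r):
    return np.mean(u[:, x] * u[:, x + r])

# Normalized correlation: equals 1 at r = 0 and decays as r grows,
# information a one-point closure cannot represent.
corr = [two_point(u, x0, r) / one_point for r in range(0, 8)]
print(corr)
```

The decay of `corr` with `r` is exactly the spatial structure that a one-point closure discards by construction.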

There are not many such models in use, certainly not in widespread commercial codes. The most high-end and intriguing application of the concept is found in structure-based sub-grid scale modeling for LES. You can read all about it in *Large Eddy Simulation of Turbulence* by M. Lesieur, O. Metais and P. Comte (a special book on LES, introducing the concept of the “structure function” in a very in-depth manner; one of the books I’ve enjoyed every page of):

All these types of modeling levels (or approaches) have their merits but also shortcomings: complexity, physics fidelity, range of applicability, etc…

It seems that, on average, the most cost-effective level of turbulence modeling for engineering applications is based on one-point correlations: 2-equation, Boussinesq-hypothesis-based turbulence models, among which the main models in use are the k-ε variants (mostly standard or realizable), k-ω SST, and Spalart-Allmaras (SA) (for external aerodynamics).

These widely used models do, however, contain many assumptions made along the way in order to achieve the final form of the transport equations, and as such are calibrated to work well only according to well-known features of the applications they are designed to solve. Nonetheless, despite their inherent limitations, today’s industry need for rapid answers dictates that CFD simulations be mainly conducted with 2-equation models, whose strength has proven itself for wall-bounded attached flows at high Reynolds numbers (thin boundary layers) due to calibration according to the law-of-the-wall.

The differences between the models are not fundamental, but can impact the results profoundly.

As far as boundary layers are concerned, the models differ mostly in how well they estimate separation onset, and in the very-near-wall region the effect of their different approaches to wall treatment is clearly apparent (drag calculation and heat transfer applications are just a few examples).

There are also noticeable differences in the way each model handles free shear flows.

There is no decisive answer for a specific model superiority, as each of them may perform better in certain instances.

Most importantly, however, each model features different sensitivity to the calibration constants and limiters in the formulation. This might not affect baseline flows, but these same differences become enormously pronounced when flow complexity arises, as not only are bad predictions made at specific high-complexity locations, but they also tend to contaminate the entire domain.

Choosing a specific model per application is certainly possible by following some rules of thumb, but that certainly does not guarantee that the best choice of model for the application has been made. Even when it seems that such a choice has been made, slight variations might tilt the deciding factor in favor of a different turbulence model.

*The above motivates the need for a shift in paradigm*…

First, CFD practitioners will be much better off working in a consolidated framework in which the turbulence model presents one set of calibration constants and limiters, without having to rethink basic definitions in its construction each time the slightest variability in the application occurs.

Second, and perhaps more important, there is a lack of flexibility in tuning these turbulence models’ constants, as they are specifically calibrated according to the law-of-the-wall. This does not mean a model’s calibrated constants can’t be tuned. For example, Thies and Tam proposed a specific set of new model constants for the standard k-ε model designed specifically for predicting jet flows (A. Thies, C.K.W. Tam, “Computation of Axisymmetric and Nonaxisymmetric Jet Flows Using the k-ε Model”, AIAA Journal, 1996):
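For reference, the baseline calibration Thies and Tam departed from, i.e. the standard (Launder-Spalding) k-ε constants, is:

```latex
C_\mu = 0.09, \quad C_{\varepsilon 1} = 1.44, \quad C_{\varepsilon 2} = 1.92, \quad \sigma_k = 1.0, \quad \sigma_\varepsilon = 1.3
```

Any recalibration for jets amounts to moving away from this set, and hence away from the law-of-the-wall behavior it was tuned to reproduce.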

Yet, a common thread through most such specific targeting of the model constants is the fact that their range of validity is very limited, such that including a solid boundary in the above example would hamper the model’s predictive power, as the calibration would no longer be linked to the law-of-the-wall.

### And then there were six…

To overcome these two limitations, that of choosing from a multitude of turbulence models (along with their differently conceptualized limiters and calibration constants), and especially that of tuning a model without hampering its calibration according to the law-of-the-wall, ANSYS developed a consolidated infrastructure, a kind of “all-in-one” branch of turbulence models. Although this branch is based upon the k-ω formulation, it can be tuned to match a variety of flows.

The ingenuity is in a set of free parameters that may be tuned without adversely affecting the regular law-of-the-wall calibration. In other words: **The Generalized k-ω (GEKO) Turbulence Model**.

In total there are six of these free parameters, which may be tuned to achieve desirable and specific flow attributes. They enter the final formulation as part of what are ultimately switching functions, which change the formulation’s behavior with respect to different flow attributes (a specific portion of the turbulent boundary layer, or shear flows vs. boundary layer flows, for example).

The details of the exact structure of these switching functions are proprietary, but which of the free parameters contributes to which switching function is readily identifiable, due to these functions’ location in the k-ω formulation and the specific intended impact of each of the free parameters.

This free parameter is the main contributor to the amount of “aggressiveness” of a model in the prediction of flow separation. It is also the most important free parameter, due to its impact on what is in essence the most common source of prediction variability between such models.

Increasing this specific parameter reduces the eddy viscosity, subsequently making the model more sensitive to adverse pressure gradients. Increasing it also affects the spreading rates of shear layers.

Ultimately, increasing this parameter will significantly reduce the ratio of the eddy viscosity to the dynamic viscosity (the turbulent viscosity ratio), and it does so without affecting the expected logarithmic velocity profile (which is actually the point…).
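For orientation (this is the generic k-ω construction, not GEKO’s proprietary form), the eddy viscosity is built from the two transported quantities, so the turbulent viscosity ratio reads:

```latex
\nu_t = \frac{k}{\omega}, \qquad \frac{\mu_t}{\mu} = \frac{\rho\, k}{\mu\, \omega}
```

Anything that reduces $\nu_t$ for a given $k$ therefore directly reduces this ratio.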

As the former free parameter is intended by design not to impact the wall shear stress and heat transfer rate upon variation, this parameter, limited to affect the inner portion of the boundary layer (and to have no effect on free shear flows), may be tuned for that exact objective in non-equilibrium flow situations (we wouldn’t want it to have an effect on a flat-plate boundary layer flow, of course, and indeed it is verified not to affect the shear stress and heat transfer coefficient for such an application).

**The two free parameters above are intended to affect boundary layers.** It’s important to understand that both may be tuned to achieve a desired effect (which of course must be experimentally validated) for certain flow features, while having no significant effect on basic flows such as flat-plate boundary layers (again, that’s the whole point…).

While the former two free parameters are designed to affect the turbulent boundary layer, the next two are designed for free shear flows. I’ve shown an example of calibration-constant variation for jet applications, but such a variation would not be adequate whenever a solid boundary is introduced somewhere in the domain.

This free parameter affects only free shear flows. This is ensured through the use of a blending function (similar to that explained in The Prince of RANS: k-ω SST Turbulence Model), switching from 1 inside the boundary layer to 0 for free shear flows.
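GEKO’s actual blending function is proprietary, but an SST-style tanh blend conveys the idea; the sketch below is a simplified F1-like construction (the cross-diffusion term is omitted, and all sample inputs are hypothetical values):

```python
import numpy as np

def sst_style_blend(wall_distance, k, omega, nu, beta_star=0.09):
    """Illustrative SST-style blending function (NOT the proprietary GEKO
    blend): ~1 deep inside the boundary layer, ~0 in free shear flows."""
    y = wall_distance
    # Simplified F1-style argument: large near walls, small far from them.
    arg = np.maximum(np.sqrt(k) / (beta_star * omega * y),
                     500.0 * nu / (y**2 * omega))
    # tanh of the fourth power gives a sharp 0-to-1 switch.
    return np.tanh(arg**4)

# Near-wall sample: small y -> blend ~ 1 (boundary-layer branch active).
near_wall = sst_style_blend(wall_distance=1e-4, k=1.0, omega=100.0, nu=1.5e-5)

# Far-field sample: large y, weak turbulence -> blend ~ 0 (free-shear branch).
free_shear = sst_style_blend(wall_distance=1.0, k=1e-3, omega=1.0, nu=1.5e-5)
print(near_wall, free_shear)
```

A parameter multiplied by `1 - blend` is thus automatically deactivated inside the boundary layer, which is exactly the protection mechanism described above.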

When free shear flows are encountered, increasing this free parameter’s value increases the spreading rates of free shear flows. **This flexibility is actually very important** as far as RANS model predictions are concerned. It is well known that free shear flows are composed of a variety of three-dimensional turbulent structures. Nonetheless, the organization of those structures is related to the dominant instability modes, which differ between wakes, mixing layers, and jets.

These differences can be seen in experimental flow visualizations and in solutions from an LES. This change in the way the turbulence is organized presents a particular challenge for RANS model predictions, as such models represent statistical averages rather than structure. It is among the reasons why most turbulence models cannot accurately predict both jets and mixing layers using the same set of coefficients.

This free parameter is actually a sub-model of the former, meaning it has no impact when the former is zero.

As explained above, due to the variability in the dominant instability modes of the different types of shear flows (e.g. jets), this free parameter allows for further adjustment of the spreading rate of jets while maintaining a desirable spreading rate for the mixing layer.

In applications containing rectangular corners (e.g. rectangular channel flows), secondary flows develop and are evident in a plane normal to the mean flow direction. The Boussinesq hypothesis ties the mean velocity gradient tensor of the flow to the Reynolds stresses in a linear stress-strain relation. Therefore, even in the equation for the kinetic energy, only the influence of the strain tensor enters, the strain tensor being the symmetric part of the velocity gradient tensor after decomposition into symmetric and antisymmetric parts.

The antisymmetric part is the rotation tensor, defined as:

$$\Omega_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j} - \frac{\partial \bar{u}_j}{\partial x_i}\right)$$

It doesn’t appear in the equation for the kinetic energy, nor in the Boussinesq hypothesis. As a consequence, the behavior of the Reynolds stresses does not take into account instances such as secondary flows (among other rotation-related flows…). When such flows are computed by applying the suggested linear stress-strain relation, phenomena such as early separation can occur, and these further impact and contaminate the prediction downstream in the domain.
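The decomposition, and the fact that a linear stress-strain relation sees only its symmetric half, can be sketched in a few lines (the velocity gradient, eddy viscosity, and kinetic energy values are hypothetical sample numbers):

```python
import numpy as np

# Hypothetical mean velocity gradient tensor G_ij = d(u_i)/d(x_j).
G = np.array([[0.0, 2.0, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

# Symmetric part: strain-rate tensor S_ij = (G_ij + G_ji) / 2.
S = 0.5 * (G + G.T)

# Antisymmetric part: rotation tensor Omega_ij = (G_ij - G_ji) / 2.
Omega = 0.5 * (G - G.T)

# The decomposition is exact: G = S + Omega.
assert np.allclose(G, S + Omega)

# A linear (Boussinesq-type) stress-strain relation models
# -overline(u'_i u'_j) from S alone; everything carried by Omega
# (e.g. what drives corner secondary flows) is discarded.
nu_t = 0.01  # hypothetical eddy viscosity
k = 0.1      # hypothetical turbulent kinetic energy
minus_uiuj = 2.0 * nu_t * S - (2.0 / 3.0) * k * np.eye(3)
print(minus_uiuj)
```

Note that `minus_uiuj` is unchanged by any modification of `Omega` alone, which is precisely the blind spot a non-linear corner correction targets.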

This free parameter is in essence a non-linear stress-strain term to account for secondary flows in corners.

This parameter is actually not new and is already in use as an option in the k-ω SST Turbulence Model. Nevertheless, since my intended purpose for this post is to emphasize turbulence modeling attributes of eddy-viscosity based turbulence models through GEKO, it is quite valuable to dwell a little on the motivating features for this parameter.

In the description of the last free parameter, I elaborated on an inherent weakness of eddy-viscosity-based turbulence models: their inability to capture effects of streamline curvature and system rotation. This inability is a consequence of relating the Reynolds stress to the mean flow strain, and it is in fact the major difference between such a modeling approach and a full Reynolds-stress model (RSM), which accounts for the important effect of the transport of the principal turbulent shear stress. On the other hand, RSM simulations are not computationally cost-effective, in the sense that the improved physical fidelity is in most cases not worth the time and computational resources consumed; and not only that, they often do not converge.

The approach to alleviating this inability is actually based on one taken by Philippe Spalart and Michael Shur, who aimed to account for rotation and curvature effects by offering a modification to the Spalart-Allmaras turbulence model formulation, based on empirical grounds.

The route to altering the transport equation (for the eddy viscosity itself, in the case of the Spalart-Allmaras turbulence model) kicks off with the identification of the effect of curvature and rotation in two types of extreme flows:

- thin shear flows with weak rotation (compared with the shear rate) or weak curvature (compared with the inverse of the shear-layer thickness), which highly impact the level of the turbulent shear stress.
- homogeneous rotating shear flows and free vortex cores, in which strong rotation reduces the turbulent shear stress sharply.

Spalart and Shur offer an alteration to the original eddy-viscosity transport equation based on the first type of reasoning presented above (thin shear flows), with another empirical alteration added to account for the second type of extreme (i.e. homogeneous rotating shear flows and free vortex cores).

The physical reasoning returns yet again to the chosen way of relating the eddy viscosity to the strain rate **and vorticity**, in a way that addresses the discrepancy between the principal axes of the Reynolds stress tensor and those of the rate-of-strain tensor.

Subsequent to exploring a satisfactory relationship between strain rate and vorticity, to be accounted for by a scalar quantity handling curvature and rotation for thin shear flows with weak rotation, and also (albeit less reliably, though improvable by empirical additives) for homogeneous rotating shear flows and free vortex cores, the production term of the eddy-viscosity transport equation is multiplied by a “rotation function”.
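For reference, the rotation function of the original Spalart-Shur correction takes the form below (quoted from my reading of the paper, with the original calibration $c_{r1}=1$, $c_{r2}=12$, $c_{r3}=1$; verify against the paper before use):

```latex
f_{r1}(r^{*}, \tilde{r}) = (1 + c_{r1})\,\frac{2 r^{*}}{1 + r^{*}}\left[1 - c_{r3}\arctan\left(c_{r2}\,\tilde{r}\right)\right] - c_{r1}
```

where $r^{*}$ is the ratio of strain-rate to vorticity magnitude and $\tilde{r}$ measures the rotation rate of the principal axes of the strain-rate tensor.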

Based on the Spalart-Shur correction, Florian Menter and Pavel Smirnov sensitized the k-ω SST turbulence model to rotation and streamline curvature, with a slight modification of the Spalart-Shur correction to control the production terms of the original SST model, thereby increasing its range of validity to adequately cover a range of flows, from stabilized flows with minimal turbulence production (characteristic of strong convex curvature) to those characterized by enhanced turbulence production (strong concave curvature).

GEKO free parameters should be set within the following ranges to obtain desirable flow-attribute-dependent performance (subsequent validation is mandatory):

This ends “GEKO – And Then There Were Six… PART I”.

Stay tuned for “GEKO – And Then There Were Six… PART II”, which shall include the following topics:

- Specific sets of GEKO parameters – ‘fix point’ in coefficient settings for widely used turbulence models.
- Local vs. global application of GEKO.
- Suggested procedures for adjusting GEKO constants.
- How to extract value by utilizing GEKO from the get go.
- Limiters, corrections and other vegetables… Turbulence perspective.
- Examples of adjusted GEKO coefficients experimentally validated since the model was born.
- GEKO future prospects.

References:

- “All About CFD…” Blog – Mechanical Analysis to the Level of ART
- Generalized k-ω Two-Equation Turbulence Model in ANSYS CFD (GEKO).
- “On the Sensitization of Turbulence Models to Rotation and Curvature” – P. Spalart, M. Shur
- “Sensitization of the SST Turbulence Model to Rotation and Curvature by Applying the Spalart-Shur Correction Term” – P. Smirnov, F. Menter

**Back to “All About CFD…” Index**

