Geodynamic modelling provides a powerful tool to investigate processes in the Earth's crust, mantle, and core that are not directly observable. However, numerical models are inherently subject to the assumptions and simplifications on which they are based. In order to use and review numerical modelling studies appropriately, one needs to be aware of the limitations of geodynamic modelling as well as its advantages. Here, we present a comprehensive yet concise overview of the geodynamic modelling process applied to the solid Earth from the choice of governing equations to numerical methods, model setup, model interpretation, and the eventual communication of the model results. We highlight best practices and discuss their implementations including code verification, model validation, internal consistency checks, and software and data management. Thus, with this perspective, we encourage high-quality modelling studies, fair external interpretation, and sensible use of published work. We provide ample examples, from lithosphere and mantle dynamics specifically, and point out synergies with related fields such as seismology, tectonophysics, geology, mineral physics, planetary science, and geodesy. We clarify and consolidate terminology across geodynamics and numerical modelling to set a standard for clear communication of modelling studies. All in all, this paper presents the basics of geodynamic modelling for first-time and experienced modellers, collaborators, and reviewers from diverse backgrounds to (re)gain a solid understanding of geodynamic modelling as a whole.

The term geodynamics combines the Greek word “geo”, meaning “Earth”, and the term “dynamics” – a discipline of physics that concerns itself with forces acting on a body and the subsequent motions they produce.

See the glossary in the Supplement for definitions of the bold terms that occur throughout the text.

Spatial and temporal scales of common geodynamic processes. These processes occur over a wide range of timescales and length scales, and modellers have to take into account which of them can be included in any given model.

The broad definition of geodynamics typically results in a subdivision of disciplines relating to one of the Earth's spherical shells and specific spatial and temporal scales (Fig.

Indeed, since the study of geodynamics is predominantly occupied with processes occurring below the surface of the Earth, one of the challenges is the limited number of direct observations in space and time. Engineering limitations are responsible for a lack of direct observations of processes at depth, with, for example, the deepest borehole on Earth being a mere 12 km deep

To compensate for the limited amount of data in geodynamics, many studies have turned towards modelling. Roughly speaking, there are two main branches of modelling based on the tools that they use:

Since geodynamics has much in common with other Earth science disciplines, there is a frequent exchange of knowledge; e.g. geodynamic studies use data from other disciplines to constrain their models. Vice versa, geodynamic models can inform other disciplines on the theoretically possible motions in the Earth. Therefore, scientists from widely diverging backgrounds are starting to incorporate geodynamic models into their own projects, apply the interpretation of geodynamic models to their own hypotheses, or be asked to review geodynamic articles. In order to correctly use, interpret, and review geodynamic modelling studies, it is important to be aware of the numerous assumptions that go into these models and how they affect the modelling results. Similarly, knowing the strengths and weaknesses of numerical modelling studies can help to narrow down whether or not a geodynamic model is the best way of answering a particular research question.

Here, we provide an overview of the whole geodynamic modelling process (Fig.

We will use the word model to mean a simplified version of reality that can be used to isolate and investigate certain aspects of the real world. Therefore, by definition, a model never equals the complexity of the real world and is never a complete representation of the world; i.e. all models are inherently wrong, but some are useful

Computational geodynamics concerns itself with using numerical methods to solve a physical model. It is a relatively new discipline that took off with the discovery of plate tectonics and the rise of computers in the 1960s

It is important to realise that a numerical model is not equivalent to the physical model it is based on.

While it can be tested whether a numerical model solves the equations correctly through code verification, it is much harder to establish that the underlying physical model adequately describes the natural system under study.

The models we are concerned with here are all forward models; we obtain a model result by solving equations that describe the physics of a system. These results are a prediction of how the system behaves given its physical state which afterwards can be compared to observations. On the other hand, inverse models start from an existing dataset of observations and aim to determine the conditions responsible for producing the observed dataset. A well-known example of these kinds of models are tomographic models of the Earth's mantle, which determine the 3-D seismic velocity structure of the mantle based on seismic datasets consisting of e.g. P-wave arrival times or full waveforms

A scientific modelling study encompasses more than simply running a model, as is illustrated in Fig.

In the remaining parts of this paper, each of the above-mentioned steps of a modelling study correspond to individual sections, making this a comprehensive guide to geodynamic modelling studies.

The geodynamic modelling procedure. A modelling study encompasses everything from the assemblage of both a physical (Sect.

From seismology, we know that on short timescales the Earth predominantly deforms like an elastic medium. Our own experience tells us that when strong forces are applied to rocks, they will break (brittle failure). But from the observation that continents have moved over the history of the Earth and from the postglacial rebound of regions like Scandinavia, we know that on long timescales the mantle strongly deforms internally. In geodynamic models, this ductile deformation of rocks is usually approximated as the flow of a highly viscous fluid.

In the following, we will focus on problems that occur on large timescales on the order of thousands or millions of years (i.e.

We have discussed above that setting up a model includes choosing the equations that describe the physical process one wants to study. In a fluid dynamics model, these governing equations usually consist of a mass balance equation, a force balance or momentum conservation equation, and an energy conservation equation.
The solution to these equations states how the values of the unknown variables such as the material velocity, pressure, and temperature (i.e. the dependent variables) change in space and how they evolve over time (i.e. when one or more of the known and/or independent variables change).
Even though these governing equations are conservation equations, geodynamic models often consider them in a local rather than a global context, i.e. material and energy flow into or out of a system, or an external force is applied. Additionally, the equations only consider specific types of energy, i.e. thermal energy, and not, for example, the potential energy related to nuclear forces.
This means that for any system considered – or, in other words, within a given volume – a property can change due to the transport of that property into or out of the system, and thermal energy may be generated or consumed through a conversion into other types of energy, e.g. through radioactive decay.
This can be expressed using the basic principle of a conservation law: the rate of change of a quantity within a volume is balanced by the flux of that quantity across the volume's boundary plus any sources or sinks inside the volume.

The governing equations: conservation of mass (Eq.

The first equation describes the conservation of mass:

Because the first term explicitly includes a time dependence, it introduces a characteristic timescale into the model due to viscous relaxation.

The timescale of viscous relaxation is usually shorter than that of convective processes and is often shorter than the timescales a model is focused on. In addition, these local changes in mass are often quite small compared to the overall mass flux in the Earth. Accordingly, many geodynamic models do not include this term

This means that the net influx and outflux of mass in any given volume of material is zero. The density can still change, e.g. if material is advected into a region of higher or lower pressure (i.e. downwards or upwards within the Earth), but these changes in density are always associated with the motion of material to a new location. Using this approximation still takes into account the largest density changes. For example, for the Earth's mantle, density increases by approximately 65 % from the Earth's surface to the core–mantle boundary.

In some geodynamic models, particularly the ones that span only a small depth range, even this kind of density change is small. For example, within the Earth's upper mantle, the average density changes by less than 20 %.
This is the basis for another approximation: assuming that material is incompressible (i.e. its density remains constant).

Equation (

The surface forces are expressed as the spatial derivatives of the stress

Under the assumption that deformation is dominantly viscous, Eqs. (

Dropping the inertia term means that the Stokes equations (Eqs.

Equation (

The terms on the right-hand side of Eq. (

When material undergoes phase changes, thermal energy is consumed (endothermic reaction) or released (exothermic reaction) as latent heat. This happens both for solid-state phase transitions, such as from olivine to wadsleyite, and for melting of mantle rocks. For phase transitions that occur over a narrow pressure and/or temperature range, this can lead to abrupt changes in mantle temperature where phase transitions are located, such as around 410 km depth. The amount of energy released or consumed is proportional to the density

In the previous sections on the mass, momentum, and energy equations, we have already seen that there are different ways these equations can be simplified and that there is a choice of which physical processes to include. Based on an analysis of the equations, there are a number of different approximations that are commonly used in geodynamics

The

The

The

The

For a comparison between some of these approximations using benchmark models, see e.g.

Using the equations discussed in the previous section (Sect.

The

Rocks deform in different ways, necessitating different rheologies. For example, rocks can deform elastically or by brittle failure. On long timescales their inelastic deformation is usually modelled as that of a highly viscous fluid. Based on this latter assumption, we use the Stokes equations to describe the deformation. This physical model adequately explains many processes in the Earth's interior related to mantle convection and observations. However, some observations such as plate tectonics, which involve strain localisation and strong hysteresis, and the existence of ancient geochemical heterogeneity revealed by isotopic studies are behaviours not commonly associated with fluids. Indeed, in fluids, different chemical components are well-mixed in a convective flow, and the flow usually does not depend on the deformation history and has no material memory. Geodynamic models still aim to reproduce such processes using complex rheologies that go beyond the basic assumption of a viscous fluid and include plastic yielding, strain or strain rate weakening or hardening, and other friction laws (see below). Hence, resolving how to go forward with the assumption of a viscous fluid in the face of complex deformation behaviour of rocks remains an important challenge for the geodynamic modelling community.

In the following, we will discuss some common rheological behaviours used in geodynamic models.

We start by discussing a very simple type of rheology with the following features: (i) only viscous (rather than elastic or brittle) deformation, (ii) deformation behaviour that does not depend on the direction of deformation (corresponding to an isotropic material), and (iii) a linear relation between the stress and the rate of deformation (corresponding to a Newtonian fluid).

It describes both volume changes of the material (volumetric) and changes in shape (deviatoric),
and it can be written as the sum of these two deformation components:

In the Earth's interior, viscous deformation is the dominant rock deformation process on long timescales if temperatures are not too far from the rocks' melting temperature. Under these conditions, imperfections in the crystal lattice move through mineral grains and contribute to large-scale deformation over time.
The physical process that is thought to most closely resemble this idealised case of a Newtonian fluid is diffusion creep.

Consequently, the viscosity of rocks strongly depends on e.g. temperature, pressure, stress, size of the mineral grains, deformation history, and the presence of melt or water, and it varies by several orders of magnitude, often over small length scales. These experimentally obtained flow laws can be expressed in a generalised form using the relation
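Such an experimentally derived flow law can be turned into an effective viscosity in a few lines. The sketch below assumes a power-law creep relation of the form strain rate = A σ^n exp(−(E + PV)/(RT)); the function name and the parameter values (loosely olivine-like, with stress in MPa) are illustrative choices for this sketch only, not recommendations for any particular rock type.

```python
import math

def effective_viscosity(stress_MPa, temperature_K, pressure_Pa):
    """Effective viscosity (Pa s) for a power-law creep relation
    strain_rate = A * stress^n * exp(-(E + P*V) / (R*T)).
    All material parameters below are illustrative only."""
    R = 8.314    # gas constant, J/(mol K)
    A = 1.1e5    # pre-factor, MPa^-n s^-1
    n = 3.5      # stress exponent
    E = 530e3    # activation energy, J/mol
    V = 14e-6    # activation volume, m^3/mol
    strain_rate = A * stress_MPa**n * math.exp(
        -(E + pressure_Pa * V) / (R * temperature_K))
    # effective viscosity eta = sigma / (2 * strain_rate), with sigma in Pa
    return (stress_MPa * 1e6) / (2.0 * strain_rate)

# A modest temperature change alters the viscosity by orders of magnitude:
eta_cold = effective_viscosity(1.0, 1200.0, 3e9)
eta_hot = effective_viscosity(1.0, 1600.0, 3e9)
```

This strong temperature sensitivity is one reason why viscosity in the Earth varies by many orders of magnitude over small length scales.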

On shorter timescales, the elastic behaviour of rocks becomes important in addition to viscous flow. In the case of elasticity, the stress tensor is related to the strain tensor through the generalised Hooke's law

Hence, for elastic deformation, the stress is proportional to the amount of deformation rather than the rate of deformation, as is the case for viscous processes.
Due to the inherent symmetries of

Elastic deformation is often included in geodynamic codes that solve the Stokes equations by taking the time derivative of Eq. (
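A useful back-of-the-envelope quantity in this viscoelastic context is the Maxwell relaxation time τ = η/G, which separates dominantly elastic behaviour (timescales much shorter than τ) from dominantly viscous behaviour (timescales much longer than τ). A minimal sketch with illustrative mantle values:

```python
# Maxwell relaxation time tau = eta / G, using illustrative
# order-of-magnitude values for mantle viscosity and shear modulus.
eta = 1e21                             # viscosity, Pa s
G = 1e11                               # shear modulus, Pa
tau_s = eta / G                        # relaxation time in seconds
tau_yr = tau_s / (365.25 * 24 * 3600)  # ... and in years
```

For these values τ is a few hundred years: much shorter than mantle convection timescales, which is why elasticity is often neglected in mantle-scale models, but clearly relevant for processes such as postglacial rebound.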

For strong deformation and large stresses on short timescales, such as occur in the crust and lithosphere, brittle deformation becomes the controlling mechanism. In this case, a fault or a network of faults (which can range from the atomic to the kilometre scale) accommodates deformation. The relative motion of the two discrete sides of the fault is limited by the friction on the interface. However, in a continuum formulation, discontinuous faults cannot be represented, and hence the deformation in geodynamic models typically localises in so-called shear bands

One of the most well-known yield criteria is the Mohr–Coulomb criterion

Often Eq. (

In a Mohr's circle diagram of shear stress

Experimentally,

The

A common simplification of the Drucker–Prager and Mohr–Coulomb yield criteria is to use the lithostatic pressure instead of the full dynamic pressure in the yield criterion.
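As a concrete sketch, a pressure-dependent (Drucker–Prager-style) yield stress and the common "viscosity capping" way of enforcing it in a viscous flow code can be written as follows. The function names, default cohesion, and friction angle are illustrative choices, not calibrated values.

```python
import math

def drucker_prager_yield(pressure, cohesion=20e6, friction_angle_deg=30.0):
    """2-D Drucker-Prager yield stress (Pa):
    sigma_y = C * cos(phi) + P * sin(phi).
    Cohesion and friction angle defaults are illustrative only."""
    phi = math.radians(friction_angle_deg)
    return cohesion * math.cos(phi) + pressure * math.sin(phi)

def plastic_viscosity_cap(eta_viscous, pressure, strain_rate_II):
    """Cap the creep viscosity so the deviatoric stress 2*eta*e_II
    never exceeds the yield stress -- a common way of implementing
    plastic yielding in viscous flow codes."""
    eta_yield = drucker_prager_yield(pressure) / (2.0 * strain_rate_II)
    return min(eta_viscous, eta_yield)

sigma_y_surface = drucker_prager_yield(0.0)   # cohesion-controlled
sigma_y_deep = drucker_prager_yield(1e9)      # pressure-controlled
```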

In nature, the strength of rocks can change over time and depends on the deformation history. Examples are the evolution of the mineral grain size, formation of a fault gouge, and percolation of fluids, which alter the strength of the rock. To account for this variation in strength over time on tectonic timescales, the cohesion and friction coefficient of the rock can be made dependent on the strain or strain rate in what is called strain or strain rate weakening (also called softening): when the strain or strain rate increases, the strength of the rock is lowered. Similarly, strain or strain rate hardening can be applied
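A simple way to implement strain weakening is to interpolate a material parameter (cohesion or friction coefficient) linearly between an initial and a weakened value over a chosen accumulated-strain interval. The function and the threshold values below are an illustrative sketch of this approach, not values from any published study.

```python
def weakened_parameter(value_initial, value_weakened, strain,
                       strain_start=0.5, strain_end=1.5):
    """Linearly reduce a material parameter (e.g. cohesion or the
    friction coefficient) between two accumulated-strain thresholds.
    Thresholds and the linear form are illustrative choices."""
    if strain <= strain_start:
        return value_initial           # undamaged material
    if strain >= strain_end:
        return value_weakened          # fully weakened material
    f = (strain - strain_start) / (strain_end - strain_start)
    return value_initial + f * (value_weakened - value_initial)
```

Strain hardening is obtained the same way by choosing a weakened value larger than the initial one.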

In the Earth, the relation between stress and deformation can be very complex, and deformation is a combination of elastic and viscous behaviour and brittle failure. In general, both the viscosity and the elastic moduli can depend on temperature, pressure, chemical composition, the presence of melt and fluids, the size and orientation of mineral grains, the rate of deformation, and the deformation history of the material.
Consequently, Earth materials are usually not isotropic, and the strength of the material depends on the direction of deformation. To incorporate this behaviour into geodynamic models, the viscosity and elastic moduli have to be expressed as tensors and cannot be reduced to one or two parameters

The relation between density, temperature, pressure, and sometimes other variables like chemical composition is often called the equation of state.

Depending on the model application, there are many different equations of state that can be used. For models that aim to capture the first-order effects of a given process, analyse the influence of a material property on the model evolution, or develop a scaling law (see Sect.

On the other end of the spectrum, there are models designed to fit existing observations (e.g. seismic wave speeds; see also Sect.
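At the simpler end of this spectrum, a linearised equation of state relating density to temperature and pressure is often sufficient. A minimal sketch; the reference values are illustrative choices for upper-mantle rock, not prescriptions:

```python
def density(T, P, rho0=3300.0, alpha=3e-5, beta=4e-12, T0=273.0, P0=0.0):
    """Linearised equation of state:
    rho = rho0 * (1 - alpha*(T - T0) + beta*(P - P0)),
    with thermal expansivity alpha (1/K) and compressibility beta (1/Pa).
    All reference values here are illustrative only."""
    return rho0 * (1.0 - alpha * (T - T0) + beta * (P - P0))

rho_ref = density(273.0, 0.0)    # reference density
rho_hot = density(1600.0, 0.0)   # thermal expansion lowers the density
rho_deep = density(273.0, 1e10)  # compression raises it
```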

The mass, momentum, and energy conservation equations are visibly coupled since velocity (derivatives) and pressure enter the stress tensor, and thermal energy transport due to advection depends on the velocity.
More importantly, the previous sections have highlighted how material properties such as

In addition to temperature, pressure, and velocity there may be other conditions in the model that change over time and are important for the model evolution, but not directly related to changes in temperature, pressure, and velocity. A common example is the chemical composition of the material (which can refer to the major element composition but may also relate to the water content, for example). In this case, a transport equation is required for every additional quantity that should be tracked in the model and moves with the material flow:

Other physical processes may require additional terms or additional equations. Examples are
the generation of the magnetic field in the outer core

In the end, the partial differential equations in Eqs. (

The conservation equations in Sect.

Examples of a two-dimensional domain and material discretisation. The domain discretisation in the left-hand-side column illustrates different types of meshes. The top left mesh

The solutions of the continuum equations described in Sect.

The discretisation concept for all three main methods (FEM, FDM, FVM) is identical. The domain is subdivided into cells or elements, as shown in Fig.

To illustrate this concept, we provide a small example here for the conservation of energy using the finite-difference method, which is based on a Taylor expansion keeping only first- and second-order terms (see Appendix
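The same idea can be sketched in a few lines of code: an explicit finite-difference update for the 1-D heat conduction equation ∂T/∂t = κ ∂²T/∂x², using a second-order central difference in space, a first-order forward difference in time, and fixed-temperature boundaries. The resolution and parameter values are arbitrary illustrative choices.

```python
import numpy as np

# Explicit finite-difference solution of the 1-D heat equation
# dT/dt = kappa * d2T/dx2 with fixed-temperature (Dirichlet) boundaries.
nx, L, kappa = 51, 1.0, 1e-6      # illustrative grid size, length, diffusivity
dx = L / (nx - 1)
dt = 0.4 * dx * dx / kappa        # respects the stability limit dt <= dx^2/(2*kappa)

T = np.zeros(nx)
T[nx // 2] = 1.0                  # initial temperature spike in the centre

for _ in range(500):
    # second-order central difference for the spatial derivative
    T[1:-1] += dt * kappa * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    T[0] = T[-1] = 0.0            # Dirichlet boundary conditions
```

Note the time step restriction dt ≤ dx²/(2κ): exceeding it makes this explicit scheme unstable, a simple example of a numerical stability constraint.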

While finite-difference codes only need to specify the order of the approximation they rely on (and the associated

For quadrilaterals and hexahedra, the designation

Kinematical descriptions for a compressed upper-mantle model setup.

For all methods, the discretisation process results in a linear system of equations with its size being the number of unknowns, i.e. a multiple of the number of nodes and/or elements. This system of equations is written as
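To make this concrete, an implicit (backward Euler) finite-difference discretisation of the 1-D heat conduction equation ∂T/∂t = κ ∂²T/∂x² yields a tridiagonal linear system A·T_new = T_old that must be solved at every time step. The sketch below assembles and solves it with a dense solver purely for illustration; production codes use sparse, often iterative, solvers.

```python
import numpy as np

# Backward Euler discretisation of 1-D heat conduction leads to a
# tridiagonal system A T_new = T_old at every time step.
nx = 51
dx, dt, kappa = 1.0 / (nx - 1), 1e3, 1e-6   # illustrative values
s = kappa * dt / dx**2

A = np.zeros((nx, nx))
for i in range(1, nx - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -s, 1.0 + 2.0 * s, -s
A[0, 0] = A[-1, -1] = 1.0        # Dirichlet rows: boundary values stay fixed

T_old = np.zeros(nx)
T_old[nx // 2] = 1.0             # temperature spike in the centre
T_new = np.linalg.solve(A, T_old)  # one unconditionally stable implicit step
```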

Computation paradigms.

After the discretisation step, a kinematical description must be chosen to define how the material is going to move in the model. There are several widely used options, i.e. the Eulerian, Lagrangian, and arbitrary Lagrangian–Eulerian (ALE) formulations (Fig.

Eulerian codes have a fixed mesh through which material flows. The evolution of the top boundary of the model is often of prime importance in geodynamical studies, as it accounts for the generated topography (a feature that is directly and easily observable on Earth). In a Eulerian code, the air above the crust must therefore be modelled as well to allow topography to form. This air layer has been coined “sticky air”

In contrast to a Eulerian kinematical description, the mesh of Lagrangian codes deforms with the computed flow and therefore does not require the use of sticky air to model topography. However, this mesh deformation limits Lagrangian codes to small deformations: subduction processes, for example, would quickly distort the mesh to the point that it would no longer be suitable for accurate calculations. PyLith

Finally, as its name implies, the arbitrary Lagrangian–Eulerian (ALE) method, part of the semi-Lagrangian class of methods, is a kinematical description that combines features of both the Lagrangian and Eulerian formulations. In geodynamical codes, it often amounts to the mesh conforming vertically or radially to the free surface while retaining its structure in the horizontal or tangential direction (Fig.

The discretisation process outlined in Sect. (

Early computing architectures of the 1970s were quite limited by today's standards and predominantly relied on sequential programming whereby one task is performed after the other (Fig.

Other codes, such as the surface processes code FastScape

When documenting the parallel performance of a code, one often talks of strong and weak scaling
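An idealised model of strong scaling is Amdahl's law, which shows how even a small serial fraction of the runtime caps the achievable speedup. A minimal sketch (the fraction used in the example is arbitrary):

```python
def amdahl_speedup(parallel_fraction, n_procs):
    """Idealised strong-scaling speedup (Amdahl's law):
    S(n) = 1 / ((1 - p) + p / n). Even a small serial fraction
    (1 - p) limits the speedup to at most 1 / (1 - p)."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_procs)

def parallel_efficiency(parallel_fraction, n_procs):
    """Strong-scaling efficiency: speedup divided by processor count."""
    return amdahl_speedup(parallel_fraction, n_procs) / n_procs

# With 90 % of the runtime parallelised, 1024 processors give
# a speedup below 10 and a very low parallel efficiency.
speedup_1024 = amdahl_speedup(0.9, 1024)
efficiency_1024 = parallel_efficiency(0.9, 1024)
```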

As mentioned in Sect.

State-of-the-art codes now all rely on some form of

Aside from the conservation of mass, momentum, and energy, special care must be taken when solving the advection in Eq. (

It is important to mention that there is no single best recipe for advection, and oftentimes methods are tested against each other
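These trade-offs can be demonstrated in a few lines: a first-order upwind scheme advecting a sharp compositional step stays stable and free of oscillations but smears the interface over many cells (numerical diffusion). The grid size and Courant number below are arbitrary illustrative choices.

```python
import numpy as np

# First-order upwind advection of a sharp step: stable and monotone,
# but numerically diffusive -- the interface smears out over time.
nx, c = 100, 0.5                           # c = v*dt/dx is the Courant number
q = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)  # step: 1 on the left, 0 on the right

for _ in range(50):
    q[1:] -= c * (q[1:] - q[:-1])          # upwind difference for flow to the right
    q[0] = 1.0                             # inflow boundary condition

# The interface, initially one cell wide, now spans many cells.
width = np.count_nonzero((q > 0.01) & (q < 0.99))
```

A higher-order scheme would keep the interface sharper, at the cost of potential over- and undershoots that then need to be stabilised.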

On Earth, the lithosphere interacts with the
cryosphere, biosphere, atmosphere, hydrosphere, magnetosphere, and other systems, and the deformation of the lithosphere is related to many natural hazards such as earthquakes, volcanic eruptions, landslides, and tsunamis.
These systems are often inherently multi-scale with different processes occurring on vastly different timescales, length scales (Fig.

However, some multiphysics problems are so closely intertwined that solving the coupled system of equations requires different numerical methods than solving the problems individually (for example, coupled magma–mantle dynamics). In these cases, coupling requires the development of a new code that tightly integrates the different physical processes. One recent approach in building numerical applications for multiphysics problems that could be used for this makes use of Application Programming Interfaces (APIs) instead of readily available community codes. APIs can be a collection of routines that are optimised to perform certain operations such as assembling vectors and matrices, solving systems of equations in parallel (i.e. PETSc,

At this point, we have described the model in terms of governing and constitutive equations, and we have discretised and solved the system using appropriate numerical methods; the result is an application code. However, before any scientific study can be performed with confidence, the code must be tested to ensure it does what it is intended to do. This process has two components:

In the software engineering community, the importance of tests is well-acknowledged: “Without tests every change is a possible bug”

The recommended approach is to implement only what is needed, test it, and later refactor and expand the system to implement new features, testing again. Implementing an automatic testing framework can speed up these steps. Good tests also follow the FIRST acronym: fast (they run quickly), independent (they do not depend on each other), repeatable (in any environment), self-validating (they have a Boolean outcome: pass or fail), and timely (they are written just before the code that makes them pass)
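As an illustration of a fast, independent, repeatable, and self-validating test, the sketch below applies Python's built-in unittest framework to a trivial, purely hypothetical helper function:

```python
import unittest

# A deliberately simple (hypothetical) function to be tested.
def arithmetic_mean(values):
    return sum(values) / len(values)

class TestArithmeticMean(unittest.TestCase):
    """Each test runs quickly, depends on nothing else, is repeatable
    in any environment, and has a clear pass/fail outcome."""

    def test_single_value(self):
        self.assertEqual(arithmetic_mean([3.0]), 3.0)

    def test_order_independence(self):
        self.assertEqual(arithmetic_mean([1.0, 2.0]),
                         arithmetic_mean([2.0, 1.0]))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestArithmeticMean)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a real code, such unit tests would cover individual numerical kernels and be run automatically on every change.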

The types of tests one can do to benchmark codes relate to

Analytical solutions for code verification can also be used in the form of the method of manufactured solutions, in which an analytical solution is postulated and substituted into the governing equations to derive the source terms that make it an exact solution.
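A core ingredient of such verification is measuring the observed order of accuracy against a known solution. The sketch below postulates an analytical field, applies a second-order central difference to it, and checks that halving the grid spacing reduces the error by about a factor of four:

```python
import numpy as np

# Verification sketch: measure the convergence order of a second-order
# central difference against a known analytical solution.
def error_for_resolution(nx):
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    T = np.sin(np.pi * x)                        # postulated analytical field
    d2T = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    exact = -np.pi**2 * np.sin(np.pi * x[1:-1])  # exact second derivative
    return np.max(np.abs(d2T - exact))

# Halving the grid spacing should reduce the error by a factor of ~4,
# i.e. the measured order should be close to 2.
e_coarse = error_for_resolution(41)
e_fine = error_for_resolution(81)
order = np.log2(e_coarse / e_fine)
```

A measured order well below the theoretical one is a classic symptom of an implementation bug.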

When analytical solutions are not possible, numerical experiments of the same model setup (i.e. equations, boundary conditions, geometry, and parameters) can be tested with a number of different codes within the community. These are called benchmarks.

Different model complexities for the heart

Comparison with analogue experiments is important for calibrating the complex processes (e.g. plastic failure) that numerical models of large-scale tectonics require. They can be used for both verification and validation of numerical models. For example, modelling sandbox experiments

As the importance of testing is revealed, more software engineering practices are required to keep codes clean, testable, and robust (see Sect.

Potential options for geodynamic model simplification. Note that we mean “multiphysics” beyond the already coupled system described in Sect.

Designing a model is not straightforward. Before starting to design a model, it is important to understand the code, the model, and the difference between the two. While the code's purpose is of a general nature (e.g. to allow for creating models to investigate some geodynamic problems), the purpose of the model is very specific and, in most cases, indeed unique. This unique purpose is reflected in the complex nature of the model, which has to be set up with care. A model is the sum of an underlying (modelling) philosophy, one or more geodynamic concepts and hypotheses, its physical and numerical construct, and initial and boundary conditions. Even though the purpose of a geodynamic model is usually unique, its outcome never is. The same result of a spatially or temporally restricted model of nature can always be recovered by multiple different models. Therefore, a geodynamic model cannot be verified, in contrast to the code.

How to design a simplified – but not oversimplified – geodynamic model that is based on a certain modelling philosophy and applies suitable initial and boundary conditions is therefore outlined in this section.

A model is, by definition, a simplified representation of a more complex natural phenomenon. This is a simple and obvious truth that is easily forgotten when geodynamic models are interpreted, presented, and reviewed. It is the modeller's responsibility to not only constantly remind themselves, but also others, about this key underlying fact.

The complexity of the planet Earth as a whole is vast. It is therefore challenging to reconcile such true complexity with a desired simplicity. A model can easily become too complex and, just as easily, oversimplified (Fig.

So, how simple should a model optimally be? The answer to this question is not an easy one, as it strongly depends on the purpose of the model, the capabilities to diagnose and understand it, and the hypothesis that it will test. It is clear though that a more complex model does not necessarily mean a better model. In fact, a simpler model is often better than a more complex model. A simpler model is clearer to understand, clearer to communicate, and, by making fewer assumptions, more likely right

There are various ways to reduce model complexity (Fig.

Further model simplification is achieved through numerical adjustments. For example, all the following studies model plate tectonics, but the geometry of the model can be complex (e.g. a 3-D spherical domain like

The numerical model complexity can also be adjusted by changing the initial and boundary conditions (heterogeneous as in

The two overarching modelling philosophies.

There are two overarching geodynamic modelling philosophies:

Both overarching modelling philosophies can either fulfil or reject a hypothesis. Most results published to date fulfil a hypothesis, even though positive modelling results only hint at a certain phenomenon being responsible for an observation. Modelling results that reject a hypothesis (often called “failed models”) are of course more abundant, but also much clearer as they indeed serve as proof that a certain situation does not lead to a specific observation.

Furthermore, both overarching modelling philosophies can result in instantaneous and time-dependent studies

Modelling aimed to compare and understand a specific state of a geodynamic system necessitates the following procedure. Firstly, a specific observation (in a certain region) has to be defined. Secondly, a hypothesis about the control mechanism(s) has to be outlined. Thirdly, a model setup needs to be designed considering three key aspects. The model needs to be able to produce the observed feature, include the hypothetical control mechanism(s), and physically link the control mechanism to the observed feature. Lastly, the model has to be simplified to be easily understandable without being oversimplified.
For specific modelling in particular, the modeller needs to keep in mind that there is no guarantee that the suspected control mechanism is the actual, or the only, controlling mechanism (see Sect.

A specific modelling philosophy is often used to understand the circumstances that facilitated natural hazard events, like earthquakes, in order to improve hazard analyses. For example,

Modelling aimed at understanding the general behaviour of a geodynamic system necessitates the following procedure. First, a general first-order observation has to be defined. Second, a hypothesis about the controlling parameters and their possible range has to be outlined. Third, a model setup needs to be designed considering two key aspects. The model needs to include the proposed control mechanism(s), and it needs to be built on a set of assumptions for simplification.
For generic modelling in particular, the assumptions that go into designing the geodynamic model are key and need to be specified and described clearly (Sect.

When a generic modelling philosophy is applied, a general geodynamic feature is investigated via a

The mapping of a parameter space is often done through manual variation of a single model parameter and comparison of the resulting model predictions. However, recent developments allow scaling laws between the model solution and the model parameters to be computed automatically through adjoint methods. Besides solving inverse problems
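Whether obtained manually or via adjoints, a scaling law is often summarised as a power-law fit to the sweep results. The sketch below fabricates synthetic sweep "measurements" from a known power law (purely for illustration, with small noise) and recovers the exponent with a least-squares fit in log-log space:

```python
import numpy as np

# Sketch: recover a scaling-law exponent from a (synthetic) parameter
# sweep via a least-squares fit in log-log space. The "measurements"
# are generated from y = a * x^b plus noise, purely for illustration.
rng = np.random.default_rng(42)
x = np.logspace(4, 8, 10)                              # swept model parameter
y = 0.2 * x**0.33 * (1.0 + 0.01 * rng.standard_normal(10))

# log y = b * log x + log a, so a degree-1 polynomial fit gives (b, log a)
b_fit, log_a_fit = np.polyfit(np.log(x), np.log(y), 1)
```

In practice the "measurements" would be a model diagnostic (e.g. a surface heat flux) extracted from each run of the sweep.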

After choosing the equations that will be solved and the model geometry, both initial and boundary conditions are needed to solve the numerical model. The solution of the numerical model will depend on the initial and boundary conditions used, so it is important to choose them carefully

The

For the thermo-mechanical models considered here, we need to prescribe boundary conditions for the conservation of mass, momentum, and energy equations in order to solve them. For the Stokes equations, typical mechanical boundary conditions include (1) the

Boundary conditions can be used to drive the system by e.g. prescribing the velocities of plates resulting in lithospheric extension or convergence. Hence, the modeller could assimilate data on plate motions from the geologic record into the model through the boundary conditions to improve the predictive power of the model

Considering boundary conditions for the energy equation in models of the Earth's mantle and crust, it is common practice to prescribe fixed temperatures (Dirichlet) at the Earth's surface and at the core–mantle boundary, as well as prescribed heat fluxes (Neumann) for boundaries within the mantle. Fixing the heat flux fixes the amount of energy entering or leaving the domain; when using fixed temperatures, the amount of energy can evolve freely but the temperature variations are fixed. However, this might not always be applicable. For example, a model restricted to the upper mantle will need a prescribed inflowing plume temperature at its bottom boundary to model a mantle plume. Similarly, although the outer core can be assumed to be a reservoir with a constant temperature for mantle models, heat flux boundary conditions at the core–mantle boundary are more appropriate for models of the outer core

The boundary conditions of both the Stokes equations and the energy equations can be related in the model. For example, if the model has an open boundary, there could be both inflow and outflow along different parts of that open boundary. On the inflow part of the boundary, it is useful to prescribe the temperature (e.g. slab age), whereas on the outflow part of the boundary, insulating Neumann boundary conditions can be used.

It is also possible to constrain degrees of freedom inside the domain. For example, in lithospheric-scale models, a velocity prescribed within the slab (i.e. not at the boundary) is an example of an internal driving force to prescribe subduction in the absence of initial slab pull

The choice of boundary conditions can alter the modelling results with e.g. different mechanical boundary conditions on the sidewalls of the mantle affecting the resulting subduction behaviour

The initial conditions also include the initial compositional geometry and material layout and/or history in the model, since the transport equation in Eq. (

To initially drive the model in the absence of driving boundary conditions, specific initial conditions inside the model domain, so-called

Common numerical modelling problems.

Another common example of initial conditions is the so-called

The modeller should always keep in mind that the initial conditions can often determine the model outcome. That is, after all, their purpose, since otherwise there would be no localised deformation or initial drivers. Efficiently starting the model is solely at the discretion of the modeller, who aims to artificially mimic a process they are interested in. Since these initial conditions, in combination with the boundary conditions, are critical for the model development, the choices the modeller makes are sometimes referred to as the

After the code has been successfully tested and benchmarked (Sect.

The construction of a specific model setup to investigate a particular problem or hypothesis can give rise to numerical issues, despite successful code verification. During the model validation process, these issues are identified and addressed. They can usually be spotted through monitoring solver convergence behaviour and visual inspection of the solution throughout the model evolution, with model breakdown (i.e. a crash of the programme) and unexpected behaviour being the most obvious red flags. In this section, we describe a number of common problems and their potential solutions.

A

When using a free surface, quickly increasing model velocities, a corresponding increase in solver iterations, and a sloshing movement of the surface are indicative of the

Another problem that can occur when mesh deformation is allowed in finite elements (i.e. in Lagrangian and ALE methods; Fig.

Visual inspection of the modelling results can uncover other issues. For one, smaller features (e.g. a subduction interface) can be seen to spread out over time and disappear; this can be due to diffusion or smearing of the advected field in the grid-based advection method. On the other hand, steep gradients of advected fields can lead to oscillations of these fields normal to the gradients. Mitigating such undershooting and overshooting requires more diffusion or different stabilisation algorithms of advection
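Both pathologies, smearing by numerical diffusion and over-/undershooting near steep gradients, can be reproduced in a few lines. The sketch below (with an assumed grid, Courant number, and periodic boundaries; not taken from any particular code) advects a step profile with a first-order upwind scheme, which is bounded but diffusive, and with the second-order Lax–Wendroff scheme, which is sharper but oscillatory:

```python
import numpy as np

# Sketch: advect a sharp step with (a) first-order upwind, which smears the
# step by numerical diffusion but stays bounded, and (b) Lax-Wendroff, which
# is less diffusive but produces over- and undershoots near the gradient.
n, C, steps = 200, 0.5, 100          # grid points, Courant number, time steps
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.where((x > 0.1) & (x < 0.4), 1.0, 0.0)   # initial step profile

up, lw = u0.copy(), u0.copy()
for _ in range(steps):
    # first-order upwind: a convex combination, hence bounded but diffusive
    up = up - C * (up - np.roll(up, 1))
    # Lax-Wendroff: second-order, but dispersive wiggles at the step
    lw = (lw - 0.5 * C * (np.roll(lw, -1) - np.roll(lw, 1))
             + 0.5 * C**2 * (np.roll(lw, -1) - 2 * lw + np.roll(lw, 1)))

# upwind stays within the initial bounds [0, 1];
# Lax-Wendroff overshoots above 1 and undershoots below 0
```

This is the trade-off the stabilisation algorithms mentioned above try to navigate: suppressing the oscillations without adding excessive diffusion.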

Unstable yet very popular finite-element pairs such as the

Internal inconsistencies can arise from disagreements in the modeller's choices in terms of boundary conditions, density formulations in the different governing equations, and the equations' coefficients. Not all inconsistencies are easily detectable or manifest themselves as numerical problems. For example, when the net prescribed inflow and outflow through the model boundaries is not (close to) zero, while a model is assumed incompressible, volume is no longer conserved and the solver for the Stokes equations might crash or return a nonsensical solution. When a free surface is used, this problem might be overlooked, as the surface can rise or fall in response to a net increase or decrease in volume, respectively. This physical inconsistency is also harder to detect in compressible models. Another example is prescribed surface velocities based on, for example, plate reconstruction models, which can add unrealistic amounts of energy into the modelled system.

Care should also be taken that the assumptions made to simplify the treatment of density in the governing equations (see Sect.

The thermodynamics of Earth materials are very complex, especially in multi-phase, multi-material systems; hence, they are often simplified in the numerical model. For example, in nature the thermal expansivity varies both smoothly and abruptly with depth (e.g. Fig. 7 of

The thermodynamic potential of the isothermal compressibility is defined as

All these relations have to be fulfilled at all temperatures and pressures. Consequently, it is often not immediately apparent if a given material description is thermodynamically consistent or not, and equations of state used in the geodynamic literature do not always take this into account
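As an illustration of such a constraint, consider the following standard thermodynamic identity, written here in generic notation rather than that of any particular equation of state. Since both the thermal expansivity $\alpha$ and the isothermal compressibility $\beta_T$ derive from the same volume function $V(P,T)$, their cross-derivatives must agree:

```latex
\alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_{P},
\qquad
\beta_T = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{T},
\qquad\Longrightarrow\qquad
\left(\frac{\partial \alpha}{\partial P}\right)_{T}
= -\left(\frac{\partial \beta_T}{\partial T}\right)_{P}.
```

An equation of state that prescribes, say, $\alpha(T)$ and $\beta_T(P)$ independently of each other will in general violate this identity, which is one way such thermodynamic inconsistencies can enter a model unnoticed.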

After the steps described in the previous section, checking for potential numerical issues and the internal consistency of the model setup, it is time to test whether the model results are consistent with our understanding of geodynamic processes. In a broad sense, does the model evolution stay within the bounds of what we know to be possible from geological and geophysical observations? More specifically, do the velocities obtained make sense? For example, does the sinking velocity of a slab lie within the estimated bounds from reconstructions and mantle tomography

Note that deviations of the model results from generic observations do not necessarily mean that the results are wrong. In fact, a model of a natural system like the geodynamic models described here can never truly be validated

Considering this lack of experimental control from both the real world and analogue models from the lab (see Sect.

After ensuring the accuracy, consistency, and applicability of the model results, these can now be used to address the hypothesis the modeller set out to test according to a particular modelling philosophy (Sect.

Model analysis includes visual (qualitative) diagnostics and quantitative diagnostics. These two important, partly overlapping, aspects are discussed in detail below.

Visualising the model output allows us to test, analyse, diagnose, and communicate the model results. Figures can describe and summarise the enormous amounts of data that numerical modelling can produce and highlight important features that support the initial hypothesis. Depending on the complexity of the data and the objective of the figure, visualisation methods differ widely and range from a graph of root mean square velocity (a quantitative model diagnostic, Sect.

To cover the wide range of potential visualisation products, a multitude of visualisation programmes is available. Some of the commonly used software packages are gnuplot (

Mere visual inspection of the model results is not sufficient to analyse and interpret the outcome of the simulations; a quantitative analysis of the results is also required. Deciding what specific post-processing is to be done can be a time-consuming process.
There are a range of non-dimensional numbers that can be calculated to characterise the flow of fluids in a range of geodynamic environments. If multiple physical processes influence the behaviour of a system, the non-dimensional numbers derived from the governing equations (Sect.
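As a concrete example of such a non-dimensional diagnostic, the thermal Rayleigh number measures the vigour of convection. The sketch below uses order-of-magnitude whole-mantle values (illustrative assumptions, not tied to any specific study):

```python
# Sketch: the thermal Rayleigh number, a standard non-dimensional measure of
# convective vigour, computed for illustrative whole-mantle values
# (order-of-magnitude assumptions, not from any particular study).
rho = 3300.0      # reference density (kg/m^3)
g = 9.81          # gravitational acceleration (m/s^2)
alpha = 3.0e-5    # thermal expansivity (1/K)
dT = 3000.0       # temperature contrast across the layer (K)
h = 2.9e6         # mantle thickness (m)
kappa = 1.0e-6    # thermal diffusivity (m^2/s)
eta = 1.0e21      # dynamic viscosity (Pa s)

# Ra = rho * g * alpha * dT * h^3 / (kappa * eta)
Ra = rho * g * alpha * dT * h**3 / (kappa * eta)
print(f"Ra = {Ra:.2e}")   # of order 1e7, i.e. vigorous convection
```

Comparing such a number against critical values (here, the critical Rayleigh number of order 10^3) immediately characterises the regime the model sits in, independent of the particular code used.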

Further analysing and diagnosing a model then varies with the modelling approach that has been taken (see Sect.

For

While some model analyses can be done by hand, the more elaborate post-processing that is becoming increasingly popular nowadays needs to be automated using open-source, testable, and extendable algorithms and shared as user-friendly software

A few software packages that allow for automated post-processing and diagnosis of geodynamic models are available to support geodynamicists with analysing their increasingly complex models and the large datasets originating from them. However, such tools are rare: although most individual researchers spend a large amount of time coding post-processing scripts, they often do not share those scripts with the geodynamics community. Moreover, scripts that are shared in the interest of repeatability and transparency are not necessarily applicable or relevant to the output of other software. Nor can individual scientists reasonably be expected to make their own post-processing scripts generically applicable. Contributing to post-processing tools as part of a community software project is a great step forward, as it reduces duplication of work while providing author recognition. Unfortunately, not all post-processing tools supplied with community software can be applied to results from other codes. Defining a set of interfacing functions, like the Basic Model Interface (

Generic, open-access geodynamic diagnostic tools are Adopt

Other recent developments include the automated comparison of observations to model predictions to find the smallest misfit between the two. Such statistical and probabilistic inversion methods help determine the model parameters, e.g. mantle viscosity or crustal density, that result in the best fit of the model solution with the observed quantity through forward geodynamic modelling

Scientific results are only of value if they are communicated to the wider scientific community. No matter whether they are spoken or written, the first aspects to get right when communicating science concerns letters, words, and phrases. Since geodynamic modellers, like most other life forms, tend to learn most effectively by observing and copying other fellows, it is no surprise that we tend to speak and write in a similar way to our mentors, peers, and friends. While there is generally nothing wrong with that process, it does, however, make for an excellent breeding ground for problems related to semantics that can lead to serious miscommunication

The semantics behind a modelling publication or presentation need to be in tune with the approaches taken in the modelling itself. If a modelling study is suitably communicated, there will be less misunderstanding about what the presented model stands for, what it does not stand for, and what the drawn conclusions mean.

Because geodynamic models are per definition simplifications of a natural system (Sect.

In addition, care has to be taken with absolute statements, like “X on the Earth is due to Y”, when drawing conclusions from the model results. As discussed in Sect.

Communicating a geodynamic modelling study, however, goes beyond semantics. The suitable words and phrases are most effective when combined with an appropriate manuscript structure as well as effective still and even motion graphics. Combined, these forms of communication make a new scientific insight accessible.

Peer-reviewed scientific papers are essential to disseminate relevant information and research findings. In particular, it is important to make results understandable and reproducible in the methods and results sections. Reviewers will criticise incomplete or incorrect method descriptions and may recommend rejection because these sections are critical in the process of making the results

While there are many ways of writing a paper, the main purpose of a scientific paper is to convey information. Historically, the structure of scientific papers evolved from mainly single-author letters and descriptive experimental reports to a modern-day comprehensive organisation of the manuscript known as “theory–experiment–discussion”

Manuscript structure for a geodynamic numerical modelling study following

A good introduction should answer the following questions: what is the problem to be solved? What previous work has been done? What is its main limitation? What do you hope to achieve? How do you set up your investigation? One major mistake is to attempt to do an extensive literature review in the introduction, which often goes off topic. The introduction serves as the stage to lay out the motivation for the study, and any background reading should focus on the question being addressed.

The methods section is an important part of any scientific manuscript

First, the methods should be plain and simple, objective, logically described, and thought of as a report of what was done in the study. Unstructured and incomplete methods can make the manuscript cumbersome to read or even lead the reader to question the validity of the research. Generally, journals have guidelines on how the methods should be formatted, but not necessarily what they should contain because they vary from field to field. The “who, what, when, where, how, and why” order proposed by

Figure

This is followed by a section on the computational approach explaining how the theory and the model are translated into computer language (Sect.

After the model setup has been explained, the methods should contain a section describing the design or layout of the study in detail. What is being tested or varied? How many simulations were performed in terms of model and parameter space? For example, one can use different model setups (i.e. lithosphere-scale and mantle-scale subduction models) with varying parameters in the same study. Why perform those simulations and vary those parameters? A summary table is handy to indicate all simulations that were run and which parameter was varied in which run. Additionally, it is important to include information on all input parameters and their values and units, as well as possible conversions within the code to enable reproducibility of the study (see Sect.

Analysis, visualisation, and post-processing techniques of numerical data should also be described in the methods section. This is a step generally ignored, but it is important to be open about it; e.g. “visualisation was performed in ParaView/MATLAB, and post-processing scripts were developed in Python/MATLAB/Unicorn language by the author”. If the post-processing methods are more complex, the author can provide more details (i.e. statistical methods used for data analysis). It is also good practice to provide these post-processing scripts for peer reviewing and community engagement (Sect.

Information should also be given on code and data availability. This was originally part of the methods section, but recently journals have introduced data management requirements (Sect.

Before moving to other sections, model assumptions need to be stated clearly in either the description of the theory or the numerical approach. Geodynamics is a field in which we take a complex system like the Earth or another planetary body and simplify it to a level from which we can extract some understanding (Sect.

Complementary to the methods section, the results section should be a report of the results obtained. The main goal of the results section is to present quantitative arguments for the initial hypothesis. However, any interpretation of the results or reference to other studies should be reserved for the discussion. For example, results in a mantle convection model might show that dense material accumulates at the bottom of the domain (i.e. core–mantle boundary). The interpretation of these results is that they provide a mechanism to explain how LLSVPs (large low-shear-velocity provinces) have formed. Illustrations, including figures and tables, are the most efficient way to present results (Sect.

The discussion section relates all the questions in the manuscript together: how do these results relate to the original questions or objectives outlined in the introduction section? Do the results support the hypothesis? Are the results consistent with observations and what other studies have reported? The modeller should discuss any simplifying assumptions, shortcomings of numerical methods and results, and their implications for the study. For example, the discussion in a specific modelling study (Sect.

At this point in preparing the manuscript, the authors have all the necessary elements to write the abstract and conclusions and come up with a descriptive title. Both the abstract and conclusion summarise the entire publication, but in a different way: one as a preview and one as an epilogue, respectively. It is crucial to focus a paper on a key message, intended for both specialist and non-specialist readership, which is communicated in the abstract and conclusions. Some journals also include a plain language summary and/or graphical abstracts as alternative ways to engage a broader audience.

In the end, every scientific manuscript has additional components such as the references, acknowledgements, Supplement, software and data availability, and author contributions (Fig.

In this section, we have primarily referred to scientific articles, but scientific manuscripts can also be reviews, editorials, and commentaries. The structure and contents of these manuscripts differ for each type. Each publisher and journal have their own style guidelines and preferences, so it is good practice to consult the publisher's guide for authors. Finally, even though scientific manuscripts may have a rigidly defined structure due to journal guidelines, there is still plenty of flexibility. In fact, the best manuscripts are creative, tell a story that communicates the science clearly, and encourage future work.

Effective visualisation through a scientific use of colours. Non-scientific colour maps

There are many different ways to visualise geodynamic models, and it is challenging to figure out how to do so most effectively. However, avoiding the most common visualisation pitfalls is the best start for any modeller looking into visually communicating important results across the research community and possibly beyond. The key aspects to remember when creating figures, thereby preventing misleading visual impressions, are the following: (1) scales, like graph axes and colour bars, must always be included to allow quantification of data values. (2) Bar plots must always have a zero baseline (or in the logarithmic case, a baseline at 1), so as not to mislead the reader with altered relative bar heights. (3) Pie diagrams should be avoided as angles and areas are not easily quantifiable by the human brain and are therefore not directly comparable to each other. These problems are exacerbated when pie charts are displayed as 3-D structures, which causes the values underlying the pieces closest to the viewer to appear artificially larger than the others. (4) Heat maps (i.e. plots with differently coloured tiles) should have numbered tiles that include the data value, as surrounding colours heavily distort the impression of a given colour, which can mislead the viewer's perception of the underlying data values significantly

All aspects of a figure need to be explained and understandable. While filtering, resampling, or otherwise post-processing model results instead of plotting raw data can improve the message conveyed by the figure, such actions should be mentioned in the figure caption. Some numerical codes work, for example, in dimensionless numbers (Sect.

Displaying 3-D models effectively is challenging and somewhat arbitrary, as the third dimension is often difficult to convey in a still image. Given the current dominant occurrence of non-interactive, two-dimensional canvases (e.g. the pdf format), 2-D slices of parameter fields often represent the model more effectively than 3-D volumes. The combination of various datasets, like flow characteristics on top of a temperature field, can be effective but is also challenging. Velocity arrows, for example, should not overlap or distract from the remaining important content of the figure. If the velocity in a 3-D visualisation is displayed using arrows, they should be coloured according to their magnitude because their lengths are distorted by the 3-D perspective. Stream lines and a coloured contour plot of the velocity field often provide a more suitable solution to display the flow direction and patterns, as well as its velocity magnitudes, respectively.

An uninformed, unscientific use of colours not only excludes a large portion of the readership, for example through hardly distinguishable colour combinations for readers with a colour-vision deficiency (like the most common red–green colour blindness), but also significantly distorts the underlying model data visually

Scientific colour maps are perceptually uniform to prevent data distortion, and they are perceptually ordered to make data intuitively readable, colour-vision deficiency friendly, and optimally readable in black and white to include all readers. Suitable scientific colour maps

Just like any other study, numerical modelling studies should be

Before development starts, software developers and modellers involved in the development of the software they use should consider setting up a

In the

When archiving software developments or additions, one should take care to include instructions for installation and use as well as ample comments explaining the code. Software containers such as Docker containers (

Apart from software, modelling studies also use and produce data. For one, observational data can be provided as input to the model setup, and usage of these data when created by others should be duly referenced. Then, simulations produce data – the model results.

To help modellers with the implementation of the FAIR data principles, publishers and data repositories formed the coalition COPDESS. The COPDESS website (

Data repositories can be subdivided into institutional repositories, domain-specific repositories (e.g. EarthChem, IRIS, PANGAEA, and HydroShare), thematic data repositories (which differ from domain-specific repositories by having to transform the data to the repository's format yourself, e.g. NASA's Planetary Data System), and general repositories like figshare, Dryad, and Zenodo

Not only data can (and should) have a persistent identifier, but researchers can also create persistent digital identifiers, like an ORCID iD (

One last thing modellers should consider is that numerical modelling does not come for free. As a community, we have to acknowledge the environmental impact, especially of high-performance computing and data storage. In a busy year (e.g. 1 million CPU hours), computations of one researcher can emit up to 6 t of CO2

In addition to environmental costs, there are non-negligible financial costs to modelling. Access to high-performance machines can be expensive and a heavy entry in a modeller's budget. Moreover, the often big data that result from running numerical models need to be stored, diagnosed, visualised, and shared. Large local or remote storage solutions, software licenses, and powerful personal computers are expensive. These financial modelling costs need to be acknowledged not only by modellers themselves but also by others, such as funding agencies. With conscious management of resources, software, and data, we can ensure a fairer, more efficient, and greener geodynamic modelling community.

Geodynamic modelling studies provide a powerful tool for understanding processes in the Earth's crust, mantle, and core that are not directly observable. However, for geodynamic modelling studies to make significant contributions to advancing our understanding of the Earth, it is of paramount importance that the assumptions entering the modelling study and their effects on the results are accurately described and clearly communicated to the reader. These assumptions are made at numerous stages of the numerical modelling process, such as choosing the equations the code solves, the numerical method used to solve them, and the boundary and initial conditions of the model setup.

Apart from acknowledging the assumptions made and their implications, it is important to view a modelling study in light of its intended philosophy. Generic modelling studies, usually characterised by extensive parameter variations, aim to understand the general physical behaviour of a system. Specific modelling studies, on the other hand, aim to reproduce a specific state of a specific geodynamic system and therefore rely more heavily on data comparisons.

In order to make the geodynamic modelling process transparent and less prone to errors, good software management is necessary with easily repeatable code verification to ensure that the equations are solved correctly. Additionally, it is important that the model is internally consistent with regards to thermodynamics and boundary conditions. Then, for individual models, the results need to be validated against observations.

When communicating the results of a geodynamic modelling study to peers, it is important to provide both quantitative and qualitative analyses of the model. Fair presentation of the results requires clear, unbiased, and inclusive visualisation. The results should first be objectively described and presented, after which they can be interpreted in the discussion.

In addition to outlining these best practices in geodynamic numerical modelling, we have shown how to apply them in a modelling study. Taking these best practices into account will lead to clearly communicated, unambiguous, reproducible geodynamic modelling studies. This will encourage an open, fair, and inclusive research community involving modellers, collaborators, and reviewers from diverse backgrounds. We hope to set a standard for the current state of geodynamic modelling that scientists can build upon as future research develops new methods, theories, and our understanding of the Earth. Geodynamic modelling is bound to increasingly link with a growing number of disciplines, and we trust that the perspective presented here will further facilitate this exchange.

In this Appendix, we provide an example of how to translate a physical model into a numerical model through discretisation such that it can be coded up into an algorithm. More in-depth details on numerical modelling can be found in

Before discretising the heat equation, we need to approximate all its terms, after which we can discretise these approximations. Both first-order and second-order derivatives are present in the heat equation in Eq. (

The forward Taylor series expansion of

To approximate second-order derivatives, we start with the Taylor expansions of function

Adding these two equations together yields
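For reference, the standard form of these expansions and their sum is as follows (written in generic notation, which may differ slightly from that of the elided equations):

```latex
\begin{aligned}
f(x+h) &= f(x) + h f'(x) + \tfrac{h^2}{2} f''(x) + \tfrac{h^3}{6} f'''(x) + \mathcal{O}(h^4),\\
f(x-h) &= f(x) - h f'(x) + \tfrac{h^2}{2} f''(x) - \tfrac{h^3}{6} f'''(x) + \mathcal{O}(h^4),\\
\Rightarrow\quad f''(x) &\approx \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + \mathcal{O}(h^2).
\end{aligned}
```

The odd-order terms cancel in the sum, which is why this central approximation of the second derivative is second-order accurate.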

Now that we know how to approximate first-order and second-order derivatives, we can apply these approximations to Eq. (

1-D discretisation in space (horizontal axis) and time (vertical axis).

The next step is to discretise the first-order time derivative in Eq. (

This forward finite-difference derivative is called first-order accurate, which means that a very small

Both

Hence, we have found an expression to compute the temperature
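A minimal implementation of this explicit (forward-in-time, centred-in-space) update might look as follows; the parameter values are illustrative assumptions, and the time step is deliberately chosen below the stability limit $\Delta t \le \Delta x^2 / (2\kappa)$ of the explicit scheme:

```python
import numpy as np

# Sketch of the explicit (forward-in-time, centred-in-space) update for
# 1-D heat diffusion, with assumed parameter values. Stability of this
# explicit scheme requires dt <= dx**2 / (2 * kappa).
n, L, kappa = 101, 1.0, 1.0e-6
dx = L / (n - 1)
dt = 0.4 * dx**2 / kappa            # safely below the stability limit
x = np.linspace(0.0, L, n)
T = np.exp(-((x - 0.5) ** 2) / 0.005)   # initial Gaussian anomaly
T[0] = T[-1] = 0.0                      # Dirichlet boundaries

T0_max = T.max()
for _ in range(500):
    # T_i_new = T_i + kappa * dt / dx^2 * (T_{i+1} - 2 T_i + T_{i-1})
    T[1:-1] += kappa * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# diffusion smooths the anomaly: the peak decays and T stays non-negative
```

Because the update is a convex combination of neighbouring values at this time step, the scheme respects the maximum principle: no new extrema appear.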

An alternative approach to deal with the time discretisation is an implicit finite-difference scheme, whereby we use the backward difference for the time derivative (Eq.

This is often rewritten as follows in order to deal with the unknowns of time step

The main advantage of implicit methods over their explicit counterpart is that there are no restrictions on the time step, since the fully implicit scheme is unconditionally stable. Therefore, we will use the backward (implicit) scheme for the rest of this example. This does not mean that it is accurate no matter what, as taking large time steps may result in an inaccurate solution for features with small spatial scales. For any application, it is therefore always a good idea to do a convergence test, i.e. to check the results by decreasing the time step until the solution no longer changes. Similarly, a spatial convergence check, in which the solution is evaluated at different spatial resolutions, is useful. These convergence tests establish whether the method deals robustly with both small- and large-scale features.

Equation (

We discretise the domain of length

This system of equations can be rewritten in matrix and vector form to obtain the general expression for the linear system of equations we are solving (Sect.
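In code, assembling and solving this linear system for the backward (implicit) scheme can be sketched as follows (assumed parameter values; a production code would use a banded or sparse solver rather than the dense solve shown here):

```python
import numpy as np

# Sketch of the implicit (backward-Euler) scheme as a linear system
# A T_new = T_old, assembled as a tridiagonal matrix (assumed values;
# a real code would use a banded/sparse solver instead of a dense one).
n, L, kappa = 101, 1.0, 1.0e-6
dx = L / (n - 1)
dt = 10.0 * dx**2 / kappa          # far beyond the explicit stability limit
s = kappa * dt / dx**2

A = np.eye(n)
for i in range(1, n - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -s, 1.0 + 2.0 * s, -s
# rows 0 and n-1 stay as identity rows: Dirichlet boundaries T = 0

x = np.linspace(0.0, L, n)
T = np.exp(-((x - 0.5) ** 2) / 0.005)   # initial Gaussian anomaly
T[0] = T[-1] = 0.0
for _ in range(50):
    T = np.linalg.solve(A, T)      # stable even for this large time step

# the solution decays smoothly despite the large time step
```

Note that the scheme remains stable and bounded at a time step twenty times larger than the explicit limit, which is exactly the unconditional stability discussed above; accuracy at such large steps still needs to be confirmed by a convergence test.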

All figures presented in the paper, including light, dark, and transparent versions in various file formats, are available at

The supplement related to this article is available online at:

This work is based on both the European Geosciences Union geodynamics blog (

The contact author has declared that neither they nor their co-authors have any competing interests.

We have attempted to limit the total number of references presented in this work to increase readability. We acknowledge that this does not represent the full extent of work done on any given topic. However, we refer to well-known review papers and textbooks with extensive explanations as well as exemplary papers from early career scientists from diverse backgrounds to further promote equality, diversity, and inclusion in geodynamics.

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

We would like to thank executive editor Susanne Buiter and topical editor Taras Gerya for supporting and encouraging the submission of an educational review paper like this. We would also like to thank our reviewers Paul Tackley, Boris Kaus, and Laurent Montési for extensive and detailed constructive reviews that greatly improved this paper. Similarly, we would like to thank everyone who provided additional comments during the open discussion for

We warmly thank Antoine Rozel, who was an integral part of the original EGU GA geodynamics 101 short courses and helped shape the format. We are deeply grateful to the EGU – in particular their communication officers Laura Roberts Artal, Olivia Trani, and Hazel Gibson – for the possibilities they provided us in the form of the EGU geodynamics blog and the short courses as well as for their continued support. We thank the attendees of the short courses for their constructive feedback. Thanks to all our proofreaders for their valuable feedback: Ruth Amey, Molly Anderson, Kiran Chotalia, Tim Craig, Matthew Gaddes, Rene Gassmöller, Edwin Mahlo, Martina Monaco, Gilles Mercier, Arushi Saxena, and Jamie Ward.

Iris van Zelst was funded by the Royal Society (UK) through Research Fellows Enhancement award RGF

This paper was edited by Taras Gerya and reviewed by Paul Tackley, Boris Kaus, and Laurent Montesi.