Earth deformation is a multi-scale process ranging from seconds (seismic deformation) to millions of years (tectonic deformation). Bridging short- and long-term deformation and developing seismotectonic models has been a challenge in experimental tectonics for more than a century. Since the formulation of Reid's elastic rebound theory 100 years ago, laboratory mechanical models combining frictional and elastic elements have been used to study the dynamics of earthquakes. In the last decade, with the advent of high-resolution monitoring techniques and new rock analogue materials, laboratory earthquake experiments have evolved from simple spring-slider models to scaled analogue models. This evolution was accompanied by advances in seismology and geodesy, along with the relatively frequent occurrence of large earthquakes in the past decade. This coincidence has significantly increased the quality and quantity of relevant observations in nature and triggered a new understanding of earthquake dynamics. We review here the developments in analogue earthquake modelling with a focus on those seismotectonic scale models that are directly comparable to observational data on short to long timescales. We lay out the basics of analogue modelling, namely scaling, materials and monitoring, as applied in seismotectonic modelling. An overview of applications highlights the contributions of analogue earthquake models in bridging timescales of observations including earthquake statistics, rupture dynamics, ground motion, and seismic-cycle deformation up to seismotectonic evolution.
Deformation of the Earth involves a large spectrum of timescales: from earthquake rupture of less than 1 s to days of aftershock activity, and from years of post-seismic relaxation to hundreds of millions of years of deformation related to plate movements (Wilson cycle; e.g. Ben-Zion, 2008). Such a multi-scale process poses major challenges to observation in nature, as the instrumental and historical records are too short to capture a significant amount of the evolution with the high resolution and completeness required. Simulation is a way to overcome such limitations. However, our knowledge of the physics of earthquakes and Earth deformation in general is incomplete, which makes it challenging to set up realistic numerical scenarios. Simplifications and parametrizations of physical laws implemented in numerical models are rarely justified. The non-uniqueness of numerical solutions for typical problems of Earth deformation on various timescales (e.g. inversion of rupture kinematics or mantle rheology from co- and post-seismic observations, respectively) is another limitation of computer models. Experimental approaches using physically self-consistent analogue models have traditionally been used to address physical problems like earthquakes for about a century and tectonics for about two centuries. Bridging the gap between short-term earthquake and long-term tectonic deformation has only recently become possible due to a better understanding of material properties and developments in monitoring techniques. New across-timescale analogue and numerical modelling approaches now serve as an exploratory tool to better understand instrumental, historical, paleo-seismological, and geological observations, both individually and in combination, and to interpret them in a seismotectonic context.
Reid (1911) with his pioneering jelly experiments was the first to experimentally simulate earthquakes and formulate a theory, the elastic rebound theory, based on his laboratory and field observations following the 1906 San Francisco earthquake. Wondering “What forces could have produced such distortion and displacements in the rock mass of the region” (Wood, 1912), Reid postulated that only elastic forces can do so. He hypothesized that the release of elastic deformation stored due to a slow accumulation of strain in the Earth occurred by fracturing “along an old fault line” (Reid, 1911) that caused vibration and rebound of the elastically strained rock mass in the vicinity of the fracture.
With the rise of plate tectonic theory in the 1960s, accompanied by the flourishing of seismology and experimental rock mechanics, stick–slip instabilities (the cyclic slow accumulation and sudden release of stress along frictional interfaces) along pre-existing discontinuities, i.e. tectonic faults, have become the most prominent earthquake mechanism (e.g. Brace and Byerlee, 1966; Byerlee, 1970). Today we are aware that the largest earthquakes, as measured by seismic moment release, are exclusively a result of tectonic faulting in the brittle parts of the Earth's crust concentrated along plate boundaries. Smaller events might occur as a result of, e.g., magmatic diking, hydraulic fracturing, landslides, anthropogenic pumping, or nuclear tests. We focus here on the modelling of tectonic earthquakes.
However, the study of earthquakes faces several limiting factors related to the difficulty of accessing the deep source of the earthquake and of integrating the characteristic timescales of deformation processes that extend from seconds to thousands of years. As a consequence, seismic hazard mitigation is inevitably based on incomplete geological datasets and poorly constrained physical parameters.
New technological advances in seismology and geodesy have significantly improved our knowledge of the dynamics of deformation processes associated with earthquakes on all relevant timescales. By substituting space for time, all archetypical phases of the seismic cycle including the coseismic, post-seismic, and inter-seismic phase have been observed in nature (e.g. Klotz et al., 2001; Wang et al., 2012). However, while recent earthquakes are well documented, this is not the case for older events, for which historical records are often too short and incomplete to be usable with the required precision. Beyond the last 50 years, only sparse seismological records and kinematic measurements are available. This situation does not allow us to constrain seismic-cycle dynamics that extend over a time span of 100 to 1000 years. In particular, geodetic data (e.g. GPS – Global Positioning System – and InSAR – interferometric synthetic aperture radar) that only cover a short time span of the inter-seismic period (< 30 years) are typically extrapolated over longer periods using steady-rate assumptions. However, numerous geophysical observations highlight the complexity of the inter-seismic period (e.g. slow earthquakes, periods of seismic quiescence or crisis), suggesting that the steady-state assumption may be an oversimplification. This raises a fundamental issue: the length of the earthquake cycle far exceeds the duration of most scientific observations.
Categories of analogue earthquake models (see text for discussion).
The scientific exploitation of available earthquake geophysical data mainly utilizes analytical and numerical modelling methods that allow us to obtain complementary information about deformation processes, physical and mechanical properties of faults, and boundary conditions by means of data inversion (e.g. slip at depth, contemporary stress accumulation, stress transfer). Although such approaches allow reproducing and analysing observed surface velocity fields, they face some limitations. On one hand, some important parameters, such as frictional conditions along the fault plane or stress history induced by past seismic cycles are not understood well enough. On the other hand, modelling all seismic-cycle phases using a single approach with different processes acting on different time- and space scales is still a difficult goal to achieve numerically.
We present here an overview of analogue seismotectonic modelling, from its history to the state of the art. We group existing experimental approaches into three categories of increasing complexity and increasing similarity with the natural prototype. The latest development, namely analogue seismotectonic scale models, has a central position here, as seismotectonic scale models have shown great potential for future developments alongside numerical simulations of the earthquake process. We elaborate on the scaling, monitoring, and material characterization and show a brief overview of applications of the various models. Where appropriate, we make links to numerical models and highlight outlooks and challenges. We compiled information into a set of tables which give a quick overview of existing approaches (Table 1), scaling (Tables 2 and 3), materials (Table 4), and monitoring techniques (Table 5). Symbols used in this article are summarized in Table 6. Original data are published as open-access material in Rosenau et al. (2016).
Dimensionless numbers used in analogue earthquake models (see text for discussion).
Analogue model categories:
In parallel to the development of analytical and numerical approaches,
numerous experimental or analogue models have been developed to investigate
the physics of earthquakes, seismic-cycle dynamics, and seismotectonic
evolution. Here, we categorize analogue earthquake models into three groups
with a decreasing level of abstraction and an increasing applicability (Fig. 1; Table 1):
1. “Spring-slider models”, in which elastic and frictional elements are physically discrete components of the set-up (Sect. 2.1). These models can only be applied conceptually to nature.

2. “Fault block models”, in which two elastic blocks, with similar or different elastic properties, are in frictional contact (Sect. 2.2). Observations from these models can be qualitatively extrapolated to nature.

3. “Seismotectonic scale models”, in which a distinct tectonic setting is realistically simulated on a small scale and with boundary conditions mimicking the natural prototype as closely as possible (Sect. 2.3). These models can be directly and often quantitatively upscaled to nature.
Examples of spring-slider models:
Following Reid's initial idea of earthquakes reflecting the release of
elastic deformation energy stored in the Earth by a mechanical instability, fairly simple spring-slider models (Figs. 1a, 2) have been employed both
in seismology and experimental rock mechanics. The spring-slider system is mathematically modelled using a single differential equation that describes the slider motion as a function of the relevant forces acting on it, i.e. the spring force loading the slider and the friction force resisting its motion:

m (d²x/dt²) = k (V₀t − x) − F_f, (1)

where m is the slider mass, x its position, k the spring stiffness, V₀ the loading velocity, and F_f the (generally velocity- and state-dependent) friction force.
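The behaviour captured by this single equation of motion can be sketched numerically. The code below is a minimal illustration, not a reproduction of any published model: all parameter values are arbitrary assumptions, and the friction law is the simplest possible weakening law (a static level above a dynamic one) rather than a full rate-and-state formulation.

```python
import math

def spring_slider(k=50.0, m=1.0, v0=0.1, mu_s=0.6, mu_d=0.4, normal=10.0,
                  dt=1e-3, t_max=20.0):
    """Integrate the driven spring-slider equation with the simplest
    weakening law: sliding starts when the spring force exceeds the
    static friction mu_s*normal and is resisted by the lower dynamic
    friction mu_d*normal until the slider comes to rest again.
    Returns lists of times and slider positions."""
    t, x, v = 0.0, 0.0, 0.0
    times, positions = [], []
    while t < t_max:
        f_spring = k * (v0 * t - x)
        if v == 0.0 and abs(f_spring) <= mu_s * normal:
            pass  # stuck: static friction balances the spring force
        else:
            # dynamic friction opposes the (incipient) slip direction
            direction = v if v != 0.0 else f_spring
            a = (f_spring - math.copysign(mu_d * normal, direction)) / m
            v += a * dt          # semi-implicit Euler step
            if v < 0.0:
                v = 0.0          # friction cannot reverse the slip
            x += v * dt
        t += dt
        times.append(t)
        positions.append(x)
    return times, positions
```

With these (arbitrary) parameters the slider stays locked while the spring loads and then repeatedly fails in rapid slip events whose peak velocity far exceeds the loading velocity, i.e. stick–slip.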
The original Burridge and Knopoff model featured a velocity-weakening friction law as the key non-linearity controlling complexity in earthquake patterns (Carlson and Langer, 1989; Carlson et al., 1991). Schmittbuhl et al. (1996) and Mori and Kawamura (2006, 2008) investigated how details of the velocity-dependent friction law control earthquake characteristics. Several authors implemented different forms of the friction laws, e.g. time dependence (Dieterich, 1972a), displacement dependence (Cao and Aki, 1984), and rate and state dependence (Gu and Wong, 1991; Erickson et al., 2011; Abe and Kato, 2013). Several developments were aimed at including additional effects such as seismic waves (Wang, 2012) or stress relaxation (Aragon and Jagla, 2013). These models succeeded in simulating the variety of seismicity patterns, including Gutenberg–Richter frequency distributions, as well as details of the stick–slip process including pre-seismic quiescence, creep, and aftershocks.
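A standard way to quantify how well such models reproduce Gutenberg–Richter statistics is to estimate the b-value of an event catalogue. The sketch below uses the classical maximum-likelihood estimator b = log10(e)/(mean(M) − Mc) (Aki, 1965) on a synthetic catalogue; the catalogue parameters are arbitrary illustrative choices.

```python
import math, random

def aki_b_value(mags, m_c):
    """Maximum-likelihood b-value (Aki, 1965) for magnitudes at or above
    the completeness magnitude m_c: b = log10(e) / (mean(M) - m_c)."""
    above = [m for m in mags if m >= m_c]
    return math.log10(math.e) / (sum(above) / len(above) - m_c)

# Synthetic Gutenberg-Richter catalogue with b = 1: magnitudes above the
# completeness level m_c are exponentially distributed with rate b*ln(10).
random.seed(0)
b_true, m_c = 1.0, 2.0
catalogue = [m_c + random.expovariate(b_true * math.log(10))
             for _ in range(50000)]
b_est = aki_b_value(catalogue, m_c)  # close to b_true
```

The same estimator can be applied directly to catalogues of analogue events once a laboratory magnitude scale is defined.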
While laboratory spring-slider models have been used widely for teaching basic earthquake phenomena, few have been used to scientifically approach earthquake dynamics. One of the few is that of King (1975, 1991, 1994), who employed a circular chain of spring sliders to study earthquake predictability and slip variability over various cycles (Fig. 2a). Heslot et al. (1994) performed spring-slider experiments to illuminate the dependence of frictional stability on the fundamental parameters of spring stiffness, loading velocity, and slider mass (Fig. 2c). More recent studies were carried out by Varamashvili et al. (2008) to study the effect of external forcing (Fig. 2d) and by Popov et al. (2012) to study the onset of frictional instability.
Brace and Byerlee (1966) were the first amongst a number of other rock mechanics experimentalists of the late 1960s (see references in Brace, 1972) to reproduce stick–slip instabilities by biaxial compression of cylindrical rock samples. The loading machines used were usually designed to be as stiff as possible (in any case stiffer than the rock sample) but were compliant enough to store and release elastic deformation energy. Therefore, such tests can be considered to be a type of spring-slider experiment. Both intact and precut samples were used to generate stress drops associated with stick–slip instability on the order of kilobars. In a large number of experiments, the roles of mineralogy, porosity, pressure, water, temperature, gouge thickness, and stiffness of the loading system in stick–slip were first established (as summarized in Brace, 1972).
Since these pioneering experiments, stick–slip as an analogue mechanism of earthquakes has been studied using axial testing machines but also direct and rotary or ring-shear testers (see relevant references in Table 1). It is not within the scope of this paper to review the literature of rock mechanics experimental work done on stick–slip deformation in the last half century. Here we focus on those approaches using analogue rock materials instead of rock samples. Knuth and Marone (2007), for example, used rods of different materials in a double-direct shear device in different configurations to study the mechanisms of sliding, rolling, and dilation as well as stick–slip (Fig. 2f). A large body of work exists on granular shear experiments using glass beads and other synthetic fault gouges in shear, rotation, and axial compression apparatuses, some with a focus on stick–slip (e.g. Anthony and Marone, 2005; Mair et al., 2002; Alshibli et al., 200; Schulze, 2003; Scuderi et al., 2015; Rubinstein et al., 2012; Nasuno et al., 1998; Fig. 2e). These experiments were accompanied by numerical models mainly using the discrete element method (e.g. Abe and Mair, 2009; Abe et al., 2006; Ferdowsi et al., 2013, 2014, 2015; Mair and Abe, 2008; Mair and Hazzard, 2007). The shearing of optically clear acrylic or polymeric spheres and discs (e.g. Nasuno et al., 1998; Daniels and Hayman, 2008; Reber et al., 2014) provided insights into the dynamics of stick–slip in granular media by means of 2-D “see-through” experiments (Fig. 2e and g).
Several studies which focus on the frictional behaviour of granular rock
analogue materials (e.g. sand, glass beads; e.g. Schulze, 2003; Ritter et al.,
2016; Klinkmüller et al., 2016; Panien et al., 2006; Lohrmann et al.,
2003) at low loads (kPa) used a Schulze ring-shear tester (Schulze, 1994; Fig. 2b), which serves here as an example of a spring-slider device used to
generate analogue earthquakes. The ring-shear tester consists of a 4 cm high
annular shear cell made of stainless steel holding approximately 1 L of
the sample material. A ring-shaped lid is placed on the filled cell. The
lid is subjected to a normal force in order to control normal load on the
sample. While the cell is rotated at velocities ranging from 0.5 to 500
The main limitation of the classical, simple spring-slider set-up stems from the general rigidity of the slider. A rigid slider distributes shear stress evenly across the frictional interface. Therefore, both loading and release are unrealistically homogeneous, resulting in fairly periodic and characteristic (similar slip magnitudes), system-sized events. Slip distributions of earthquakes in nature usually show complexity, with areas of high and low energy release (asperities and barriers, respectively; e.g. Aki, 1984). Such heterogeneity might be a stationary feature through subsequent seismic cycles or transient and related to variable frictional properties and/or the slip history of the fault. In any case it also reflects heterogeneous loading as well as heterogeneous release. Multiple spring-slider systems (e.g. Burridge and Knopoff, 1967; King, 1991, 1994) aimed to overcome this limitation and succeeded in generating more complex slip and recurrence patterns (see Sect. 6.1).
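How coupling many sliders breaks the periodicity of the single-block system can be illustrated with a minimal sketch. The code below is a quasi-static, cellular-automaton simplification of a Burridge–Knopoff-type chain (inertia neglected, stress redistributed instantaneously); all stiffnesses, thresholds, and sizes are arbitrary assumptions chosen for illustration.

```python
import random

def bk_chain(n_blocks=64, k_c=1.0, k_p=0.5, n_steps=5000, seed=1):
    """Quasi-static multi-block (Burridge-Knopoff-type) stick-slip chain.

    Blocks are coupled to their neighbours (stiffness k_c) and to a slowly
    moving driver (stiffness k_p). Stress is loaded uniformly until the
    closest-to-failure block fails; a failing block drops its stress to
    zero and transfers a stiffness-weighted share to each neighbour, which
    may cascade into multi-block events. Returns the event sizes (number
    of individual block slips per loading step)."""
    rng = random.Random(seed)
    stress = [rng.random() for _ in range(n_blocks)]
    share = k_c / (2.0 * k_c + k_p)   # fraction passed to each neighbour
    events = []
    for _ in range(n_steps):
        # uniform loading up to the failure threshold of the weakest block
        # (tiny overshoot guards against floating-point round-off)
        load = 1.0 - max(stress) + 1e-12
        stress = [s + load for s in stress]
        size = 0
        unstable = [i for i, s in enumerate(stress) if s >= 1.0]
        while unstable:
            i = unstable.pop()
            drop, stress[i] = stress[i], 0.0
            size += 1
            for j in (i - 1, i + 1):
                if 0 <= j < n_blocks:
                    stress[j] += share * drop
                    if stress[j] >= 1.0 and j not in unstable:
                        unstable.append(j)
        events.append(size)
    return events
```

Because part of the dropped stress is dissipated at each slip, the cascades remain finite, and the event-size catalogue spans single-block slips up to multi-block ruptures rather than identical system-sized events.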
A special type of spring-slider set-up currently under development may be called the “deformable slider spring”: here, the slider is not rigid but deforms plastically. While elasticity is still controlled by a separate spring, the frictional element can be replaced by different plastic rheologies, e.g. a Bingham fluid (Reber et al., 2015).
Examples of fault block models:
In addition to multiple spring-slider models, fault block models have been developed to circumvent the strong assumption of uniform loading and release inherent in simple spring-slider models. They allow investigating different aspects of earthquake dynamics on the scale of an analogue fault plane. Unlike spring-slider models, in which the system is provided with elasticity through a spring, in fault block models the elastic strain is stored within the sample volume (Figs. 1b, 3), mimicking the behaviour of fault-bounded crustal blocks. The first experiments by Reid (1911) were jelly block experiments used to demonstrate the distortion and displacement phenomena seen during the 1906 San Francisco earthquake. This set-up also allows small-scale (partial) and large-scale (complete) failures to occur in a less segmented fashion compared to multiple spring sliders, bearing the potential to generate more realistic frequency–size distributions.
Fault block models share a common characteristic: they are composed of two blocks which are uniformly loaded normal to their interface and sheared against each other. They are equivalent to a slip surface embedded in an elastic solid. The slow shearing motion imposed on the system mimics tectonic loading. Two loading configurations can be differentiated: shear, including direct shear, ring shear, and Couette shear (e.g. Brune, 1973; Archuleta and Brune, 1975; Schallamach, 1971; Latour et al., 2013b; Fig. 3a–c, e), and (biaxial) compression (e.g. de Joussineau et al., 2001; Bouissou et al., 1998; Rosakis et al., 1999; Fig. 3d and f). Earthquakes may nucleate spontaneously and repeatedly or through external forcing, i.e. impact (e.g. Xia et al., 2004).
The two blocks and the interface between them are the analogues of rock volume and an embedded fault of finite dimensions, respectively. Edge effects, artificial reflections, and free-surface effects are usually unavoidable in fault block models (e.g. Scholz et al., 1972). The two blocks may be of the same material (e.g. foam rubber: Brune, 1973; Archuleta and Brune, 1975; rock: Lockner et al., 1991; Lei et al., 2000; Zang et al., 2000; Thompson et al., 2005, 2006, 2009) or materials with different compliances (e.g. gel sliding on glass: Baumberger et al., 2003; rubber on rough substrate: Hamilton and McCloskey, 1997, 1998; Schallamach, 1971). Different materials have been adopted depending on the desired rheological response of the system, ranging from purely elastic (e.g. rubber: Schallamach, 1971; Plexiglas: Rosakis et al., 2007, and references therein) to viscoelastic (e.g. polyvinyl alcohol – PVA: Namiki et al., 2014). The use of softer materials allows us to slow down the rupture process and make it more accessible to monitoring. This has been exploited by, e.g., Baumberger et al. (2003), Yamaguchi et al. (2011), and Latour et al. (2013b) using gel sliders (Fig. 3e).
Examples of seismotectonic scale models:
With the advent of high-resolution strain monitoring such as digital image correlation (e.g. Adam et al., 2005), it became possible to measure small-scale deformation increments (i.e. scaling from decimetres to metres in nature) corresponding to single earthquake displacements. This unlocked the possibility to realize analogue seismotectonic scale models of earthquakes (Figs. 1c, 4). Modern seismotectonic scale models feature realistic non-linear frictional properties of materials. They are able to mimic the coseismic dynamic weakening that in nature happens for various reasons (e.g. frictional melting, thermal pressurization, chemical effects). A second feature is a properly scaled elasticity of the model. Classical analogue models of tectonic processes use sand or other rigid particles to study long-term fault kinematics in the brittle regime. However, their elastic moduli appear to be too high (GPa; e.g. Klinkmüller et al., 2016) to be used to simulate elastic deformation realistically. To model earthquakes and seismic cycles, scaling rules impose a decrease in the elastic moduli of the model by several orders of magnitude. This can be achieved by adding elastic particles (e.g. rubber pellets) or by using compliant solids (e.g. gelatine, foam rubber). In contrast to fault block models, seismotectonic scale models make a realistic depth-dependent pressurization of the faults (i.e. lithostatic pressure) possible. Loading conditions that mimic tectonic forcing are realized by applying pure or simple shear boundary conditions, for example.
In the context of earthquakes and seismic cycles, seismotectonic scale models are used to study seismogenic fault behaviour over many orders of magnitude in timescales. They allow us to simulate multiple seismic cycles in three-dimensional models with fully dynamic ruptures including the interaction between seismic and aseismic fault areas, off-fault deformation in the brittle upper crust, and viscoelastic relaxation in the lower crust and mantle. Therefore, despite their simplicity, they are the preferred scientific tool to overcome the limitations in natural observations by means of sufficiently long and well enough resolved time series of deformation.
From an observational point of view, the main challenges of analysing analogue earthquake models are the elastic nature of deformation as well as the very small displacements on various timescales, especially with regard to the inter-seismic stage. Because earthquake cycles are dominated by elastic deformation, the deformation fields are characterized by velocity reversals and alternations between slow strain accumulation (loading) and fast release (earthquake). Due to the high variability in strain rates, only quasi-continuous or highly resolved incremental monitoring provides accurate quantification. Moreover, in seismotectonic scale models, incremental displacements of interest are necessarily very small, as decimetres to metres in nature scale down to micrometres in the laboratory. Finally, deformation increments occur on a wide range of timescales (seconds to thousands of years in nature, milliseconds to minutes in laboratory models), resulting in velocities that vary by more than 12 orders of magnitude in nature. Thanks to non-linear timescaling (see Sect. 3), the latter can be reduced to about 3 orders of magnitude in analogue models.
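Resolving micrometre displacements with cameras relies on sub-pixel correlation techniques. The code below is a one-dimensional sketch of the principle behind digital image correlation (integer cross-correlation peak refined by a three-point parabolic fit); it is an illustration of the method, not the algorithm of any particular DIC package.

```python
import math

def subpixel_shift(ref, cur, max_lag=5):
    """Estimate the displacement of signal `cur` relative to `ref`
    (in samples) by cross-correlation, refined to sub-sample precision
    with a parabolic fit through the correlation peak."""
    n = len(ref)

    def xcorr(lag):
        lo, hi = max(0, -lag), min(n, n - lag)
        return sum(ref[i] * cur[i + lag] for i in range(lo, hi))

    lags = list(range(-max_lag, max_lag + 1))
    c = [xcorr(lag) for lag in lags]
    k = c.index(max(c))
    if 0 < k < len(c) - 1:
        # three-point parabolic interpolation around the integer peak
        denom = c[k - 1] - 2.0 * c[k] + c[k + 1]
        delta = 0.5 * (c[k - 1] - c[k + 1]) / denom if denom else 0.0
    else:
        delta = 0.0
    return lags[k] + delta

# example: a Gaussian intensity profile displaced by 1.5 samples
ref = [math.exp(-((i - 20.0) / 4.0) ** 2) for i in range(64)]
cur = [math.exp(-((i - 21.5) / 4.0) ** 2) for i in range(64)]
shift = subpixel_shift(ref, cur)
```

Real DIC software applies the same idea to 2-D image patches with normalized correlation, which is how displacements far below one pixel (i.e. micrometres) become measurable.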
There have been several seismotectonic scale model approaches in the recent
past (Table 1; Fig. 4). Viscoelastic seismotectonic scale models (e.g.
Corbi et al., 2013; Fig. 4a) were used to study rupture dynamics and
elastic coseismic deformation of the forearc wedge. In these models, a
gelatine wedge (analogue of the overriding plate) is underthrust by a
10
A similar set-up has been used by Rosenau et al. (2009) but with a granular,
elastoplastic wedge on top of a less compliant conveyor plate or belt (Fig. 4b). This set-up, in combination with digital image correlation based
optical strain analysis, allows us to simulate and monitor the seismotectonic
evolution of subduction forearcs and thus bridge the short- to long-term
processes of earthquakes and tectonic evolution, respectively. Because of
the opacity of the models, strain monitoring was restricted to side views
through glass walls or stereoscopic top views of the surface deformation in
2-D and 3-D experiments, respectively. Seismogenic zones at the base of the
wedge were defined through the use of a velocity-weakening material (i.e. rice). Compared to the viscoelastic wedges of Corbi et al. (2013), elastoplastic
wedges were stiffer. Consequently the rupture velocity was higher, while the
deformations were smaller. From a scaling point of view, this is more
appropriate; however, it poses limitations on the observability of analogue
earthquakes, i.e. a lower bound of about
Most recently, Caniven et al. (2015) have developed an experimental approach allowing us to simulate strike-slip fault earthquakes and seismic cycles in a brittle–ductile crust (Fig. 4d). One of the remarkable points of this study is the use of an analogue model consisting of an elastic block (polyurethane foam) floating on a viscoelastic material (silicone oil) and covered by a brittle plastic analogue material (silica-powder–graphite mix). This multi-layer approach takes into account, to first order, the crustal rheology and its mechanical behaviour. This is crucial for simulating brittle–ductile coupling and post-seismic deformation. Following the approach of Caniven et al. (2015), a set-up for 3-D subduction megathrust experiments was developed by Dominguez et al. (2015) (Fig. 4c). In both settings, the use of a multi-layered model allows us to reproduce complete seismic cycles, including for the first time realistic post-seismic deformation. Post-seismic deformation in the model is driven by relaxation due to the same mechanisms as in nature, namely afterslip along the fault and viscoelastic relaxation of the underlying substrate. The scaling of experimental earthquakes has been shown to be satisfactory in terms of stress drop and associated deformation. The evolution of model kinematics is monitored using a digital image correlation technique with sub-pixel accuracy and a resolution mimicking a very dense GPS network, which also allows us to emulate an InSAR-like fringe pattern (Fig. 4c and d). The rigorous implementation of numerical inversion and modelling tools in the approach provides key direct and indirect observables such as surface and volumetric strain and stress as well as the slip distribution and stress changes along the fault plane. Optical measurements are complemented by strain gauges to measure far-field strain evolution and earthquake-induced stress drops.
Acoustic sensor measurements are also being developed to study background seismicity and coseismic event signatures. The analogue models generate a broad variability of earthquake-like slip events constituting large data catalogues that can be used to study earthquake scaling and statistics as well as rupture dynamics. As the model generates tens of successive seismic cycles, these provide the potential to study the short-term and long-term evolution of strain and stress fields. Cross-validation with numerical models is ongoing.
Some general model limitations apply to all of the seismotectonic scale models mentioned here; they are related to fundamental space and resolution restrictions. For example, the minimum size of events simulated corresponds to moment magnitudes of about 7–8, which so far has prevented us from studying pre- and aftershock activity in detail. The size of the models is generally large enough to realistically simulate near-field deformation on a local to regional scale. Continental-scale far-field phenomena like mantle relaxation in subduction zones are currently beyond the limit of such studies. Finally, fluids, poroelasticity, and temperature effects are not simulated, and, consequently, the role of metamorphism and associated changes in rock properties in earthquake and seismic-cycle dynamics cannot be evaluated.
Scaling laboratory-scale observations to nature is a central issue in analogue earthquake modelling, especially for seismotectonic scale models. To be representative of a natural system, a small-scale model should share geometric, kinematic, and dynamic similarity with its prototype (Hubbert, 1937). This condition is termed similitude and requires that all lengths, time, and forces scale down from the prototype in a consistent way dictated by scaling laws. The latter are derived either from an analytic approach (e.g. Weijermars et al., 1993) or from dimensional analysis and the formulation of dimensionless numbers (Buckingham, 1914; Table 2).
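The similitude requirement can be turned into a small bookkeeping exercise: once length, density, and gravity ratios are fixed, the stress scale factor (which also applies to cohesion and elastic moduli) follows, and a chosen viscosity ratio then fixes the long-term time and velocity scales. The numerical values below are illustrative assumptions, not those of a specific set-up.

```python
def scale_factors(length_star, density_star, gravity_star=1.0,
                  viscosity_star=None):
    """Derived model/nature scale factors for the quasi-static regime.

    Dynamic similarity requires the stress scale factor to equal the
    product of the density, gravity, and length scale factors; cohesion
    and elastic moduli share this factor. If a viscosity scale factor is
    given, the long-term time and velocity scales follow from the viscous
    stress balance (t* = eta*/sigma*)."""
    stress_star = density_star * gravity_star * length_star
    factors = {"length": length_star, "density": density_star,
               "stress": stress_star}
    if viscosity_star is not None:
        factors["time"] = viscosity_star / stress_star
        factors["velocity"] = length_star / factors["time"]
    return factors

# illustrative (hypothetical) values: 1 cm in the model ~ 1 km in nature,
# model material about half as dense as crustal rock, normal gravity
factors = scale_factors(length_star=1e-5, density_star=0.5,
                        viscosity_star=1e-15)
```

Such a helper makes it easy to check, before building a model, whether the implied material properties (e.g. the required cohesion or viscosity) can actually be realized in the laboratory.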
The large range of velocities to be captured in an analogue earthquake model poses practical challenges: first, to conduct experiments in a reasonable time frame and, second, to observe (monitor) analogue earthquakes. If the total runtime of an experiment simulating thousands to millions of years in nature is to remain on a reasonable scale (hours), the episodically recurring earthquakes can be captured only with very-high-speed monitoring techniques (MHz); alternatively, the earthquakes must be slowed down to a speed that can be captured by 1–100 Hz monitoring, in which case a model run would take weeks.
Typical scales, scaling relations, and factors in seismotectonic scale models (see text for discussion).
From a scaling point of view, however, the range in velocities in the model can be significantly reduced through the use of non-linear timescaling, which considers different timescales for co- and inter-seismic deformation periods. Rosenau et al. (2009) introduced a “dyadic” timescale that recognizes two dynamically distinct regimes of the seismic cycle: the quasi-static inter-seismic regime, where inertial effects are negligible due to the slow deformation rates, and the dynamic coseismic regime, which is controlled by inertial effects. Consequently, two different temporal scales are applied. This way, the earthquake rupture can be virtually slowed down while the loading phase is sped up, keeping dynamic similarity in both stages. In a typical application, 0.1 s of model time may correspond to a quarter century of inter-seismic loading or to about 1 min of rupture time.
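The dyadic timescale can be sketched as two separate time scale factors: a quasi-static one set by viscous relaxation and a dynamic one set by Froude similarity. All scale-factor values below are illustrative assumptions, chosen merely to land near the 0.1 s ↔ quarter-century / ~1 min example mentioned above.

```python
import math

def dyadic_timescales(length_star, gravity_star=1.0,
                      density_star=0.5, viscosity_star=2e-16):
    """Two model/nature time scale factors for a 'dyadic' timescale.

    Inter-seismic (quasi-static) regime: time scales with viscous
    relaxation, t*_inter = eta* / (rho* g* L*).
    Coseismic (dynamic) regime: Froude similarity gives V* = sqrt(g* L*),
    hence t*_co = L* / V* = sqrt(L* / g*).
    All scale-factor values here are illustrative assumptions."""
    t_inter = viscosity_star / (density_star * gravity_star * length_star)
    t_co = math.sqrt(length_star / gravity_star)
    return t_inter, t_co

t_inter, t_co = dyadic_timescales(length_star=3e-6)

# 0.1 s of model time corresponds to different natural durations
# depending on the regime:
interseismic_years = 0.1 / t_inter / 3.15e7   # ~ a quarter century
coseismic_seconds = 0.1 / t_co                # ~ 1 min of rupture
```

The same 0.1 s of laboratory time thus represents decades of loading when interpreted quasi-statically but only about a minute of rupture when interpreted dynamically.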
The transition from non-inertial, quasi-static to inertial, dynamic deformation can be defined kinematically. Based on theoretical considerations using a spring-slider system, Roy and Marone (1996) suggest that the transition occurs at a critical velocity, which is a function of extrinsic and intrinsic frictional properties and mass. By contrast, Latour et al. (2013a), basing their findings on empirical results and theory, equate the transition from quasi-static to dynamic deformation with the transition from exponential growth to power law growth of the rupture length. They suggest that elastic and frictional properties control the transition. In practice a velocity threshold is defined that also depends on the temporal resolution to differentiate between the quasi-static and dynamic regime. With better spatial and temporal resolution in future, this issue will certainly be re-visited in detail.
The scaling for the short and long term, or inter- and coseismic phases is elaborated in Sect. 3.1 and 3.2, respectively. Dimensionless numbers and scaling relations are summarized in Tables 2 and 3.
In the non-inertial, quasi-static regime of the inter-seismic phase, scaling is identical to the common scaling of long-term processes to the lab. For long-term tectonic studies involving materials that deform in a brittle or viscous manner, two dimensionless numbers, the Smoluchowski and Ramberg numbers, are of interest, depending on the deformation regime (Weijermars and Schmeling, 1986; Brun, 2002; Pollard and Fletcher, 2005).
In the case of brittle deformation characterized by cohesion and a pressure-dependent frictional strength, the ratio between gravitational (or overburden) stress and the material strength, labelled the Smoluchowski number Sm, is commonly used to ensure dynamic similarity. It is defined as

Sm = ρgL / (C + μρgL), (2)

where ρ is density, g is gravitational acceleration, L is a characteristic length, C is cohesion, and μ is the friction coefficient.
According to Eq. (2), for a brittle sandbox model under normal gravity (i.e. g* = 1), the cohesion must scale in proportion to the density and length scale factors (C* = ρ*L*), which for typical laboratory scales implies nearly cohesionless model materials such as sand.
Scaling of parameters from laboratory (model) to nature
(prototype).
As cohesion and elastic moduli also share the same dimension (Pa), elastic moduli should scale with the same scaling factor (e.g. the shear modulus as G* = ρ*g*L*).
For a viscous model under normal gravity we find from Eq. (3) that the
stress
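The quasi-static scaling logic above can be sketched numerically. This is a minimal illustration only; all scale factors below are hypothetical example values, not those of any specific study:

```python
# Illustrative sketch of long-term scaling (hypothetical example values).
# Stress scales with density, gravity, and length (Smoluchowski similarity):
#   sigma* = rho* x g* x L*
# For a viscous material, time then follows from the viscosity scale:
#   t* = eta* / sigma*

rho_star = 0.5      # model/nature density ratio (assumed)
g_star = 1.0        # normal-gravity experiments
L_star = 1e-5       # 1 cm in the lab ~ 1 km in nature (assumed)

sigma_star = rho_star * g_star * L_star     # stress scale factor

eta_star = 1e-15    # model/nature viscosity ratio (assumed)
t_star = eta_star / sigma_star              # long-term time scale factor

print(sigma_star)   # lab stresses many orders of magnitude below nature
print(t_star)       # 1 s in the lab corresponds to centuries in nature
```

With these assumed factors, 1 s of laboratory deformation maps onto roughly 160 years of natural inter-seismic loading, illustrating why long-term processes become observable in the lab.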
For the coseismic stage inertia controls the dynamics, so that the Froude
number can be used to reach dynamic similarity and find an appropriate
short-term timescaling (Rosenau et al., 2009):
A theoretical conflict arises, however, when viscous forces in the dynamic regime are considered: these should be scaled using the Reynolds number. For various reasons the similarity requirements posed by the Froude and Reynolds numbers typically cannot be satisfied simultaneously. In our application, both the Reynolds and Froude numbers can be preserved simultaneously only if we use a viscous material that hardens dramatically (by 3 orders of magnitude or so) coseismically or assume that the mantle weakens accordingly. In practice, assuming that viscous flow plays a limited role in the coseismic stage, which is dominated by elastic deformation, non-conservation of the Reynolds number through the coseismic phase seems acceptable.
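The conflict can be made concrete in a short sketch; the scale factors used are assumed example values, not those of any specific model:

```python
import math

# Hedged sketch: Froude similarity sets the coseismic (dynamic) timescale.
# Fr = v / sqrt(g L); preserving Fr under normal gravity (g* = 1) gives
#   v* = sqrt(L*)  and  t* = L* / v* = sqrt(L*).
L_star = 1e-5                    # assumed length scale factor (model/nature)
v_star = math.sqrt(L_star)       # velocity scale from Froude similarity
t_star_dyn = L_star / v_star     # short-term (coseismic) time scale

# Reynolds similarity (Re = rho v L / eta) would instead demand
#   eta*_Re = rho* x v* x L*,
# which generally differs from the viscosity scale fixed by the
# long-term (quasi-static) scaling -- hence both dimensionless numbers
# cannot normally be preserved at once.
rho_star = 0.5                          # assumed density ratio
eta_star_Re = rho_star * v_star * L_star
print(t_star_dyn, eta_star_Re)
```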
Apart from the dimensionless numbers introduced above, all model parameters
without a dimension should be preserved, e.g. Poisson's ratio
As an example to illustrate this massive scaling, the energy of an analogue
earthquake is about the energy needed to illuminate a 10 W electric light
bulb for the duration of an analogue event, which is about as long as the blink of an
eye (ca. 100 ms). In contrast the energy of a
As Eq. (10) is not a product of dimensional parameters but a sum of two
dimensionless terms one of which includes a logarithm of a dimensional
parameter, the standard method of applying scaling rules for the dimensions
involved fails. Consequently, a scaling factor for moment magnitude does not
exist. We can, however, scale up analogue earthquake moment magnitude
non-linearly by applying the scale factor of seismic moment
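Since no linear scale factor for magnitude exists, analogue seismic moments are scaled up first and then converted. A minimal sketch, assuming the standard Hanks and Kanamori (1979) definition of moment magnitude and hypothetical example values for the analogue moment and moment scale factor:

```python
import math

# Hedged sketch: moment magnitude cannot be scaled linearly, but the
# seismic moment M0 (in N m) can. Assuming the Hanks & Kanamori (1979)
# definition Mw = (2/3) * (log10(M0) - 9.1):
def moment_magnitude(M0):
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

M0_model = 1e-3            # analogue seismic moment in N m (assumed value)
M0_scale = 1e22            # assumed nature/model moment scale factor
M0_nature = M0_model * M0_scale

print(moment_magnitude(M0_nature))
```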
Mechanical properties of selected rock analogue materials under typical laboratory conditions (see text for discussion).
[1] Rosakis et al. (2007). [2] Caniven et al. (2015). [3] this work. [4] Kavanagh et al. (2013). [6] Rosenau et al. (2009). [7] Corbi et al. (2013). [8] Rudolf et al. (2016a). [9] Boutelier et al. (2008). [10] Di Giuseppe et al. (2009). [11] Boutelier et al. (2016). [12] Di Giuseppe et al. (2015). [13] Cooke and van der Elst (2012).
Elastic moduli of selected rock analogue materials as measured in an axial tester. Data and methodological details are published as open-access material in Rosenau et al. (2016).
This section reviews the history of analogue rock materials used in laboratory modelling of earthquakes and seismic cycles. They fall into three groups according to the dominant rheology: elastic, (frictional–)plastic, and viscoelastic. They are described in Sects. 4.1, 4.2 and 4.3. Moreover, the model materials are divided into soft and stiff materials in terms of everyday experience. Stiff materials (e.g. Plexiglas, wood) are used mainly in the fault block model category to model earthquakes under near-natural pressures (MPa) in rock mechanics deformation rigs, while for seismotectonic scale models the materials are generally soft or weak. This is because scaling laws dictate that the models deform in response to forces many orders of magnitude smaller than in nature. In particular, forces driving tectonic faults are in the order of teranewtons per metre of fault length, while in the lab a few newtons suffice to deform the material. The latter is typically realized by using bulk solids (e.g. loose sand), foam rubber, or silicone oil. Most of the materials (e.g. gelatine) exhibit two or even all three rheologies under different conditions. Key material properties of the most commonly used rock analogues are summarized in Table 4.
Rocks at low temperature, pressures, and strains behave elastically, that is,
they deform when a force is applied and return to their original shape when
the force is released. In linear elastic solids, as in springs, elastic
strain
Crack growth is intrinsically related to the elastic moduli. In the simplest
case of a “penny-shaped” crack in which the slip is driven by a uniform
stress drop
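The penny-shaped crack case can be illustrated with the standard circular-crack relations (Eshelby, 1957); the expressions elided above may differ in convention, so this is a hedged sketch with assumed example values:

```python
import math

# Standard penny-shaped (circular) crack relations: for a crack of
# radius a with uniform stress drop d_tau in a medium of shear
# modulus mu,
#   mean slip:      u_bar = (16 / (7 pi)) * (d_tau / mu) * a
#   seismic moment: M0    = mu * u_bar * pi * a**2 = (16/7) * d_tau * a**3
def penny_crack(d_tau, mu, a):
    u_bar = 16.0 / (7.0 * math.pi) * (d_tau / mu) * a
    M0 = mu * u_bar * math.pi * a**2
    return u_bar, M0

# Example with assumed, nature-like values: 3 MPa stress drop,
# 30 GPa shear modulus, 5 km crack radius.
u_bar, M0 = penny_crack(3e6, 30e9, 5e3)
print(u_bar, M0)   # mean slip in m, seismic moment in N m
```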
Elastic deformation in the solid surrounding the crack is usually described by elastic dislocation theory (e.g. Pollard and Segall, 1987). Since the Earth's surface is mechanically a free surface, i.e. no shear and normal stresses are transmitted across it (to the atmosphere), so-called “half-space” models are applied. Additionally, because the characteristic lengths of static and dynamic deformations (e.g. stress shadows, seismic waves) associated with earthquakes are usually regional in scale, both small-scale topography and large-scale Earth curvature are often neglected and the surface is modelled as a plane. Analytical solutions to the problem of shear-crack-induced surface and internal deformations in a homogeneous elastic half-space are given by Okada (1985, 1992) and applied in numerous studies (e.g. King et al., 1994; Toda and Stein, 2002; Lin and Stein, 2004). A convenient MATLAB®-based tool (“Coulomb”) based on these solutions has been developed by the USGS (Toda et al., 2011).
Predictions from analytical elastic models:
Both crack growth and elastic dislocation predictions are superb
benchmarks for seismotectonic scale models. Simplified versions of the
surface deformation induced by vertical strike-slip dislocations in elastic
half-space exist both for the case of inter-seismic and coseismic stages of
the seismic cycle: inter-seismic surface velocities
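The simplified strike-slip solutions referred to above can be sketched as follows. This assumes the classic 2-D screw-dislocation forms (Savage and Burford, 1973), which may differ in detail from the equations elided here:

```python
import math

# Hedged sketch of the classic 2-D screw-dislocation solutions for a
# vertical strike-slip fault in an elastic half-space (Savage & Burford,
# 1973). x is distance from the fault trace, D the locking depth.
def interseismic_velocity(x, V, D):
    """Fault-parallel surface velocity for a fault locked to depth D,
    loaded at far-field rate V."""
    return (V / math.pi) * math.atan(x / D)

def coseismic_displacement(x, s, D):
    """Surface displacement for uniform coseismic slip s from the
    surface down to depth D (x > 0)."""
    return (s / math.pi) * math.atan(D / x)

# Example with assumed values: V = 40 mm/yr, D = 15 km, s = 2 m.
print(interseismic_velocity(100e3, 0.04, 15e3))  # approaches V/2 far away
print(coseismic_displacement(1.0, 2.0, 15e3))    # approaches s/2 near fault
```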
While in spring-slider set-ups mechanical springs or the effective stiffness of the testing machine control the elasticity of the system, a variety of analogue rock materials have been used that can be classified as linear, isotropic, elastic solids up to a few percent strain, similar to the Earth on relevant scales. Elastic rock analogues are classified as relatively stiff (e.g. Plexiglas) and soft (gelatine, rubber, foam). Stiff materials are used exclusively in spring-slider and block models, while soft materials also find application in seismotectonic scale models, for which scaling rules dictate Young's moduli in the order of 1–1000 kPa.
Examples of stiff elastic materials are “homalite-100” and polycarbonate
as used in the studies reviewed by Rosakis et al. (2007). These materials show enhanced
photoelasticity compared to other transparent stiff materials like PMMA
(polymethyl methacrylate). They are characterized by a Young's modulus value in the
order of a few gigapascal, a Poisson's ratio value of ca. 0.35–0.4, and a shear wave speed of
ca. 1000 m s
Examples of soft elastic materials include gelatine. Gelatine is the common
name for animal and plant viscoelastic biopolymers, which have been adopted
for analogue modelling (see Di Giuseppe et al., 2009; Kavanagh et al., 2008, 2013;
van Otterloo and Cruden, 2016, for a complete rheological characterization
of a wide range of gelatines). The shear modulus of gelatine is controlled
by its concentration. Low-concentration (< 10 %) gelatine has a
Young's modulus value of a few kilopascal, while high-concentration (> 10 %,
“ballistic”) gelatine has a Young's modulus value of a few hundred kilopascal (Fig. 6).
Because it consists mainly of water, gelatine has a density of around 1000 kg m
Foam rubbers, i.e. foam polymers (e.g. PU – polyurethane; PVC – polyvinyl
chloride) that come in a variety of densities and elasticities, are
often-used soft elastic solids. Those used in analogue modelling are usually
light (e.g. density ca. 20–40 kg m
Rock friction data:
Rubber (e.g. EPDM – ethylene propylene diene monomer) has been used both as a
solid (Schallamach, 1971; Hamilton and McCloskey, 1997, 1998) as well as in
the form of pellets (Rosenau et al., 2009, 2010; Rosenau and Oncken, 2009).
It comes in a wide range of densities and elasticities. Rubber is
characterized by a Young's modulus value of several megapascal (Fig. 6) and a Poisson's
ratio value of 0.5. The bulk of EPDM pellets shows a much reduced Young's modulus in
the order of 0.1 MPa. They can be mixed with more rigid particles (e.g.
sugar grains) to reach the desired elasticity (Rosenau et al., 2009). The
density of rubber ranges between 900 and > 2000 kg m
Once the forces acting on a rock sample exceed a certain threshold or yield strength, brittle failure (of intact rock) or frictional sliding (of faulted rock) will occur at low confining pressure and temperature, while ductile flow may occur at higher temperature and pressure (Sect. 4.3.1). Both deformation mechanisms cause permanent (irreversible) deformation.
Brittle rock deformation as it occurs at shallow to intermediate crustal
levels is characterized by a cohesion- and pressure-dependent frictional
strength (Mohr–Coulomb type behaviour). The latter is, in its simplest case,
described by a linear relation between applied normal load
In the context of analogue earthquakes, slip stability on pre-existing faults
is key. Whether slip is stable or unstable depends on three parameters: the
stiffness of the system, the dynamic weakening of the frictional interface
(either proportional to slip or velocity), and the applied normal load. Only
if the loading system is compliant (soft) enough that the frictional
strength falls faster than the elastic stress is released will the force
imbalance cause a slip instability to occur. This is described by the condition for slip
instability following Eq. (18):
Frictional instability is described nowadays either in terms of static and dynamic friction or in terms of the rate- and state-dependent friction (RSF) theory (Dieterich, 1972b, 1978b; Ruina, 1983; summarized in, e.g., Scholz, 1998, 2002). RSF constitutive laws consider time- and slip-dependent re-strengthening and variations in friction with slip rate and can reproduce the entire suite of slip phenomena.
RSF theory states that a change in slip velocity from
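The steady-state consequence of RSF theory and the resulting stability condition can be sketched in a few lines. The parameter values are generic assumptions for illustration, not measured values from any material in Table 4:

```python
import math

# Hedged sketch of rate- and state-dependent friction (RSF; Dieterich,
# Ruina). Steady-state friction after a velocity step from V0 to V:
#   mu_ss = mu0 + (a - b) * ln(V / V0)
# (a - b) < 0: velocity weakening (potentially unstable);
# (a - b) > 0: velocity strengthening (stable creep).
def steady_state_friction(V, mu0=0.6, a=0.010, b=0.015, V0=1e-6):
    # parameter values here are generic assumptions for illustration
    return mu0 + (a - b) * math.log(V / V0)

# Critical stiffness in the spring-slider approximation:
#   k_c = (b - a) * sigma_n / Dc ; slip is unstable if k < k_c.
def critical_stiffness(sigma_n, a=0.010, b=0.015, Dc=1e-5):
    return (b - a) * sigma_n / Dc

print(steady_state_friction(1e-5))   # weaker at 10x higher slip rate
print(critical_stiffness(1e6))       # in Pa/m, for 1 MPa normal stress
```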
In slide–hold–slide tests, deformation is halted for various periods
(allowing the sample to heal) and then restarted. The static friction
during reactivation, i.e. the peak friction to overcome when restarting, is
measured and scales in the presence of healing with the length of the
preceding hold period
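The logarithmic healing observed in slide–hold–slide tests can be sketched as follows; `mu_s0`, `beta`, and `t_ref` are assumed illustrative values, with healing rates of order 0.001–0.01 per decade typical for rocks:

```python
import math

# Hedged sketch of frictional healing in slide-hold-slide tests:
# the peak (static) friction on reactivation commonly grows with the
# logarithm of the hold time,
#   mu_s(t_hold) = mu_s0 + beta * log10(t_hold / t_ref).
def healed_static_friction(t_hold, mu_s0=0.6, beta=0.01, t_ref=1.0):
    # mu_s0, beta, t_ref are assumed illustrative values
    return mu_s0 + beta * math.log10(t_hold / t_ref)

for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, healed_static_friction(t))   # +0.01 per decade of hold time
```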
Rate and state effects on friction of selected analogue rock
materials:
This allows us to define three stability regimes: the stable regime, in which
A large variety of materials (both solids and bulk materials) show the
characteristic rate- and state-dependent frictional response to velocity
steps and variable hold times in the respective tests (e.g. Dieterich and
Kilgore, 1994; Schulze, 2003). Most granular material like quartz sand or
glass beads has similar friction coefficients to rocks (
In contrast to sand, which shows no measurable velocity dependence of friction (Fig. 9a), many granular materials of organic origin show rate- and state-dependent friction: rice, salt, starch flour, and polenta for example show velocity weakening, while sugar shows velocity strengthening (Fig. 9a). Schulze (2003) demonstrated velocity-weakening behaviour for limestone powder and wheat flour and velocity-strengthening behaviour for PE (polyethylene) powder. Healing (i.e. strengthening in static contact) of frictional interfaces is also minor in sand but evident for several materials including wheat flour, cocoa (Fig. 9b), PE powder, and limestone powder (e.g. Schulze, 2003).
Systematics of stick–slip in selected analogue rock materials. Results from ring-shear tests using a Schulze ring-shear tester (Schulze, 1994; Fig. 2b). Data and methodological details are published as open-access material in Rosenau et al. (2016).
Gelatine and foam rubber both show stick–slip behaviour along precut surfaces
controlled by rate- and state-dependent friction. Foam on foam contacts show
unrealistically high friction coefficients (
The slip-weakening distance
The
Rocks at higher temperature and pressure deform in a ductile manner, that is elasto(frictional)plasticity is replaced by viscoelasticity. Depending on the timescale of the applied forces and strain rate, the deformation is dominantly elastic (on short timescales, e.g. coseismic) or viscous (on long timescales, e.g. inter-seismic).
On long timescales viscoelastic materials show a strain-rate-dependent
strength. In this case stress in the deforming material is a function of
strain rate, i.e.
Rheological models represented as springs and dashpots under
loading and unloading with their respective strain–time curves:
On short timescales viscoelastic materials show a delayed, time-dependent, response when stress is applied and/or removed. The classic example is that of a sample where deformation is recoverable but strain accumulation and release are delayed due to the coexistence of both elastic and viscous behaviour. The rheological behaviour of viscoelastic material is therefore commonly described using the analogy to physical models of a spring (responsible for the elastic behaviour) and a dashpot (responsible for the viscous behaviour).
The simplest rheological models describing viscoelastic behaviour are obtained from a combination of spring and dashpot elements in parallel (Kelvin–Voigt model) or in serial configuration (Maxwell model). A way to distinguish between the two rheological models is by performing a series of low-stress creep-recovery tests with a rheometer. The test consists of applying a constant shear stress to the sample for a given time interval. The instrument records the strain in the loading phase and in the following recovery phase. Different shapes of the strain–time curve are then observable depending on the rheological model of the sample (Fig. 10).
The deformation of a sample that follows the Maxwell model shows an
instantaneous elastic response followed by linearly viscous flow. When the
load is removed the elastic component is recovered instantaneously, while a
fraction of the deformation linked to the dashpot is not recoverable (Fig. 10a). The constitutive equation for the Maxwell model can be expressed as
follows:
The deformation of a sample that follows the Kelvin–Voigt model is slowed
down by the dashpot component both in the loading phase and when the load is
removed (Fig. 10b). Such a slow-down effect is highlighted by the curved path
of strain as a function of time in both the loading and in the recovery
phases. Sample deformation is fully recoverable when the load is removed.
The constitutive equation for the Kelvin–Voigt model is expressed as follows:
A Maxwell body possesses a relaxation time
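The two creep responses described above follow from the standard constitutive solutions under a constant stress applied at t = 0. A minimal sketch with assumed example values (shear modulus G, viscosity eta, relaxation time tau = eta/G):

```python
import math

# Hedged sketch of the creep responses of the two end-member models
# under a constant stress sigma0 applied at t = 0:
def maxwell_creep(t, sigma0, G, eta):
    # instantaneous elastic strain plus linear viscous flow
    return sigma0 / G + sigma0 * t / eta

def kelvin_voigt_creep(t, sigma0, G, eta):
    # delayed, fully recoverable strain approaching sigma0 / G
    tau = eta / G
    return (sigma0 / G) * (1.0 - math.exp(-t / tau))

# Example with assumed values: sigma0 = 100 Pa, G = 1 kPa, eta = 1e4 Pa s.
sigma0, G, eta = 100.0, 1e3, 1e4
print(maxwell_creep(100.0, sigma0, G, eta))       # keeps growing linearly
print(kelvin_voigt_creep(100.0, sigma0, G, eta))  # saturates at sigma0/G
```

The contrast mirrors Fig. 10: the Maxwell strain grows without bound while the viscous part flows, whereas the Kelvin–Voigt strain asymptotically approaches the purely elastic value.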
A more elaborate viscoelastic rheology is the Burgers model, which shows a mixture of the responses of the Kelvin–Voigt and Maxwell models (Fig. 10c). In particular, it features the instantaneous elastic response of the Maxwell model as well as the transient creep of the Kelvin–Voigt model. It has recently found application in earthquake studies because it allows us to fit time series of post-seismic deformation with a single set of parameters. Owing to the increasing number of geodetic studies at convergent margins, it has been recognized that the Earth's mantle response after large earthquakes is characterized by two timescales: a shorter one for the transient viscosity and a longer one for the steady-state viscosity (e.g. Wang et al., 2012, and references therein).
Most viscous materials used in analogue modelling of seismotectonic processes (silicones, honey, etc.) can be described by the Maxwell model at least under certain conditions. The proper rheological model as well as the constitutive parameters like viscosity, elasticity, and the Maxwell relaxation time are inferred from a series of oscillatory and rotational tests in a rheometer (e.g. Rudolf et al., 2016a; Di Giuseppe et al., 2009; ten Grotenhuis et al., 2002; Boutelier et al., 2008, 2016; van Otterloo and Cruden, 2016).
Polydimethylsiloxane (PDMS), mostly referred to as silicone or silicone oil,
is one of the most common viscoelastic materials used in analogue modelling.
The rheology of PDMS can be described by a Maxwell model including linear
Newtonian viscosity of about 10
Hydrogels or suspensions, made of aqueous solutions of polymers used for
the thickening and stabilization of viscous fluids in the cosmetic and food
industry, have found widespread applications in analogue modelling. Gelatine has
been used in analogue earthquake models, both in fault block models (Corbi
et al., 2011) and seismotectonic scale models (Corbi et al., 2013). Gelatine
rheology varies as a function of composition, concentration, temperature, and
ageing. At concentrations > 3 % the viscoelastic behaviour of
gelatine can be described by a Maxwell model (
Hydrogels made of Natrosol, a cellulose polymer, similarly show a Burgers rheology and Maxwell relaxation times in the order of seconds (Boutelier et al., 2016). Another polymeric viscoelastic material that has found application in analogue modelling of earthquakes, by means of deformable slider-spring models, is Carbopol® (Reber et al., 2015). As with gelatine, Carbopol® rheology depends on its concentration and additionally on pH. It is strongly shear thinning and has a yield strength of up to a few hundred pascal (Di Giuseppe et al., 2015). It is consequently modelled rheologically as a Herschel–Bulkley fluid. More generally, it can be described as a brittle–ductile material. The Maxwell relaxation time of Carbopol® is in the order of 0.1 s.
While hydrogels have complex rheologies with high potential in analogue modelling, care must be taken during preparation and experimenting. This is because they are generally very sensitive to temperature and concentration and thus require careful handling, strict protocols, and rigorous characterization of the individual rheology. Also, storage time is usually limited due to pronounced sample ageing.
Monitoring techniques used in analogue earthquake models (see text for discussion).
Wet kaolin has recently been rediscovered as a suitable analogue material with
some potential in modelling short- and long-term deformation. It shows the
more complex Burgers rheology controlled by the water content (Cooke and van
der Elst, 2012). With viscosities in the range of 10
Advances in analogue rock material characterization have been paralleled by the development of new monitoring techniques allowing high-resolution quantitative measurements of the deformation of analogue models. Monitoring techniques as applied in analogue earthquake models can be grouped into local (at a point in space), regional (mapping an area), and global (integrating over an area or volume) techniques. They are described in Sect. 5.1, 5.2, and 5.3. They differ in their temporal and spatial resolution as well as the coverage. They may further be differentiated into direct and indirect observation methods. The main monitoring techniques used in analogue earthquake modelling are summarized in Table 5.
Local monitoring techniques (Sect. 5.1) provide time series of point measurements and include quasi-seismological and quasi-geodetic techniques. They use accelerometers, acoustic sensors, strain gauges, or laser interferometry, which provide temporally high-resolution time series of displacement at a single location. Most of these techniques can be considered indirect as they do not observe the process directly; rather, inversion techniques are required to describe the analogue earthquake source. In contrast, regional techniques (Sect. 5.2) map surface deformation and are therefore also called “full-field” techniques. They include photoelastic and digital image correlation techniques and allow high-coverage, full-field stress and strain monitoring of the model surface and fault at high spatial but generally lower temporal resolution compared to local and global monitoring techniques. Global methods (Sect. 5.3) are those providing a kinematic or dynamic measurement of an average value integrated across a surface area or volume, e.g. the motion of one side of a sample or the loading stress. Those measurements are necessarily indirect but can usually be inverted easily, using geometric tools, to a direct measure of interest (e.g. fault slip, stress drop).
Symbols and their meaning as used in this article.
Johnson et al. (1973), Wu et al. (1972), and Hamilton and McCloskey (1998)
used an array of strain gauges to monitor model motion at up to a few hundred hertz. Brune et al. (1990), in their foam block models, used
digital velocity transducers and accelerometers as well as microphones
embedded in the foam block and along its surface. Absolute stress and stress
drop have been measured using an in-line hydraulic pressure gauge (Brune et
al., 1990). All embedded sensors were designed with a low mass and high
dynamic range to allow measuring acceleration up to hundreds of
Optical techniques exploiting brightness changes between successive images of a target have also been developed since the beginning of analogue earthquake modelling. Deformation along the analogue fault in foam was detected by Brune et al. (1973) using a photocell focussing on a black-and-white target along the analogue fault line. Hartzell and Archuleta (1979) developed a new optical monitoring technique using a light-sensitive field effect transistor and an analogue-to-digital recorder to measure particle motions in the near and far field of an analogue fault embedded in a foam block. Brune and Anooshehpoor (1998, 1999) experimented with a telescopic, two-axis position-sensing detector that was focused on a small light-emitting diode (LED) embedded in the foam. They report a resolution of 0.1 mm.
Nowadays, digital displacement and force sensors as well as accelerometers in the form of ultralight microelectromechanical systems (MEMSs) are available. Sampling rates are typically kilohertz to megahertz. For example, Dieterich (1978a) and Ohnaka and Kuwahara (1990) used semiconductor strain gauges to monitor analogue earthquakes in granite in a block model set-up. Pressure gauges have been used by Nieuwland et al. (2000) to measure stress in situ in analogue models, potentially useful for analogue earthquake models in the future. Arrays of accelerometers have been used, e.g. by Day et al. (2008), to infer rupture dynamics in foam block models.
Acoustic sensors have been widely applied to study analogue earthquakes:
Johnson et al. (1973), Wu et al. (1972), and Okubo and Dieterich (1984) used
piezoelectric transducers to estimate slip rate and rupture speed in
stick–slip experiments on precut rock and rock analogue materials. Lockner
et al. (1991), Zang et al. (2000), and Thompson et al. (2005, 2006, 2009)
used acoustic emissions to monitor localization, precursory phenomena, and
stick–slip ruptures in rock specimens. Varamashvili et al. (2008) and Zigone
et al. (2011) used acoustic emission to characterize the stick–slip process
in a spring-slider and salt-slider set-up (Fig. 2d), respectively. Zang et al. (1998), Kwiatek et al. (2014), and Stierle et al. (2016) used acoustic
emissions to further constrain the source of laboratory earthquakes in
loaded rock specimens by means of the seismic moment tensor and
Most recently, laser velocimetry based on interferometric techniques has
been used to obtain displacement time series at selected points on the surface
of the specimen (e.g. Lykotrafitis et al., 2006; Rubino et al., 2015;
Caniven et al., 2015). The instruments record a specific component of the
velocity field at up to 10 m s
In many earthquake studies using fault block models (Rubio and Galeano, 1994; Rosakis et al., 1999; Xia et al., 2004; Lu et al., 2009, 2010; Mello et al., 2010; Schubnel et al., 2011; Nielsen et al., 2010; De Joussineau et al., 2001; Bouissou et al., 1998), photoelasticity combined with high-speed photography is used to monitor the transient deformation and stresses associated with earthquake-like slip events (Fig. 3d and f). Based on the photoelastic effect, Daniels and Hayman (2008) visualized the dynamics of force chains in sheared granular media undergoing stick–slip (Fig. 2g).
Photoelasticity provides not only a visualization of small-scale deformation but a direct and quantitative measurement of stress in suitable materials (e.g. Jessop and Harris, 1960). Photoelasticity is physically based on the fact that, when polarized light passes through a stressed birefringent material, the light separates into two wave fronts travelling at different velocities. Each wave front is oriented parallel to a direction of principal stress in the material, i.e. perpendicular to each other. Different values of the refraction index are assigned to two components that are out of phase when leaving the birefringent material. This difference in optical path can be measured by interferometry and visualized using a second polarizer. The resulting fringe patterns correspond to isocontours of maximum shear stress. This assembly forms the base of the so-called “polariscope”.
In analogue earthquake studies, light sources like laser beams or floodlights are used to illuminate the transparent model made, e.g., of homalite, polycarbonate or gelatine. A pair of linear or circular polarizers, one in front and one behind the model, forms the basis of the experimental polariscope assembly. Usually, the light path and therefore the viewing perspective is parallel to the fault plane and normal to the rupture direction.
Photoelasticity is able to monitor the distribution of maximum shear stress in the model at full coverage and high resolution. However, the absolute values of the principal stress components remain unknown. The temporal resolution is only limited by the sampling rate of the employed digital cameras, which is generally flexible and can be adapted to the expected rupture velocity. In particular, while rupture monitoring in rigid materials requires high-speed cameras (kHz imaging), commercial video cameras with 25 Hz imaging are sufficient in soft gelatine model approaches as the rupture velocity is drastically reduced. Photoelasticity works best in quasi two-dimensional models providing plane strain deformation fields. A thorough review of dynamic photoelastic applications in fault block models is given by Rosakis et al. (2007).
Image correlation techniques aim at retrieving the shape and 2-D or 3-D deformation of a surface or volume from digital images (e.g. Sutton et al., 2009). In the framework of experimental deformation monitoring, successive optical images are usually analysed to quantify incremental displacements, from which strain rates can be calculated (e.g. Adam et al., 2005, 2013). A variety of digital image correlation algorithms exists. They generally make use of successive monochromatic digital images in which a pattern of a few pixels can be tracked at sub-pixel accuracy. Given modern image resolutions of up to 30 MPx at 16 bit monochromatic colour depth, features of millimetre size can be tracked on the micrometre displacement scale. In combination with high-speed cameras, this technique provides dynamic deformation monitoring options of unprecedented accuracy and precision. Commercial and non-commercial software packages are available, including LaVision's Strainmaster®, Correlated Solutions Inc.'s VIC™, the open-source software MicMac (Galland et al., 2016), the MATLAB®-based open toolboxes MatPIV (Sveen, 2004), PIVlab (Thielicke and Stamhuis, 2014), and TecPIV (Boutelier, 2016), and COSI-Corr (Leprince et al., 2007).
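The core idea behind these packages can be sketched in a few lines. This minimal example tracks a speckle template between two frames by maximizing the normalized cross-correlation over integer pixel shifts; real DIC software adds sub-pixel interpolation and window deformation, so this is an illustration only:

```python
import numpy as np

# Minimal sketch of the image-correlation idea: track a small template
# from one monochrome frame to the next by maximizing the normalized
# cross-correlation (NCC) over integer pixel shifts.
def track_template(img0, img1, y, x, half=4, search=6):
    """Return the (dy, dx) shift of the patch centred at (y, x)."""
    t = img0[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-12)
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            p = img1[y + dy - half:y + dy + half + 1,
                     x + dx - half:x + dx + half + 1].astype(float)
            p = (p - p.mean()) / (p.std() + 1e-12)
            ncc = (t * p).mean()
            if ncc > best:
                best, best_shift = ncc, (dy, dx)
    return best_shift

# Synthetic check: a random speckle pattern shifted by (2, -1) pixels.
rng = np.random.default_rng(0)
img0 = rng.random((64, 64))
img1 = np.roll(np.roll(img0, 2, axis=0), -1, axis=1)
print(track_template(img0, img1, 32, 32))   # recovers the imposed shift
```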
The latest developments in strain monitoring using image correlation techniques include a coupling of strain monitoring of the experiment with analytical or numerical elastic dislocation modelling (EDM). For example Rosenau et al. (2009, 2010) used elastic EDM to differentiate between elastic and plastic deformation inherent in their elastoplastic models. Rubino et al. (2015) used EDM to invert strain for stress, applying a linear constitutive behaviour (Hild and Roux, 2006). Caniven et al. (2015) used EDM to invert surface deformation for fault slip distribution and depth of locking. The rigorous use of inversion and visualization techniques along with proper scaling in the models of Caniven et al. (2015) and Dominguez et al. (2015) allows for direct comparisons between model and natural observations that include surface deformation measured by geodesy (e.g. pseudo-GPS and InSAR displacement fields; Fig. 4c and d) and fault slip distribution and stress drop deduced from seismological and geodetic records. Their approach allows them to monitor a few microns of horizontal surface displacement at a spatial resolution of a few millimetres. Considering model scaling characteristics, the acquired measurements can be directly compared to a 1 km spacing of a dense GPS network and allow emulating an InSAR-like fringe pattern.
Brune et al. (1990) used a pen attached to one foam block of their fault block models moving over a strip chart recorder to derive the displacement time function of one side of the fault. Similar data were obtained by Rosenau et al. (2009) using a high-resolution electronic odometer, which allows monitoring motion on the micrometre scale at kHz sampling rates, to derive the displacement time function of the rigid basal plate simulating subduction.
Force sensors at sampling rates up to kHz are routinely used to monitor the forces acting on one side or across an area of a sample in all kinds of deformation apparatuses (e.g. in a Schulze ring-shear tester; data in Fig. 10), including spring-slider (Fig. 2) and fault block set-ups (e.g. Corbi et al., 2011). Several studies used force sensors to measure the force exerted by a backwall in sandbox experiments of strike slip (Tchalenko, 1970) and thrusting (e.g. Cruz et al., 2010; Souloumiac et al., 2012; Herbert et al., 2015). While these show the long-term stress drop associated with fault formation in a slip-weakening material (e.g. sand), the technique also has the potential to detect stress drops associated with individual slip instabilities (analogue earthquakes) (e.g. Rosenau et al., 2016; Rudolf et al., 2016b).
Earthquake statistics deals with the probabilistic treatment of the size and
frequency of earthquakes by means of frequency–size distributions,
probability distribution functions (pdfs),
The iconic Gutenberg–Richter distribution is by far the most prominent
result of earthquake statistics. It is a cumulative frequency plot of
earthquakes occurring generally in a large area over a long period. It shows
a negative log-linear correlation with a slope (“
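The slope of the Gutenberg–Richter distribution can be estimated from an (analogue or natural) earthquake catalogue. A hedged sketch using the standard maximum-likelihood estimator of Aki (1965), tested on a synthetic catalogue with an assumed true b value of 1:

```python
import math
import random

# Maximum-likelihood b-value estimate (Aki, 1965):
#   b = log10(e) / (mean(M) - Mc),
# where Mc is the magnitude of completeness of the catalogue.
def b_value(magnitudes, Mc):
    above = [m for m in magnitudes if m >= Mc]
    return math.log10(math.e) / (sum(above) / len(above) - Mc)

# Synthetic catalogue drawn from a Gutenberg-Richter distribution with
# b = 1 (magnitudes above Mc are exponential with rate b * ln(10)).
random.seed(1)
Mc, b_true = 2.0, 1.0
cat = [Mc + random.expovariate(b_true * math.log(10)) for _ in range(20000)]
print(b_value(cat, Mc))   # close to the imposed b = 1
```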
The original Burridge–Knopoff model (Burridge and Knopoff, 1967) was able to mimic the self-similarity of earthquakes. Accordingly, two types of events can be distinguished: local events that smooth existing stress heterogeneities and that obey a Gutenberg–Richter distribution and system-sized events that recur more regularly. This has been reproduced by the experiments of King (1991, 1994), who showed that large events tend to roughen the stress distribution while small events smooth them. Moreover, he found that large events are dissimilar (i.e. not characteristic) and that rupture nucleation is not where peak slip accumulates. The frequency–size distributions found by King (1991, 1994) were Gutenberg–Richter-like except for the system-sized events which recur approximately time-predictably. Similarly, Hamilton and McCloskey (1997), when investigating the frequency–size distribution in a simple fault block model, found a power law behaviour up to analogue earthquakes approximately the size of the smallest dimension of the set-up. Larger events occurred more often than predicted by extrapolation of the power law. They concluded that a break in the slope of the Gutenberg–Richter distribution is due to the change in rupture mechanism from truly two-dimensional to quasi one-dimensional once the earthquake ruptured the whole seismogenic width.
Figure 12. Periodic vs. random rupture behaviour as exemplified by seismotectonic scale models.
A simple measure of periodicity is the coefficient of variation (CV) of recurrence intervals, defined as the standard deviation divided by the mean recurrence interval. Recurrent events with a CV < 50 % can be considered quasi-periodic as their recurrence intervals follow a normal pdf. CV > 50 % is considered aperiodic. Aperiodic events may follow an exponential pdf (CV ≈ 100 %), characteristic of random (Poissonian) occurrence, or cluster in time (CV > 100 %).
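The CV classification above can be sketched in a few lines; the recurrence intervals here are synthetic and purely illustrative:

```python
import numpy as np

def coefficient_of_variation(recurrence_intervals):
    """CV of recurrence intervals: standard deviation divided by the mean."""
    t = np.asarray(recurrence_intervals, dtype=float)
    return t.std() / t.mean()

# Quasi-periodic recurrence: intervals scatter mildly around a mean value.
quasi_periodic = [95.0, 102.0, 98.0, 107.0, 93.0, 105.0]

# Random (Poissonian) recurrence: exponentially distributed intervals,
# for which CV = 100 % by construction (std equals mean).
rng = np.random.default_rng(1)
poissonian = rng.exponential(scale=100.0, size=10_000)

print(f"quasi-periodic CV: {coefficient_of_variation(quasi_periodic):.2f}")  # well below 0.5
print(f"Poissonian CV:     {coefficient_of_variation(poissonian):.2f}")      # near 1.0
```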
Allowing earthquake interactions by means of static stress coupling or off-fault plasticity appears critical in controlling earthquake recurrence behaviour. In particular, static stress transfer between two seismogenic patches results in a switch from periodic to random, rarely synchronized behaviour (Sugiura et al., 2014; Varamashvili et al., 2008), as suggested by numerical models (e.g. Kaneko et al., 2010; Tullis et al., 2012a, b) and experimental results from the Rosenau et al. (2010) set-up shown in Fig. 12. This is consistent with simple spring-slider experiments (e.g. Burridge and Knopoff, 1967; King, 1991, 1994) and fault block models (e.g. Rubio and Galeano, 1994; Yamaguchi et al., 2011), where complexity emerges naturally.
Figure 13. Seismic vs. aseismic slip in a ring-shear test using rice.
Interaction with plastic deformation, i.e. faulting, of the hanging wall in the subduction earthquake models of Rosenau and Oncken (2009) similarly resulted in a more randomized recurrence of analogue earthquakes. The situation is similar with viscoelastic wedge models: while pure gelatine models (Corbi et al., 2013) display a very regular stick–slip (characteristic earthquakes), modified gelatine models tend to show more random behaviour (Brizzi et al., 2016). The rheological properties of gelatine in the latter were modified by adding NaCl, which increased the viscoelastic behaviour. This in turn affected analogue earthquake statistics and widened the range of earthquake magnitudes, recurrence times, and rupture durations by a factor of 2.
Quasi-periodic events can potentially be described by slip-predictable and time-predictable recurrence models (e.g. Weldon et al., 2004): in slip-predictable models the amount of slip depends on the duration of the previous inter-seismic period, while in time-predictable models the duration of the inter-seismic period depends on the size of the last event. However, no indication of such predictability has been found in nature or in analogue earthquake models (e.g. Rubinstein et al., 2012) except for spring-slider models (e.g. King, 1991, 1994). Nevertheless, a distinctive bimodal distribution of slip events emerges in the models by Hamilton and McCloskey (1998) as well as in the models of Rosenau et al. (2009), where smaller (but still large) events follow a distinctly different, though well-defined, frequency distribution than larger events. In contrast, spring-slider models by Burridge and Knopoff (1967) and King (1994) as well as some fault block models (e.g. Rubio and Galeano, 1994) show a more random behaviour.
Tectonic faults are known to accumulate slip unsteadily at a wide range of rates: from sudden, seismic-wave-releasing slip instabilities at speeds of metres per second (earthquakes) to slow, continuous aseismic creep at plate tectonic rates of centimetres per year.
Sliding in spring-slider and fault block models may occur either through a typical see-saw stress profile, with phases of stress accumulation (the stick phase) alternating with sudden drops (the slip phase), or through smooth and continuous motion. The first regime, also known as “stick–slip”, represents the basic physical model for the seismic cycle, while the second, known as “stable sliding”, is the analogue of creep.
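The stick–slip end-member can be sketched with a minimal event-based spring-slider: force builds linearly during the stick phase and drops instantaneously from the static to the dynamic friction level during slip. All parameter values below are arbitrary illustrations, not from any cited experiment:

```python
# Minimal event-based spring-slider: a block pulled through a spring of
# stiffness k by a driver moving at constant velocity v. Force builds up
# ("stick") until it reaches the static friction threshold F_s, then the
# block jumps forward so that the force drops to the dynamic friction
# level F_d ("slip"), producing the see-saw stress profile.
k, v, dt = 1.0, 1.0, 0.01      # spring stiffness, loading rate, time step
F_s, F_d = 10.0, 6.0           # static and dynamic friction forces

block = 0.0
forces, events = [], []
for step in range(5000):
    driver = (step + 1) * v * dt          # driver position at this time step
    force = k * (driver - block)
    if force >= F_s:                      # slip: instantaneous stress drop
        slip = (F_s - F_d) / k            # slip needed to relax F_s - F_d
        block += slip
        events.append(slip)
        force = k * (driver - block)
    forces.append(force)

print(f"{len(events)} analogue earthquakes, slip per event = {events[0]:.1f}")
```

With these values every cycle repeats identically (slip and recurrence interval are both fixed), which is exactly the time- and slip-predictable behaviour of the idealized spring-slider discussed above.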
Spring-slider set-ups in which stiffness and loading velocity were varied for artificial and natural fault gouge reproduced a wide variety of slip styles (Leeman et al., 2016; Kaproth and Marone, 2013). The deformable spring-slider set-up of Reber et al. (2015) is used to study transients in the brittle–ductile regime. The latter is defined as a two-mineral-phase regime in which one phase deforms in a brittle manner while the other is ductile (e.g. feldspar vs. quartz). In the experiments by Reber et al. (2015), a viscoelastic material is used to induce both creep and fracture. The slip transients observed at various speeds in such experiments may be equivalent to tremors and slow slip phenomena (Peng and Gomberg, 2010). A similar variability of slip styles can be produced using rice in a ring-shear tester set-up (Fig. 13), where shear and normal stress are the controlling factors.
Fault block models have been specifically designed to investigate frictional dynamics as a function of the system loading rate, material rheology, and interplate roughness. In general, a bifurcation from stick–slip to stable sliding is observed as the system loading rate increases (e.g. Baumberger et al., 1994). A similar transition from potentially seismic to aseismic behaviour has been speculatively applied to subduction megathrusts, where the observed earthquake magnitude decreases with depth and the subsequent switch off at the downdip limit of the seismogenic zone may be explained by a progressive decrease in the viscosity of the upper plate (Namiki et al., 2014) or by the progressive smoothing of the interplate roughness (Voisin et al., 2008; Corbi et al., 2011).
Figure 14. Rupture dynamics as observed in the subduction zone megathrust models of Corbi et al. (2013): slip evolution of a crack-like analogue subduction megathrust earthquake propagating upwards and downwards in the seismogenic zone (modified from Corbi et al., 2013).
Rupture dynamics, which includes the study of earthquake nucleation, the transition to dynamic rupturing, and its arrest, has by far the broadest range of applications of the phenomena that can be studied by analogue experiments. We can only give a small overview here of the vast amount of existing knowledge, highlighting the experimental contributions using analogue earthquake models. The latter include mainly fault block models where a precut surface in rock or rock analogue material is stressed by the application of far-field compressive or shear forces.
The nucleation of an earthquake, i.e. the onset of frictional instability, has been investigated with a variety of analogue models. It was studied experimentally using fault block models of precut rock (e.g. Dieterich, 1978a; Okubo and Dieterich, 1984; Ohnaka and Shen, 1999; McLaskey and Kilgore, 2013; McLaskey and Glaser, 2011; McLaskey et al., 2012) as well as rock analogues, e.g. polycarbonate (e.g. Nielsen et al., 2010; Rosakis et al., 2007, and references therein). Accordingly, the onset of frictional instability is characterized by quasi-static creep at speeds up to the loading velocity, followed by acceleration and dynamic rupture propagation. Based on theoretical considerations using a spring-slider system, Roy and Marone (1996) suggest that the transition occurs at a critical velocity that is a function of extrinsic and intrinsic frictional properties and mass. On the basis of empirical results using a fault block model and theory, Latour et al. (2013a) in contrast equate the transition from quasi-static to dynamic rupture with the transition from exponential to power law growth of the rupture length. They suggest that elastic and frictional properties control the transition.
Regarding rupture propagation, two main mechanisms can be distinguished, depending on slip duration at a single point along the fault with respect to total rupture duration. In the “crack model”, slip at a point is continuous for about the entire rupture duration, while in the “pulse model”, slip occurs only for a small fraction of the rupture duration (e.g. Heaton, 1990). Understanding what governs the duration of slip at a point is crucial for earthquake hazard assessment because the two models predict different degrees of strong motions with distance from the nucleation site (Marone and Richardson, 2006).
Brune et al. (1993) were amongst the first to find slip pulses travelling along interfaces of foam and relate them to earthquake dynamics. They argued that normal vibrations reduce the load on the fault at the rupture tip and thereby allow the rupture to propagate in a self-sustained, wrinkle-like manner and slip to occur at very low friction. Similarly, Schallamach (1971) and Rubinstein et al. (2004) reported detachment waves in experiments using rubber on hard ground and between PMMA blocks, respectively (Fig. 3b). Slip pulses were also found as the main rupture mechanism by later studies in different materials (e.g. Lykotrafitis et al., 2006; Nielsen et al., 2010). Lu et al. (2010) suggested that a low stress level along faults may support pulse-like behaviour. The role of slip pulses as an earthquake mechanism was studied more systematically using foam block models in order to explain the heat flow paradox associated with the San Andreas Fault (Anooshehpoor and Brune, 1994).
Using the same experimental technique, Anooshehpoor and Brune (1999) verified theoretical predictions of Weertman (1980) and Andrews and Ben-Zion (1997) regarding the directivity and speed of slip pulses travelling along contact interfaces between differentially compliant media. Key findings were that slip pulses propagate in the direction of particle motion in the more compliant medium at a rupture velocity close to the shear wave velocity of the more compliant medium. Similar results were found by Xia et al. (2005) using much stiffer bi-material interfaces (homalite). The role of the bi-material character of fault interfaces has since been studied in depth numerically (e.g. Ma and Beroza, 2008; Ampuero and Ben-Zion, 2008; Brietzke et al., 2007, 2009).
Consistent with the above and with Rosakis et al. (1999), who found that cracks can move at velocities faster than shear wave speed (“super-shear” ruptures), Lykotrafitis et al. (2006) found that pulses are generally characterized by a slower propagation velocity than cracks. Accordingly, the origin of the two different types of rupture modes depends on the strength of the initial forcing. Similarly, Xia et al. (2004) found that in their experimental set-up, the sub-shear to super-shear transition depends on the dynamic loading conditions.
Controls on rupture velocity other than the rupture mechanism have been studied by a variety of approaches. Using precut Columbia Resin, Wu et al. (1972) found that propagation velocity can range from sub-shear up to 110 % of the shear wave velocity. Using precut rock specimens, Johnson et al. (1973) found that particle velocity and rupture speed increase with stress drop, consistent with theoretical predictions. Okubo and Dieterich (1984) showed that rupture velocities along a simulated fault in granite are lower on rough faults than on smooth faults. Fault block models have also been developed to investigate how different configurations of roughness affect rupture propagation. It has been found that a single linear barrier may both accelerate and decelerate a rupture, while a large heterogeneous barrier slows down the rupture (Latour et al., 2013b). Rousseau and Rosakis (2009) investigated the effect of more complex fault geometries, including kinking and branching, on rupture propagation in homalite. At the same time, Templeton et al. (2009) were able to reproduce the experimental results numerically. The control on rupture velocity in general, and super-shear ruptures specifically, is a very active field in analogue earthquake studies (e.g. Lu et al., 2009; Schubnel et al., 2011; Mello et al., 2010, 2014, 2016; Passelègue et al., 2013, 2016).
Recently, seismotectonic scale models have become available that allow us to
study rupture dynamics in a subduction setting (Corbi et al., 2013). Because
of the slowness of the earthquake process in viscoelastic gelatine models,
rupture dynamics can be studied at high resolution. Key characteristics of
earthquake ruptures in viscoelastic subduction zone models of Corbi et al. (2013) regarding rupture nucleation, directivity and mechanism are as
follows:
Hypocentres concentrate near the base of the seismogenic zone (Fig. 14). This is consistent with numerical simulations (Das and Scholz, 1983; van Dinther et al., 2013a, b; Pipping et al., 2016). In nature, the spatial relation between the hypocentre and the rupture area is less clear.
Ruptures propagate bilaterally with a preference for the updip direction (Fig. 14) in the viscoelastic models of Corbi et al. (2013). This behaviour is consistent with previous analogue models of interplate seismicity performed with foam rubber (Brune, 1996) and elastoplastic materials (Rosenau et al., 2009) as well as numerical simulations (van Dinther et al., 2013a, b; Pipping et al., 2016). The most likely explanation for the preferential rupture migration to shallow levels is that the rupture follows the lithostatic pressure gradient that results from the thrust geometry (Das and Scholz, 1983). Also, the compliance contrast between gelatine and aluminium favours the upward migration direction. Such a bi-material contrast may be active in nature as well, where the overriding plate is expected to be more compliant than the subducting one (e.g. Ma and Beroza, 2008).
The majority of ruptures are crack-like, as they display slip at a point lasting for a large fraction of the total rupture duration.
The characteristics shown by the viscoelastic seismotectonic scale models by
Corbi et al. (2013) are consistent with observations in the experiments with
elastoplastic models of Rosenau et al. (2009). However, in the latter the
rupture process is far less well resolved as the models were stiffer, speeding up the process, while the monitoring resolution was limited.
Pioneering work by Archuleta and Brune (1975) using foam block models (Fig. 2c) to study ground motion has been followed by a number of similar studies which helped in interpreting seismological observations and improving numerical predictions. King and Brune (1981) summarized the modelling approach by stating that “The foam model acts as an analogue computer that automatically accounts for the diffraction, refraction, reflection, and conversion phenomena that occur when seismic waves interact with an attenuative, linear-elastic soil structure”.
Foam rubber models were for example amongst the first to explain the strong asymmetry in particle motion and associated ground motion across dipping faults. Brune (1996) investigated the dynamics of seismogenic thrusting using a wedge-shaped foam block. He found a pronounced amplification of particle and ground motion in the hanging wall. He explained this by considering static and dynamic effects: the free-surface effect, as predicted by analytical dislocation models, allows higher static particle motions in the hanging wall because of the possibility of the material to be lifted up. Additionally, seismic energy is reflected by the fault and the free surface and becomes trapped in the hanging wall wedge, increasing its coseismic motion. Shi et al. (1998) and Shi and Brune (2005) were able to reproduce and refine the experimental results numerically. Several numerical studies confirmed their results (Oglesby et al., 1998, 2000a, b; Ma and Beroza, 2008; Nielsen, 1998; Gabuchian et al., 2017). Gabuchian et al. (2014, 2017) most recently revisited this issue by means of experiments using homalite as a rock analogue. Besides the free-surface effects they focused on rupture velocity as a controlling factor for ground motion.
Brune and Anooshehpoor (1999) showed the dynamic effect of normal fault geometry and a low stress level at shallow depths. They found systematically lower accelerations of the model surface near normal faults when compared to strike-slip faults. Similar results have been obtained with numerical models (Shi et al., 2003; Oglesby et al., 1998, 2000a, b). The effect of a shallow weak and creeping zone on ground motions from strike-slip earthquakes has been studied quantitatively using foam models by Brune and Anooshehpoor (1998).
A co-genetic though more engineering-type approach has been used to study site effects due to topography (Anooshehpoor and Brune, 1989), sedimentary basins (King and Brune, 1981), and the response of buildings and other structures, such as dams, to earthquakes (e.g. Brune and Anooshehpoor, 1991a; Anooshehpoor and Brune, 1989). Brune and Anooshehpoor (1991b) simulated a large-scale seismic experiment in order to help interpret the seismic data obtained in nature. In such studies, geometric-scale models made from foam rubber were used. The models were excited by impulses or vibrations from plate and line sources simulating horizontally and vertically incident polarized seismic shear waves, which were picked up at model sites by accelerometers.
These studies demonstrated, for example, the sensitivity of seismic shear wave amplification to the incidence direction in the presence of topography as well as the resonance characteristics of, e.g., basins, alluvial fans, and constructions. Comparison to natural observations and theoretical predictions validated the experimental approach, which was subsequently used to study more complex scenarios beyond existing theoretical and (generally 2-D) numerical models. In addition to the strong damping of the foam used (Brune and Anooshehpoor, 1998), some limitations specific to this modelling approach were recognized by Brune and Anooshehpoor (1991), such as edge reflections, imperfect or contaminated input signals, and non-linearities related to imperfect boundary conditions (e.g. bonding of foam blocks to the foundation).
Figure 15. Seismic-cycle deformation as shown by multi-layer strike-slip fault models by Caniven et al. (2015).
Since Reid's formulation of the elastic rebound theory following the 1906 San Francisco earthquake (Reid, 1911), seismic cycles in various settings are seen as recurring, more or less sudden releases of stress or elastic strain energy that slowly accumulated in the period before. The term cycle by no means implies a regularity of the recurring events but rather describes the succession of the archetypical stages. Accordingly, a full seismic cycle consists primarily of the inter-seismic period (years to millennia) and the coseismic rupture (seconds). Precursors, post-seismic relaxation, and inter-seismic transients may complete the seismic cycle. Traditionally, the seismic cycle has been considered to be purely elastic (e.g. Klotz et al., 2001) and modelled accordingly using elastic models (e.g. Fig. 7b, c). The recognition of inter- and post-seismic viscoelastic relaxation phenomena in the ductile lower crust (e.g. Wang et al., 2012) and mantle, as well as possibly universal precursor activity (e.g. Bouchon et al., 2013), led to continuous refinement of the seismic-cycle concept. Finally, in recent years, plasticity theory has been formulated in the framework of seismic cycles, allowing the accumulation of permanent (i.e. tectonic) deformation through seismic cycles (e.g. Wang and Hu, 2006; Johnson, 2013). Observations both in nature (e.g. Wesson et al., 2015) and in experiments using seismotectonic scale models (e.g. Rosenau and Oncken, 2010) corroborate this new view on elastoplastic seismic-cycle deformation.
Seismic-cycle deformation has been simulated in seismotectonic scale models using elastic (foam: Caniven et al., 2015), viscoelastic (gelatine: Corbi et al., 2013), and elastoplastic rheologies (rubber mix: Rosenau et al., 2009, 2010; Rosenau and Oncken, 2009). Seismotectonic scale models were able to reproduce the basic pattern of seismic cycles in subduction zones and strike-slip zones, with alternating phases of stress build-up (analogue of the inter-seismic stage) and stress release (analogue of the coseismic stage) due to coseismic slip associated with uplift and subsidence on the order of a few micrometres (decimetres to metres if scaled to nature).
Caniven et al. (2015) developed crustal-scale three-layer brittle–ductile models by coupling frictional–plastic, elastic, and viscoelastic layers in a strike-slip setting. These models were intended to study the mechanical coupling between the layers with respect to seismic-cycle deformation. The models consist of three layers: a viscoelastic basal layer (PDMS) representing the lower crust, an elastic middle layer (polyurethane foam) with an embedded strike-slip fault (able to creep and stick–slip depending on treatment), and a thin cover of brittle-plastic granular material (mixture of silica and graphite or PVC plastic powder) representing the shallow aseismic crustal layer.
The model is loaded by applying both horizontal compression and shear at velocities on the order of micrometres per second. The kinematic evolution of the model is monitored using a high-performance optical system based on sub-pixel correlation of high-definition digital images, enabling very accurate measurement of model deformation with a spatial resolution of 1 to 5 mm, an accuracy of a few micrometres (equivalent to a dense permanent GPS network with 1 km station spacing), and a sampling rate of 0.2 Hz.
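The principle behind such sub-pixel image correlation can be illustrated in one dimension; the sketch below (not the actual algorithm of Caniven et al., 2015) recovers a fractional-pixel shift between two intensity profiles by FFT-based cross-correlation with parabolic interpolation of the correlation peak:

```python
import numpy as np

def subpixel_shift(a, b):
    """Estimate the relative shift between two 1-D profiles a and b by
    circular cross-correlation, refined to sub-pixel precision by fitting a
    parabola through the correlation peak and its two neighbours."""
    n = len(a)
    # Circular cross-correlation via FFT; the peak position is the integer shift.
    xc = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    i = int(np.argmax(xc))
    # Three-point parabolic interpolation around the peak.
    y0, y1, y2 = xc[(i - 1) % n], xc[i], xc[(i + 1) % n]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    shift = i + delta
    return shift if shift <= n / 2 else shift - n  # wrap to a signed shift

# Synthetic test: a smooth intensity profile shifted by 0.3 pixels.
x = np.arange(256)
true_shift = 0.3
profile = np.exp(-((x - 128) / 10.0) ** 2)
shifted = np.exp(-((x - 128 - true_shift) / 10.0) ** 2)
print(f"recovered shift: {subpixel_shift(shifted, profile):.2f} px")
```

Real digital image correlation systems apply the same idea to 2-D interrogation windows of a speckle pattern, but the sub-pixel refinement step is the essential ingredient that pushes the displacement accuracy well below one pixel.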
Caniven et al. (2015) used an average length scale of
This experimental set-up by Caniven et al. (2015) succeeded in simulating realistic long-term tectonic loading along with seismic-cycle phases. In particular, inter-seismic loading (Fig. 15a) is either relaxed by slow and continuous, aseismic fault creep (Fig. 15b) or by successive instantaneous fault slip events (coseismic phase; Fig. 15c). After a seismic slip event low-amplitude, slow deformation occurs (post-seismic relaxation phase; Fig. 15d). The simulated kinematics of all stages can be directly compared with geodetic and seismological observations in nature. For each experiment, the evolution of the inter-seismic strain field is recorded semi-automatically by measuring surface deformation, calculating the evolution of the locking depth, quantifying the amount and location of aseismic creep, and analysing the spatial and temporal distribution of coseismic ruptures (surface rupture dimensions and geometries, coseismic slip profiles, earthquake magnitude, return period) and the post-seismic relaxation phase (surface deformation kinematics, decay of micro-earthquake activity). The model results are comparable to numerical simulations of strike-slip fault earthquakes in terms of seismic moment, slip gradients, and post-seismic response (e.g. Ben-Zion and Rice, 1997; Lapusta and Rice, 2003; Tullis et al., 2012a, b).
Lithospheric-scale elastoplastic models were used to study the seismotectonic evolution of subduction zone forearcs (Rosenau et al., 2009; Rosenau and Oncken, 2009). Such models have helped to understand the relationship between earthquakes along the subduction megathrust and the structure and topography of the forearc wedge. They are therefore a valuable tool in understanding the links between short-term and long-term deformation processes, i.e. bridging the timescales from single earthquakes to tectonic evolution.
Figure 16. Seismotectonic evolution of subduction zone forearcs as suggested by elastoplastic subduction zone megathrust models.
The models of Rosenau et al. (2009) consist of a 200 mm thick granular wedge representing the brittle forearc lithosphere (< 60 km depth), made of a mixture of rubber pellets and sugar into which a seismogenic zone of rice grains is embedded. The whole model sits on top of a conveyor plate driven at a few millimetres per minute to simulate convergence. Kinematic monitoring used the particle image velocimetry method, able to detect displacements down to tens of micrometres at 10 Hz. While this resolution was good enough to monitor seismic-cycle deformation, it was too low to image the rupture process, which occurred within less than 0.1 s. Nevertheless, Rosenau et al. (2009) succeeded in simulating the main stages of the seismic cycle, namely the co-, post-, and inter-seismic stages. The key purpose of this set-up was to study the accumulation of permanent (plastic, tectonic) deformation over several seismic cycles. Differentiating between elastic and plastic deformation on a seismic-cycle scale was achieved by using elastic dislocation modelling to subtract the elastic component from the observed elastoplastic deformation.
According to 2-D models a few percent of plate convergence is converted into permanent across-strike shortening of the forearc wedge over several seismic cycles. Shortening localizes both at the updip and downdip limit of the seismogenic areas along the megathrust (Fig. 16a). At the updip limit of the seismogenic zone, coseismic compression is relaxed post-seismically by internal shortening accommodated by a splay fault in the models of Rosenau et al. (2009, 2010). This is consistent with theoretical predictions (Wang and Hu, 2006) and observations in nature (e.g. Lieser et al., 2014). Vice versa, during the inter-seismic period compression occurs at the downdip limit of the seismogenic zone and may lead to uplift of the coast over multiple seismic cycles.
Results from 3-D experiments in the Rosenau et al. (2009) set-up suggest that a similar mechanism is active along-strike causing permanent shortening and uplift of coastal regions overlying aseismically slipping zones (barriers) along the megathrust (Fig. 16b, c).
In summary, analogue models suggest that permanent shortening is localized at the periphery of the rupture areas of repeated great earthquakes.
Rosenau and Oncken (2009) moreover suggest a feedback between forearc deformation and seismogenesis along the megathrust: accordingly, because the stable wedge part overlying the seismogenic zone in segmented forearcs deforms quasi-elastically, characteristic great earthquakes tend to occur fairly periodically as in simple spring-slider experiments and numerical simulations of the experiments (Pipping et al., 2016). In contrast, less segmented subduction zone forearcs have been predicted to show more random earthquake occurrence. This is in line with observations (Tormann et al., 2015) and numerical predictions (Fuller et al., 2006).
Based on 2 centuries of development in experimental tectonics and seismology, analogue modelling has become an explorative simulation tool in the past decade to understand the link between short-term and long-term deformation processes, bridging the timescales from earthquake nucleation to tectonic evolution over multiple seismic cycles. This new across-scale modelling approach met the need for a better understanding of natural observations which have become available due to developments in seismological and geodetic monitoring techniques (GPS and InSAR) and an increase in the frequency of occurrence of large to great earthquakes in a variety of settings.
Here, we have presented an overview of experimental approaches to model earthquakes, seismic cycles, and seismotectonic deformation. The processes involved are multi-scale, posing the challenge of crossing timescales from seconds (seismic deformation) to millions of years (tectonic deformation) both in natural observations and in simulations and experiments. Since natural observations are intrinsically limited in resolution and period of observation, simulations by means of analogue and numerical modelling are key to understanding multi-scale processes. An experimental approach to multi-scale problems seems most natural because experiments are physically self-consistent and happen in a time and space continuum. This is in contrast to numerical models, which need strong assumptions on the physical laws involved and need to be discretized.
Existing analogue earthquake models have been categorized as (1) spring-slider models, (2) fault block models, and (3) seismotectonic scale models according to their complexity, similarity, and applicability to the natural prototype. Seismotectonic scale models have been developed very recently, exploiting technological advances in material characterization and deformation monitoring techniques. Materials used in seismotectonic scale modelling studies include elastic, frictional–plastic, and viscoelastic rheologies. Monitoring techniques exist that allow us to monitor deformation in the lab accurately and precisely at high spatial and temporal resolution and with large coverage. Numerical modelling and inversion techniques adapted from geodesy and seismology allow us to infer hidden kinematic and dynamic parameters which are not directly observable (e.g. volumetric strain, material properties).
The key challenges and future developments we see are as follows:
New materials remain to be explored. Especially non-linear rheologies, both in the brittle and in the viscoelastic regime, will contribute to more realistic analogue models in the future. For example, the implementation of a Burgers rheology in analogue models studying post-seismic mantle relaxation appears to be a necessary step in the near future. A rigorous characterization of the material is a prerequisite.
Monitoring techniques are continuously being developed towards higher resolution in both space and time as well as full coverage. A key future challenge is handling the growing amount of image data thus derived. Adaptive imaging, i.e. adjusting the imaging rate to the deformation rate, is a way to reduce data production. Such adaptive imaging might be based on external triggering, e.g. by a combination with force measurements, or internal triggering, i.e. by applying fast (near real-time) image cross-correlation (“live strain gauge”).
The coupling of analogue models with numerical models helps to overcome their respective limitations and leads to a better exploitation of their respective potentials. For example, numerical models can be used to infer quantities from the experiment that are not directly observable, such as small-scale details of rupture dynamics or unknown material properties. Numerical models also provide the means to better constrain boundary conditions and imposed artefacts in analogue models. On the other hand, experiments can help in validating numerical models by testing their predictions and thereby justifying the simplifications of the physical processes and the parametrizations involved. Cross-validation and benchmarking in general should be promoted in the respective communities.
Properly scaled analogue earthquake models may help to improve seismological and geodetic inversion techniques and overcome the non-uniqueness of numerical solutions. They provide a large number of well-constrained and self-consistent case studies which display both natural complexity and variability. Analogue earthquakes may thus serve to minimize the solution space and, for example, more adequately constrain slip variability.
Finally, experimental techniques are a superb method for visualizing and teaching complex processes. For example, simple spring-slider experiments equipped with force sensors and accelerometers are easy to realize and provide fascinating hands-on experience in relation to earthquakes.
Tackling the above challenges will enable analogue and numerical modellers to develop more complex and realistic seismotectonic scenarios in terms of structure and rheology. Higher resolutions will shift the detection threshold for analogue earthquakes (i.e. the magnitude of completeness) further down, extending the observable magnitude range.
Original data underlying the material presented here are published in an open access dataset by Rosenau et al. (2016).
The authors declare that they have no conflict of interest.
Fabio Corbi received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no. 658034 (AspSync). We thank Malte Ritter and Michael Rudolf for sharing their rock mechanical data compilations. We thank Kirsten Elger and GFZ Data Service for publishing the data. We thank Michele Cooke and an anonymous reviewer as well as the editor Susanne Buiter for very constructive comments. Edited by: S. Buiter Reviewed by: M. Cooke and one anonymous referee