Introduction
Fault slip rates are generally estimated by dividing measurements of the offset of
geologic marker features by the time over which that offset accumulated (it is not
currently possible to measure a slip rate directly, though the term “slip rate
measurement” may be used to compare to a simulated or modeled value). The uncertainty in
the resulting slip estimate is typically treated as epistemic, and quantified
through the propagation of the measurement uncertainties in the offset and time
quantities. However,
for slip rate estimates on active faults made from offset measurements near the fault
trace (i.e., within a horizontal distance that is a small fraction of the fault's locking
depth, as the width of the zone affected by earthquake-cycle strains is a function of
locking depth), the
episodic nature of surface displacement due to the fault's position in the earthquake
cycle will necessarily affect the results: if the measurements are taken immediately
before an earthquake, the measured offset and resulting slip rate estimate will be lower
than average; while if the measurements are taken immediately after an earthquake, the
offset and rate will be higher.
The magnitude of the perturbation to the slip rate estimate is, of course, a function
of the number of cumulative earthquakes that have contributed to the measured
offset (plus any
aseismic strain such as afterslip or creep). For older Quaternary markers that have
experienced tens to hundreds of major earthquakes, the effects will be minor; and for
bedrock geologic markers with kilometers of displacement, the earthquake cycle is likely
not worth accounting for. However, due to progressive erosion of geologic markers and the
challenge of dating many late Pliocene to early Quaternary units (which are too old for
radiocarbon and many cosmogenic nuclide systems), geologists often have no choice but to
date late Pleistocene to Holocene markers. These units may also be more
desirable targets if the scientists are primarily concerned with estimating the
contemporary slip rate on a fault whose rate may vary over Quaternary
timescales. For slow-moving
faults, slip that has long awaited release or has recently been released may represent
a sizeable fraction of the measured fault offset.
Careful paleoseismologic and neotectonic scientists will take this into account in their
slip rate calculations if sufficient data are available, especially in the years after a
major earthquake, and many others will discuss the
potential effects if the data are not. These
researchers may only consider the time since the last earthquake, often making the
assumption (stated explicitly or not) that the earthquakes are identical in slip and perfectly
periodic.
However, the recurrence intervals between successive earthquakes on any given fault
segment have some natural (i.e., aleatoric) variability; similarly, displacement
at a measured point is not identical in each earthquake
Therefore, the measured slip rate may deviate from
the time-averaged rate based on the amount of natural variability in the earthquake
cycle, particularly given successive events from the tails of the recurrence interval or
displacement distributions.
The physical mechanisms responsible for the aleatoric variability in earthquake
recurrence intervals and displacements are still unclear, and the subject of active
investigation. Most earthquakes serve to release differential stresses caused by relative
motions of tectonic plates or smaller crustal blocks; relative plate velocities measured
over tens of thousands to millions of years from geologic reconstructions are similar
enough to those measured over a few years through GPS geodesy that sudden transient
accelerations and decelerations are unlikely. As a
consequence, plate boundary faults may have near-constant rates of loading from tectonic
stress. Many plate boundary faults are among the most regularly rupturing faults known,
particularly sections that are isolated from nearby faults
and therefore not affected by stress perturbations
resulting from earthquakes on other faults.
These stress perturbations may be “static” coseismic instantaneous stresses in the
elastic upper crust resulting from earthquake displacement, or
analogous post-seismic stress changes in the viscoelastic lower crust or upper mantle
from the time-dependent relaxation of static stress perturbations;
alternatively, these stress perturbations may be
“dynamic” transient stress changes that accompany the passage of seismic waves from
nearby or distant earthquakes. Additionally, changes in pore
fluid pressure in a fault zone may increase or decrease the required shear stress to
initiate an earthquake. In contrast to isolated plate
boundary faults, intraplate faults or those on distributed plate boundaries may have
lower stress accumulation rates and the stress perturbations from activity on other
faults may be enough to significantly affect the timing of earthquakes on a given fault.
Though the physical mechanisms responsible and the statistical character of
this natural variability remain under debate, its effects on the estimated slip
rates may still be estimated given some common parameterizations.
In this study, the effects of the natural variability in earthquake
recurrence intervals and per-event displacements on neotectonic slip
rate estimates are investigated through Monte Carlo simulations. The
study is geared towards providing useful heuristic bounds on the
aleatoric variability and epistemic uncertainty of late Quaternary slip rate
estimates for fault geologists, probabilistic seismic hazard modelers,
and others for whom such uncertainties are important.
Modeling the earthquake cycle
To study the effects of the natural variability in the earthquake cycle
on estimated slip rates, long displacement histories of a simulated
fault with different parameterizations of the earthquake recurrence
distribution will be created. Then, the mean slip rate over time windows
of various sizes will be calculated from each of the simulated
displacement histories, and the distribution in these results will be
presented, representing the natural variability in this quantity.
The code used in the simulations is publicly available (Styron, 2018).
To isolate the effects of the earthquake cycle from other phenomena that may affect slip
rate estimates, this study does not attempt to model erosion, nor does it consider any
measurement uncertainty in the age or offset of the faulted geologic markers; these
quantities are assumed to be perfectly known. Additionally, though natural variability in
per-earthquake displacement is included in the model, it is minor and the same for all
recurrence distributions; though it is a random variable in the simulations, it is not an
experimental variable. Furthermore, though the model has one length dimension (fault
offset), it is still best thought of as a point (0-dimensional) model, as there is
no spatial reference or along-strike or down-dip variability, and hence the magnitude of
each earthquake is undefined, and no magnitude–frequency distribution exists.
Earthquake recurrence interval distributions
There are a handful of statistical models for earthquake recurrence
interval distributions that are under widespread consideration by the
seismological community.
The most commonly used is the exponential distribution. This is associated with
a Poisson process, and is the distribution that results from earthquakes being
distributed uniformly and randomly within some time interval. Consequently, the
probability of an earthquake (or other event) occurring at any time does not change with
time since the previous event (in other words, the hazard function is time-invariant);
this leads to the characterization of the exponential recurrence distribution as
“random”, “memoryless”, or “time-independent”. The exponential distribution is also
the simplest to describe statistically, as it requires only one parameter (the mean rate
parameter), which is the inverse of the statistical scale parameter. (The scale parameter
of a distribution determines the dispersion of the distribution, while the shape
parameter determines the shape or form of the distribution; a parameter that translates the
distribution along the x axis is called a location parameter.) The standard deviation
of a large number of samples generated from an exponential distribution is equal to the
mean.
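This property is easy to verify numerically (a minimal sketch using NumPy, separate from the published simulation code): drawing a large number of exponential recurrence intervals with a 1000-year mean yields a sample standard deviation that also approaches 1000 years.

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential recurrence with a 1000-year mean; the rate parameter is
# 1/1000 per year, and the scale parameter is its inverse (1000 years).
mean_recurrence = 1000.0
intervals = rng.exponential(scale=mean_recurrence, size=100_000)

# For an exponential distribution the standard deviation equals the
# mean, so the sample CV should be very close to 1.
print(round(intervals.mean()), round(intervals.std()))
```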
The other distributions that are in common usage are time-dependent distributions,
meaning that the probability of an event occurring at any time since the previous event
changes with the elapsed time since that event. This class of distributions includes the
lognormal, Weibull, and Brownian passage-time
distributions. Though these distributions differ in notable ways, particularly in the
properties of the right tails at values greater than several times the mean,
they share a general shape and, given
suitable parameters, generated sample sets of small size may not be substantively
different. In fact, the distributions are similar enough that it is difficult, if not
impossible, to discriminate between them given realistic seismologic and paleoseismologic
datasets. These distributions are
described by both the scale and shape parameters.
The behavior of these distributions and of empirical datasets may be characterized by the
regularity of the spacing between events (i.e., the recurrence intervals): these may be
periodic, unclustered (i.e., “random”), or clustered. Assignment into these categories
is typically done with a parameter known as the coefficient of variation, or
CV=σ/μ, where σ is the standard deviation of the recurrence
intervals, and μ is the mean recurrence interval.
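The CV calculation and the resulting classification can be expressed in a few lines (an illustrative helper; the function name and return convention are mine, but the CV definition follows the text):

```python
import numpy as np

def classify_recurrence(intervals):
    """Return (CV, label) for a sequence of recurrence intervals,
    where CV = sigma / mu as defined in the text."""
    intervals = np.asarray(intervals, dtype=float)
    cv = intervals.std() / intervals.mean()
    if cv < 1.0:
        return cv, "periodic"
    if cv > 1.0:
        return cv, "clustered"
    return cv, "unclustered"

# A cluster of short intervals followed by a long gap gives CV > 1:
print(classify_recurrence([100.0, 100.0, 100.0, 5000.0]))
# Perfectly repeating events give CV = 0 (strongly periodic):
print(classify_recurrence([1000.0, 1000.0, 1000.0, 1000.0]))
```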
Periodic earthquakes are those that occur more regularly than random, and have a
CV<1 (i.e., σ<μ). These may be generated by any of the
time-dependent distributions described above with suitable scale and shape parameters.
(Note that in this paper, the use of the term “periodic” does not mean
perfectly repeating as it might in the physics or mathematics literature; the
behavior referred to is most accurately termed “quasi-periodic”, but that term will not
be used in the interests of conciseness.)
Unclustered earthquakes occur as regularly as random, and have a CV=1. These
may be generated by the exponential distribution (which can produce no other CV), or by any
of the time-dependent distributions as well, given the appropriate parameters. Note that
sample sets generated from these different distributions will not be identical:
sequences with an exponential recurrence distribution will have many more pairs of events
that are much more closely spaced together than the mean, and more pairs of events that
are much more widely spaced than the mean, compared to a sequence generated from the
lognormal distribution. Nonetheless, these will cancel out in the aggregate statistics,
so that the standard deviations will be equal. A comparison of these may be seen in
Fig. .
Earthquake recurrence distributions; “logn” represents lognormal; “exp”
represents exponential. Colors for each distribution are the same in all
figures.
Clustered earthquake sequences have sets of very tightly spaced earthquakes
that are widely separated (Fig. ), and have a
CV>1. These may be
generated from a hyperexponential distribution, which is the sum of multiple
exponentials with different means, or from the time-dependent distributions
described above, given the right parameters.
No consensus exists among earthquake scientists as to the most appropriate
recurrence interval distribution. As is generally the case,
the safest and probably most correct assumption is that the appropriate
distribution is context-dependent. Many studies of plate boundary faults such as the San
Andreas conclude that major or “characteristic” earthquakes are periodic.
Conversely, many intraplate faults with low slip rates appear to show
clustered earthquakes separated by long intervals of seismic quiescence
However, one can find examples of
studies indicating the opposite conclusions, even from the same study
areas.
Spacing of 15 simulated successive earthquakes from each recurrence
distribution. Note that the gap between the last displayed earthquake and the
right side of the plot does not represent a long recurrence interval.
Modeled recurrence interval distributions
This study will compare four recurrence interval distributions (Fig. ):
A periodic distribution, represented by a lognormal
distribution with a mean recurrence interval μ=1000 years,
a standard deviation σ=500 years, and a CV=0.5.
An unclustered time-dependent distribution, represented by a
lognormal distribution with a mean recurrence interval μ=1000 years,
a standard deviation σ=1000 years, and a CV=1.0.
A clustered time-dependent distribution, represented by a
lognormal distribution with a mean recurrence interval μ=1000 years,
a standard deviation σ=2000 years, and a CV=2.0.
An unclustered time-independent distribution, represented by an
exponential distribution with a mean recurrence interval μ=1000 years, a standard deviation σ=1000 years, and a
CV=1.0.
These distributions have been selected to represent a diversity of behaviors
with a compact and tractable number of simulations, and particularly to
explore how changes in CV as well as the shape of the distribution impact
slip rate estimates.
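These four cases can be parameterized with standard lognormal moment relations, converting a desired arithmetic mean and standard deviation into the (mu, sigma) of the underlying normal distribution (a sketch with NumPy; the helper function is mine and is not necessarily how the published code, Styron 2018, is organized):

```python
import numpy as np

def lognormal_params(mean, std):
    """Moment-matched parameters of the underlying normal distribution
    for a lognormal with the given arithmetic mean and standard deviation."""
    sigma2 = np.log(1.0 + (std / mean) ** 2)
    return np.log(mean) - sigma2 / 2.0, np.sqrt(sigma2)

rng = np.random.default_rng(42)

# The periodic case: mu = 1000 yr, sigma = 500 yr, CV = 0.5.
mu, sigma = lognormal_params(1000.0, 500.0)
intervals = rng.lognormal(mean=mu, sigma=sigma, size=200_000)
print(round(intervals.mean()), round(intervals.std()))  # close to 1000, 500
```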
Earthquake slip distributions
All earthquake recurrence distributions share a single earthquake slip distribution
(Fig. ). This distribution is a lognormal distribution with μ=1 m
and σ=0.75 m, which produces essentially “characteristic” earthquakes that
still, nonetheless, have some variability. This is representative of behavior observed in
many studies. Taken together, the mean slip of 1 m and the mean recurrence
interval of 1000 years shared by each of the recurrence interval distributions yields a
mean slip rate of 1 mm year-1. This rate is fairly typical for intraplate faults
studied by paleoseismologists, and also allows for easy normalization so that the results
of this study can be generalized to faults with different parameters.
Earthquake slip distribution.
The choice of the lognormal distribution is for convenience, simplicity, and flexibility:
it is a common well-known distribution and – should one be interested – can be easily
given different shape and scale values to modify the CV or change the mean slip rate in
the modeling code used in this paper.
However, it is not necessarily the most accurate representation of earthquake slip
variability. A published compilation of field measurements of surface
ruptures from 13 earthquakes offers an empirical point of comparison. The resulting distribution (Fig. S1 in the Supplement) has
some significant differences from the lognormal distribution used here, though its CV of
0.67 is quite close. To test the sensitivity of the results given
in this paper to the choice of slip distribution, the numerical simulations presented in
this work were run with the only change being the use of that empirical slip
distribution, and the results are given in the Supplement (Fig. S2,
Table S1). Though there is more discussion in the Supplement, the results are essentially
identical to those presented below. As an additional experiment, the numerical
simulations have been run using an invariant per-event displacement of 1 m. Though this
is not a realistic scenario, it allows for a deconvolution of the effects of earthquake
time stochasticity and earthquake displacement stochasticity. The results are shown in
Figure S3 and Table S2, and discussed in the Supplement; there are noticeable differences
in the results, but they are quite small and do not call into question the results and
conclusions presented below.
Stochastic displacement histories
For each of the earthquake recurrence distributions, a 2 000 000-year-long time series
of cumulative displacements is calculated, and then slip rates are estimated over time
windows of different lengths.
The construction of the displacement histories is straightforward. From each recurrence
distribution, a little over 2000 samples are drawn randomly. Then, these are combined
with an equal number of displacement samples drawn randomly from the earthquake slip
distribution. Finally, a cumulative displacement history is created for each series from
a cumulative sum of both the recurrence interval samples (producing an earthquake time
series) and displacement samples (producing a cumulative slip history). Years with no
earthquakes are represented as having no increase in cumulative displacement. Then, the
series is trimmed at year 2 000 000; it is initially made longer because the stochastic
nature of the sample sets means that 2000 earthquakes may not always reach
2 000 000 years.
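The construction described above can be sketched as follows (assumptions: NumPy, the unclustered lognormal case, and a moment-matching helper of my own; this mirrors the procedure in the text rather than reproducing the published code verbatim):

```python
import numpy as np

rng = np.random.default_rng(1)

def lognormal_params(mean, std):
    # Moment-matched (mu, sigma) of the underlying normal distribution.
    s2 = np.log(1.0 + (std / mean) ** 2)
    return np.log(mean) - s2 / 2.0, np.sqrt(s2)

# Recurrence intervals (yr): unclustered lognormal, mu = sigma = 1000 yr.
mu_t, sd_t = lognormal_params(1000.0, 1000.0)
# Per-event displacements (m): the shared slip distribution (1 m, 0.75 m).
mu_d, sd_d = lognormal_params(1.0, 0.75)

# Oversample (~2500 events) so the series safely spans 2,000,000 years,
# then trim, as in the text.
n_events = 2500
event_times = np.cumsum(rng.lognormal(mu_t, sd_t, n_events))
cum_slip = np.cumsum(rng.lognormal(mu_d, sd_d, n_events))
keep = event_times <= 2_000_000

# Year-by-year cumulative displacement; years with no earthquake simply
# carry the previous total forward.
years = np.arange(2_000_001)
n_before = np.searchsorted(event_times[keep], years, side="right")
cum_disp = np.concatenate(([0.0], cum_slip[keep]))[n_before]

print(round(cum_disp[-1] / 2_000_000 * 1000, 2), "mm/yr (near 1 on average)")
```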
The displacement histories in Fig. clearly show that given the
stochastic nature of the samples, the cumulative displacements can diverge greatly from
the mean. The magnitude of this divergence appears to be related to the CV of the
recurrence interval distributions: the clustered series (CV=2) has by far
the most divergence, both unclustered series (lognormal and exponential with
CV=1) behave qualitatively similarly, and the periodic series
(CV=0.5) tracks most closely with the mean. The divergences from the mean
are driven by successive closely spaced earthquakes, perhaps with high displacements, or
by long durations of quiescence. The clustered series in particular shows a pattern of
many closely spaced events (clusters) leading to a much higher than average displacement
accumulation rate, followed by very long episodes of dormancy in which regression to the
mean occurs. From visual inspection, the dormant episodes appear to be composed of one
or two exceptionally long inter-event times. This of course is reflected in the great
asymmetry of this distribution (Fig. ), with the very short mode and
“fat” right tail.
Simulated displacement histories for each of the recurrence
distributions, and the “true” mean line at 1 mm year-1 in black; (a) the
first 100 000 years; (b) the entire 2 000 000 years. The
histories are the same in both plots.
Please note that in the construction of the cumulative displacement histories, all
samples are independent. This means that the duration of any recurrence interval does not
depend on the duration of the previous or subsequent interval (in other words, there is
no autocorrelation in these series); the same applies to the displacement samples. It is
currently unknown to what degree autocorrelation exists in real earthquake time and
displacement series, or how much correlation is present between recurrence intervals and
subsequent displacements. Autocorrelation in recurrence interval sequences is essentially
unstudied, though on the basis of a preliminary unreviewed analysis, I suspect that it
is as important as CV.
Furthermore, the magnitude of displacement is independent of the corresponding recurrence
interval. The framework of elastic rebound theory in its most basic form should predict
some correspondence between inter-event (loading) duration and slip magnitude, and this
is included (implicitly or explicitly) in oscillator models incorporating complete stress
or strain release in each earthquake
or in any model where
coseismic friction drops to zero, as this is functionally equivalent (because
f_a = τ_a,s/τ_a,n, where f_a,
τ_a,s, and τ_a,n are, respectively, friction at rupture
arrest, shear stress at rupture arrest, and effective normal stress at rupture arrest;
zero friction implies zero shear stress, or complete stress drop). Given a reasonably
constant loading rate, complete shear stress or strain release implies some
proportionality between the loading time and displacement. Nonetheless, this
correspondence is not found in the more extensive paleoseismic datasets
(and in some studies the correlation may even be negative), but the number of
paleoseismic datasets of sufficient size and quality to identify these effects with
statistical significance is small.
Because this modeling strategy involves sampling independence, it is
essentially a neutral model. If any correlation structure exists in the
sample sets, it will affect the displacement histories in predictable
ways. Negative autocorrelation in the sample sets, meaning that a long
interval (or slip distance) is followed by a short interval (or slip
distance) and vice versa, will cause a more rapid regression to the mean
slip rate line, and decrease the scatter in the slip rate estimates. A
positive correlation between recurrence (loading) intervals and slip
magnitudes will have the same effect. Conversely, positive
autocorrelation in either of the sample sets, or negative correlation
between the recurrence intervals and slip magnitudes, will lead to
slower regression to the mean line and therefore an increase in the
scatter of the slip rate estimates.
Envelopes of estimated slip rates as a function of the mean number
of earthquakes (or thousands of years) over which the slip rate was
estimated. All slip rates have a true value of 1 mm year-1. (a) periodic
distribution; (b) unclustered lognormal distribution; (c) clustered lognormal
distribution; (d) unclustered exponential
distribution.
Slip rate calculations
The uncertainty in the estimated slip rates due to earthquake cycle
variability is estimated by taking a function, R^, that calculates the
mean slip rate within a time window t, and sliding it along the
displacement series. R^ is calculated simply as
R^(D0, D1, t) = (D1 - D0) / t,
where D0 is the cumulative displacement at the beginning of the time window, D1 is
the cumulative displacement at the end of the time window, and t is the length of the
time window. The ^ symbol signifies an estimate rather than the true value R.
This slip rate estimation method is intended to represent a neotectonic-style slip rate
estimate in which the number of earthquakes that have contributed to the observed
deformation is unknown, as are the durations of the open intervals that bound the time
window (one of which precedes deposition of the marker unit, and one is the time since
the most recent earthquake and the measurement time). By sliding R^ over the
displacement series, a set of many samples of R^ is generated, so that we may
analyze the distribution. The number of samples is n=N-t+1, where N is the
length of the total series (2 000 000 years in this study).
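The sliding-window estimator can be vectorized directly from the definition of R^ (a sketch; it assumes a yearly cumulative-displacement series with N + 1 points for years 0 through N, as described above):

```python
import numpy as np

def sliding_rate_samples(cum_disp, window):
    """All window-length slip-rate estimates R^ = (D1 - D0) / t from a
    yearly cumulative-displacement series with N + 1 points (years
    0..N); returns n = N - t + 1 samples, as in the text."""
    d0 = cum_disp[:-window]   # displacement at each window start
    d1 = cum_disp[window:]    # displacement at each window end
    return (d1 - d0) / window

# Toy check: one 1 m earthquake every 1000 years for 10,000 years.
years = np.arange(10_001)
cum_disp = (years // 1000).astype(float)
rates = sliding_rate_samples(cum_disp, window=5000)
print(rates.min(), rates.max())  # every 5000-year window spans 5 events
```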
A major goal of this study is to provide an answer to the following question: over how
long an interval should slip rates be measured in order to estimate a meaningful rate? This question
will be answered by looking at the distribution in R^ as a function of t. Fifty
values of t from 500 years to 100 000 years, logarithmically spaced, are used. Note
that given μ of 1000 years, this translates to 0.5–100 mean numbers of earthquakes
in the window.
The results of these calculations are shown in Fig.
for up to 60 mean earthquake cycles. It is clear that the total variability
in the estimated slip rates is initially quite high when t is short
(< 10 000 years or ∼10 earthquakes). Particularly when t is
< 5000 years, the maximum rates are a factor of 3 or more greater
than the true rate R, but the median rates are lower than R – this means
it is more likely that fewer earthquakes are captured in the time window than
naively expected given the mean recurrence, and that the time contained in
the open intervals is a substantial fraction of the total time window. As the
median is lower than R, most measurements over these short timescales will
underestimate the mean rate, although not necessarily by much.
With longer t, between 10 000 and 20 000 years (or 10–20 earthquakes), the variation in
the slip rate estimates stabilizes to within ±100 % of the mean
(Fig. ) for all distributions, though this happens most quickly
in the periodic distribution, and most slowly in the clustered distribution. In fact, the
only exception here is that the lower bound of the clustered distribution can stay at
zero for more than 60 mean earthquake cycles. It is highly unlikely that any given
recurrence interval will be this long; but given thousands of earthquakes over millions
of years, the chance of such an event occurring at least once is far higher. For
rate estimates longer than several tens of mean earthquake cycles, the variation
decreases very slowly but progressively with increasing window length.
Note that with a measurement time exceeding 5–10 mean earthquake cycles, the standard
error (√(σD² + σr²)/√n, where
σD is the standard deviation of displacement and σr is
the standard deviation of the recurrence interval) is a reasonable approximation for the
standard deviation of the variability in the slip rates due to earthquake-cycle
stochasticity, and shows broadly similar decreasing variation with increasing earthquake
cycles. However, the standard error is symmetrical, whereas the variability displayed here
is asymmetrical due to the asymmetry of the recurrence interval and displacement
distributions.
Epistemic uncertainty relative to the measured rate for each of the
recurrence distributions as a function of the mean number of earthquakes (or
thousands of years) over which the slip rates were measured. (a) periodic
distribution; (b) unclustered lognormal distribution; (c) clustered lognormal
distribution; (d) unclustered exponential
distribution.
Normalizing to different slip rates and earthquake offsets
The distributions in this study were chosen to have means of 1 in natural units (a
1 kyr mean recurrence interval and 1 m mean slip) in order to make the
mean slip rate R=1 mm year-1, and therefore to make all results easy to
generalize to different systems with different real rates. This normalization requires
some values for the mean per-event displacement D‾ and the slip rate R,
yielding a normalization factor (or coefficient) NF that can be applied to the time values as shown for the x axis
in Fig. :
NF = D‾/R.
NF is also equal to the mean recurrence interval μ given suitable unit
transformations (though the recurrence interval may not be known a priori). For
example, a fault with a slip rate of 5 mm year-1 but a per-event mean slip of 1 m
has a normalization factor of 0.2, meaning that earthquakes are 5 times as frequent on
this fault as on the simulated fault, so the time window required for the rates to
stabilize is 0.2 times that of the simulated fault. For a fault with R=1 mm year-1 and
D‾=2.5 m, NF =2.5; the mean recurrence interval μ is then 2.5 times
as long as in these simulations, and the timescales for rate stabilization
will be lengthened by that factor.
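In code, the normalization reduces to a one-liner (an illustrative helper of my own; note that with D‾ in meters and R in mm year-1, NF comes out in units of thousands of years, matching the simulated μ of 1 kyr):

```python
def normalization_factor(mean_slip_m, slip_rate_mm_yr):
    """NF = D_bar / R. With D_bar in m and R in mm/yr, NF equals the
    mean recurrence interval in kyr, i.e. the factor by which the
    simulated fault's timescales should be multiplied."""
    return mean_slip_m / slip_rate_mm_yr

# 5 mm/yr fault, 1 m mean slip: earthquakes are 5x as frequent, so
# stabilization timescales shrink by 5x.
print(normalization_factor(1.0, 5.0))  # 0.2
# 1 mm/yr fault, 2.5 m mean slip: timescales stretch by 2.5x.
print(normalization_factor(2.5, 1.0))  # 2.5
```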
This normalization will obviously be more accurate if D‾ and R are
independently (and accurately) known or can be obtained from other information.
D‾ may be estimated paleoseismologically or through the application of
scaling relationships between fault length and offset. The accuracy of R is discussed
below, but suffice it to say for now that for more than ∼10 earthquakes, R^
should be acceptable.
Discussion
Interpreting measured rates
The most pragmatic motivation for this study is to understand how much epistemic
uncertainty in a slip rate measurement results from the aleatoric (or natural)
variability in earthquake recurrence. However, the previous results have focused on
describing the natural variability, and how much a measured rate may deviate from the
“true” secular rate, i.e., R^/R. In these methods and results, there is no
epistemic uncertainty because all quantities are known perfectly. Of course, in a real
slip rate study, the measured value is known, but the true value is not. The epistemic
uncertainty then is present, and can be quantified here by evaluating the true rate R
relative to the measured rates R^, so that the distribution of R/R^ at a
given t represents the epistemic uncertainty distribution about the measured value.
The epistemic uncertainty relative to the measured rate is shown in Fig.
for all distributions for the first 40 000 years (or 40 earthquakes), represented by the
5th, 25th, median, 75th, and 95th percentiles, and numerical results are given in
Table . Several things are clear in these plots.
First, the variance in the distributions is quite large for the first several
thousand years (or several mean earthquake cycles), but becomes much more compact after
∼15 mean earthquake cycles, as with the slip rate estimates in
Fig. . The right tails (or upper bounds in Fig. )
in fact are infinite (or undefined) for the first few earthquake cycles, because in some
fraction of the simulations R^ is zero.
Epistemic uncertainty table showing the percentiles for the
slip rate variability (in mm year-1) at each time t (years). The
long-term mean slip rate is 1 mm year-1.
Distribution          t (years)    5 %     25 %    50 %    75 %    95 %
Lognormal (CV=0.5)    2531         0.51    0.79    1.13    1.77    4.37
                      4843         0.61    0.84    1.08    1.46    2.44
                      10 323       0.70    0.89    1.04    1.27    1.84
                      42 103       0.83    0.96    1.04    1.12    1.27
Lognormal (CV=1)      2531         0.42    0.71    1.14    2.25    ∞
                      4843         0.51    0.75    1.07    1.67    5.62
                      10 323       0.60    0.82    1.04    1.38    2.55
                      42 103       0.72    0.89    1.03    1.18    1.55
Lognormal (CV=2)      2531         0.35    0.68    1.35    ∞       ∞
                      4843         0.43    0.70    1.16    2.86    ∞
                      10 323       0.51    0.75    1.07    1.66    ∞
                      42 103       0.70    0.83    0.98    1.28    4.75
Exponential (CV=1)    2531         0.40    0.70    1.17    2.39    ∞
                      4843         0.49    0.75    1.08    1.67    4.79
                      10 323       0.61    0.80    1.03    1.37    2.32
                      42 103       0.76    0.89    1.00    1.15    1.43
Second, the distributions are asymmetrical, especially the 5–95 %
interval. The 95th percentile is generally several times as far from the
measured value as the 5th percentile, meaning that the true value of the slip
rate may be much greater than the measured rate but not a commensurately
small fraction of the measured rate.
Third, the median of the epistemic distribution before convergence at ∼5μ is greater than the measured rate,
meaning that in most cases very short-term slip rate measurements will underestimate the
true slip rate; this is a systematic bias. This is a particular concern for
paleoseismological slip rate estimates, where there are rarely more than five events in
any given trench, though this bias will be decreased if slip rate measurements are made
from closed intervals only. However, the median is less than 2 times R^ following
just two to three events, so the systematic bias is unlikely to be much greater than the
measurement uncertainties in the age or offset of the events.
Evaluating slip rate changes
It is of both theoretical and practical interest to be able to evaluate
whether fault slip rates may have changed over some time period, or
between multiple sets of measurements. From a theoretical perspective,
understanding under what conditions fault slip rates change can lead to
much insight into fault processes such as growth
and interaction. Practically,
if an older (or longer-term) slip rate is quite different from the
contemporary rate, then its inclusion in a seismic hazard model may lead
to inaccurate hazard estimation.
First, a necessary definition is given: a slip rate change in this discussion means a
real change in R, not merely a change in the estimate R^; that is, a change in
the distribution of earthquake recurrence and/or displacement parameters with time (in
statistical terminology, the recurrence and displacement distributions are then
non-stationary). This sort of change may be associated with secular changes in
fault loading, stemming from changing stress or strain boundary conditions.
Discerning a real slip rate change, rather than a change in R^ due to natural
variability, requires consideration of the lengths of time over which the different slip
rate measurements were made and the associated uncertainty. If two estimates
R1^ and R2^ and their empirical distributions
(reflecting the number of earthquakes as well as the CVs of the underlying earthquake
recurrence distributions) are known, then the null hypothesis that the two slip rates are
drawn from the same stationary distribution can be tested with a Kolmogorov–Smirnov
test.
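If such empirical distributions were available, the comparison itself is straightforward. The sketch below is illustrative only: the slip-rate samples are synthetic, and the KS statistic is computed directly from the empirical CDFs with a textbook large-sample critical value (in practice, a library routine such as scipy.stats.ks_2samp performs the same comparison with an exact p-value):

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(7)

# Hypothetical sliding-window slip-rate samples from two epochs; under
# the null hypothesis both come from the same stationary process.
old_rates = rng.lognormal(mean=0.0, sigma=0.3, size=400)          # ~1 mm/yr
new_rates = rng.lognormal(mean=0.0, sigma=0.3, size=400)          # same process
changed   = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=400)  # rate doubled

# Approximate 5% critical value for two equal samples of size n.
n = 400
d_crit = 1.358 * np.sqrt(2.0 / n)
print("same process rejected:", ks_statistic(old_rates, new_rates) > d_crit)
print("doubled rate rejected:", ks_statistic(old_rates, changed) > d_crit)
```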
However, it is unlikely that the values for the recurrence distributions and the number
of earthquakes that have transpired are sufficiently known to make the calculation,
unless the fault has received in-depth paleoseismic and neotectonic study. As formal
hypothesis testing may not be possible given typical slip rate datasets, an informal way
of gauging the likelihood of a slip rate change is to crudely estimate (“guesstimate”)
the number of possible earthquakes and the recurrence distribution, and then use the
closest values in Table with propagated measurement uncertainty to
evaluate the amount of overlap between the two slip rate estimates. If the overlap
between the distributions is a small fraction of the total range of the distributions,
then it is likely that a real slip rate change occurred. This is clearly not appropriate
for a real hypothesis test, but it may aid researchers in developing ideas or intuition
about the behavior of a given fault.