Actors, actions, and uncertainties: optimizing decision-making based on 3-D structural geological models

Uncertainties are common in geological models and have a considerable impact on model interpretations and subsequent decision-making. This is of particular significance for high-risk, high-reward sectors. Recent advances allow us to view geological modeling as a statistical problem that we can address with probabilistic methods. Using stochastic simulations and Bayesian inference, uncertainties can be quantified and reduced by incorporating additional geological information. In this work, we propose custom loss functions as a decision-making tool that builds upon such probabilistic approaches. As an example, we devise a case in which the decision problem is one of estimating the uncertain economic value of a potential fluid reservoir. For subsequent true value estimation, we design a case-specific loss function to reflect not only the decision-making environment but also the preferences of differently risk-inclined decision makers. Based on this function, optimizing for expected loss returns an actor's best estimate on which to base decision-making, given a probability distribution for the uncertain parameter of interest. We apply the customized loss function in the context of a case study featuring a synthetic 3-D structural geological model. A set of probability distributions for the maximum trap volume as the parameter of interest is generated via stochastic simulations. These represent different information scenarios to test the loss function approach for decision-making. Our results show that the optimizing estimators shift according to the characteristics of the underlying distribution. While high overall variation leads to a separation of decisions, risk-averse and risk-friendly decisions converge in the decision space, and decrease in expected loss, given narrower distributions. We thus consider the degree of decision convergence to be a measure for the state of knowledge and its inherent uncertainty at the moment of decision-making.
This decision uncertainty does not change in step with model uncertainty but depends on alterations of critical parameters and their respective interdependencies, in particular those relating to seal reliability. Additionally, actors are affected differently by adding new information to the model, depending on their risk affinity. It is therefore important to identify the model parameters that are most influential for the final decision in order to optimize the decision-making process.

To illustrate our approach, we apply it to a simple synthetic 3-D geological model that comprises deformed geological units and a normal fault which together form a potential structural hydrocarbon trap. Assuming a petroleum exploration and production case, we define the maximum trap volume as our value of interest. Decision makers would want to estimate this volume as accurately as possible to derive recoverable reserves and economic value, and to subsequently allocate development resources accordingly.

Computational implementation
Computationally and numerically, we implement all our methods in a Python programming environment, relying in particular on the combination of two crucial open-source libraries: (1) GemPy (version 1.0) for implicit geological modeling and (2) PyMC (version 2.3.6) for conducting probabilistic simulations. GemPy is able to generate and visualize complex 3-D structural geological models based on a potential-field interpolation method originally introduced by Lajaunie et al. (1997) and further elaborated by Calcagno et al. (2008). GemPy was specifically developed to enable the embedding of geological modeling in probabilistic machine-learning frameworks, in particular in combination with PyMC. For stochastic simulations, we use the adaptive Metropolis sampling algorithm introduced by Haario et al. (2001) and check for MCMC convergence via a time-series method approach by Geweke et al. (1991).
Components of a statistical model are represented by deterministic functions and stochastic variables in PyMC (Salvatier et al., 2016). We can thus use the latter to represent uncertain model input parameters and link them to additional data via likelihood functions. Other parameters, such as the value of interest for decision making, can be determined via deterministic functions, as children of parent input parameters.
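This parent-child structure can be illustrated with a minimal plain-NumPy Monte Carlo sketch (hypothetical names and values; the actual implementation uses PyMC's stochastic and deterministic variables):

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

# Stochastic "parent": uncertain depth of a layer interface
# (hypothetical values, in metres)
interface_z = rng.normal(loc=-1000.0, scale=50.0, size=n_draws)

# Deterministic "child": a derived value of interest, here a toy hydrocarbon
# column height above a fixed spill point at -1100 m (purely illustrative)
spill_z = -1100.0
column_height = np.clip(interface_z - spill_z, 0.0, None)

# The input uncertainty has propagated to the child parameter
print(column_height.mean(), column_height.std())
```

Sampling the parent and evaluating the deterministic child for every draw propagates the input uncertainty to the derived parameter, which is the same mechanism PyMC applies to the trap volume.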
To visually compare the states of geological unit probabilities after conducting stochastic simulations, we consider the normalized frequency of lithologies in every single voxel and visualize the results in probability fields (see Wellmann and Regenauer-Lieb (2012)). We hereby not only implement uncertainties regarding layer surface positions in depth, but also layer thicknesses, geometrical shapes and the degree of fault offset.

Figure 2. Illustration of the process of trap recognition in 2-D, i.e. the conditions that have to be met by a model voxel to be accepted as belonging to a valid trap. A voxel has to be labeled as part of the target reservoir formation (a) and positioned in the footwall (b). Trap closure is defined by the seal shape and the normal fault (c). Consequently, the maximum trap fill is defined by either the anticlinal spill point (S) or a point of leakage across the fault, depending on juxtapositions with layers underlying (L1) or overlying the seal (L2). The latter is only relevant if the critical Shale Smear Factor is exceeded, as determined over D and T in (d). In this example, assuming sealing of the fault due to clay smearing, the fill horizon is determined by the spill point in (d). Subsequently, only trap section 1 is isolated from the model borders in (d) and can thus be considered a closed trap. Voxels included in this section are counted to calculate the maximum trap volume.
Such probability distributions can also be allocated as homogeneous sets to point and feature groups which are to share a common degree of uncertainty (see Table 1). We assign the same base uncertainty to groups of points belonging to the same layer bottom surface by referring them to one shared distribution each. Assuming an increase of uncertainty with depth, standard deviations for the shared distributions are increased for deeper formations. Furthermore, uncertainty regarding the magnitude of fault offset is incorporated by adding a skew normal probability distribution that is shared by all layer interface points in the hanging wall. A left-skewed normal distribution is chosen to reflect the nature of throw on a normal fault, in particular the slip motion of the hanging-wall block: the distribution returns mainly negative values. This way, the offset nature of the normal fault is maintained and inversion to a reverse fault is avoided.
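The effect of such a left-skewed offset distribution can be sketched with SciPy (hypothetical shape and scale values, not the exact parameters of our model):

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(0)

# Hypothetical left-skewed offset distribution: a negative shape parameter
# pulls the probability mass towards negative values (hanging wall moving down)
a, loc, scale = -5.0, 0.0, 40.0  # shape, location (m), scale (m)
offsets = skewnorm.rvs(a, loc=loc, scale=scale, size=10_000, random_state=rng)

frac_negative = np.mean(offsets < 0)
print(f"{frac_negative:.2%} of sampled offsets are negative")
```

Because the vast majority of sampled offsets are negative, the fault remains a normal fault in almost all realizations while the throw magnitude stays uncertain.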

The value of interest for decision making
We define the trap volume V_t as the central feature of economic interest. For conducting straightforward volumetric calculations, we assume that found closed traps are always filled to spill, i.e. we only consider structural features as controlling mechanisms. This value is of central importance for calculating original oil or gas in place (OOIP/OGIP) and, consequently, recoverable reserves. This type of estimation is also the only approach to assess the amount of hydrocarbons in a reservoir before production has started (Dean, 2007; Morton-Thompson et al., 1993).
By declaring these connections, we have given our model an economic significance. We can assume that the hydrocarbon trap volume is directly linked to project development decisions, i.e. investment and allocation of resources are represented by bidding on a volume estimate.
In the course of this work, we developed a set of algorithms to enable the automatic recognition and calculation of trap volumes in geological models computed by GemPy. The volume is determined on a voxel-counted basis via four conditions illustrated in Fig. 2 and further explained in Appendix A.
Following these conditions, we can define four major mechanisms which control the maximum trap volume: (1) the anticlinal spill point of the seal cap, (2) the cross-fault leak point at a juxtaposition of the reservoir formation with itself, (3) leakage due to juxtaposition with overlying layers and SSF failure, and (4) stratigraphical breach of the seal, when its voxels are not continuously connected above the trap. Due to the nature of our model, (3) and (4) will always result in complete trap failure.
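The voxel-based volume calculation itself reduces to counting the voxels that satisfy all conditions at once. A strongly simplified sketch (hypothetical random masks standing in for conditions derived from a GemPy model):

```python
import numpy as np

# Hypothetical 3-D masks on a model grid; in our workflow these would be
# derived from the computed geological model, not set by hand.
rng = np.random.default_rng(1)
shape = (50, 50, 50)
is_reservoir = rng.random(shape) < 0.2   # voxel belongs to the reservoir formation
in_footwall = np.zeros(shape, bool)
in_footwall[:25] = True                  # crude footwall half-space
below_fill = rng.random(shape) < 0.5     # voxel lies below the fill horizon

# A voxel counts towards the trap only if all conditions are met at once
trap_mask = is_reservoir & in_footwall & below_fill
voxel_volume = 20.0 ** 3                 # m^3, for a hypothetical 20 m voxel size
max_trap_volume = trap_mask.sum() * voxel_volume
print(max_trap_volume)
```

The real algorithms additionally check connectivity (e.g. isolation from the model borders), but the final volume is obtained in this same way: a boolean conjunction of conditions followed by a voxel count.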
The trap volume V_t results from GemPy's implicit geological model computation. It is an output parameter dependent on deterministic and stochastic input parameters. With every model realization, input uncertainties propagate to the volume, which is in turn uncertain. We can thereby evaluate it using probabilistic methods, and Bayesian decision theory in particular, as explained in the following.
Bayesian decision theory

We view the statistical analysis of our model from a Bayesian perspective, which is most importantly characterized by its preservation of uncertainty. Its principles have been presented and discussed extensively in the literature (see Jaynes (2003), Box and Tiao (2011), Harney (2013), Gelman et al. (2014) and Davidson-Pilon (2015)). The Bayesian approach is widely seen as intuitive and inherent in the natural human perspective. It regards probability as a measure of belief about a true state of nature.
Such beliefs can be assigned to individuals. Thus, different and even contradicting beliefs about a true state of nature might be held by different individuals, based on variations and disparities in the information available to each individual.
In this work, the decision problem is one of estimating the true state of our value of interest, which we denote θ. Estimations are based on probability distributions attained from: (1) simple Monte Carlo error propagation, and (2) Bayesian inference using Markov chain Monte Carlo simulation. For (1), we are dealing with a prior probability distribution p(θ) that results from the deterministic function of the uncertain model input parameters. For (2), the prior distribution is revalued using Bayesian inference (see Appendix B) given the presence of additional statistical information y, and using likelihood functions p(y|θ).
Decision making is then based on the resulting posterior probability p(θ|y). In general, Bayesian inference is about updating a belief and reaching an estimate that is less wrong.

Likelihoods
For the application of Bayesian inference, we implement two types of likelihoods:

1. Layer thickness likelihoods: With every model realization, we extract the z-distance between layer boundary input points at a central x-y position (x = 1100 m, y = 1000 m) in our input interpolation data. Resulting thicknesses can then be passed on to stochastic functions in which we define thickness likelihoods via normal distributions.
2. SSF likelihood: SSF values are realized over more complex parameter compositions. We base this likelihood on a normal distribution which we link to the geological model output.

The inclusion of these likelihoods is based on purely hypothetical assumptions and is intended to provide the opportunity to explore the effects that different types and scenarios of additional information might have. While the thickness likelihood functions depend on input parameters directly, the implementation of the SSF likelihood function requires a full computation of the model and extended algorithms of structural analysis.
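The basic effect of such a likelihood can be sketched with a simple importance-reweighting of prior samples (plain NumPy/SciPy with hypothetical values; the actual implementation uses PyMC's MCMC machinery):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Prior belief about a layer thickness (hypothetical): broad, centred at 120 m
prior_thickness = rng.normal(120.0, 40.0, size=20_000)

# Additional information enters via a normal likelihood centred at 180 m
weights = norm.pdf(prior_thickness, loc=180.0, scale=15.0)
weights /= weights.sum()

# Posterior approximated by resampling prior draws according to their weights
posterior_thickness = rng.choice(prior_thickness, size=20_000, p=weights)

print(prior_thickness.mean(), posterior_thickness.mean())
```

The posterior mean shifts towards the likelihood mean and the spread narrows, which is the updating behavior exploited in the thickness scenarios below.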

Loss, expected loss and loss functions
Common point estimates, such as the mean and the median of a distribution, usually come with a measure of their accuracy (Berger, 2013). However, Davidson-Pilon (2015) has argued that pure accuracy metrics, while objective, ignore the original intention of conducting the statistical inference in cases in which the payoffs of decisions are valued more than their accuracies. A more appropriate approach can be seen in the use of loss functions (Davidson-Pilon, 2015). Loss is a statistical measure of how "bad" an estimate is. Estimate-based decisions are also referred to as actions a. Loss is defined as L(θ, a), so L(θ_1, a_1) is the actual loss incurred when action a_1 is taken while the true state of nature is θ_1 (Berger, 2013). The magnitude of loss incurred by an estimate is defined by a loss function L(θ, θ̂), a function of the true parameter value θ and the estimate θ̂ (Wald, 1950; Davidson-Pilon, 2015). How "bad" a given estimate is thus depends on the way the loss function weights accuracy errors and returns respective losses.
Two standard loss functions are the absolute-error and the squared-error loss function. Both are objective, symmetric, simple to understand and commonly used. Davidson-Pilon (2015) and Hennig and Kutlukaya (2007) have proposed that it might be useful to move on from standard objective loss functions to the design of customized loss functions that specifically reflect an individual's (i.e. the decision maker's) objectives, preferences and outcomes. Hennig and Kutlukaya (2007) argue that choosing and designing a loss function involves the translation of informal aims and interests into mathematical terms. This process naturally implies the integration of subjective decisions and subjective elements. According to them, this is not necessarily unfavorable or less objective, as it may better reflect an expert's perspective on the situation. Standard symmetric loss functions can easily be adapted to be asymmetric, for example by weighting errors on the negative side more strongly than those on the positive side. A preference for estimates larger than the true value (i.e. overestimation) is thus incorporated in an uncomplicated way. Much more complicated designs of loss functions are possible, depending on purpose, objective and application.
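The effect of such an asymmetric adaptation can be demonstrated in a few lines (a NumPy sketch with a hypothetical weighting; not the loss function developed later in this work):

```python
import numpy as np

rng = np.random.default_rng(5)
samples = rng.normal(0.0, 1.0, size=50_000)  # hypothetical symmetric belief

def asym_abs_loss(theta, estimate, w_over=2.0):
    # Asymmetric absolute-error loss: overestimation (estimate > theta)
    # is weighted w_over times stronger than underestimation
    err = estimate - theta
    return np.where(err > 0, w_over * err, -err)

# Minimize the average loss over a grid of candidate estimates
candidates = np.linspace(-2.0, 2.0, 401)
action = candidates[np.argmin([asym_abs_loss(samples, c).mean() for c in candidates])]
print(action)  # shifted below zero: overestimation is penalized more
```

Even though the underlying distribution is symmetric around zero, the optimal estimate moves to the negative side, illustrating how an asymmetric loss encodes a preference against overestimation.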
The presence of uncertainty during decision making implies that the true parameter value is unknown, and thus the truly incurred loss of a chosen estimate θ̂ is unknown as well. We therefore consider the expected loss l(θ̂) = E_θ[L(θ, θ̂)]. The expectation symbol E is subscripted with θ, by which it is indicated that θ is the respective unknown variable. This expected loss l is also referred to as the Bayes risk of estimate θ̂ (Berger, 2013; Davidson-Pilon, 2015).
By the Law of Large Numbers, the expected loss of θ̂ can be approximated by drawing a large sample of size N from the posterior distribution, applying the loss function L to each sample θ_i and averaging over the number of samples (Davidson-Pilon, 2015): l(θ̂) ≈ (1/N) Σ_i L(θ_i, θ̂). Minimization of this expected loss returns a point estimate known as the Bayes action or Bayesian estimator, which is the decision with the least expected loss according to the loss function, and the decision we are interested in in this work (Berger, 2013; Moyé, 2006).
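The sample-based approximation and its minimization can be sketched directly (plain NumPy, with hypothetical posterior samples; for the standard losses, the Bayes action recovers the posterior mean and median, respectively):

```python
import numpy as np

rng = np.random.default_rng(7)
posterior = rng.normal(100.0, 25.0, size=20_000)  # hypothetical posterior samples

def expected_loss(estimate, samples, loss):
    # Law-of-large-numbers approximation: average the loss over posterior samples
    return loss(samples, estimate).mean()

def bayes_action(samples, loss, candidates):
    # Bayes action: the candidate estimate with the least expected loss
    losses = [expected_loss(c, samples, loss) for c in candidates]
    return candidates[int(np.argmin(losses))]

candidates = np.linspace(0.0, 200.0, 801)
squared = lambda theta, est: (theta - est) ** 2
absolute = lambda theta, est: np.abs(theta - est)

# Squared-error loss is minimized near the posterior mean,
# absolute-error loss near the posterior median
a_squared = bayes_action(posterior, squared, candidates)
a_absolute = bayes_action(posterior, absolute, candidates)
print(a_squared, a_absolute)
```

A grid search over candidate estimates is sufficient here because the decision space is one-dimensional; any other minimizer would do.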

Customization of our case-specific loss function
Assigning an economic notion to our model and assuming the case of an actor or decision maker in any field naturally necessitates the consideration of preferences, interests and the overall subjective perspective such an individual or, for example, a company might have. Further constraints and conditions can also be specific to the field, industry or generally to the problem at hand. Consequently, the design of a more specific, non-standard and possibly asymmetric loss function might be required.
One that includes subjective aspects and differences in the weighting of particular risks, arising from an actor's inherent preferences and the environment in which the actor has to make a decision. In the face of several uncertain parameters, a perfectly true estimate is virtually unattainable. However, an attempt can be made to design a custom loss function that returns a Bayes action appropriate to the decision environment. For our example case of trap volume estimation, we develop a custom loss function in five steps. Ideally, an actor would like to know the exact trap volume, so that resources can be allocated appropriately in order to acquire economic gains. This conscious and irrevocable allocation is the decision to be made or action to be taken (Bratvold and Begg, 2010). Thus, we treat estimating as equivalent to making a decision. Deviations from the unknown true value in the form of over- and underestimation bring about an error and loss accordingly. In steps I-IV we make assumptions about the significance of such deviations and how they differently contribute to risks in the general decision-making environment.
It can be assumed that several actors in one such environment or sector may have the same general loss function but different affinities concerning the risks. This might be based, for example, on different psychological factors or economic philosophies followed by companies. It might also be based on the budgets and options such actors have available. An intuitive example is the comparison of a small and a large company. A certain false estimate or error might have a significantly stronger impact on a company which has a generally lower market share and only few projects than on a larger company which might possess a higher financial flexibility and for which one project is only one of many development options in a portfolio. We therefore introduce the concept of varying risk affinities in the final step V.

Figure 3. Realizations of the loss function for risk affinities ranging from risk-friendly (r = 0.5 and r = 0.75), over risk-neutral (r = 1), to risk-averse (r = 1.25 and r = 1.5). Dots mark the respective positions of minimizing actions. We applied the function to samples from a normal distribution (µ = 0, σ = 500) that represents the probability of a hypothetical score which is to be estimated. Jumps at zero are caused by the implementation of fatal over- and underestimation.

Taking a closer look at the realizations for r = 1.5 and r = 0.5, we can recognize how these actors expect different losses and come to different optimal decisions given the same information. As, in this case, positive and negative score values are equally likely and overestimation errors are weighted more strongly, only the most risk-friendly actor will bid on a positive estimate. Also, expected losses are lower for more risk-friendly decision makers.

- Step I - Choosing a standard loss function as starting point: In our case, we assume that investments increase linearly and therefore choose the absolute-error loss function, L(θ, θ̂) = |θ - θ̂|, as a basis for further customization steps.

- Step II - Simple overestimation: Considering the development of a hydrocarbon reservoir, it can be assumed that over-investing is worse than under-investing. Overestimating the size of an accumulation might, for example, lead to the installation of equipment or facilities that are actually redundant or unnecessary. This would come with additional unrecoverable expenditures. Consequences of underestimating (0 < θ̂ < θ), however, may presumably be easier to resolve. Additional equipment can often be installed later on. Hence, simple overestimation (0 < θ < θ̂) is weighted more strongly in this loss function by multiplying the error with an overestimation factor a: L(θ, θ̂) = a|θ - θ̂| for 0 < θ < θ̂.

- Step III - Fatal overestimation: The worst case for any project would be that its development is set into motion, expecting a gain, only to discover later that the value in the reservoir does not cover the costs of realizing the project, resulting in an overall loss. A petroleum system might also turn out to be a complete failure, containing no value at all, although the actor's estimate indicated the opposite. Here, we refer to this as fatal overestimation: a positive value is estimated, but the true value is zero or negative (θ ≤ 0 < θ̂). This is worse than simple overestimation, where both values are positive and a net gain is still achieved, which is only smaller than the best possible gain of estimating the true value.

Fatal overestimation is included in the loss function by using another weighting factor b that replaces a: L(θ, θ̂) = b|θ - θ̂| for θ ≤ 0 < θ̂. In other words: with b = 2, fatal overestimation is twice as bad as simple underestimation.

- Step IV - Fatal underestimation: We also derive fatal underestimation from the idea of estimating zero (or a negative value) when the true value is actually positive (θ̂ ≤ 0 < θ). This is assumed to be worse than simple overestimation, but clearly better than fatal overestimation. No already owned resources are wasted; it is only the potential value that is lost, i.e. opportunity costs that arise from completely discarding a profitable project. Fatal underestimation is weighted using a third factor c: L(θ, θ̂) = c|θ - θ̂| for θ̂ ≤ 0 < θ.

- Step V - Risk affinity: A riskier estimate, i.e. one that bids correctly on a higher value, will also return a greater gain. It is assumed that risk-friendly actors care less about fatal underestimation, i.e. they will rather develop a project than discard it. In our finalized loss function, we simply include these considerations via a risk affinity factor r which re-weights the incurred losses accordingly, scaling the overestimation terms with r and the underestimation terms with (2 - r) (Eq. 8). It is important to note that the weighting factors a, b and c can take basically any numerical values but should be chosen in a way that they appropriately represent the framework conditions of the problem. Here, we assume that simple overestimation is 25 % (a = 1.25), fatal overestimation 100 % (b = 2) and fatal underestimation 50 % (c = 1.5) worse than simple underestimation.
An example plot of actual incurred losses via this loss function can be found in Appendix C. According to Eq. 8, the risk-neutral loss function is returned for r = 1, as no re-weighting takes place. For r < 1, the weight on overestimation (a, b) is reduced, while it is increased for fatal underestimation (c) as well as for normal underestimation. This represents a risk-friendlier actor that is willing to bid on a higher estimate to attain a greater gain. For r > 1, the overestimation weights (a, b) are increased in the loss function, the underestimation and fatal underestimation (c) weights are decreased, and respectively more risk-averse actors are prompted to bid on lower estimates. Since risk neutrality is expressed by r = 1, we consider values 0 < r < 2 to be the most appropriate choices to represent both sides of risk affinity equally. Accordingly, different loss function realizations are plotted in Fig. 3.
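One plausible formalization of this weighting scheme can be sketched as follows (a hypothetical reading of the steps above, not a verbatim transcription of Eq. 8; the r-scaling of the under- and overestimation branches is an assumption):

```python
import numpy as np

def custom_loss(theta, estimate, a=1.25, b=2.0, c=1.5, r=1.0):
    # Hypothetical case-specific loss: a = simple overestimation,
    # b = fatal overestimation, c = fatal underestimation weight;
    # r = risk affinity (r < 1 risk-friendly, r > 1 risk-averse), 0 < r < 2.
    theta = np.asarray(theta, dtype=float)
    err = np.abs(theta - estimate)
    loss = np.empty_like(theta)

    over = estimate > theta
    fatal_over = over & (theta <= 0) & (estimate > 0)
    simple_over = over & ~fatal_over
    under = ~over
    fatal_under = under & (estimate <= 0) & (theta > 0)
    simple_under = under & ~fatal_under

    # Overestimation branches scale with r, underestimation with (2 - r)
    loss[simple_over] = r * a * err[simple_over]
    loss[fatal_over] = r * b * err[fatal_over]
    loss[simple_under] = (2 - r) * err[simple_under]
    loss[fatal_under] = (2 - r) * c * err[fatal_under]
    return loss

# Demo on a hypothetical bimodal outcome: 30 % complete failure (V_t = 0),
# 70 % positive mode around 1000 (arbitrary units)
rng = np.random.default_rng(11)
samples = np.concatenate([np.zeros(3000), rng.normal(1000.0, 100.0, 7000)])
candidates = np.linspace(0.0, 1500.0, 301)
actions = {r: candidates[np.argmin([custom_loss(samples, cand, r=r).mean()
                                    for cand in candidates])]
           for r in (0.5, 1.0, 1.5)}
print(actions)  # risk-averse (r = 1.5) bids far lower than risk-friendly (r = 0.5)
```

With the same information, the risk-averse actor is pushed towards a near-zero bid by the fatal-overestimation penalty, while the risk-friendly actor bids inside the positive mode, reproducing the qualitative behavior described in the text.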

It has to be emphasized that this is just one possible proposal for loss function customization. There is no single perfect design for such a case (Hennig and Kutlukaya, 2007). Slight to strong changes can already be implemented by simply varying the values of the weighting factors a, b and c. Fundamentally different loss functions can also be based on a significantly different mathematical structure. Loss functions are customized regarding the problem environment and according to the subjective needs and objectives of the decision maker (Davidson-Pilon, 2015; Hennig and Kutlukaya, 2007). Thus, they are mostly defined by the actor expressing his or her perspective. Changes in the individual's perception and attitude might lead to further customization needs at a future point in time, as reported by Hennig and Kutlukaya (2007).

Results
We applied our custom loss function to various volume probability distributions resulting from stochastic simulations. First, reference results were created using only priors and simple Monte Carlo error propagation (10,000 sampling iterations, Scenario 1). Then we devised several scenarios of additional information and included these via likelihoods and Bayesian inference. For this, 10,000 MCMC sampling steps were conducted, with an additional burn-in phase of 1000 iterations. The prior parameter uncertainties were chosen to be identical for all simulations (see Table 1). Results of convergence diagnostics can be found in Appendix E. The implemented likelihoods are listed in Table 2.
For the comparison of results, we consider in particular the following measures: (1) probability field visualization, (2) occurrence of trap control mechanisms, (3) resulting trap volume distributions, and (4) consequent realization of expected losses and related decisions. Maximum trap volumes were calculated for each model iteration and plotted as a probability distribution in Fig. 4. In general, a wide range of volumes is possible, from zero to more than 3 million m^3. However, we can recognize a bimodal tendency: low but positive volumes are less probable than significantly high volumes or complete failure (V_t = 0). Consequently, applying our custom loss function to this distribution resulted in widely separated minimizing estimators for the differently risk-affine actors (see Fig. 4). Only the risk-friendliest estimates are found within the described highly positive mode of the distribution. Risk-averse individuals bid on significantly lower estimates or even zero. The risk-neutral decision is found between both modes and presents the highest expected loss. Expected losses decrease towards the extreme decisions and closer to the modes.

Introducing thickness likelihoods

We considered two scenarios of thickness likelihoods: the seal being (Scenario 2a) likely very thick, or (Scenario 2b) likely very thin (see Table 2).
Solid Earth Discuss.

In Scenario 2a, probability visualization illustrates that the presence of a thick seal is very probable (see Fig. D2). For Scenario 2b, the presence of a reliable seal is questionable.
A high likelihood of a reliable seal cap (2a) significantly reduced the probability of trap failure, while enhancing the mode of highly positive outcomes (see Fig. 4). This coincides with the predominance of the anticlinal spill point (63 %) and the leak point to the same reservoir (36 %) as control mechanisms. The occurrence of other mechanisms was negligible (see Table D2). Inversely, a likely thin seal (2b) virtually eliminated the positive mode and focused almost the whole distribution on complete failure. Accordingly, seal-breach-related control mechanisms gained importance (65.5 % occurrence rate for stratigraphical seal breach).
In both scenarios, Bayes actions shifted towards the respectively emphasized modes. This came with an overall convergence of decisions and a reduction of expected losses. In Scenario 2a, all decision makers bid on a positive outcome. Risk-averse individuals experienced the strongest shift, but also present the highest expected losses. In Scenario 2b, all individuals decide not to allocate resources. Even the risk-friendliest actor moved to a zero estimate, where the most risk-averse bid had already been placed in the prior Scenario 1. However, although all decisions coincide, expected losses increase from risk-averse to risk-friendly (see Table D1).

We also tested scenarios for a likelihood of a thick reservoir formation alone (Scenario 3a) and in combination with the likelihood of a thick seal (Scenario 3b; see Table 2). The overall effect of using these reservoir-based likelihoods turned out to be minor compared to the seal-related scenarios. In Scenario 3a, failure probabilities slightly increased, resulting in a decision shift towards lower values (see Fig. D1).
Results for Scenario 3b are very similar to those of 2a, as can also be seen in Table D1. There was no significant reduction of expected losses or shift in decisions from adding the likelihood of a thick reservoir to the likelihood of a thick seal.

Introducing SSF likelihoods
We considered two SSF-related likelihood scenarios. In Scenario 4a, we implemented solely an SSF likelihood that was based on a narrow normal distribution (µ = 5.1, σ = 0.3) with a mean near the critical value SSF_c = 5. In Scenario 4b, we combined the likelihood of a thick seal (2a) with a likely moderate but reliable SSF value (an SSF normal distribution with µ = 2 and σ = 0.3). Figure 5 illustrates the posterior situations well.
Scenario 4a resulted in increased bimodality of the posterior distribution (see Fig. 6). Accordingly, the Bayes action divergence and expected losses increased. Only two trap control mechanisms remained relevant for 4a (see Table D2): anticlinal spill (66 %) and cross-fault leakage to overlying formations (34 %).
The results for 4b were comparable to those of 2a, but more pronounced. Entropies, particularly those related to the seal thickness, were reduced further.

Discussion
In this work, we build upon the recent advances presented by de la Varga et al. (2019), which enable us to view geological modeling as a probabilistic statistical problem. We expand on this by proposing custom loss functions as a useful decision-making tool when dealing with uncertain structural geological settings and for measuring the effects of adding new information to a model. This is also aimed at illustrating the significance of the Bayesian perspective with regard to model interpretation in an economic context. As an explanatory example, we chose the hydrocarbon sector. This field is characterized by the necessity to make decisions in the face of high risks and potentially high rewards. These decisions are often closely linked to geological modeling and the estimation of reservoir-related values. By developing case-specific custom loss functions, we intended to show that this estimation approach is suitable and useful to express the nature of the complexities behind decision-making problems, decision environments and the risk-related behavior of actors.

State of knowledge, decision uncertainty and consequent decision making
As we defined the trap volume to be, in essence, a deterministic function of uncertain model input parameters, uncertainties propagate to this parameter of interest when conducting stochastic simulations. We consider the resulting volume probability distributions to be expressions of the respective state of knowledge (or information) on which the decision making is to be based.
As this should include all parameters and conditions relevant for decision making, we furthermore propose that the overall uncertainty inherent in this probability distribution can be referred to as "decision uncertainty" and that this entity should be viewed separately from geological model uncertainty.
By viewing decision making as a problem of optimizing a case-specific custom loss function applied to such a state of knowledge and decision uncertainty, we were able to observe clear differences in the respective behavior of distinctly risk-affine actors. The position and separation of their minimizing estimators, i.e. their decisions, manifested according to the properties of the value distributions. The general spread and the occurrence of modes relative to the overall distribution and the relevant decision space appear to be particularly significant. High spread and bimodal tendencies, i.e. high overall uncertainty, resulted in a wider separation of different actions. Reduction of the distribution to one mode conversely led to their convergence. A decrease in decision uncertainty was furthermore accompanied by a reduction in expected loss for each Bayes estimator.

Considering these observations, we derive that the degree of action convergence and the respective expected losses can be considered measures for the state of knowledge and decision uncertainty at the moment of making a decision. The better the state of knowledge, the more similar the decisions of differently risk-affine actors and the lower their loss expectations. Given perfect information, all actors would bid on the same estimate (the true value) and expect no loss, since no risk would be present. It furthermore follows that the relevance of risk affinity decreases with greater reduction of decision uncertainty.

The impact of additional information on decision making
We used these loss-function-related indicators to assess the significance additional information might have for decision making.
We observed that the impact on decision uncertainty, induced by Bayesian inference, is not simply aligned with the change in uncertainty regarding model parameters, but depends on those parameter combinations which are relevant for the outcome of the value of interest. It seems to be of central importance (1) "where" in the model uncertainty is reduced, i.e. in which spatial area or regarding which model parameters, and (2) which possible outcome is enhanced in terms of probability. An increased probability of a thick or a thin seal in our model equally reduced decision uncertainty significantly, by raising the probability of a positive or a negative outcome, respectively. Improved certainty about our reservoir thickness, however, had a far lesser impact on decision making. This shows that some areas and parameter combinations have a much greater influence on the decision uncertainty than others, depending on the way they contribute to the outcome of the value of interest. Some types of additional information could even lead to increased decision uncertainty. We observed this in Scenario 4a.
The introduced SSF likelihood practically constrained our geological model to two possible situations: (1) a trap which is sealed off from juxtaposing layers and full-to-spill, and (2) complete failure of the trap due to a breached seal across the fault.
This made the decision problem a predominantly binary one and split the outcome distribution into two narrowed but distant modes. The resulting increase in decision divergence and expected losses shows that, in some cases, adding information might leave actors in greater disagreement than before.
However, we furthermore have to consider that actors weight possible outcomes of the value distribution differently. They are consequently affected differently by the same type of information. Risk-friendly actors were the most robust in their decision making in the face of possible trap failure. Eliminating this risk proved to be far less significant for the most risk-friendly than for risk-averse actors. Accordingly, it should be of foremost importance for risk-averse actors to reduce the uncertainty regarding critical factors, such as seal integrity, which might decide between the success and complete failure of a project. This is less relevant for risk-friendly decision makers, who might respectively acquire a comparable benefit from knowing more about the probability of positive outcomes. They are less afraid of failure than of missing out on opportunity. Crucial risks might be easily assessed if they depend on only one or a few parameters, such as seal thickness. In other cases, they are derived from more complex parameter interrelations, as is the case for the Shale Smear Factor. To approach an effective mitigation of high risks, the complexities behind decisive factors need to be assessed thoroughly, and the respective parent parameters, as well as their interdependencies, need to be identified. This might enable a better understanding of which type of information is missing and where in the model additional data might be of use for improved decision making.

Simply acquiring more information of any type does not necessarily lead to better decisions. Instead, improved decision making is achieved by attaining the right kind of information, i.e. information that is able to shed light on uncertainties which are relevant to an individual's own goals and preferences, as well as to the general problem at hand. Bratvold and Begg (2010) stated that value is not generated by uncertainty quantification or reduction in itself, but is created to the extent that these processes have the potential to change a decision. Such decision changes were clearly indicated by the shifting of actions in our different scenarios. According to Hammitt and Shlyakhter (1999), the difference in expected payoff between the prior and posterior optimal decision gives the expected value of information. This raises the question as to what extent a change in expected losses might itself be an indicator for the value of information, and whether there is value in gaining confidence in a decision, even if the decision itself remains unchanged.
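In the spirit of Hammitt and Shlyakhter (1999), the expected value of information can be sketched as the loss avoided by updating a decision from the prior to the posterior optimum, both evaluated under the posterior. The squared-error loss and the two distributions below are purely illustrative, not taken from our case study:

```python
import numpy as np

def expected_loss(estimate, samples):
    # illustrative risk-neutral squared-error loss
    return float(np.mean((estimate - samples) ** 2))

def optimal_decision(samples, grid):
    """Grid-search the estimate minimizing expected loss."""
    losses = [expected_loss(g, samples) for g in grid]
    i = int(np.argmin(losses))
    return float(grid[i]), losses[i]

rng = np.random.default_rng(0)
prior = rng.normal(100.0, 40.0, 10_000)      # value distribution before new data
posterior = rng.normal(120.0, 10.0, 10_000)  # after Bayesian updating

grid = np.linspace(0.0, 250.0, 501)
d_prior, _ = optimal_decision(prior, grid)
d_post, l_post = optimal_decision(posterior, grid)

# Expected value of information: loss avoided by moving from the prior
# optimal decision to the posterior one, under the posterior distribution.
evoi = expected_loss(d_prior, posterior) - l_post
```

If the posterior leaves the optimal decision unchanged, `evoi` is (close to) zero even though the expected loss itself may have dropped, which is exactly the open question raised above.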

The significance of our method for the hydrocarbon sector
While Monte Carlo simulation is by now common in the hydrocarbon sector, it does not make decisions, as Murtha et al. (1997) emphasized; it merely prepares for them. We believe that loss functions have the potential to go one step further. A hypothetical ideal loss function would consider all conditions in an economic environment, perfectly represent the preferences and goals of an actor, and consequently be able to automatically find an optimal decision. While this is obviously unrealistic, we presume that an elaborate loss function might at least provide a very good preliminary decision recommendation. It might furthermore be able to weight risks that are not immediately apparent to an individual. Additionally, the influence of human biases and psychological behavioral challenges, as described by Bratvold and Begg (2010), could be mitigated.
Bayesian inference and MCMC methods have been applied to OOIP estimation and forecasting of reservoir productivity by Wadsley et al. (2005), Ma et al. (2006) and Liu et al. (2010). However, their research focused on history-matching simulations for already producing fields. Our approach of applying Bayesian inference to structural geological modeling and volumetric reservoir calculations is intended to support decision making in the earliest stages of a reservoir project, when it has to be decided whether the project should be developed or not. Nevertheless, Wadsley et al. (2005) showed that early volumetric OOIP estimates can be combined with later calculations from production data via MCMC methods.
Our continuous approach could be integrated into common discrete decision-making frameworks, such as decision trees. In real cases, normally only a limited number of options is given. In the context of hydrocarbon exploration and production, this would relate to fixed magnitudes of resource allocation, such as a certain number of required drilling wells or the size of a production platform. Based on such previously defined actual options, we could discretize our value probability distribution into sections which represent each decision scenario accordingly. Our minimizing estimators would then indicate the best discrete option for a decision maker.

Figure 7. In this work, we applied our loss function approach to estimate a hydrocarbon trap volume. For this, we considered stochastic geomodeling parameters, defined deterministic functions to acquire volume, layer thicknesses and SSF values, and linked the latter two to respective likelihoods. Regarding the bigger picture, this methodology is expandable and could include other parameters and dependencies.
By taking into account other reservoir parameters and recovery factors, we could for example base decision making on recoverable volumes.
We could also take depth information from our model and combine this with other cost parameters to calculate drilling costs. Including additional costs, but also the selling price of hydrocarbons, we could attain the NPV as our final value of interest.
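The discretization of a value distribution into a fixed set of options could look as follows. The option names, volume intervals and the deliberately crude 0-1 loss are all hypothetical, chosen only to show how a discrete Bayes estimator would be selected:

```python
import numpy as np

rng = np.random.default_rng(1)
# Sampled maximum trap volume (hypothetical units and distribution)
volume = rng.lognormal(mean=4.0, sigma=0.6, size=50_000)

# Hypothetical discrete development options, each covering a volume interval
options = {
    "no development": (0.0, 20.0),
    "small platform": (20.0, 80.0),
    "large platform": (80.0, np.inf),
}

def expected_misfit(option, samples):
    """Crude 0-1 loss: 0 if the realized volume falls inside the option's
    interval, 1 otherwise, averaged over all samples."""
    lo, hi = options[option]
    inside = (samples >= lo) & (samples < hi)
    return 1.0 - float(inside.mean())

# The minimizing option plays the role of the discrete Bayes estimator
best_option = min(options, key=lambda o: expected_misfit(o, volume))
```

In a real application, the 0-1 loss would be replaced by option-specific cost and payoff terms, so that over- and under-dimensioning a platform are penalized asymmetrically.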

Limitations and outlook
It has to be emphasized that we used a synthetic geological model not based on real data. Nevertheless, it was designed to include some typical structural characteristics related to hydrocarbon systems. We developed algorithms aimed at considering the most common conditions that define structural traps. However, these conditions needed to be simplified and were implemented on a very conceptual level. Furthermore, the uncertainties employed in the 3-D model related to z-positional values only and were thus of a primarily one-dimensional nature. It follows that no effective uncertainty concerning the overall structural shape was implemented, particularly regarding anticlinal features and the lateral position of the spill point.
We defined risk affinity to be dependent on arbitrarily chosen risk factors, which led to a corresponding reweighting. Davidson-Pilon (2015) used risk parameters determined by the maximal loss each actor could incur. Other approaches could be based on more tangible values, for example by making risk attitude dependent on a fixed budget.

There are still many points that could be expanded on in future research. It would be of interest to apply the same overall concept and methodology to an authentic case based on real datasets. Given a realistic economic scenario, including the capital and operational expenditures of a project, a full net-present-value (NPV) analysis could possibly be conducted, applying a loss function to an NPV distribution (see Fig. 7). A more elaborate loss function could be customized on the basis of surveys, acquiring the specific preferences of one or several companies and thus attaining a better profile of the economic environment.

The Shale Smear Factor (SSF) is the ratio of fault throw magnitude D to displaced shale thickness T (Lindsay et al., 1993; Yielding et al., 1997; Yielding, 2012). We attain both D and T by examining the contact between the seal lithology voxels and the fault surface.

For our model, we define the critical SSF to be SSF_c = 5. We assume that cross-fault sealing is breached when this threshold is surpassed. For simplicity, the fault is otherwise considered to be sealing along its plane.
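A minimal sketch of this SSF check (function and variable names are ours; the threshold SSF_c = 5 is the value defined above):

```python
def shale_smear_factor(throw_d, shale_thickness_t):
    """SSF = D / T: ratio of fault throw magnitude to displaced shale
    thickness (Lindsay et al., 1993)."""
    return throw_d / shale_thickness_t

def seal_breached(throw_d, shale_thickness_t, ssf_critical=5.0):
    """Cross-fault sealing is assumed breached once the critical
    SSF_c (here 5, as in our model) is surpassed."""
    return shale_smear_factor(throw_d, shale_thickness_t) > ssf_critical
```

For instance, 60 m of throw against a 10 m shale layer gives SSF = 6 and thus a breached seal, while 40 m of throw against the same layer does not.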
3. Location inside of a closed system: The voxel is part of a model section inside of the main anticlinal feature. All of the voxels inside this particular section are separated from the borders of the model by voxels that do not meet the first two conditions above, which primarily means that they are encapsulated by seal voxels upwards and laterally. This condition is relevant under the assumption that a connection to the borders of the model leads to leakage. A trap is thus defined as a closed system in this model, and trap closure is assumed to be void outside of the space of information, i.e. the model space. In our example model, this also means that hydrocarbons escape in the hanging wall due to the respective layers dipping upwards towards the model borders.
It has to be emphasized that these conditions have been fitted to our synthetic example model. For other models featuring different geological properties, structures and levels of complexity, these conditions and the respective algorithms might not apply. Models of higher complexity will surely require the introduction of further conditions.

Regarding a surface defined by f(x, y), a local maximum at (x_0, y_0, z_0) would resemble a hill top (Guichard et al., 2013).
Local maxima will be found by looking at the cross-sections in the planes y = y_0 and x = x_0. Furthermore, the respective partial derivatives (i.e. gradients) ∂z/∂x and ∂z/∂y will equal zero at x_0 and y_0, i.e. the extremum is a stationary point (Guichard et al., 2013; Weisstein, 2017). In the context of a geological reservoir system, such a hill can be regarded as a representation of an anticlinal structural trap. Local minima are defined analogously, presenting local minima in both planes at a stationary point (Guichard et al., 2013). A saddle point, however, is a stationary point that is not an extremum (Weisstein, 2017).
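Numerically, these stationary-point types can be distinguished with the second derivative test, using the discriminant D = f_xx f_yy − f_xy² of the second partial derivatives. A minimal sketch:

```python
def classify_stationary_point(fxx, fyy, fxy):
    """Second derivative test at a stationary point (where both partial
    derivatives vanish), using the discriminant D = fxx*fyy - fxy**2."""
    d = fxx * fyy - fxy ** 2
    if d > 0:
        # both cross-sections curve the same way: an extremum
        return "local maximum" if fxx < 0 else "local minimum"
    if d < 0:
        return "saddle point"
    return "inconclusive"  # D == 0: the test fails
```

In reservoir terms, a "local maximum" corresponds to the crest of an anticlinal trap, a "local minimum" to a synclinal geometry.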
In general, saddle points can be distinguished from extrema by applying the second derivative test (Guichard et al., 2013).

In the case of a juxtaposition with seal-overlying formations and a failed SSF check, the maximum contact of the trap with the fault becomes the final spill point. Due to the shape of the trap in our model, we can then expect full leakage and set the maximum trap volume to zero.

A3 Calculating the maximum trap volume
Once all trap voxels have been determined via the conditions defined in Section 2.3, the maximum trap volume V_t is calculated by simply counting the number of trap voxels and rescaling their cumulative volume depending on the resolution in which the model was computed:

V_t = n_v (S_o / R_m)^3

where n_v is the number of trap voxels, S_o gives the original scale and R_m the resolution used for the model.

Figure C1. Loss based on the risk-neutral custom loss function (Eq. 8) for determined true scores of -250, 0 and 250. This plot is meant to clarify the way real losses are incurred for each guess, relative to a given true value. The expected loss, as seen in Fig. 3, is acquired by arithmetically averaging such deterministic loss realizations based on the true score probability distribution using Eq. 3.
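Assuming cubic voxels, this voxel-counting calculation could be sketched as follows (function and argument names are ours, and the cubed rescaling is our reading of the description above):

```python
import numpy as np

def max_trap_volume(trap_mask, original_scale, resolution):
    """V_t = n_v * (S_o / R_m)**3: count the trap voxels in a boolean 3-D
    mask and rescale their cumulative volume, assuming cubic voxels."""
    n_v = int(np.count_nonzero(trap_mask))
    voxel_edge = original_scale / resolution
    return n_v * voxel_edge ** 3
```

For example, for a model of a 2000 m cube computed at a resolution of 50 voxels per axis, each voxel has a 40 m edge, so 1000 trap voxels yield 6.4 × 10⁷ m³.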