Fault interpretation uncertainties using seismic data, and the effects on fault seal analysis: a case study from the Horda Platform, with implications for CO2 storage
Mark J. Mulrooney
Alvar Braathen
Interactive discussion
Status: closed
-
RC1: 'Comment on se-2021-23', Christopher Jackson, 12 Apr 2021
Dear Editor and Authors,
Thanks for giving me the opportunity to review “Fault Interpretation Uncertainties using Seismic Data, and the Effects on Fault Seal Analysis: A Case Study from the Horda Platform, with Implications for CO2 storage" by Michie et al. The general topic of the paper should be of interest to the readership of Solid Earth, dealing as it does with the subsurface structural analysis of normal faults and rift basins. It should also be of interest to people concerned with conceptual and interpretational uncertainty, both of which are critical considerations when working with structural data of any type. I have written numerous comments directly on the manuscript, which I have scanned and attached to this review. The numbers below refer to specific numbers in the manuscript.
- “…how detailed the surface is….” could be replaced with “…how rugose the surface is…”.
- It might also be worth mentioning: (i) the seismic line trend relative to fault strike; and (ii) the seismic (or, more accurately, bin) spacing in the original survey (which dictates the seismic line spacing available to the interpreter).
- Given that it is important and that it creeps into the paper's Discussion, the detail vs. time trade-off might also be worth mentioning in the Abstract.
- Although a little pedantic, I would use “reflection” (i.e., the thing we observe and map in seismic reflection data) rather than “reflector” (i.e., the acoustic boundary or bedding surface that generates a reflection).
- What is the difference between a “linked” and a “composite” fault segment? They sound like the same thing to me. Maybe use one rather than both terms?
- There are some issues with the terminology you use here for the various fault growth models. I suggest you look at Childs et al. (2017) (https://sp.lyellcollection.org/content/439/1/1.abstract) and maybe use the terms therein, i.e., they refer to the “propagating” rather than “isolated” fault model. Granted, their Figs 1 and 2 are a little contradictory in the terminology they use, but I think the text (and subsequent papers by those and other authors) makes this clearer.
- Which other fault growth models are you referring to here? No references are provided to support this statement. I suggest you provide some or remove this statement.
- As noted by Childs et al. (2017) and Rotevatn et al. (2019), it is important to remember that the constant-length model does not preclude relay-breaching, i.e., it simply envisages that relay formation and breaching occurs relatively rapidly after fault initiation, such that a single, large-scale slip surface is formed prior to significant displacement accumulation. This was discussed in Jackson et al. (2017), more specifically in the last paragraph of section 1 (https://www.sciencedirect.com/science/article/pii/S0191814117300652?via%3Dihub). To truly distinguish between these fault growth models requires the integration of growth strata and not simply geometric analysis (i.e., the identification of displacement minima; see Jackson et al., 2017), something that is not undertaken or at least presented in your paper. As such, I am very hesitant as to whether your paper can really say anything about which fault growth model is most appropriate to the Vette Fault. Overall, although I think the paper can say lots of useful things about the present fault geometry, in particular the potential location of now-breached, high-permeability relays, I am not yet 100% sure what you can say about how that geometry came to be in a temporal sense.
- As you discuss later, there is also the issue of failure to include so-called “continuous deformation” (folding) in the geometric analysis. This might be worth mentioning here, given it was again something we raised in Jackson et al. (2017) (the paper in response to the 2017 paper by Ze and Alves in JSG).
- You could mention here that an outcome of relay breaching is the formation of abandoned hangingwall and footwall splays, which represent the now-deserted tips of the now-linked fault segments. These should be used in concert with simple fault bends to identify paleo-segmentation. In fact, do you see such things along the Vette Fault? It is noticeable that you do not show a single time-structure map in the paper, when such a map would be very useful in communicating aspects of the fault geometry (including, for example, the spatial distribution of folds, and the presence/absence of near-fault splays).
- You have already stated this a couple of times already. This could be removed without loss to the content of the paper.
- Precisely what about the hydrocarbon-water interface? The shape? The depth? Both? This is not clear, so I would encourage you to qualify this statement.
- First, I am not sure I follow why larger faults have lower critical threshold values. Can you please outline why this is the case? Second, and more generally, there is a *lot* (c. 2 pages) of material in this general review of fault sealing, which delays the reader in getting to the study results. My question is whether all this material is needed to provide a basis for the analysis, results, and discussion that follow? If not, can some of it be removed without loss?
- Could you include a regional cross-section to illustrate the geological setting of the study area better? At present it is really hard, from text alone, to get a good feeling for this. My comment about a time-structure map of, for example, the base syn-rift surface (which would record the cumulative rift-related deformation), also relates to this.
- What do you mean here by “good”? I suggest you estimate (using the extracted peak seismic frequency and the interval velocity at the depth of analysis) the actual seismic resolution. This will give the reader a better, more quantitative view of how good the resolution is, which I think is critical to convince people that variations in, for example, fault throw are real and not simply related to resolution-imposed picking limitations. Again, we discussed this in Jackson et al. (2017) when considering the very small throws (relative to seismic resolution) presented by Ze and Alves (2017). (A worked version of this resolution estimate is given after this list.)
- Do you mean “normal” in the context of the SEG standard? See here: https://agilescientific.com/blog/2012/4/5/polarity-cartoons.html.
- It is a bit unclear what you mean here, i.e., did you not undertake such a QC for the other line spacings? Did you only do it when picking every 25 metres?
- How do you think a timeslice-based interpretation approach might have differed from your section-based approach? You say here timeslices were used to guide the interpretation, but if you do not see it as a valid option to replace the section-based approach, it might be worth stating precisely why not. For example, if variance timeslices are too noisy compared to a standard cross-section, I think it would be valuable to say so. I say this because I think your paper has great value in guiding how people interpret seismic reflection data; in this case, also suggesting why *not* to use something is valuable too!
- I am no expert in gridding approaches, so this section of text is a little hard for me to assess. However, it is not 100% clear to me from the text and Fig. 4 how the approaches vary in terms of what they’re doing to the physically picked fault sticks, or what their material impact on the fault geometry is. I guess this becomes clearer a lot later in the paper, especially through the use of Fig. 16.
- A non-expert might not know what all these terms mean, especially fault stability and slip tendency, so I encourage you to include some supporting references…or provide a Supplementary Materials section containing information on how these various parameters are derived, what they mean, etc. (The standard definitions are restated after this list for reference.)
- The term “polygon” (as used here and throughout the paper) is very confusing to me. I *think* you are using it in the sense that Badleys use it, whereas most interpreters will view it as a map-view feature that outlines the fault and essentially shows the fault heave at a specific structural level. Fig 5 does not currently really help me visualise what you mean by “polygon”, so perhaps this and the text could be modified to make this clearer?
- Again, this material related to the present stress state is not really my area-of-expertise.
- Why do you say this, i.e., why might dip-linkage not be associated with a change in fault strike? Is this because the upper and lower segments are inferred to have the same strike prior to linkage? In any case, I wonder why you bring up dip-linkage when, later in the paper, this is not really something you talk about.
- What do you mean by “overlap” in this context? If the faults overlapped (prior to linking), then abandoned splays should be present. See my comments above.
- Which structural level are you showing in Fig. 6? Top Sognefjord Formation? This is not clear in the text or figures.
- It is not clear to me why you need to normalise the plots in Fig. 7, given that the analysed length of the fault is the same in all picking strategies (i.e., 25 m to 800 m). Please could you clarify why this was done?
- I am confused by your use of the term “should be” in this context. Why “should” these minima be picked in the 800 m sampling case? Also, I can’t see the minima in the black circles in the 800 m spacing plot. Or is the point that the black circles are only seen on the 100 m case and *not* the 800 m case?
- Could these corrugations also be associated with ‘drift’ in the picking by the human interpreter? Is this worth mentioning here? Oh, and I also would like to flag this interesting paper, which also reveals and discusses the origins of corrugations on seismically imaged normal faults: https://onlinelibrary.wiley.com/doi/abs/10.1111/bre.12146.
- I am not sure I agree with this statement, given that the dominance of red and blue colours on the ‘low-resolution’ fault surface indicates that the fault still has an overall N-S strike.
- Although you are not the first or only people to claim this, I have to say I disagree with the interpretation that corrugations are related to strike segmentation of normal faults. For this to be plausible, each segment would need to be relatively short (i.e., the corrugation wavelength, which in your case is a few hundred metres according to Fig. 8)…but very, very tall (i.e., the full fault height, which in your case is several kilometres, given the corrugations extend from the top to the bottom). This would result in faults with implausibly low aspect ratios (see Nicol et al., 1996) - https://onlinelibrary.wiley.com/doi/abs/10.1111/bre.12146. So, ultimately, I do not think picking strategy “…may limit the interpretation of fault growth…”.
- Or, perhaps, where fault bends occur as a result of out-of-plane propagation of the fault tips or complex early nucleation patterns? See https://www.sciencedirect.com/science/article/pii/S0191814106000320?casa_token=RfeVEhPMYFUAAAAA:97im5Akf7MWLjEkeslN3xtmWdRsR9RlNN-dpyDz1MG6GWnh7NejPQ0BzekKq3Fh5vZTyYFWEubg and https://www.sciencedirect.com/science/article/pii/S019181410600191X?casa_token=hBmvUeFBEXIAAAAA:e9K7bQaMHsUShAMzVWx7nv1TU8jsglkp5g9ekW95TKDzgYU3bYH4Lctm29GO711ROJyMhZrUkNA
- Related to comment 25, which level on the strike projection in Fig. 9 does the T-D plot come from? I ask because the black dashed lines are shown as vertical whereas the corrugations plunge slightly southwards, meaning they do not line up. Or at least they do not line up along the entire dip extent of the fault. Maybe they line up with the level at which the T-D plot is constructed, which is all the more reason to ensure that this is stated in the text and figure caption, and, if possible, labelled on the figure.
- Regarding the comparative T-D plots for different picking strategies, maybe I’m misunderstanding something here, but shouldn’t there be locations where the values are exactly the same? For example, every 32nd 25 m-picking line would line up with an 800 m-picking line? If so, why is this seemingly not clear on Fig 10E?
- Where is 0.4 on Fig. 10D? Do you mean values less than 40%? If so, there are hardly any values less than 40%. I’m a little lost here, so some clarification might be worthwhile.
- Can you perhaps state how big these “patches” are and, most crucially, give a sense as to how they are distributed at and above the critical Sognefjord reservoir level? Surely, in the case you present here, the variability in these locations, arising from the various picking strategies, is what’s really key?
- Why do you propose 100 m? What do you not believe is real, geologically speaking, at spacings less than 100 m? And/or what do you not think is important, in terms of CO2 storage and potential leakage, at these very small scales?
- This whole section is a little ‘wordy’ and I think the language could perhaps be streamlined and simplified a little. There are many, many occurrences of the terms “dilation and slip tendency”; could these perhaps be used only when essential?
- Rather than stating “lines” in terms of seismic line spacing, could you stick with the terminology used throughout the paper up to this point, i.e., horizontal spacing of 25 m, 100 m, 800 m, etc.
- What do you mean by the inlines and crosslines “not tying precisely”?
- Is it not rather obvious that a relatively thin reservoir, in a relatively muddy sequence, next to a relatively large fault would lead to high SGRs and a high likelihood of fault sealing? In that sense, I do not find it surprising that the picking strategy was an important control on this. (The conventional SGR definition is restated below for reference.)
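For reference, a back-of-the-envelope version of the resolution estimate requested above uses the quarter-wavelength criterion: wavelength $\lambda = v/f$ and vertical resolution $\approx \lambda/4$. With purely illustrative values (not taken from this survey) of interval velocity $v = 3000$ m/s and peak frequency $f = 25$ Hz, this gives $\lambda = 120$ m and a vertical resolution of roughly 30 m.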
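Likewise, the conventional definitions behind the fault stability terms queried above (Morris et al., 1996; Ferrill et al., 1999) are slip tendency $T_s = \tau/\sigma_n$ and dilation tendency $T_d = (\sigma_1 - \sigma_n)/(\sigma_1 - \sigma_3)$, where $\tau$ and $\sigma_n$ are the shear and normal stresses resolved on the fault surface and $\sigma_1$ and $\sigma_3$ are the maximum and minimum principal stresses; note that the software used in the manuscript may implement variants of these (e.g., using effective stresses).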
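And the shale gouge ratio (SGR) referred to in the comment immediately above is conventionally computed (Yielding et al., 1997) as $\mathrm{SGR} = \left(\sum_i V_{sh,i}\,\Delta z_i / t\right) \times 100\%$, where $\Delta z_i$ and $V_{sh,i}$ are the thickness and shale fraction of each bed that has slipped past the point of interest and $t$ is the fault throw; higher values indicate a more clay-rich, more likely sealing fault rock.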
In summary, this is a very interesting and important work, which I very much enjoyed reading. Overall, the paper is well-structured, well-written, and the work is thorough and, for the most part, convincing. However, as I hope is clear from my comments above, some revisions will, I hope, help further improve the manuscript. In general, the English and grammar are very good, although there are a few places where these could be improved. I am more than happy for the authors to contact me to discuss any of the issues raised in my review.
Yours sincerely,
Christopher Jackson (Christopher.jackson@manchester.ac.uk)
-
AC3: 'Reply on RC1', Emma Michie, 10 May 2021
Dear Christopher Jackson,
Thank you for your highly detailed, constructive comments. These have helped to significantly improve the manuscript. Please see attached for replies to individual comments (in red). All comments have been addressed, with the corresponding changes made directly on the manuscript.
Thanks again,
Emma
-
CC1: 'Comment on se-2021-23', Billy Andrews, 12 Apr 2021
Dear Authors,
I read this contribution with great interest and hope my minor suggestions will improve what is already a very interesting and valuable contribution. The assessment of the effect of different seismic resolutions, through the degradation of data, on fault growth and fault seal analysis is of utmost importance, particularly when legacy areas are targeted for CCS schemes. Overall, I found the pre-print well structured and engaging, and in general I was impressed by the figures. A particular standout feature was the final paragraph of section 5 (L520-526), which I think will resonate with those interested in comparing different studies, and undertaking seismic interpretation themselves. My comments mostly pertain to where arguments could be strengthened, or where an alternative viewpoint may also provide a valid interpretation of the data.
I look forward to seeing the published manuscript and hope my comments are useful. If you have any questions please don't hesitate to get in contact,
Many thanks,
Billy Andrews (billy.andrews@plymouth.ac.uk)
Comment 1: Segmentation & fault corrugations
My first point is on the discussion of fault segmentation and fault surface geometry. From the data presented in the manuscript, I was not fully convinced by the 100 m line spacing representing the optimum spacing for the interpreted faults. More specifically, the simplification of fault geometry caused by this line spacing would miss several geological features (e.g., the majority of lenses, corrugations, etc.) that are below the scale of 'segmentation', but can still play a large role in controlling fluid flow and fault stability. Additionally, segmentation and/or fault refraction can commonly occur in the down-dip component of the fault (D. Ferrill, among others, discusses this for faults cutting mechanically stratified layers), and at a scale that can be quite small. This could account for some of the variations in fault dip that you observe. A fantastic example of small-scale changes in fault geometry can be found in Ross et al., 2020 (DOI: 10.1126/science.abb0779), where they assess the temporal evolution of micro-seismic events for an EQ swarm along a fault zone. Although the main focus of that work is the temporal evolution of events, the profiles of segmentation are very nice and show that small-scale segment boundaries can act as barriers to flow, but can also be breached. The recently published paper by Roche et al. (2021) would also be a good place to look with regard to how complicated the variability in strike and dip can be within a fault zone (DOI: https://doi.org/10.1016/j.earscirev.2021.103523)
Specifically, you mention that the variation in fault strike is often caused by fault segmentation; however, this does not have to be the case and faults may be corrugated without the need for fault linkage (asperities etc.). This is observed across a range of scales, and has been shown to change the slip behaviour and encourage areas of structural complexity to develop (which in turn can be linked to areas of fault leakage; some good examples come from along the Moab Fault in Utah). Additionally, corrugations can be scale dependent, which was beautifully outlined in some of A. Sagy's work. I have attached some potentially useful references on the attached PDF; however, please don't hesitate to contact me if you need any more, as this is just the tip of a very large iceberg.
From field studies, leakage of natural CO2 and/or hydrocarbons has been shown to occur at small ‘point-sources’ at a scale far below 100 m (relay zones, lenses, etc.). I would be happy to discuss this further; however, I believe that some of the features observed at the 25 m line spacing which are attributed to picking strategy could easily be geological. Importantly, missing lenses could lead to the over-estimation of SGR/SSF and miss important leakage pathways for CO2. Overall, I agree with the 100 m suggestion for broad-scale fault interpretation; however, I think it is important that fault plane heterogeneity is considered as a viable control on fault geometry, with different scales picked up by different picking strategies.
Comment 2: Subjective bias between operators
Secondly, you make a nice comparison between two 'experienced' operators. Although this is a small dataset, and not across different scales, I found the similarity interesting. What was the relative training of both operators? Regarding work on human biases in seismic interpretation, I think the work of Clare Bond and co-authors should be included. Of particular note is Schaaf and Bond (2019), who quantified differences in fault interpretations that students made from 3D seismic. To expand on this point, I recommend taking a look at the 2019 Special Issue of Solid Earth titled 'Understanding the Unknowns:..' (https://se.copernicus.org/articles/special_issue984.html).
Your point about subjective bias being less for greater line spacing makes perfect sense, and is based on our geological training and experience of dealing with sparse datasets. We are trained to push our interpretations towards the simple; however, as discussed above, this could also lead to the removal of key fault properties. Have you considered that different practitioners may have differing 'mental models' of faults, and that this may mean they are more likely to interpret either complex or simple fault geometries? Whilst this may not be a factor between your two experienced interpreters, it may have a large effect in the wider community. This is something that I have seen for fracture analysis (in the SE SI) and is discussed in Shipton 2019 (doi: https://doi.org/10.1144/SP496-2018-161) for the effect on fault architecture (which will extend to fault geometry). Further, your point about the time invested in the interpretation improving the replicability of the results will be countered, and needs to be balanced, by the need for pragmatism (limited time for the project, limited funds, etc.).
Additionally, a couple of minor comments are provided below:
Section 1.3: I felt that this section was longer than required to introduce the key concepts of the manuscript and that some of the detail (e.g., equations etc.) could be removed, with interested readers directed to the key texts.
Line 205: Consider writing out the acronym in full; it is not used that often, and doing so would improve the readability of the text as well as enable people to dip into sections without reading from start to finish.
Line 215: How does the variability in gridding methodology compare to the variability you would expect between operators? Additionally, is there any variability within a single gridding method based on parameters used? (I am not a gridding expert, but alluding to the scale of this variability would be useful when assessing uncertainties within the workflow).
Line 238: Drag and/or monocline development can have quite a large effect on fault geometry and cause certain parts of a fault to dip more shallowly than others.
Line 349: Will the change in SGR not equally, or even more so, be a function of juxtaposed lithology as opposed to fault geometry?
Line 360: I think it is important that you make it really clear that this is a site-specific point within the results, and that it (at least in my opinion) cannot be easily transferred to other sites (i.e., you don't want people citing you as saying 'use 100 m', which is not what you are saying).
Fig 7: It would be useful for the 25 m spacing to be added to this figure to show why 100 m is seen as optimum (i.e., to not just show where the smoothing has taken place, but show where the variability is high).
Fig 8: These are very nice; however, I suggest that you have a look at some of the corrugated fault surfaces in the references I suggested above, as there is a reasonable argument that the variability could be controlled by scale-dependent fault straightening effects. I think that was discussed mainly in one of Amir Sagy's papers (possibly the 2007 or 2009 one), and also in several of Emily Brodsky's papers.
-
AC2: 'Reply on CC1', Emma Michie, 21 Apr 2021
Dear Billy,
Thanks for your detailed comments, and thank you for the suggested papers.
- I appreciate the caution you mention when essentially ‘ignoring’ any finer scale features, as I’m sure there are many geologists out there who would also be very cautious of this. However, utilizing every line when creating a fault surface, unfortunately, showed a heavily increased irregularity to the surface. While faults are indeed very irregular in nature, the irregularity shown here appears not to be a product of the fault surface, but more a product of human error and the triangulation method used. Although rigorous QCing was performed to maintain continuity between each line, the nature of seismic resolution means that picking the precise location of the fault is almost impossible. Hence, any very subtle variations where the fault has not been precisely picked will carry through to the triangulation method, and can create a surface that is very irregular, with many kinks that aren’t actually there in nature. Of course, with higher resolution seismic (e.g. P-cable), the uncertainty of the location of the slip surface is reduced. However, no seismic will allow us to ‘see’ the fault as we do in the field, and hence some uncertainty will remain.
Note that the suggested 100 m spacing is for fault surface creation, e.g. for fault stability analysis. I would suggest using all data available for fault polygon (horizon-fault cutoffs) picking for fault seal analysis. The suggested 100 m spacing is based on the final fault surface that most accurately honours the geometry of the picked fault segments, and also corresponds nicely with the line spacing that captures all fault segmentation that has also been observed when every line (25 m spacing) is used (based on T-D plots).
- Thanks for your comment on bias – I shall make sure Clare Bond and others are referenced and discussed more thoroughly. The backgrounds of the two interpreters are similar: both are structural geologists at similar stages of their careers, although the level of professional training does vary (both software and practical – e.g. fieldwork training etc.). I agree that mental models are formed through these types of training and are brought through to seismic interpretation – something that is likely to have happened here (although to a slightly lesser degree than that described in Clare Bond’s work).
Thanks again for your comments – I will try to incorporate the suggested changes to the manuscript.
All the best,
Emma
Citation: https://doi.org/10.5194/se-2021-23-AC2
-
CC2: 'Comment on se-2021-23', Davide Gamboa, 12 Apr 2021
Hi,
This is quite relevant for general fault analysis. I came across some similar issues/questions (but not in such depth or with such scenario comparison).
I admit I haven't read it all in detail, but I was wondering if some of the extra detail at the shorter line spacings could be an artifact induced by picking and then surface interpolation? At the distances presented here I am guessing it is probably less of an issue... My picking at every line produced something extremely rugged (I think the line spacing of that dataset was lower than the 25 m in your work), but at every 10 to 20 lines things were much better (the software used and the interpolation algorithm may play a big role here).
In other work, on different data, what I had to deal with was the surface geometry and its application to stress models (Gamboa et al. 2019: https://www.sciencedirect.com/science/article/pii/S1750583619302968). You and I ended up using different packages, but I am guessing that in the end they use the same equations from Ferrill and Morris.
Did you use the same software to map the faults and run the stress models (i.e., TrapTester)? If so, my guess is that doing the fault mapping in the same package may limit geometry problems. In my case I mapped the faults in Petrel and exported them to Move, which caused some geometry issues that I sorted out through some touching up and resampling - the latter was particularly important to get more detail. Yet, it may risk losing some of the original geometries.
As a last note, your work draws parallels with this: https://www.sciencedirect.com/science/article/pii/S004019511930099X
I guess the general observations should be pretty much the same, and it is a bit common sense: the shorter the spacing, the higher the detail... yet it may be worth checking.
Regards,
Davide Gamboa
Citation: https://doi.org/10.5194/se-2021-23-CC2
-
AC1: 'Reply on CC2', Emma Michie, 21 Apr 2021
Dear Davide Gamboa,
Thank you for your comments.
Indeed I share your concern about picking at the finer scale, which is why I wouldn’t suggest using each and every line available – exactly as you propose, artifacts are produced, associated with human error and the triangulation method chosen. If picking on every line is used to create a surface with a gridded method, which essentially smooths over any irregularity, then a question could be posed as to what the point is of the extra time spent picking every line if this detail isn’t used when creating the surface. On the other hand, when a triangulation method is used that honours every point, every subtle variation between two adjacent lines (which is very common, and almost unavoidable, due to the scale of seismic resolution) will create a highly irregular fault surface. This is despite any rigorous QC that is done, and is simply a product of human nature, unfortunately. It may seem counter-intuitive to suggest not using every line, particularly when faults are very heterogeneous and irregular in nature; however, this is a product of the scale of analysis used in seismic studies. This study attempted to highlight the need to pick according to an ‘optimum’ strategy, whereby inherent irregularities that are not (or are less likely to be) a product of human error or triangulation can still be captured, but are not overly smoothed out as they would be with a very coarse picking strategy.
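To make this point concrete, here is a minimal numerical sketch (not the workflow used in the study) of how a small random lateral picking error on each line propagates into apparent strike kinks on a surface that honours every pick; the 10 m error level and 8 km fault length are assumed, purely illustrative values, not measurements from this survey:

import numpy as np

rng = np.random.default_rng(0)
sigma = 10.0  # assumed lateral picking error per line (m); illustrative only

for spacing in (25.0, 100.0, 800.0):  # line spacings compared in the paper
    n = int(8000 / spacing)           # number of picks along an ~8 km fault trace
    y = np.arange(n) * spacing        # position along strike (m)
    x = rng.normal(0.0, sigma, n)     # picked fault position; the 'true' trace is x = 0
    # apparent deviation of each surface segment from the true strike, in degrees
    kinks = np.degrees(np.arctan2(np.diff(x), np.diff(y)))
    print(f"{spacing:5.0f} m spacing: mean apparent kink = {np.abs(kinks).mean():4.1f} deg")

The same picking error produces a several-fold larger apparent kink at 25 m spacing than at 100 m: a point-honouring triangulation preserves this jitter as surface irregularity, whereas a gridded surface averages it out.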
Yes, that’s correct – I used the same equations for the geomechanical models, despite using a different software package (I used T7 – TrapTester); Move uses the same equations as TrapTester. I also performed all the analysis in TrapTester – all picking and then all subsequent fault analysis.
My work does indeed resemble that published by Tao and Alves (2019). They produced a nice study showing multiple datasets of varying scales to suggest an optimum line spacing that captures all detail at finer resolution, but without the need to spend time picking at the finest line spacing. This is based on the size of the fault. However, for this case example, their suggested line spacing would be too coarse for detailed fault analyses.
Thanks again for taking the time to read my manuscript and for the comments.
Best Wishes,
Emma Michie
Citation: https://doi.org/10.5194/se-2021-23-AC1
-
CC3: 'Reply on AC1', Davide Gamboa, 21 Apr 2021
Dear Emma,
Thanks for your reply. In general we do agree on the overall issue and limitations, and I'm guessing lots of other people have come across the same issue, although it was probably just not written down :) You are absolutely right about the optimum strategy needed, and I don't necessarily see not picking every line as counter-intuitive (although that may come to interpreters only after some years of experience).
On the software used, there is the possibility that using the same package for picking and analysis, as you did with T7, may limit the creation of artifacts on the surfaces that derive from data export and import across different packages. I say this with limited (or no) experience in testing both methods, but it will surely save (and optimize) the time spent on the work. On importing faults to Move, the first step ended up being the correction of the sticks to create the surfaces - depending on the size of the dataset, that is at least one week not lost if everything is in the same environment. Perhaps there could be a very brief mention in your paper that this one-package vs multiple-packages choice could be an advantage/issue, if you have the place and space to do it. Not too much, one or two lines, just to bring awareness to the reader/user. Again, only if deemed relevant or suitable.
One thing I do hope is that your paper gets noticed by a wide audience, including industry. Although the latter often has some very good people, there is also a good amount of less versatile interpreters (often a consequence of the type of work). I came across examples where, given a 3D cube, the tendency would be to pick every line (or close to that), leading to the aforementioned artifacts and irregularities on horizons, as described here for the faults. This can impact both deliverable quality and time constraints if, to use your words, an optimum mapping strategy is not followed.
Regards,
Davide
Citation: https://doi.org/10.5194/se-2021-23-CC3
-
RC2: 'Comment on se-2021-23', Ruta Karolyte, 05 May 2021
Dear authors,

I enjoyed reading this manuscript, which highlights an under-investigated and under-reported part of subsurface geological modelling, and deals with subjective interpretative strategies and decisions. The paper is very well written and structured, and the figures are clear and easy to understand. The paper does a very good job investigating the uncertainty presented by different picking strategies; however, I think the discussion can be improved by considering how this particular uncertainty compares to the other uncertainties present in seismic inversion, interpretation, and geological model making. If we consider all sources of uncertainty in a classical error propagation framework (where independent errors add in quadrature, so the total is dominated by the largest term), the source of the largest error is the most important, while identifying error variability in lower error sources becomes insignificant. I think this paper presents cases where the uncertainty presented by picking strategies is the dominant source of uncertainty and therefore important, as well as those where it may not be the leading source of error. My further comments below are mostly around this theme and specific to line numbers.

Lines 195-200: Could the authors please comment on the resolution of the velocity model used in generating the seismic images? Was it generated using tomography or full waveform inversion? Was any lateral smoothing applied to the velocity model before processing and imaging? Although the seismic itself is binned on a 25 m x 12.5 m grid, using a velocity model that was not also calculated independently in each bin (e.g., by tomography), or using a model that has some form of smoothing applied to it, introduces an error or lowers the effective resolution of the final seismic image, such that interpreting at 25 m becomes overfitting, or at least possibly unnecessary. This is important for the further discussion, where in some instances (for example the reactivation potential plots) the closest possible picking (25 m) might produce the best fit to the seismic, but that might not be equal to the best fit for the underlying geology.

Lines 355-360: I think the finding of this section, that SGR does not vary significantly irrespective of the picking strategy, is a positive finding, and indeed will be positive for those using lower-resolution seismic or legacy data. I would be cautious about suggesting that coarser picking slightly overestimates the SGR data (Lines 359-360). SGR is itself an approximation with an associated uncertainty (the clay content in particular, from the gamma ray log, is a source of larger uncertainty), and I think the uncertainty of some of those assumptions is higher than that presented by the picking strategy.

Line 396: I think the framing of this as 'correct' and 'incorrect' assumptions is slightly misleading. I think we are trying to find an approach that provides the best fit to the data, but the best fit might not be 'true' in any case, as where we are within the limits of the seismic resolution, we may still be within the uncertainty of the velocity model used.

Lines 452-453: I think here the paper is arriving close to the lowest possible error, which can be used as the base error. The relative difference in geometry produced by picking by two skilled, local-geology-informed operators is the type of error that in practice currently probably cannot be further minimised, especially across the community. I would view this amount of variation as the base error.
Looking at Figure 15, if we were to smooth the fault surface until a point where the dilation tendency plots in A and B look the same, I think this is the lowest possible error we can get down to. The question is then the following: what is the lowest picking interval that can be used that doesn't further decrease this resolution? I think the suggested 100 m is a likely outcome, but I think this concept needs a discussion.

Lines 460-465: This paragraph is an important discussion of over-fitting and I recommend expanding it. I think in many cases a degree of smoothing brings the model closer to the best fit, depending on what is being modelled. For example, highly irregular fault surfaces will display a great range of fault reactivation potential values (Fig. 12) over a surface area. In practice, we are likely not expecting a fault to be reactivated just in one of these spots, or in a number of identified spots, without affecting the areas in-between. In this type of plot, a higher degree of smoothing (100 m and above) seems beneficial to me, vs 25 m, which produces a lot of scatter.

Lines 509-511: I'm not an expert in the differences between triangulation methods, but here it would also appear that if all three methods are equally commonly used, then the difference between them is the base error. If there are advantages in using one or the other in certain cases, then that could be discussed.

Lines 557-564: Here I think the authors do a good job discussing how other uncertainty sources (cohesion) are likely to be more significant.

Overall, I think this is a great contribution and a very well organised and presented case-study example. I would recommend publication after these minor revisions.

Citation: https://doi.org/10.5194/se-2021-23-RC2
-
AC4: 'Reply on RC2', Emma Michie, 10 May 2021