Application of Discrete Curvelet Transform in enhanced seismic imaging and accurate velocity model building

by

Andrzej Górszczyk

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Supervisor: Dr. Michał Malinowski

Warsaw, June 2017

“Where did the light come from on the first day of Creation?” ∼ Fyodor Dostoyevsky, The Brothers Karamazov

Acknowledgements

First and foremost, I would like to express my sincere gratitude to my supervisor, Dr. Michał Malinowski, for the continuous support of my PhD study, the immense knowledge he shared with me during my research, and for his patience, guidance, motivation and friendship. My sincere thanks also go to the co-authors of the publications included in this thesis. I thank Dr. Gilles Bellefleur, who acknowledged the potential of this research and introduced me to hardrock seismic data processing. I want to express my deepest appreciation to Dr. Stéphane Operto, who broadened my research perspective by sharing not only his knowledge of full waveform inversion, but also his scientific excellence, which I will always remember. I thank the co-authors from my Department: Anna Adamczyk, Marta Cyz, and Jacek Trojanowski, since without their support it would not have been possible to conduct this research. I thank all my fellow colleagues, not only from my Department but from the whole Institute of Geophysics PAS, for a fair amount of support and stimulating discussions - also the less formal ones over a beer on Thursday evenings. I thank the people from the administrative departments at the Institute for their continuous assistance with paperwork and for taking care of the research facilities. I am grateful to Dr. Mariusz Młynarczuk - the supervisor of my Master's thesis - for giving me my first glimpse of research. I thank the people I met during my studies at the AGH University of Science and Technology in Kraków, who became my cordial friends and are still willing to share the same joy and laughter we remember from our academic life. I would like to address words of infinite gratitude to my beloved parents. I would never even have had a chance to reach this level without the support, sacrifice, patience and understanding which they have always devoted to me.
Finally, of course, I thank my wife Katarzyna Górszczyk from the bottom of my heart for withstanding years of difficult times, late nights, and many absences, and most of all for her love.


Co-Authorship Statement

The material incorporated in this dissertation consists of three articles published in indexed international journals and three reviewed expanded abstracts:

1. Górszczyk, A., Adamczyk, A., Malinowski, M., 2014. Application of curvelet denoising to 2D and 3D seismic data - Practical considerations. Journal of Applied Geophysics 105, 78-94.
2. Górszczyk, A., Cyz, M., Malinowski, M., 2015. Improving depth imaging of legacy seismic data using curvelet-based gather conditioning: A case study from Central Poland. Journal of Applied Geophysics 117, 73-80.
3. Górszczyk, A., Malinowski, M., Bellefleur, G., 2015. Enhancing 3D post-stack seismic data acquired in hardrock environment using 2D curvelet transform. Geophysical Prospecting 63 (4), 903-918.
4. Górszczyk, A., Malinowski, M., Bellefleur, G., 2016. Applications of Curvelet Transform in Hardrock Seismic Exploration. In: EAGE/DGG Workshop on Deep Mineral Exploration. EAGE Expanded Abstracts, 5 pp., doi: 10.3997/2214-4609.201600040.
5. Górszczyk, A., Malinowski, M., Operto, S., 2016. Crustal-scale Imaging from Ultralong Offset Node Data by Full Waveform Inversion - How to Do It Right? In: 78th EAGE Conference and Exhibition 2016. EAGE Expanded Abstracts, 5 pp., doi: 10.3997/2214-4609.201601190.
6. Trojanowski, J., Górszczyk, A., Eisner, L., 2016. A multichannel convolution filter for correlated noise: Microseismic data application. In: SEG Technical Program Expanded Abstracts 2016. Society of Exploration Geophysicists, 5 pp., doi: 10.1190/segam2016-13453668.1.

For all publications, Dr. Michał Malinowski (Institute of Geophysics PAS) was the co-author who provided me with inspiration as well as with his mentorship and guidance, thanks to which I was able to obtain and evaluate my results. He also devoted significant effort to editing each of the manuscripts.
Part of Chapter 1 and the whole of Chapter 2 were published in cooperation with Anna Adamczyk (Institute of Geophysics PAS), who shared her expertise regarding input data for full waveform inversion and helped with manuscript editing.

Dr. Gilles Bellefleur (Geological Survey of Canada) is the co-author of Chapter 3 and Section 5.2. He provided me with motivation, input data and expertise in hardrock seismic data processing. Chapter 4 was published with a contribution from Marta Cyz (Institute of Geophysics PAS), who performed the geological analysis of the final migrated images and a significant part of the manuscript editing. In the final Chapter 5, Section 5.3 was published by Jacek Trojanowski (Institute of Geophysics PAS), who derived a new methodology for coherent noise removal from microseismic data. My contribution to this work was the design of the curvelet-based filter which, together with the new method, produces optimal results. Section 5.4 of the same chapter is the result of a collaboration with Dr. Stéphane Operto (UMR GéoAzur CNRS-UNSA) devoted to the development and application of full waveform inversion.


Summary

Seismic imaging techniques are among the main tools providing insight into subsurface structure. However, a key to obtaining an optimal subsurface image is the choice of appropriate seismic data processing targeted at the separation of signal and noise. Different approaches originating from the field of image processing have been proposed for seismic noise attenuation and seismic signal enhancement. Here, I investigate the properties of the recently developed curvelet transform for the purpose of seismic data denoising. Curvelets are multiscale, multidirectional basis functions allowing an optimal and sparse representation of images containing curvilinear objects - similar to those encountered in seismic imaging - and are therefore appealing from the point of view of seismic processing. In the thesis I briefly review the properties of the curvelet transform and introduce the concept of random and coherent seismic noise attenuation. This is followed by the demonstration of a practical curvelet-based framework for an optimal separation of signal and noise. The presented idea relies on the estimation of threshold values for particular frequency bands and dips in the curvelet domain. This approach leads to superior results compared to those obtained by estimating a single threshold level for all curvelet coefficients. I present a wide range of applications of the developed method, aimed at improving the quality of the final seismic images as well as providing enhanced data for velocity model building. I start with illustrative synthetic 2D post-stack examples, followed by their real-data equivalents. I also apply the introduced framework to unique and challenging 3D datasets acquired in the so-called hardrock environment. The processed seismic volumes exhibit much better signal quality, demonstrated in both a qualitative and a quantitative manner.
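The scale- and angle-adaptive thresholding described above can be sketched as follows. This is a minimal illustration only, assuming a CurveLab-style coefficient layout (a list over scales, each scale holding one 2D array per angular wedge) and a median-absolute-deviation noise estimate; it is not the thesis' exact implementation, and the names `adaptive_hard_threshold`, `k_scale` and `c_angle` are hypothetical.

```python
import numpy as np

def adaptive_hard_threshold(coeffs, k_scale, c_angle):
    """Hard-threshold curvelet-like coefficients per scale and angle.

    coeffs  : list over scales; each scale is a list of 2D arrays, one per
              angular wedge (a CurveLab-style layout, assumed here).
    k_scale : per-scale threshold multipliers (one value per scale).
    c_angle : dict mapping (scale, wedge) -> additional weighting factor c
              for wedges known to carry noise; weight 1 elsewhere.
    """
    out = []
    for j, wedges in enumerate(coeffs):
        # Robust per-scale noise level from the median absolute deviation
        # (0.6745 converts MAD to a Gaussian standard deviation).
        sigma = np.median([np.median(np.abs(w)) for w in wedges]) / 0.6745
        out_wedges = []
        for a, w in enumerate(wedges):
            t = k_scale[j] * sigma * c_angle.get((j, a), 1.0)
            # Hard thresholding: keep coefficients above t, zero the rest.
            out_wedges.append(np.where(np.abs(w) >= t, w, 0.0))
        out.append(out_wedges)
    return out
```

Lowering the factor c for selected wedges (as done for the dark polygons in the F-K partitioning figures) protects signal at those dips, while raising it attenuates noise confined to particular angles.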
Further, I employ curvelet-based conditioning to process legacy reflection data subject to pre-stack depth migration. I show how, thanks to this conditioning, reflection tomography was able to provide a consistent and reliable velocity model for depth imaging. Finally, I present applications to challenging pre-stack datasets characterized by a low signal-to-noise ratio, including 3D gathers from the hardrock environment, 2D microseismic data, and crustal-scale ocean-bottom seismometer data. I illustrate how datasets that are subject to velocity model building via different techniques can be enhanced to provide a signal quality allowing for better imaging. The presented examples indicate that the application of curvelet-based data filtering contributes strongly to the improvement of the final imaging results.
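For the quantitative comparisons of signal quality mentioned above, a signal-to-noise ratio in decibels can be computed as below. This is a standard, commonly used formulation given for illustration; the exact SNR definition used in the thesis (Section 1.4.4) may differ in detail, and `snr_db` is a hypothetical name.

```python
import numpy as np

def snr_db(reference, estimate):
    """SNR in dB: ratio of reference signal power to residual power.

    `reference` is the section taken as signal (e.g. a noise-free
    synthetic); `estimate` is the section being evaluated. Assumes both
    arrays have the same shape. A standard textbook definition, not
    necessarily the thesis' own.
    """
    r = np.asarray(reference, dtype=float)
    residual = r - np.asarray(estimate, dtype=float)
    return 10.0 * np.log10(np.sum(r**2) / np.sum(residual**2))
```

With this convention, a perfect reconstruction drives the residual power to zero (SNR toward infinity), and a 10x reduction in residual power raises the SNR by 10 dB.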


Contents

Acknowledgements
Co-Authorship Statement
Summary
Contents
List of Tables
List of Figures
List of Abbreviations

Introduction
    Motivation and objectives
    Thesis outline
    References

1 Fundamentals of the noise attenuation using curvelet transform
    1.1 Introduction
    1.2 General description of the curvelets
    1.3 Theory of the curvelet transform
        1.3.1 Continuous curvelet transform
        1.3.2 Discrete curvelet transform via wrapping
    1.4 Implementation of the curvelet-based denoising scheme
        1.4.1 Curvelet scale selection
        1.4.2 Scale-adaptive thresholding
        1.4.3 Angle-adaptive thresholding
        1.4.4 Signal-to-noise ratio definition
    References

2 Application of curvelet denoising to 2D seismic data: practical considerations
    2.1 Introduction
    2.2 Application to synthetic data
    2.3 Application to real data
        2.3.1 Filtering of post-stack seismic sections
        2.3.2 Data conditioning for full waveform inversion
    2.4 Discussion
    2.5 Conclusions
    Acknowledgements
    References

3 Enhancing 3D post-stack seismic data acquired in hardrock environment using 2D curvelet transform
    3.1 Introduction
        3.1.1 Curvelet-based hardrock seismic data enhancement
        3.1.2 Applicability of the 2D DCT to 3D data denoising
    3.2 Application to 3D datasets acquired in different mining camps
        3.2.1 Flin Flon
        3.2.2 Lalor Lake
        3.2.3 Brunswick no. 6
    3.3 Discussion
    3.4 Conclusions
    Acknowledgements
    References

4 Improving depth imaging of legacy seismic data using curvelet-based gather conditioning: a case study from Central Poland
    4.1 Introduction
    4.2 Geological background
    4.3 Data
    4.4 Pre-stack depth migration workflow
        4.4.1 Initial velocity model
        4.4.2 Gather conditioning
        4.4.3 Additional statistical QC of the picked RMOs
        4.4.4 Tomography and final PreSDM
    4.5 Discussion
    4.6 Conclusions
    Acknowledgements
    References

5 Discrete curvelet transform as a versatile tool for pre-stack seismic data enhancement
    5.1 Introduction
    5.2 Solutions for hardrock seismic exploration
        5.2.1 Ground-roll attenuation
        5.2.2 Velocity model building
    5.3 Joint MCCF and DCT filtering for microseismic data denoising
    5.4 Data conditioning for crustal-scale full waveform inversion
    5.5 Conclusions
    Acknowledgements
    References

Conclusions

List of Tables

2.1 Threshold levels used during denoising of Sections 1-9 according to scales. Factor c denotes the multiplier of the threshold for the angular wedges shown in Figure 2.6.
2.2 Applied and recovered noise parameters.
2.3 Recovered signal parameters.
2.4 Threshold levels used during denoising of Sections 1-3 according to scales. Factor c denotes the multiplier of the threshold for the angular wedges shown in Figure 2.14.
2.5 Comparison of computation times with respect to the type of DCT and data size.
4.1 Acquisition parameters of the 4 transects used in the imaging.

List of Figures

1.1 Examples of a few curvelets of different scales, angles and locations in the spatial domain (a) and their representation in the frequency domain (b), where they are positioned in the orthogonal direction. (c) Partitioning of the 2D frequency plane into concentric, rectangular annuli and their inner segmentation into angular wedges which obey the parabolic scaling. The number of wedges doubles every second scale, making curvelets of higher frequency more anisotropic. (d) Zoom of curvelet no. 3 from (a); each of the curvelets is described by its frequency, angle (dip) and position. Localization (x1, x2) is possible because of the rapid amplitude decay in the spatial domain outside the region indicated by the dashed-line ellipse.
1.2 Curve approximation with 2D wavelets (a) and curvelets (b).
1.3 Curvelet tiling of space and frequency. (a) The introduced tiling of the frequency plane. In Fourier space, curvelets are supported near a "parabolic" wedge marked by the dark-shaded area. (b) Schematic representation of the spatial Cartesian grid associated with a given scale and orientation.
1.4 Wrapping data (initially inside a parallelogram) into a rectangle centred at the origin utilizing their periodicity.
1.5 Influence of a densely partitioned frequency domain on extracting coherent energy at certain scales. The same synthetic section was transformed into the curvelet domain with 4 (a) and 7 (b) scales. Threshold levels for each transform are displayed by dashed lines. As a consequence, the higher number of scales allowed thresholding at different levels (d) and a better selection of coherent energy compared to the global threshold level in the case of 4 scales (c).
1.6 Difference between choosing wavelets or curvelets at the finest scale. (a) Demonstrative, noise-free synthetic section from the Marmousi2 model. (b) Section (a) with white Gaussian noise added. Recovered sections (c) and (d) are the results of global thresholding after a transform using 6 scales with wavelets and curvelets at the finest scale, respectively. In both cases visible artefacts of different shapes arise. Introducing a few threshold levels allows higher attenuation of noise and better signal passing, as shown in (e).
1.7 Threshold level adjustment using 2D Fourier spectrum analysis. (a) The synthetic section presented in Figure 1.6a has been highly corrupted with coloured (0-50 Hz) noise. The section recovered with the initial global thresholding level (b) still contains much of the low-frequency noise. From the 2D spectra shown as insets one can localize the scales which contain noise coefficients and better adjust the threshold levels for them. (c) Section recovered after an increase of the threshold level at scales containing low-frequency curvelets, still containing some residual artefacts. (d) Final section after additional attenuation of certain angles (symmetric polygons bounded by dashed lines shown in the inset of (c)).
2.1 Synthetic section extracted from the Marmousi2 model (a) with its frequency (b) and F-K spectrum (c). Note the complexity of the structure and the variety of reflector amplitudes.
2.2 Frequency characteristics of the noise applied to the extracted part of the Marmousi2 model. Examples with the lowest value of the norm ‖N1‖ = 6.386 were selected.
2.3 Synthetic section contaminated with noise of various characteristics. (a)-(c) correspond to Sections 1-3, (d)-(f) and (g)-(i) present Sections 4-6 and 7-9, respectively.
2.4 Comparison of the frequency spectra before (green) and after (red) applying each noise to the respective sections from Figure 2.3. Note the increasing absolute value of the noise amplitude in comparison with the signal energy. Each pair of spectra was normalized to 1.
2.5 F-K spectra of the sections from Figure 2.3. For each section we can observe the distribution of the noise relative to the location of the signal in the 2D frequency plane. All noise types have a zero-centered Gaussian distribution but differ with respect to frequency characteristics, norm and standard deviation.
2.6 Partitioning of the F-K spectrum used during DCT. After setting the threshold levels according to scales, we introduce an additional threshold weighting factor c for coefficients localized in certain angular wedges. Dark polygons - (a) white noise, Sections 1-3; (b) low-frequency noise, Sections 4-6; and (c) high-frequency noise, Sections 7-9 - contain coefficients thresholded with additionally weighted threshold levels. Compare these polygons with the F-K spectra presented in Figure 2.5.
2.7 Sections from Figure 2.3 after denoising. Differences between the obtained results reflect the influence of the applied noise. A slight damage to the weaker event amplitudes, increasing with the standard deviation of the noise, is observed.
2.8 Comparison of the noise-free synthetic section frequency spectrum (red) and the recovered section frequency spectra (green). We can observe the preservation of the spectrum shape; however, a reduction of signal energy is visible for data contaminated with noise of higher norm.
2.9 F-K spectra of the filtered sections presented in Figure 2.7. Compared with the F-K spectrum of the clean synthetic section (Figure 2.1c), we can observe the impact of the filtration on the shape of the spectra.
2.10 Summary of the SNR before and after noise attenuation for each section. The dashed line represents the SNR level of the noise-free initial data.
2.11 Results of F-X deconvolution denoising for Section 3 (a), Section 6 (b) and Section 9 (c). In each section we observe a significant share of unwanted energy which survived the denoising.
2.12 Frequency spectra of the sections presented in Figure 2.11 (black line - section (a), blue line - section (b), red line - section (c), green polygon - clean synthetic section). The fit of the frequency bands of the obtained results proves the simultaneous visible damage caused to the signal.
2.13 2D real-data example using PreSTM stacks. (a)-(c) Input Sections 1-3. (d)-(f) Corresponding sections recovered after noise attenuation. (g)-(i) Energy taken as noise and extracted from each section.
2.14 Partitioning of the F-K spectrum used during DCT of Sections 1-3 (panels a-c, respectively). After setting the threshold levels according to scales, we introduce an additional threshold weighting factor c for coefficients localized in certain angular wedges. Dark polygons subtracted from curvelets contain coefficients thresholded with additionally weighted threshold levels.
2.15 Frequency spectra of the input sections: (a) Section 1, (b) Section 2, and (c) Section 3. (d-f)/(g-i) Comparison of the input section frequency spectra (red plots) and the recovered signal/noise frequency spectra (green plots).
2.16 F-K spectra of the input sections: (a) Section 1, (b) Section 2, and (c) Section 3. (d-f)/(g-i) Corresponding F-K spectra of the recovered signal/noise.
2.17 SNR values before and after noise attenuation for each section from Figure 2.12.
2.18 (a) Plot of the real part of the 6-Hz frequency data extracted from the POLCRUST-01 deep reflection seismic profile in the source-receiver coordinate system (frequency map); (b) the same panel after curvelet denoising; (c) difference between panels (a) and (b); (d) phase of the 6-Hz data; (e) phase calculated from the curvelet-denoised real and imaginary parts of the 6-Hz data; and (f) difference between panels (d) and (e).
3.1 (a) Example of an inline section (crossline spacing of 12.5 m) before curvelet denoising. (b) Section from (a) after denoising focused on maximal signal preservation. (c) Section from (a) after denoising focused on maximal random energy reduction. (d) Difference between (a) and (b). (e) Difference between (a) and (c).
3.2 (a) Example of a single time-slice section (inline/crossline spacing of 30 m/11 m) before curvelet denoising. (b) Time slice from (a) after denoising in the inline direction. (c) Time slice from (a) after denoising in the inline and crossline directions. (d) Time slice from (a) after denoising in the inline, crossline, and time-slice directions. (e) Difference between (a) and (b). (f) Difference between (a) and (c). (g) Difference between (a) and (d).
3.3 (a) Data before curvelet denoising. (b) Data after curvelet denoising. Note the preserved continuity of events at the intersections of planes despite processing of the volume sliced into 2D sections. (c) Difference between (a) and (b) without visible phase shifts.
3.4 (a) Chair plot of the DMO volume from the Flin Flon mining camp before denoising. (b) Data from (a) after F-XY deconvolution. (c) Data from (a) after curvelet denoising. (d) Difference between (a) and (b). (e) Difference between (a) and (c).
3.5 SNR estimated for each inline from the Flin Flon DMO volume before denoising (blue x) and after F-XY deconvolution (green x) and curvelet denoising (red x). Mean SNR values are marked by dashed lines.
3.6 (a) Chair plot of the post-DMO migrated volume from the Lalor mining camp before denoising. (b) Data from (a) after F-XY deconvolution. (c) Data from (a) after curvelet denoising. (d) Difference between (a) and (b). (e) Difference between (a) and (c).
3.7 SNR estimated for each inline from the Lalor 3D volume before denoising (blue x) and after F-XY deconvolution (green x) and curvelet denoising (red x). Mean SNR values are marked by dashed lines.
3.8 (a) Part of an inline section from the Lalor 3D volume (crossline spacing of 12.5 m) before denoising. (b) Section from (a) after F-XY deconvolution. Note the significant reduction of seismic energy down to 500 ms. (c) Section from (a) after curvelet denoising. Weak, poorly correlated, short events were extracted in the shallow part down to 500 ms.
3.9 (a) Chair plot of the DMO volume from the Brunswick survey before denoising. (b) Data from (a) after F-X deconvolution performed twice, i.e., in the inline and crossline directions. (c) Data from (a) after curvelet denoising. (d) Difference between (a) and (b). (e) Difference between (a) and (c).
3.10 SNR estimated for each inline from the Brunswick 3D volume before denoising (blue x) and after F-XY deconvolution (green x) and curvelet denoising (red x). Mean SNR values are marked by dashed lines.
3.11 (a) An inline section from the Brunswick 3D volume (crossline spacing of 11 m) before denoising, with shallow reflections hiding the true structure marked by a black ellipse and an event with direction opposite to the main structure marked by a green ellipse. (b) Section from (a) after curvelet denoising. (c) and (d) Sections from (a) and (b) after application of the F-K mute. The shallow structure is now visible, but events with dips coinciding with the applied mute are damaged. (e) Section from (b) after additional angle-adaptive thresholding of curvelet coefficients. The shallow structure is clearly visible, and all events are preserved.
4.1 Location of the study area (dashed rectangle) in Central Poland. Known salt structures are marked by different colors: yellow - salt pillows, pink - partially pierced salt diapirs, orange - fully pierced salt diapirs, and blue - non-salt anticlines. Gray shading marks the extent of the Mid-Polish Swell outlined by the sub-Cenozoic sub-crops of the Lower Cretaceous or older rocks (map modified from Krzywiec, 2012b). The inset on the right shows the location of the transects selected for depth imaging (blue lines).
4.2 Simplified block diagram of the depth imaging workflow. Red boxes indicate modifications where our in-house curvelet-based denoising scheme was applied.
4.3 Block diagram of data conditioning for moveout picking. Items in red boxes are related to curvelet-based denoising.
4.4 Data from line WA04WC04 displayed as a pseudo-3D volume representing a CDP x offset x depth volume: (a) raw data and (b) data after curvelet denoising.
4.5 Sample CDP gathers from line WA04WC04 after (a) conventional and (b) curvelet-based conditioning. Note the enhancement of the hyperbolic moveouts between 4 and 7 km depth.
4.6 RMO picks (blue - speed up, red - slow down) produced on the CDP gathers from line WA04WC04 after (a) conventional and (b) curvelet-based conditioning. Seed points for RMO picking were equispaced in depth every 200 m.
4.7 Tomographic QC display obtained from the RMOs picked on the CDP gathers after (a) conventional and (b) curvelet-based conditioning. Colour cells denote how the respective picks on the gathers are mapped into the velocity changes on the depth grid (blue - speed up, red - slow down).
4.8 Results of an additional statistical QC of the RMO picks. (a) Accepted (blue) and removed (red) RMO picks on the CDP gathers and (b) tomographic QC display obtained from the RMOs picked on the CDP gathers after curvelet-based conditioning and after the additional statistical removal of picks (blue - speed up, red - slow down). Compare to Figure 4.7b.
4.9 Part of the PreSDM stacks for line WA04WC04 overlaid with the velocity perturbations from tomographic inversions run with the RMO picks obtained on gathers after (a) conventional and (b) curvelet-based conditioning followed by the statistical QC.
4.10 Final depth-migrated stacks along line KC05KF05 (a) and line WA04WC04 (b) (see locations in Figure 4.1) overlaid with the final velocity models. Note the vertical exaggeration. Acronyms denote the following salt structures: KSD - Kłodawa salt diapir, USP - Uniejów salt pillow, PWS - Ponętów-Wartkowice structure, and TSP - Turek salt pillow. Main stratigraphic intervals are also indicated.
5.1 Location map of the 3D-3C Lalor seismic survey (adapted from Bellefleur et al., 2015).
5.2 Example of radial-component shot gathers contaminated with the ground-roll.
5.3 Perspective view of the two surfaces corresponding to the upper and lower limits of the 3D window used to extract the portion of the data containing the ground-roll.
5.4 Shot gathers from Figure 5.2 after application of curvelet filtering.
5.5 CDP gathers (a) before and (b) after curvelet filtering without NMO correction. (c) Difference between (a) and (b).
5.6 Example section migrated with (a) manually and (b) automatically derived velocity models.
5.7 Sections from Figure 5.6a,b with the corresponding velocity fields (5700-6300 m/s) overlaid.
5.8 (a) Raw surface microseismic data with band-pass filter and amplitude balance. Results of filtering (a) using: (b) DCT, (c) MCCF, (d) MCCF+DCT. (e) and (f) Residuals between (a) and the filtered images (c) and (d), respectively.
5.9 (a) Geodynamic setting of the Nankai Trough. The solid red line represents the seismic profile of the TKY-21 experiment. (b) Zoomed view of the TKY-21 survey area, overlaid with the bathymetry variations. The black line and the dashed red line represent the shot profile and the receiver line, respectively. The white star represents the position of OBS 14 presented in Figure 5.10.
5.10 (a-b) OBS gather 14 (a) before and (b) after coherency filtering and deconvolution. The red and green lines represent the initial and final first-arrival traveltime picks, respectively. The white-shaded area highlights a complex portion of the data characterized by a shadow zone with a weak first arrival and a complex set of post-critical reflections and diffractions.
5.11 Real part of the monochromatic data visualized in the shot-receiver coordinate system. The presented frequency is 1.5 Hz. The dataset is shown (a) before and (b) after coherency filtering. (c) The difference between (a) and (b), showing the removed noise and outliers.

List of Abbreviations

AGC - Automatic Gain Control
CT - Curvelet Transform
CDP - Common Depth Point
CRS - Common Reflection Surface
DCT - Discrete Curvelet Transform
DMO - Dip Moveout
FAT - First Arrival Tomography
FFT - Fast Fourier Transform
FWI - Full Waveform Inversion
IFFT - Inverse Fast Fourier Transform
KSD - Kłodawa Salt Diapir
LVZ - Low-Velocity Zone
MCCF - Multichannel Convolution Filter
MPS - Mid-Polish Swell
NIP - Normal Incidence Point
NMO - Normal Moveout
OBS - Ocean Bottom Seismometer
PreSDM - Pre-Stack Depth Migration
PreSTM - Pre-Stack Time Migration
PWS - Ponętów–Wartkowice Structure
QC - Quality Control
RMO - Residual Moveout
RMS - Root Mean Square
RTM - Reverse Time Migration
SNR - Signal-to-Noise Ratio
TSP - Turek Salt Pillow
USP - Uniejów Salt Pillow
WT - Wavelet Transform

Introduction

Motivation and objectives

In geophysical data processing, the presence of noise remains one of the main challenges. Regardless of the acquisition environment, geometrical setting, employed equipment, etc., recorded data always contain a useful part - the signal - and undesirable distortions - the noise. Decades of ongoing research into seismic noise attenuation have produced a broad range of denoising methods, reflected in numerous, increasingly sophisticated techniques that are now specialized in removing particular types of noise. Simultaneously, much effort is devoted to the development of a versatile tool able to provide an optimal representation of signal and noise, not only for some distinct problems, but covering an extensive scope of seismic data and the noise they contain. Such a universal method would have great potential to become a valuable contribution to standard seismic processing. Therefore, motivated by the shortcomings of case-oriented denoising approaches, researchers look for inspiration and solutions emerging from other study areas - in particular from image processing. The main aim of this thesis is to investigate the potential benefits of a recently developed method of image decomposition, called the curvelet transform (CT), for seismic data denoising. The CT in its current form (the so-called "second generation" CT) was developed and published together with its discrete implementation (DCT) by Candès et al. (2006).


It was introduced as a transform with an inherent ability to provide a sparse representation of curvilinear objects using multiscale and multidirectional basis functions - curvelets. The working hypothesis is that the DCT allows one to accurately represent any seismic section and decompose it into a set of curvelet coefficients that can be separated to select only the desired part of the data (i.e. signal of arbitrary characteristics, shape, location, etc.). The objective of the research described in this thesis was to develop a workflow for scale- and angle-dependent selection/weighting of the curvelet coefficients (thresholding) to build a robust framework for optimal signal and noise separation. For this purpose, in the following chapters, I provide the necessary insight into the properties of the DCT which are crucial from the seismic data denoising point of view. This is followed by the demonstration of a broad range of applications which validate the flexibility of the developed methodology. I present how this approach can address the limitations of some more routinely applied denoising methods, especially when dealing with complex signals. I also demonstrate that, with the DCT, one is able to deliver superior processing and imaging results both in 2D and 3D. I advocate that the method can be introduced at any stage of seismic processing, either as a stand-alone tool or as a complement to other denoising schemes to overcome some of their limitations.
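The decompose/threshold/reconstruct pattern behind this workflow can be illustrated with a minimal transform-domain sketch. The snippet below is not the DCT itself: a plain 2D FFT stands in for the curvelet decomposition, and a simple global amplitude cut replaces the scale- and angle-dependent thresholding; the synthetic "section" (a plane wave plus random noise) is purely illustrative.

```python
import numpy as np

def threshold_denoise(section, rel_threshold=0.1):
    """Hard-threshold transform coefficients of a 2D section.

    A 2D FFT stands in for the curvelet transform: decompose, zero all
    coefficients below a fraction of the largest coefficient magnitude,
    and reconstruct.
    """
    coeffs = np.fft.fft2(section)
    mask = np.abs(coeffs) >= rel_threshold * np.abs(coeffs).max()
    return np.real(np.fft.ifft2(coeffs * mask))

# Synthetic test: a coherent "reflection" (plane wave) plus random noise.
rng = np.random.default_rng(0)
t = np.arange(64)
signal = np.sin(2 * np.pi * 4 * t / 64)[:, None] * np.ones((1, 64))
noisy = signal + 0.2 * rng.standard_normal((64, 64))

denoised = threshold_denoise(noisy, rel_threshold=0.25)
mse = lambda a, b: np.mean((a - b) ** 2)
print(mse(noisy, signal) > mse(denoised, signal))  # -> True
```

Because the coherent event maps to a handful of large coefficients while the random noise spreads thinly over all of them, the amplitude cut separates the two; the curvelet version of this idea, developed in Chapter 1, additionally exploits scale and dip.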

Thesis outline

The thesis guides the reader through the application of curvelet denoising to different types of seismic data, preceded by a review of the theory and implementation of the CT and its applicability to seismic data processing. In the presented applications I proceed from illustrative, simple 2D synthetic examples and gradually increase the complexity of the processing, using different 2D real datasets followed by 3D post-stack volumes and various types of pre-stack gathers.

In Chapter 1, I provide the necessary theoretical background of the CT, followed by the outline of the curvelet-based denoising scheme. I also present the principles of adaptive thresholding according to different scales and dips in the curvelet domain, allowing for separation of the signal of interest from the random noise.

In Chapter 2, I proceed with practical aspects of the application of the DCT to different types of 2D synthetic and real seismic data. I also focus on the qualitative and quantitative evaluation of the obtained results. This chapter, together with the part of Chapter 1 presenting the denoising methodology and the synthetic results, was published in Górszczyk et al. (2014).

In Chapter 3, I explore the applicability of the 2D DCT to 3D post-stack seismic data. I use three data volumes acquired for ore exploration purposes in the hardrock environment of different mining camps in Canada. They are characterized by a low signal-to-noise ratio (SNR) and are significantly contaminated with scattered energy. I compare the results of DCT-based filtering with those obtained using the more common method of coherency filtering (F-X deconvolution) and measure the improvement of the SNR to quantitatively express the quality of the final images. This study was published in Górszczyk et al. (2015b).

In Chapter 4, I demonstrate a DCT-assisted pre-stack depth migration (PreSDM) workflow dedicated to the processing of vintage reflection data. Such data are typically characterized by low fold and low SNR, providing limited accuracy of moveout picking, which hampers consistent velocity model updates during reflection tomography. As a consequence, reflectors in the final PreSDM section can be locally shifted with respect to their true position. The presented methodology aims to: (i) improve the SNR of the common depth point (CDP) gathers subject to moveout picking; (ii) remove outliers from the set of picked moveouts to improve their consistency; (iii) provide a correct velocity background model for PreSDM. This chapter was published in Górszczyk et al. (2015a).

In Chapter 5, I gather another four applications of the DCT to different types of pre-stack seismic data. I consider denoising of pre-stack gathers the most challenging, since the signal can be difficult to track before stacking. Also, the volume of input data is significantly larger compared to post-stack sections. This chapter is partially dedicated to convincing the reader of the versatility of DCT-based noise attenuation. I present filtering examples of: (i) 3D pre-stack gathers from the hardrock environment, published in Górszczyk et al. (2016a); (ii) microseismic data acquired during monitoring of hydraulic stimulation, illustrating the potential limitations of curvelet denoising, published in Trojanowski et al. (2016); (iii) crustal-scale, long-offset ocean bottom seismometer (OBS) data, published in Górszczyk et al. (2016b). I close the thesis with the Conclusions section.

References

Candès, E., Demanet, L., Donoho, D., Ying, L., 2006. Fast Discrete Curvelet Transforms. Multiscale Modeling & Simulation 5 (3), 861–899.

Górszczyk, A., Adamczyk, A., Malinowski, M., 2014. Application of curvelet denoising to 2D and 3D seismic data — Practical considerations. Journal of Applied Geophysics 105, 78–94.

Górszczyk, A., Cyz, M., Malinowski, M., 2015a. Improving depth imaging of legacy seismic data using curvelet-based gather conditioning: A case study from Central Poland. Journal of Applied Geophysics 117, 73–80.

Górszczyk, A., Malinowski, M., Bellefleur, G., 2015b. Enhancing 3D post-stack seismic data acquired in hardrock environment using 2D curvelet transform. Geophysical Prospecting 63 (4), 903–918.

Górszczyk, A., Malinowski, M., Bellefleur, G., 2016a. Applications of Curvelet Transform in Hardrock Seismic Exploration. In: EAGE/DGG Workshop on Deep Mineral Exploration. EAGE Expanded Abstracts, 5 pp., doi: 10.3997/2214-4609.201600040.

Górszczyk, A., Malinowski, M., Operto, S., 2016b. Crustal-scale Imaging from Ultra-long Offset Node Data by Full Waveform Inversion - How to Do It Right? In: 78th EAGE Conference and Exhibition 2016. EAGE Expanded Abstracts, 5 pp., doi: 10.3997/2214-4609.201601190.

Trojanowski, J., Górszczyk, A., Eisner, L., 2016. A multichannel convolution filter for correlated noise: Microseismic data application. In: SEG Technical Program Expanded Abstracts 2016. Society of Exploration Geophysicists, 5 pp., doi: 10.1190/segam2016-13453668.1.


Chapter 1

Fundamentals of the noise attenuation using curvelet transform

1.1 Introduction

∗ Chapter based on: Górszczyk, A., Adamczyk, A., Malinowski, M., 2014. Application of curvelet denoising to 2D and 3D seismic data - Practical considerations. Journal of Applied Geophysics 105, 78–94.

During seismic experiments utilizing active sources, one can perceive the notion of noise in two different ways. Firstly (and more intuitively), noise can be understood as an imprint of the surrounding environment generating seismic energy which is not related to these active sources. With such an understanding of noise, we can further distinguish (depending on its origin) between natural and anthropogenic noise. The former is related to ongoing processes in the natural environment itself (wind, earthquakes, sea waves, animals, etc.), while the latter is caused by human activity (roads, power lines, construction sites, etc.). Both natural and anthropogenic noise can have either coherent or random characteristics and different frequency bands, hence requiring separate approaches during denoising. However, apart from this first understanding of noise, there is also a second interpretation, which can be referred to as source-related noise. In this case, what can be


considered noise is more conditional. Typically, the term "source-generated noise" refers to the noise generated, e.g., by the mechanical system of the vibrator or by the mud and rocks ejected from the shot hole. On the other hand, the recorded wavefield consists of different types of waveforms - air waves, direct waves, surface waves, reflected and refracted waves, multiples or converted waves - and all these arrivals interfere with each other, making the wavefield complex. Therefore, for example, during typical reflection imaging only the reflected waves will be considered as signal, while the other arrivals will act as noise which should be removed during processing. Since this type of noise is generated by the active source, it usually has a coherent character and a limited frequency band and, depending on the geological complexity, its appearance is predictable to a certain degree.

For random noise attenuation, simple median filtering (e.g. Bednar, 1983; Yilmaz, 2001) can be utilized; however, its robustness is limited by the fact that a strong median filter can significantly deform the signal. The more robust and routinely used F-X deconvolution filter (e.g. Canales, 1984; Yilmaz, 2001) is based on the prediction of linear events in the frequency-space domain. The drawback here is that it can only predict a single, dominant dip within a running window, hence removing more subtle arrivals whose angle conflicts with the main dip. Other semblance-based coherency filtering methods (e.g. Milkereit and Spencer, 1989; Kong et al., 1985) can lead to misinterpretation by creating smeared artefacts imitating seismic signal. The Radon or tau-p transform (Kappus et al., 1990; Turner, 1990; Zhou and Greenhalgh, 1994), although primarily dedicated to multiple elimination, can also be partially helpful in attenuating random noise by localizing and boosting coherent events.
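The median-filtering behaviour described above can be reproduced in a few lines. The sketch below (plain numpy, with a hypothetical flat panel and spike amplitudes chosen only for illustration) shows a small 3×3 median window removing isolated impulsive noise; a much larger window applied the same way would begin to flatten genuine short-wavelength events, which is the robustness limitation noted in the text.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def median_filter_2d(data, size=3):
    """Simple 2D median filter (edge samples are left untouched)."""
    pad = size // 2
    out = data.copy()
    windows = sliding_window_view(data, (size, size))
    out[pad:-pad, pad:-pad] = np.median(windows, axis=(2, 3))
    return out

# A flat "trace panel" with isolated spikes (impulsive noise).
panel = np.ones((16, 16))
panel[4, 7] = 25.0    # spike
panel[10, 3] = -30.0  # spike
filtered = median_filter_2d(panel, size=3)
print(filtered.max(), filtered.min())  # -> 1.0 1.0
```

Each 3×3 window contains at most one spike among nine samples, so the median always returns the background value; the spikes vanish without smearing into neighbours, which is exactly why median filters suit impulsive rather than coherent noise.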
Therefore, it seems that one of the key objectives during seismic data denoising is to provide a "suitable" representation of the recorded wavefield which will allow for feasible extraction of what is treated as the signal. Indeed, what we consider useful for solving a certain imaging problem can be skipped when utilizing the same data for some other purpose. However, some representations of the seismic data realized by particular


mathematical transformations can be considered more comprehensive than others in terms of the ability to extract the useful signal. Many of these representations come from the field of image processing and are derived from the Fourier transform (FT). This is because partitioning an image into individual frequency components can help in the interpretation and/or separation of certain features of this image. Since in geophysical data processing one can consider any collection of seismic traces (e.g. pre-stack gathers, post-stack sections, etc.) as an image, the application of image processing methods to seismic processing is quite straightforward. Routinely applied band-pass filtering can be viewed as the simplest example of frequency analysis, where a desired interval of the spectrum is selected. Its 2D extension (F-K filtering, e.g. Stewart and Schieck, 1989; Yilmaz, 2001) allows one not only to target certain frequency bands, but also to track the slope of the features, which can be useful for coherent noise removal. Both of these basic tools employing Fourier analysis provide access to the different frequency components and hence the ability to control the resolution of the image details. However, despite their considerable usefulness, these classical multiresolution ideas lack information about the time or spatial position of the mapped features. Moreover, it is worth stressing that the FT has an inherent difficulty in providing a sparse representation (using a finite series of continuous sine and cosine waves) of sharp discontinuities of the signal. To partially overcome these limitations, the wavelet transform (WT) was proposed (see Daubechies, 1992, and references therein). Wavelets reconstruct the signal with finite, wave-like functions which are localized both in time and frequency, due to their shift and scale parameters.
As a consequence, discontinuities of the signal can be captured using a smaller number of coefficients, providing a sparser representation than in the case of the FT. This is especially true for 1D signals, where many Fourier coefficients are needed to represent a single discontinuity. However, the weakness of the WT lies in the precise representation of 2D objects that are smooth along a curve and sharply discontinuous in the normal direction. Because classical 2D wavelets can only be horizontal, vertical or diagonal, they are not suitable


to adapt to curvilinear features with high discontinuity contrast. Therefore, even though the 2D reconstruction provided by wavelets is superior to that of the FT, because of their isotropic nature they are still unable to maintain a sparse representation. This shortcoming of the WT in representing edges motivated the development of sparsity-promoting image decomposition methods. Some of them - e.g. the bandelet transform (Le Pennec and Mallat, 2005), contourlet transform (Do and Vetterli, 2005), curvelet transform (Candès et al., 2006), seislet transform (Fomel and Liu, 2010) or shearlet transform (Kutyniok et al., 2012) - utilize fixed basis functions able to effectively handle curvilinear objects. In particular, curvelets can be considered robust functions that account not only for the frequency and location but also for the direction of the represented feature. Because of their anisotropic characteristics, curvelets need significantly fewer coefficients than wavelets to represent edges. In fact, representations provided by the DCT are nearly as sparse as if the object were not singular, and turn out to be far sparser than the wavelet decomposition (Candès and Demanet, 2005).
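The underlying sparsity claim can be checked numerically in its simplest, 1D form: for a signal with a step discontinuity, an orthonormal Haar wavelet decomposition (a minimal hand-rolled version below, not a library call) concentrates the energy in far fewer coefficients than the Fourier transform. This only illustrates the 1D argument; the 2D curvelet-versus-wavelet comparison follows the same logic along curved edges.

```python
import numpy as np

def haar_dwt(x):
    """Full orthonormal Haar wavelet decomposition of a length-2^k signal."""
    coeffs = []
    approx = x.astype(float)
    while len(approx) > 1:
        even, odd = approx[0::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))  # detail coefficients
        approx = (even + odd) / np.sqrt(2)        # running average
    coeffs.append(approx)                         # coarsest approximation
    return np.concatenate(coeffs)

def n_coeffs_for_energy(c, frac=0.99):
    """Number of largest coefficients needed to capture `frac` of the energy."""
    e = np.sort(np.abs(c) ** 2)[::-1]
    return int(np.searchsorted(np.cumsum(e), frac * e.sum()) + 1)

# A step discontinuity: hard for Fourier, easy for Haar.
x = np.zeros(256)
x[77:] = 1.0
fourier = np.fft.fft(x, norm="ortho")  # orthonormal, so energies compare fairly
haar = haar_dwt(x)
print(n_coeffs_for_energy(haar) < n_coeffs_for_energy(fourier))  # -> True
```

Only the Haar coefficients whose support straddles the jump are nonzero (a handful per scale), whereas the Fourier coefficients of a step decay slowly and many are needed for the same energy fraction.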

1.2 General description of the curvelets

The CT was first introduced by Candès and Donoho (1999a) as a multiscale and sparse representation of signals suited to mapping edges in image processing problems. It was, in particular, an answer to the difficulties that wavelets reveal when used to represent higher-dimensional linear singularities. Soon after curvelets were introduced, an approach to image restoration combining curvelets and wavelets was presented (Starck et al., 2001). The first generation of the CT was based on the ridgelet transform (Candès and Donoho, 1999b) applied to blocks of data defined by 2D spatial coordinates and frequency. This implementation, however, had limited application, since the geometry of ridgelets is unclear: they are not true ridge functions in digital images. This led the authors to redesign their construction and make it simpler, faster and less redundant. As a result, they substituted the block ridgelet transform with direct 2D frequency-domain partitioning


and created two digital implementations of the so-called "second generation" DCT:

• DCT via unequispaced FFT,
• DCT via wrapping.

The main difference between them comes from the choice of the spatial grid used to translate curvelets at each scale and angle (Candès et al., 2006). Since that time, curvelets have evolved and found various fields of application - especially noise attenuation. The essence of curvelet-based denoising in image processing was described by Starck et al. (2002). It was just a matter of time before the DCT found its application also in seismic data processing, where the recorded signal usually has the form of continuous smooth lines oscillating in the direction in which the wavefront moves. In stacked seismic sections, reflections are represented as linear objects of a similar nature, oscillating in the direction perpendicular to their spatial extension. Even if the subsurface structure is complicated, with conflicting dips, intersecting curved events, faults, etc., the DCT is still able to decompose the data into a linear sum of needle-shaped atoms named curvelets. In this context, Herrmann et al. (2008a,b) showed the application of curvelets to reconstruct data from incomplete measurements, to separate primaries and multiples, and to restore migration amplitudes. Cristall et al. (2004) used the DCT to mitigate acquisition and imaging problems in 4D seismic surveys. Numerical examples of DCT application in common-offset Kirchhoff depth migration were presented by Douma and de Hoop (2007). Neelamani et al. (2008) introduced curvelet-based noise attenuation for 3D seismic data corrupted with random and linear noise. Further seismic problems, like ground-roll attenuation (Yarham and Herrmann, 2008) or enhancing crustal reflection data with sparsity promotion (Kumar et al., 2011), were solved with curvelet-based methods, proving their value for seismic processing.
Curvelets are multiscale, multidirectional and local in both the spatial and frequency domains, as presented in Figure 1.1a,b. Moreover, in the Fourier domain they occupy strictly localized, directional wedges (Figure 1.1c), reflecting the fact that they obey the "parabolic scaling" law: for each of the angular polygons, if its length dimension is proportional to $2^j$, where $j$ is an integer, then its width dimension is approximately proportional to $2^{j/2}$. These features require increased parametrization (in comparison with, e.g., the FT). Each single curvelet is described by a characteristic frequency (scale), dip (angle) and two spatial coordinates, in the manner shown in Figure 1.1d. Such parametrization allows one to map particular smoothly-continuous events, such as wavefronts or reflectors, and random noise into separate (or, in practice, almost separate) sets of coefficients, which makes the curvelet domain favourable for seismic data processing.

Figure 1.1. Examples of a few curvelets of different scales, angles and locations in the spatial domain (a) and their representation in the frequency domain (b), where they are positioned in the orthogonal direction; (c) partitioning of the 2D frequency plane into concentric, rectangular annuli and their inner segmentation into angular wedges which obey the parabolic scaling. The number of wedges doubles every second scale, making curvelets of higher frequency more anisotropic. (d) Zoom of curvelet no. 3 from (a); each of the curvelets is described by its frequency, angle (dip) and position. Localization $(x_1, x_2)$ is possible because of the rapid amplitude decay in the spatial domain outside the region indicated by the dashed-line ellipse.

Another important feature of the curvelet transform is the sparsity of the data representation: the fewer coefficients required to represent the signal, the more successful the transformation. By thresholding the curvelet coefficients, one is able to achieve a high level of sparsity, which can be an advantage in further, easier and faster processing, especially when dealing with large datasets (Herrmann et al., 2008a,b).

Figure 1.2. Curve approximation with 2D wavelets (a) and curvelets (b).

Indeed, with the DCT one is able to obtain a very high order of approximation for curved objects with different conflicting orientations and intersections. Figure 1.2 illustrates a simple schematic curve reconstruction using 2D wavelets (a) and curvelets (b). It is clear that the curvelet approximation produces a smoother shape representation while using fewer coefficients than the 2D wavelet approximation.

1.3 Theory of the curvelet transform

This section provides a brief review of the main features of the continuous curvelet transform and of its discrete implementation via wrapping, together with their mathematical constructions, following the nomenclature of Candès et al. (2006).

1.3.1 Continuous curvelet transform

Let us assume a two-dimensional plane $\mathbb{R}^2$ with spatial coordinates $x = (x_1, x_2)$, frequency-domain variable $\omega = (\omega_1, \omega_2)$, and $r$ and $\theta$ being polar coordinates in the frequency domain. On this plane we define two smooth, nonnegative and real-valued windows $W(r)$ and $V(t)$, called the radial window and the angular window, respectively. Both of these windows are supported on $r \in (1/2, 2)$ and $t \in [-1, 1]$ and always obey the admissibility conditions:

$$\sum_{j=-\infty}^{\infty} W^2(2^j r) = 1, \qquad r \in (3/4, 3/2); \tag{1.1}$$

$$\sum_{l=-\infty}^{\infty} V^2(t - l) = 1, \qquad t \in (-1/2, 1/2). \tag{1.2}$$

Now, for each $j \geq j_0$, in the frequency domain, by support of the radial and angular windows $W$ and $V$ applied with scale-dependent widths, we define a window $U_j$ as follows:

$$U_j(r, \theta) = 2^{-3j/4}\, W(2^{-j} r)\, V\!\left(\frac{2^{\lfloor j/2 \rfloor}\theta}{2\pi}\right). \tag{1.3}$$

The support of $U_j$ can be considered as a polar wedge whose length and width depend on the shape of the radial and angular windows, respectively. From Equation 1.3 it can be seen that with increasing value of $j$ the support of $W$ moves to higher frequencies while the support of $V$ narrows, such that the angular wedges $U_j$ become longer and thinner. The window $U_j$ is now applied to build curvelet functions as follows. Let the waveform $\varphi_{j,0,0}(x)$ be given by its Fourier transform:

$$\hat{\varphi}_{j,0,0}(\omega) = U_j(\omega). \tag{1.4}$$

The function $\varphi_{j,0,0}$ can be considered a "mother" curvelet in the sense that, for scale $2^{-j}$, a whole family of curvelets can be obtained by rotations and translations. For this purpose we define:

• the equispaced sequence of rotation angles $\theta_l = 2\pi \cdot 2^{-\lfloor j/2 \rfloor} \cdot l$, with $l = 0, 1, \ldots$ such that $0 \leq \theta_l < 2\pi$ (note that the spacing between angles depends on the scale);
• the sequence of translation parameters $k = (k_1, k_2) \in \mathbb{Z}^2$.

Using these notations, we define the curvelet (as a function of $x$) at scale $2^{-j}$, orientation $\theta_l$ and position $x_k^{(j,l)} = R_{\theta_l}^{-1}(k_1 \cdot 2^{-j}, k_2 \cdot 2^{-j/2})$ as follows:

$$\varphi_{j,l,k}(x) = \varphi_{j,0,0}\!\left(R_{\theta_l}\big(x - x_k^{(j,l)}\big)\right), \tag{1.5}$$

where $R_\theta$ is the rotation by $\theta$ radians and $R_\theta^{-1}$ its inverse (and transpose $R_\theta^T$):

$$R_\theta = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}, \qquad R_\theta^{-1} = R_\theta^{T} = R_{-\theta}. \tag{1.6}$$
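The identities $R_\theta^{-1} = R_\theta^T = R_{-\theta}$ in Equation 1.6 are a direct consequence of $R_\theta$ being orthogonal with determinant 1, and are easy to confirm numerically (the angle value below is arbitrary):

```python
import numpy as np

def rot(theta):
    """Rotation matrix R_theta from Equation 1.6."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

theta = 0.3  # any angle works
assert np.allclose(np.linalg.inv(rot(theta)), rot(theta).T)  # inverse = transpose
assert np.allclose(rot(theta).T, rot(-theta))                # transpose = R_{-theta}
print("R_theta^-1 = R_theta^T = R_{-theta} verified")
```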

The corresponding curvelet coefficient is then simply obtained as the inner product between an element $f \in L^2(\mathbb{R}^2)$ and a curvelet $\varphi_{j,l,k}$:

$$c(j, l, k) := \langle f, \varphi_{j,l,k} \rangle = \int_{\mathbb{R}^2} f(x)\, \overline{\varphi_{j,l,k}(x)}\, dx. \tag{1.7}$$

Since the curvelet transform originates from the frequency domain, we can apply Plancherel's theorem and express Equation 1.7 as an integral over the frequency plane:

$$c(j, l, k) := \frac{1}{(2\pi)^2} \int \hat{f}(\omega)\, \overline{\hat{\varphi}_{j,l,k}(\omega)}\, d\omega = \frac{1}{(2\pi)^2} \int \hat{f}(\omega)\, U_j(R_{\theta_l}\omega)\, e^{i\langle x_k^{(j,l)},\, \omega\rangle}\, d\omega. \tag{1.8}$$

Similarly to wavelet theory, the curvelet transform also has coarse-scale, nondirectional elements. They are obtained with a window $W_0$ such that:

$$|W_0(r)|^2 + \sum_{j \geq 0} |W(2^{-j} r)|^2 = 1, \tag{1.9}$$

and location $k = (k_1, k_2) \in \mathbb{Z}^2$, as follows:

$$\Phi_{j_0,k}(x) = \Phi_{j_0}(x - 2^{-j_0} k), \qquad \hat{\Phi}_{j_0}(\omega) = 2^{-j_0} W_0(2^{-j_0}|\omega|). \tag{1.10}$$

Hence, the complete curvelet transform consists of the fine-scale directional elements $(\varphi_{j,l,k})_{j \geq j_0,\, l,\, k}$ and the coarse-scale nondirectional wavelets $(\Phi_{j_0,k})_k$, although it is the anisotropic behaviour of curvelets which is one of their main advantages. Figure 1.3 demonstrates the principles of curvelet construction.

Figure 1.3. Curvelet tiling of space and frequency. (a) The introduced tiling of the frequency plane. In Fourier space, curvelets are supported near a “parabolic” wedge marked by the dark-shaded area. (b) Schematic representation of the spatial Cartesian grid associated with a given scale and orientation.

Summarizing this section, we can list a few properties of the curvelet transform.

1. Tight frame. As in an orthonormal basis, we can expand a function $f(x_1, x_2) \in L^2(\mathbb{R}^2)$ as a series of curvelets (including coarse-scale elements) with the reconstruction formula:

$$f = \sum_{j,l,k} \langle f, \varphi_{j,l,k} \rangle\, \varphi_{j,l,k}, \tag{1.11}$$

holding with equality in the $L^2$ sense, and the Parseval theorem:

$$\sum_{j,l,k} |\langle f, \varphi_{j,l,k} \rangle|^2 = \|f\|^2_{L^2(\mathbb{R}^2)}, \qquad \forall f \in L^2(\mathbb{R}^2). \tag{1.12}$$

2. Parabolic scaling. $\varphi_j(x)$ is of rapid decay away from a $2^{-j}$ by $2^{-j/2}$ rectangle with the major axis pointing in the vertical direction. Hence the effective length and width obey the relation:

$$\text{length} \approx 2^{-j/2}, \quad \text{width} \approx 2^{-j} \quad \Longrightarrow \quad \text{width} \approx \text{length}^2. \tag{1.13}$$

3. Oscillatory behavior. From Equations 1.3 and 1.4 it is apparent that $\hat{\varphi}_{j,0,0}$ is supported close to the horizontal axis $\omega_2 = 0$ and away from the vertical axis $\omega_1 = 0$. This translates into an oscillatory characteristic of $\varphi_{j,0,0}$ in the direction of the $x_1$ axis and a lowpass characteristic in the direction of the $x_2$ axis. Therefore, at scale $2^{-j}$ a curvelet appears as a needle-shaped plane wave with an envelope specified by a narrow ridge of effective length $2^{-j/2}$ and width $2^{-j}$, oscillating across this ridge.

4. Vanishing moments. The initial curvelet $\varphi_{j,0,0}$ has $q$ vanishing moments when:

$$\int_{-\infty}^{\infty} \varphi_{j,0,0}(x_1, x_2)\, x_1^n\, dx_1 = 0, \qquad \text{for all } 0 \leq n < q, \text{ for all } x_2. \tag{1.14}$$

The same equality holds for curvelets of different angles $\theta_l$ when $x_1$ and $x_2$ are considered in the correspondingly rotated coordinates. We can point out here that the integral is performed along the $x_1$ direction, which is perpendicular to the ridge. Hence, the number of vanishing moments characterizes the oscillatory behaviour described in the previous point. In the Fourier domain, Equation 1.14 becomes:

$$\frac{\partial^n \hat{\varphi}_{j,0,0}}{\partial \omega_1^n}(0, \omega_2) = 0, \qquad \text{for all } 0 \leq n < q, \text{ for all } \omega_2. \tag{1.15}$$

Curvelets as defined here have an infinite number of vanishing moments, since they are compactly supported in the frequency plane.
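The parabolic scaling relation of property 2 can be tabulated directly; the loop below simply evaluates Equation 1.13 over a few scales and confirms that width equals length squared while the anisotropy (length-to-width ratio) grows as $2^{j/2}$:

```python
import numpy as np

# Parabolic scaling (Equation 1.13): at scale 2^-j the curvelet envelope
# has length ~ 2^(-j/2) and width ~ 2^(-j), so width = length^2 and the
# aspect ratio length/width = 2^(j/2) doubles every second scale.
for j in range(1, 7):
    length, width = 2.0 ** (-j / 2), 2.0 ** (-j)
    assert np.isclose(width, length ** 2)
    print(j, length / width)
```

This growing anisotropy at fine scales is what lets curvelets hug a smooth curve far more efficiently than the square (isotropic) supports of 2D wavelets.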

1.3.2 Discrete curvelet transform via wrapping

In the continuous-time definition (1.3), the window $U_j$ smoothly extracts frequencies near the dyadic corona $\{2^j \leq r \leq 2^{j+1}\}$ and near the angle $\{-\pi \cdot 2^{-j/2} \leq \theta \leq \pi \cdot 2^{-j/2}\}$. Coronae and rotations are not well suited to Cartesian coordinates. Therefore, it is convenient to replace these concepts by concentric squares (instead of circles) and shears (instead of rotations). For example, the Cartesian analogue of the $(W_j)_{j \geq 0}$, $W_j(\omega) = W(2^{-j}\omega)$, can be expressed as follows:

$$\widetilde{W}_j(\omega) = \sqrt{\Phi^2_{j+1}(\omega) - \Phi^2_j(\omega)}, \qquad j \geq 0, \tag{1.16}$$

where $\Phi$ is obtained as the product of two one-dimensional low-pass windows:

$$\Phi_j(\omega_1, \omega_2) = \phi(2^{-j}\omega_1)\,\phi(2^{-j}\omega_2). \tag{1.17}$$

The function $\phi$ obeys $0 \leq \phi \leq 1$, may be equal to 1 on $[-1/2, 1/2]$, and vanishes outside of $[-2, 2]$. Similarly to 1.9:

$$\Phi_0(\omega)^2 + \sum_{j \geq 0} \widetilde{W}_j^2(\omega) = 1. \tag{1.18}$$

Now we can follow with the angular localization. Let $V$ obey 1.2 and set:

$$V_j(\omega) = V(2^{\lfloor j/2 \rfloor}\, \omega_2/\omega_1). \tag{1.19}$$

Using $\widetilde{W}_j$ and $V_j$, we define the window:

$$\widetilde{U}_j(\omega) := \widetilde{W}_j(\omega)\, V_j(\omega). \tag{1.20}$$

Hence, $\widetilde{U}_j$ isolates the wedge $\{(\omega_1, \omega_2) : 2^j \leq \omega_1 \leq 2^{j+1},\ -2^{-j/2} \leq \omega_2/\omega_1 \leq 2^{-j/2}\}$, which is the Cartesian equivalent of the polar window defined by Equation 1.3. Introduce now the set of equispaced slopes $\tan\theta_l := l \cdot 2^{-\lfloor j/2 \rfloor}$, $l = -2^{\lfloor j/2 \rfloor}, \ldots, 2^{\lfloor j/2 \rfloor} - 1$, and define

$$\widetilde{U}_{j,l}(\omega) := \widetilde{W}_j(\omega)\, V_j(S_{\theta_l}\omega), \tag{1.21}$$

where $S_\theta$ is the shear matrix,

$$S_\theta := \begin{pmatrix} 1 & 0 \\ -\tan\theta & 1 \end{pmatrix}. \tag{1.22}$$

The angles $\theta_l$ are not equispaced here, but the slopes are. The $\widetilde{U}_{j,l}$ is the Cartesian analogue of the $U_j(R_{\theta_l}\omega)$ from the previous section. The family $\widetilde{U}_{j,l}$ defines the concentric tiling presented in Figure 1.1c.

∞ X

|Vj (Sθl ω)|2 = 1.

(1.23)

l=−∞

Because of the support constraints of V , the above sum restricted to the angles −1 ≤ P tan θl < 1 gives |Vj (Sθl ω)|2 = 1, for ω2 /ω1 ∈ [−1 + 2−⌊j/2⌋ , 1 − 2−⌊j/2⌋ ]. Therefore all angles

we can write what follows from equation 1.18: X

X

all scales all angles

ej,l (ω)|2 = 1. |U

(1.24)

This pseudopolar tiling of the frequency plane with trapezoids is further utilized by the DCT via wrapping to obtain curvelets at each scale and angle. The curvelet coefficient on the Cartesian grid is defined as:

c(j, l, k) = ∫ f̂(ω) Ũj(Sθl^(−1) ω) e^(i⟨b,ω⟩) dω,   (1.25)

where b ≃ (k1·2^(−j), k2·2^(−j/2)) takes values on a rectangular grid.

Suppose now that we are given a Cartesian array f[t1, t2], 0 ≤ t1, t2 < n, and let f̂[n1, n2] denote its 2D Fourier transform:

f̂[n1, n2] = Σ_{t1,t2=0}^{n−1} f[t1, t2] e^(−i2π(n1 t1 + n2 t2)/n),  −n/2 ≤ n1, n2 < n/2,   (1.26)

which can be seen as samples

f̂[n1, n2] = f̂(2πn1, 2πn2)   (1.27)

of the interpolating trigonometric polynomial

f̂(ω1, ω2) = Σ_{0≤t1,t2<n} f[t1, t2] e^(−i(ω1 t1 + ω2 t2)/n).   (1.28)

For the simple example when θl = 0, the window Ũj,l[n1, n2] = Ũj[n1, n2] is supported on some rectangle of length L1,j and width L2,j:

Pj = {(n1, n2) : n1,0 ≤ n1 < n1,0 + L1,j, n2,0 ≤ n2 < n2,0 + L2,j},   (1.29)

where (n1,0, n2,0) is the index of the bottom-left corner of the rectangle and, through the parabolic scaling, L1,j is about 2^j while L2,j is about 2^(j/2).

For θl ≠ 0, however, the window Ũj,l[n1, n2] does not fit in a rectangle of size ~2^j × 2^(j/2) aligned with the axes, in which the 2D IFFT could be applied to compute the coefficients as in equation (1.25). This is because for θl ≠ 0 the window Ũj,l[n1, n2] is supported in the parallelepipedal region

Pj,l = Sθl Pj,   (1.30)

as shown in Figure 1.4. However, by periodicity, i.e. shifting by multiples of L1,j in the horizontal direction and L2,j in the vertical direction (when θ ∈ (−π/4, π/4] or (3π/4, 5π/4]; otherwise the roles of the coordinate axes must be exchanged, see Figure 1.4), it is possible to obtain the corresponding coefficients in a rectangle at the origin of the plane.

Figure 1.4. Wrapping data (initially inside a parallelogram) into a rectangle centred at the origin utilizing their periodicity.

This periodization of the windowed data d[n1, n2] = Ũj,l[n1, n2] f̂[n1, n2] reads:

Wd[n1, n2] = Σ_{m1∈Z} Σ_{m2∈Z} d[n1 + m1·L1,j, n2 + m2·L2,j].   (1.31)

The windowed data wrapped around the origin are defined as the restriction of Wd[n1, n2] to indices n1, n2 inside a rectangle of size L1,j × L2,j near the origin:

0 ≤ n1 < L1,j,  0 ≤ n2 < L2,j.   (1.32)

For indices (n1, n2) originally inside Pj,l (whose range may exceed L1,j and L2,j), the correspondence between the wrapped and the original indices is one-to-one. The wrapping transformation can therefore be seen as a simple reindexing of the data, which can alternatively be expressed using the modulus:

Wd[n1 mod L1,j, n2 mod L2,j] = d[n1, n2].   (1.33)

Summarizing this section, the architecture of the DCT via wrapping is as follows:

1. Apply the 2D FFT to obtain the Fourier samples f̂[n1, n2], −n/2 ≤ n1, n2 < n/2.
2. For each scale j and angle l, form the product Ũj,l[n1, n2] f̂[n1, n2].
3. Wrap this product around the origin to obtain f̃j,l[n1, n2] = W(Ũj,l f̂)[n1, n2], where the range of n1 and n2 is now 0 ≤ n1 < L1,j and 0 ≤ n2 < L2,j for θ ∈ (−π/4, π/4].
4. Apply the 2D IFFT to each f̃j,l and collect the discrete curvelet coefficients cD(j, l, k).
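The wrapping of equations (1.31)-(1.33) amounts to reindexing each sample of the windowed spectrum modulo the rectangle dimensions. A minimal NumPy sketch (the function name and the dense-array representation of the wedge are illustrative, not CurveLab's actual interface):

```python
import numpy as np

def wrap_to_rectangle(d, L1, L2):
    """Wrap windowed frequency data d[n1, n2], supported on a sheared
    parallelogram inside a larger grid, into an L1 x L2 rectangle at the
    origin by periodization (eqs. 1.31-1.33): each sample is sent to
    (n1 mod L1, n2 mod L2); any overlapping contributions are summed."""
    wrapped = np.zeros((L1, L2), dtype=d.dtype)
    n1, n2 = np.nonzero(d)                       # indices of the window's support
    np.add.at(wrapped, (n1 % L1, n2 % L2), d[n1, n2])
    return wrapped
```

For a support that is one-to-one under the modular map (as in the true tiling), this reindexing loses nothing, and the 2D IFFT of the wrapped rectangle yields the curvelet coefficients of step 4.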

1.4 Implementation of the curvelet-based denoising scheme

As mentioned before, the unique parametrization of the DCT makes it possible to map the coherent features of an image into a narrow (sparse) set of high-value coefficients, while the incoherent energy is represented by a broad collection of low-value coefficients. Therefore, one of the key issues in curvelet-based noise attenuation is the optimal selection of the coefficients representing the signal of interest. Kumar et al. (2011) proposed the "cooling method", which promotes sparsity and involves global hard and iterative global soft thresholding of the curvelet coefficients. Here we use a somewhat simpler approach and follow a scheme that is well known and widely applied in image processing, namely:

1. Forward DCT,
2. Analysis of the obtained curvelet coefficients,
3. Hard thresholding,
4. Inverse DCT.

We focus on a careful choice of the curvelet coefficients that need to be suppressed in order to enhance the signal. The thresholding strategy adapts its parameters to the scale and angle of the data in the curvelet domain and requires additional iterations for their adjustment. The whole algorithm is not a closed box: it leaves wide freedom in choosing the thresholding parameters, which is an advantage when


working with datasets of varying quality and noise characteristics. A detailed description of this procedure and the guidelines we follow are presented later in this chapter. We make use of two assumptions frequently adopted when dealing with seismic noise attenuation (see Hennenfent et al., 2011). The first is that the recorded seismic data y can be represented as a superposition of signal y′ and noise n:

y = y′ + n.   (1.34)

The second is that, in the curvelet domain, signal and noise map into approximately disjoint collections of coefficients. Hence we can write:

Xy′ ∩ Xn ≈ ∅,   (1.35)

where Xy′ and Xn denote the nonzero vectors of curvelet coefficients. We will now go through some details that explain and clarify the presented scheme. As mentioned in Section 1.2, there are two implementations of the DCT. For our needs, the wrapping-based forward and inverse transforms are superior because of their simpler implementation and faster execution, especially when dealing with large portions of data. Source codes of both implementations are available from the CurveLab project (www.curvelet.org). Since the main goal of this work is the practical application of the DCT to seismic data processing, the reader is referred to Candès et al. (2006) for the omitted details concerning specific aspects of the DCT implementations, which were briefly reviewed in the previous sections. As mentioned, the CT has its origins in image processing, and thus the input data take the form of an image, i.e. seismic traces are treated as equispaced columns of samples (or pixels). As long as the standard, equispaced FFT is used in the DCT implementation, irregularities in the acquisition have to be dealt with beforehand (resampling of the data, interpolation, or filling the empty traces with zeros). In general, our DCT workflow consists of four steps:

1. Applying the 2D FFT to the image,
2. Forming the product of scale and angle windows,
3. Wrapping this product around the origin,
4. Collecting discrete coefficients via the 2D IFFT.
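Combined with hard thresholding of the coefficients, these transform steps form the complete denoising loop. A minimal sketch, with the 2D FFT standing in for the forward/inverse curvelet transform pair (CurveLab's actual interface is not assumed here, and the default percentile is illustrative):

```python
import numpy as np

def hard_threshold_denoise(section, percentile=99.0):
    """Generic transform-threshold-inverse scheme: (1) forward
    transform, (2) analysis of coefficient magnitudes, (3) hard
    thresholding (coefficients below the threshold are zeroed),
    (4) inverse transform. np.fft.fft2 stands in for the forward DCT."""
    coefs = np.fft.fft2(section)                       # step 1: forward transform
    thr = np.percentile(np.abs(coefs), percentile)     # step 2: coefficient analysis
    coefs = np.where(np.abs(coefs) >= thr, coefs, 0)   # step 3: hard thresholding
    return np.fft.ifft2(coefs).real                    # step 4: inverse transform
```

Because hard thresholding only zeroes coefficients, the output energy never exceeds that of the input; with a sparsifying transform such as the DCT, the retained coefficients concentrate on the coherent events.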

Before the forward transform, we have to choose the number of scales and whether wavelets or curvelets will represent the features at the finest scale. The appropriate number of scales depends on the size of the image and the characteristics of both signal and noise; e.g., if we deal with high-frequency events, a higher number of scales might be required in order to correctly map them into the curvelet domain. Random white noise, which in the curvelet domain is represented by a large number of relatively small coefficients, is in most situations easy to remove without much deliberation. However, if the noise has a specific character, or is coherent and maps into curvelet coefficients of significant values, then a densely partitioned frequency spectrum is worth considering, since it can lead to better separation and, in consequence, attenuation of the noise.

1.4.1 Curvelet scale selection

In order to demonstrate the influence of scale partitioning on signal and noise separation in the curvelet domain, let us consider the following situation. Zero-centred white Gaussian noise, filtered between 15 and 30 Hz, was added to a synthetic noise-free seismic section. During the DCT, the 2D Fourier spectrum (or F-K spectrum) was partitioned in two ways, with 4 and 7 scales, as demonstrated in Figure 1.5a,b. The following two plots (Figure 1.5c,d) present the vectors of curvelet coefficients obtained from the noisy section for each grid. The first vector contains curvelet coefficients (from scales 1–2) obtained after the DCT with a sparsely partitioned 2D frequency spectrum. The second vector corresponds to the densely partitioned frequency domain (coefficients from scales 1–5 and partially 6). As can be clearly seen, increasing the number of scales extracted the coherent energy in the form of separate collections of higher coefficients at scales 4 and 5, whose energy was previously mixed with noise and indistinguishable. The usefulness of this


observation is explained later in Section 1.4.2.

Figure 1.5. Influence of a densely partitioned frequency domain on extracting coherent energy at certain scales. The same synthetic section was transformed into the curvelet domain with 4 (a) and 7 (b) scales. Threshold levels for each transform are displayed by dashed lines. The higher number of scales allowed thresholding at different levels (d) and better selection of coherent energy compared with the global threshold level in the case of 4 scales (c).

On the other hand, a densely partitioned spectrum may cause certain undesirable effects. If the signal is dominated by incoherent noise, it may produce significant artefacts in the form of high-frequency coefficients of relatively high value. These result from fitting the finest


scale curvelets to the incoherent noise. Furthermore, the computational cost of the forward and inverse curvelet transforms increases with the number of scales. The DCT via wrapping allows one to choose between curvelets and wavelets to represent features at the finest scale (see Candès et al., 2006). To illustrate the consequences of each option we use another synthetic example. In this case, zero-centred white Gaussian noise was added to a synthetic seismic section representing part of the Marmousi2 model (Martin et al., 2006) (Figure 1.6a - noise-free data, Figure 1.6b - data with added noise), and the DCT with wavelets and with curvelets at the finest scale was conducted. The sections presented in Figure 1.6c,d were recovered using the DCT with wavelets and curvelets at the finest scale, respectively. In both cases global thresholding was applied after empirical estimation of the optimal threshold levels, which were |thrw| = 0.45 for wavelets and |thrc| = 0.40 for curvelets. The results, though both far from ideal, suggest an advantage of using curvelets at the finest scale: the image is clearer and the weaker events are easier to follow. Another aspect of the choice between curvelets and wavelets at the finest scale is the redundancy of each approach: the ratio of the number of coefficients to the number of input samples varies from about 2.8 with wavelets to 7.2 with curvelets at the finest scale (Candès et al., 2006). In this work, despite the higher computational requirements, we prefer to use curvelets at the finest scale, as they provide superior results, especially when the recovered section contains high-frequency events.

Figure 1.6. Difference between choosing wavelets or curvelets at the finest scale. (a) Demonstrative, noise-free synthetic section from the Marmousi2 model. (b) Section (a) with white Gaussian noise added. The recovered sections in (c) and (d) are the results of global thresholding after a transform using 6 scales with wavelets and curvelets at the finest scale, respectively. In both cases visible artefacts of different shapes arise. Introducing a few threshold levels allows stronger noise attenuation and better signal preservation, as shown in (e).


1.4.2 Scale-adaptive thresholding

The reviewed properties of the DCT may significantly help to minimize the overlap of noise and signal coefficients in the curvelet domain. However, the correct configuration of the scales is just the background for the most important step of the algorithm, which is the weighting of the curvelet coefficients. During hard thresholding, coefficients with values higher than the selected threshold level are preserved, while smaller coefficients are zeroed. Therefore, picking appropriate threshold values has a direct influence on the quality of the results. It follows from our practice that starting with a threshold equal to the 98th or 99th percentile of the curvelet coefficients, depending on the noise level and the size of the input section, gives a good first approximation of the desired result. Such an estimate of the threshold level may be sufficient and suitable for a larger set of sections (i.e. inlines or crosslines from the same dataset). However, for more complicated cases of noisy data, further analysis is required. For this purpose it is good practice to plot the vector of coefficients in the manner shown in Figure 1.5c,d. Instead of introducing one global threshold level (Figure 1.5c), one can set different levels individually for each scale (Figure 1.5d). In practice, a whole scale might be attenuated when it contains coefficients corresponding to noise of a particular character or causing visible artefacts, which resembles the less sophisticated and low-cost methods, i.e. band-pass or F-K filtering. The multi-threshold approach requires careful inspection of the curvelet coefficients and additional human interaction; however, it gives a significant improvement of the results compared with global thresholding. Figure 1.6e presents the section restored using scale-adaptive thresholding. Coefficients in the curvelet domain were thresholded with 3 different levels:

• |thr1| = 0.45 - scales 1, 2, 6,
• |thr2| = 0.40 - scales 3, 5,
• |thr3| = 0.35 - scale 4.
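Mimicking CurveLab's nested coefficient layout (a list of scales, each holding per-angle arrays; the exact container type is an assumption here), scale-adaptive hard thresholding reduces to a short loop:

```python
import numpy as np

def threshold_by_scale(coeffs, levels):
    """Hard-threshold curvelet coefficients with a separate level per
    scale. `coeffs` is a list indexed by scale (scale 1 = coarsest),
    each entry a list of per-angle coefficient arrays; `levels` maps
    scale number -> threshold magnitude |thr|."""
    out = []
    for j, angles in enumerate(coeffs, start=1):
        thr = levels[j]
        out.append([np.where(np.abs(c) >= thr, c, 0.0) for c in angles])
    return out

# The three levels used for the section in Figure 1.6e:
levels = {1: 0.45, 2: 0.45, 3: 0.40, 4: 0.35, 5: 0.40, 6: 0.45}
```

The thresholded structure would then be passed unchanged to the inverse DCT.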
The improvement of the recovered section compared with Figure 1.6c is undeniable. A significant number of the artefacts visible in Figure 1.6d was also eliminated, making the image cleaner. Most of the very weak events, completely unrecognisable in the noisy input section (Figure 1.6b), were recovered and the number of artefacts was minimized.

Figure 1.7. Threshold level adjustment using 2D Fourier spectrum analysis. (a) The synthetic section presented in Figure 1.6a, heavily corrupted with coloured (0-50 Hz) noise. (b) The section recovered with the initial global threshold level still contains much of the low-frequency noise. From the 2D spectra shown as insets one can localize the scales which contain noise coefficients and better adjust their threshold levels. (c) Section recovered after increasing the threshold level at the scales containing low-frequency curvelets; some residual artefacts remain. (d) Final section after additional attenuation of certain angles (symmetric polygons bounded by dashed lines in the inset of (c)).

The next example explains how the analysis in the frequency domain helps to choose the optimal threshold level. Figure 1.7a is the same synthetic section as presented in Figure 1.6a, with added white Gaussian noise band-pass filtered such that it overlaps the signal spectrum (see the 2D Fourier spectrum in the inset). In the section only strong


reflections are recognizable and the remaining part of the signal is completely dominated by noise. The corresponding inset with the F-K spectrum confirms that the coherent energy is mixed with noise of similar frequency characteristics. Figure 1.7b presents a section recovered using the DCT with global thresholding |thr| = 0.60. Initially, we preserved only the highest 0.2% of the coefficients, and as a result most of the noise was removed. However, inspecting the inset with the F-K spectrum, one can notice that, in comparison with the initial spectrum of the signal, much of the incoherent energy remained in the central, rectangle-shaped area corresponding to certain scales in the curvelet domain. We can make use of this information and set higher attenuation levels for those scales. Hence, in the second approach, we introduced more detailed thresholding with 2 levels:

• |thr1| = 0.75 - scales 1, 2, 3,
• |thr2| = 0.55 - scales 4, 5, 6.

Again, a slight intervention in the algorithm provides a far better result (Figure 1.7c).

1.4.3 Angle-adaptive thresholding

In our consideration of the optimal thresholding approach we can go one step further. The presented example proved the superiority of tuning the coefficients' weighting according to scale. Why not try a similar approach for certain angles? Of course, transferring this approach to angular wedges requires more analysis of the coefficients in the curvelet domain. However, the obtained results are incomparably better than in the case of global thresholding. In the presented example, the coherent energy of the input section is concentrated within two wedges symmetric with respect to the origin of the F-K spectrum. After introducing a few threshold levels according to the scales, we can still point out some residual artefacts. Those are easy to remove by attenuating the remaining coefficients localized in certain parts of the 2D Fourier spectrum. The region indicated with a dashed line in the inset of Figure 1.7c corresponds to specific angular wedges of distinct scales. Multiplying the threshold levels for those angular wedges


by 1.5 leads to an improvement in result quality (Figure 1.7d). Stronger attenuation of the coefficients localized in the detected region minimizes the number of artefacts and causes less damage to the coherent energy compared with thresholding adapted to scales only (compare Figure 1.7c,d). The presented example was highly corrupted by random noise with frequency characteristics similar to the signal. This resulted in a significant overlap of signal and noise coefficients in the curvelet domain and, in consequence, in substantial damage to the weak reflections. Nevertheless, the obtained result proves the effectiveness of our denoising approach, which combines scale- and angle-adaptive thresholding.
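The angle-adaptive step only adds a multiplicative factor c on top of the per-scale levels for the selected wedges. A minimal helper (the (scale, angle) indexing convention is hypothetical):

```python
def wedge_threshold(scale_levels, scale, angle, boosted_wedges, c=1.5):
    """Return the hard-threshold level for one (scale, angle) wedge.
    Wedges listed in `boosted_wedges` (a set of (scale, angle) pairs)
    get their per-scale level multiplied by the factor c, attenuating
    coefficients in those directions more strongly."""
    thr = scale_levels[scale]
    return thr * c if (scale, angle) in boosted_wedges else thr
```

In the example above, c = 1.5 was applied to the wedges inside the dashed polygons of Figure 1.7c while all other wedges kept their per-scale levels.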

1.4.4 Signal-to-noise ratio definition

Until now we have been judging the quality of the restored signal by the "eyeball norm". In order to quantify the efficiency of the proposed method, we introduce an SNR estimation algorithm based on the assumption that the seismic signal is correlated from trace to trace while the noise is uncorrelated. Under this assumption, for each trace Ti (where i ∈ ⟨1, N − 1⟩ and N is the number of traces) we can estimate the autocorrelation of the signal ASi as the average of the cross-correlations CTiTi+1 and CTi+1Ti between two adjacent traces Ti and Ti+1:

ASi = (CTiTi+1 + CTi+1Ti) / 2.   (1.36)

To estimate the autocorrelation of the noise for the same trace Ti, the autocorrelations ATiTi and ATi+1Ti+1 are first calculated. Averaging the results of the subtractions

ATiTi − ASi = A′Ni   (1.37)

and

ATi+1Ti+1 − ASi = A′Ni+1,   (1.38)


we obtain the estimate of the noise autocorrelation ANi for trace Ti as:

ANi = (A′Ni + A′Ni+1) / 2.   (1.39)

Since the autocorrelation attains its maximum at zero lag, and the zero-lag value of the autocorrelation is simply the sum of the squared sample values, by calculating

SNRi = (rms of signal) / (rms of noise) = √( max(ASi) / max(ANi) )   (1.40)

we estimate the signal-to-noise ratio SNRi at trace Ti. Repeating this procedure for all pairs of traces [TiTi+1, Ti+1Ti+2, ..., TN−1TN], we obtain a vector of values (SNR1, SNR2, ..., SNRN−1). The final SNR is obtained by averaging:

SNR = (SNR1 + SNR2 + ... + SNRN−1) / (N − 1).   (1.41)

The approach described above is based on the work of Dash and Obaidullah (1970) and Ikelle and Amundsen (2005).
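Under the stated assumptions, equations (1.36)-(1.41) translate directly into a few lines of NumPy (a sketch; the (n_samples, n_traces) array layout is an assumption):

```python
import numpy as np

def estimate_snr(section):
    """Trace-to-trace SNR estimate (eqs. 1.36-1.41): the signal is
    assumed correlated between adjacent traces, the noise uncorrelated.
    `section` is a 2D array of shape (n_samples, n_traces)."""
    n_traces = section.shape[1]
    snrs = []
    for i in range(n_traces - 1):
        t1, t2 = section[:, i], section[:, i + 1]
        c12 = np.correlate(t1, t2, mode='full')
        c21 = np.correlate(t2, t1, mode='full')
        a_s = (c12 + c21) / 2.0                       # eq. 1.36
        a11 = np.correlate(t1, t1, mode='full')
        a22 = np.correlate(t2, t2, mode='full')
        a_n = ((a11 - a_s) + (a22 - a_s)) / 2.0       # eqs. 1.37-1.39
        snrs.append(np.sqrt(a_s.max() / a_n.max()))   # eq. 1.40
    return float(np.mean(snrs))                       # eq. 1.41
```

Note that for nearly noise-free data the noise autocorrelation maximum approaches zero, so the ratio in equation (1.40) should be interpreted with care in that limit.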

References

Bednar, J. B., 1983. Applications of median filtering to deconvolution, pulse estimation, and statistical editing of seismic data. Geophysics 48 (12), 1598–1610.

Canales, L. L., 1984. Random noise reduction. In: SEG Technical Program Expanded Abstracts 1984. Society of Exploration Geophysicists, pp. 525–527.

Candès, E., Demanet, L., Donoho, D., Ying, L., 2006. Fast Discrete Curvelet Transforms. Multiscale Modeling & Simulation 5 (3), 861–899.

Candès, E. J., Demanet, L., 2005. The curvelet representation of wave propagators is optimally sparse. Communications on Pure and Applied Mathematics 58 (11), 1472–1528.

Candès, E. J., Donoho, D. L., 1999a. Curvelets - A surprisingly effective nonadaptive representation for objects with edges. In: Curve and Surface Fitting. Saint-Malo: Vanderbilt University Press.

Candès, E. J., Donoho, D. L., 1999b. Ridgelets: a key to higher-dimensional intermittency? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 357 (1760), 2495–2509.

Cristall, J., Beyreuther, M., Herrmann, F. J., 2004. Curvelet processing and imaging: 4-D adaptive subtraction. In: CSEG Annual Conference Proceedings. CSEG.

Dash, B. P., Obaidullah, K. A., 1970. Determination of signal and noise statistics using correlation theory. Geophysics 35 (1), 24–32.

Daubechies, I., 1992. Ten lectures on wavelets. SIAM.

Do, M. N., Vetterli, M., 2005. The contourlet transform: an efficient directional multiresolution image representation. IEEE Transactions on Image Processing 14 (12), 2091–2106.

Douma, H., de Hoop, M. V., 2007. Leading-order seismic imaging using curvelets. Geophysics 72 (6), S231–S248.

Fomel, S., Liu, Y., 2010. Seislet transform and seislet frame. Geophysics 75 (3), V25–V38.

Hennenfent, G., Cole, J., Kustowski, B., 2011. Interpretative noise attenuation in the curvelet domain. In: SEG Technical Program Expanded Abstracts 2011. Society of Exploration Geophysicists.

Herrmann, F. J., Moghaddam, P., Stolk, C. C., 2008a. Sparsity- and continuity-promoting seismic image recovery with curvelet frames. Applied and Computational Harmonic Analysis 24 (2), 150–173.

Herrmann, F. J., Wang, D., Hennenfent, G., Moghaddam, P. P., 2008b. Curvelet-based seismic data processing: A multiscale and nonlinear approach. Geophysics 73 (1), A1–A5.

Ikelle, L. T., Amundsen, L., 2005. Characterization of seismic signals by statistical averages. In: Introduction to Petroleum Seismology. Society of Exploration Geophysicists, pp. 181–232.

Kappus, M. E., Harding, A. J., Orcutt, J. A., 1990. A comparison of tau-p transform methods. Geophysics 55 (9), 1202–1215.

Kong, S. M., Phinney, R. A., Roy-Chowdhury, K., 1985. A nonlinear signal detector for enhancement of noisy seismic record sections. Geophysics 50 (4), 539–550.

Kumar, V., Oueity, J., Clowes, R. M., Herrmann, F., 2011. Enhancing crustal reflection data through curvelet denoising. Tectonophysics 508 (1-4), 106–116.

Kutyniok, G., Lim, W.-Q., Zhuang, X., 2012. Digital shearlet transforms. In: Kutyniok, G., Labate, D. (Eds.), Shearlets: Multiscale Analysis for Multivariate Data. Birkhäuser Boston, pp. 239–282.

Le Pennec, E., Mallat, S., 2005. Sparse geometric image representations with bandelets. IEEE Transactions on Image Processing 14 (4), 423–438.

Martin, G. S., Wiley, R., Marfurt, K. J., 2006. Marmousi2: An elastic upgrade for Marmousi. The Leading Edge 25 (2), 156–166.

Milkereit, B., Spencer, C., 1989. Noise suppression and coherency enhancement of seismic data. In: Statistical Applications in the Earth Sciences. Vol. 89. Geological Survey of Canada, pp. 243–248.

Neelamani, R., Baumstein, A. I., Gillard, D. G., Hadidi, M. T., Soroka, W. L., 2008. Coherent and random noise attenuation using the curvelet transform. The Leading Edge 27 (2), 240–248.

Starck, J.-L., Candès, E., Donoho, D., 2002. The curvelet transform for image denoising. IEEE Transactions on Image Processing 11 (6), 670–684.

Starck, J.-L., Donoho, D. L., Candès, E. J., 2001. Very high quality image restoration by combining wavelets and curvelets. In: Laine, A. F., Unser, M. A., Aldroubi, A. (Eds.), Wavelets: Applications in Signal and Image Processing IX. SPIE.

Stewart, R. R., Schieck, D. G., 1989. 3-D F-K filtering. In: SEG Technical Program Expanded Abstracts 1989. Society of Exploration Geophysicists.

Turner, G., 1990. Aliasing in the tau-p transform and the removal of spatially aliased coherent noise. Geophysics 55 (11), 1496–1503.

Yarham, C., Herrmann, F. J., 2008. Bayesian ground-roll separation by curvelet-domain sparsity promotion. In: SEG Technical Program Expanded Abstracts 2008. Society of Exploration Geophysicists.

Yilmaz, Ö., 2001. Seismic Data Analysis: Processing, Inversion, and Interpretation of Seismic Data. Society of Exploration Geophysicists.

Zhou, B., Greenhalgh, S. A., 1994. Linear and parabolic tau-p transforms revisited. Geophysics 59 (7), 1133–1149.

Chapter 2

Application of curvelet denoising to 2D seismic data: practical considerations

2.1 Introduction

In seismic data processing, random noise is the most commonly observed type of noise (Chopra and Marfurt, 2014). Despite the development of different acquisition tools and sensor configurations (Martin et al., 2000; Dey et al., 2012) designed to minimize the recording of random noise, some portion of incoherent energy is always evident on seismic traces. This recorded energy is useless for obtaining seismic images or velocity models. It interferes with the signal, i.e. the useful part of the data, damaging or concealing it, causing difficulties in imaging and leading to misinterpretation. Therefore, the need to enhance the most informative events in seismic data drives the development of increasingly sophisticated and robust denoising techniques (see Chapter 1). We believe that a better understanding of the nature of curvelets and a clarification of the advantages of this method may lead to its popularization and subsequent development as one of those techniques. Designing an optimal denoising workflow, adequate to the processed data and the processing

∗ Chapter based on: Górszczyk, A., Adamczyk, A., Malinowski, M., 2014. Application of curvelet denoising to 2D and 3D seismic data - Practical considerations. Journal of Applied Geophysics 105, 78–94.


scheme, is crucial for successful seismic imaging (Biondi, 2006). In the previous chapter, the theoretical background of the DCT and the implementation of the scale- and angle-adaptive thresholding of the curvelet coefficients were presented. Experience gained so far shows that the practical application of more complex thresholding leads to better noise attenuation. To further support this approach, here we continue with its application to different examples of 2D synthetic and real data. To demonstrate practical random noise attenuation utilizing the DCT, we start with synthetic examples featuring various noise characteristics. Subsequently, we present results illustrating the application of the denoising scheme to 2D real post-stack data, characterized by various acquisition configurations and processed in different ways. We also show the application of curvelets to condition data subjected to frequency-domain full waveform inversion (FWI). Finally, we conclude with some practical implications and comment on the features of the applied method.

2.2 Application to synthetic data

For the synthetic data considerations, we decided to use a time stack section from the Marmousi2 dataset (available from www.agl.uh.edu). The Marmousi2 model contains a wide range of complicated structures and reflectors with very weak to strongly outlined amplitudes and varying dips. This complexity gives us the opportunity to show how curvelet-based noise attenuation copes with a heterogeneous environment containing different kinds of noise. For our tests, a smaller region of the model was extracted and resampled to a size of 1024 × 1024 pixels, as shown in Figure 2.1a. Figure 2.1b,c presents the amplitude and F-K spectra of the selected data. The SNR of the clean synthetic section was estimated as 3.97. We introduce additional parameters to describe and compare the quality of the results: the L2 norm ‖N‖ and the standard deviation σ. For the clean section from the Marmousi2 model, the norm equals 16.48 and the standard deviation is 0.08.


Figure 2.1. Synthetic section extracted from the Marmousi2 model (a) with its frequency (b) and F–K spectrum (c). Note the complexity of structure and variety of reflector amplitudes.

Testing of the algorithm consisted of processing the input section contaminated with synthetic noise of varying attributes. For this purpose, white Gaussian noises of different standard deviations (σ1 = 0.1, σ2 = 0.2, σ3 = 0.3) and norms (‖N1‖ = 6.386, ‖N2‖ = 12.773, ‖N3‖ = 19.159) were modelled. Subsequently, low-pass (0-25 Hz) and high-pass (50+ Hz) filters were designed. Applying them to the white noises produced another 6 types of noise (3 low-frequency and 3 high-frequency). The selected bands have very different dominant frequencies, as well as a significantly large overlap with the signal spectrum. Since band-pass filtering changes the standard deviations and norms of the results compared with the initially designed white noises, scaling factors were applied to the low- and high-frequency noises in order to equalize their norms with those of the white noises. Figure 2.2 compares the frequency characteristics of the noises (norm ‖N1‖ = 6.386) with the spectrum of the clean synthetic section. In consequence, we obtained 9 types of noise with different frequency and standard deviation characteristics. By adding them to the clean synthetic section, we produced 9 input test sections (parameters summarized in Table 2.1), presented in Figure 2.3.
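The noise construction described above (white Gaussian noise, band-pass filtering, rescaling to a prescribed L2 norm) can be reproduced with a simple frequency-mask filter; the hard-edged mask below is a simplification of whatever filters were actually used:

```python
import numpy as np

def bandlimited_noise(shape, dt, f_lo, f_hi, target_norm, seed=0):
    """Generate zero-centred Gaussian noise of `shape`
    (n_samples, n_traces), band-pass it along the time axis with a
    hard-edged frequency mask, and rescale so that its L2 norm equals
    `target_norm` (the norm-equalization step described in the text)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=shape)
    freqs = np.fft.rfftfreq(shape[0], d=dt)        # frequency axis in Hz
    mask = (freqs >= f_lo) & (freqs <= f_hi)       # pass band
    spec = np.fft.rfft(noise, axis=0)
    spec[~mask, :] = 0.0                           # zero out-of-band energy
    filtered = np.fft.irfft(spec, n=shape[0], axis=0)
    return filtered * (target_norm / np.linalg.norm(filtered))
```

For example, the low-frequency realization of Section 4 would correspond to `bandlimited_noise((1024, 1024), dt, 0.0, 25.0, 6.386)` with the appropriate sampling interval `dt`.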

Figure 2.2. Frequency characteristics of the noise applied to the extracted part of the Marmousi2 model. Examples with the lowest value of the norm, ‖N1‖ = 6.386, were selected.

Section  Band (Hz)  σ     ‖N‖    Scale 1  Scale 2  Scale 3  Scale 4  Scale 5  Scale 6  c
1        0-100      0.10   6.38  0.14     0.12     0.12     0.12     0.16     0.16     1.5
2        0-100      0.20  12.77  0.28     0.22     0.22     0.22     0.32     0.32     1.5
3        0-100      0.30  19.15  0.42     0.32     0.32     0.32     0.48     0.48     1.5
4        0-25       0.06   6.38  0.20     0.15     0.15     0.10     0.10     0.10     3.0
5        0-25       0.12  12.77  0.40     0.30     0.30     0.22     0.15     0.15     3.0
6        0-25       0.18  19.15  0.60     0.45     0.45     0.36     0.20     0.20     3.0
7        50-100     0.09   6.38  0.05     0.05     0.05     0.07     0.20     0.20     0.5
8        50-100     0.18  12.77  0.05     0.05     0.05     0.14     0.35     0.35     0.5
9        50-100     0.27  19.15  0.05     0.05     0.05     0.21     0.45     0.50     0.5

Table 2.1. Threshold levels used during denoising of Sections 1-9 according to scale. σ and ‖N‖ denote the standard deviation and norm of the applied noise; the factor c is the threshold multiplier for the angular wedges shown in Figure 2.6.

Figure 2.3. Synthetic section contaminated with noise of various characteristics. (a)-(c) correspond to Sections 1-3; (d)-(f) and (g)-(i) present Sections 4-6 and 7-9, respectively.

The top row (Figure 2.3a-c) shows Sections 1-3 contaminated with white noise; the noise in Sections 4-6 (Figure 2.3d-f) is low-frequency and in Sections 7-9 (Figure 2.3g-i) high-frequency. Noise amplitudes increase from the left- to the right-hand side. The standard deviations of the noises (up to σ = 0.3), compared with that of the initial noise-free section (0.08), show that the signal was highly contaminated.

Figure 2.4. Comparison of the frequency spectra before (green) and after (red) applying each noise to the respective sections from Figure 2.3. Note increasing absolute value of noise amplitude in comparison with signal energy. Each pair of the spectra was normalized to 1.

Figure 2.5. F-K spectra of the sections from Figure 2.3. For each section, the distribution of the noise relative to the location of the signal in the 2D frequency plane can be observed. All noise types have a zero-centred Gaussian amplitude distribution but differ in frequency characteristics, norm, and standard deviation.


The following two figures (Figures 2.4 and 2.5) present the frequency and F-K spectra for the sections shown in Figure 2.3. The noisy sections were used as input data for the denoising algorithm. We used the DCT via wrapping with 6 scales and the default number of angles, which is 16 at the 2nd coarsest and 64 at the finest scale. Initially, only the highest 1% of the curvelet coefficients at all scales were selected for the inverse DCT. A quick analysis of the curvelet coefficients and 2D frequency spectra allowed us to adjust the threshold levels individually for each scale. Moreover, for certain angular wedges, additional threshold scaling was applied, depending on the type of noise. Table 2.1 summarizes the thresholds for all sections according to the scales. Figure 2.6 presents polygons extracted from the curvelet grid, containing the angular wedges within which the threshold levels were multiplied by a constant factor c (see Table 2.1). The shape of the selected polygons depends on the frequency characteristics of the applied noise.

Figure 2.6. Partitioning of the F-K spectrum used during the DCT. After setting the threshold levels according to scales, we introduce an additional threshold weighting factor c for coefficients localized in certain angular wedges. Dark polygons, (a) white noise, Sections 1-3; (b) low-frequency noise, Sections 4-6; and (c) high-frequency noise, Sections 7-9, contain coefficients thresholded with additionally weighted threshold levels. Compare these polygons with the F-K spectra presented in Figure 2.5.
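The scale- and angle-adaptive hard thresholding described above can be sketched in code. The sketch below is not the thesis implementation: it assumes a CurveLab-style nested-list layout `coeffs[scale][angle]` of coefficient arrays, and thresholds quoted as fractions of the largest coefficient magnitude (an assumption about normalization):

```python
import numpy as np

def adaptive_hard_threshold(coeffs, scale_thresholds, c=1.0, weighted_wedges=()):
    """Zero curvelet coefficients whose magnitude falls below the threshold.

    coeffs           -- nested list: coeffs[scale][angle] is a 2D array
    scale_thresholds -- one threshold per scale, as a fraction of the
                        largest coefficient magnitude (assumption)
    c                -- extra multiplier applied inside selected wedges
    weighted_wedges  -- (scale, angle) pairs to scale by c (cf. Figure 2.6)
    """
    # Normalize thresholds by the largest coefficient magnitude.
    cmax = max(np.abs(arr).max() for scale in coeffs for arr in scale)
    wedges = set(weighted_wedges)
    out = []
    for s, scale in enumerate(coeffs):
        t = scale_thresholds[s] * cmax
        kept = []
        for a, arr in enumerate(scale):
            tt = t * c if (s, a) in wedges else t
            kept.append(np.where(np.abs(arr) >= tt, arr, 0.0))
        out.append(kept)
    return out
```

The thresholded coefficients would then be fed to the inverse DCT; the `weighted_wedges` set plays the role of the dark polygons in Figure 2.6.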

Figure 2.7 illustrates the effectiveness of our method. Each panel presents the result of curvelet denoising of the data from the respective panel in Figure 2.3. In general, increasing noise amplitude makes it more difficult to recover the weakest reflections and causes more artefacts in the final image. In the sections with the smallest standard deviation and norm of the noise, the signal looks similarly well restored regardless of the noise frequency spectrum. However, increasing the noise amplitude damages the weaker events. The most accurate results were obtained for the sections polluted with high-frequency noise (Figure 2.7g-i), where we observe relatively small signal damage and a minimal number of artefacts. This is also reflected in Figure 2.8g-i, which contains the frequency spectra of the recovered sections compared to the noise-free synthetic section. Significant damage to the amplitudes of the dominant signal frequencies can be observed as the standard deviation of the noise increases (see Figure 2.8d-f). This might be a consequence of the more pronounced overlap of the low-frequency signal and noise coefficients in the curvelet domain. Slightly better results are observed in the case of white noise (see Figure 2.7a-c and the corresponding frequency spectra in Figure 2.8a-c). Additionally, Figure 2.9 presents the F-K spectra of the corresponding filtered sections. The main areas of signal energy are well preserved; however, some minor damage is visible as the norm of the noise increases.

Figure 2.7. Sections from Figure 2.3 after denoising. Differences between the results reflect the influence of the applied noise. Slight damage to the amplitudes of weaker events, increasing with the standard deviation of the noise, can be observed.


Figure 2.8. Comparison of the noise-free synthetic section frequency spectrum (red) and the recovered section frequency spectra (green). The spectrum shape is preserved; however, a reduction of signal energy is visible for data contaminated with noise of higher norm.

Figure 2.9. F-K spectra of the filtered sections presented in Figure 2.7. Compared with the F-K spectrum of the clean synthetic section (Figure 2.1c), the impact of the filtering on the shape of the spectra can be observed.

Tables 2.2 and 2.3 summarize the statistics of the synthetic data before and after denoising. Norm and RMS values were calculated in order to quantitatively describe the applied and removed noise (Table 2.2).

Noise type | Std. deviation | Norm (App. / Rec.) | RMS (App. / Rec.)
White noise | σ1 = 0.10 | 6.38 / 6.46 | 0.10 / 0.10
White noise | σ2 = 0.20 | 12.77 / 12.74 | 0.20 / 0.20
White noise | σ3 = 0.30 | 19.15 / 19.07 | 0.30 / 0.30
Low-frequency noise | σ1 = 0.06 | 6.38 / 6.27 | 0.05 / 0.05
Low-frequency noise | σ2 = 0.12 | 12.77 / 12.60 | 0.10 / 0.10
Low-frequency noise | σ3 = 0.18 | 19.15 / 18.93 | 0.15 / 0.15
High-frequency noise | σ1 = 0.09 | 6.38 / 6.40 | 0.09 / 0.09
High-frequency noise | σ2 = 0.18 | 12.77 / 12.68 | 0.18 / 0.18
High-frequency noise | σ3 = 0.27 | 19.15 / 18.99 | 0.28 / 0.28

Table 2.2. Applied (App.) and recovered (Rec.) noise parameters.

Section | Norm | RMS
Input section | 16.484 | 0.080
Section 1 (white noise) | 15.710 | 0.075
Section 2 (white noise) | 15.014 | 0.070
Section 3 (white noise) | 14.448 | 0.067
Section 4 (low-frequency noise) | 15.781 | 0.075
Section 5 (low-frequency noise) | 14.675 | 0.070
Section 6 (low-frequency noise) | 13.731 | 0.063
Section 7 (high-frequency noise) | 16.039 | 0.076
Section 8 (high-frequency noise) | 15.743 | 0.073
Section 9 (high-frequency noise) | 15.425 | 0.071

Table 2.3. Recovered signal parameters.

The levels of applied and recovered noise are very similar to one another, which is the desired effect in this synthetic test. The slightly higher RMS values of the removed noise suggest that some part of the coherent energy was damaged during thresholding. In order to determine the level of signal loss, analogous statistics for the clean and recovered sections were calculated (Table 2.3). As expected, an increased level of applied noise results in a more significant signal loss (especially of weaker events) during the attenuation process. The problem is particularly evident in the case of the low-frequency and white noise. The reason might be the similarity of the noise and signal frequencies and the consequent dominance of the noise over the weaker reflectors. SNR values of the data before and after denoising are presented in Figure 2.10. The robustness of the method is reflected in the significant increase of the SNR. It is worth recalling that our SNR estimation is based on the cross-correlation of traces. Therefore, even though the basic section is noise-free, the trace-to-trace shifts of the signal along conflicting dips prevent a perfect SNR value (which for noise-free data should tend toward infinity). However, the fact that the final SNR of the recovered data closely approximates the SNR estimated for the clean section proves that our estimation provides a good quantitative evaluation of the results. In the case of Sections 1-3 and 7-8, weak residual artefacts remaining after filtering may cause a slightly higher SNR value compared to the noise-free data, whereas for Sections 4-6 the removal of some portion of the weakest signal generated empty spots in the time-space domain, which in turn lowers the SNR. Moreover, the SNR of the input Sections 4-6 is higher than that of the other noisy sections, an intuitive consequence of the higher cross-correlation between traces containing low frequencies.
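The exact cross-correlation SNR estimator used in the thesis is not reproduced here; the following is a minimal, hypothetical variant based on the zero-lag normalized cross-correlation of adjacent traces, which also illustrates why a noise-free section with dipping events still yields a finite SNR:

```python
import numpy as np

def xcorr_snr(section, eps=1e-12):
    """Rough SNR estimate from adjacent-trace similarity (illustrative only).

    section -- 2D array (time samples x traces)
    Coherent reflections correlate between neighbouring traces while random
    noise does not, so the mean normalized zero-lag correlation r approximates
    the fraction of coherent energy; the SNR is then r / (1 - r).
    """
    ncc = []
    for i in range(section.shape[1] - 1):
        a, b = section[:, i], section[:, i + 1]
        r = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
        ncc.append(max(r, 0.0))  # negative correlation treated as no coherence
    r_mean = float(np.mean(ncc))
    return r_mean / max(1.0 - r_mean, eps)
```

Because only the zero lag is used here, events that shift from trace to trace (conflicting dips) lower r even without noise, mirroring the behaviour described above.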


Figure 2.10. Summary of the SNR before and after noise attenuation for each section. The dashed line represents the SNR level of the noise-free initial data.

We finish this section with examples demonstrating the results of F-X deconvolution applied to Sections 3, 6, and 9 (Figure 2.11a-c). Comparing these sections with the corresponding results of our curvelet-based method, one can easily see the amount of noise that survived the F-X deconvolution, along with the simultaneous damage to the signal energy, especially in the case of the section corrupted with white noise (Figure 2.11a). The results in Figure 2.11b,c (denoised Sections 6 and 9, respectively) exhibit better signal continuity than Figure 2.11a. This is a consequence of the better prediction of those signal frequencies that are separable from the frequency bands of the coloured noises. However, local losses of reflections and phase shifts are still visible, combined with a significant volume of remaining noise and created artefacts. Figure 2.12 demonstrates the shape of the frequency spectra of the filtered sections and their fit to the noise-free input data. It reveals significant damage to the signal energy, which is undesirable, especially as the recovered sections exhibit a relatively noisy background. These examples justify the need for the more complex denoising method presented here.

Figure 2.11. Results of F-X deconvolution denoising for Section 3 (a), Section 6 (b), and Section 9 (c). For each section, a significant share of unwanted energy survived the denoising.


Figure 2.12. Frequency spectra of the sections presented in Figure 2.11 (black line: section (a); blue line: section (b); red line: section (c); green polygon: clean synthetic section). The frequency bands of the filtered results match the input, yet visible damage to the signal is apparent.

2.3 Application to real data

2.3.1 Filtering of post-stack seismic sections

For the 2D real-data study, three datasets characterized by different acquisition parameters, source frequencies, and processing steps were used. All sections are pre-stack time migration (PreSTM) stacks with mild automatic gain control (AGC) scaling applied and the following spatial and frequency characteristics: • Section 1 - CDP spacing: 3.125 m, sampling: 4 ms, bandpass filter: 8–12–70–90 Hz, • Section 2 - CDP spacing: 12.5 m, sampling: 2 ms, bandpass filter: 4–8–115–130 Hz, • Section 3 - CDP spacing: 15 m, sampling: 4 ms, bandpass filter: 8–12–65–85 Hz. From these PreSTM stacks, representative 1024×1024-pixel areas were extracted and served as input data in further processing. As can be seen in Figure 2.13a-c, the chosen sections differ in terms of the formation and intensity of reflectors as well as the frequency of the noise. Section 1 (Figure 2.13a) mostly consists of straight, parallel interfaces (their dip increases slightly in deeper parts) corrupted with noise of a broad frequency spectrum. The layered structure of the section is especially affected by noise in areas of weakly highlighted amplitudes. In Section 2 (Figure 2.13b) some aligned reflectors of different amplitudes can be observed, and the image is damaged by high-frequency noise. Section 3 (Figure


2.13c) presents a more complicated structure with varying dips and visible faults. The volume of noise in this case seems relatively small; however, the weaker parts of the reflectors appear to be damaged by somewhat low-frequency energy. The processing started with the same DCT settings that were used for the synthetic sections. Initially, the highest 1% of all curvelet coefficients were selected for the inverse transform. Afterwards, the threshold levels were adjusted based on the restored images and the analysis of the vectors of coefficients and the frequency spectra. Taking into account the frequency characteristics of the noise and the distribution of coherent energy in the curvelet

Figure 2.13. 2D real data example using PreSTM stacks. (a)–(c) Input Sections 1-3. (d)–(f) Corresponding sections recovered after noise attenuation. (g)–(i) Energy taken as noise and extracted from each section.


domain for Sections 1 and 3, we applied higher threshold levels at the coarser scales and relatively lower values at the finer scales, whereas for Section 2 the reverse approach was applied. The threshold values are summarized in Table 2.4.

Section | Scale 1 | Scale 2 | Scale 3 | Scale 4 | Scale 5 | Scale 6 | c
Section 1 | 0.25 | 0.25 | 0.25 | 0.15 | 0.15 | 0.15 | 1.50
Section 2 | 0.20 | 0.20 | 0.20 | 0.30 | 0.30 | 0.30 | 1.50
Section 3 | 0.30 | 0.30 | 0.30 | 0.20 | 0.15 | 0.15 | 1.50

Table 2.4. Threshold levels used during denoising of Sections 1-3 according to scales. Factor c denotes the threshold multiplier for the angular wedges shown in Figure 2.14.

As in the case of the synthetic data, we introduced an additional adjustment of the threshold levels according to dip. Figure 2.14a-c presents polygons extracted from the curvelet grid within which noise attenuation was performed with the threshold level scaled by a factor c = 1.5. The filtered sections are presented in Figure 2.13d-f. The improvement of the recovered signal in comparison with the input data is significant. Most of the incoherent energy was suppressed without obscuring useful information (e.g. the location of faults in Figure 2.13f) or introducing artefacts. The improvement in data quality is most visible in Section 2, where the noise amplitudes were significant in comparison with the coherent energy. Figure 2.13g-i presents the difference between the input data and the recovered sections, i.e. the energy we consider to be noise. The noise energy is incoherent and, except for the strongest event in Section 2, does not seem to follow any of the reflectors, which suggests that, in general, the method does not damage the signal and preserves its amplitudes.

Figure 2.14. Partitioning of the F-K spectrum used during the DCT of Sections 1-3 (panels a-c, respectively). After setting the threshold levels according to scales, we introduce an additional threshold weighting factor c for coefficients localized in certain angular wedges. Dark polygons extracted from the curvelet grid contain coefficients thresholded with additionally weighted threshold levels.

Information about the frequency characteristics of the input sections, the obtained results, and the residuals is also reflected in the corresponding frequency and F-K spectra presented in Figures 2.15 and 2.16. For Section 1, a relatively narrow frequency band


Figure 2.15. Frequency spectra of the input sections (a) Section 1, (b) Section 2, and (c) Section 3. (d-f)/(g-i) Comparison of the input section frequency spectra (red plots) and recovered signal/noise frequency spectra (green plot).

Figure 2.16. F-K spectra of the input sections (a) Section 1, (b) Section 2, and (c) Section 3. (d-f)/(g-i) Corresponding F-K spectra of the recovered signal/noise.

of the signal (about 20-60 Hz, Figure 2.15d) was obtained, whereas the noise energy is almost uniformly distributed over the spectrum (Figure 2.15g). For Section 2, one can notice the most significant reduction of energy after noise attenuation as compared to the input


section (Figure 2.15e). Analysing the residual frequency spectrum (Figure 2.15h, green plot), we observe a trend of growing noise energy as frequency increases up to about 65 Hz. Beyond this point, nearly all the energy is treated as residual, which is also reflected by the fine-scale character of the noise in the time-space domain. As mentioned before, in the case of Section 3 the volume of noise appeared to be relatively low, which is reflected by the smaller reduction of the resulting energy in comparison with the input (Figure 2.15f). The residual frequency spectrum of this section (Figure 2.15i) contains mainly lower frequencies, and its energy consequently decreases towards higher frequencies. Figure 2.16 demonstrates the F-K spectra of the inputs, results, and residuals, respectively. It illustrates the distribution of signal and noise energy for each of the sections, adding more detail to the results of our denoising approach and supporting the interpretation.

Figure 2.17. SNR values before and after noise attenuation for each section from Figure 2.13.

The SNR values (Figure 2.17) confirm the effects described above. The most significant improvement was observed for Section 2, where the volume of high-frequency noise in the input is reflected in a relatively low SNR before attenuation. Section 1 shows an almost twice as high SNR after denoising in comparison with the input data. Finally, Section 3 did not contain much noise; therefore the increase in SNR is not as large, but still significant.


2.3.2 Data conditioning for full waveform inversion

Motivated by the results of the 2D DCT applied to post-stack sections, we explore the application of curvelet denoising to enhance the signal in frequency maps (panels). By a frequency map we understand the real or imaginary part of the Fourier transform for one particular frequency, extracted from 2D seismic data and presented in source-receiver coordinates. It represents the data that are subjected to frequency-domain FWI (Pratt, 1999). The image of a frequency map is well suited to curvelet noise attenuation, since the signal has a clear directional character (stripes along the diagonal of the image) and the noise is unidirectional (Figure 2.18a).

Figure 2.18. (a) Plot of the real part of the 6-Hz frequency data extracted from the POLCRUST-01 deep reflection seismic profile in the source-receiver coordinate system (frequency map); (b) same panel after curvelet denoising; (c) difference between panels (a) and (b); (d) phase of the 6-Hz data; (e) phase calculated from the curvelet-denoised real and imaginary parts of 6-Hz data; and (f) difference between panels (d) and (e).

Here, data from the regional deep reflection seismic profile POLCRUST-01 (see Malinowski et al., 2013, for details), acquired with the Vibroseis technique (6-72 Hz sweep), were used. Mono-frequency subsets of the data were extracted in order to run frequency-domain FWI. The curvelet denoising scheme was applied separately to the real and imaginary parts of


the mono-frequency data. The limited frequency band allowed for easy weighting of the coefficients according to scales. Moreover, because frequency maps exhibit a highly directional nature, they are a typical example of data suitable for angle-dependent thresholding. Figure 2.18 shows the results of curvelet denoising for the real part of the data at 6 Hz. It is clear from Figure 2.18b that only the continuous events were preserved by the filtering. Horizontal stripes of coherent energy visible in Figure 2.18c (i.e. the difference between the input and filtered panels) represent noisy receiver positions. Vertical stripes are related to shots with amplified amplitudes, e.g. noisy shot records. Both were filtered out by our method. The bottom panels of Figure 2.18 show the calculated phase. Comparison of the input, filtered, and residual panels (Figure 2.18d-f, respectively) suggests that the phase was preserved during denoising and that its changes are now easier to follow. FWI tests with this dataset indicate that the inversion with curvelet-denoised data outperforms the one with conventionally preconditioned data (e.g. bandpass filtered). This is because the attenuation of the noise in the curvelet domain significantly improves the quality of the data and allows the FWI to fit the signal rather than the noise during the inversion.
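To make the notion of a frequency map concrete, the sketch below extracts the complex mono-frequency panel from a generic (source, receiver, time) data cube with an FFT along time; the array layout, sampling values, and function name are assumptions for illustration, not the thesis code:

```python
import numpy as np

def frequency_map(data, dt, f0):
    """Extract the complex mono-frequency panel from time-domain gathers.

    data -- 3D array (n_sources, n_receivers, n_time_samples)
    dt   -- time sampling interval in seconds
    f0   -- target frequency in Hz (e.g. 6.0)
    Returns a complex (n_sources, n_receivers) array; its .real and .imag
    parts are the panels that would be denoised separately, and np.angle
    gives the phase panel.
    """
    nt = data.shape[-1]
    freqs = np.fft.rfftfreq(nt, d=dt)
    k = int(np.argmin(np.abs(freqs - f0)))  # frequency bin closest to f0
    return np.fft.rfft(data, axis=-1)[..., k]
```

A 6 Hz cosine on a single (source, receiver) trace, for instance, produces a single bright sample in the resulting panel, while noise-free traces map to zero.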

2.4 Discussion

Thresholding is not a trivial subject in any type of filtering; it has the largest and most immediate impact on the results. An approach in which thresholding is locally adapted at certain scales or angles leads to more efficient noise attenuation. As mentioned, it is good practice to set the first approximation of the threshold level so as to keep the largest 1-2% of the curvelet coefficients. Our observations indicate that the smaller and noisier the section, the higher the percentage of curvelet coefficients that needs to be inverted back to the time-space domain in order to obtain reliable results (assuming the same partitioning of the frequency domain). However, this approach still fails when employed as global thresholding. Adjusting the global threshold always requires additional iterations and, in the end, has little chance of providing results comparable to those obtained with


scale-adapted threshold levels. The weakness of global thresholding manifests itself, for example, in the form of artefacts that can be removed without damaging the signal only when certain frequency bands (via curvelet scales and/or angular wedges) are directly accessed and attenuated. The direct extension of the traditional thresholding approach is to set the weighting levels separately for specific scales. This approach can be effective even in the presence of white noise and leads to results that are much more precise than those obtained from the inversion of globally thresholded coefficients. Access to particular frequency bands gives more leeway in separating signal from noise and avoiding artefacts. When tuning the threshold levels, we not only evaluate the results and residuals in the time-space domain but also analyse the vector of curvelet coefficients and the 2D frequency spectra. A better understanding and interpretation of the signal and noise characteristics, as well as of their distribution in the curvelet and F-K domains, brings one closer to the desired optimal solution. Hence, knowing the basic features of the curvelet transform, we are able to achieve far better results in very few iterations, while global thresholding is unable to provide comparable, artefact-free results. Transferring this scale-localized approach to angle-adaptive thresholding is more laborious because of the number of angular wedges. The demonstrated method is applicable in practice; however, its employment and the decision-making process should be based on prior knowledge about the dip of the structure or on statistical calculations rather than on "blind" selection. Nevertheless, one can expect that addressing this issue would translate into even better noise suppression.

Section size | Curvelets at the finest scale (forward / inverse) | Wavelets at the finest scale (forward / inverse)
512×512 | 0.5 s / 1.3 s | 0.2 s / 0.5 s
1024×1024 | 1.9 s / 2.6 s | 0.7 s / 1.4 s
2048×2048 | 6.3 s / 7.8 s | 1.9 s / 4.1 s
4096×4096 | 21.4 s / 25.7 s | 7.2 s / 13.1 s
8192×8192 | 90.3 s / 105.8 s | 26.7 s / 48.5 s
16384×16384 | 403.3 s / 695.3 s | 114.3 s / 229.7 s

Table 2.5. Comparison of computation times with respect to the type of DCT and the data size.
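The "largest 1-2% of the curvelet coefficients" starting point discussed earlier amounts to a quantile computation over coefficient magnitudes; the sketch below (nested-list coefficient layout assumed, not the thesis code) contrasts a single global threshold with per-scale initial thresholds:

```python
import numpy as np

def initial_thresholds(coeffs, keep=0.01):
    """First-pass thresholds keeping roughly the `keep` fraction of coefficients.

    coeffs -- nested list: coeffs[scale][angle] is an array of coefficients
    Returns (global_threshold, per_scale_thresholds); the per-scale values
    are the starting point that would then be tuned scale by scale.
    """
    mags = [np.abs(arr).ravel() for scale in coeffs for arr in scale]
    t_global = float(np.quantile(np.concatenate(mags), 1.0 - keep))
    t_scales = [
        float(np.quantile(np.concatenate([np.abs(a).ravel() for a in scale]),
                          1.0 - keep))
        for scale in coeffs
    ]
    return t_global, t_scales
```

With the global threshold, noisy scales can dominate the quantile; per-scale thresholds give direct access to each frequency band, which is the advantage argued for above.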

Comparison with F-X deconvolution filtering shows that even though curvelet-based noise


attenuation is more costly, the results are significantly better. The emphasis on noise reduction with a minimum number of iterations is due to the computational complexity of the DCT. Table 2.5 shows times for the forward and inverse DCT with wavelets or curvelets at the finest scale (spectrum partitioned into 7 scales), depending on the size of the input section. From these results it can be concluded that for large datasets (i.e. 3D data) complete noise attenuation might be time-consuming. A careful choice of the threshold levels, based upon qualitative comparison of the denoised images along with analysis of the frequency spectra and the vectors of curvelet coefficients, allows one to minimize the number of iterations and hence limit the computational cost and the time of calculations.

2.5 Conclusions

We illustrated the efficiency of curvelet-based noise attenuation in enhancing the coherent signal of seismic data from experiments conducted in various environments. The developed approach proved its effectiveness on both synthetic and real data contaminated with noise of various characteristics. The introduced scheme is intuitive and, once the architecture of the curvelet representation is understood, simple to apply in practice. The results of scale-adaptive thresholding demonstrate a significant improvement over the images obtained with the global-threshold version of the algorithm, which justifies the procedure, even though at the current stage of development it still requires significant human interaction. The demonstrated examples show that even complex structures of crooked reflectors or varying dips can be recovered from very noisy data. Denoised sections are clearer and easier to interpret. Proper use of scale-adaptive thresholding ensures that the obtained images contain a minimum amount of artefacts. Furthermore, it was shown that noise suppression in the curvelet domain is effective not only for stacked 2D seismic data or shot gathers (e.g. Kumar et al., 2011), but also for frequency maps (i.e. the form of data input to full waveform inversion).


The thresholding algorithms presented here should not be the final stage in developing curvelet-based tools for seismic data denoising. Scale-adaptive thresholding involves some human interaction; however, it has the potential to be applied to large datasets consisting of numerous sections differing in noise and signal characteristics. A possible direction for development of the presented method would be an attempt to automate the threshold-level adjustment based on statistical analysis or pattern-recognition methods. Extending the algorithms with angle-adaptive thresholding was also mentioned and applied to the presented examples. Indeed, manual setting of the threshold values for each angle might be too laborious for more complicated datasets. However, the idea is very appealing, and the effort to automate this process would certainly be worthwhile. Taking into account how promising the presented examples are, we expect that with more sophisticated methods of setting the thresholds, the results of curvelet denoising can be even more rewarding.


Acknowledgements

This work was funded by the Polish National Science Centre grant no. 2011/03/D/ST10/05128. The DCT algorithm was taken from the CURVELAB project (www.curvelab.org).

References

Biondi, B. L., 2006. 3D Seismic Imaging. Society of Exploration Geophysicists.

Chopra, S., Marfurt, K. J., 2014. Causes and Appearance of Noise in Seismic Data Volumes. Explorer, WC109–WC122.

Dey, A. K., Stewart, R. R., Lines, L. R., Bland, H. C., 2012. Noise suppression on geophone data using microphone measurements. CREWES Research Report - Volume 12 (2000).

Kumar, V., Oueity, J., Clowes, R. M., Herrmann, F., 2011. Enhancing crustal reflection data through curvelet denoising. Tectonophysics 508 (1-4), 106–116.

Malinowski, M., Guterch, A., Narkiewicz, M., Probulski, J., Maksym, A., Majdański, M., Środa, P., Czuba, W., Gaczyński, E., Grad, M., Janik, T., Jankowski, L., Adamczyk, A., 2013. Deep seismic reflection profile in Central Europe reveals complex pattern of Paleozoic and Alpine accretion at the East European Craton margin. Geophysical Research Letters 40 (15), 3841–3846.

Martin, J., Özbek, A., Combee, L., Lunde, N., Bittleston, S., Kragh, E., 2000. Acquisition of marine point receiver seismic data with a towed streamer. pp. 37–40.

Pratt, R. G., 1999. Seismic waveform inversion in the frequency domain, Part 1: Theory and verification in a physical scale model. Geophysics 64 (3), 888–901.


Chapter 3

Enhancing 3D post-stack seismic data acquired in hardrock environment using 2D curvelet transform

3.1 Introduction

Seismic data are increasingly used for the exploration of ore deposits worldwide (see Eaton et al., 2003; Malehmir et al., 2012). However, there are numerous challenges related to the processing of seismic data from hardrock environments (Adam et al., 2008; Urosevic et al., 2008). These challenges are primarily related to the fact that such data are typically characterized by a low SNR, caused both by contamination with anthropogenic noise (e.g. active mine operations) and by the heterogeneity of the rocks (e.g. igneous, volcanic, and metamorphic rocks) hosting the mineralization, which often causes scattering and low reflection coefficients between the major lithologies (sometimes even the ore bodies themselves). Therefore, reflections and diffractions are difficult to image, as they are typically very discontinuous.

∗ Chapter based on: Górszczyk, A., Malinowski, M., Bellefleur, G., 2015. Enhancing 3D post-stack seismic data acquired in hardrock environment using 2D curvelet transform. Geophysical Prospecting 63 (4), 903–918.


Despite advancements in seismic noise removal techniques, so far only standard methods, such as F-X deconvolution or coherency filtering, have been used to enhance post-stack data acquired in hardrock terranes. It is worth stressing that coherency filtering (e.g. semblance-based; Milkereit and Spencer, 1989; Kong et al., 1985) is not always the optimal choice to attenuate scattering-related noise, since it tends to smear random energy, creating artificial-looking events and making interpretation non-unique. As an alternative, curvelet-based denoising has great potential to overcome the limitations of the standard approaches and provide interpretation-friendly 3D seismic volumes. Here, we begin with a short examination of curvelet coefficient thresholding for filtering hardrock seismic data and of the validity of the 2D DCT approach for denoising 3D volumes. Then, the random-noise attenuation workflow is demonstrated on 3D post-stack seismic data acquired in three different mining camps located in Canada. Each of the chosen volumes is characterized by variable data quality and geological complexity. The results are compared with the standard approaches (e.g. F-XY deconvolution) and clearly illustrate the robustness of curvelet-based denoising in improving SNR and enabling more precise data interpretation.

3.1.1 Curvelet-based hardrock seismic data enhancement

During curvelet-based filtering, using a single threshold level for all coefficients (global thresholding) seems the obvious choice. This approach might be sufficient (but, as shown in Chapter 2, is not necessarily so) for denoising seismic data acquired in a "soft rock" environment, where the reflection signal is easy to follow and the structure is relatively simple. However, when dealing with more complicated noisy data (e.g. very low SNR, low fold, scattered energy, coloured noise), some form of "intelligent thresholding" is necessary to remove as many noise coefficients as possible while preserving the signal coefficients. By "intelligent thresholding", we mean the approach described in detail in Chapter 1 (Sections 1.4.1 to 1.4.3), which introduces a more advanced separation according to curvelet scales and/or angles instead of a single threshold level. Such an approach is especially beneficial when


applied to seismic data acquired in a hardrock environment. Adjusting the threshold levels requires a few additional iterations and an a posteriori analysis of the results, but gives better noise attenuation. Bringing in this interpretative context as a key to more robust curvelet denoising was also pointed out by Hennenfent et al. (2011).

Figure 3.1. (a) Example of an inline section (crossline spacing of 12.5 m) before curvelet denoising. (b) Section from (a) after denoising focused on maximal signal preservation. (c) Section from (a) after denoising focused on maximal random energy reduction. (d) Difference between (a) and (b). (e) Difference between (a) and (c).

A simple synthetic example, demonstrated in Figure 1.7 (Chapter 1), illustrates the effect of choosing the optimal threshold level and its impact on the frequency-wavenumber spectra of the filtered data. Here, we present the direct impact of thresholding on the final SNR level, following the strategy described in the previous chapters. Figure 3.1a shows the input section, which is a single inline extracted from a 3D post-stack volume from a hardrock


dataset (SNR = 0.92). Results obtained using two different curvelet denoising scenarios are presented in Figure 3.1b,c. The section in Figure 3.1b is the effect of a "gentle" filtering, obtained by weighting the coefficients according to scales so as not to overfilter the data. With relatively low threshold levels, we partially incorporate some coefficients representing noise into the inverse DCT, for the benefit of preserving signal. Nevertheless, the enhancement of the signal is clear (SNR = 3.04). The continuity of events is improved, and the amount of incoherent noise is highly reduced. Moreover, the difference between the input section and the filtered one (Figure 3.1d) does not exhibit coherent features. On the other hand, stronger filtering, focused on removing as much incoherent energy as possible, is achieved by using higher thresholds (i.e., scaled by higher factors). Narrow thresholding leaves only the highest curvelet coefficients before the inverse DCT. As a result, we obtain a section with a minimized volume of random noise and scattered energy (Figure 3.1c). The coherent events contaminated with noise in the input section are delineated even more clearly (SNR = 3.64), and the remaining residuum is generally incoherent. However, the zoomed data in the inset in Figure 3.1c exhibit some "blank spaces" generated during filtering, which may look suspicious even though the main structure is clearly visible. Minor signal damage also occurred, as visible in the zoomed view in Figure 3.1e. Such a result of "strong" filtering might be helpful for an initial interpretation, since one can easily localize the main structures without doubt. However, for a more detailed data investigation, it is better to follow the "gentle" filtering approach and avoid narrow scaling of the threshold levels, in case fine details are lost through the application of strong filters.
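The "gentle" versus "strong" trade-off can be explored by scaling one set of per-scale thresholds with a single common factor and monitoring how many coefficients survive; the helper below is a hypothetical illustration, not the processing code used for Figure 3.1:

```python
import numpy as np

def kept_fraction(coeffs, scale_thresholds, factor=1.0):
    """Fraction of coefficients surviving hard thresholding.

    coeffs           -- nested list: coeffs[scale][angle] is an array
    scale_thresholds -- base threshold per scale
    factor           -- common scaling: ~1 for "gentle", >1 for "strong"
    """
    kept = total = 0
    for s, scale in enumerate(coeffs):
        for arr in scale:
            kept += int(np.count_nonzero(np.abs(arr) >= factor * scale_thresholds[s]))
            total += arr.size
    return kept / total
```

A higher factor retains fewer coefficients, trading residual noise (and some weak signal) for a cleaner but sparser reconstruction, exactly the behaviour seen in Figure 3.1b versus 3.1c.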

3.1.2

Applicability of the 2D DCT to 3D data denoising

The approach to curvelet denoising described in the previous section referred to a 2D seismic inline, for which an implementation of the 2D DCT was used. The natural choice for denoising 3D data would be the 3D DCT (e.g. Ying et al., 2005; Woiselle et al., 2010). However, it is much more computationally expensive than the 2D version due to the extension of the curvelet grid to the third dimension, which significantly increases the number of coefficients and the redundancy. Therefore, we investigated the applicability of the 2D DCT to denoising 3D volumes by slicing the volume into 2D planes. From our experience, it is reasonable to start the denoising with the inline sections, follow with filtering of the crosslines, and finish with the time slices. This sequence is not accidental: as demonstrated in Figure 3.2a-d, it reduces the impact of the signal being more difficult to capture on the time slices than on the vertical sections, especially in the case of hardrock data lacking clear continuous events. By boosting the coherent energy before the final time-slice filtering, we minimize the probability of removing signal dominated by random energy. At the same time, none of the stages alters the coherent energy, which is apparent from inspecting the residuals after each filtering step (Figure 3.2e-g). The proposed approach does not affect the signal consistency at the intersections of the 2D planes (e.g. between inlines and crosslines or inlines and time slices). Figure 3.3 presents a zoom of a 3D chair plot, clearly demonstrating that there are no signal discontinuities after denoising performed by slicing the 3D volume (compare Figure 3.3a,b). We also do not observe apparent phase shifts, which would be visible in the residuals (Figure 3.3c).

Figure 3.2. (a) Example of a single time-slice section (inline/crossline spacing of 30 m/11 m) before curvelet denoising. (b) Time slice from (a) after denoising in the inline direction. (c) Time slice from (a) after denoising in the inline and crossline directions. (d) Time slice from (a) after denoising in the inline, crossline, and time-slice directions. (e) Difference between (a) and (b). (f) Difference between (a) and (c). (g) Difference between (a) and (d).

Figure 3.3. (a) Data before curvelet denoising. (b) Data after curvelet denoising. Note the preserved continuity of events at intersections of planes despite processing of the volume sliced into 2D sections. (c) Difference between (a) and (b) without visible phase shifts.
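The inline, then crossline, then time-slice sequence can be sketched as a loop over the three axes of the volume. The `denoise_2d` callable stands in for the curvelet-domain thresholding; the toy filter below is a placeholder used only to make the sketch runnable.

```python
import numpy as np

def denoise_volume(volume, denoise_2d):
    """Apply a 2D denoising operator to a 3D volume, slice by slice.

    `volume` has axes (inline, crossline, time). Following the sequence
    described in the text, inline sections are filtered first, then
    crosslines, and finally time slices.
    """
    v = volume.copy()
    # inline sections: fix the inline index, filter (crossline, time) planes
    for i in range(v.shape[0]):
        v[i, :, :] = denoise_2d(v[i, :, :])
    # crossline sections: fix the crossline index
    for x in range(v.shape[1]):
        v[:, x, :] = denoise_2d(v[:, x, :])
    # time slices: fix the time index
    for t in range(v.shape[2]):
        v[:, :, t] = denoise_2d(v[:, :, t])
    return v

# placeholder 2D filter for illustration only (NOT the curvelet DCT):
# a light smoothing toward the section mean
def toy_filter(section):
    return 0.5 * section + 0.5 * section.mean()

vol = np.random.default_rng(1).normal(size=(8, 8, 8))
filtered = denoise_volume(vol, toy_filter)
assert filtered.shape == vol.shape
```

Filtering the vertical sections first boosts the coherent energy, so by the time the (hardest) time slices are processed, the signal is less likely to be mistaken for random energy.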

The main challenge when denoising 3D data is estimating the threshold levels for each 2D section. The number of inline/crossline sections and time slices would make an individual, manual setting of the thresholds for every 2D plane very time consuming and would, consequently, dramatically decrease the robustness of the method. To overcome this problem,
we took advantage of the fact that sections from the same volume, extracted in the same direction, exhibit approximately similar characteristics in the frequency and curvelet domains. Instead of using a fixed starting threshold, uniform for all sections within a series of inlines, crosslines, or time slices, we estimate it separately for each section using a percentage of the highest curvelet coefficients. Depending on the size of the section, the number of scales and angles, the volume of noise, and the level of filtering required, preserving 2%-5% of the highest curvelet coefficients is typically a satisfactory initial threshold level. At later stages, reviewing the frequency spectra and the vectors of curvelet coefficients for selected sample sections gives us a good notion of the coherent energy distribution in the curvelet domain. This allows us to adjust the initial threshold level according to the curvelet scales and to achieve better random energy attenuation. Adaptive thresholding is also important when taking into account the differences in signal appearance when comparing, e.g., inline sections with time slices. The variable frequency characteristics of the signal and noise, observed between sections representing different planes, demand a careful selection of curvelet coefficients that is usually unobtainable with a single, uniform threshold level. Summarizing this section, we conclude that curvelets are well suited for hardrock seismic data denoising. The practical application of the DCT is intuitive and gives promising results. We introduced thresholding of the coefficients according to curvelet scales, which supports better signal and noise separation. Processing 3D volumes requires simple data slicing and individual processing of the 2D sections. This partitioning, however, does not harm the seismic signal and allows for independent thresholding of each section, taking into account its frequency characteristics.
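The percentage-based estimate of the initial threshold can be written directly as a percentile of the coefficient magnitudes. The function name and interface below are illustrative, not part of the original workflow.

```python
import numpy as np

def initial_threshold(coeff_magnitudes, keep_percent=3.0):
    """Threshold that preserves the `keep_percent` largest coefficients.

    As described in the text, preserving 2%-5% of the highest curvelet
    coefficients is typically a satisfactory starting point; the exact
    percentage is a per-dataset choice. `coeff_magnitudes` is a flat
    array of |c| values for one 2D section.
    """
    # everything above the (100 - keep_percent)-th percentile of the
    # magnitudes belongs to the retained top fraction
    return np.percentile(coeff_magnitudes, 100.0 - keep_percent)

mags = np.abs(np.random.default_rng(2).normal(size=10000))
t = initial_threshold(mags, keep_percent=3.0)
kept = np.count_nonzero(mags > t) / mags.size
# roughly 3% of the coefficients survive the initial threshold
assert 0.02 < kept < 0.04
```

Because the threshold is derived from each section's own coefficient distribution, sections of different size and noise level automatically receive different starting thresholds, which is the point of the adaptive scheme.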

3.2

Application to 3D datasets acquired in different mining camps

Here, we present results from the application of the previously described denoising approach to 3D datasets acquired in three mining camps located in Canada (Flin Flon, Lalor Lake and
Brunswick). The chosen datasets exhibit diverse quality of the imaged targets, with a variable degree of contamination by random noise and scattered energy. For the data acquired in the Flin Flon and Brunswick areas (significantly contaminated by random energy), we use the 3D dip-moveout (DMO) stacks, which preserve diffraction signatures, sometimes a possible expression of ore deposits. In a migrated stack, these circular patterns are collapsed at the apex of the diffraction hyperbola, making them smaller and easier to overlook, especially when contaminated by noise (Adam et al., 2003). The recently acquired Lalor dataset exhibits better quality; hence, we used the post-DMO migrated volume for filtering. We compare the obtained results with more standard filtering methods (e.g. F-XY deconvolution) used during processing.

3.2.1

Flin Flon

The first dataset comprises a 17-km² 3D survey acquired in 2007 within the Flin Flon mining camp, Manitoba, Canada (White et al., 2012). This survey was designed to push exploration to greater depths and establish new exploration targets. However, the Flin Flon mining camp is a challenging environment for geophysical exploration. First, the area is an active industrial district, including not only infrastructure related to the production and processing of ore, but also a community that developed close to and around the mines over the years of exploration. The area of the 3D survey includes several power lines, railways, main roads, and streets. This setting generates a noisy background for seismic recording and limits the strength of the sources that can be employed, particularly near existing infrastructure. Second, the natural terrain conditions within the survey area are highly variable, comprising exposed bedrock, swamps, pockets of shallow glacial deposits, and lakes. This, combined with the limited accessibility of some areas of the 3D survey, required the use of different kinds of sources (dynamite, vibroseis, and airgun; see White et al., 2012). The 3D volume used in the filtering is composed of 171 inlines spaced every 25 m, 340 crosslines spaced every 12.5 m, and 1000 time slices every 2 ms.

Figure 3.4a-c present chair plots of the input, F-XY-deconvolved, and curvelet-processed data, respectively. The difficulties encountered during the acquisition are clearly reflected in the quality of the data. The input volume (Figure 3.4a) contains a significant share of random energy contaminating the signal. Only the strongest coherent events are visible in the inline direction, whereas in the crossline and time-slice planes the signal is difficult to track. After F-XY deconvolution (Figure 3.4b), some minor improvements are visible; however, the data remain relatively noisy. Events that were visible in the input sections are enhanced; nevertheless, some “shadow zones” are still dominated by incoherent energy. The data quality increases significantly after the application of curvelet denoising (Figure 3.4c). The amount of incoherent energy is minimized, and events undetectable in the input data are now easy to follow in all directions. Moreover, the residuals, i.e., the differences between the input and the filtered volumes (Figure 3.4d,e), maintain their incoherent character, implying that no coherent energy was altered during either F-XY filtering or curvelet denoising. However, the results obtained with curvelets are far better both qualitatively and quantitatively.

Figure 3.4. (a) Chair plot of the DMO volume from the Flin Flon mining camp before denoising. (b) Data from (a) after F-XY deconvolution. (c) Data from (a) after curvelet denoising. (d) Difference between (a) and (b). (e) Difference between (a) and (c).

Figure 3.5. SNR estimated for each inline from the Flin Flon DMO volume before denoising (blue x) and after F-XY deconvolution (green x) and curvelet denoising (red x). Mean SNR values are marked by dashed lines.

Our results are benchmarked by the mean SNR improvement. Figure 3.5 shows the SNR for all inlines before and after filtering. The input data SNR is 1.01, whereas the SNR for the F-XY-deconvolution and curvelet results is 1.41 and 3.04, respectively. The values for both filtering methods follow the trend of the initial SNR. The higher SNR obtained with curvelet denoising proves that this method gives not only a qualitative but also a quantitative enhancement of the signal.
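A per-section SNR can be estimated in several ways; the exact estimator used for the figures is not restated here. One common proxy, sketched below under that caveat, approximates the coherent energy by the zero-lag correlation of neighbouring traces and treats the remainder of the total energy as noise.

```python
import numpy as np

def section_snr(section):
    """Rough SNR proxy for a 2D section (samples in rows, traces in columns).

    NOT necessarily the estimator used in this work: coherent energy is
    approximated by the zero-lag crosscorrelation of neighbouring traces,
    and the noise energy by the rest of the total energy.
    """
    a, b = section[:, :-1], section[:, 1:]
    coherent = np.abs(np.sum(a * b))            # energy shared by neighbours
    total = 0.5 * (np.sum(a * a) + np.sum(b * b))
    noise = max(total - coherent, 1e-12)        # guard against division by ~0
    return coherent / noise

rng = np.random.default_rng(3)
trace = rng.normal(size=(200, 1))
coherent_section = np.tile(trace, (1, 50)) + 0.1 * rng.normal(size=(200, 50))
noise_section = rng.normal(size=(200, 50))
snr_clean = section_snr(coherent_section)
snr_noisy = section_snr(noise_section)
assert snr_clean > snr_noisy
```

Applied inline by inline, such a measure produces curves like those in Figure 3.5, and it also explains the caveat discussed for the Lalor data: any operation that increases trace-to-trace correlation, even by flattening real dipping events, inflates this kind of estimate.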


3.2.2


Lalor Lake

The Lalor 3D seismic data were acquired in 2013 over a volcanogenic massive sulphide deposit discovered in 2007 in the Lalor mining camp, Manitoba, Canada (Bellefleur et al., 2015). The 25-Mt deposit sits at depths ranging from 570 m to 1160 m and comprises both zinc- and gold-rich zones that dip shallowly (about 30°) to the northeast. The deposit is located approximately in the middle of the 16-km² survey area. The Lalor area lies away from main roads, towns, and large industrial infrastructure. As a result, the region is seismically quiet, especially when compared to the Flin Flon area. The quality of the data is generally very good, with many continuous reflections observed. Not only the low noise, but also the shallower dips of the geology in the area played an important role in the continuity and strength of the reflections in this dataset. The 3D volume used in the filtering is composed of 151 inlines spaced every 25 m, 359 crosslines spaced every 12.5 m, and 2001 time slices every 1 ms. Figure 3.6a presents the input volume (a post-DMO migration in this case) with strong signal visible in all planes. However, the signal in the shallow part consists of weak, short, gently dipping events mixed with rather dominant incoherent energy. After F-XY deconvolution, the data look cleaner (Figure 3.6b), but with limited signal enhancement. Moreover, an inspection of the residual (Figure 3.6d) suggests that some coherent events were removed, and not only in the shallow part of the data. In contrast, the application of curvelet denoising recovers a clear signal with improved continuity (Figure 3.6c). Even the weak, shallow events contaminated by the noise were boosted rather than removed. The residuals between the raw and curvelet-processed data (Figure 3.6e) do not exhibit coherent energy, contrary to the residuum after F-XY deconvolution (compare Figure 3.6d,e).
Even though these shallow events were not our primary target, they were properly recovered only with the curvelet denoising approach. The analysis in Figure 3.7 shows that the quality of the input volume is about two times higher (SNR = 2.16) than the initial SNR of the Flin Flon dataset. The final SNR after curvelet denoising is estimated at 5.83, which is a significant improvement in signal quality. In the case of the SNR measured after F-XY deconvolution (SNR = 3.53), we observe an increase in the SNR in approximately the first and last 20 inlines, i.e., a higher increase in the SNR than for the other sections. The same situation was also noted for the crosslines. Such an increase is particularly surprising, especially when we take into account the generally lower fold of the outer bins and possible DMO/migration artefacts.

Figure 3.6. (a) Chair plot of the post-DMO migrated volume from the Lalor mining camp before denoising. (b) Data from (a) after F-XY deconvolution. (c) Data from (a) after curvelet denoising. (d) Difference between (a) and (b). (e) Difference between (a) and (c).

Figure 3.7. SNR estimated for each inline from the Lalor 3D volume before denoising (blue x) and after F-XY deconvolution (green x) and curvelet denoising (red x). Mean SNR values are marked by dashed lines.

Figure 3.8. (a) Part of the inline section from the Lalor 3D volume (crossline spacing of 12.5 m) before denoising. (b) Section from (a) after F-XY deconvolution. Note the significant reduction of seismic energy down to 500 ms. (c) Section from (a) after curvelet denoising. Weak, poorly correlated, short events were extracted in the shallow part down to 500 ms.

After a closer look at the data processed with F-XY deconvolution, we concluded that many of the weak, high-frequency events (most likely migration artefacts) were damaged or removed in the inlines and crosslines near the volume edges. This is clearly visible down to 500 ms where, as mentioned above, weak signal was mixed with a higher amount of
incoherent energy (see Figure 3.8a). Figure 3.8b,c demonstrate part of inline 149 after F-XY deconvolution and curvelet denoising, respectively. Many of the short, densely layered, dipping events (200-500 ms) were “flattened” in the F-XY data. This most likely increased the correlation between traces and, in effect, resulted in an apparently higher SNR. By contrast, we do not observe a similar trend in the case of the data processed with curvelets.

3.2.3

Brunswick no. 6

The Bathurst Mining Camp, New Brunswick, was a major base-metal-producing area in Canada. It comprises several massive sulphide deposits. Here, we use 3D data acquired in 2000 around the Brunswick no. 6 mine area. The volcanic-hosted massive sulphide deposits are associated with a steeply dipping reflector package (60°-70°) that can be tracked down to 5 km in the crust (Malehmir and Bellefleur, 2010; Cheraghi et al., 2011). Three 2D profiles acquired in 1999 provided high-resolution images, showing reflections correlating with structures and lithological contacts observed at the surface. Subsequently, a 38-km² 3D survey was acquired in 2000 to map key mineralization horizons and potentially define new exploration targets. Although 3D seismic data were required for proper imaging of the complex steeply dipping structures, they provided lower resolution and more discontinuous reflections than observed in the 2D data. Cheraghi et al. (2012) partly related the difficult imaging in 3D to the acquisition geometry and footprint, and investigated different binning strategies to improve the final volume quality. The non-orthogonal 3D geometry used for this survey was designed so that the receiver lines were perpendicular to the strike of the main geological structures for better imaging of these targets. However, this setting was not suitable for recovering structures with other dip directions. The best volume obtained by Cheraghi et al. (2012) was used as the input data for denoising. This dataset is clearly the noisiest among the three presented in this chapter. The 3D volume used for the filtering is composed of 236 inlines spaced every 30 m, 512 crosslines spaced every 11 m, and 1001 time slices every 2 ms.

The chair plot of the input volume (DMO stack) presented in Figure 3.9a exhibits a rather high amount of incoherent energy. Some discontinuous, steeply dipping events, visible in the inline plane, barely extend into the crossline direction. The time-slice planes are dominated by random energy without any clear signal. Results obtained for this volume using F-XY deconvolution were of significantly worse quality than those obtained for the Flin Flon and Lalor datasets (not shown here). In addition to F-XY deconvolution, we therefore applied F-X deconvolution in both the inline and crossline directions. The results of this approach, presented in Figure 3.9b, not only clearly show enhanced signal continuity in the crosslines and inlines, but also bring out some coherent energy in the time slices. Evidently, a fair portion of the scattered energy was removed; however, some coherent events following the dipping structure are observed in the residuum (Figure 3.9d). Again, the curvelet denoising approach produces clearly superior results (Figure 3.9c) without damaging the signal. Events are now easier to follow from one plane to another, making the structure more apparent for interpretation. The evidence of shallow reflections in the residuum (see the difference between the raw and curvelet-processed volumes in Figure 3.9e) is a consequence of the acquisition footprint suppression described later in this section. Similar to the case of the Lalor data, we observe that, even with careful F-X deconvolution filtering, we were unable to obtain results comparable with the curvelet denoising with respect to both signal improvement and random noise attenuation.

Figure 3.9. (a) Chair plot of the DMO volume from the Brunswick survey before denoising. (b) Data from (a) after F-X deconvolution performed twice, i.e., in the inline and crossline directions. (c) Data from (a) after curvelet denoising. (d) Difference between (a) and (b). (e) Difference between (a) and (c).

Figure 3.10. SNR estimated for each inline from the Brunswick 3D volume before denoising (blue x) and after F-XY deconvolution (green x) and curvelet denoising (red x). Mean SNR values are marked by dashed lines.

Figure 3.10 summarizes the SNR for this dataset. The initial SNR is rather uniform for all sections (SNR = 1.69). For a few of the outermost sections, we notice a decrease in the SNR,
related probably to the lower fold or the DMO footprint. A similar tendency can be observed in the case of the Flin Flon dataset. We do not observe a remarkable increase in the SNR after F-X deconvolution (SNR = 2.04), although the qualitative results were better. This may suggest the occurrence of subtle artefacts introduced by the prediction filtering, which decrease the correlation between traces. The significant improvement of the data quality after curvelet denoising is confirmed by the increase in the SNR (SNR = 3.41). Taking into account the volume of the extracted noise and the simultaneous signal preservation, these results confirm the effectiveness of curvelet denoising in a difficult hardrock area. Cheraghi et al. (2012) drew attention to the acquisition footprint in their analysis of the influence of the acquisition geometry and processing on the final data quality. Several methods have been proposed to remove such artefacts (e.g. Drummond et al., 2000; Al-Bannagi et al., 2005; Soubaras, 2002; Gulunay et al., 2006). In this dataset, those artefacts are mostly visible in the form of shallow, rather sub-horizontal, high-amplitude reflections. These reflections obscure the true structure and hinder accurate interpretation. The easiest way to remove them is the application of an F-K mute. This method, however, works globally on the entire dataset and removes all events within the range of the muted dips. Therefore, the F-K filter not only removes the shallow, horizontal, and gently dipping events, but also affects deeper reflections related to the geological structure. In the curvelet domain, coefficients are well localized in space, which provides the possibility to remove certain dips locally. Moreover, rather than muting all curvelets of a particular angle and frequency, we use an angle-adaptive thresholding approach (Górszczyk et al., 2014).
In this particular case, instead of removing low-value coefficients as in random noise attenuation, we mute the curvelets above a certain level, which represent the high-amplitude reflections. This approach minimizes the amount of artefacts typical for F-K mute filtering. Figure 3.11a,b present a raw and a curvelet-processed inline section of the Brunswick 3D dataset. The improvement of the coherent energy is undeniable. However, by removing random energy, we also significantly enhanced the shallow events related to the acquisition footprint. Application of the F-K filter to both the raw and the curvelet-processed data improves the coherency of the main steeply dipping events and partially removes the impact of the shallow reflections on the true geological structure down to about 600 ms (see the corresponding Figure 3.11c,d). However, a serious disadvantage of this filter is the removal of useful signal with dips coinciding with the applied mute function. This can be observed in the case of the event dipping in the opposite direction (see the green ellipse in Figure 3.11). This event can no longer be recognized in Figure 3.11c, whereas after the F-K mute was applied to the curvelet-processed section, only a shadow of the reflection is visible, leaving doubts as to whether it is a real event or an artefact of the F-K filtering (see Figure 3.11d). By taking advantage of the curvelet coefficients’ parametrization, which comprises the spatial coordinates, we performed an additional angle-dependent thresholding on the inline sections. The result (Figure 3.11e) leaves no questions about the presence of the dipping structure primarily mixed with the shallow reflections. Events of similar dips in other parts of the section were preserved, and no artefacts were introduced. This result proves that curvelets are not only suitable for random noise attenuation, but can also find application in solving other issues related to seismic signal enhancement (e.g. directional filtering).

Figure 3.11. (a) An inline section from the Brunswick 3D volume (crossline spacing of 11 m) before denoising, with shallow reflections hiding the true structure marked by a black ellipse and an event with a direction opposite to the main structure marked by a green ellipse. (b) Section from (a) after curvelet denoising. (c) and (d) Sections from (a) and (b) after application of the F-K mute. The shallow structure is now visible, but events with dips coinciding with the applied mute are damaged. (e) Section from (b) after additional angle-adaptive thresholding of curvelet coefficients. The shallow structure is clearly visible, and all events are preserved.
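The angle-adaptive muting of high-amplitude coefficients can be sketched as follows. As before, the nested `coeffs[scale][angle]` layout and the function interface are assumptions for illustration; the spatial windowing that the coefficients' localization would also permit is omitted for brevity.

```python
import numpy as np

def mute_angles(coeffs, target, amp_level):
    """Angle-adaptive muting of curvelet coefficients (sketch).

    For the (scale, angle) pairs listed in `target`, coefficients whose
    magnitude EXCEEDS `amp_level` are zeroed - the opposite of random
    noise attenuation, where the low-value coefficients are removed.
    All other wedges are passed through untouched.
    """
    out = []
    for j, wedges in enumerate(coeffs):
        new = []
        for a, c in enumerate(wedges):
            if (j, a) in target:
                # kill only the strong coefficients in the targeted wedge
                c = np.where(np.abs(c) > amp_level, 0.0, c)
            new.append(c)
        out.append(new)
    return out

# toy example: one scale with 4 angular wedges; mute only wedge (0, 1)
rng = np.random.default_rng(4)
coeffs = [[rng.normal(size=(4, 4)) for _ in range(4)]]
muted = mute_angles(coeffs, target={(0, 1)}, amp_level=0.5)
assert np.array_equal(muted[0][0], coeffs[0][0])   # untargeted wedge intact
assert np.all(np.abs(muted[0][1]) <= 0.5)          # strong values removed
```

Unlike a global F-K mute, only the high-amplitude coefficients in the selected dip wedges are removed, so weak events sharing the same dip elsewhere in the section survive.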

3.3

Discussion

Based on the experience gained from filtering three different 3D datasets, we discuss here the practical aspects of applying the curvelet denoising method in a hardrock environment. A crucial point of the whole denoising workflow is the selection of the threshold levels, which might be adjusted according to certain scales and dips in the curvelet domain. Without a good understanding of the curvelet properties, one is unable to precisely localize and separate the coefficients representing the signal of interest from the useless portion of the energy. In order to get a clear idea of the changes introduced while attenuating certain curvelet coefficients, it is good practice to analyze not only the resulting section, but also its F-K spectrum. Additionally, comparing the vector of curvelet coefficients before and after denoising provides insight into the relation between the selected threshold level and the results. While searching for the most appropriate thresholds, a few forward and inverse DCTs need to be performed. During these iterations, special attention is paid to the difference between the
input and the filtered data. As long as the residuals do not exhibit coherent features, the signal is preserved and more aggressive thresholding can be attempted (i.e., the stronger filtering approach). However, as described in Section 3.1.1 and presented in Figure 3.1, one needs to be cautious when selecting the threshold levels. The fact that there is no upper limit on the threshold level not only gives a high degree of freedom during denoising, but also leaves the door open for overfiltering and possible signal removal. The flexibility in selecting the weighting parameters leads to an ensemble of possible denoising results. This, in turn, raises the question of whether other random noise filtering methods, like F-X/F-XY deconvolution, should be applied prior to curvelet denoising. To this end, we applied the curvelet denoising workflow to the datasets after F-X/F-XY deconvolution, with exactly the same parameters as used for the raw volumes. The obtained results (compared with those after curvelet denoising alone) revealed similar quality according to the “eyeball norm”. The improvement in SNR was approximately 7% and 3% for the Flin Flon and Brunswick datasets, respectively. A slightly higher increase was noted in the case of the Lalor volume (14%). In the latter case, however, the increase is caused by strengthening the effect produced by the F-XY deconvolution at the volume’s edges, where some shallow, steeply dipping events were flattened (see Section 3.2.2 and Figures 3.7 and 3.8). Furthermore, curvelet denoising of the raw data with thresholds increased by 10% over the initial level resulted in an SNR higher than in the cases using combined F-X/F-XY deconvolution and curvelet denoising. This confirms our inference that curvelets might be self-sufficient for optimal random noise attenuation. Combining this method with other coherency filters does not necessarily improve the final results because of possible signal damage (e.g. F-X/F-XY deconvolution, which predicts a single dip within the design window and can easily affect events representing other dips). As mentioned earlier, during random energy attenuation one needs to pay special attention not to remove coherent signal. Since curvelets are “naturally” well designed to separate random noise, this objective is usually easy to meet. On the other hand, it is equally important not to produce “fake signal”, which could lead to misinterpretation.
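The iterative search for the strongest acceptable threshold, stopping as soon as the residual starts to show coherent features, can be sketched as a simple loop. Both the denoising operator and the residual-coherence measure below are placeholders for the procedures described in the text (visual inspection of residuals and F-K spectra in practice).

```python
import numpy as np

def tune_threshold(data, denoise, thresholds, residual_coherence, max_coh):
    """Pick the strongest threshold whose residual stays incoherent.

    `denoise(data, t)` is any thresholding filter; `residual_coherence`
    maps a residual section to a scalar coherence measure. We accept
    increasingly aggressive thresholds as long as the residual
    (input - filtered) shows no coherent features.
    """
    best = None
    for t in sorted(thresholds):
        residual = data - denoise(data, t)
        if residual_coherence(residual) > max_coh:
            break          # coherent energy leaking into the residual: stop
        best = t
    return best

# placeholder operators for a runnable, deterministic demonstration
def toy_denoise(d, t):
    return np.where(np.abs(d) > t, d, 0.0)      # hard thresholding

def toy_coherence(r):
    return np.abs(np.mean(r[:, :-1] * r[:, 1:]))  # neighbour-trace product

# a perfectly coherent "section": t = 5 would push all of it into the
# residual, so the loop stops and returns the gentler threshold
data = 3.0 * np.ones((4, 4))
best = tune_threshold(data, toy_denoise, [1.0, 5.0], toy_coherence, max_coh=1.0)
assert best == 1.0
```

The loop mirrors the manual practice: each candidate threshold costs one forward/inverse DCT pair, and the residual check is the guard against overfiltering.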


Since we are aware of the artefacts produced by denoising in the F-K domain or by predictive deconvolution, we could expect the same tendency in the case of a relatively strong coherency filter employing curvelets. Remarkably, there is no strong evidence that the presented method tends to produce ambiguous signal by fitting curvelets into even strong noise. Our approach can be further developed by establishing a method of automatic threshold estimation for random noise attenuation. Such an optimization can be designed using, e.g., statistical methods. This, however, is not a trivial problem even for a single filtering task, which, in turn, certainly hampers a wide application of curvelet denoising in production processing. From our experience, we acknowledge that the final threshold adjustment is always made a posteriori. Moreover, in the case of real data, incoherent noise in the form of completely random energy is rather rare; it often appears as more or less correlated scattered features. In data acquired in a hardrock environment, the amount of such scattered energy is remarkably high. Although it is referred to as noise, it might originate from a portion of poorly migrated signal or from migration artefacts. These features are usually significantly less coherent than the imaged structure; however, they are still far from a clearly random distribution and hence require stronger filtering. Taking this fact into account, we remain convinced that the key to optimal curvelet-based denoising is the understanding of the DCT properties and the investigation of the coefficients in the curvelet domain, rather than the automation of this process. This was also demonstrated during the enhancement of the shallow part of the Brunswick volume. Standard random energy attenuation was insufficient in this case, and removing the coherent flat events arising from the acquisition footprint was essential for proper imaging of the dipping structures.
Finally, the workflow proposed here for 3D data denoising utilizes the 2D DCT. The existence of the 3D DCT implementation proposed, e.g., by Ying et al. (2005), and of its less redundant version developed by Woiselle et al. (2010), leaves open the option of comparing the results obtained with each algorithm. However, a serious disadvantage of the 3D DCT (related to
the increase in the curvelet grid dimension) is the need for intensive computational resources and memory (even 40 times the size of the input data; see Neelamani et al., 2008). Taking into account that a few iterations of forward and inverse DCTs are required during the threshold adjustment, the whole process might become computationally very expensive. Perhaps employing an implementation of the 3D DCT would increase the quality of the results, but the application of scale- and/or angle-adaptive thresholding might be highly laborious in this case, making the whole filtering procedure hard to apply in practice. Considerations about the robustness and performance of 3D curvelet-based seismic data denoising, confronted with the results presented here, might form the subject of separate research.

3.4

Conclusions

We have demonstrated an efficient workflow for denoising 3D post-stack seismic data using the 2D DCT and, subsequently, demonstrated its benefits on three different 3D seismic volumes from different hardrock environments. The algorithm proves that the 2D DCT can be successfully applied to 3D data denoising. The results obtained for three volumes characterized by different data quality revealed a remarkable qualitative and quantitative signal enhancement. We showed that curvelets can be used not only for random energy attenuation, but also to remove certain features corrupting the data (e.g. localized events with certain dips). The comparison with F-X/F-XY deconvolution clearly shows the superiority of our approach with respect to signal enhancement, signal preservation, and the amount of removed noise. The whole workflow leaves a high degree of flexibility in the parametrization. We emphasize that the proper adjustment of the thresholds according to scales and angles is the most important step, with a direct impact on the obtained results. Even when processing highly noise-contaminated data, our approach will strengthen the signal and will not “invent” strong artefacts from the incoherent energy. This is especially important from the interpreter’s point of view, since producing “fake signal” may lead to misinterpretation. All the analysed datasets after curvelet denoising are
better suited for interpretation. Even complex structures with varying and intersecting dips are easier to follow in the denoised data. We believe that this approach can significantly reduce difficulties in the interpretation of hardrock datasets arising from the noise contamination.


Acknowledgements

This work was funded by the Polish National Science Centre under Grant 2011/03/D/ST10/05128. The authors would like to thank Saeid Cheraghi and Donald White for providing the processed Brunswick and Flin Flon 3D volumes. The DCT implementation was taken from the CurveLab project (www.curvelet.org).

References

Adam, E., L’Heureux, E., Bongajum, E., Milkereit, B., 2008. 3D seismic imaging of massive sulfides: seismic modeling, data acquisition and processing issues. In: SEG Technical Program Expanded Abstracts 2008. Society of Exploration Geophysicists.

Adam, E., Perron, G., Arnold, G., Matthews, L., Milkereit, B., 2003. 3D seismic imaging for VMS deposit exploration, Matagami, Quebec. In: Hardrock Seismic Exploration. Society of Exploration Geophysicists, pp. 229–246.

Al-Bannagi, M. S., Fang, K., Kelamis, P. G., Douglass, G. S., 2005. Acquisition footprint suppression via the truncated SVD technique: Case studies from Saudi Arabia. The Leading Edge 24 (8), 832–834.

Bellefleur, G., Schetselaar, E., White, D., Miah, K., Dueck, P., 2015. 3D seismic imaging of the Lalor volcanogenic massive sulphide deposit, Manitoba, Canada. Geophysical Prospecting 63 (4), 813–832.

Cheraghi, S., Malehmir, A., Bellefleur, G., 2011. Crustal-scale reflection seismic investigations in the Bathurst Mining Camp, New Brunswick, Canada. Tectonophysics 506 (1-4), 55–72.

Cheraghi, S., Malehmir, A., Bellefleur, G., 2012. 3D imaging challenges in steeply dipping mining structures: New lights on acquisition geometry and processing from the Brunswick no. 6 seismic data, Canada. Geophysics 77 (5), WC109–WC122.


Drummond, J. J. M., Budd, A. J. L., Ryan, J. W., 2000. Adapting to noisy 3D data attenuating the acquisition footprint. In: SEG Technical Program Expanded Abstracts 2000. Society of Exploration Geophysicists.

Eaton, D. W., Milkereit, B., Salisbury, M. H. (Eds.), 2003. Hardrock Seismic Exploration. Society of Exploration Geophysicists.

Górszczyk, A., Adamczyk, A., Malinowski, M., 2014. Application of curvelet denoising to 2D and 3D seismic data — Practical considerations. Journal of Applied Geophysics 105, 78–94.

Gulunay, N., Benjamin, N., Magesan, M., 2006. Acquisition footprint suppression on 3D land surveys. First Break 24 (1092), 71–77.

Hennenfent, G., Cole, J., Kustowski, B., 2011. Interpretative noise attenuation in the curvelet domain. In: SEG Technical Program Expanded Abstracts 2011. Society of Exploration Geophysicists.

Kong, S. M., Phinney, R. A., Roy-Chowdhury, K., 1985. A nonlinear signal detector for enhancement of noisy seismic record sections. GEOPHYSICS 50 (4), 539–550.

Malehmir, A., Bellefleur, G., 2010. Reflection seismic imaging and physical properties of base-metal and associated iron deposits in the Bathurst Mining Camp, New Brunswick, Canada. Ore Geology Reviews 38 (4), 319–333.

Malehmir, A., Durrheim, R., Bellefleur, G., Urosevic, M., Juhlin, C., White, D. J., Milkereit, B., Campbell, G., 2012. Seismic methods in mineral exploration and mine planning: A general overview of past and present case histories and a look into the future. GEOPHYSICS 77 (5), WC173–WC190.

Milkereit, B., Spencer, C., 1989. Noise suppression and coherency enhancement of seismic data. In: Statistical application in the earth sciences. Vol. 89. Geological Survey of Canada, pp. 243–248.


Neelamani, R., Baumstein, A. I., Gillard, D. G., Hadidi, M. T., Soroka, W. L., 2008. Coherent and random noise attenuation using the curvelet transform. The Leading Edge 27 (2), 240–248.

Soubaras, R., 2002. Attenuation of acquisition footprint for non-orthogonal 3D geometries. In: SEG Technical Program Expanded Abstracts 2002. Society of Exploration Geophysicists.

Urosevic, M., Kepic, A., Juhlin, C., Stolz, E., 2008. Hard rock seismic exploration of ore deposits in Australia. In: SEG Technical Program Expanded Abstracts 2008. Society of Exploration Geophysicists.

White, D. J., Secord, D., Malinowski, M., 2012. 3D seismic imaging of volcanogenic massive sulfide deposits in the Flin Flon mining camp, Canada: Part 1 — seismic results. GEOPHYSICS 77 (5), WC47–WC58.

Woiselle, A., Starck, J.-L., Fadili, J., 2010. 3-D Data Denoising and Inpainting with the Low-Redundancy Fast Curvelet Transform. Journal of Mathematical Imaging and Vision 39 (2), 121–139.

Ying, L., Demanet, L., Candes, E., 2005. 3D discrete curvelet transform. In: Papadakis, M., Laine, A. F., Unser, M. A. (Eds.), Wavelets XI. SPIE-Intl Soc Optical Eng.

Chapter 4 Improving depth imaging of legacy seismic data using curvelet-based gather conditioning: a case study from Central Poland

4.1 Introduction

Depth imaging is nowadays routinely applied to support exploration in various plays (Etgen et al., 2009), and in certain settings (such as sub-salt imaging) it provides much better results than time imaging. It is also applicable to legacy data; however, the quality of the results depends directly on the acquisition parameters and data quality. Velocity model building based on reflection tomography (Woodward et al., 2008), employed during PreSDM, requires careful and consistent residual moveout (RMO) picking in the migrated gathers. The RMOs are analysed on the CDP gathers and then flattened to improve focusing. Automating this process assumes that the data are of sufficient quality to track coherent energy representing the corresponding reflections. This assumption is poorly met when performing depth imaging of vintage data, which typically exhibit low fold, relatively low SNR and short recorded offsets. Proper data conditioning, allowing for noise attenuation and signal enhancement, becomes crucial in this case. Conventional gather conditioning tools such as trace mixing, F-X deconvolution and various coherency filters might be insufficient. As an alternative to conventional depth imaging, a group of methods based on fitting multiparameter surfaces (kinematic attributes) in the pre-stack domain, like common reflection surface (CRS) stacking (e.g. Mann et al., 1999) or multifocusing (e.g. Berkovitch et al., 2008), proved very effective in improving the structural imaging of legacy data in the time domain (e.g. Baykulov et al., 2009). CRS-derived attributes can also be inverted for a velocity model in depth using NIP-wave tomography (Duveneck, 2004), and hence the CRS method can be considered an alternative velocity model building tool. Despite the advantages offered by the CRS and multifocusing methods, in this chapter we explore the applicability of conventional reflection tomography and pre-stack depth migration to imaging legacy data. We develop a novel, curvelet-based, pre-stack gather denoising and conditioning scheme that produces more consistent RMO picks than those obtained from gathers processed using traditional coherency filters. It is supplemented by an additional, statistical quality control of the picked RMOs. Our workflow is then tested by imaging vintage 2D seismic data acquired in Central Poland in the 1970s-80s in an area affected by intense salt tectonics. It provides a portrait of the salt structures (e.g. domes, pillows) and the deformed overlying sediments in the depth domain, which is important for calibrating the structural interpretation against boreholes.

∗ Chapter based on: Górszczyk, A., Cyz, M., Malinowski, M., 2015. Improving depth imaging of legacy seismic data using curvelet-based gather conditioning: A case study from Central Poland. Journal of Applied Geophysics 117, 73–80.

4.2 Geological background

The studied area is located within the Kuiavian segment of the Mid-Polish Swell (MPS) (Figure 4.1). The MPS was formed as a result of regional uplift and inversion of the Mid-Polish Trough: a Permo-Mesozoic basin that evolved above the Teisseyre-Tornquist Zone, one of the most fundamental lithospheric-scale boundaries in continental Europe (Pharaoh et al., 2006). Following deposition of the Lower Permian (Rotliegendes) siliciclastics and Upper Permian (Zechstein) evaporites, the Mid-Polish Trough was filled by a several-km-thick Triassic-Cretaceous succession, mostly siliciclastics and carbonates, as well as Upper Triassic evaporites. Development of the Triassic-Cretaceous cover was governed by (I) regional subsidence related to deeper crustal processes (e.g. basement faulting, thermal subsidence) and (II) intense salt tectonics related to the Upper Permian (Zechstein) evaporites.

Figure 4.1. Location of the study area (dashed rectangle) in Central Poland. Known salt structures are marked by different colors: yellow - salt pillows, pink - partially pierced salt diapirs, orange - fully pierced salt diapirs, and blue - non-salt anticlines. Gray shading marks the extent of the Mid-Polish Swell outlined by the sub-Cenozoic sub-crops of the Lower Cretaceous or older rocks (map modified from Krzywiec, 2012b). The inset on the right shows the location of the transects selected for depth imaging (blue lines).

Two main tectono-stratigraphic domains can be distinguished in the study area, based on the role that basement tectonics played in the development of the salt structures. The Kłodawa Salt Diapir (KSD) and the area located NE of this structure were formed above the inferred pre-Permian basement fault (the Kłodawa fault) (Krzywiec, 2004). The domain located SW of the KSD developed with no involvement of thick-skinned basement tectonics. It includes the Ponętów-Wartkowice salt structure, which was shaped in two phases: (i) Triassic-Jurassic “flip-flop” salt tectonics (Quirk and Pilcher, 2012) and (ii) Late Cretaceous-Early Tertiary compressional reactivation (Krzywiec, 2012a). The depth imaging of the legacy seismic data was part of a larger research project (JURASHALE, see Acknowledgements) targeted at investigating a prospective Jurassic-Triassic unconventional play located SW of the KSD. According to the exploration model of the Strzelecki Energia company involved in the project, the potential or proven source-rock intervals in this area include: Lower Keuper (Upper Triassic) mudstones, Pliensbachian-Toarcian (Lower Jurassic) mudstones, Aalenian-Bathonian (Middle Jurassic) mudstones, and Kimmeridgian (Upper Jurassic) mudstones and marls.

4.3 Data

Depth imaging was performed for 4 transects with a total length of 200 km (Figure 4.1, Table 4.1). The original data (10 profiles), acquired between 1977 and 1984 by PBG and Geofizyka Kraków, were reprocessed in 2012 by Geofizyka Toruń up to pre-stack time migration (PreSTM). A significant improvement in data quality was achieved as compared with the original processing. Additionally, transects consisting of multiple lines with different source/receiver spacing were merged together and rebinned using the maximum group spacing. Before the initial PreSDM, residual spikes were removed and an AGC with a 1000 ms window, followed by trace balancing, was applied. Three of the transects (W-lines) are characterized by relatively large group intervals (75 m) and offsets up to 3600 m, with the exception of the eastern part of transect WA04WC04 (group interval 50 m and offsets up to 1200 m). The KC05KF05 transect consists of profiles acquired with a group interval of 20-35 m and offsets up to 1900 m. Data for all transects are characterized by a low fold (12-48). See Table 4.1 for all acquisition parameters.

Transect      Length [km]   Fold           Shot interval [m]   Group interval [m]   Maximum offset [m]
KC05KF05      53.112        24 / 48        20 / 35             20 / 35              1645 / 1920
WA04WC04      52.575        12 / 24 / 24   150 / 50 / 75       75 / 50 / 75         1200 / 3600 / 3600
WA110478      47.700        12             150                 75                   3600
W0030277M     46.800        24             75                  75                   3600

Table 4.1. Acquisition parameters of the 4 transects used in the imaging (multiple values refer to the constituent profiles of a merged transect).

4.4 Pre-stack depth migration workflow

Depth imaging is an iterative process that consists of a velocity model building step and the actual migration. After each iteration of PreSDM, hyperbolic RMOs are picked on the CDP gathers. The starting interval velocity model is subsequently updated by reflection tomography until the RMOs on the CDP gathers are flattened.

Figure 4.2. Simplified block diagram of the depth imaging workflow. Red boxes indicate modifications where our in-house curvelet-based denoising scheme was applied.

We used a standard Kirchhoff PreSDM algorithm and grid-based tomography for velocity model building. The crucial part of the latter process was the performance of the autopicker used to select RMOs for inversion, which can be enhanced when proper conditioning is applied to the input gathers. Figure 4.2 presents a block diagram of the general depth imaging workflow used for this dataset.

4.4.1 Initial velocity model

The initial interval velocity model (Vint) used in the initial PreSDM is typically derived from the available RMS velocity models (Vrms) after some smoothing and resampling. As mentioned before, the input data were reprocessed up to PreSTM; however, the resulting Vrms models did not tie at line intersections. In order to start imaging from a consistent set of velocity models, we gathered all 51 Vrms models from the 2012 reprocessed dataset. After uniform resampling and truncation to 5 s, the velocity models were interpolated on a 3D grid (cell size 500 m × 500 m × 100 ms) with some additional smoothing. New Vrms models for each transect were extracted from the smoothed velocity cube and subsequently converted to Vint. Such an approach guarantees that the initial velocity models tie at the transects' intersections.
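The final Vrms-to-Vint step relies on the standard Dix equation, Vint,n = sqrt((Vrms,n² tn − Vrms,n−1² tn−1)/(tn − tn−1)). Below is a minimal numpy sketch of that conversion (the gridding and smoothing of the 51 models are not reproduced here; the function name is illustrative):

```python
import numpy as np

def dix(vrms, t):
    """Dix conversion: interval velocities from RMS velocities.

    vrms : RMS velocities [m/s] picked at two-way times t [s].
    Returns the interval velocity of each layer between consecutive times:
    Vint_n = sqrt((Vrms_n^2 * t_n - Vrms_{n-1}^2 * t_{n-1}) / (t_n - t_{n-1}))
    """
    vrms = np.asarray(vrms, dtype=float)
    t = np.asarray(t, dtype=float)
    v2t = vrms**2 * t
    vint2 = np.diff(v2t) / np.diff(t)
    # noisy Vrms picks can make vint2 slightly negative; clip before sqrt
    return np.sqrt(np.maximum(vint2, 0.0))
```

For a constant Vrms profile the interval velocity is the same constant, which provides a quick sanity check of any implementation.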

4.4.2 Gather conditioning

Since the depth imaging results depend directly on the velocity model, carefully and consistently picked RMOs are essential to the whole process. Because of the low fold resulting from the acquisition parameters, the relatively low SNR, the presence of residual multiples and the limited offsets, much effort was devoted to developing a workflow that conditions the migrated gathers before running the RMO picking procedure. Proper conditioning allows obtaining consistent and stable RMO picks.

Figure 4.3. Block diagram of data conditioning for moveout picking. Items in red boxes are related to curvelet-based denoising.


Different approaches were tested, including F-X deconvolution, bandpass filtering, trace mixing, creating supergathers and curvelet denoising. Optimal noise removal and signal preservation were obtained by combining all of these methods in the order described in Figure 4.3. The crucial part of this process, though, was the application of curvelet-based coherency filtering. We apply the approach presented in the previous chapters and tested on 2D and 3D post-stack seismic data (Górszczyk et al., 2014; Górszczyk et al., 2015). Here this method is extended and tuned to work in the pre-stack domain. We applied our 2D DCT-based conditioning algorithm to the pre-stack gathers after the initial PreSDM. Since conditioning tests performed with data organized in CDP gathers resulted in insufficient coherency of the moveouts (mostly due to the low fold, the relatively small number of traces per CDP gather and the low signal-to-noise ratio), a different approach was developed. We sorted the 2D data into common-offset sections and considered them as a pseudo-3D volume representing a CDP × offset × depth cube (Figure 4.4a). First, we processed the common-offset sections to boost coherent events. In the next step, random energy was attenuated by running the algorithm on the depth slices. An example of a filtered volume after curvelet denoising is shown in Figure 4.4b. The appropriate separation of the curvelet coefficients representing the signal of interest and the noise was the crucial and most difficult point of the described process. While filtering common-offset sections, we have to take into account the tendency of the signal frequencies to decrease as offset increases. Processing depth slices might be even more challenging, since the signal exhibits variable shapes in this plane, which might be difficult to capture by a finite set of curvelet scales.
However, appropriate thresholding, executed with respect to the variable frequency characteristics of the signal (scale-adaptive thresholding), allows its optimal separation and use in further processing.

Figure 4.4. Data from line WA04WC04 displayed as a pseudo-3D volume representing the CDP × offset × depth cube: (a) raw data and (b) data after curvelet denoising.

Curvelet-based conditioning was compared with a more traditional random-noise attenuation filter (F-X deconvolution; see Figure 4.2 for the workflow). As demonstrated in Figure 4.5, reflection moveouts are more continuous in the curvelet-conditioned gathers, so we can expect the automatic RMO picking to perform better.

Figure 4.5. Sample CDP gathers from line WA04WC04 after (a) conventional and (b) curvelet-based conditioning. Note the enhancement of the hyperbolic moveouts between 4 and 7 km depth.

Picking on the pre-stack gathers after application of our conditioning algorithm (Figure 4.6b) revealed high consistency of the picked moveouts. In contrast, picking on the gathers after traditional conditioning (Figure 4.6a) produced a vast number of bad picks, i.e. those with conflicting moveouts. With such poorly defined and mostly random picks, the velocity model computed by tomography is likely to be wrong. The range of this inconsistency is shown in the so-called tomographic quality control (QC) display, where the RMO values are mapped along the grid used for picking into the respective percentages of velocity change (%ΔV). The improvement of the picks obtained with curvelet-based conditioning (Figure 4.6b) is reflected by more regular velocity changes (Figure 4.7b) compared with the QC of picks produced using standard conditioning (Figure 4.7a).

Figure 4.6. RMO picks (blue - speed up, red - slow down) produced on the CDP gathers from line WA04WC04 after (a) conventional and (b) curvelet-based conditioning. Seed points for RMO picking were equispaced in depth every 200 m.
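The scale-adaptive thresholding described above can be sketched with any multiscale transform. The snippet below uses an orthonormal 2D Haar transform as a lightweight stand-in for the DCT: in practice the CurveLab coefficients, indexed by scale and angle, would be thresholded with the same logic, with per-scale (and per-angle) thresholds tuned to the data. All names and parameter values here are illustrative, not the production implementation.

```python
import numpy as np

def haar_fwd(a):
    """One level of the orthonormal 2D Haar transform (even-sized array)."""
    a00, a10 = a[0::2, 0::2], a[1::2, 0::2]
    a01, a11 = a[0::2, 1::2], a[1::2, 1::2]
    ll = (a00 + a10 + a01 + a11) / 2.0   # approximation
    lh = (a00 + a10 - a01 - a11) / 2.0   # horizontal detail
    hl = (a00 - a10 + a01 - a11) / 2.0   # vertical detail
    hh = (a00 - a10 - a01 + a11) / 2.0   # diagonal detail
    return ll, (lh, hl, hh)

def haar_inv(ll, det):
    """Inverse of haar_fwd."""
    lh, hl, hh = det
    a = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    a[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    a[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    a[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    a[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return a

def denoise_section(img, nlevels=3, k=3.0):
    """Hard thresholding of multiscale coefficients.

    k may be a scalar or a per-scale sequence of threshold multipliers,
    mimicking thresholds adjusted according to scale."""
    a = np.asarray(img, dtype=float)
    ks = np.broadcast_to(np.asarray(k, dtype=float), (nlevels,))
    details = []
    for _ in range(nlevels):
        a, det = haar_fwd(a)
        details.append(det)
    # noise level estimated from the finest diagonal band (MAD estimator)
    sigma = np.median(np.abs(details[0][2])) / 0.6745
    for lev in reversed(range(nlevels)):
        thr = ks[lev] * sigma
        det = tuple(np.where(np.abs(d) > thr, d, 0.0) for d in details[lev])
        a = haar_inv(a, det)
    return a
```

Because the transform is invertible, a zero threshold returns the input unchanged, which makes the filter easy to verify before tuning the per-scale thresholds on real gathers.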

Figure 4.7. Tomographic QC display obtained from the RMOs picked on the CDP gathers after (a) conventional and (b) curvelet-based conditioning. Coloured cells show how the respective picks on the gathers map into velocity changes on the depth grid (blue - speed up, red - slow down).

4.4.3 Additional statistical QC of the picked RMOs

In our approach, moveout picking was performed using seed points placed along constant-depth horizons spaced every 200 m. This method does not require additional picking of geological interfaces (as in layer-based tomography), which makes the whole process more automatic. Moreover, we were able to obtain a more adequate distribution of picks at all depths and within areas containing even weak reflectors (which usually are not picked as interfaces). However, with such a broad collection of picked moveouts, it was reasonable to filter out possibly bad picks.

Figure 4.8. Results of an additional statistical QC of the RMO picks. (a) Accepted (blue) and removed (red) RMO picks on the CDP gathers and (b) tomographic QC display obtained from the RMOs picked on the CDP gathers after curvelet-based conditioning and additional statistical pick removal (blue - speed up, red - slow down). Compare with Figure 4.7b.

To improve the regularity of the tomography input, we additionally applied a QC of the picks based on semblance analysis and statistical measures of velocity change (i.e. standard deviations) computed within the neighbourhood of each pick. Such QC might not be necessary when processing conventional data acquired according to current acquisition standards. However, when dealing with vintage data, it was important to develop and implement the aforementioned QC algorithm so as to remove the maximum number of bad picks while preserving the good ones. The filtered picks were used as input to tomography. Figure 4.8a presents the results of the additional pick removal. Figure 4.8b shows the tomographic QC of the picks obtained on the pre-stack gathers after curvelet conditioning and additional statistical pick removal (compare with Figure 4.7b). These RMO picks are more consistent, and most of the bad moveouts are removed. After performing the described QC, we observed an improvement of the PreSDM results, especially when inspecting moveout flattening on the migrated gathers.
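The statistical part of this pick filtering can be sketched as a simple neighbourhood outlier test: compare the velocity change implied by each pick with the statistics of nearby picks and reject it if it deviates too strongly. The function below is an illustrative reimplementation of the idea; the names, the neighbourhood radius and the rejection rule are assumptions, not the exact production code.

```python
import numpy as np

def qc_picks(dv, x, z, radius=1000.0, nsigma=2.0):
    """Reject RMO picks whose implied velocity change deviates from the
    statistics of neighbouring picks.

    dv   : relative velocity change implied by each pick (e.g. %dV / 100)
    x, z : pick coordinates [m] (CDP position and depth)
    Returns a boolean mask of picks to keep."""
    dv = np.asarray(dv, dtype=float)
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    keep = np.ones(dv.size, dtype=bool)
    for i in range(dv.size):
        near = np.hypot(x - x[i], z - z[i]) < radius
        local = dv[near]
        if local.size < 3:
            continue  # too few neighbours to judge; keep the pick
        med, std = np.median(local), local.std()
        if std > 0.0 and abs(dv[i] - med) > nsigma * std:
            keep[i] = False  # outlier with respect to its neighbourhood
    return keep
```

A pick that agrees with its neighbours passes, an isolated pick is kept by default, and only picks that conflict with a well-populated neighbourhood are removed, which matches the goal of discarding the maximum number of bad picks while preserving the good ones.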

4.4.4 Tomography and final PreSDM

Two iterations of grid tomography were performed with adjusted grid size, maximum velocity change and smoothing parameters. Each iteration of tomography was followed by the gather conditioning described in Section 4.4.2 and the RMO pick filtering described in Section 4.4.3. We noted that although the overall migrated stacks differ mainly in details, the velocity field obtained from the RMOs picked on the curvelet-conditioned gathers seems to better follow the geological structures (compare Figure 4.9a and b). After obtaining the final sediment velocity model through tomographic updates, we ran a salt-flood procedure in order to determine the base-salt reflection: the final velocity model was flooded with a constant velocity below the base-salt reflection. After the final PreSDM run, migrated gathers were subjected to RMO picking and correction (RMOC). The final steps after RMOC involved muting and pre-stack noise attenuation. Again, curvelet-based denoising was used for gather post-processing. Although the stacking procedure attenuates much of the random noise visible in the gathers, we concluded that better continuity of the structures is obtained in the final image when mild curvelet conditioning is applied to the gathers before stacking, in a similar way as for the gather conditioning used for RMO picking.

Figure 4.9. Part of the PreSDM stacks for line WA04WC04 overlaid with the velocity perturbations from tomographic inversions run with the RMO picks obtained on gathers after (a) conventional, (b) curvelet-based conditioning followed by the statistical QC.

4.5 Discussion

Current industry practice in velocity model building using ray-based methods is the massively automatic determination of RMO picks, without much attention paid to gather conditioning. Picks determined with lower quality/confidence are characterized by lower coherency (typically expressed as semblance). These picks can be eliminated based on their semblance values (as in the statistical QC performed in our approach) and/or weighted according to their semblance values when incorporated into the tomographic equations. The effect of noise in the data is then mitigated and does not directly affect tomographic velocity model building (Jones, 2010). However, eliminating low-quality picks may result in gaps in the ray coverage and therefore leave parts of the model unconstrained (i.e. strongly dependent on the starting model). This seems to be particularly important in grid-based or hybrid tomography, where the density of the picks and the data quality should be sufficiently high for the continuity of reflectors to be captured automatically. We advocate that in the case of legacy data, characterized by low fold and often poor data quality, gather conditioning before RMO picking is key to successful velocity model building. In our case, equal-quality picks were occasionally characterized by opposite moveouts. Even if we input them into the tomographic solution with a lower weight, they will affect the final velocity inversion (by cancelling the velocity changes related to the opposite signs of the moveouts). If we set the semblance threshold high enough to eliminate those picks completely, the model will not be properly sampled. Therefore, our curvelet-based gather conditioning helped to resolve both aspects: (I) moveout consistency and (II) quantity of the picks. First, we were able to pick fewer conflicting moveouts and to preserve the most consistent ones through further statistical refinement. Second, the quantity of picks increases when picking curvelet-conditioned gathers as compared to the conventional approach (Figure 4.6).
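The trade-off between down-weighting and eliminating conflicting picks can be illustrated with a one-parameter weighted least-squares toy problem; all numbers and names below are illustrative, not taken from the actual tomographic system.

```python
import numpy as np

def wls_update(dv, w):
    """Weighted least-squares estimate of a single velocity perturbation
    from RMO-derived relative velocity changes dv with weights w."""
    dv = np.asarray(dv, dtype=float)
    w = np.asarray(w, dtype=float)
    if w.sum() == 0.0:
        return None  # every pick eliminated: this cell is unconstrained
    return float((w * dv).sum() / w.sum())

consistent = [0.02, 0.021, 0.019]   # picks agreeing on a ~+2% update
conflicting = [0.05, -0.05]         # equal-quality, opposite moveouts

# Down-weighting: the conflicting pair cancels but still dilutes the update.
est_weighted = wls_update(consistent + conflicting, [1, 1, 1, 0.5, 0.5])
# Elimination: a clean estimate here, but a cell sampled only by the
# conflicting picks would be left unconstrained.
est_clean = wls_update(consistent, [1, 1, 1])
unconstrained = wls_update(conflicting, [0, 0])
```

The toy problem reproduces both failure modes discussed above: down-weighted conflicting picks cancel each other and pull the estimate toward zero, while eliminating them entirely can leave a model cell with no constraint at all.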
Figure 4.10 shows the final migrated stacks along the two parallel ∼50-km-long transects (KC05KF05 and WA04WC04) crossing two major salt structures: the Kłodawa salt diapir (KSD) and the Ponętów-Wartkowice structure (PWS). Some smaller-scale salt structures (i.e. the Uniejów and Turek salt pillows) are also well imaged. The distinction between the two tectono-stratigraphic domains located on either side of the KSD (see Figure 4.1) is also visible in the velocity field, i.e. the velocities of the Triassic section are much higher in the area located to the west of the KSD than in the same stratigraphy to the east of the KSD. Both flanks of the PWS show consistently higher velocities (especially the western one) compared with the less deformed surroundings. The top of the PWS is characterized by lower velocities in the Lower Cretaceous/Upper Jurassic section, which can be explained by faulting and fracturing. The interpretation of the KSD, made during the salt-flood procedure, led to slightly different shapes of this structure on the two lines: on line WA04WC04 (Figure 4.10b) the KSD appears narrower and more inclined, whereas on line KC05KF05 (Figure 4.10a) it is more symmetrical.

Figure 4.10. Final depth-migrated stacks along line KC05KF05 (a) and line WA04WC04 (b) (see location in Figure 4.1) overlaid with the final velocity models. Note the vertical exaggeration. Acronyms denote the following salt structures: KSD - Kłodawa salt diapir, USP - Uniejów salt pillow, PWS - Ponętów–Wartkowice structure, and TSP - Turek salt pillow. Main stratigraphic intervals are also indicated.

Salt structures (diapirs and pillows) were imaged within the limitations imposed by ray-tracing-based migration methods. These structures are not imaged significantly better in depth than in the results of the 2012 PreSTM reprocessing, which might be partially explained by the relatively short recording aperture. However, the data are brought directly into the depth domain for interpretation and, more importantly, the velocity-depth models tie at all transect intersections (which was not the case for the PreSTM Vrms models). We anticipate further tests of salt-structure imaging by running reverse time migration (RTM) with the final velocity models from our ray-based depth imaging workflow. However, the limited offsets of the data (up to 3600 m) may be prohibitive for improving the imaging of the steeply dipping diapir flanks.

4.6 Conclusions

Application of Kirchhoff PreSDM to vintage seismic data (acquired in the 1970s and 1980s) proved effective in obtaining a properly positioned structural image, which is especially important where there is a strong overprint of salt tectonics, as is the case in the study area in Central Poland. Taking into account the limitations associated with the low fold and SNR, the obtained results are satisfactory, at least down to depths comparable with the maximum offset (3000-3500 m). Despite the low fold and the acquisition constraints of the reprocessed vintage data, velocity model building and depth imaging performed well. Superior results are obtained when proper conditioning of the gathers is carried out before running the autopicker for tomography. Our 2D DCT-based conditioning algorithm runs in a two-step mode, (i) on the common-offset sections and (ii) on the depth slices, improving the performance of the autopicker and thus providing more reliable input to grid tomography. Additionally, in the case of legacy data, such conditioning acts as trace regularization. The QC of the picks before running tomographic model updates also plays an important role in a production workflow. When grid-based reflection tomography is applied to vintage data, it is strongly recommended to devote time to proper data conditioning aimed at improving signal coherency before running the RMO autopicker.


Acknowledgements This work has been funded by the Polish National Centre for Research and Development within the Blue Gas project (No. BG1/JURASHALE/13). CURVELAB (www.curvelab.org) functions were used for calculating the DCT. Migration was run using TSUNAMI software under license from Tsunami Dev. Inc. Thanks to Strzelecki Energia Sp. z o. o. for the data used in this work and their contribution to the project (Marta Mulińska and Tomasz Rosowski). Discussions with Piotr Krzywiec (Institute of Geological Sciences PAS) are also gratefully acknowledged.

References

Baykulov, M., Brink, H.-J., Gajewski, D., Yoon, M.-K., 2009. Revisiting the structural setting of the Glueckstadt Graben salt stock family, North German Basin. Tectonophysics 470 (1-2), 162–172.

Berkovitch, A., Belfer, I., Landa, E., 2008. Multifocusing as a method of improving subsurface imaging. The Leading Edge 27 (2), 250–256.

Duveneck, E., 2004. Velocity model estimation with data-derived wavefront attributes. GEOPHYSICS 69 (1), 265–274.

Etgen, J., Gray, S. H., Zhang, Y., 2009. An overview of depth imaging in exploration geophysics. GEOPHYSICS 74 (6), WCA5–WCA17.

Górszczyk, A., Adamczyk, A., Malinowski, M., 2014. Application of curvelet denoising to 2D and 3D seismic data — Practical considerations. Journal of Applied Geophysics 105, 78–94.

Górszczyk, A., Malinowski, M., Bellefleur, G., 2015. Enhancing 3D post-stack seismic data acquired in hardrock environment using 2D curvelet transform. Geophysical Prospecting 63 (4), 903–918.

Jones, I. F., 2010. An Introduction to: Velocity Model Building. EAGE Publications, ISBN 9073781841.

Krzywiec, P., 2004. Triassic evolution of the Kłodawa salt structure: Basement-controlled salt tectonics within the Mid-Polish Trough (Central Poland). Geological Quarterly 48 (2), 123–134.

Krzywiec, P., 2012a. Evolution of Triassic and Jurassic shales in vicinity of the Kłodawa salt diapir – results of seismic data interpretation. Unpublished Report.

Krzywiec, P., 2012b. Mesozoic and Cenozoic evolution of salt structures within the Polish basin: An overview. Geological Society, London, Special Publications 363 (1), 381–394.

Mann, J., Jäger, R., Müller, T., Höcht, G., Hubral, P., 1999. Common-reflection-surface stack — a real data example. Journal of Applied Geophysics 42 (3-4), 301–318.

Pharaoh, T. C., Winchester, J. A., Verniers, J., Lassen, A., Seghedi, A., 2006. The Western Accretionary Margin of the East European Craton: an overview. Geological Society, London, Memoirs 32 (1), 291–311.

Quirk, D. G., Pilcher, R. S., 2012. Flip-flop salt tectonics. Geological Society, London, Special Publications 363 (1), 245–264.

Woodward, M. J., Nichols, D., Zdraveva, O., Whitfield, P., Johns, T., 2008. A decade of tomography. GEOPHYSICS 73 (5), VE5–VE11.


Chapter 5 Discrete curvelet transform as a versatile tool for pre-stack seismic data enhancement

5.1 Introduction

In the previous chapters, dedicated applications of the DCT were reviewed and presented. Each of them was specifically designed to work with a certain data type, including 3D hardrock post-stack data and 2D vintage pre-stack data. In both cases, the quality of the final products relied strongly on the choice of the DCT as a key processing step allowing a satisfactory seismic image to be obtained. One can be left with the impression that the overall effort required to apply the DCT to different types of seismic data is relatively high, perhaps because a deep understanding of how the time-space/depth-space domain maps into the curvelet domain is not straightforward and demands some empirical tests. Hence, one could conclude that it is not worth implementing this method in the standard processing sequence for a single dataset. However, the ultimate goal here is not only to present final seismic images but, more importantly, to promote the methodology that stands behind them. Therefore, to further attract potential users of the DCT, a selection of DCT applications published in the form of peer-reviewed expanded abstracts is now presented. This should convince the reader that, once the nature of the DCT is understood, it is easy to apply this powerful tool, which is able to work with any type of seismic data at any stage of processing. The first two applications are dedicated to hardrock seismic exploration problems: (i) ground-roll attenuation on the radial component of 3D pre-stack data and (ii) automation of the velocity model building workflow for pre-stack time migration (PreSTM) (Górszczyk et al., 2016a). Subsequently, the problem of denoising microseismic data with a joint multichannel convolution filter (MCCF) and DCT (Trojanowski et al., 2016) is presented. The last application tackles the problem of the coherency of OBS data and its influence on velocity model building using first-arrival tomography (FAT) and FWI (Górszczyk et al., 2016b).

∗ Chapter based on: Górszczyk, A., Malinowski, M., Bellefleur, G., 2016a. Applications of Curvelet Transform in Hardrock Seismic Exploration. In: EAGE/DGG Workshop on Deep Mineral Exploration. EAGE Expanded Abstracts, 5 pp., doi: 10.3997/2214-4609.201600040; Górszczyk, A., Malinowski, M., Operto, S., 2016b. Crustal-scale Imaging from Ultra-long Offset Node Data by Full Waveform Inversion - How to Do It Right? In: 78th EAGE Conference and Exhibition 2016. EAGE Expanded Abstracts, 5 pp., doi: 10.3997/2214-4609.201601190; Trojanowski, J., Górszczyk, A., Eisner, L., 2016. A multichannel convolution filter for correlated noise: Microseismic data application. In: SEG Technical Program Expanded Abstracts 2016. Society of Exploration Geophysicists, 5 pp., doi: 10.1190/segam2016-13453668.1.

5.2

Solutions for hardrock seismic exploration

As mentioned in Chapter 3, seismic data acquired in hardrock environments are characterized by low SNR and corrupted by a significant amount of scattered energy. Therefore, interpretation of discontinuous events in the final seismic image is hampered. Chapter 3, however, was dedicated to post-stack data. Since the stacking operation acts as a strong coherency filter, we should expect gathers before stacking to exhibit even less signal of interest. As a consequence, standard processing methods can be insufficient for separating signal from noise. Nevertheless, as demonstrated in the next two sections, the DCT is a robust pre-stack coherency filtering tool, even when processing challenging datasets.

Figure 5.1. Location map of the 3D-3C Lalor seismic survey (adapted from Bellefleur et al., 2015).

The Lalor Lake 3D survey (see Figure 5.1) comprises 908 shot points and 2685 receiver stations (Bellefleur et al., 2015). A total of 16 receiver lines were oriented SW-NE, almost parallel to the dip direction of the ore zones and footwall rocks near the deposit. The 15 shot lines were generally orthogonal to the receiver lines, with many shot points located northeast of the deposit to provide sufficient ore-zone illumination from the downdip direction. Shot and receiver lines locally deviated from planned locations to adjust for difficult terrain (e.g. steep hills, cliffs). Energy sources were 0.5 kg of explosives loaded in 5 m deep holes. Digital multi-component accelerometers were kept alive for the entire survey and provided a total of 8055 traces per shot gather. The cold temperatures during the acquisition (below -20°C) resulted in solidly frozen near-surface conditions, which ensured excellent ground-to-geophone coupling.

5.2.1

Ground-roll attenuation

Figures 5.2a,b present every second receiver line from two 3D shot gathers, located in the middle and at the edge of the acquisition grid, respectively. Both are contaminated by strong ground-roll, whose dip and frequency characteristics differ quite significantly depending on the spatial position of the shot relative to the receiver line. However, taking advantage of the properties of the DCT, we are able to localize the coefficients representing those events and remove them in the curvelet domain.

Figure 5.2. Example of radial-component shot gathers contaminated with the ground-roll.

In order to protect the part of the data which is not affected by the ground-roll, we additionally window each shot gather before filtering. An example of the time window for one of the gathers is presented in Figure 5.3. Red and blue shaded surfaces delineate the top and bottom of the applied window. The volume extracted by such a 3D window corresponds to the area between the dashed green curves in Figures 5.2a,b. Thanks to the windowing, the majority of the data is excluded from filtering, so no artefacts or signal damage are introduced outside the window. The time window limits W_{s,r,t} used here are derived as a simple linear function of the source-receiver offset o_{s,r} and an empirically fitted velocity v as follows:

W_{s,r,t} = o_{s,r} / v + l,    (5.1)

where the velocity v controls the dip of the window and l is an applied time shift.
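As an illustration, the window of Eq. (5.1) can be evaluated per trace and turned into a boolean mask. The function and variable names below are hypothetical sketches, not the thesis code; the default parameter values follow those quoted later for the processed dataset, assuming offsets in metres and times in seconds.

```python
import numpy as np

def window_limits(offsets, v, l):
    """Eq. (5.1): time limit W = offset / v + l for each source-receiver offset."""
    return offsets / v + l

def ground_roll_mask(offsets, t, v_upper=3000.0, v_lower=2500.0,
                     l_upper=0.0, l_lower=0.2):
    """Boolean mask selecting samples between the upper and lower window surfaces."""
    w_up = window_limits(offsets, v_upper, l_upper)  # shallow (faster) limit
    w_lo = window_limits(offsets, v_lower, l_lower)  # deep (slower) limit
    # broadcast: one row per trace, one column per time sample
    return (t[None, :] >= w_up[:, None]) & (t[None, :] <= w_lo[:, None])

offsets = np.array([0.0, 1000.0, 2000.0])  # source-receiver offsets, m
t = np.arange(500) * 0.004                 # time axis, s (4 ms sampling)
mask = ground_roll_mask(offsets, t)        # only samples inside the window are filtered
```

Only the samples flagged by the mask would then be passed to the curvelet filter, leaving the rest of the gather untouched.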

Figure 5.3. Perspective view of the two surfaces corresponding to the upper and lower limits of the 3D window used to extract the portion of the data containing the ground-roll.

Figure 5.4. Shot gathers from Figure 5.2 after application of curvelet filtering.


For the whole processed dataset, the upper and lower limits of the window were estimated using vu = 3000 m/s and vl = 2500 m/s with the corresponding time shifts lu = 0 s and ll = 0.2 s, respectively. Unlike the scenario in which random noise is attenuated, the aim here is to remove strong coherent events. Hence, to separate them from the rest of the data, one should remove high-value coefficients of certain dips in the curvelet domain, following the approach presented in Chapter 3, Section 3.2.3. Results of this approach are presented in Figures 5.4a,b. Its effectiveness, compared with the gathers in Figure 5.2, is striking. Although some weak residuals left after filtering could be pointed out, they are negligible given the amount of removed ground-roll. It is worth stressing that both the windowing function and the thresholding of curvelet coefficients are designed in an automatic manner, to work with all gathers within a given dataset. Therefore, it is possible to process large data volumes with minimal human supervision. According to Dr. Gilles Bellefleur, in this particular case curvelet-based filtering significantly outperformed classical methods such as F-K filtering applied to remove the ground-roll.
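The dip-selective removal of strong coefficients relies here on a true curvelet transform. As a self-contained, runnable stand-in, the same principle — zeroing high-amplitude transform coefficients within a chosen dip range — can be sketched with a 2D FFT wedge mask; all names and thresholds below are illustrative, not the thesis implementation:

```python
import numpy as np

def suppress_dip_wedge(section, dip_min, dip_max, amp_factor=3.0):
    """Zero strong 2D-FFT coefficients whose dip falls inside [dip_min, dip_max].

    A simplified stand-in for dip-selective curvelet thresholding: in the
    curvelet domain the "wedge" would be a range of angular wedges at each
    scale, here it is a range of dip angles (radians) in the 2D spectrum.
    """
    F = np.fft.fft2(section)
    kt = np.fft.fftfreq(section.shape[0])[:, None]  # "time" wavenumber axis
    kx = np.fft.fftfreq(section.shape[1])[None, :]  # spatial wavenumber axis
    dip = np.arctan(kt / np.where(kx == 0.0, 1e-12, kx))
    in_wedge = (np.abs(dip) >= dip_min) & (np.abs(dip) <= dip_max)
    strong = np.abs(F) > amp_factor * np.median(np.abs(F))
    F[in_wedge & strong] = 0.0  # remove only high-value coefficients of those dips
    return np.real(np.fft.ifft2(F))
```

The amplitude criterion (`amp_factor` times the median coefficient magnitude) mirrors the idea of targeting only the strong coherent events while leaving weak signal of similar dip untouched.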

5.2.2

Velocity model building

Here, we use curvelets to enhance the coherency of the CDP gathers, which are used for moveout picking and velocity model building for PreSTM. We start by forming pseudo-3D cubes from CDP gathers migrated with the initial velocity model (obtained by manual velocity picking in this case). We form a separate cube for every inline (dimensions of the cube: crossline (or CDP) × offset × time). In order to boost coherent energy, we perform filtering in two passes, first filtering offset × time sections and then time slices of each pseudo-cube (Górszczyk et al., 2015). Examples of CDP gathers before and after filtering are presented in Figures 5.5a,b. The SNR improvement after filtering is evident. Due to the removal of a large volume of scattered energy (Figure 5.5c), we observe numerous clear coherent events at all time intervals.


Figure 5.5. CDP gathers (a) before and (b) after curvelet filtering, without NMO correction. (c) Difference between (a) and (b).

The filtered data were further used in standard semblance-based moveout picking and velocity analysis. The crucial advantage of this step is that, because of the high quality of the data after filtering, we can perform fully automatic picking without decimating gathers along inlines and crosslines, as is typically done during manual picking. As a result, we are able to derive the velocity model much faster and from an incomparably higher number of picks. Automatic picking also eliminates the problem of pick consistency, since manual picking on decimated inlines can be subjective. Empirical tests showed that velocity analysis without an initial a priori model, using only a constant velocity (6000 m/s) as a guide for picking, led to the same final results.
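The semblance measure driving such automatic picking can be sketched as follows. The gather, geometry and window parameters here are synthetic stand-ins, not the thesis data:

```python
import numpy as np

def semblance(gather, offsets, t0, velocities, dt, win=5):
    """NMO semblance at zero-offset time t0 for a set of trial velocities.

    gather: (n_traces, n_samples) CDP gather, offsets in m, dt in s.
    Returns one semblance value (0..1) per trial velocity.
    """
    n_traces, n_samples = gather.shape
    scores = []
    for v in velocities:
        # hyperbolic moveout: t(x) = sqrt(t0^2 + (x / v)^2)
        t_nmo = np.sqrt(t0 ** 2 + (offsets / v) ** 2)
        idx = np.clip(np.round(t_nmo / dt).astype(int), 0, n_samples - 1)
        num = den = 0.0
        for k in range(-win, win + 1):  # small window around the trajectory
            j = np.clip(idx + k, 0, n_samples - 1)
            a = gather[np.arange(n_traces), j]
            num += a.sum() ** 2
            den += (a ** 2).sum()
        scores.append(num / (n_traces * den + 1e-12))
    return np.array(scores)

# synthetic CDP gather: a single hyperbolic event at t0 = 0.5 s, v = 2000 m/s
dt, n_samples = 0.004, 500
offsets = np.array([0.0, 200.0, 400.0, 600.0])
gather = np.zeros((len(offsets), n_samples))
spike = np.round(np.sqrt(0.5 ** 2 + (offsets / 2000.0) ** 2) / dt).astype(int)
gather[np.arange(len(offsets)), spike] = 1.0
scores = semblance(gather, offsets, 0.5, np.array([1500.0, 2000.0, 2500.0]), dt)
# the correct trial velocity (2000 m/s) maximizes the semblance
```

An automatic picker then simply takes the velocity maximizing the semblance at each (t0, CDP) position, which is exactly the step that benefits from the improved pre-stack coherency.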


Figure 5.6. Example section migrated with (a) manually and (b) automatically derived velocity models.

Figure 5.6a presents a stacked section (inline 1114) from PreSTM obtained with the velocity model derived from manual picking performed for the entire volume (every 10th inline). The PreSTM velocity model was estimated using a range of velocities perturbed (in percent) from an initial model obtained from DMO. Figure 5.6b shows the same section migrated with the velocity model derived by our automatic workflow. Although there are no large, obvious differences in the structures imaged on the two sections, some important details are marked by dashed ellipses in both panels. In particular, shallow structures (down to 300 ms) are more strongly delineated in Figure 5.6b. The sharp event between 400-500 ms at crosslines 1220-1320 is significantly better focused. Also, some deeper events around 700 and 800 ms marked by the ellipses appear more continuous.

Figure 5.7. Sections from Figures 5.6a,b with the corresponding velocity fields (5700-6300 m/s) overlaid.

Additionally, the velocity models are overlaid on the seismic sections (Figure 5.7). We observe significantly slower velocities at larger depths in the automatically derived model (Figure 5.7b). The high velocities observed in Figure 5.7a are probably the effect of interpolation between sparse manual picks in this area.

5.3

Joint MCCF and DCT filtering for microseismic data denoising

Apart from random noise, correlated noise is also problematic for many kinds of measurements. In particular, it is an important issue for borehole and surface microseismic monitoring during hydraulic stimulation. The monitoring is carried out during high-level activity at the wellsite, when heavy equipment and pumps are working near the seismic receivers. Such activity generates surface waves which are often converted to borehole tube waves, resulting in correlated noise dominating the seismic recordings. It is a particular issue for surface microseismic measurements because of the low SNR of the recorded events. For large signal amplitudes, easily distinguishable from the background noise, there are several methods to enhance SNR by some form of signal identification and noise suppression. In the frequency-space domain, it can be F-X deconvolution (Canales, 1984), which identifies linear events and suppresses random noise. In the F-K domain it is possible to define a range of frequencies and wavenumbers to be preserved (Stewart and Schieck, 1989). A relatively new approach is based on the DCT (Candès et al., 2006). Other methods for coherent noise removal use beamforming (e.g. Özbek, 2000) or Wiener filters (e.g. Wang et al., 2009) based on correlation estimators suitable for low-frequency noise. A new approach to the suppression of surface waves was also proposed within the framework of interferometry (see Vasconcelos et al., 2008; Halliday et al., 2010). The study presented by Trojanowski et al. (2016) belongs to the same category, because it is based on correlated noise defined by a stationary part of the Green's function between two channels. This response function is estimated by deconvolving the signal recorded on one channel with the signal on another channel. To mitigate the influence of random noise, the filter computes average values of the Green's functions between the two channels and iteratively removes the correlated noise. Details of this conceptually new, simple multichannel convolution filter (MCCF) were presented by Trojanowski et al. (2016). Here, we present a comparison of the results obtained with MCCF- and DCT-based filtering, as well as with a combination of both methods. The processed waveforms come from a surface monitoring dataset with a microseismic event recorded during hydraulic fracturing. One line of the 11-arm, star-shaped surface array was processed. The analysed line was nearly 3400 m long and consisted of 150 geophones. The raw data contained a lot of high-amplitude noise which changes significantly from trace to trace. There were also many spikes, which could lower the performance of the MCCF. For this reason, the data were balanced and bandpass-filtered between 20-80 Hz to focus on the frequencies of microseismic events in surface datasets (Eisner et al., 2013). No moveout correction was applied. A preprocessed section containing a weak event is shown in Figure 5.8a. The event is hardly visible in the raw dataset because of the very low SNR. These preprocessed data were further filtered using DCT (Figure 5.8b), MCCF (Figure 5.8c) and MCCF+DCT (Figure 5.8d). The MCCF was applied to triplets of neighbouring receivers. The DCT filter was tuned to obtain the best possible results. Note that the DCT did not remove the high-amplitude noise with a dip similar to that of the event (Figure 5.8b). Consequently, although the SNR of the image in Figure 5.8b is higher than for the MCCF (Figure 5.8c), the improvement of the signal energy on the stack will be lower than for the MCCF because of the remaining residual coherent events. The best results in terms of both signal coherency and SNR were obtained for the DCT applied to the image already filtered using the MCCF.
The residuals shown in Figure 5.8e contain a lot of correlated noise, while in Figure 5.8f one can also observe a significant amount of scattered energy. In both cases there is no evidence of damage to the event, proving that the applied filters successfully separate the useful signal from the coherent and random noise. The filtered signal can be utilized by tomographic methods to build a velocity model or, if the velocity model is known, one can migrate such events or localize the position of their sources.
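The core deconvolution step behind the MCCF can be illustrated in a greatly simplified, single-pair, single-pass form (the actual filter of Trojanowski et al. (2016) averages Green's functions over neighbouring channel triplets and iterates); the water-level regularization parameter here is an assumption:

```python
import numpy as np

def remove_predictable(x, y, eps=1e-3):
    """Subtract from y the part predictable from x via a regularized
    inter-channel response (water-level deconvolution in the frequency domain)."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    water = eps * np.max(np.abs(X)) ** 2
    G = Y * np.conj(X) / (np.abs(X) ** 2 + water)  # response estimate G(f)
    predicted = np.fft.irfft(G * X, n=len(y))      # correlated-noise prediction
    return y - predicted

rng = np.random.default_rng(0)
noise = rng.standard_normal(1024)                  # noise shared by both channels
cleaned = remove_predictable(noise, noise.copy())
# the fully correlated part is almost entirely removed
```

Signal that is not coherent between the channels passes through the subtraction largely intact, which is why combining such a prediction-subtraction filter with the DCT attacks both the correlated and the scattered noise.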

5.4

Data conditioning for crustal-scale full waveform inversion

This final example presents how coherency filtering with the DCT can enhance the signal of crustal-scale OBS data that are subject to FAT and FWI.

Figure 5.8. (a) Raw surface microseismic data after band-pass filtering and amplitude balancing. Results of filtering (a) using: (b) DCT, (c) MCCF, (d) MCCF+DCT. (e) and (f) Residuals between (a) and the filtered images (c) and (d), respectively.

This kind of data is characterized by very long offsets (often more than 100 km) and, therefore, low SNR at longer source-receiver distances. Moreover, if the underlying geological structure is complex, multiple arrivals originating at different interfaces interfere. In particular, the presence of low-velocity zones (LVZs), described in detail by Park et al. (2010) and references therein, generates shadow zones in the OBS gathers, hampering interpretation of the first arrivals. Application of the DCT to the OBS gathers can significantly improve the quality of the input data used during FAT and the subsequent FWI.

Figure 5.9. (a) Geodynamic setting of the Nankai Trough. The solid red line represents the seismic profile of the TKY-21 experiment. (b) Zoomed view of the TKY-21 survey area, overlaid with the bathymetry variations. The black line and the dashed red line represent the shot profile and the receiver line, respectively. The white star represents the position of OBS 14 presented in Figure 5.10.

Górszczyk et al. (2016b) presented a multiscale approach to crustal-scale velocity model building utilizing FAT and frequency-domain FWI. The processed dataset was acquired in the geologically challenging environment of a subduction zone located in the eastern Nankai Trough, Japan (Figure 5.9). Figure 5.10a presents an example of an OBS gather before coherency filtering. One can notice that the signal becomes weaker and more difficult to track as the offset increases. It is worth mentioning that this OBS is located near the coastal end of the profile (see white star in Figure 5.9b); hence, wavefields recorded at significant offsets have propagated through the overriding plate and into the subducting oceanic plate, the top of which forms an LVZ. The heterogeneity of the geological setting manifests itself in the complex anatomy of the recorded wavefields. This is especially visible in the shadow zone indicated by the white-shaded area. The rationale behind coherency filtering of this dataset is twofold. Firstly, it helps to identify first arrivals during the FAT stage, and secondly, it minimizes the amount of

Figure 5.10. (a-b) OBS gather 14 (a) before and (b) after coherency filtering and deconvolution. The red and green lines represent the initial and final first-arrival traveltime picks, respectively. The white-shaded area highlights a complex portion of the data characterized by a shadow zone with a weak first arrival and a complex set of post-critical reflections and diffractions.

noise that will be introduced during the FWI. If, for a given survey, the numbers of sources and receivers are similar, then direct filtering of frequency maps (the approach presented in Chapter 2, Section 2.3.2) would be preferred. However, for crustal-scale acquisitions the number of OBSs is much smaller than the number of airgun shots (100 OBSs × 1404 shots in the case of this dataset). Hence, because of the lack of resolution in the receiver dimension, we conditioned the OBS gathers in the time-space domain. In crustal-scale applications, FAT is the most common method to build the initial velocity model for FWI. The starting model must satisfy the cycle-skipping criterion - namely, it must predict the recorded traveltimes with an error that does not exceed half the period of the lowest-frequency data used in the FWI. Pratt (2008) reformulates this criterion as the following inequality:

Δt/T < 1/(2Nλ),    (5.2)

where Δt/T is the relative traveltime error and Nλ is the number of propagated wavelengths. This condition can be quite challenging to fulfill with long-offset data, even if we manage to begin the inversion at a frequency as low as 1.5 Hz. For this frequency, more than 40 wavelengths are propagated for a maximum offset of 140 km and a mean velocity of 5 km/s, whereas this number reaches 238 for a frequency of 8.5 Hz (the maximum frequency used in this study).
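The wavelength counts quoted above follow directly from Nλ = (offset × f) / v; a quick numerical check:

```python
# number of propagated wavelengths: N_lambda = offset * f / v
offset = 140e3        # maximum offset, m
v_mean = 5e3          # mean velocity, m/s
n_low = offset * 1.5 / v_mean   # 1.5 Hz starting frequency
n_high = offset * 8.5 / v_mean  # 8.5 Hz maximum frequency
# admissible relative traveltime error of the starting model (Eq. 5.2)
err_low = 1.0 / (2.0 * n_low)   # about 1.2 % at 1.5 Hz
print(n_low, n_high)            # 42.0 238.0
```

Even at the lowest inverted frequency, the starting model must therefore predict traveltimes to roughly 1 % relative accuracy over the longest paths, which is why accurate first-break picking is so critical.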

Figure 5.11. Real part of the monochromatic data visualized in the shot-receiver coordinate system. The presented frequency is 1.5 Hz. The dataset is shown (a) before and (b) after coherency filtering. (c) The difference between (a) and (b), showing the removed noise and outliers.

Therefore, accurate first-break picking is crucial for correct reconstruction of the velocity model. In Figure 5.10, red curves denote the initially picked first arrivals, while green ones mark first breaks picked on the data after DCT filtering and deconvolution. The inset in Figure 5.10b clearly shows that the DCT filtering allowed us to recover and pick a very weak arrival within the shadow zone (green curve) that was initially corrupted by noise and untrackable. Consistent picking of wrong phases along a few OBS gathers would lead FAT to an erroneous initial model and, in consequence, to the cycle-skipping problem during the FWI step. The improvement of the data coherency can also be validated by inspecting the data in the frequency domain. For this purpose, the whole dataset is organized in the form of frequency maps, similarly to the example from Chapter 2, Section 2.3.2. Figure 5.11a presents the real part of the monochromatic data at 1.5 Hz. Comparing the data before and after filtering (Figures 5.11a,b, respectively), one can observe a significant improvement of the coherency. Also, the residuals in Figure 5.11c exhibit a mostly incoherent character, meaning that the DCT left the phase of the data (i.e. the kinematic information) unchanged.
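A monochromatic frequency map like that of Figure 5.11 can be extracted from a time-domain volume with a single-frequency discrete Fourier sum; the following sketch uses hypothetical array names, not the survey data:

```python
import numpy as np

def frequency_map(volume, dt, freq):
    """Complex monochromatic map: single-frequency DFT of every trace.

    volume: (n_receivers, n_shots, n_samples) time-domain data, dt in s.
    Returns an (n_receivers, n_shots) complex array whose real part is
    what is displayed in the shot-receiver coordinate system.
    """
    n = volume.shape[-1]
    kernel = np.exp(-2j * np.pi * freq * np.arange(n) * dt)  # DFT basis at freq
    return (volume * kernel).sum(axis=-1) * dt

# example: a pure 1.5 Hz cosine recorded on a 2 x 3 receiver-shot grid
dt, n = 0.004, 1000                      # 4 s records
t = np.arange(n) * dt
vol = np.cos(2 * np.pi * 1.5 * t) * np.ones((2, 3, 1))
fmap = frequency_map(vol, dt, 1.5)       # real part close to T/2 = 2.0 everywhere
```

Coherency of the filtered wavefield then shows up directly as smooth variations of such a map across the shot and receiver dimensions.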

5.5

Conclusions

We presented four applications of DCT-based filtering to seismic data denoising problems. These examples differ not only in terms of the purpose of the processing, but also in terms of fundamental data parameters, such as acquisition geometry and scale, number of filtered traces, frequency band, or type of removed noise. It is worth mentioning that all four types of data were characterized by low SNR. In each case, the DCT-based method provided superior results compared with standard filtering methods. We also showed that the DCT can be introduced at different stages of seismic data processing. The applicability of curvelets to filtering seismic data organized in different 2D and 3D collections is unlimited, provided that sufficient resolution (i.e. number of traces and time/depth samples) is maintained. Any type of threshold estimation can be introduced to separate signal and noise coefficients in the curvelet domain, which gives a high degree of freedom in optimizing the filter for a particular dataset. The microseismic data denoising example (Section 5.3) indicated a potential limitation of the DCT when removing coherent noise with a dip similar to that of the signal of interest. However, this problem was effectively overcome by the joint application of MCCF and DCT. The obtained results prove that the DCT has great potential to become a more common tool in the seismic processing community. The flexibility in both application and parametrization of DCT-based filters emerges naturally from the implementation of the DCT. Its complexity should be considered an advantage to explore, rather than an inconvenience requiring deeper understanding of the method. Indeed, it is this flexibility that allows the DCT to be considered a versatile tool for seismic data processing.


Acknowledgements Processing of the hardrock data has been realized in collaboration with Dr. Gilles Bellefleur from the Geological Survey of Canada. The 3C-3D Lalor Lake seismic data were acquired as part of phase 4 of the Targeted Geoscience Initiative (TGI-4). Development of the MCCF and processing of the microseismic data were supported by a grant from the Polish National Science Centre (decision no. DEC-2013/09/N/ST10/03773). Processing of the OBS data has been partially funded by the Polish National Science Centre grant no. 2011/03/D/ST10/05128 and by the IG PAS, The Leading National Research Centre (KNOW) in Earth Sciences 2014-2018. Data were provided by JAMSTEC.

References

Bellefleur, G., Schetselaar, E., White, D., Miah, K., Dueck, P., 2015. 3D seismic imaging of the Lalor volcanogenic massive sulphide deposit, Manitoba, Canada. Geophysical Prospecting 63 (4), 813–832.

Canales, L. L., 1984. Random noise reduction. In: SEG Technical Program Expanded Abstracts 1984. Society of Exploration Geophysicists, pp. 525–527.

Candès, E., Demanet, L., Donoho, D., Ying, L., 2006. Fast Discrete Curvelet Transforms. Multiscale Modeling & Simulation 5 (3), 861–899.

Eisner, L., Gei, D., Hallo, M., Opršal, I., Ali, M. Y., 2013. The peak frequency of direct waves for microseismic events. Geophysics 78 (6), A45–A49.

Górszczyk, A., Malinowski, M., Bellefleur, G., 2015. Enhancing 3D post-stack seismic data acquired in hardrock environment using 2D curvelet transform. Geophysical Prospecting 63 (4), 903–918.

Górszczyk, A., Malinowski, M., Bellefleur, G., 2016a. Applications of Curvelet Transform in Hardrock Seismic Exploration. In: EAGE/DGG Workshop on Deep Mineral Exploration. EAGE Expanded Abstracts, 5 pp., doi: 10.3997/2214-4609.201600040.

Górszczyk, A., Malinowski, M., Operto, S., 2016b. Crustal-scale Imaging from Ultra-long Offset Node Data by Full Waveform Inversion - How to Do It Right? In: 78th EAGE Conference and Exhibition 2016. EAGE Expanded Abstracts, 5 pp., doi: 10.3997/2214-4609.201601190.

Halliday, D. F., Curtis, A., Vermeer, P., Strobbia, C., Glushchenko, A., van Manen, D.-J., Robertsson, J. O., 2010. Interferometric ground-roll removal: Attenuation of scattered surface waves in single-sensor data. Geophysics 75 (2), SA15–SA25.

Özbek, A., 2000. Adaptive beamforming with generalized linear constraints. In: SEG Technical Program Expanded Abstracts 2000. pp. 2081–2084.

Park, J.-O., Fujie, G., Wijerathne, L., Hori, T., Kodaira, S., Fukao, Y., Moore, G. F., Bangs, N. L., Kuramoto, S., Taira, A., 2010. A low-velocity zone with weak reflectivity along the Nankai subduction zone. Geology 38 (3), 283–286.

Pratt, R. G., 2008. Waveform tomography - successes, cautionary tales, and future directions. In: 70th Annual EAGE Conference & Exhibition, Roma, Workshop WO11 - Full-Waveform Inversion: current status and perspectives.

Stewart, R. R., Schieck, D. G., 1989. 3-D F-K filtering. In: SEG Technical Program Expanded Abstracts 1989. Society of Exploration Geophysicists.

Trojanowski, J., Górszczyk, A., Eisner, L., 2016. A multichannel convolution filter for correlated noise: Microseismic data application. In: SEG Technical Program Expanded Abstracts 2016. Society of Exploration Geophysicists, 5 pp., doi: 10.1190/segam2016-13453668.1.

Vasconcelos, I., Gaiser, J., Calvert, A., Calderón-Macías, C., 2008. Retrieval and suppression of surface waves using interferometry by correlation and by deconvolution. In: SEG Technical Program Expanded Abstracts 2008. pp. 2566–2570.


Wang, J., Tilmann, F., White, R. S., Bordoni, P., 2009. Application of frequency-dependent multichannel Wiener filters to detect events in 2D three-component seismometer arrays. Geophysics 74 (6), V133–V141.

Conclusions

In this thesis I presented a methodology for seismic noise attenuation which utilizes the 2D DCT. The robustness of this approach relies, first, on the parametrization of the basis functions used to decompose the image, i.e. the curvelets. Apart from the scale and spatial-coordinate parameters (typical for 2D wavelets), curvelets are also defined by an angle, which makes them directional and better adapted to curvilinear objects. The oscillatory behaviour of curvelets matches well the characteristics of waveforms, providing a natural representation of seismic events. Moreover, their anisotropic shape supports the sparseness of the reconstruction, which in turn translates into convenient separation of signal and noise in the curvelet domain. The concept of performing this separation according to selected scales and angles is the second pillar of the demonstrated methodology. I showed that more complex weighting of the curvelet coefficients provides superior results compared to those obtained with global thresholding. Such an approach is robust not only for random noise attenuation, but also allows targeting certain coherent features which are to be removed from the data. I demonstrated a wide range of applications to both synthetic and real, 2D and 3D, pre-stack and post-stack datasets. The presented examples were characterized by different acquisition settings and purposes, and were contaminated with both random and coherent noise. Data obtained after curvelet-based processing were clearly better in terms of SNR, both qualitatively and quantitatively. I showed how the DCT outperforms routinely applied methods such as F-X deconvolution in terms of noise attenuation, while at the same time minimizing potential damage to the signal components. The interpretation of the post-stack
data after curvelet denoising was more feasible, especially in the case of 3D volumes from the hardrock environment. The corresponding noise attenuation of the pre-stack data, although more challenging because of locally weak events and significant variations of amplitude with offset, also proved the robustness of curvelets for coherency filtering. The applied processing allowed the extraction of the crucial information needed for correct velocity model building performed with different methods. The versatility of the presented approach encourages introducing it to the seismic processing community as an effective and conventional processing tool with the potential to address recurring seismic noise attenuation problems. This would increase the popularity of the method and simultaneously stimulate its development. I believe that there are a few possibilities to further extend the presented framework which could potentially improve curvelet-based noise attenuation. The first would be casting the estimation of the thresholds into an automatic optimization algorithm. This has the biggest potential to succeed for random noise suppression. On the other hand, when filtering out certain coherent features, the automation of threshold estimation becomes strongly case-dependent. Therefore, the processing would require either interpreter interaction or some sort of training sequence which would teach the algorithm how to recognize and remove the part of the coherent energy which is considered noise. Even in the case of what is referred to as "random" noise attenuation, one needs to be aware that, when processing real data, this randomness is in many cases not strictly fulfilled. The scattered energy which one wants to remove can be (to some extent) locally correlated between a few samples, hampering optimal, fully automatic coherency filtering.
On top of that, it is worth mentioning that even automatic optimization algorithms would require a certain number of iterations to converge to the optimal result. This, in turn, might be more time-consuming than a scenario where a person familiar with curvelet denoising processes the data. Nevertheless, for large datasets (e.g. containing numerous pre-stack gathers) affected by noise of similar characteristics, automatic threshold estimation might be appealing.


A second possibility to further develop the presented methodology is to extend it to the 3D DCT. From the adaptive-thresholding point of view, implementation of this extension is quite straightforward, since it simply increases the dimension of the space within which one searches for the curvelet coefficients representing signal. Its practical application for random noise attenuation is potentially feasible; however, the ability to track coherent events in 3D is limited, and one usually displays a 3D volume in the form of 2D cross-sections (inlines, crosslines or time slices), which partially justifies the application of the 2D DCT to 3D data volumes. Tuning the threshold parameters in the case of the 3D DCT would certainly be more laborious. This issue could perhaps be eliminated by some sort of automatic optimization algorithm, as mentioned before. However, from the implementation point of view, the redundancy of the coefficients produced by the 3D DCT is much larger than that of its 2D equivalent. For large, densely sampled data volumes, the amount of required computational power and memory can be prohibitive in practice. Therefore, before extending the approach presented in this thesis to the third dimension, one needs to consider lowering the redundancy of the 3D DCT. Finally, to further develop curvelet-based noise attenuation, one can consider how to overcome some inherent limitations. The example presented in Chapter 5, Section 5.3 revealed that curvelet denoising can face difficulties when filtering coherent noise of the same frequency characteristics and dip as the signal of interest. In the presented case, the combination of the MCCF and the DCT led to the optimal denoising results. The open question is whether the DCT has the potential to overcome this limitation alone, through some dedicated coefficient analysis, or whether it is better to apply it in sequence with another method. Perhaps the latter approach is more feasible.
Hence, in this regard, recalling the working hypothesis defined in the Introduction, the DCT is able to precisely represent any seismic section and provides high accuracy of signal and noise separation. However, under certain circumstances, signal enhancement can be clearly improved by combining curvelet denoising with another method. Therefore, although the DCT (with all its advantages) remains an outstanding tool for seismic noise attenuation, it seems that the pursuit of an ideal method providing a panacea for all denoising problems still has a long way to go.