Arab J Geosci (2013) 6:1509–1516 DOI 10.1007/s12517-011-0452-y

ORIGINAL PAPER

Inversion of residual gravity anomalies using neural network

Mansour A. Al-Garni

Received: 18 June 2011 / Accepted: 19 October 2011 / Published online: 22 November 2011 © Saudi Society for Geosciences 2011

Abstract

A new approach is presented to interpret residual gravity anomalies from simple geometrically shaped bodies such as the horizontal cylinder, the vertical cylinder, and the sphere. The approach is based on modular neural network (MNN) inversion for estimating the shape factor, the depth, and the amplitude coefficient, with the sigmoid function used as the activation function. The approach was first tested on synthetic data from different models using only one well-trained network; the parameter values estimated by the modular inversion are almost identical to the true parameters. A noise analysis was also carried out, and the inversion produces satisfactory results up to 10% white Gaussian noise. The reliability of the approach is demonstrated on two published real gravity field anomalies, one taken over a chromite deposit in Camaguey province, Cuba, and the other over a sulfide ore body, Noranda, Quebec, Canada. A comparable and acceptable agreement is obtained between the results derived by the MNN inversion method and those deduced by other interpretation methods. Furthermore, the depth obtained by the proposed technique is found to be very close to that obtained from drilling information.

Keywords: Neural network inversion; Modular algorithm; Gravity; Simple-shaped bodies

M. A. Al-Garni (*) Department of Geophysics, Faculty of Earth Sciences, King Abdulaziz University, Jeddah, Saudi Arabia e-mail: [email protected]

Introduction

Gravity data interpretation aims mainly to estimate the depth and location of the causative target. Gravity data interpretation is non-unique: different subsurface causative targets may yield the same gravity response (anomaly); however, a priori information about the geometry of the causative target may lead to a unique solution (Roy et al. 2000; Aboud et al. 2004). Numerous methods have been used to interpret residual gravity anomalies, among them Fourier transformation (Sharma and Geldart 1968), Euler deconvolution (Thompson 1982; Reid et al. 1990), the Mellin transform (Mohan et al. 1986; Babu et al. 1991), the Hilbert transform (Sundararajan et al. 1983a, b), the Hartley transform (Sundararajan and Rama Brahmam 1998), least-squares minimization (Gupta 1983; Salem et al. 2003), and the Walsh transform (Al-Garni 2008). In these methods, the geometry of the causative target is assumed, and the accuracy of the results depends on how close the assumed model is to the real structure. Some recent approaches have been developed to estimate the shape factor of the causative target of a gravity anomaly, among them the Walsh transform (Shaw and Agarwal 1990), the analytic signal (Nandi et al. 1997), nonlinear least-squares minimization (Abdelrahman and El-Araby 1993; Abdelrahman and Sharafeldin 1995; Abdelrahman et al. 2001), and the derivative of a numerical formula (Aboud et al. 2004). In this paper, modular neural network (MNN) inversion is used mainly to compute the depth and the shape factor of the causative target from a gravity anomaly. NNs can offer a unique solution, especially where the data are noisy, when a priori knowledge of a task is not available, or when an unknown nonlinearity between input and output may exist (Bhatt and Helle 2002; Al-Garni 2010). There are several


advantages that can make NNs superior to other methods (Masters 1993; Al-Garni 2009, 2010; El-Kaliouby and Al-Garni 2009), namely:

- No prior knowledge about the input/output is required for model development; the unknown mapping can be inferred from the data provided during network training.
- NNs respond correctly to new data that have not been used for model development.
- NNs can model linear and highly nonlinear input/output mappings. Zhang and Gupta (2000) showed that NNs are capable of forming an arbitrarily close approximation to any continuous nonlinear mapping.
- NNs can perform the inversion in nearly no time once they are well trained.
- NNs are generally robust with respect to input even if the data are chaotic in the mathematical sense, a behavior most other techniques cannot handle.
- NNs accept wide ranges of input starting models; on the contrary, conventional techniques require an initial starting model and can be trapped in a local minimum if that model is not close to the solution.
- NNs, which can be considered global search algorithms, produce satisfactory results even if the starting models are far from the solution.

Methodology

Neural networks

NNs can be considered universal approximators that can approximate any function in terms of its variables. Thus, they may contribute to finding solutions to a variety of geophysical applications (Macias et al. 2000; Poulton 2001; El-Kaliouby and Al-Garni 2009; Al-Garni 2009, 2010). NNs can map any continuous function to arbitrary accuracy (Yarger et al. 1978; Jang et al. 1997; Jain and Martin 1999; Al-Garni 2010). NN models can be more accurate than the polynomial regression models used for approximating functions, allowing two important things: more dimensions than look-up table models, and multiple outputs for a single model (El-Kaliouby and Al-Garni 2009). Generally, an NN is fed a training set of examples from which it learns to estimate the mapping function described by the example patterns. NN algorithms may be divided into two main groups: supervised (associative) learning and unsupervised (self-organization) learning. Supervised learning is based on desired outputs: during training, the NN tries to match its outputs to the desired values. In unsupervised learning,


the method is not given any target values, and the desired output of the network is unknown. During training, the network performs some kind of data compression, such as dimensionality reduction or clustering. Hence, the NN learns the distribution of patterns and classifies them, with similar patterns assigned to the same output cluster (Poulton 2001; El-Kaliouby and Al-Garni 2009; Al-Garni 2009, 2010).

A typical NN consists of a minimum of three layers: an input layer, a hidden layer, and an output layer (Fig. 1). Each layer consists of nodes, represented by circles, and the lines between the nodes indicate the flow of information from one node to the next (input to output). Each node is a single processing element (PE) that acts on data to generate a result, and each node has an extra input, called the threshold input, which acts as a reference level or bias for the node (El-Kaliouby and Al-Garni 2009; Al-Garni 2009, 2010). Data enter the network via the input layer, where each node broadcasts a single data value over weighted connections to the hidden layer; the hidden nodes process the input data and broadcast their results to the output layer. The output nodes have dissimilar sets of weights and process the input values to produce the results. This architecture is called a feed-forward multilayer perceptron (Fig. 1). The input is processed by a hidden node in two main steps: (1) the node multiplies every input by its weight, sums the products, and passes the sum through a nonlinear transfer function, also called a threshold function, to produce a result, which is the activation of the PE; (2) the activation is multiplied by the connection weights going to the next layer.
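The two-step processing of a hidden node described above can be sketched in Python. This is an illustrative sketch, not the authors' implementation; the weights and inputs are made up:

```python
import math

def node_output(inputs, weights, bias):
    """Step 1: multiply every input by its weight, sum the products,
    and add the threshold (bias) input; step 2: pass the sum through
    a nonlinear transfer function (here the sigmoid of Eq. 1)."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer_output(inputs, weight_rows, biases):
    """A full layer: each node sees the same inputs with its own
    weights; the activations are then broadcast to the next layer."""
    return [node_output(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A node with zero weights sits exactly at the midpoint of the sigmoid
print(node_output([1.0, 2.0], [0.0, 0.0], 0.0))  # 0.5
```

The same `node_output` step is repeated layer by layer, which is all that the feed-forward pass of Fig. 1 amounts to.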
The transfer function used in this paper is the sigmoid transfer function, which is continuously differentiable and monotonically increasing and can be described by a smooth step function:

f(α) = 1 / (1 + e^(−α))    (1)

where α ∈ [−1, 1] is the input value to the activation function; this function was used in the hidden and output layers of the MNN. Hence, the input is passed through the network in this manner until it reaches the output layer.

Fig. 1 Architecture of neural network

Training an NN is the most important step of model development, where the network learns the problem behavior. The NN is taught with simulated or measured samples from a training set of models. The performance of the NN is evaluated by the computed difference between the actual NN outputs and the desired outputs over all training samples. In this paper, the NN was designed to learn to extract the depth (z), the shape factor (q), and the amplitude coefficient (A). The input layer has as many nodes as there are input samples, which in this case are the gravity data. The output layer has three nodes, one for each parameter (z, q, and A). Training mainly adjusts the weight parameters (w) of the network so that the error E(w) between the NN model predictions and the desired output is minimized, where E(w) is a nonlinear function of w. The weight space is explored iteratively: an initial guess is implemented first, and w is then updated as follows:

w_new = w_old + δn    (2)

where w_new and w_old are the new and current vectors containing the values of the weights, n is the update direction, and δ is a positive step size regulating the extent to which w can be updated in that direction (Zhang and Gupta 2000; El-Kaliouby and Al-Garni 2009). In the training process, the error should be minimized over a large number of iterations; in some cases, however, the error remains high and flat, which indicates that the training is trapped in a local minimum. In that case, w can be perturbed and the training restarted from a new initial guess (El-Kaliouby and Al-Garni 2009; Al-Garni 2010).

Modular neural network

The NN inversion used for training is based on the MNN architecture. This type of NN has been used successfully in different areas of geophysics (El-Kaliouby and Poulton 1999; El-Kaliouby 2001; Zhang et al. 2002; Bhatt and Helle 2002; El-Kaliouby and Al-Garni 2009; Al-Garni 2009, 2010). An MNN is characterized by a series of independent neural networks moderated by some intermediary. Each independent network serves as a module (local expert) and operates on separate inputs to accomplish some subtask of the task that the network is to implement (Azam 2000). In other words, the network can be decomposed into several modules that do not interact with each other (Fig. 2). The outputs of the modules are mediated by an integrating unit called the gating network, which does not permit information to be fed back to the modules. In fact, the gating network decides how the output of the modules ought to be combined to form the final

Fig. 2 A diagram showing the architecture of the modular neural network

output of the system and which modules should be trained with which training patterns. Supervised and unsupervised learning paradigms are combined in the MNN: the gating network learns to break the task into several subtasks, which is unsupervised learning, and each module is allocated to learn only one part of the task, which is supervised learning. Thus, the modules compete with each other to learn each training pattern, and the gating network performs the function of a mediator among the modules (Haykin 1994; El-Kaliouby and Al-Garni 2009; Al-Garni 2010). Each module and the gating network receive the same input pattern from the training set, and the modules and the gating network are trained simultaneously. The gating network decides which module produced the most accurate response to a training pattern, and the connection weights in that module are allowed to be updated to enhance the probability that this module will respond best to similar patterns (Haykin 1994; Zhang et al. 2002; El-Kaliouby and Al-Garni 2009; Al-Garni 2009, 2010). One of the main benefits of an MNN is the ability to reduce a large, unwieldy neural network to smaller, more manageable components, with several advantages:

- Increased learning and relearning speed
- Improved generalization
- Better usability
- Interpretability
- Easier hardware implementation
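The gating arrangement of Fig. 2 can be illustrated with a minimal sketch. The softmax gate and the toy experts below are assumptions for illustration, not the paper's configuration; the point is only that the gate and the experts see the same input and the gate decides how the expert outputs are mixed:

```python
import math

def softmax(z):
    """Turn gating-network outputs (logits) into weights that sum to 1."""
    m = max(z)
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

def mnn_output(x, experts, gate):
    """Combine the module (local expert) outputs with the gating
    weights: y = sum_k g_k(x) * y_k(x). The gate sees the same input
    pattern as the experts and mediates their outputs (Fig. 2)."""
    g = softmax(gate(x))
    ys = [expert(x) for expert in experts]
    return sum(gk * yk for gk, yk in zip(g, ys)), g

# Toy setup: two "experts" and a gate whose logits strongly prefer
# expert 0 for negative inputs and expert 1 for positive inputs
experts = [lambda x: x + 1.0, lambda x: 2.0 * x]
gate = lambda x: [-10.0 * x, 10.0 * x]

y, g = mnn_output(-1.0, experts, gate)  # gate picks expert 0: y ≈ expert0(-1) = 0
```

During training, the same gating weights would decide which module's connection weights get updated for a given pattern, which is how the task is split into subtasks.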

Formulation of the problem

In gravity, the fields of many simple geometrically shaped bodies are symmetric about the location of the causative target, unlike


the magnetic case, where the magnetization direction and the direction of the earth's magnetic field make the measured field asymmetric. The gravity effect at any point P(x, y) on the surface caused by simple geometric-shaped bodies such as an infinite horizontal cylinder, a semi-infinite vertical cylinder, and a sphere is given by (Abdelrahman et al. 2001):

g(xi; z, q) = A / (xi² + z²)^q    (3)

where A is an amplitude coefficient related to the radius and density contrast of the buried causative target, z is the depth, x is the position coordinate, and q is the shape factor, which describes the source geometry and takes the value 0.5, 1.0, or 1.5 for an infinite horizontal cylinder, a semi-infinite vertical cylinder, or a sphere, respectively.

Synthetic examples

The synthetic examples are sampled at 101 points over a 200-unit profile with a two-unit interval (Fig. 3). The MNN was used to invert the gravity data with 101 nodes in the input layer and 50 nodes in the hidden layer; the sigmoid transfer function was used to modify the activations in the hidden layer. Three local experts with five processing elements each were used for the MNN inversion. The simple geometric-shaped models were assumed to have parameters z = 5 units, A = 100 units, and q = 0.5, 1.0, and 1.5 for the horizontal cylinder, vertical cylinder, and sphere, respectively, generating the responses shown in Fig. 3. A total of 1,125 training models was used, covering the following parameter ranges:

- The depth, z, ranges from 2.0 to 8 units, with 15 points in this range
- The shape factor, q, ranges from 0.3 to 2, with five points in this range
- The amplitude coefficient, A, ranges from 50 to 150 units, with 15 points in this range.
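The forward model of Eq. 3 and the synthetic profiles of Fig. 3 can be reproduced with a short sketch (parameter values taken from the text; a Python sketch, not the authors' code):

```python
def gravity_anomaly(x, z, q, A):
    """Eq. 3: g(x) = A / (x^2 + z^2)^q for a simple geometric body.
    q = 0.5: horizontal cylinder; q = 1.0: vertical cylinder; q = 1.5: sphere."""
    return A / (x * x + z * z) ** q

# 101 stations sampled over a 200-unit profile at a 2-unit interval,
# centered on the source location x0 = 0
stations = [-100 + 2 * i for i in range(101)]

# The three synthetic models of Fig. 3: z = 5 units, A = 100 units
shapes = {"horizontal cylinder": 0.5, "vertical cylinder": 1.0, "sphere": 1.5}
profiles = {name: [gravity_anomaly(x, 5.0, q, 100.0) for x in stations]
            for name, q in shapes.items()}
```

Each profile peaks over the source at x = 0; for the sphere, for example, the peak is 100 / (5²)^1.5 = 0.8 units, and the larger q is, the more sharply the anomaly decays away from the source.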

The choice of the centers of the parameter ranges and of the expected ranges themselves is based on the behavior of the measured field data. These expected ranges are less restrictive than the initial starting model assumed in local inversion methods, which may be far from the true parameters, so that the solution can be trapped in a local minimum. The choice of the parameter ranges depends mainly on the response of the measured gravity data; for example, deep targets show a broad gravity anomaly while shallow targets show a sharp, narrow anomaly. A coarse range with a small number of points per parameter is usually selected at the beginning of training to explore how well the selected range fits the gravity field data. Generally, if any model parameter does not fall within the selected ranges, the NN can report that the parameter is out of range so that the range can be expanded. During the learning process, the learning error for each parameter can be examined individually, as well as the overall root mean square. If the learning error is acceptable, we compare the misfit between the NN inversion and the gravity field data. If the misfit is also acceptable, the choice of ranges is suitable; otherwise, the parameter ranges should be narrowed or expanded, depending on the learning error of each parameter. Moreover, the resolution of each parameter can be increased by increasing the number of points per range. Generally, this process involves a two-step (coarse-to-fine) training of the NN, which does not take much time. On the other hand, once the neural network is well trained, it can invert any field data that fall within the training range in almost no time. For these synthetic data, one well-trained network was used to invert the different models with the MNN inversion. The MNN inversion responses are shown in Fig. 3, and the inverted parameters are tabulated in Table 1.

Noise analysis

The effect of random noise was studied by adding 5% and 10% white Gaussian noise (WGN) to the response of the vertical cylinder model (Fig. 4). The inverted noisy responses are shown in Fig. 4, demonstrating that the NN inversion provides satisfactory results up to 10% WGN (Table 2). Al-Garni (2009) studied the error in the horizontal location (x0 = 0) of a causative target and noted that any error in the horizontal location leads to errors in the inverted parameters. In the gravity case, however, the horizontal location of the causative target can be determined easily and precisely as the position of the maximum or minimum peak.
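Contaminating a profile with white Gaussian noise, as in the noise analysis, can be sketched as below. Scaling the noise standard deviation to a percentage of the peak anomaly is an assumption, since the paper does not state how the percentage is defined:

```python
import random

def add_wgn(signal, percent, seed=0):
    """Contaminate a profile with zero-mean white Gaussian noise whose
    standard deviation is `percent` of the maximum absolute amplitude."""
    rng = random.Random(seed)
    sigma = (percent / 100.0) * max(abs(v) for v in signal)
    return [v + rng.gauss(0.0, sigma) for v in signal]

# Vertical cylinder response of Eq. 3 (z = 5, q = 1, A = 100), 101 stations
clean = [100.0 / (x * x + 25.0) for x in range(-100, 102, 2)]
noisy = add_wgn(clean, percent=10.0)
```

The noisy profile is what the trained network would be asked to invert; the fixed seed only makes the sketch reproducible.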
Field examples

To examine the validity of the proposed technique, the gravity anomalies over a chromite deposit in Camaguey province, Cuba, and over a sulfide ore body, Noranda, Quebec, Canada, were used (Fig. 5). The shape factors, depths, and amplitude coefficients of both anomalies were estimated using the MNN.

Chromite deposit

Figure 5a shows the normalized residual gravity anomaly measured over a chromite deposit in Camaguey province, Cuba (Davis et al. 1957). This anomaly was sampled at 73


Fig. 3 Synthetic anomaly over a horizontal cylinder model (a), over a vertical cylinder model (b), and over a sphere model (c), and their neural network inversion responses

points of input data over a 73-m distance with a 1-m interval, with the maximum taken as the center of the profile (x0 = 0). One thousand training models were used, covering the following parameter ranges:

- The depth, z, ranges from 10 to 30 m, with 20 points in this range
- The shape factor, q, ranges from 0.3 to 2, with five points in this range
- The amplitude coefficient, A, ranges from 5,000 to 10,500, with 10 points in this range.
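The size of the training set follows directly from the ranges above: a regular grid over (z, q, A) with 20 × 5 × 10 points gives the 1,000 training models mentioned in the text. A sketch of building that grid (the uniform spacing within each range is an assumption; the paper does not state how the points are distributed):

```python
from itertools import product

def linspace(lo, hi, n):
    """n evenly spaced values from lo to hi inclusive."""
    step = (hi - lo) / (n - 1)
    return [lo + i * step for i in range(n)]

# Parameter ranges used to train the network for the chromite example
z_vals = linspace(10.0, 30.0, 20)       # depth, m
q_vals = linspace(0.3, 2.0, 5)          # shape factor
A_vals = linspace(5000.0, 10500.0, 10)  # amplitude coefficient

# Every (z, q, A) combination is one training model
training_models = list(product(z_vals, q_vals, A_vals))
print(len(training_models))  # 1000
```

For each triple, the forward model of Eq. 3 would generate the corresponding anomaly profile, and the (profile, parameters) pairs form the supervised training set.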

The result of the MNN inversion is also shown in Fig. 5a. The MNN inversion gives an estimated depth of 21.14 m, a shape factor of 1.54, and an amplitude coefficient of


Table 1 Theoretical examples of different geometrical models

Model                 Parameters     Z (units)   q      A (units)
Horizontal cylinder   True           5.00        0.50   100.00
                      NN inversion   5.03        0.48   108.23
Vertical cylinder     True           5.00        1.00   100.00
                      NN inversion   5.03        1.03   112.40
Sphere                True           5.00        1.50   100.00
                      NN inversion   5.06        1.48   107.25

Table 2 Synthetic examples of different WGN percentages

Vertical cylinder model   Z (units)   q      A (units)
True                      5.00        1.00   100.00
NN inversion, 0% WGN      5.03        1.03   112.40
NN inversion, 5% WGN      5.20        1.05   108.23
NN inversion, 10% WGN     5.08        1.05   116.20

9,382.52 mGal×m². Robinson and Coruh (1988) showed that this anomaly results from a sphere model at a depth of 21 m. Aboud et al. (2004) estimated the depth to range between 23.64 and 24.23 m, with an average value of 23.81 m, and estimated the shape factor as 1.5. The parameters estimated using the neural network agree well with these previous studies and with the depth of 21 m obtained from the drill hole. The results are summarized in Table 3.

Fig. 4 Synthetic anomaly over a vertical cylinder model contaminated with 5% (a) and 10% (b) of random WGN, and their neural network inversion responses

Sulfide ore body

Figure 5b shows the normalized residual gravity anomaly over the sulfide ore body, Noranda, Quebec, Canada (Grant and West 1965). The anomaly was sampled at 101 points over a 200-m profile with a 1-m interval, with the maximum taken as the center of the profile (x0 = 0). A total of 1,575 training models was used, covering the following parameter ranges:


Fig. 5 Normalized gravity anomaly data over a chromite deposit, Cuba (Davis et al. 1957; a), and a residual gravity anomaly profile over a sulfide deposit, Canada (Grant and West 1965; b), and their neural network inversion responses

- The depth, z, ranges from 20 to 40 m, with 15 points in this range
- The shape factor, q, ranges from 0.4 to 1.5, with seven points in this range
- The amplitude coefficient, A, ranges from 100 to 300, with 15 points in this range.

Table 3 Comparison between the present technique and others for the chromite deposit in Camaguey province, Cuba

Authors                     z (m)    q        A (mGal×m²)
Robinson and Coruh (1988)   21       Sphere   –
Aboud et al. (2004)         23.8     1.5      –
Present technique           21.14    1.54     9,382.52

The results of the MNN inversion are shown in Fig. 5b. The neural network inversion gives an estimated depth of 29.75 m, a shape factor of 0.69, and an amplitude coefficient of 200.11 mGal×m². Roy et al. (1999) showed that the gravity anomaly results from a vertical line source with a shape factor ranging between 0.66 and 0.68. Aboud et al. (2004) showed that the estimated depth ranged between

26.45 and 30.50 m, with an average value of 28.91 m, and that the estimated shape factor ranged between 0.54 and 0.65, with an average value of 0.6. The parameters inverted using the MNN agree well with the drilling results (30 m) as well as with the previous work of Roy et al. (1999, 2000) and Aboud et al. (2004). The results are summarized in Table 4.

Conclusion

NN inversion of the gravity parameters of simple geometric-shaped bodies such as the sphere, the horizontal cylinder, and the

Table 4 Comparison between the present technique and others for the sulfide ore body, Noranda, Quebec, Canada

Authors               z (m)    q                           A (mGal×m²)
Roy et al. (1999)     –        Vertical line (0.66–0.68)   –
Aboud et al. (2004)   28.91    0.6                         –
Drilling              30.0     –                           –
Present technique     29.15    0.69                        200.11


vertical cylinder has been investigated. MNN inversion has been utilized for three parameters: the shape factor, the depth, and the amplitude coefficient. The approach was tested first on synthetic data using only one well-trained network, and then on two field examples taken from Cuba and Canada. The sigmoid function was used as the activation function in the MNN inversion. The effect of noise was investigated, and the results show that the proposed technique produces satisfactory results up to 10% white Gaussian noise. The results of the MNN inversion of the two field examples show good agreement with other published inversion techniques and with drilling information. Hence, the successful application of the MNN inversion to the synthetic and field gravity data demonstrates the reliability and validity of this approach.

References

Abdelrahman EM, El-Araby TM (1993) A least-squares minimization approach to depth determination from moving average residual gravity anomalies. Geophysics 59:1779–1784
Abdelrahman EM, Sharafeldin MS (1995) A least-squares minimization approach to shape determination from gravity data. Geophysics 60:589–590
Abdelrahman EM, El-Araby TM, El-Araby HM, Abo-Ezz ER (2001) A new method for shape and depth determinations from gravity data. Geophysics 66:1774–1778
Aboud E, Salem A, Elawadi E, Ushijima K (2004) Estimation of shape factor of buried structure from residual gravity data. The 7th SEGJ International Symposium, November 24–26, 2004. Tohoku University, Sendai
Al-Garni MA (2008) Walsh transforms for depth determination of a finite vertical cylinder from its residual gravity anomaly. SAGEEP 6–10:689–702
Al-Garni MA (2009) Interpretation of some magnetic bodies using neural networks inversion. Arab J Geosci 2:175–184
Al-Garni MA (2010) Interpretation of spontaneous potential anomalies from some simple geometrically shaped bodies using neural network inversion. Acta Geophysica 58:143–162
Azam (2000) Biologically inspired modular neural networks. PhD Dissertation, Virginia Tech., 183 p. http://scholar.lib.vt.edu/theses/available/etd-06092000-12150028/unrestricted/etd.pdf
Babu LA, Reddy KG, Mohan NL (1991) Gravity interpretation of vertical line element and slab–a Mellin transform method. Indian J Pure Appl Math 22:439–447
Bhatt A, Helle H (2002) Committee neural network for porosity and permeability prediction from well logs. Geophys Prospect 50:645–660
Davis WE, Jackson WH, Richter DH (1957) Gravity prospecting for chromite deposits in Camaguey province, Cuba. Geophysics 22:848–869
El-Kaliouby HM (2001) Extracting IP parameters from TEM data. In: Poulton MM (ed) Computational neural networks for geophysical data processing, chapter 17. Pergamon, Oxford
El-Kaliouby HM, Al-Garni MA (2009) Inversion of self-potential anomalies caused by 2D inclined sheets using neural networks. J Geophys Eng 6:29–34
El-Kaliouby HM, Poulton MM (1999) Inversion of coincident loop TEM data for layered polarizable ground using neural networks. Society of Exploration Geophysicists (SEG) 69th annual meeting. Houston, Texas, USA
Grant FS, West GF (1965) Interpretation theory in applied geophysics. McGraw-Hill, New York
Gupta OP (1983) A least-squares approach to depth determination from gravity data. Geophysics 48:537–360
Haykin S (1994) Neural networks: a comprehensive foundation. Macmillan, New York
Jain LC, Martin NM (1999) Fusion of neural networks, fuzzy sets and genetic algorithms: industrial applications. CRC, Boca Raton
Jang JSR, Sun CT, Mizutani E (1997) Neuro-fuzzy and soft computing: a computational approach to learning and machine intelligence. Prentice-Hall, New York
Macias C, Sen M, Stoffa P (2000) Artificial neural networks for parameter estimation in geophysics. Geophys Prospect 48:21–47
Masters T (1993) Practical neural network recipes in C++. Academic, CA, USA
Mohan NL, Anandababu L, Roa S (1986) Gravity interpretation using Mellin transform. Geophysics 52:114–122
Nandi BK, Shaw RK, Agarwal NP (1997) A short note on identification of the shape of simple causative sources from gravity data. Geophys Prospect 45:513–520
Poulton MM (2001) Computational neural networks for geophysical data processing. Pergamon, Oxford, UK
Reid AB, Allsop JM, Granser H, Millet AJ, Somerton IW (1990) Magnetic interpretation in three dimensions using Euler deconvolution. Geophysics 55:80–91
Robinson ES, Coruh C (1988) Basic exploration geophysics. Wiley, New York
Roy L, Agarwal NP, Shaw RK (1999) Estimation of shape factor and depth from gravity anomalies due to some simple sources. Geophys Prospect 47:41–58
Roy L, Agarwal NP, Shaw RK (2000) A new concept in Euler deconvolution of isolated gravity anomalies. Geophys Prospect 48:559–575
Salem A, Elawadi E, Ushijima K (2003) Short note: depth determination from residual anomaly using a simple formula. Comput Geosci 29:801–804
Sharma B, Geldart LP (1968) Analysis of gravity anomalies of two-dimensional faults using Fourier transforms. Geophys Prospect 16:77–93
Shaw RK, Agarwal P (1990) The application of Walsh transforms to interpret gravity anomalies due to some simple geometrical shaped causative sources: a feasibility study. Geophysics 55:843–850
Sundararajan N, Rama Brahmam G (1998) Spectral analysis of gravity anomalies due to slab like structures—a Hartley transform technique. J Appl Geophys 39:53–61
Sundararajan N, Mohan NL, Seshagiri Rao SV (1983a) Gravity interpretation of 2-D fault structures using Hilbert transform. J Geophysics (Germany) 53:34–47
Sundararajan N, Mohan NL, Seshagiri Rao SV (1983b) Interpretation of gravity anomalies due to some 2-D structures—a Hilbert transform technique. Indian Acad Sci (Earth and Planetary Sciences) 92:179–188
Thompson DT (1982) EULDPH—a new technique for making computer-assisted depth estimates from magnetic data. Geophysics 47:31–37
Yarger HL, Robertson RR, Wentland RL (1978) Diurnal drift removal from aeromagnetic data using least squares. Geophysics 46:1148–1156
Zhang Q, Gupta K (2000) Neural networks for RF and microwave design. Artech House, London, UK
Zhang L, Poulton MM, Wang T (2002) Borehole electrical resistivity modeling using neural networks. Geophysics 67:1790–1797
