Rock Mech Rock Eng (2012) 45:1055–1072 DOI 10.1007/s00603-012-0239-9
ORIGINAL PAPER
Application of Generalized Regression Neural Networks in Predicting the Unconfined Compressive Strength of Carbonate Rocks

Nurcihan Ceryan · Umut Okkan · Ayhan Kesimal

Received: 31 October 2011 / Accepted: 29 February 2012 / Published online: 15 March 2012
© Springer-Verlag 2012

Abstract Measuring unconfined compressive strength (UCS) using standard laboratory tests is a difficult, expensive, and time-consuming task, especially with highly fractured, highly porous, weak rock. This study aims to establish predictive models for the UCS of carbonate rocks formed in various facies and exposed in Tasonu Quarry, northeast Turkey. The objective is to effectively select the explanatory variables from among a subset of the dataset containing total porosity, effective porosity, slake durability index, and P-wave velocity in dry samples and in the solid part of samples. This selection was based on the adjusted determination coefficient and root-mean-square error values of the different linear regression combinations obtained using all possible regression methods. A prediction model for UCS was then prepared using generalized regression neural networks (GRNNs). GRNNs were preferred over feed-forward back-propagation algorithm-based neural networks because there is no problem of local minima in GRNNs. In this study, as a result of all possible regression analyses, alternative combinations involving one, two, and three inputs were used. Through comparison of GRNN performance with that of feed-forward back-propagation algorithm-based neural networks, it is demonstrated that GRNN is a good potential candidate for prediction of the unconfined compressive strength of carbonate rocks. From an examination of other applications of UCS prediction models, it is apparent that the GRNN technique has not been used thus far in this field. This study provides a clear and practical summary of the possible impact of alternative neural network types in UCS prediction.

Keywords Unconfined compressive strength · Prediction · Porosity · Wave velocity · Generalized regression neural networks · All possible regression methods

N. Ceryan, Department of Geology Engineering, Balikesir University, Balikesir, Turkey; e-mail: [email protected]
U. Okkan, Department of Civil Engineering, Balikesir University, Balikesir, Turkey; e-mail: [email protected]
A. Kesimal, Department of Mining Engineering, Karadeniz Technical University, Trabzon, Turkey; e-mail: [email protected]

List of Symbols
φ        Porosity
AdjR2    Adjusted determination coefficient
k        Number of parameters in the model
Id (%)   Slake durability index (fourth cycle)
MSEk     Mean of residual squares in the model with k parameters
n (%)    Total porosity
N        Number of data
ne (%)   Effective porosity
R2       Determination coefficient
S        Smoothing parameter
Sd       Output from the denominator neuron
Sj       Output from the jth numerator neuron
ui       Input portion of the ith training vector represented by the ith neuron in the pattern layer
V        Volume of the sample
Vfl      Velocity in the fluid
Vm       P-wave velocity in rock samples lacking pores and fissures
Vp       P-wave velocity in the sample
Wd       Weight of the sample in the dried condition
Wij      Weight vector between the pattern layer and summation layer
Ws       Weight of the sample in the saturated condition
x        Input vector
yj       Output vector
hi       Output from the ith neuron in the pattern layer
ρd       Density of solid particles
ρs       Dry density
ρw       Water density
σ2       Variance of the dependent variable
1 Introduction

The unconfined compressive strength (UCS) of rocks is a basic design and construction parameter in geotechnical projects, particularly those involving underground openings, tunnel and dam design, rock blasting and drilling, mechanical rock excavation, and slope stability. This property of rock can be determined experimentally using direct or indirect methods (ISRM 1981). Limited laboratory facilities and difficulties in obtaining high-quality core samples oblige engineers to use predictive models to determine these properties. In addition, some rock types, such as carbonate rocks, are characterized by weakness planes, lineation, and lamination (Yagiz et al. 2011). Thus, determining the UCS of such rock types is sometimes difficult or impossible using a UCS test.

As a result, in determining the strength properties of rocks, many researchers have developed various predictive models that employ conventional statistical methods (simple linear regression and multiple linear or nonlinear regression) as well as models based on artificial intelligence techniques (artificial neural networks (ANNs), genetic algorithms, fuzzy logic, and classification and regression trees). The parameters used in these models are obtained from simple index tests and/or mineralogical analyses in addition to basic mechanical tests, such as measurements of physical properties and elastic wave velocity, the Schmidt hammer and point-load index tests (Kahraman 2001; Hack and Huisman 2002; Chang et al. 2006; Ceryan et al. 2008; Cobanoğlu and Çelik 2008; Kahraman et al. 2009; Oyler et al. 2010; Yagiz et al. 2011), the block punch test (Ulusay et al. 2001; Altindag et al. 2004; Kayabali and Selçuk 2010), and the core strangle test (Yilmaz 2010). The index parameters, basic mechanical tests, and mineralogical analyses require only small-volume samples and provide a simple, fast, and more economical solution (Bell 1978; Fahy and Guccione 1979; Brook 1985; Doberenier and De Freitas 1986; Hawkins and McConnell 1990; Shakoor and Bonelli 1991; Ulusay et al. 1994; Romana 1999; Alvarez Grima and Babuska 1999; Singh et al. 2001; Gokceoglu 2002; Gokceoglu and Zorlu 2004; Sonmez et al. 2004; Chang et al. 2006; Oyler et al. 2010). Indirect methods for UCS prediction are often necessitated by limited laboratory facilities (Baykasoğlu et al. 2008).

In recent years, ANNs have been used extensively in engineering applications (Kahraman et al. 2009), and they have also found their way into geotechnical engineering (Meulenkamp 1997; Alvarez Grima and Babuska 1999; Meulenkamp and Alvarez Grima 1999; Singh et al. 2001; Kahraman and Alber 2006; Baykasoğlu et al. 2008; Yilmaz and Yuksek 2009; Kahraman et al. 2009; Sarkar et al. 2010). ANN-based models can provide practically accurate solutions for both precisely and imprecisely formulated problems and for phenomena that are understood only through experimental data and field observations (Yagiz et al. 2011). They are highly nonlinear and can capture complex interactions among the input/output variables of a system without any prior knowledge of the nature of those interactions (Canakci and Pala 2007; Ji et al. 2006). Many researchers have used ANNs to estimate UCS (e.g., Meulenkamp and Alvarez Grima 1999; Singh and Dubey 2000; Kahraman and Alber 2006; Cobanoğlu and Çelik 2008; Zorlu et al. 2008; Yilmaz and Yuksek 2008; Sarkar et al. 2010; Kahraman et al. 2010; Cevik et al. 2011; Yagiz et al. 2011), the weathering degree of rocks (Gokceoglu et al. 2009), and the permeability coefficient of coarse-grained soils (Yilmaz et al. 2011).

In the present study, we aim to establish predictive models for the UCS of carbonate rocks developed in various facies and exposed in Tasonu Quarry, northeast Turkey, for rock engineering applications. The core samples featured both coarse and fine grain sizes, visible fracturing, and macrofossils, and their surfaces had pitted, pitted-to-vuggy, and vuggy textures. The use of estimation methods was therefore regarded as useful in determining the uniaxial compressive strength. The objective of this study is to select effectively the explanatory variables (inputs) from among a subset of the mineralogical and index properties of the samples, based on the adjusted determination coefficient and root-mean-square error (RMSE) values of different linear regression analysis combinations, and to prepare a prediction model for UCS using generalized regression neural networks (GRNNs). For this purpose, total porosity (n), the slake durability index (Id), and the P-wave velocity in the solid part of the sample (Vm) were selected as the inputs for the GRNNs. GRNNs were preferred in this application over feed-forward back-propagation algorithm-based neural networks because the problem of local minima does not occur in GRNNs, and so an iterative training procedure is not required.
2 Materials and Testing Procedures

The carbonate rock samples, which developed in different facies, were taken from Tasonu Quarry, Trabzon, northeast Turkey (Fig. 1). The rocks are used as raw materials by Trabzon Askale Cement Factory. They are part of the Kirechane Formation, which developed in the Campanian (Fig. 2). The mineralogical composition of the samples from the Kirechane Formation was studied using X-ray diffraction (XRD) at Hacettepe University. Semiquantitative percentages of the minerals were calculated by the method developed by Gundogdu (1982); details of the method can be found in Temel and Gundogdu (1996). Some samples from the quarry were 100% CaCO3. In other samples, there were significant variations in the other components (clay, feldspar, biotite, and opaque minerals; Table 1).

In this study, 56 groups of block samples, each with approximate dimensions of 30 × 30 × 30 cm, were collected in the field for rock mechanics tests using the core-drilling machine of the Rock Mechanics Laboratory in the Engineering Faculty of Karadeniz Technical University. Core samples 50 mm in diameter were prepared from the rock blocks, and the ends of the specimens were cut parallel and smooth (ISRM 2007; Fig. 3). Tests for specific density, unit weight, porosity, effective porosity, P-wave velocity, slake durability, and UCS were carried out in the laboratory. The physical property and UCS tests were performed on 15 samples from each sample group, and the slake durability test on three samples from each group (Table 1). The total porosity (n) and effective porosity (ne) of the rock were estimated using the following equations:

n = 1 - \rho_s / \rho_d,    (1)

Fig. 1 Location of Tasonu Quarry (Trabzon, northeast Turkey)
Fig. 2 Geologic map of Tasonu Quarry. L0 basalt, andesite, and pyroclasts; L1 volcanic pebbly red tuff; L2 red tuff alternating with white limestone; L3 common macro-shelly karstic voided limestone (a) intercalated with red tuff (b); L4 fine-grained karstic voided carbonate mudstone (a) overlying red sandy clayey limestone (b); L5 alternating sandy limestone, clayey limestone, and marl; L6 volcanic tuff intercalating with clayey limestone and marl; L7 sandy pebbly limestone; L8 carbonate-cemented sandstone intercalated with clayey limestone and marl (b); the lower part of the sandstone contains a silicified level (a) with interbeddings of common macrofossiliferous, biotite-bearing tuffaceous carbonate-cemented sandstone and sandy limestone
Table 1 Mineralogical, index, and strength properties of the samples examined (columns: Smpl, Clt, Cly, Fld, Qrz, Qq, Bi, G, ck, n, ne, Id, Vp, Vm, UCS)

Clt calcite (%), Cly clay (%), Fld feldspar (%), Qrz quartz (%), Qq opaque minerals (%), Bi biotite (%), G specific density, ck dry unit weight (kN/m3), n total porosity (%), ne effective porosity (%), Id slake durability index (fourth cycle) (%), Vp P-wave velocity in dry samples (m/s), Vm P-wave velocity in the solid part of the sample (m/s), UCS unconfined compressive strength (MPa)
Fig. 3 Test samples with 50 mm diameter
n_e = (W_s - W_d) / (\rho_w V),    (2)
where ρs is the dry density, ρd is the density of solid particles, ρw is the water density, Wd is the weight of the sample in the dried condition, Ws is the weight of the sample in the saturated condition, and V is the volume of the sample.
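A minimal sketch of Eqs. 1 and 2 is given below; the function names, units, and example values are assumptions, and only the two relations themselves are taken from the text.

```python
# Minimal sketch of Eqs. 1 and 2 (total and effective porosity).
# Variable names and the example values are illustrative only.

def total_porosity(rho_dry, rho_solid):
    """Eq. 1: n = 1 - rho_s / rho_d, with rho_s the dry density and
    rho_d the density of the solid particles (the paper's notation)."""
    return 1.0 - rho_dry / rho_solid

def effective_porosity(w_sat, w_dry, volume, rho_water=1000.0):
    """Eq. 2: ne = (Ws - Wd) / (rho_w * V); SI units assumed (kg, m3, kg/m3)."""
    return (w_sat - w_dry) / (rho_water * volume)

if __name__ == "__main__":
    # hypothetical 50 mm diameter, 100 mm long core
    n = total_porosity(rho_dry=2300.0, rho_solid=2700.0)
    ne = effective_porosity(w_sat=0.470, w_dry=0.452, volume=1.96e-4)
    print(f"n = {100*n:.1f} %, ne = {100*ne:.1f} %")
```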
In this study, ultrasonic pulse velocity (UPV) tests were conducted using the first method suggested in ISRM (1981). UPV measurements were performed on the samples in both dried and saturated conditions. For the testing, longitudinal (P) velocities were measured by using the ultrasonic pulse method. Pundit-Plus model equipment was used for the sonic velocity measurements. The length of the
measuring base was determined with an accuracy of 0.1 mm. Before the measurements, the end surfaces of the samples were made smooth and flat. A thin film of vaseline was applied to the surfaces of the transducers (transmitter and receiver). The pulse transmission technique was applied during the test, with the transmitter and receiver positioned on opposite end surfaces of the specimens investigated. No pressure was applied to the sample during the test. In the measurements, the Pundit and two transducers (a transmitter and a receiver) with a frequency of 400 kHz were used. The travel time of the ultrasonic pulses was read with an accuracy of 0.1 μs. After the measurements, the P-wave velocity, Vp, was calculated from the measured travel time and the distance between the transmitter and receiver. In addition, the P-wave velocity in rock samples lacking pores and fissures, Vm, was calculated by employing Eq. 3 (from Barton 2007):

1/V_p = \phi/V_{fl} + (1 - \phi)/V_m,    (3)

where Vp is the P-wave velocity in the sample, Vfl is the velocity in the fluid, φ is the ratio of the path length in the fluid to the total path length (i.e., the porosity), and Vm is the P-wave velocity in rock samples lacking pores and fissures (in other words, the P-wave velocity in the solid). In this study, to calculate the Vm value, Vp was taken as the P-wave velocity measured in saturated samples, Vfl as the P-wave velocity measured in the fluid, and φ as the porosity.

Slake durability testing was undertaken on 10 samples of each rock type for four cycles (Franklin and Chandra 1972). UCS tests were carried out according to ISRM (2007). Core samples were prepared at a 2.5:1 height-to-diameter ratio, with a diameter of 50 mm and a height of 125 mm. The
experiments were performed on 15 samples in the dried condition for each group. During the test, the samples were loaded so as to fail within 10 to 15 min.
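Rearranging Eq. 3 for Vm gives Vm = (1 − φ)/(1/Vp − φ/Vfl). The sketch below applies this rearrangement; the function name, the default fluid velocity, and the example values are assumptions rather than values from the study.

```python
# Minimal sketch of Eq. 3 (time-average relation) solved for Vm.
# Only the relation 1/Vp = phi/Vfl + (1 - phi)/Vm comes from the text;
# the names and example numbers are illustrative.

def vm_from_time_average(vp_saturated, porosity, v_fluid=1480.0):
    """Return Vm, the P-wave velocity of the pore-free solid, from
    1/Vp = phi/Vfl + (1 - phi)/Vm  =>  Vm = (1 - phi) / (1/Vp - phi/Vfl)."""
    slowness_solid = 1.0 / vp_saturated - porosity / v_fluid
    return (1.0 - porosity) / slowness_solid

if __name__ == "__main__":
    vm = vm_from_time_average(vp_saturated=3500.0, porosity=0.17)
    print(f"Vm = {vm:.0f} m/s")
```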
3 Method

3.1 Selection of UCS Predictors

The selection of the model inputs generally depends on the dependent variables. Any type of input can be used in modeling as long as it has a significant correlation with the dependent variables. However, a large number of predictors does not necessarily produce better results. Hence, some statistical methods have been used in various studies to reduce the dimensionality of the relationship (Singh and Harrison 1985; Sharma 1996). There are several ways of selecting predictors if a large number are available. One common method is a detailed search in which all possible regressions are tried and one is selected as the most appropriate predictor set according to statistical performance criteria (Neter et al. 1996). The maximum adjusted determination coefficient (AdjR2) can be used as such a performance criterion (McQuarrie and Tsai 1998). R2 (Eq. 4) describes the proportion of the variation in the dependent variable explained by the predictors in the model. R2 increases with an increasing number of parameters in the model; thus, it does not by itself indicate the correct regression model. AdjR2 (Eq. 5) is a modified version of R2 adjusted for the number of inputs in the model and is generally considered a more accurate goodness-of-fit measure than R2:
R^2 = 1 - MSE_k / \sigma^2,    (4)

Adj R^2 = 1 - \frac{N - 1}{N - 1 - k} (1 - R^2),    (5)

where R2 is the determination coefficient, AdjR2 is the adjusted determination coefficient, MSEk is the mean of residual squares in the model with k parameters, σ2 is the variance of the dependent variable, N is the number of data, and k is the number of parameters in the model.
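As a sketch of the selection procedure described above, the code below enumerates all predictor subsets, fits each by ordinary least squares, and ranks them by AdjR2 (Eq. 5) and RMSE. It mimics the idea of the Minitab best-subsets tool rather than reproducing it; the function and variable names are assumptions.

```python
# Sketch of the "all possible regressions" search ranked by AdjR2 (Eq. 5) and RMSE.
# X is an (N, p) array of candidate predictors and y holds the UCS values.
from itertools import combinations
import numpy as np

def fit_subset(X, y, cols):
    A = np.column_stack([np.ones(len(y)), X[:, cols]])   # intercept + chosen inputs
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    mse = np.mean(resid ** 2)
    r2 = 1.0 - mse / np.var(y)                                     # Eq. 4
    k = len(cols)
    adj_r2 = 1.0 - (len(y) - 1) / (len(y) - 1 - k) * (1.0 - r2)    # Eq. 5
    return adj_r2, np.sqrt(mse)

def all_possible_regressions(X, y, names):
    results = []
    for size in range(1, X.shape[1] + 1):
        for cols in combinations(range(X.shape[1]), size):
            adj_r2, rmse = fit_subset(X, y, list(cols))
            results.append(([names[c] for c in cols], adj_r2, rmse))
    # rank by adjusted R2 (larger is better)
    return sorted(results, key=lambda r: r[1], reverse=True)
```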
3.2 Generalized Regression Neural Networks

The GRNN is a special four-layer neural network that imitates the regression process and is used in the prediction of continuous variables (Specht 1991). This approach has been preferred in applications over the feed-forward neural network because there is no problem with local minima in the GRNN, so it does not require an iterative training procedure. The GRNN, which is related to the normalized radial basis function network and is based on kernel regression, consists of four layers: an input layer, a pattern layer, a summation layer, and an output layer.
Table 2 Different linear regression analysis combinations of the candidate predictors (n, ne, Id, ck, Vm), grouped by the number of inputs (one to five), with the corresponding R2 (%), AdjR2 (%), and RMSE (MPa) values of each model
Table 3 GRNN, REG, and FFNN performance for the training and testing periods (values given as training/testing)

Combination  Model  R2             AdjR2          MSE (MPa2)      RMSE (MPa)
COMB1        REG    0.7771/0.6963  0.7691/0.6729  2.2607/5.8364   1.5036/2.4159
COMB1        FFNN   0.8111/0.7295  0.8044/0.7087  1.9153/5.4473   1.3839/2.3339
COMB1        GRNN   0.8048/0.7527  0.7978/0.7337  1.9319/4.7422   1.3899/2.1777
COMB2        REG    0.7950/0.7406  0.7798/0.6974  2.0444/4.9236   1.4298/2.2189
COMB2        FFNN   0.8686/0.7460  0.8589/0.7037  1.2929/5.5656   1.1371/2.3592
COMB2        GRNN   0.8675/0.7540  0.8577/0.7130  1.4941/4.7853   1.2223/2.1875
COMB3        REG    0.8104/0.7386  0.7885/0.6673  1.8949/4.9977   1.3766/2.2356
COMB3        FFNN   0.9076/0.7627  0.8969/0.6980  0.9093/4.7747   0.9536/2.1851
COMB3        GRNN   0.9418/0.7664  0.9351/0.7027  0.6248/4.5824   0.7904/2.1407
The typical structure of the GRNN is shown in Fig. 4.
Fig. 4 GRNN structure: input layer, pattern layer, summation layer, and output layer

Fig. 5 Determination of the smoothing parameter for the testing period in the GRNN model with COMB1 (a), COMB2 (b), and COMB3 (c)
In the first layer, which does not perform any processing, an input vector is presented to the network. The number of neurons contained in this layer is equal to the number of elements in the input vector. The input data are then passed on to the second layer, the pattern layer, where each training vector is represented (Cigizoglu 2005; Okkan and Dalkilic 2011; Serbes and Okkan 2011). Thus, there are N pattern neurons running in parallel if the training dataset consists of a total of i = 1, 2, ..., N samples. Each neuron, i, generates an output, hi, based on the input provided by the input layer:

h_i = \exp[-(x - u_i)^T (x - u_i) / (2S^2)],    (6)

where x is the input vector, S is the smoothing parameter, and ui is the input portion of the ith training vector represented by the ith neuron in the pattern layer. If the smoothing parameter becomes larger, the function approximation becomes smoother. Too large a smoothing parameter means that many neurons will be required to fit a fast-changing function; too small a smoothing parameter means that many neurons will be needed to fit a smooth function, and the GRNN may not generalize well.
Fig. 6 Scatter plots of COMB1 for the training and testing periods: GRNN (1, 0.05, 1), the regression model σc = 0.77 + 0.00319Vm (MPa), and FFNN (1, 8, 1) predictions plotted against the measured UCS
Every neuron in the pattern layer is then connected to the summation layer, which contains two groups of neurons, namely numerator and denominator neurons. The group of numerator summation neurons is used for computing the weighted sum of the outputs from the pattern neurons. The transformation applied in the numerator neurons can be written as
S_j = \sum_{i=1}^{N} W_{ij} h_i,    (7)

where Sj is the output from the jth numerator neuron, hi is the output from the ith neuron in the pattern layer, and Wij is the weight vector between the pattern layer and the summation layer.
Fig. 7 Scatter plots of COMB2 for the training and testing periods: GRNN (2, 0.14, 1), the regression model σc = 5.77 + 0.00255Vm − 0.132n (MPa), and FFNN (2, 9, 1) predictions plotted against the measured UCS
The denominator group in the summation layer has only one neuron, whose output is computed as the sum of the outputs of the pattern layer neurons and can be defined as

S_d = \sum_{i=1}^{N} h_i,    (8)

where Sd is the output from the denominator neuron and hi is the output from the ith neuron in the pattern layer.

The number of neurons in the output layer is equal to the number of numerator neurons. The outputs (yj) of the GRNN can be computed as

y_j = S_j / S_d.    (9)
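Equations 6–9 translate directly into a few lines of code. The sketch below is a generic single-output GRNN predictor written from the equations above, not from the authors' MATLAB code; in the standard single-output case the pattern-to-summation weights Wij are simply the training targets, and the small constant added to the denominator is an extra guard not present in Eq. 8.

```python
# Sketch of a GRNN prediction step built directly from Eqs. 6-9 in the text.
# Not the authors' MATLAB code; for a single-output regression the
# pattern-to-summation weights W_ij are the training targets themselves.
import numpy as np

def grnn_predict(X_train, y_train, X_query, smoothing):
    """X_train: (N, d) training inputs u_i; y_train: (N,) targets;
    X_query: (M, d) query points; smoothing: the parameter S of Eq. 6."""
    preds = []
    for x in np.atleast_2d(X_query):
        diff = X_train - x
        # Eq. 6: pattern-layer outputs h_i
        h = np.exp(-np.sum(diff * diff, axis=1) / (2.0 * smoothing ** 2))
        s_num = np.dot(y_train, h)     # Eq. 7: numerator summation neuron
        s_den = np.sum(h) + 1e-12      # Eq. 8 (small constant guards against underflow)
        preds.append(s_num / s_den)    # Eq. 9: output neuron
    return np.array(preds)
```

With the inputs scaled to the 0–1 range as described in the next section, smoothing parameters of the order of those reported below (0.05–0.15) control how local the kernel averaging is.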
Fig. 8 Scatter plots of COMB3 for the training and testing periods: GRNN (3, 0.13, 1), the regression model σc = −6.1 + 0.00221Vm − 0.0991n + 0.139Id (MPa), and FFNN (3, 6, 1) predictions plotted against the measured UCS

4 Results
To evaluate the strength and direction of the relations between the variables, different linear regression analysis combinations were obtained using the "all possible regression method" tool in Minitab software with five predictor variables, namely n (%), the total porosity; ne (%), the effective porosity; Id (%), the slake durability index (fourth cycle); Vp (m/s), the P-wave velocity in dry samples; and Vm (m/s), the P-wave velocity in the solid part of the sample, in addition to the UCS. The different combinations are given in Table 2. In Table 2, AdjR2 increases and the root-mean-square error (RMSE) decreases rapidly up to three variables (Vm, n, and Id); the RMSE then increases with the addition of the fourth and fifth inputs. Following the all possible regression analyses, alternative models involving one (COMB1), two (COMB2), and three (COMB3) inputs were prepared (Table 2). In applying the GRNN, a MATLAB code was used. To compare the generalization capabilities of the GRNN, the input–output data were divided into training and testing subsets in the proportions of 2/3 and 1/3, respectively. Before presenting the input–output data to the GRNN, all datasets were normalized to the range 0–1 so that the different input signals had the same numerical range. The training and testing subsets were scaled to the range 0–1 using the equation zt = (xt − xmin)/(xmax − xmin), where xt is the real data, zt is the normalized data, and xmax and xmin are the maximum and minimum values, respectively, of the real data. The output values of the GRNN, which were in the range 0–1, were then converted back to real-scale values. The best GRNN structures with the different inputs (or combinations) were those providing the best training results in terms of the minimum RMSE; the maximum AdjR2 was also considered for the testing periods.
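The 2/3–1/3 split and the min-max scaling zt = (xt − xmin)/(xmax − xmin) described above translate into a few helper functions. The sketch below uses hypothetical names and assumes the scaling limits are taken column-wise from the training data; only the scaling equation itself comes from the text.

```python
# Sketch of the 2/3-1/3 data split and the min-max scaling z = (x - xmin)/(xmax - xmin).
# Names are illustrative; only the scaling equation is taken from the text.
import numpy as np

def split_two_thirds(X, y):
    """Split arrays into training (first 2/3) and testing (last 1/3) subsets."""
    n_train = int(round(len(y) * 2.0 / 3.0))
    return X[:n_train], y[:n_train], X[n_train:], y[n_train:]

def minmax_scale(data, data_min, data_max):
    """Forward transform to the 0-1 range (applied column-wise)."""
    return (data - data_min) / (data_max - data_min)

def minmax_unscale(z, data_min, data_max):
    """Inverse transform used to bring GRNN outputs back to real-scale UCS."""
    return z * (data_max - data_min) + data_min
```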
Fig. 9 REG, GRNN, and FFNN results of COMB1 for the training and testing periods (predicted and measured UCS versus data point number)
In training, the smoothing parameters (S) of the GRNN models were determined by a trial-and-error method (Fig. 5a–c). In the testing period, the smoothing parameters giving the best performance for the three combinations were SCOMB1 = 0.05, SCOMB2 = 0.14, and SCOMB3 = 0.13, with which RMSE values of 2.1777 MPa (for COMB1), 2.2328 MPa (for COMB2), and 2.1407 MPa (for COMB3) were obtained, thus attaining the most efficient performance (Table 3). The GRNN results were compared with those of multiple linear regression models (REG) and feed-forward back-propagation algorithm-based neural networks (FFNN). A typical FFNN model was constructed using the same input combinations as used for the GRNN for comparison purposes. The neurons of the hidden layer and output layer used the sigmoid transfer function. A scaled conjugate gradient algorithm (Moller 1993) was employed for training, and the training epochs were set to 15 (for COMB1), 25 (for COMB2), and 35 (for COMB3). The FFNN models used for prediction had one (for COMB1), two (for COMB2), and three (for COMB3) inputs and one output, and the number of hidden neurons was optimized through trials. At the end of the trials, eight hidden neurons for COMB1, nine for COMB2, and six for COMB3 gave the lowest MSE and the highest R2 values, i.e., the best FFNN performance (Table 3). The GRNN, REG, and FFNN results for the training and testing periods were compared with the measured UCS values in the form of scatter plots and graphs of UCS against data point number (Figs. 6, 7, 8, 9, 10, 11). When the performance in the training and testing periods was compared, it was observed that the GRNN results were better in terms of error performance.
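The trial-and-error determination of S (Fig. 5) amounts to scanning candidate values and keeping the one with the lowest testing RMSE. The sketch below assumes the hypothetical grnn_predict function from the earlier listing and an arbitrary candidate range; the study's reported optima were 0.05, 0.14, and 0.13.

```python
# Sketch of the trial-and-error search for the smoothing parameter S (cf. Fig. 5).
# Reuses the hypothetical grnn_predict() from the earlier GRNN sketch; the
# candidate range is arbitrary.
import numpy as np

def select_smoothing(X_tr, y_tr, X_te, y_te, candidates=np.arange(0.01, 0.51, 0.01)):
    best_s, best_rmse = None, float("inf")
    for s in candidates:
        pred = grnn_predict(X_tr, y_tr, X_te, smoothing=s)
        rmse = float(np.sqrt(np.mean((y_te - pred) ** 2)))
        if rmse < best_rmse:
            best_s, best_rmse = s, rmse
    return best_s, best_rmse
```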
Fig. 10 REG, GRNN, and FFNN results of COMB2 for the training and testing periods (predicted and measured UCS versus data point number)
Fig. 11 REG, GRNN, and FFNN results of COMB3 for the training and testing periods (predicted and measured UCS versus data point number)
Fig. 12 Box-and-whisker plots of the measured UCS and the model predictions for the testing period
Fig. 13 Anderson–Darling (AD) normality test and probability plots (normal, 95% CI) for the COMB1 testing results: measured (mean 15.27, StDev 4.482, N 15, AD 0.356, P 0.410), REG (mean 15.17, StDev 4.117, N 15, AD 0.716, P 0.048), GRNN (mean 15.58, StDev 3.967, N 15, AD 0.589, P 0.104)
Fig. 14 Anderson–Darling (AD) normality test and probability plots (normal, 95% CI) for the COMB2 testing results: measured (mean 15.27, StDev 4.482, N 15, AD 0.356, P 0.410), REG (mean 15.23, StDev 4.106, N 15, AD 0.617, P 0.088), GRNN (mean 15.34, StDev 3.264, N 15, AD 0.662, P 0.067)
In this study, GRNN and REG results are also provided as box-and-whisker plots to compare the minimum, maximum, and median values of the observed and predicted UCS values in the testing period. It was noted that the minimum value statistic and range between the first and third quartiles of GRNN predictions for COMB1 and median statistics (second quartile) of GRNN results for all combinations (COMB1, COMB2, and COMB3) were satisfactory and provided superior predictions compared with REG and FFNN. In addition to the basic statistics of the models, box-andwhisker plot (Fig. 12) presentations and Anderson–Darling normality test and probability plots (Figs. 13, 14, 15) were
also examined for the testing period. The Anderson–Darling (AD) normality test, a statistical test of whether there is evidence that given data did not arise from a given probability distribution, was used in this study to analyze the two comparison groups (measured values and model predictions) and to identify whether or not they fitted a normal distribution. The AD plots and test results were prepared using Minitab software. The AD test rejects the hypothesis of normality when the P value is ≤0.05 (the level of significance).
Fig. 15 Anderson–Darling (AD) normality test and probability plots (normal, 95% CI) for the COMB3 testing results: measured (mean 15.27, StDev 4.482, N 15, AD 0.356, P 0.410), REG (mean 15.11, StDev 4.130, N 15, AD 0.696, P 0.055), GRNN (mean 15.21, StDev 3.462, N 15, AD 0.621, P 0.086)
The P values obtained for the measured values and the GRNN predictions of the three combinations were 0.410 (measured), 0.104 (COMB1), 0.067 (COMB2), and 0.086 (COMB3). These P values are greater than the significance level of 0.05, so the null hypothesis (i.e., that the data fit a normal distribution) cannot be rejected. According to these results, the GRNN predictions, particularly those for COMB1, fit the distribution of the measured values well.
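A hedged sketch of the Anderson–Darling check on measured and predicted UCS is given below. It uses scipy.stats.anderson, which returns the AD statistic and critical values rather than the Minitab P values quoted above, so normality is judged by comparing the statistic with the 5% critical value; the array names in the usage comment are hypothetical.

```python
# Sketch of an Anderson-Darling normality check on measured and predicted UCS.
# scipy's anderson() returns the statistic and critical values (no P value),
# so normality is judged against the 5% critical value; the P values quoted
# in the text come from Minitab.
import numpy as np
from scipy import stats

def ad_normality(values, label):
    result = stats.anderson(np.asarray(values), dist="norm")
    idx = list(result.significance_level).index(5.0)   # 5% significance level
    ok = result.statistic < result.critical_values[idx]
    print(f"{label}: AD = {result.statistic:.3f}, "
          f"5% critical = {result.critical_values[idx]:.3f}, "
          f"normality {'not rejected' if ok else 'rejected'}")

# usage (hypothetical arrays):
# ad_normality(ucs_measured_test, "Measured")
# ad_normality(ucs_grnn_comb1_test, "GRNN (COMB1)")
```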
5 Summary and Conclusions

In this study, alternative models were developed to predict the UCS of carbonate rocks developed in different facies from their index properties using an alternative neural network method, the GRNN. The explanatory predictors of the GRNN were selected by performing a comprehensive all possible regression analysis, in which the optimum inputs were identified from the adjusted determination coefficient and RMSE performance, with the UCS as the dependent variable. Following this analysis, alternative combinations involving one (Vm), two (Vm and n), and three (Vm, n, and Id) inputs were used for the GRNN. All these trials showed that three successful combinations (COMB1, COMB2, and COMB3) were obtained, as assessed using different statistical criteria: MSE, RMSE, R2, and AdjR2 values; box-and-whisker plots showing quartiles and extreme values; and AD test statistics indicating the distribution fit between the measured values and the model predictions. However, prediction with the smallest number of inputs (COMB1) is sufficient; that is, the P-wave velocity in the solid part of the sample (Vm) alone is regarded as sufficient for predicting UCS values.

In spite of a number of advantages, feed-forward neural networks have some drawbacks, including the possibility
of getting trapped in local minima and subjectivity in the determination of model parameters (learning rate, momentum rate, Marquardt parameter, decay rate, etc.) and structure (number of hidden layers, number of neurons in the hidden layers, activation function type, etc.). From an examination of other applications of UCS prediction models, it is apparent that the GRNN technique detailed above has not thus far been applied in this field. Moreover, the generalization capability and ease of training of the GRNN go well beyond the capacities of many other artificial intelligence methods. Thus, the present study has demonstrated that the GRNN is an alternative artificial neural network technique that is capable of UCS modeling and allows nonlinear relations to be analyzed efficiently once the optimum GRNN smoothing parameter (S) has been determined. The GRNN model suggested in this study can be applied to carbonate rocks developed in various facies. However, it should not be forgotten that the performance of the models developed in this study should also be checked against additional data as they become available in the literature. We believe that the GRNN method can easily be applied to other geologic variables of nonlinear nature to achieve better performance than would be obtained using traditional statistical methods and other artificial intelligence approaches.
References

Altindag R, Alyildiz IS, Onargan T (2004) Technical note: mechanical property degradation of ignimbrite subjected to recurrent freeze–thaw cycles. Int J Rock Mech Min Sci 41:1023–1028
Alvarez Grima M, Babuska R (1999) Fuzzy model for the prediction of unconfined compressive strength of rock samples. Int J Rock Mech Min Sci 36:339–349
Barton N (2007) Fracture-induced seismic anisotropy when shearing is induced in production from fractured reservoirs. J Seism Explor 16:115–143
Baykasoğlu A, Güllü H, Çanakçı H, Özbakır L (2008) Prediction of compressive and tensile strength of limestone via genetic programming. Expert Syst Appl 35:111–123
Bell FG (1978) The physical and mechanical properties of the Fell sandstones, Northumberland, England. Eng Geol 12:1–29
Brook N (1985) The equivalent core diameter method of size and shape correction in point load testing. Int J Rock Mech Min Sci Geomech Abstr 22:61–70
Canakci H, Pala M (2007) Tensile strength of basalt from a neural network. Eng Geol 94:10–18
Ceryan S, Tudes S, Ceryan N (2008) A new quantitative weathering classification for igneous rocks. Environ Geol 55:1319–1336
Cevik A, Sezer EA, Cabalar AF, Gokceoglu C (2011) Modeling of the unconfined compressive strength of some clay-bearing rocks using neural network. Appl Soft Comput 11:2587–2594
Chang C, Zoback MD, Khaksar A (2006) Empirical relations between rock strength and physical properties in sedimentary rocks. J Petrol Sci Eng 51:223–237
Cigizoglu HK (2005) Generalized regression neural networks in monthly flow forecasting. Civil Eng Environ Syst 22(2):71–84
Cobanoğlu İ, Çelik SB (2008) Estimation of uniaxial compressive strength from point load strength, Schmidt hardness and P-wave velocity. Bull Eng Geol Environ 67:491–498
Doberenier L, De Freitas MH (1986) Geotechnical properties of weak sandstones. Géotechnique 36:79–94
Fahy MP, Guccione MJ (1979) Estimating strength of sandstone using petrographic thin-section data. Bull Assoc Eng Geol 16:467–485
Franklin JA, Chandra A (1972) The slake durability test. Int J Rock Mech Min Sci 9(1):325–341
Gokceoglu C (2002) A fuzzy triangular chart to predict the unconfined compressive strength of the Ankara agglomerates from their petrographic composition. Eng Geol 66:39–51
Gokceoglu C, Zorlu K (2004) A fuzzy model to predict the unconfined compressive strength and modulus of elasticity of a problematic rock. Eng Appl Artif Intell 17:61–72
Gokceoglu C, Zorlu K, Ceryan S, Nefeslioglu HA (2009) A comparative study on indirect determination of degree of weathering of granites from some physical and strength parameters by two soft computing techniques. Mater Charact 60:1317–1327
Gundogdu N (1982) The geological, geomechanical and mineralogical investigation of the Neogene-aged Bigadic sedimentary basin. PhD thesis, Hacettepe University Engineering Faculty, Beytepe, Ankara, p 368
Hack H, Huisman M (2002) Estimating the intact rock strength of a rock mass by simple means. In: van Rooy JL, Jermy CA (eds) Proceedings of the 9th congress of the International Association for Engineering Geology and the Environment, Durban, South Africa
Hawkins A, McConnell BJ (1990) Influence of geology on geomechanical properties of sandstones. In: Proceedings of the 7th international congress on rock mechanics. Balkema, Rotterdam, pp 257–260
ISRM (1981) Rock characterization, testing and monitoring: ISRM suggested methods. In: Brown ET (ed). Pergamon, Oxford, p 211
ISRM (2007) The complete ISRM suggested methods for rock characterization, testing and monitoring: 1974–2006. In: Ulusay R, Hudson JA (eds) Suggested methods prepared by the Commission on Testing Methods, International Society for Rock Mechanics. ISRM Turkish National Group, Ankara, p 628
Ji T, Lin T, Lin X (2006) A concrete mix proportion design algorithm based on artificial neural networks. Cem Concr Res 36:1399–1408
Kahraman S (2001) Evaluation of simple methods for assessing the unconfined compressive strength of rock. Int J Rock Mech Min Sci 38:981
Kahraman S, Alber M (2006) Estimating the unconfined compressive strength and elastic modulus of a fault breccia mixture of weak rocks and strong matrix. Int J Rock Mech Min Sci 43:1277–1287
Kahraman S, Gunaydin O, Alber M, Fener M (2009) Evaluating the strength and deformability properties of Misis fault breccia using artificial neural networks. Expert Syst Appl 36:6874–6878
Kahraman S, Alber M, Fener M, Gunaydin O (2010) The usability of Cerchar abrasivity index for the prediction of UCS and E of Misis Fault Breccia: regression and artificial neural networks analysis. Expert Syst Appl 37:8750–8756
Kayabali K, Selçuk L (2010) Nail penetration test for determining the uniaxial compressive strength of rock. Int J Rock Mech Min Sci 47(2):265–271
McQuarrie AD, Tsai C (1998) Regression and time series model selection. World Scientific Publishing, River Edge
Meulenkamp F (1997) Improving the prediction of the UCS by Equotip readings using statistical and neural network models. Memoirs of the Centre for Engineering Geology in the Netherlands, vol 162, p 127
Meulenkamp F, Alvarez Grima M (1999) Application of neural networks for the prediction of the unconfined compressive strength (UCS) from Equotip hardness. Int J Rock Mech Min Sci 36:29–39
Moller MF (1993) A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw 6:523–533
Neter J, Kutner M, Nachtsheim C, Wasserman W (1996) Applied linear statistical models. McGraw-Hill, New York
Okkan U, Dalkilic HY (2011) Reservoir inflows modeling with artificial neural networks: the case of Kemer Dam in Turkey. Fresenius Environ Bull 20(11):3110–3119
Oyler DC, Mark C, Molinda GM (2010) In situ estimation of roof rock strength using sonic logging. Int J Coal Geol 83:484–490
Romana M (1999) Correlation between unconfined compressive and point-load (Franklin tests) strengths for different rock classes. In: Proceedings of the 9th ISRM congress, vol 1. Balkema, pp 673–676
Sarkar K, Tiwary A, Singh TN (2010) Estimation of strength parameters of rock using artificial neural networks. Bull Eng Geol Environ 69:599–606
Serbes ZA, Okkan U (2011) Modeling of streamflows by using generalized regression neural networks (in Turkish). In: 5. Ulusal Su Mühendisliği Sempozyumu Bildiriler Kitabı (Cilt II), pp 537–546
Shakoor A, Bonelli RE (1991) Relationship between petrographic characteristics, engineering index properties and mechanical properties of selected sandstones. Bull Assoc Eng Geol 28:55–71
Sharma S (1996) Applied multivariate techniques. Wiley, Canada
Singh A, Harrison A (1985) Standardized principal components. Int J Remote Sens 6:883–896
Singh TN, Dubey RK (2000) A study of transmission velocity of primary wave (P-wave) in coal measures sandstone. J Sci Ind Res India 59:482–486
Singh VK, Singh D, Singh TN (2001) Prediction of strength properties of some schistose rocks from petrographic properties using artificial neural networks. Int J Rock Mech Min Sci 38:269–284
Sonmez H, Tuncay E, Gokceoglu C (2004) Models to predict the unconfined compressive strength and the modulus of elasticity for Ankara Agglomerate. Int J Rock Mech Min Sci 41:717–729
Specht DF (1991) A general regression neural network. IEEE Trans Neural Netw 2(6):568–576
Temel A, Gundogdu MN (1996) Zeolite occurrences and the erionite–mesothelioma relationship in Cappadocia, Central Anatolia, Turkey. Mineralium Deposita 31:539–547
Ulusay R, Tureli K, Ider MH (1994) Prediction of engineering properties of a selected litharenite sandstone from its petrographic characteristics using correlation and multivariate statistical techniques. Eng Geol 37:135–157
Ulusay R, Gokceoglu C, Sulukcu S (2001) Draft ISRM suggested method for determining block punch index (BPI). Int J Rock Mech Min Sci 38:1113–1119
Yagiz S, Sezer EA, Gokceoglu C (2011) Artificial neural networks and nonlinear regression techniques to assess the influence of slake durability cycles on the prediction of uniaxial compressive strength and modulus of elasticity for carbonate rocks. Int J Numer Anal Methods Geomech. doi:10.1002/nag.1066
Yilmaz I (2010) Use of the core strangle test for tensile strength estimation and rock mass classification. Int J Rock Mech Min Sci 47(5):845–850
Yilmaz I, Yuksek AG (2008) An example of artificial neural network (ANN) application for indirect estimation of rock parameters. Rock Mech Rock Eng 41(5):781–795
Yilmaz I, Yuksek AG (2009) Prediction of the strength and elasticity modulus of gypsum using multiple regression, ANN and ANFIS models. Int J Rock Mech Min Sci 46(4):803–810
Yilmaz I, Marschalko M, Bednarik M, Kaynar O, Fojtova L (2011) Neural computing models for prediction of permeability coefficient of coarse-grained soils. Neural Comput Appl. doi:10.1007/s00521-011-0535-4
Zorlu K, Gokceoglu C, Ocakoglu F, Nefeslioglu HA, Acikalin S (2008) Prediction of unconfined compressive strength of sandstones using petrography-based models. Eng Geol 96:141–158