J Intell Manuf (2017) 28:1285–1301 DOI 10.1007/s10845-015-1155-0

Tool-wear prediction and pattern-recognition using artificial neural network and DNA-based computing

Doriana M. D'Addona¹ · A. M. M. Sharif Ullah² · D. Matarazzo¹

Received: 13 July 2015 / Accepted: 5 October 2015 / Published online: 13 October 2015 © Springer Science+Business Media New York 2015

Abstract Managing tool-wear is an important issue in all material removal processes. This paper deals with the application of two nature-inspired computing techniques, namely, artificial neural network (ANN) and (in silico) DNA-based computing (DBC), for managing tool-wear. Experimental data (images of the worn-zone of a cutting tool) have been used to train the ANN and, then, to perform the DBC. It is demonstrated that the ANN can predict the degree of tool-wear from a set of tool-wear images processed under a given procedure, whereas the DBC can identify the degree of similarity/dissimilarity among the processed images. The integration of ANN and DBC can be explored further for other complex problems in which prediction and pattern-recognition must be solved simultaneously.

Keywords Tool-wear · Nature-inspired computing · Pattern-recognition · Prediction · Artificial neural network · DNA-based computing

Doriana M. D'Addona (corresponding author)
[email protected]

1 Fraunhofer Joint Laboratory of Excellence for Advanced Manufacturing Engineering (Fh-J_LEAPT), Department of Chemical, Materials and Production Engineering, University of Naples Federico II, Piazzale Tecchio 80, 80125 Naples, Italy

2 Department of Mechanical Engineering, Kitami Institute of Technology, 165 Koencho, Kitami, Hokkaido 090-8507, Japan

Introduction

Among other factors, tool-wear must be minimized to achieve the sustainability of a manufacturing process. Therefore, tool-condition monitoring and tool-wear prediction have earned a great deal of attention. Teti et al. (2010) have reviewed the methodologies used in modern tool-condition monitoring. They have reported that advanced signal processing and machine learning techniques have extensively been used in monitoring the condition of a cutting tool in both on-line and off-line modes. To be more specific, some selected studies are briefly described below, showing the computational challenges of tool-wear monitoring and prediction and how to overcome them. Wang et al. (2007) have developed a self-organizing map (neural network) and trained it in a batch mode after each cutting pass against the flank-wear-relevant features of cutting forces (obtained from a sensor-based analysis) and the measured degree of wear (obtained by interpolating the vision-based measurement). They have shown that the trained neural network is effective for online tool condition monitoring in milling and is independent of the cutting conditions used. Jemielniak et al. (2012) have studied the force, vibration, and acoustic emission signals while turning a specific material and extracted features from the time, frequency, and time-frequency domains of the signals for detecting the tool-wear. They have introduced a hierarchical process to coordinate among the useful features while determining the tool condition (used-up time of the tool). Penedo et al. (2012) have introduced the concepts called global model (a model based on linear or quadratic regressions to capture the overall trend) and local model (a model based on a fuzzy k-nearest-neighbors smoothing algorithm to capture the localized nonlinear variability) and integrated the two models to detect the tool-wear in turning. They have shown that the methodology works better than transductive or inductive neuro-fuzzy models for tool-wear detection. Li et al. (2000) proposed a tool wear condition monitoring system for turning operations utilizing an adaptive neuro-fuzzy inference system.


The model uses the feed-motor current as input to estimate the feed cutting force. The estimated force is related to the tool wear, and the method proved effective and adaptive to variations in cutting conditions. Li and Zhejun (1998) related features extracted from an Acoustic Emission (AE) signal to the tool wear under given cutting conditions. The AE signal features were extracted by means of a wavelet packet transform, followed by a Fuzzy Clustering Method (FCM), to enable tool wear condition monitoring. This method proved effective in monitoring the process, since the features had a low sensitivity to changes of cutting conditions and the FCM had a high success rate over a wide range of cutting conditions. Wang et al. (2013a) have employed a Gaussian mixture regression model to extract features from the cutting force signal for tool-wear prediction. They have shown that the presented methodology (Gaussian mixture regression model) performs better than other methodologies, namely, multiple linear regression, radial basis function, and back propagation neural network. Pal et al. (2011) have employed a wavelet packet tree and principal component analysis to extract tool-wear-sensitive features from turning operations (strain on the tool holder and motor current). They have trained an artificial neural network to correlate cutting conditions with tool-wear features so that the amount of tool-wear can be predicted based on the cutting conditions applied. Wang and Cui (2013b) have trained an auto-associative neural network (empowered by the Levenberg–Marquardt training algorithm) using the cutting force signals of milling operations without tool-wear (normal condition). As such, the methodology does not require training data of the abnormal conditions (force signals underlying tool-wear) and, thereby, is useful in on-line tool-wear monitoring in a real-life application (rough and finish cuts). Azmi (2015) has developed a system using an adaptive neural network-based fuzzy inference to monitor tool-condition while milling glass fiber reinforced composites. The network has been trained using the feed force data sets. It has been stressed that the nonlinear relationship between tool-wear and feed force can adequately be captured by the presented neuro-fuzzy computational technique. Nouri et al. (2015) have developed a real-time tool-wear monitoring methodology for milling. The methodology uses a cutting-condition-independent coefficient that is shown to be correlated with the degree of tool-wear. The coefficient is developed using tangential and radial force models of milling. Ren et al. (2014) have employed type-2 fuzzy numbers to monitor the tool-condition in micro-milling. The type-2 fuzzy number has been used because it not only can model the acoustic emission signals but also can handle the uncertainty associated with the signals. Wang and Feng (2013) have developed a tool condition monitoring methodology integrating a hidden Markov chain and a conditional random field. It has been shown that the conditional random field improves the pattern recognition ability of the hidden Markov


chain underlying the acoustic emission signals while monitoring the tool-wear in a real industrial environment. Kilundu et al. (2011) have integrated a signal processing methodology called singular spectrum analysis and different machine learning methodologies (decision trees, Bayesian networks, k-nearest neighbor, and neural network) to handle the computational complexities in monitoring the tool-wear using the cutting vibration signals. A comparison of the tool-wear detection performance has also been presented, showing which machine learning methodology is good for recognizing which tool-wear pattern. Prakash and Kanthababu (2013) have established the relationships among the tool-wear (flank wear), acoustic emission signals, surface roughness, and chip morphology in order to develop a methodology for online detection of tool-wear in micro-end-milling for several metallic materials. Murata et al. (2012) have developed an in-process tool-wear detection system. The system senses the thermo-electromotive force generated in the vicinity of the cutting tool and workpiece. They have found that the electric resistance and tool-wear are correlated, i.e., a decrease in the electric resistance means an increase in the flank wear. This trend helps detect the progress of tool-wear during intermittent cutting (e.g., face milling) in a reliable manner. Xu et al. (2014) have developed a methodology to detect the tool-wear in drilling. They have used the wavelet packet transform to identify the features in the static and dynamic components of the drilling torque and thrust force signals. They have employed a back propagation neural network for the feature recognition, and found that the features extracted from the low-frequency band in the dynamic component of the signals are effective in detecting the drilling tool-wear. Patra et al. (2010) have developed a tool condition (flank wear) monitoring system using the vibration signals of cutting. They have shown that the fuzzy radial basis function based neural network can recognize the features (extracted from the time domain by applying the wavelet packet approach) underlying the vibration signals more effectively than others (back propagation neural network, radial basis function network, and normalized radial basis function network). Teshima et al. (1993) have developed a methodology to train an artificial neural network using the tool-wear (flank and crater) image data of turning and implemented it within the framework of flexible manufacturing systems. Takuma et al. (1994) have developed a methodology to identify the tool-condition (remaining cutting time and the type of wear) using a neural network. The network has been trained by using the image data of the worn-zone of the cutting tool. Besides the image data, the inputs of the network consist of workpiece hardness and cutting conditions (feed, depth of cut, and cutting speed). The outputs of the network are the remaining cutting time (finish, medium finish, and rough cuts) and the tool-wear (flank, crater, and groove). Kassim et al. (2000) have analyzed the images of workpiece surfaces and investigated the correlation between the tool-wear


and the patterns exhibited by the images of the workpiece surfaces. They have identified that the condition of a cutting tool (sharp, semi-dull, and dull) can be successfully detected by analyzing the respective image data of the machined surfaces. Lanzetta (2001) has developed a high-resolution vision sensor for tool-condition monitoring. The author has established the logical relationships between the tool-wear morphology and the degree of tool-wear. The author has also developed a parameter-driven image processing technique that detects all types of tool-wear in the presented morphology. Wang et al. (2006a) have developed a tool-wear measurement technique using the phase shifting method that measures the degree of crater wear from a single shot or frame taken by a CCD camera system. They have shown that the technique is robust with respect to the background, ambient light, and setup configuration, i.e., it can be used for the on-line measurement of tool-wear. However, without special hardware devices the method is not responsive enough for on-line applications. Wang et al. (2006b) have developed a methodology to measure the flank wear of a cutting tool using a threshold-independent edge detection approach. It has been stressed that the methodology is suitable for the in-process flank wear measurement of an insert-based cutting tool due to its fast response time and high measurement accuracy. D'Addona and Teti (2013) have developed a procedure to create a library of tool-wear image data sets that captures the scenarios of tool-wear progression along with the machining time for turning operations under different cutting conditions. They have focused on a standard procedure to process the cutting tool images and to design an optimized artificial neural network for tool-wear prediction by using the images stored in the library. As understood from the research works described above, a great deal of computational complexity arises while processing the signals (cutting force, vibration, or acoustic emission) or images (on- or off-line images of the worn-zone of a cutting tool) for the sake of monitoring the tool-condition or predicting the degree of tool-wear. To overcome the computational complexities, most researchers have relied on a nature-inspired computing methodology called artificial neural network. Some researchers have integrated other computing techniques (as described above) with the artificial neural network, making it even more meaningful and effective. However, recall the work of D'Addona and Teti (2013), where a library of tool-wear images has been created. One can process the raw images according to her/his choice. As a result, many variants of the tool-wear-related images come into being, depending on the individual who processes the information. On the other hand, the artificial neural network (or any other means) uses the information stored in the image library for training so as to predict the tool-wear for a given machining time. Since the images underlie


the same phenomenon, i.e., tool-wear in a material removal process under a certain set of cutting conditions, the images must exhibit a similar structure no matter which image processing technique is applied. As a result, besides predicting the progression of tool-wear by learning from a set of images, identifying the similarity/dissimilarity among the images is an important computational problem to be solved. This has become an important requirement for Internet-aided manufacturing (Ullah et al. 2013, 2014), because in Internet-aided manufacturing the relevant information for making a decision or building a system (e.g., a tool monitoring system) may come from an anonymous source. As such, before using a piece of decision- or system-building-relevant information, it is important to build trust in it. A nature-inspired computing methodology called (in silico) DNA-Based Computing (hereinafter referred to as DBC) can be applied to build trust in a set of information (i.e., to verify whether or not a set of information underlies the same phenomenon or is similar in nature, even if visual inspection might suggest the opposite). See Ullah et al. (2014) for more detail on the DBC for pattern recognition. This paper is based on this contemplation. In particular, this paper employs two different nature-inspired computing methodologies, namely, artificial neural network and DBC, for two different requirements. The first requirement is to predict the degree of tool-wear from a set of tool-wear images processed under a given procedure. The other requirement is to identify the degree of similarity among the processed images. The remainder of this article is organized as follows: "Data acquisition and image processing" section describes the tool-wear image data acquisition and image processing; three different image processing techniques are used to process the images of the worn-zone of a cutting tool for a given set of cutting conditions. "ANN-based tool-wear prediction" section describes how the artificial neural network has been trained to predict the progression of tool-wear with respect to machining time; two different training techniques have been employed to ensure the robustness of the prediction. "DBC-based pattern-recognition" section describes the results of pattern recognition using a DBC customized for tool-wear image data handling, identifying the similarity/dissimilarity among the processed tool-wear images for the three different image processing techniques. "Conclusions" section presents the concluding remarks of this study.

Data acquisition and image processing

This section describes the machining data acquisition and the image processing techniques for the tool-wear. Bar-shaped workpieces made of steel (Grade: AISI 1045) were turned (quasi-orthogonal machining) using cutting tools of type P3 (tungsten carbide insert).


Table 1 Cutting parameters for quasi-orthogonal tests

Cutting speed (m/min)   Feed rate (mm/rev)   Depth of cut (mm)
A  100                  D  0.06              H  1
B  150                  E  0.12              I  1.5
L  250                  F  0.19

Fig. 1 AEI 020-080-140 non-standard images

The machining operations were stopped after each minute of machining to take a picture of the worn-zone and measure the crater wear of the cutting tool. It is worth mentioning that the maximum crater wear (VBmax) was measured by using a microscope (Carl Zeiss Axioskop 40 at 10× magnification, equipped with a digital camera). In addition, a reference copper wire of diameter 0.25 mm was included in each picture of the worn-zone for the sake of subsequent image processing. The cutting conditions (depth of cut, feed rate, and cutting velocity) used while turning the workpiece are listed in Table 1. As seen from Table 1, three different cutting velocities (100, 150, and 250 m/min), three different feed rates (0.06, 0.12, and 0.19 mm/rev), and two different depths of cut (1 and 1.5 mm) were used. Given the cutting conditions, the maximum number of possible combinations was eighteen, and the experimental tests were conducted considering all of them. In this research work, fourteen combinations were considered meaningful for processing ("ADH", "AEH", "AEI", "AFH", "AFI", "BDI", "BEH", "BFH", "BFI", "LDI", "LEH", "LEI", "LFH", "LFI"). The experimental tests conducted under the combinations denoted as "ADI", "BDH", "BEI", and "LDH" presented some failures and were therefore not considered suitable. Each image is identified by a code representing both a cutting conditions combination and the elapsed machining time. For example, the image code AEI020 means that the cutting conditions combination is AEI (Table 1) and the elapsed machining time is 2 min. However, the produced set of images is not homogeneous, as realized from the images shown in Fig. 1, which come from the same cutting conditions combination denoted as AEI. (One can identify the reference copper wire and the worn-zone in these images.)
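To make the coding scheme concrete, the following is a minimal Python sketch (a hypothetical helper, not part of the original study) that decodes an image code such as AEI020 back into the cutting parameters of Table 1 and the elapsed machining time:

```python
# Hypothetical decoder for the image codes, based on Table 1.
# Shown only to make the coding scheme concrete.

CUTTING_SPEED = {"A": 100, "B": 150, "L": 250}     # m/min
FEED_RATE     = {"D": 0.06, "E": 0.12, "F": 0.19}  # mm/rev
DEPTH_OF_CUT  = {"H": 1.0, "I": 1.5}               # mm

def decode_image_code(code: str) -> dict:
    """E.g. 'AEI020' -> A = 100 m/min, E = 0.12 mm/rev, I = 1.5 mm, t = 2 min."""
    combo, digits = code[:3], code[3:]
    return {
        "cutting_speed_m_min": CUTTING_SPEED[combo[0]],
        "feed_rate_mm_rev": FEED_RATE[combo[1]],
        "depth_of_cut_mm": DEPTH_OF_CUT[combo[2]],
        "machining_time_min": int(digits) / 10,    # "020" -> 2.0, "150" -> 15.0
    }
```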


Fig. 2 P3 tungsten carbide fresh tool image

This necessitates an image standardization process to align the contrast, brightness, and position of the insert (cutting tool). To standardize a set of images, a quasi-standard procedure is proposed that highlights the area of the crater wear, separating it from the background. The procedure consists of the following six major steps:

Step 1: Aligning and resizing. The images are aligned and resized so that all images have the same size, i.e., 445 by 445 pixels.

Step 2: Overlapping. Each aligned and resized image is overlapped with the image of the cutting area of a fresh cutting tool (Fig. 2).

Step 3: Initial standardized images. Initial standardized images are produced (e.g., the images in Fig. 3), in which the concentration of the gray pixels is not yet optimized.


Fig. 3 AEI 020-080-140 after first pre-processing

Fig. 4 AEI020 gray level histogram

Step 4: Quasi-standardized images. Using an image-editing package (Adobe Photoshop CS5), the set of images corresponding to a cutting conditions combination (e.g., AEI) is converted from the Red-Green-Blue (RGB) scale to gray scale and lined up with the fresh tool picture until the worn-zone is adequately highlighted. The worn-zone in each image is cropped out and saved as new image data. As seen from the gray scale histogram (Fig. 4), i.e., the histogram of the values in the interval [0, 255], where 0 represents black and 255 represents white, the newly produced images are adjusted in both contrast and brightness, producing a set of images that is as standardized as possible. The standardized set of images corresponding to the cutting conditions combination AEI is shown in Fig. 5. Figure 6 shows the histograms of the images corresponding to AEI020/080/140. As seen from Fig. 6, the highest concentration of pixels is around 0 (black). This is a reasonable result, in accordance with the case shown in Fig. 5.

Step 5: Variants of standardized image sets. Multiple sets of standardized images can be produced, if needed. In doing this, the original set of images (i.e., the set produced in Step 4) can be processed further

by setting the other image processing parameters at different levels. In this sense, the set of images shown in Fig. 5 is the original set of standardized images corresponding to the cutting conditions combination AEI (hereinafter referred to as ORIGINAL AEI images). Likewise, the set of images shown in Fig. 7 is a set of standardized images denoted as 2.5 GB AEI images, because these images are produced by modifying the ORIGINAL AEI images keeping the Gaussian Blur (GB) level at 2.5. Figure 8 shows the ORIGINAL AEI images modified by setting the GB level at 5. This way, one can produce as many variants of the original set of images as needed.

Step 6: Binary image data. A set of two-dimensional binary arrays can be produced representing the set of images. In this case, standard packages (e.g., the MATLAB image processing toolbox) can be used. For example, Figs. 9, 10 and 11 show the sets of binary images (black and white) obtained from the sets of images shown in Figs. 5, 7 and 8, respectively, using the MATLAB image processing toolbox. It is worth mentioning that the binary image data produced in this step can be utilized in various computing systems for making decisions. This issue is described in the following two sections.
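As an illustration of Steps 1, 5, and 6, the sketch below uses Pillow and NumPy in place of Adobe Photoshop and the MATLAB image processing toolbox; the fixed binarization threshold of 128 is an assumption, since the paper does not report the exact thresholding used.

```python
# Minimal sketch of Steps 1, 5, and 6 (assumed threshold of 128;
# Pillow/NumPy stand in for Photoshop and the MATLAB toolbox).
import numpy as np
from PIL import Image, ImageFilter

def binarize_tool_image(path: str, blur_radius: float = 0.0,
                        threshold: int = 128) -> np.ndarray:
    img = Image.open(path).convert("L").resize((445, 445))    # Step 1
    if blur_radius > 0:                                        # Step 5 (GB variant)
        img = img.filter(ImageFilter.GaussianBlur(blur_radius))
    return (np.asarray(img) >= threshold).astype(np.uint8)     # Step 6: 0/1 array

# ORIGINAL, 2.5 GB, and 5 GB variants of one image:
# original = binarize_tool_image("AEI020.png")
# gb25 = binarize_tool_image("AEI020.png", blur_radius=2.5)
# gb5  = binarize_tool_image("AEI020.png", blur_radius=5.0)
```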


Fig. 5 AEI 020/150 fully processed

Fig. 6 AEI020/080/140 gray level histogram trend

ANN-based tool-wear prediction

An ANN learns from examples and classifies/recognizes the hidden structures underlying the examples. This way, it helps establish functional relationships among input and output parameters. As described in the "Introduction" section, ANNs have extensively been used in developing computing systems for predicting the degree of wear and recognizing the patterns underlying tool-wear. In this study, a type of ANN called Back Propagation Neural Network (hereinafter referred to as BP NN) (D'Addona et al. 2011) is used to predict the progress


in tool-wear. Figure 12 illustrates the general architecture of the BP NN used in this study. As seen from Fig. 12, the BP NN consists of three layers, namely, the input, hidden, and output layers. The BP NN was trained by using the information of three different sets of images (Figs. 9, 10, 11), the cutting conditions combination (AEI), and the machining time. In particular, from each set of AEI binary images, characteristic features were extracted and a feature vector was built. By combining these vectors, an n by m matrix representing the parameters of the input layer was created, where n = 8 represents the


Fig. 7 AEI 020/150 fully processed with Gaussian Blur level set at 2.5

Fig. 8 AEI 020/150 fully processed with Gaussian Blur level set at 5

dimension of the BP NN training set and m (= 7) represents the number of input parameters, namely, the machining time, the cutting conditions underlying AEI, and the extracted image features, as listed below:

• Time of machining;
• Cutting speed (A) (m/min);
• Feed rate (E) (mm/rev);
• Depth of cut (I) (mm);
• Average number of white pixels (1s) of each image;
• Number of white pixels (1s) of each image (N);
• Percentage (%) of white pixels in each binary image (i.e., (N/(445 × 445)) × 100).

The output of the BP NN is represented by a column vector, which contains the measured values of the crater wear (mm). The procedure of extracting the input matrix was repeated for each binary image set (ORIGINAL, 2.5, and 5 GB, as shown in Figs. 9, 10 and 11, respectively).
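A sketch of this feature extraction is given below, reusing the hypothetical helpers introduced earlier. Note that the values in the last column of Table 2 (e.g., 0.208 for AEI020, where N = 41160) match the ratio N/(445 × 445) rather than its percentage, and the "average number of white pixels" matches N divided by the 445 image rows; the sketch computes those quantities.

```python
# Sketch of one row of the BP NN input matrix P (cf. Table 2), reusing
# the hypothetical decode_image_code() and binarize_tool_image() above.
import numpy as np

def input_row(code: str, binary: np.ndarray) -> list:
    n_white = int(binary.sum())        # number of white pixels (1s)
    c = decode_image_code(code)
    return [
        c["machining_time_min"],       # time of machining
        c["cutting_speed_m_min"],      # cutting speed (A)
        c["feed_rate_mm_rev"],         # feed rate (E)
        c["depth_of_cut_mm"],          # depth of cut (I)
        n_white / binary.shape[0],     # avg white pixels per row (e.g. 92)
        n_white,                       # N (e.g. 41160)
        n_white / binary.size,         # N/(445*445) (e.g. 0.208, cf. Table 2)
    ]
```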


Fig. 9 AEI 020/150 ORIGINAL in monochrome scale

Fig. 10 AEI 020/150 2.5 GB in monochrome scale

Table 2 shows the input matrix for the ORIGINAL AEI binary set, while Table 3 shows the output vector, which is common to all the AEI image sets. However, other parameters of the BP NN (e.g., the number of nodes in the hidden layer and the training function) may affect its performance. To find a suitable configuration of the BP NN, two test networks (referred to as Model 1 and Model 2) were created. Model 1 uses the cascade forward back propagation BP NN (N_CF) (D'Addona and Teti 2013) and Model


2 uses the cascade forward BP NN (CFN) (Tengeleng and Armand 2014). Model 1, N_CF, has the following structure. The number of nodes in the hidden layer was chosen by trial and according to the "cascade learning" procedure, i.e., hidden units were added one at a time until the error was minimized and an acceptable training speed was achieved. The weights and thresholds were randomly initialized between −1 and +1 (Masters 1993).


Fig. 11 AEI 020/150 5 GB in monochrome scale

The learning coefficients were set as follows: the learning rate between the input and hidden layers was 0.3, the learning rate between the hidden and output layers was 0.15, and the momentum was 0.4. The learning rule called Normal Cumulative Delta Rule was used, and the sigmoid function was used as the transfer function. The N_CF was trained using the Levenberg-Marquardt training function (trainlm), which is based on numerical optimization techniques and is the fastest method to train neural networks of small size. The Levenberg-Marquardt function is particularly efficient in terms of solving time for function approximation problems (Masters 1993). The N_CF training was carried out by the "leave-k-out" method (l-k-o), particularly useful when dealing with small training sets: one vector (k = 1) was held back in turn for the testing phase and the other vectors were used for training (D'Addona and Teti 2013). On the other hand, Model 2, CFN, creates a weighted connection from the input layer and every previous layer to the following layers. Furthermore, it is able to create connections between the three-layer net and from the input layer to all three layers (D'Addona et al. 2015a, b). The CFN was trained with the Training–Validation–Testing (T–V–T) method (Segreto et al. 2014): 70 % of the input matrix was used for the training phase, 15 % for the validation phase, and the remaining 15 % for the testing phase. The best number of hidden nodes was chosen by trial; to apply the T–V–T method, MATLAB randomly divides the input matrix into the three sets that best suit the specific problem. The Levenberg-Marquardt function was chosen to perform the training.
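The cascade-forward networks and Levenberg-Marquardt training above are MATLAB-specific. As a rough, non-equivalent stand-in, the leave-k-out (k = 1) loop can be sketched with scikit-learn's MLPRegressor; the solver, scaling, and hyperparameters below are assumptions, so the resulting numbers will differ from those reported here.

```python
# Rough stand-in for the leave-k-out (k = 1) evaluation; MLPRegressor
# is not the MATLAB cascade-forward BP NN, so results will differ.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def leave_one_out_predictions(P: np.ndarray, T: np.ndarray,
                              hidden_nodes: int = 3) -> np.ndarray:
    preds = np.empty_like(T, dtype=float)
    for k in range(len(T)):                   # hold one vector back in turn
        train = np.arange(len(T)) != k
        scaler = StandardScaler().fit(P[train])
        net = MLPRegressor(hidden_layer_sizes=(hidden_nodes,),
                           activation="logistic", max_iter=5000,
                           random_state=0)
        net.fit(scaler.transform(P[train]), T[train])
        preds[k] = net.predict(scaler.transform(P[k:k + 1]))[0]
    return preds
```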

For both models, the performance of the BP NN in predicting the degree of tool-wear over time was evaluated by the parameters called mean error (e) and Mean Absolute Percentage Error (MAPE) [%], as given by Eqs. (1)–(2) (Lou and Dong 2015). In Eq. (1), $A_k$ denotes the measured value of $VB_{max}$, $F_k$ the predicted degree of tool-wear, and $n$ the dimension of the training set.

$$e_k = \frac{|F_k - A_k|}{A_k} \quad (1)$$

$$MAPE = \frac{1}{n}\sum_{k=1}^{n} e_k \times 100 \quad (2)$$
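In code, Eqs. (1)–(2) reduce to a few lines; applied to the measured and predicted VBmax values of the ORIGINAL column of Table 5, the function below reproduces the reported MAPE of 6.28 %.

```python
# Direct transcription of Eqs. (1)-(2).
import numpy as np

def mape(A: np.ndarray, F: np.ndarray) -> float:
    """A: measured VBmax values; F: predicted values."""
    e = np.abs(F - A) / A           # Eq. (1): per-sample relative error e_k
    return float(e.mean() * 100.0)  # Eq. (2): mean absolute percentage error

# Example with the Table 5 ORIGINAL column:
A = np.array([0.5250, 0.6000, 0.6400, 0.6750, 0.7100, 0.7500, 0.8200, 0.8600])
F = np.array([0.6342, 0.5489, 0.6082, 0.6687, 0.7335, 0.7729, 0.7881, 0.8189])
print(round(mape(A, F), 2))  # -> 6.28
```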

Table 4 provides an overview of all trained and tested BP NNs for both the N_CF and CFN models, while Tables 5 and 6 show the best performances of the BP NNs. The N_CF and CFN models were trained with the 2.5 and 5 GB image sets using the best BP NN configuration found for the ORIGINAL AEI image set. Figures 13 and 14 show the trends of the BP NN models. As seen from Figs. 13 and 14, the trends in the predicted degree of tool-wear are very similar, and the calculated error is small in comparison with the measured tool-wear. In addition, both models produced satisfactory results in terms of MAPE. In particular, the CFN model exhibits an error smaller than 1 % for the ORIGINAL and 2.5 GB image sets. The error is slightly more than 1 % for the 5 GB image set. It is worth mentioning that even if the input conditions change, the trend in the error for both models remains the same. It means that the models


Fig. 12 General architecture of a BP NN

Table 2 ANN INPUT matrix (P) of the ORIGINAL configuration

INPUT    Time of machining (min)   Cutting speed (m/min)   Feed rate (mm/rev)   Depth of cut (mm)   Avg. no. of white pixels (1s)   No. N of white pixels (1s)   N/(445 × 445) × 100 (%)
AEI020   2    100   0.12   1.5   92    41160   0.208
AEI040   4    100   0.12   1.5   163   72400   0.366
AEI060   6    100   0.12   1.5   173   77109   0.389
AEI080   8    100   0.12   1.5   178   79154   0.400
AEI100   10   100   0.12   1.5   185   82321   0.416
AEI120   12   100   0.12   1.5   160   71201   0.360
AEI140   14   100   0.12   1.5   162   71963   0.363
AEI150   15   100   0.12   1.5   184   82089   0.415

Table 3 Measured crater wear (mm): transposed OUTPUT vector (T') common to all the ANN configurations

VBmax (mm): 0.5250   0.6000   0.6400   0.6750   0.7100   0.7500   0.8200   0.8600

Table 4 ANN performed tests

ANN Model   Image set   Nodes   Method   % Training   k   Tr. set   MAPE (%)
1           ORIGINAL    3       L-k-O    Full-k       1   8         6.28
1           ORIGINAL    14      L-k-O    Full-k       1   8         11.39
2           ORIGINAL    1       T–V–T    70–15–15     #   8         0.76
2           ORIGINAL    12      T–V–T    70–15–15     #   8         2.29
1           2.5 GB      3       L-k-O    Full-k       1   8         5.60
1           2.5 GB      10      L-k-O    Full-k       1   8         26.26
1           2.5 GB      14      L-k-O    Full-k       1   8         16.76
2           2.5 GB      5       T–V–T    70–15–15     #   8         0.61
2           2.5 GB      12      T–V–T    70–15–15     #   8         0.25
2           2.5 GB      1       T–V–T    70–15–15     #   8         0.59
1           5 GB        3       L-k-O    Full-k       1   8         3.34
1           5 GB        14      L-k-O    Full-k       1   8         6.26
2           5 GB        1       T–V–T    70–15–15     #   8         1.05
2           5 GB        12      T–V–T    70–15–15     #   8         4.22

Table 5 Best performance obtained for ANN Model 1 (nodes in hidden layer: ORIGINAL 3, 2.5 GB 3, 5 GB 3)

           VBmax    ORIGINAL             2.5 GB               5 GB
                    NN OUTPUT   e        NN OUTPUT   e        NN OUTPUT   e
AEI020     0.5250   0.6342      0.2080   0.5643      0.0749   0.5458      0.0397
AEI040     0.6000   0.5489      0.0852   0.5196      0.1340   0.5576      0.0706
AEI060     0.6400   0.6082      0.0496   0.5781      0.0967   0.6077      0.0504
AEI080     0.6750   0.6687      0.0093   0.6723      0.0040   0.6883      0.0197
AEI100     0.7100   0.7335      0.0330   0.7319      0.0308   0.6935      0.0233
AEI120     0.7500   0.7729      0.0305   0.7738      0.0318   0.7877      0.0502
AEI140     0.8200   0.7881      0.0389   0.8107      0.0113   0.8238      0.0046
AEI150     0.8600   0.8189      0.0478   0.8045      0.0646   0.8525      0.0087
MAPE (%)            6.28                 5.60                 3.34

Table 6 Best performance obtained for ANN Model 2 (nodes in hidden layer: ORIGINAL 1, 2.5 GB 1, 5 GB 1)

           VBmax    ORIGINAL             2.5 GB               5 GB
                    NN OUTPUT   e        NN OUTPUT   e        NN OUTPUT   e
AEI020     0.5250   0.5249      0.0002   0.5253      0.0005   0.5247      0.0006
AEI040     0.6000   0.5974      0.0043   0.6051      0.0085   0.6007      0.0011
AEI060     0.6400   0.6347      0.0083   0.6316      0.0132   0.6385      0.0024
AEI080     0.6750   0.6737      0.0020   0.6747      0.0005   0.6543      0.0307
AEI100     0.7100   0.7206      0.0149   0.7150      0.0071   0.7121      0.0030
AEI120     0.7500   0.7680      0.0240   0.7519      0.0025   0.7606      0.0141
AEI140     0.8200   0.8209      0.0011   0.8161      0.0047   0.8324      0.0152
AEI150     0.8600   0.8550      0.0058   0.8514      0.0100   0.8747      0.0171
MAPE (%)            0.76                 0.59                 1.05


Fig. 13 ANN Model 1 predicted tool wear vs measured tool wear

Fig. 14 ANN Model 2 predicted tool wear vs measured tool wear

(BP NN) are robust in predicting the degree of tool-wear over time.

DBC-based pattern-recognition

In the previous section, it was shown that some sets of images of tool-wear progression produce a lower error when predicting the degree of tool-wear from a trained neural network (the image sets denoted as ORIGINAL and 2.5 GB), while others produce a relatively high error (i.e., 5 GB). If one could have identified the less favorable set of images (5 GB) before training the artificial neural network, s/he could have saved time. At the same time, the most reliable set of images could have been stored in the tool-wear library for reuse, discarding the unreliable one. An (in silico) DBC can solve


this kind of problem. It has been found that the DBC can distinguish a normal pattern from an abnormal cyclical pattern even though the patterns look alike for a short window size (Ullah 2010). The DBC can also find out the intrinsic similarities of images even though the images look different (Ullah et al. 2014). Therefore, the authors believe that the DBC can be used to identify the similarity/dissimilarity among the image sets denoted as ORIGINAL, 2.5, and 5 GB. To do this, the DBC framework for pattern recognition described in Ullah et al. (2014) has been used. The schematic diagram of the DBC for pattern recognition underlying a tool-wear image is illustrated in Fig. 15. As seen from Fig. 15, first the binary array representing the image is converted into a two-dimensional DNA array. To do this, each pair of consecutive binary digits, 00, 01, 10, and 11, is converted into A, C, G, and T, respectively.
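A minimal sketch of this binary-to-DNA conversion is given below; the non-overlapping pairing of bits is one plausible reading of the row-wise continuous reading-frame, whose exact definition is given in Ullah et al. (2014).

```python
# Binary-to-DNA step of the DBC: bits are read row-wise and each pair of
# consecutive digits is mapped 00->A, 01->C, 10->G, 11->T. Non-overlapping
# pairs are assumed here; see Ullah et al. (2014) for the exact reading-frame.
import numpy as np

BITS_TO_BASE = {(0, 0): "A", (0, 1): "C", (1, 0): "G", (1, 1): "T"}

def binary_to_dna(binary: np.ndarray) -> str:
    bits = binary.flatten()        # row-wise traversal of the 2D binary array
    return "".join(BITS_TO_BASE[(int(bits[i]), int(bits[i + 1]))]
                   for i in range(0, len(bits) - 1, 2))
```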


Fig. 15 DBC Framework for Tool Wear Image Pattern Recognition

Table 7 Genetic code

No   Amino acid       Single-letter symbol   Codons (in terms of three-letter DNA bases)
1    Isoleucine       I                      ATT, ATC, ATA
2    Leucine          L                      CTT, CTC, CTA, CTG, TTA, TTG
3    Valine           V                      GTT, GTC, GTA, GTG
4    Phenylalanine    F                      TTT, TTC
5    Methionine       M                      ATG
6    Cysteine         C                      TGT, TGC
7    Alanine          A                      GCT, GCC, GCA, GCG
8    Glycine          G                      GGT, GGC, GGA, GGG
9    Proline          P                      CCT, CCC, CCA, CCG
10   Threonine        T                      ACT, ACC, ACA, ACG
11   Serine           S                      TCT, TCC, TCA, TCG, AGT, AGC
12   Tyrosine         Y                      TAT, TAC
13   Tryptophan       W                      TGG
14   Glutamine        Q                      CAA, CAG
15   Asparagine       N                      AAT, AAC
16   Histidine        H                      CAT, CAC
17   Glutamic acid    E                      GAA, GAG
18   Aspartic acid    D                      GAT, GAC
19   Lysine           K                      AAA, AAG
20   Arginine         R                      CGT, CGC, CGA, CGG, AGA, AGG
21   None             X                      TAA, TAG, TGA

The reading-frame is the row-wise continuous reading-frame, as prescribed in Ullah et al. (2014). Afterward, each triplet of consecutive DNA bases (e.g., ACG, AAC, GTA, and the like) is converted into the one-letter symbol of an Amino Acid (AA) by using the genetic code to produce a two-dimensional Protein array. The genetic code is listed in

Table 7. Finally, the informational characteristics (IC) of the Protein are analyzed. The IC refer to such features as the absence/presence/abundance/entropy of some selected AAs, the relationships among the likelihoods of some other selected AAs, and the like (Ullah et al. 2014; Ullah 2010).
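The translation step can be sketched directly from Table 7; the non-overlapping grouping of triplets below is an assumption, as the exact reading-frame is defined in Ullah et al. (2014).

```python
# DNA-to-protein translation using the genetic code of Table 7
# (X marks the stop codons). Non-overlapping triplets are assumed.
CODON_TABLE = {
    **dict.fromkeys(["ATT", "ATC", "ATA"], "I"),
    **dict.fromkeys(["CTT", "CTC", "CTA", "CTG", "TTA", "TTG"], "L"),
    **dict.fromkeys(["GTT", "GTC", "GTA", "GTG"], "V"),
    **dict.fromkeys(["TTT", "TTC"], "F"),
    "ATG": "M",
    **dict.fromkeys(["TGT", "TGC"], "C"),
    **dict.fromkeys(["GCT", "GCC", "GCA", "GCG"], "A"),
    **dict.fromkeys(["GGT", "GGC", "GGA", "GGG"], "G"),
    **dict.fromkeys(["CCT", "CCC", "CCA", "CCG"], "P"),
    **dict.fromkeys(["ACT", "ACC", "ACA", "ACG"], "T"),
    **dict.fromkeys(["TCT", "TCC", "TCA", "TCG", "AGT", "AGC"], "S"),
    **dict.fromkeys(["TAT", "TAC"], "Y"),
    "TGG": "W",
    **dict.fromkeys(["CAA", "CAG"], "Q"),
    **dict.fromkeys(["AAT", "AAC"], "N"),
    **dict.fromkeys(["CAT", "CAC"], "H"),
    **dict.fromkeys(["GAA", "GAG"], "E"),
    **dict.fromkeys(["GAT", "GAC"], "D"),
    **dict.fromkeys(["AAA", "AAG"], "K"),
    **dict.fromkeys(["CGT", "CGC", "CGA", "CGG", "AGA", "AGG"], "R"),
    **dict.fromkeys(["TAA", "TAG", "TGA"], "X"),
}

def dna_to_protein(dna: str) -> str:
    return "".join(CODON_TABLE[dna[i:i + 3]]
                   for i in range(0, len(dna) - 2, 3))
```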


Table 8 AA frequencies of the 2D protein array for the AEI ORIGINAL set

AEI ORIGINAL                 AEI020   AEI040   AEI060   AEI080   AEI100   AEI120   AEI140   AEI150
A                            0        9        0        6        5        2        0        0
C                            0        5        0        3        3        2        0        0
D                            0        1        0        5        1        0        1        1
E                            608      567      458      725      730      620      715      715
F                            39284    70655    75583    76840    79865    69387    79783    79783
K                            154228   122764   118408   115345   112520   123696   112895   112895
L                            1513     1537     1370     1961     1920     1664     1808     1808
N                            879      950      836      1170     1059     1042     1013     1013
R                            0        5        0        8        4        0        1        1
T                            879      951      836      1175     1060     1042     1014     1014
V                            0        0        0        0        0        0        0        0
X                            608      567      458      725      729      620      715      715
G, H, I, M, P, Q, S, W, Y    26       14       76       62       129      0        80       80
TOTAL                        198025   198025   198025   198025   198025   198025   198025   198025

Table 9 AA frequencies of the 2D protein array for the AEI 2.5 GB set

AEI 2.5 GB                   AEI020   AEI040   AEI060   AEI080   AEI100   AEI120   AEI140   AEI150
A                            0        0        0        1        5        1        0        0
C                            0        0        0        0        3        0        0        0
D                            0        0        0        2        0        0        0        0
E                            475      388      436      463      525      534      509      509
F                            48804    78484    79054    86795    89204    75341    87216    87216
K                            145569   116073   115147   106967   104489   118503   106788   106788
L                            1226     1168     1300     1442     1488     1393     1370     1370
N                            725      744      788      908      823      859      772      772
R                            0        0        0        3        3        1        0        0
T                            725      744      788      910      823      859      772      772
V                            0        0        0        0        0        0        0        0
X                            475      388      436      463      524      534      509      509
G, H, I, M, P, Q, S, W, Y    26       36       76       71       138      0        89       89
TOTAL                        198025   198025   198025   198025   198025   198025   198025   198025

Accordingly, the IC of the Proteins of all image sets (AEI ORIGINAL, AEI 2.5 GB, and AEI 5 GB, as shown in Figs. 9, 10 and 11, respectively) were determined. It is worth mentioning that all these images underlie the same physical phenomenon (i.e., tool-wear during turning under the same set of cutting conditions) and, thereby, must exhibit the same IC, even though the images look different. Tables 8, 9 and 10 list the frequencies of the AAs of the images shown in Figs. 9, 10 and 11. To obtain these frequencies, the DBC schematically described in Fig. 15 was applied (Ullah et al. 2014).
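The frequency counts of Tables 8–10 amount to a per-symbol tally over each protein array; a minimal sketch:

```python
# Tally of amino-acid frequencies in a protein string, as reported in
# Tables 8-10 (the rare AAs are grouped into a single row, as in the tables).
from collections import Counter

def aa_frequencies(protein: str) -> dict:
    counts = Counter(protein)
    freq = {aa: counts.get(aa, 0) for aa in "ACDEFKLNRTVX"}
    freq["G,H,I,M,P,Q,S,W,Y"] = sum(counts.get(aa, 0) for aa in "GHIMPQSWY")
    return freq
```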


The following IC are observed from the results shown in Tables 8, 9 and 10. Table 11 summarizes the results of the IC; these results underlie the following statements.

Abundance. The AA denoted as K has the highest frequency, followed by those of F and L, respectively, except for the AEI 5 GB case, where F has the highest frequency. Excluding the frequencies of K, F, and L, the frequencies of the other AAs are plotted in Figs. 16, 17 and 18 to understand the other IC, as follows.


Table 10 AA frequencies of the 2D protein array for the AEI 5 GB set

AEI 5 GB                     AEI020   AEI040   AEI060   AEI080   AEI100   AEI120   AEI140   AEI150
A                            0        1        0        1        0        0        1        1
C                            0        1        0        1        0        0        1        1
D                            0        0        0        0        0        1        0        0
E                            210      258      348      462      262      266      236      236
F                            131869   110475   109887   100750   91652    107549   96666    96666
K                            63385    84757    84928    93778    103644   87480    98584    98584
L                            1062     1069     1190     1286     1009     1146     1042     1042
N                            437      396      482      462      449      435      454      454
R                            0        0        0        0        0        1        0        0
T                            437      396      482      462      449      436      454      454
V                            0        0        0        0        0        0        0        0
X                            210      258      348      462      262      266      236      236
G, H, I, M, P, Q, S, W, Y    415      414      360      361      298      445      351      351
TOTAL                        198025   198025   198025   198025   198025   198025   198025   198025

Table 11 Results of the IC of the proteins of the images

Issue: Abundance of AA
  Result expected: the AA denoted as K has the highest frequency, followed by those of F and L
  Results obtained — ORIGINAL: true; 2.5 GB: true; 5 GB: not true

Issue: Presence/absence of some selected AAs
  Result expected: the frequencies of the AAs denoted as G, H, I, M, P, Q, S, V, W, and Y are always zero
  Results obtained — ORIGINAL: not true in some cases; 2.5 GB: not true in some cases; 5 GB: not true in some cases

Issue: Relationships among the likelihoods of some selected amino acids
  Results expected: the frequencies of the AAs denoted as N and T are almost the same; the frequencies of E are not equal to the frequencies of N; fr(D) + fr(N) = fr(T)
  Results obtained — ORIGINAL: true; 2.5 GB: true; 5 GB: not true in some cases

Issue: Entropy of selected AAs
  Result expected: entropy of the AAs in the set Z = {A, C, D, E, L, N, R, T, X}
  Results obtained — ORIGINAL: shows a similar pattern; 2.5 GB: shows a similar pattern; 5 GB: shows a different pattern

Presence/Absence. The frequencies of the AAs denoted as G, H, I, M, P, Q, S, V, W, and Y are almost zero in most of the cases, except for the AEI 5 GB set. It means that these AAs practically do not occur regardless of the image processing technique.

Relationships among the likelihoods of some selected AAs. The frequencies of the AAs denoted as N and T are almost the same. The frequencies of E are not equal to the frequencies of N. The summation of the frequencies of D and N is equal to the frequency of T, i.e., fr(D) + fr(N) = fr(T) holds, for all cases except for three images underlying the 5 GB set.

Entropy. The entropy of the less frequent AAs, i.e., the entropy of the AAs in the set Z = {A, C, D, E, L, N, R, T, X}, has been calculated, as defined in Ullah et al. (2014). The values of entropy shown in Fig. 19 are calculated using the following procedure.

Let Sq be a 2D Protein Array as described in Fig. 15, Z be the set of less-frequent amino acids, i.e., Z = {A, C, D, E, L, N, R, T, X}, and $Z_i$ be the i-th element of Z, i = 1, ..., 9. The average information content or entropy (Shannon 1948; ISO/IEC 1996), denoted as E(Z), is thus given by:

$$E(Z) = \sum_{i=1}^{9} \Pr(Z_i) \times \log_2\!\left(\frac{1}{\Pr(Z_i)}\right) \ \text{Bits}, \quad \text{so that} \quad \sum_{i=1}^{9} \Pr(Z_i) = 1 \quad (3)$$

In (3), $\Pr(Z_i)$ is the probability of $Z_i$ in Sq, given that the universe is Z (not all types of amino acids associated with the 2D Protein Array), and, by definition, $\Pr(Z_i) \times \log_2(1/\Pr(Z_i)) = 0$ if $\Pr(Z_i) = 0$. The maximum possible entropy is equal to 3.1699 Bits, corresponding to all $\Pr(Z_i) = 1/9$, i = 1, ..., 9, which is not the case this time, as understood from the results shown in Fig. 19.
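A direct transcription of Eq. (3), taking a frequency dictionary of the kind produced above; the uniform-probability bound log2(9) = 3.1699 Bits follows immediately:

```python
# Entropy of the less-frequent amino acids, Eq. (3), with probabilities
# taken within the universe Z = {A, C, D, E, L, N, R, T, X} only.
import math

Z = ("A", "C", "D", "E", "L", "N", "R", "T", "X")

def entropy_of_Z(freq: dict) -> float:
    total = sum(freq[aa] for aa in Z)
    probs = (freq[aa] / total for aa in Z)
    # Terms with Pr(Z_i) = 0 contribute nothing, by definition.
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Upper bound: all Pr(Z_i) = 1/9 gives log2(9) = 3.1699 Bits.
```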


Fig. 16 Frequencies of AAs (except K, F and L) in the 2D protein arrays of the tool-wear images for the AEI ORIGINAL data set

Fig. 17 Frequencies of AAs (except K, F and L) in the 2D protein arrays of the tool-wear images for the AEI 2.5 GB data set

Fig. 18 Frequencies of AAs (except K, F and L) in the 2D protein arrays of the tool-wear images for the AEI 5 GB data set

Fig. 19 Entropy of less frequent AAs

Therefore, from the context of the relationships among the likelihoods of some selected AAs and the entropy of the less frequent AAs, the images in the 5 GB set are different from those in the ORIGINAL and 2.5 GB sets. As a result, one can discard the images of the 5 GB set while producing a library of tool-wear images. On the other hand, either the images in the ORIGINAL set or those in the 2.5 GB set can be used while producing the library of tool-wear images. It also implies that, before using the ANN, the images must go through a system that implements the DBC for similarity checking.

Conclusions




This study identifies that two nature-inspired computing methodologies, ANN and DBC, can be used simultaneously in solving the computational problems underlying tool-wear monitoring in material removal processes. The ANN successfully plays its intended role and predicts the degree of tool-wear as a function of machining time. Similarly, the DBC successfully plays its intended role and identifies the similarities/dissimilarities among the images of the worn-zone of the cutting tool. One may verify the similarity (or trustworthiness) of the input information (in this case, the similarity among the images of the worn-zone of a cutting tool) by using the DBC before using that information for training an artificial neural network. This would reduce unnecessary time and volume of information while solving complex computational problems. The proposed paradigms are thus fully applicable to real-time tool-wear monitoring. In the future, such an integration of the ANN (or other nature-inspired computing methodologies) and the DBC might open a door for dealing with other complex computational problems in an effective manner.

References

Azmi, A. I. (2015). Monitoring of tool wear using measured machining forces and neuro-fuzzy modelling approaches during machining of GFRP composites. Advances in Engineering Software, 82, 53–64.

D'Addona, D. M., & Teti, R. (2013). Image data processing via neural networks for tool wear prediction. Procedia CIRP, 12, 252–257.

D'Addona, D., Segreto, T., Simeone, A., & Teti, R. (2011). ANN tool wear modelling in the machining of nickel superalloy industrial products. CIRP Journal of Manufacturing Science and Technology, 4, 33–37.

D'Addona, D. M., Matarazzo, D., Di Foggia, M., Caramiello, C., & Iannuzzi, S. (2015a). Inclusion scraps control in aerospace blades production through cognitive paradigms. Procedia CIRP, 33, 322–327.

D'Addona, D. M., Matarazzo, D., Ullah, A. M. M. S., & Teti, R. (2015b). Tool wear control through cognitive paradigms. Procedia CIRP, 33, 221–226.

ISO/IEC. (1996). Information technology—Vocabulary—Part 16: Information theory. International Standard, ISO/IEC 2382-16:1996 (E/F).

Jemielniak, K., Urbański, T., Kossakowska, J., & Bombiński, S. (2012). Tool condition monitoring based on numerous signal features. The International Journal of Advanced Manufacturing Technology, 59, 73–81.

Kassim, A. A., Mannan, M. A., & Jing, M. (2000). Machine tool condition monitoring using workpiece surface texture analysis. Machine Vision and Applications, 11, 257–263.

Kilundu, B., Dehombreux, P., & Chiementin, X. (2011). Tool wear monitoring by machine learning techniques and singular spectrum analysis. Mechanical Systems and Signal Processing, 25, 400–415.

Lanzetta, M. (2001). A new flexible high-resolution vision sensor for tool condition monitoring. Journal of Materials Processing Technology, 119, 73–82.

Li, X., Djordjevich, A., & Venuvinod, P. K. (2000). Current-sensor-based feed cutting force intelligent estimation and tool wear condition monitoring. IEEE Transactions on Industrial Electronics, 47, 697–702.

Li, X., & Zhejun, Y. (1998). Tool wear monitoring with wavelet packet transform—fuzzy clustering method. Wear, 219, 145–154.

Lou, C. W., & Dong, M. C. (2015). A novel random fuzzy neural networks for tackling uncertainties of electric load forecasting. International Journal of Electrical Power and Energy Systems, 73, 34–44.

Masters, T. (1993). Practical neural network recipes in C++. San Diego: Academic Press.

Murata, M., Kurokawa, S., Ohnishi, O., Uneda, M., & Doi, T. (2012). Real-time evaluation of tool flank wear by in-process contact resistance measurement in face milling. Journal of Advanced Mechanical Design, Systems, and Manufacturing, 6, 958–970.

Nouri, M., Fussell, B. K., Ziniti, B. L., & Linder, E. (2015). Real-time tool wear monitoring in milling using a cutting condition independent method. International Journal of Machine Tools and Manufacture, 89, 1–13.

Pal, S., Heyns, P. S., Freyer, B. H., Theron, N. J., & Pal, S. K. (2011). Tool wear monitoring and selection of optimum cutting conditions with progressive tool wear effect and input uncertainties. Journal of Intelligent Manufacturing, 22, 491–504.

Patra, K., Pal, S. K., & Bhattacharyya, K. (2010). Fuzzy radial basis function (FRBF) network based tool condition monitoring system using vibration signals. Machining Science and Technology, 14, 280–300.

Penedo, F., Haber, R. E., Gajate, A., & del Toro, R. M. (2012). Hybrid incremental modeling based on least squares and fuzzy K-NN for monitoring tool wear in turning processes. IEEE Transactions on Industrial Informatics, 8, 811–818.

Prakash, M., & Kanthababu, M. (2013). In-process tool condition monitoring using acoustic emission sensor in microendmilling. Machining Science and Technology, 17, 209–227.

Ren, Q., Balazinski, M., Baron, L., Jemielniak, K., Botez, R., & Achiche, S. (2014). Type-2 fuzzy tool condition monitoring system based on acoustic emission in micromilling. Information Sciences, 255, 121–134.

Segreto, T., Simeone, A., & Teti, R. (2014). Principal component analysis for feature extraction and NN pattern recognition in sensor monitoring of chip form during turning. CIRP Journal of Manufacturing Science and Technology, 7, 202–209.

Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27, 379–423 and 623–656.

Takuma, M., Shibasaka, T., Yamamoto, A., & Teshima, T. (1994). A study on tool management system in turning (1st report): Estimation of cutting tool life by processing image data with neural network. Journal of the Japan Society for Precision Engineering, 60, 723–727.

Tengeleng, S., & Armand, N. (2014). Performance of using cascade forward back propagation neural networks for estimating rain parameters with rain drop size distribution. Atmosphere, 5, 454.

Teshima, T., Shibasaka, T., Takuma, M., Yamamoto, A., & Iwata, K. (1993). Estimation of cutting tool life by processing tool image data with neural network. CIRP Annals Manufacturing Technology, 42, 59–62.

Teti, R., Jemielniak, K., O'Donnell, G., & Dornfeld, D. (2010). Advanced monitoring of machining operations. CIRP Annals Manufacturing Technology, 59, 717–739.

Ullah, A. M. M. S. (2010). A DNA-based computing method for solving control chart pattern recognition problems. CIRP Journal of Manufacturing Science and Technology, 3, 293–303.

Ullah, A. M. M. S., Arai, N., & Watanabe, M. (2013). Concept map and internet-aided manufacturing. Procedia CIRP, 12, 378–383.

Ullah, A. M. M. S., D'Addona, D., & Arai, N. (2014). DNA based computing for understanding complex shapes. Biosystems, 117, 40–53.

Wang, W. H., Hong, G. S., Wong, Y. S., & Zhu, K. P. (2007). Sensor fusion for online tool condition monitoring in milling. International Journal of Production Research, 45, 5095–5116.

Wang, G., Qian, L., & Guo, Z. (2013a). Continuous tool wear prediction based on Gaussian mixture regression model. The International Journal of Advanced Manufacturing Technology, 66, 1921–1929.

Wang, G., & Cui, Y. (2013b). On line tool wear monitoring based on auto associative neural network. Journal of Intelligent Manufacturing, 24, 1085–1094.

Wang, G., & Feng, X. (2013). Tool wear state recognition based on linear chain conditional random field model. Engineering Applications of Artificial Intelligence, 26, 1421–1427.

Wang, W. H., Wong, Y. S., & Hong, G. S. (2006a). 3D measurement of crater wear by phase shifting method. Wear, 261, 164–171.

Wang, W. H., Hong, G. S., & Wong, Y. S. (2006b). Flank wear measurement by a threshold independent method with sub-pixel accuracy. International Journal of Machine Tools and Manufacture, 46, 199–207.

Xu, J., Yamada, K., Seikiya, K., Tanaka, R., & Yamane, Y. (2014). Comparison of applying static and dynamic features for drill wear prediction. Journal of Advanced Mechanical Design, Systems, and Manufacturing, 8, JAMDSM0056.
