International Journal of Software Engineering and Its Applications Vol. 9, No. 10 (2015), pp. 147-160 http://dx.doi.org/10.14257/ijseia.2015.9.10.15
Texture-based Classification of Workpiece Surface Images using the Support Vector Machine

Mohammed Waleed Ashour1, Alfian Abdul Halin2, Fatimah Khalid3, Lili Nurliyana Abdullah4 and Samy H. Darwish5

1,2,3,4 Faculty of Computer Science and Information Technology, Universiti Putra Malaysia, Serdang, Malaysia
5 Faculty of Engineering, Pharos University, Alexandria, Egypt

[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract
Identifying the specific machining process used to produce a given workpiece surface is very useful in materials inspection. Machine vision can be used to semi- or fully automate this identification by first extracting features from captured workpiece images, followed by analysis using machine learning algorithms. This enables inspection to be performed more reliably with minimal human intervention. In this paper, three visual texture features are investigated for classifying machined workpiece surfaces into six machining process classes: Turning, Grinding, Horizontal Milling, Vertical Milling, Lapping, and Shaping. The features are multi-directional Gabor filtered images, the intensity histogram, and edge feature statistics. Support Vector Machines (SVMs) with different kernel functions are investigated for best classifier performance. Results indicate that the Gabor-based SVM with a linear kernel provides superior performance.

Keywords: Machined surface classification, support vector machine, Gabor filter, machining, surface inspection, computer vision
1. Introduction

Image processing (IP) and computer vision (CV) can be indispensable technologies in manufacturing processes. The applications of IP and CV have been thoroughly discussed in [20], where tasks such as materials inspection, tool condition and defect monitoring, and control process examination are greatly facilitated by such technologies. One important technique of IP and CV is texture analysis of machined workpiece surfaces. Texture features can be extracted and subsequently analyzed to help engineers semi- or fully-automatically identify the specific machining processes used during production. This allows important information to be acquired, such as the machining technique used, the specific tool kinematics, and the identification of possible material defects or anomalies. The authors in [20] further explain texture as repetitive patterns with a set of gradually varying local properties. Texture differences are perceived in object structure, roughness, orientation, smoothness and regularity, which are also commonly unique across processed workpieces [1]. The uses of texture analysis are widespread, such as for classification [2, 3], retrieval [4], and inspection [5]. During inspection, workpieces are normally placed under good lighting conditions, so texture features can be reliably extracted. In [19], for example, edge features were obtained for the task of classifying machined surfaces into Milling or Shaping. The GLCM (gray-level co-occurrence matrix) is another useful texture feature; the work in [23] used it to determine the wear status of rotary drill bits. The same feature was also used in [24] for establishing the relationship between milled material roughness and its
underlying surface texture. In [3], machining processes were categorized via analysis of gray-level histogram information combined with edge statistics from metal workpiece images. It is also important to point out that in most cases, features are fed into a machine learning algorithm to perform the required task (e.g., machining process categorization or analysis). Among the algorithms that have been used are decision trees [11], Artificial Neural Networks (ANNs) [3, 7, 10] and the Support Vector Machine (SVM) [3, 12, 17]. In this paper, we investigate three texture features for classifying workpiece surfaces into the six machining processes of Turning, Grinding, Horizontal Milling, Vertical Milling, Lapping, and Shaping. We first look at the multidirectional Gabor filter, followed by the gray-level intensity histogram and finally edge feature statistics. Each feature is used to independently train a supervised SVM. Average classification accuracy is calculated (as performed in [13]) for performance evaluation. A dataset of 72 machined workpiece images is used, where each image belongs to a specific machining process class. Besides measuring accuracy, we also compare against our previous ANN-based implementation from [14].
2. Features Extraction

Effective machine learning depends on representative and discriminative input features. In this work, three features are investigated, namely (i) Gabor features, (ii) the gray-level intensity histogram, and (iii) edge feature statistics. At this point in time, we are only looking at the strength of each feature independently.

2.1. Gabor Filters

The Gabor filter was used in [25, 26] for texture analysis. A Gabor filter is composed of a sinusoidal plane wave of a particular frequency and orientation, modulated with a Gaussian kernel [3]. The mathematical representation can be written as:

\Psi(x, y; \sigma, u, v) = \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)\exp\big(i(ux + vy)\big)   (1)
where (x, y) are positional representations in the spatial domain, (u, v) the spatial frequencies, and σ is the Gaussian width. G(u, v), which is the Gabor transform of an image fragment I(x, y), is the convolution of a Gabor kernel Ψ with I:

G(u, v) = \iint \Psi(x, y; \sigma, u, v)\, I(x, y)\, dx\, dy   (2)
Features generated from Gabor filtered images are obtained at definable sizes and angles. Hence, the orientation (i.e., θ = tan⁻¹(v/u)) and the image fragment sizes can be customized when performing Gabor filtration. According to [27], a two-dimensional Gabor function g(x, y) and its Fourier transform G(u, v) can be written as:

g(x, y) = \frac{1}{2\pi\sigma_x\sigma_y}\exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2}\right) + 2\pi jWx\right]   (3)

G(u, v) = \exp\left\{-\frac{1}{2}\left[\frac{(u - W)^2}{\sigma_u^2} + \frac{v^2}{\sigma_v^2}\right]\right\}   (4)

where \sigma_u = 1/(2\pi\sigma_x) and \sigma_v = 1/(2\pi\sigma_y).
Gabor functions form a complete, non-orthogonal basis set. When a signal is expanded on this basis, a localized frequency description is obtained. A class of self-similar functions, referred to as Gabor wavelets, is now considered. Let g(x, y) be the mother Gabor wavelet; the self-similar filter dictionary can then be obtained by appropriate dilations and rotations of g(x, y) through the generating function:

g_{mn}(x, y) = a^{-m}\, g(x', y'),\quad a > 1,\ m, n \text{ integers}   (5)

x' = a^{-m}(x\cos\theta + y\sin\theta)   (6)

y' = a^{-m}(-x\sin\theta + y\cos\theta)   (7)

where θ = nπ/K and K is the total number of orientations. The scale factor a^{-m} ensures that the energy remains independent of m. The multidirectional Gabor filter allows filtration to be done at varying angles. Specifically, a sine and a cosine Gabor mask are applied, squared, and summed at each orientation angle. This process is illustrated in Figures 1 and 2. Moreover, by altering the tunable orientation and radial frequency bandwidth parameters, the filter can pass any elliptical region of spatial frequencies to optimally achieve joint resolution in the space and frequency domains. Figure 3 illustrates an image filtered at various orientations in increments of 30 degrees.
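To make this step concrete, the following sketch applies a small Gabor filter bank at 30-degree increments and combines the squared cosine (real) and sine (imaginary) responses at each orientation, as described above. It assumes scikit-image is available; the frequency value and the file name are illustrative rather than the paper's actual settings.

```python
# Minimal sketch of multidirectional Gabor filtering (assumed parameters).
import numpy as np
from skimage import io
from skimage.filters import gabor

def gabor_energy_images(gray, frequency=0.25, angle_step_deg=30):
    """Return one Gabor energy image per orientation (squared sine + cosine responses)."""
    energies = []
    for angle_deg in range(0, 180, angle_step_deg):      # six orientations at 30-degree steps
        theta = np.deg2rad(angle_deg)
        real, imag = gabor(gray, frequency=frequency, theta=theta)
        energies.append(real ** 2 + imag ** 2)           # squared cosine + squared sine response
    return energies

if __name__ == "__main__":
    image = io.imread("shaping_sample.png", as_gray=True)   # hypothetical file name
    responses = gabor_energy_images(image)
    print(len(responses), "filtered images of shape", responses[0].shape)
```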
Figure 1. (a) The Sinusoidal Signal, (b) the Gaussian Kernel, and (c) the Uni-dimensional Gabor Filter
Figure 2. (a) 2-D Sinusoid with a 30◦ Orientation with the x-axis, (b) a Gaussian Kernel, (c) the Resultant Gabor Filter
Figure 3. A Shaping Image Filtered at Various Angles using the Gabor Filter

2.2. Intensity Histogram

An intensity histogram shows the distribution of grayscale pixel values, ranging from 0 to 255. In Figure 4, an example of a 256-bin histogram is shown for a Turning class image. Note that the x-axis is the quantized intensity value, whereas the number of pixels having that value is shown on the y-axis.
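As a rough illustration of this feature, the snippet below builds the 256-bin gray-level histogram of an image with NumPy; the file name is a placeholder.

```python
# Sketch of the 256-bin intensity histogram feature.
import numpy as np
from skimage import io

gray = io.imread("turning_sample.png", as_gray=True)       # hypothetical file, values in [0, 1]
gray_u8 = (gray * 255).astype(np.uint8)                     # rescale to 0..255 intensity levels
hist, _ = np.histogram(gray_u8, bins=256, range=(0, 256))   # pixel count per gray level
feature_vector = hist.astype(float)                          # 256-dimensional feature vector
```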
Figure 4. (a) The Grayscale Image of a Turning-Machined Workpiece from our Dataset, and (b) the Resultant 256-bin Histogram

2.3. Edge Feature Statistics

Rapid intensity changes within an image are detected as edges. Various techniques can be used for edge detection, such as the Prewitt, Canny and Sobel algorithms. For the purposes of our work, the Sobel edge detection algorithm is used, as empirical evaluation shows good results. Using the Sobel method, two separate convolutions with the filters h_x and h_y are performed on the image. Filter h_x responds to vertical edges in the image, whereas h_y responds to horizontal edges. The separate convolution results are subsequently combined into a Sobel edge-intensity map S_mag (Eq. 8), which is then thresholded (e.g., using Otsu's method) to generate the final edge image [18]. An illustration of an edge-detected image is shown in Figure 5.

S_{mag} = \sqrt{S_x^2 + S_y^2}   (8)
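A minimal sketch of Eq. (8) follows, using SciPy's Sobel operators and Otsu thresholding from scikit-image as stand-ins for the authors' implementation; the file name is hypothetical.

```python
# Sketch of the Sobel edge-intensity map and Otsu thresholding.
import numpy as np
from scipy import ndimage
from skimage import io
from skimage.filters import threshold_otsu

gray = io.imread("workpiece.png", as_gray=True)   # hypothetical file name
sx = ndimage.sobel(gray, axis=1)                  # responds to vertical edges (h_x)
sy = ndimage.sobel(gray, axis=0)                  # responds to horizontal edges (h_y)
s_mag = np.sqrt(sx ** 2 + sy ** 2)                # Eq. (8)
edge_image = s_mag > threshold_otsu(s_mag)        # final binary edge map
```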
Figure 5. (a) Original Image, and (b) the Sobel Edge Image
3. Feature Dimensionality Reduction

3.1. Gabor Filtered Image Features

For each image, the probability density of the intensity level occurrences is calculated. This is done by dividing the image's intensity histogram h(i) by the overall pixel count NM:

p(i) = \frac{h(i)}{NM}, \quad i = 0, 1, \ldots, G-1   (9)

Resultantly, the following feature set can be derived that describes the statistical properties of the images [10]. These can be calculated via Equations 10-18.

Mean: \mu = \sum_{i=0}^{G-1} i\, p(i)   (10)

Variance: \sigma^2 = \sum_{i=0}^{G-1} (i - \mu)^2\, p(i)   (11)

Energy: \sum_{i=0}^{G-1}\sum_{j=0}^{G-1} p(i, j)^2   (12)

Entropy: -\sum_{i=0}^{G-1}\sum_{j=0}^{G-1} p(i, j)\log_2 p(i, j)   (13)

Contrast: \sum_{i=0}^{G-1}\sum_{j=0}^{G-1} (i - j)^2\, p(i, j)   (14)

Homogeneity: \sum_{i=0}^{G-1}\sum_{j=0}^{G-1} \frac{p(i, j)}{1 + (i - j)^2}   (15)

Correlation: \frac{\sum_{i=0}^{G-1}\sum_{j=0}^{G-1} ij\, p(i, j) - \mu_x\mu_y}{\sigma_x\sigma_y}   (16)

Skewness: \frac{1}{\sigma^3}\sum_{i=0}^{G-1} (i - \mu)^3\, p(i)   (17)

Kurtosis: \frac{1}{\sigma^4}\sum_{i=0}^{G-1} (i - \mu)^4\, p(i) - 3   (18)
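The sketch below illustrates how the descriptors of Eqs. (9)-(18) could be computed: the first-order measures come directly from the normalized histogram p(i), while a gray-level co-occurrence matrix is used here as a stand-in for the joint probabilities p(i, j). The 64-level quantization and the (distance = 1, angle = 0) offset are assumptions, not parameters reported in the paper.

```python
# Sketch of the statistical descriptors under assumed GLCM settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def first_order_stats(gray_u8):
    hist, _ = np.histogram(gray_u8, bins=256, range=(0, 256))
    p = hist / hist.sum()                               # Eq. (9)
    i = np.arange(256)
    mean = np.sum(i * p)                                # Eq. (10)
    var = np.sum((i - mean) ** 2 * p)                   # Eq. (11)
    skew = np.sum((i - mean) ** 3 * p) / var ** 1.5     # Eq. (17)
    kurt = np.sum((i - mean) ** 4 * p) / var ** 2 - 3   # Eq. (18)
    return mean, var, skew, kurt

def second_order_stats(gray_u8, levels=64):
    q = (gray_u8 // (256 // levels)).astype(np.uint8)   # quantize gray levels
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))     # Eq. (13)
    return {
        "energy": graycoprops(glcm, "energy")[0, 0],          # related to Eq. (12)
        "entropy": entropy,
        "contrast": graycoprops(glcm, "contrast")[0, 0],      # Eq. (14)
        "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],# Eq. (15)
        "correlation": graycoprops(glcm, "correlation")[0, 0],# Eq. (16)
    }
```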
The function p(i) refers to the probability of the i-th intensity value occurring in an image, where i covers all the intensity levels in the image. μ_x, μ_y and σ_x, σ_y are the means and standard deviations of the row and column sums of the matrix, respectively. The mean µ is the average intensity level of the image, while the variance σ² defines the intensity variation around µ. Skewness determines the symmetry of the intensity values in the histogram (negative values indicate that values lie more towards the left, positive values towards the right; symmetry about the mean gives a skewness of zero). Kurtosis is a measure of the flatness of a histogram, while entropy is a measure of its uniformity. The local variation in gray levels is determined by Contrast, which is also the linear dependency factor of neighboring pixels. Homogeneity measures the uniformity of non-zero entries in the image; it acts, in a sense, as a counterpart to Contrast, where a lower homogeneity weight corresponds to a higher contrast weight. Correlation measures the gray-level linear dependence between pixels at specified positions relative to each other. Entropy represents a measure of spatial 'chaos' in texture, where a texture is considered more statistically chaotic when entropy is higher (lower values indicate smoother texture). Energy measures the local homogeneity of texture; higher energy values indicate higher texture homogeneity. The other remaining features are self-explanatory, namely the sum, the minimum, the maximum, the range and the median.

3.2. Intensity Histogram and Edge Feature Statistics

A 256-bin histogram is generated for each image, resulting in a 256-dimensional feature vector per image for each of the six classes, with 12 samples per class. The Sobel edge feature statistics for each image are computed from the 1024 x 1280 edge image, where the column-wise mean and variance are used for representation. The mean vector and the corresponding variance vector of each edge image are eventually concatenated, producing a feature vector whose dimensionality is 2,560 (i.e., 2 x 1,280).

3.3. Principal Component Analysis

Principal component analysis (PCA) has been widely used for feature dimensionality reduction. It has the advantage of significantly reducing data dimensionality without losing the essential variability present in the original data representation. With PCA, the original correlated variables are transformed into
a new set of uncorrelated variables, the principal components, which are then ordered so that the first few preserve most of the essential data variation. For further details regarding PCA, readers are directed to the works in [7, 10]. After performing PCA, both feature sets are reduced to 30 features each (i.e., 30 for the intensity histogram, and 30 for the edge statistics).
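A minimal sketch of this reduction step with scikit-learn is shown below; the random array merely stands in for the 72 extracted feature vectors, and 30 components are kept as stated above.

```python
# Sketch of the PCA reduction to 30 components (placeholder data).
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(72, 2560)                 # stand-in for the 72 edge-statistics vectors
pca = PCA(n_components=30)
X_reduced = pca.fit_transform(X)             # shape (72, 30)
print(pca.explained_variance_ratio_.sum())   # fraction of variance preserved by 30 components
```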
4. Supervised Learning: The Support Vector Machine

Given a labeled set of training instances and their respective class labels, which can be represented as {(x_1, y_1), ..., (x_N, y_N)}, a supervised machine learning algorithm is supposed to learn a function f: X → Y (X and Y are the input and output spaces, respectively) that accurately maps input to output for unseen data [16]. The Support Vector Machine (SVM) is such an algorithm; it is based on statistical learning theory and is used for solving pattern recognition problems [15]. In the SVM, a pattern x is represented in an n-dimensional space by its features x_1, ..., x_n. The final output of the SVM is a hyperplane in this space that effectively separates the features into two classes, i.e., ω_1 and ω_2. The hyperplane is optimal when there is a maximum margin between itself and the nearest training examples of each class (the support vectors). According to [17], the simple mathematical formulation can be written as:

w^T x + b = 0   (19)

with w being the weight vector and b a threshold. SVMs were originally used for linear two-class classification problems. Their use has, however, been extended to nonlinear multi-class classification problems by transforming the data into a higher-dimensional space and utilizing the kernel trick.
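For reference, the three kernel functions compared later in Section 7 can be written as in the sketch below; the degree, coef0 and gamma values are illustrative only, not the settings used in the experiments.

```python
# Sketch of the linear, polynomial and RBF kernel functions (assumed parameters).
import numpy as np

def linear_kernel(x, z):
    return np.dot(x, z)

def polynomial_kernel(x, z, degree=3, coef0=1.0):
    return (np.dot(x, z) + coef0) ** degree

def rbf_kernel(x, z, gamma=0.1):
    return np.exp(-gamma * np.sum((x - z) ** 2))
```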
5. The Dataset

The workpieces in our dataset were produced by six different machining processes (treated as classes in this work), namely Turning, Grinding, Horizontal Milling, Vertical Milling, Lapping, and Shaping (Figure 6). For the purpose of training the SVM classifier, 12 samples are taken from each class. Note that, although samples originate from the same class and have relatively similar texture, small variations exist because different machining parameters such as feed rate, cutting speed, and cut depth were used during production [10]. In all, a total of 72 sample images are collected (12 samples/class x 6 classes). To acquire the images, a workpiece is placed under an optical microscope. In this work, an Olympus BX51M metallurgical microscope equipped with a CCD (charge-coupled device) digital camera was used. Each workpiece image was subsequently converted to grayscale with a pixel resolution of 1024 x 1280. The image acquisition setup diagram is shown in Figure 7. Note that all the captured images are put through contrast adjustment.
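A sketch of this per-image preparation is given below. Since the exact contrast adjustment used is not specified, percentile-based contrast stretching is assumed here as a stand-in, and the file name is hypothetical.

```python
# Sketch of the per-image preprocessing: grayscale at 1024 x 1280 plus contrast adjustment.
import numpy as np
from skimage import io, exposure, transform

img = io.imread("workpiece_raw.png", as_gray=True)          # hypothetical file name
img = transform.resize(img, (1024, 1280), anti_aliasing=True)
p2, p98 = np.percentile(img, (2, 98))
img = exposure.rescale_intensity(img, in_range=(p2, p98))   # assumed contrast adjustment
```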
Figure 7. Setup for Image Acquisition
6. Workflow and Experimental Setup

Figure 8 shows the flow of the proposed work. The dataset was equally divided: six random samples from each class were used for training, whereas the remainder were used for testing. The SVM was trained using Platt's Sequential Minimal Optimization (SMO) algorithm. SMO was chosen as it proved to be effective in our previous work [3]. The SVM multi-class classification employed the one-versus-one approach. Since the SVM is originally meant for binary classification tasks, for our case with k = 6 classes, k(k-1)/2 = 15 binary SVMs were trained. During testing, if the decision function for the pair (i, j) classifies a vector into the i-th class, then the vote for that class is incremented by one; otherwise, the vote for the j-th class is incremented by one. At the very end, the final class label is determined by the majority vote.
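The sketch below mirrors this setup with scikit-learn: a stratified 50/50 split (six training and six test samples per class) and SVMs with the three kernels of Section 7. SVC's libsvm backend, which uses an SMO-type solver and one-versus-one voting, is assumed here as a stand-in for the authors' SMO-trained SVMs; the feature matrix is a random placeholder.

```python
# Sketch of the train/test split and one-versus-one SVM evaluation (placeholder data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X = np.random.rand(72, 30)                     # stand-in for the PCA-reduced features
y = np.repeat(np.arange(6), 12)                # six classes, 12 samples each
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=0)

for kernel in ("linear", "poly", "rbf"):
    clf = SVC(kernel=kernel, decision_function_shape="ovo")   # one-versus-one voting
    clf.fit(X_tr, y_tr)
    print(kernel, accuracy_score(y_te, clf.predict(X_te)))
```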
Figure 8. The General Workflow
7. Results and Discussion

Three SVM kernels were investigated, namely the Linear, Polynomial and Radial Basis Function (RBF) kernels. We refer readers to Table 1, where it is clear that the linear kernel achieved superior classification accuracy with the PCA-reduced multi-directional Gabor features (100%). In Table 2, the linear kernel surpassed the other configurations for the PCA-reduced histogram features (97.2%). Similarly, in Table 3, the linear kernel was superior compared to the others for the PCA-reduced edge statistics features (97.2%). A graphical comparison is further given in Figure 9 to provide better visualization.

Table 1. Different Kernels-SVM Classification Accuracy (%) for the Gabor Filter Method
Class Name    Linear    Polynomial    RBF
Turning       100       100           83.3
Grinding      100       100           100
H-Milling     100       100           100
V-Milling     100       100           100
Lapping       100       100           100
Shaping       100       100           100
Total         100       100           97.2
Table 2. Different Kernels-SVM Classification Accuracy (%) for the Histogram Method

Class Name    Linear    Polynomial    RBF
Turning       100       83.3          83.3
Grinding      100       100           83.3
H-Milling     100       100           100
V-Milling     100       100           100
Lapping       100       100           83.3
Shaping       83.3      83.3          83.3
Total         97.2      94.4          88.8
Table 3. Different Kernels-SVM Classification Accuracy (%) for the Edge Detection Method

Class Name    Linear    Polynomial    RBF
Turning       100       100           83.3
Grinding      100       100           83.3
H-Milling     100       100           100
V-Milling     100       100           100
Lapping       100       100           100
Shaping       100       83.3          83.3
Total         100       97.2          91.6
Table 4. The ANN Configuration in Work [14]

ANN Type                   Multilayer Perceptron
Training algo.             Back Propagation
# Input layers             30
# Hidden layers            1
Activation functions       Tan-sigmoid (between input & hidden), Pure-line (between hidden & output)
Learning rate              Adaptive (with momentum)
Max. epochs                10,000
Error goal                 0.0001
Classification tolerance   ±0.005
Figure 9. SVM Classification with Different Kernels

The performance of the SVM (all kernels) was compared with our previous work solving the same classification task using an Artificial Neural Network (ANN) [10, 14]. With the training and testing datasets being the same, and the ANN parameters as given in Table 4, the results in Figure 10 show that the SVM (linear kernel) outperforms the ANN for all three features. In fact, the ANN shows lower accuracy than the SVM-polynomial kernel for the PCA-reduced histogram features. There is, however, no difference between the ANN and the SVM-polynomial for the PCA-reduced edge features. Overall, the SVM with the RBF kernel reports the lowest accuracy compared to the other classifier configurations, and for both feature types as well.
Figure 10. Comparison between the SVM and ANN

Two observations are made from the results, namely (i) the SVM with the linear kernel provides the best accuracy for all three features over all other classifier configurations (including the ANN), and (ii) SVM classification using Gabor features gives the best results for all kernels. At this stage of the work, it appears that the SVM-linear kernel with Gabor features gives the best classification performance. A comparison of training time was also performed: all
the SVMs required roughly half the training time of the ANN. This is another factor favoring the SVM. However, due to the lack of data (72 images altogether), a concrete conclusion cannot yet be established on which configuration would yield optimal results.
8. Conclusion

In this paper, supervised SVM classifiers with the Linear, Polynomial, and RBF kernels were presented for the classification of workpiece images into their respective machining processes. Three texture features were considered, namely the multidirectional Gabor filter, the intensity histogram, and edge feature statistics. A dataset of 72 grayscale images was used for training and testing, and classification based on the multidirectional Gabor features with the SVM-linear kernel provided superior results. A comparison with the ANN classifier [14] was also done using the same training and testing data. The results favored the SVM (all kernels) both in terms of accuracy and training time. Our technique has shown improvement in classification accuracy and computational time compared to previous experiments [10, 14]. These improvements might be due to the nature of the multiple-orientation features from the Gabor filters (six different orientations). However, although classification results are promising, we have yet to test on a larger dataset. With the possibility of higher texture variability, further experimentation is required, with possible tweaks to feature utilization and classifier configurations.
Acknowledgments The publication of this work was made possible by funding provided by the Malaysian Ministry of Education through the Fundamental Research Grant Scheme (Cost center 5524665 / Project number 08-02-14-1581FR). Gratitude also goes out to Prof. Dr. Hamidah Ibrahim and also the officers at the Finance Department, Faculty of Computer Science & Information Technology, UPM.
References

[1] C. Chi-hau, L. F. Pau and P. S. Wang, Editors, "Handbook of pattern recognition and computer vision", Imperial College Press, (2010).
[2] R. M. Haralick, K. Shanmugam and I. Dinstein, "Textural Features for Image Classification", IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-3, no. 6, (1973), pp. 610-621.
[3] M. W. Ashour, M. F. Hussin and K. M. Mahar, "Supervised Texture Classification Using Several Features Extraction Techniques Based on ANN and SVM", Proceedings of the IEEE/ACS International Conference on Computer Systems and Applications, (2008), pp. 567-574.
[4] S. Selvarajah and S. R. Kodituwakku, "Analysis and Comparison of Texture Features for Content Based Image Retrieval", International Journal of Latest Trends in Computing, vol. 2, no. 1, (2011), pp. 108-113.
[5] M. A. Younes, S. Darwish and M. El-Sayed, "Online Quality Monitoring of Perforated Steel Strips Using An Automated Visual Inspection (AVI) System", Proceedings of the IEEE International Conference on Quality and Reliability, (2011), pp. 575-579.
[6] M. Unser, "Texture Classification and Segmentation Using Wavelet Frames", IEEE Transactions on Image Processing, vol. 4, no. 11, (2002), pp. 1549-1560.
[7] H. Uğuz, "A Biomedical System Based on Artificial Neural Network and Principal Component Analysis for Diagnosis of the Heart Valve Diseases", Journal of Medical Systems, vol. 36, no. 1, (2012), pp. 61-72.
[8] A. Gebejes and R. Huertas, "Texture Characterization Based on Grey-Level Co-occurrence Matrix", International Conference on Information and Communication Technologies, (2013), pp. 375-378.
[9] K. Houari, Y. Chahir and M. Kholladi, "Spectral Clustering and Dimensionality Reduction Applied to Content Based Image Retrieval with Hybrid Descriptors", International Review on Computers and Software, vol. 4, no. 6, (2009), pp. 633-639.
[10] M. W. Ashour, F. Khalid, L. N. Abdullah and A. A. Halin, "Artificial Neural Network-based Texture Classification Using Reduced Multidirectional Gabor Features", International Review on Computers and Software, vol. 9, no. 6, (2014), pp. 1007-1016.
[11] M. A. Friedl and C. E. Brodley, "Decision Tree Classification of Land Cover From Remotely Sensed Data", Journal of Remote Sensing of Environment, vol. 61, no. 3, pp. 399-409.
[12] E. Esraa, E. Nashwa, M. M. M. Fouad, J. Platoš, A. E. Hassanie and A. M. M. Hussein, "Innovations in Bio-inspired Computing and Applications", edited by A. Abraham, P. Krömer and V. Snášel, Springer International Publishing, Switzerland, vol. 237, (2014), pp. 175-186.
[13] A. Kassner and R. E. Thornhill, "Texture Analysis: A Review of Neurologic MR Imaging Applications", The American Journal of Neuroradiology, vol. 31, no. 5, (2010), pp. 809-816.
[14] M. W. Ashour, F. Khalid and M. Al-Obaydee, "Supervised ANN Classification for Engineering Machined Textures Based on Enhanced Features Extraction and Reduction Scheme", Journal of Artificial Intelligence & Computer Science, vol. 1, (2013), pp. 71-80.
[15] V. Vapnik, "The Nature of Statistical Learning Theory", Springer Science & Business Media, (2013).
[16] F. Camastra and A. Vinciarelli, "Machine Learning for Audio, Image and Video Analysis: Theory and Applications (Advanced Information and Knowledge Processing)", Springer, (2008).
[17] C. G. M. Snoek and M. Worring, "Time Interval based Modelling and Classification of Events in Soccer Video", Proceedings of the 9th Annual Conference of the Advanced School for Computing and Imaging (ASCI), Heijen, Netherlands, (2003).
[18] U. Y. Desai, M. M. Mizuki, I. Masaki and B. K. P. Horn, "Edge and mean based image compression", Technical Report 1584 (A.I. Memo), Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA, (1996) November.
[19] M. A. U. Patwari, D. A. Muammer, M. S. I. Chowdhury and N. A. Chowdhury, "Identifications of Machined Surfaces using Digital Image Processing", International Journal of Engineering, vol. 1, (2012), pp. 213-218.
[20] S. Dutta, K. P. Surjya and S. Ranjan, "Modern Mechanical Engineering", Springer, Berlin Heidelberg, (2014), pp. 367-410.
[21] A. Zawada-Tomkiewicz, "Estimation of surface roughness parameter based on machined surface image", Metrology and Measurement Systems, vol. 17, no. 3, (2010), pp. 493-504.
[22] V. N. Niola and G. Quaremba, "A problem of emphasizing features of a surface roughness by means the Discrete Wavelet Transform", Journal of Materials Processing Technology, vol. 164-165, (2005), pp. 1410-1415.
[23] S. Omid, R. Jamal, A. Mohammad and R. T. Seyed, "Use of digital image processing techniques for evaluating wear of cemented carbide bits in rotary drilling", Automation in Construction, vol. 44, (2014), pp. 140-151.
[24] D. Nathan, G. Thanigaiyarasu and K. Vani, "Study On the Relationship between Surface Roughness of AA6061 Alloy End Milling and Image Texture Features of Milled Surface", Procedia Engineering, vol. 97, (2014), pp. 150-157.
[25] M. R. Turner, "Texture discrimination by Gabor functions", Biological Cybernetics, vol. 55, no. 2-3, (1986), pp. 71-82.
[26] M. Clark, A. Bovik and W. Geisler, "Texture segmentation using Gabor modulation/demodulation", Pattern Recognition Letters, vol. 6, no. 4, pp. 261-267.
[27] V. S. Vyas and P. Rege, "Automated texture analysis with gabor filter", Journal of Graphics, Vision and Image Processing, vol. 6, no. 1, (2006), pp. 35-41.
Authors

Mohammed Waleed Ashour obtained his BSc in Computer Engineering from the Near East University, Cyprus, in 2003, and MSc from the Arab Academy for Science, Technology and Maritime Transport, Egypt, in 2007. He is currently a PhD student in the Faculty of Computer Science and Information Technology, Universiti Putra Malaysia. His research revolves around the application of computer vision to solve manufacturing and production related issues.
Alfian Abdul Halin is a senior lecturer at the Multimedia Department, Faculty of Computer Science & Information Technology, Universiti Putra Malaysia. He obtained his Masters of Multimedia Computing from Monash University, Australia (2004), and PhD in Computer Science from Universiti Sains Malaysia (2011). His research interests are mainly in image/video processing and machine learning applications.
Fatimah Khalid obtained her PhD in Computer Science from Universiti Kebangsaan Malaysia in 2008. She is currently an Associate Professor and Head of the Multimedia Department at the Faculty of Computer Science and Information Technology, Universiti Putra Malaysia. Her main research areas are image processing and computer vision applications.
Lili Nurliyana Abdullah is an Associate Professor at the Multimedia Department, Faculty of Computer Science & Information Technology, Universiti Putra Malaysia. She received her PhD in Information Science from Universiti Kebangsaan Malaysia in 2007. Her research interests include multimedia systems and image/video retrieval.
Samy H. Darwish is an affiliate instructor at the Faculty of Engineering, Pharos University in Alexandria, Egypt. He obtained his BSc in Communications and Electro-Physics in 1983 from Alexandria University. He then obtained both his MSc and PhD in Electrical Engineering from the same university in 2000 and 2007, respectively. Dr Samy's research interests include, but are not limited to, signal/image processing applications and artificial intelligence.