Architecture of Noninvasive Real Time Visual Monitoring System for Dial Type Measuring Instrument

Zainul Abdin Jaffery and Ashwani Kumar Dubey
Abstract— Noninvasive real time visual monitoring systems are increasingly in demand in the field of automation because of their high reliability, robustness, fast execution time, portability, and flexibility. In this paper, a novel detection architecture is developed to identify the geometrical and statistical features of the pointer's position from the captured image of indicating type meters in real time. In our view, a real time visual monitoring system has the following stages: 1) image acquisition; 2) image pre-processing; 3) segmentation; 4) feature extraction; 5) feature matching; and 6) display of the result in a graphical user interface window along with the controlling decision. The geometrical, statistical, and wavelet-based image features are used to recognize the indicated value through a feature matching algorithm. The system controller then compares the recognized value with the set value of each parameter and, if it is found beyond the specified limit, generates alarms and controlling or tripping signals for the final control elements.

Index Terms— Dynamic sliding window algorithm (DSWA), fast wavelet transform, real time visual monitoring system (RTVMS), region of interest (ROI).
I. INTRODUCTION
WITH fast industrial growth, it has become necessary to continuously monitor the condition of electrical machines and systems to avoid any unscheduled breakdown during the production process. To protect plants, switchgear systems, and alternators, noninvasive sensors and controllers play an important role at the time of faults or catastrophes. In recent years, more attention has been focused on the monitoring of various parameters, such as the voltage, current, and temperature of electrical systems, from the control room using remote sensing methods [1]. Electrical machines are critical components in industrial processes, and a machine failure may cause an unexpected interruption of the production process, with consequences for cost, product quality, and safety. Measuring instruments such as voltmeters, ammeters, and temperature gauges must therefore be watched continuously to avoid any devastation in the process [1], [2].
In most process industries, analog meters and gauges are still used because human inspectors find it easier to read a pointer deflection than a digital display. Hence, it is essential to replace the human inspector with a real time visual monitoring system (RTVMS) having knowledge based artificial intelligence for the supervision and monitoring of electrical parameters.

II. RELATED WORK

Monitoring of electrical parameters is valuable for system analysis, tolerance analysis, predictive analysis, fault diagnosis, and the study of after effects. Electrical machine failure can cause a breakdown of industrial plants and consequently affects the cost, safety, and quality of products. Noninvasive techniques, such as the Extended Park's Vector Approach [2], are among the computationally intensive techniques used to diagnose electrical failures in induction motors. The variation in stator current depends on the electromagnetic torque developed by the motor, and the measured current can therefore be interpreted in various fault detection techniques [3]. The monitoring of voltage variation is equally useful for detecting flaws either in the motor or in the connected systems [4]. Current and voltage patterns have been analyzed with various artificial intelligence techniques, viz. fuzzy-wavelet, neuro-fuzzy, and neuro-wavelet, to identify faults in voltage source inverters (VSI) [5]. PC based and DSP based remote monitoring of electrical parameters is also popular in modern measuring approaches [1], [6].

The development of visual inspection systems for the automatic reading of analog gauges or meters started three decades ago, but a robust, economic, and user friendly automatic analog meter reader still finds scope in the field. The various methods and techniques already developed for the automatic reading of dial type instruments were based only on the geometrical layout of the dial of the meter or gauge [7]–[13]. Antoniadis et al. [14] have used LabVIEW and PXI for the monitoring of mechanical parameters. Lan et al. [15] have developed a DSP based calibration system for analog gauges. Wavelet analysis and Hough transform based inspection systems created room for the application of soft computing techniques in the monitoring, calibration, or inspection of gauges [16], [17]. Jaffery and Dubey [18] have designed a real-time visual inspection system for the calibration of
mechanical gauges using artificial intelligence. The application of visual inspection and monitoring systems is not limited to any one field of engineering and science. It finds scope in manufacturing [19]–[21], defect detection and classification [22]–[25], surface inspection [26], [27], traffic sign inspection [28], and in medical, aviation, space, and agricultural applications for monitoring, controlling, and analysis [29]–[31]. A number of techniques are used for the design of visual monitoring systems, such as the wavelet transform, neural networks, fuzzy logic, and morphological reconstruction filters [32]–[38].

III. REAL TIME VISUAL MONITORING SYSTEM

The block diagram of the proposed real time visual monitoring system (RTVMS) for the monitoring of electrical parameters is shown in Fig. 1. It mainly includes: a) a software module containing the algorithms for image acquisition, processing, analysis, feature extraction, recognition, and display of the result; b) a hardware module with an interface to the parallel port of the computer and a control panel where all the meters and relays are available for controlling; and c) an image acquisition module with illumination control to capture the images according to the algorithm. The reliability of the entire system depends primarily upon this module.

Various types of analog ammeters and voltmeters with different scales and ranges are shown in Fig. 2. All these instruments have a semicircular curved scale, so there is a need to develop a consistent algorithm that can identify the indicated value accurately on the nonlinear scale. A real time visual monitoring system has the following basic stages of image processing: image acquisition, image preprocessing, segmentation, feature extraction, feature matching, and display of the result in a GUI window along with the controlling decision [18]. The methods applied to execute the above stages are as follows.

A. Image Acquisition

Image acquisition techniques play an important role in a visual monitoring system (VMS). The procedure and quality of image acquisition enable the RTVMS to work proficiently and precisely. The quality of the image depends upon the type of camera, the position of the camera, the number of cameras, and the illumination system [26]. The illumination techniques help in exploring the characteristics of the images under test. In this approach, webcams [18] with 5.0 megapixel resolution and LED based illumination are used.

B. Image De-Noising

The wavelet transform (WT) is a powerful tool for signal and image processing. It is applied in many scientific and engineering fields such as signal preprocessing, image compression, image fusion, computer graphics, computer vision, feature calculation, image inspection, and pattern recognition [39]. The fundamental concept in the multiresolution wavelet transform is to find average features and details of the image signal using scalar products with scaling signals and wavelets.
Fig. 1. Block diagram of the proposed RTVMS.

Fig. 2. Dials of indicating type ammeters and voltmeters.
The noise can be discriminated through the multi-resolution decomposition into different levels. An image is usually corrupted by various noise sources during acquisition and processing, and this degrades the performance of a real time visual monitoring system. The de-noising process therefore aims to remove the noise while retaining the quality of the processed image. Let the captured image A = {a[x, y] | x, y = 1, . . . , N} be corrupted by additive noise, b[x, y] = a[x, y] + c[x, y], x, y = 1, . . . , N, where {c[x, y] | x, y = 1, . . . , N} is zero mean Gaussian (ZMG) noise with variance σ². De-noising the corrupted image B = {b[x, y]} means estimating an image Ã = {ã[x, y]} that is close to the original image A by minimizing the mean squared error (MSE) between Ã and A, given by:

MSE(A, Ã) = (1/N²) Σ_{x,y=1}^{N} (a[x, y] − ã[x, y])²          (1)
The three main steps involved in the wavelet based de-noising process are shown in Fig. 3: wavelet transform, thresholding, and inverse wavelet transform. The type of wavelet and the thresholding technique decide the efficiency of the de-noising process. In this paper, the de-noising of the captured images is done using the Daubechies wavelet (db10) with fixed form soft thresholding and an unscaled white noise model. The original, de-noised, and residual images are shown in Fig. 4.
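The de-noising stage can be illustrated with a short sketch. The original system was implemented in MATLAB, so the following Python fragment (using NumPy and PyWavelets) is only an approximation of the procedure described above; the function names, the decomposition depth, and the exact threshold rule are assumptions rather than the authors' implementation.

```python
import numpy as np
import pywt


def denoise_image(noisy, wavelet="db10", level=2):
    """Sketch of Fig. 3: wavelet transform, soft thresholding, inverse transform."""
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    # Fixed-form (universal) threshold, scaled here by the noise estimate.
    thr = sigma * np.sqrt(2.0 * np.log(noisy.size))
    new_coeffs = [coeffs[0]]  # keep the approximation band unchanged
    for cH, cV, cD in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, thr, mode="soft")
                                for c in (cH, cV, cD)))
    denoised = pywt.waverec2(new_coeffs, wavelet)
    return denoised[:noisy.shape[0], :noisy.shape[1]]


def mse(original, estimate):
    """Mean squared error of (1) between the original and de-noised images."""
    return np.mean((original.astype(float) - estimate.astype(float)) ** 2)
```

With such a sketch, the residual image of Fig. 4(c) is simply the difference between the noisy and de-noised images.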
Fig. 3. Image de-noising steps using the wavelet approach: noisy image B → wavelet transform (WT) → thresholding (hard/soft) → inverse wavelet transform (IWT) → de-noised image.

Fig. 4. Image de-noising. (a) Original noisy image. (b) De-noised image. (c) Residuals of the de-noised image. (d) Statistics of the de-noised image.

Fig. 5. Selection of ROI for the dynamic sliding window algorithm.

Fig. 6. Art map of the ROI-based dynamic sliding window technique.
C. Image Segmentation

Segmentation is a process in which features or regions having homogeneous characteristics are identified and grouped together. Image segmentation methods involve edge detection, boundary detection, statistical classification, region detection, and thresholding, or a combination of these techniques. Numerous image segmentation techniques are available for segmenting desired portions or objects from color, intensity, and black and white images. These techniques can be broadly classified into two main categories as follows.

1) Supervised Image Segmentation Techniques: These techniques were developed using fuzzy logic, neural networks (NN), genetic algorithms (GA) [40], [41], pixel-based texture classification [42], thresholding [43], texture and boundary encoding based segmentation (TBES) [44], graph based methods [45], isoperimetric graph partitioning [46], and histogram thresholding using fuzzy sets [47]. All these techniques come under the automatic image segmentation application category.

2) Unsupervised Image Segmentation Techniques: These techniques have been developed using pyramidal image segmentation with the fuzzy c-means clustering algorithm [48], dynamic region growth and multi-resolution merging [49], the hybrid graph model (HGM) [50], a recursive clustering algorithm [51], and multi-resolution techniques [52].

All the above techniques have their own limitations for real time applications, where low computational cost and short execution time are prime requisites. Therefore, a novel ROI based sliding window technique has been developed for the segmentation of the desired portion from the captured image in real time applications.

3) Dynamic Sliding Window Algorithm for Segmentation of Image: The ROI based sliding window technique for the segmentation of a desired portion (the region containing the image of the pointer of the indicating type meter in the RTVMS application) from the captured image is shown in Figs. 5 and 6. The sliding window algorithm involves computational steps for the calculation of the window size and the number of sliding steps.
The algorithm also detects a solid line in the sliding window: if no line is detected in the window, the window shifts to the next location, and so on, until a line is detected; at that moment the window movement stops and the image feature calculation steps are initiated. The computations involved in the sliding window algorithm are as follows (see also the sketch after the ROI matrix formats below).

a) Calculation of the size of the sliding window: Let X_1 = l, where l = 1 to i (i = maximum value of the x-coordinate of the image); Y_1 = m, where m = 1 to j (j = maximum value of the y-coordinate of the image); X_2 = X_1 + w, where w = width of the sliding window; and Y_2 = Y_1 + z, where z = height of the sliding window. Therefore, the diagonal coordinates of the ROI based rectangular sliding window are (X_1, Y_1) and (X_2, Y_2). These coordinates define the size of the window.

b) Calculation of the number of sliding steps: Number of sliding steps n = x/w, where x = total width of the resized image and w = width of the sliding window.

c) Locations of the pointer: Let L = total number of steps in the defined range of the meter and L_i = location of the pointer at the ith step, where i = {0, 1, 2, . . . , (L − 1)}.

d) Development of ROI matrices: The ROI matrix defines the size of the window. A database of various window sizes has to be developed for each type of meter. The size of the window and the number of ROIs are functions of the dial span and dial type, such as semicircular, quarter circular, and linear.
Fig. 7. Effects of size of sliding window. (a) Original image. (b) Segmentation using larger window size. (c) Black and white image of segmented image (b) using threshold technique. (d) Edges of image objects appearing in (c). (e) Segmentation using smaller window size. (f) Black and white image of segmented image (e) using threshold technique. (g) Edges of image objects appearing in (f).
The general formats for the computation and arrangement of the ROI matrix database are as follows:

R_0 = [x_0  y_0  x_0  y_0]
R_1 = [x_1  y_1  x_1  y_1]
R_2 = [x_2  y_2  x_2  y_2]
.
.
R_n = [x_n  y_n  x_n  y_n]

where R_0, R_1, . . . , R_n are the different ROIs in the defined range of the scale and

x_n = x_{n−1} + x_{n−1}/2          (2)
y_n = y_{n−1} + y_{n−1}/2          (3)

Fig. 8. Flow chart of the DSWA.
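The window-stepping logic of steps (a)–(d) can be sketched as below. This is only an illustrative Python fragment under stated assumptions: a binarized image in which the pointer is dark (zero-valued), a simple fill-ratio test along the ROI diagonal standing in for the line detection step, and hypothetical function and parameter names. The actual system was implemented in MATLAB and uses the orientation-based Hough line detection of Chung et al. [53], described in step (e) below.

```python
import numpy as np


def sliding_window_search(binary_img, w, z, fill_ratio=0.8):
    """Slide a w-by-z ROI over a binarized meter image until a pointer-like line
    is found; return the diagonal corners (x1, y1) and (x2, y2) of that ROI."""
    height, width = binary_img.shape
    n_steps = width // w                      # step (b): number of sliding steps
    for step in range(n_steps):               # step (c): candidate pointer locations
        x1, x2 = step * w, step * w + w       # step (a): window x-coordinates
        for y1 in range(0, height - z + 1, z):
            y2 = y1 + z                       # step (a): window y-coordinates
            roi = binary_img[y1:y2, x1:x2]    # step (d): current ROI
            # Placeholder test: the pointer should fill most of the ROI diagonal
            # (cf. Fig. 7).  The paper instead applies the orientation-filtered
            # Hough line detection of Chung et al. [53] at this point.
            diag = np.diagonal(roi)
            if np.count_nonzero(diag == 0) >= fill_ratio * diag.size:
                return (x1, y1), (x2, y2)
    return None  # no pointer line detected with this window size
```

A window returned by such a search corresponds to one entry of the ROI matrix database described above.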
The accuracy of the system depends upon the size of the sliding window, which should be as close as possible to the size of the pointer of the indicating type meter. The best results were obtained when the image of the pointer appears as the diagonal of the selected ROI, as shown in Fig. 7. The dots appearing in Fig. 7(c) and (d) affect the image features; therefore, the desired image should be like Fig. 7(f) and (g), where no dots are present in the window.

e) Detection of a solid line in the sliding window: To detect lines, we applied the method described by Chung et al. [53] for accurate line detection in digital images. It is an orientation-based algorithm that filters out inappropriate edge pixels before performing the line-detection task, which reduces the memory size and computation time of the Hough transform based detection process. The flow chart of the sliding window algorithm is shown in Fig. 8.

D. Feature Extraction

In this step, three types of image features are extracted from the segmented image. These are described as follows.

1) Wavelet Features: In this paper, a two-dimensional fast wavelet transform (FWT) [54]–[57] is used, which requires a two-dimensional scaling function, φ(x, y), and three two-dimensional wavelets, ψ^H(x, y), ψ^V(x, y), and ψ^D(x, y).
Each is the product of two one-dimensional functions:

φ(x, y) = φ(x)φ(y)          (4)
ψ^H(x, y) = ψ(x)φ(y)          (5)
ψ^V(x, y) = φ(x)ψ(y)          (6)
ψ^D(x, y) = ψ(x)ψ(y)          (7)
These wavelets measure functional variations: ψ^H represents variations along columns, ψ^V variations along rows, and ψ^D variations along the diagonal. Let the scaled and translated basis functions be:

φ_{j,m,n}(x, y) = 2^{j/2} φ(2^j x − m, 2^j y − n)          (8)
ψ^i_{j,m,n}(x, y) = 2^{j/2} ψ^i(2^j x − m, 2^j y − n)          (9)

where i (the directional wavelet) = {H, V, D}. The discrete wavelet transform of an image f(x, y) of size M × N is:

W_φ(j_0, m, n) = (1/√(MN)) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) φ_{j_0,m,n}(x, y)          (10)
W^i_ψ(j, m, n) = (1/√(MN)) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f(x, y) ψ^i_{j,m,n}(x, y)          (11)

where W_φ(j_0, m, n) represents an approximation of f(x, y) at scale j_0 and W^i_ψ(j, m, n) represents the horizontal, vertical, and diagonal details for scales j ≥ j_0, as shown in Fig. 9. All wavelet coefficients of the image are calculated at each level and arranged in matrix form as WLq = [Aq Hq Vq Dq], where WLq is the wavelet coefficient matrix, Aq the approximation coefficients, Hq the horizontal detail coefficients, Vq the vertical detail coefficients, and Dq the diagonal detail coefficients, with L the type of wavelet and q the level number. The number of matrices depends upon the number of levels in the wavelet decomposition tree [33]–[35], [55].
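For illustration only (the paper's implementation is in MATLAB), the per-level coefficient matrices WLq = [Aq Hq Vq Dq] of the decomposition tree in Fig. 9 could be gathered with PyWavelets as sketched below; the wavelet name, decomposition depth, and the dictionary layout are assumptions.

```python
import pywt


def wavelet_feature_matrices(roi, wavelet="db10", levels=2):
    """Collect per-level FWT coefficient matrices WLq = [Aq, Hq, Vq, Dq] for an ROI."""
    features = []
    approx = roi
    for q in range(1, levels + 1):
        # Single-level 2-D DWT: approximation plus horizontal, vertical,
        # and diagonal detail sub-bands, as in (10) and (11).
        approx, (cH, cV, cD) = pywt.dwt2(approx, wavelet)
        features.append({"level": q, "A": approx, "H": cH, "V": cV, "D": cD})
    return features
```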
Fig. 9. Fast wavelet transform structure of 2-D images.

Fig. 10. Feature matching algorithm.
2) Geometrical Features: Geometrical features are a set of geometric elements such as points, lines, curves, surfaces, and orientation. They include primitive and compound features; the primitive features comprise corners, edges, blobs, and ridges. The Marr-Hildreth edge detector and the Canny edge detector are two suitable techniques for detecting the edges of the pointer in the selected ROI window [58]. The Marr-Hildreth algorithm uses a Laplacian of Gaussian (LoG) filter to determine the location of edges in a two-dimensional image signal f(x, y); it is good practice to apply a Gaussian low pass filter before the LoG filter. In the Canny edge detection algorithm, the input image is smoothed with a Gaussian filter, and thereafter the gradient magnitude and double thresholding are used to detect and link the edges.

The geometrical features of the segmented image involve the computation of the area, orientation, and centroid of the pointer's black and white image, where the area is a measure of the size of the image foreground, i.e., the number of ON pixels in the image; the orientation is the angle (in degrees) between the x-axis and the major axis of the ellipse that has the same second moments as the image region; and the centroid is the center of mass of the region [37], [38], [59]. The process of calculating the orientation of the segmented image involves the following steps.

1) Conversion of the image into a binary image using a thresholding technique.
2) Detection of the boundary of the pointer image with smoothing Gaussian filters.
3) Allocation of a reference point vector and a boundary follower vector in the clockwise direction. If the directional vector follows the counterclockwise direction, the orientation angle becomes negative, and vice versa.
4) Detection of the center of the region of interest (ROI).

The mathematical modeling for the calculation of the black and white image orientation is as follows.
Consider an image function f(x, y) in the real domain; a pixel function over a lattice in this domain is defined as:

R[P_l] = Σ_{x,y} f(x, y)          (12)

Here, f(x, y) represents the gray level of the pixel at (x, y) and

P_l = {a(x, y) : y = (x − x_0) + y_0, x ∈ L_a}          (13)

where L_a = {0, 1, . . . , a − 1} and a is a positive integer known as the slope of the lattice L_a. The coordinate (x_0, y_0) represents the center point of the lattice L_a. The gray levels of the pixels at the boundary lines of the segmented pointer's image are lower than those of the neighboring pixels. The line orientation and the energy of the pixels at (x, y) are defined as:

P_orientation(l) = arg(min_a(R[P_l]))          (14)
P_energy(l) = min_a(R[P_l]) − (1/n) Σ_{l=0}^{n−1} R[P_l]          (15)

where l = 0, 1, 2, . . . , n − 1 and n is the number of directions.

3) Statistical Features: The statistics of an image reveal object properties, including the object's location in the captured image. Image statistics can be used to predict the presence or absence of objects in the image, and statistical models can be utilized for segmentation, shape modeling, shape analysis, recognition, and tracking in visual inspection systems. The statistical features of an image include the arithmetic mean (μ), variance (σ²), and standard deviation (σ). The arithmetic mean is the average of the pixel values, the standard deviation indicates the spread of the pixel values, and the variance is the square of the standard deviation. For consistent images, the standard deviation is small because the pixel intensities do not wander far from the mean [60]. The statistical features are evaluated as follows:

Mean of image pixel values: μ = (Σ P_{x,y}) / (x × y)          (16)
Standard deviation of image: σ = √( (Σ P_{x,y} × P_{x,y}) / (x × y) − μ² )          (17)
Variance of image = σ²          (18)

where P_{x,y} represents the pixel value at location (x, y) in the segmented image.
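As a minimal sketch (not the authors' code; the threshold value and function names are assumptions), the geometrical features of the binarized pointer image and the statistical features of (16)–(18) could be computed with NumPy as follows, with the orientation obtained from the second-order central moments of the foreground pixels.

```python
import numpy as np


def geometric_features(gray_roi, threshold=128):
    """Area, centroid, and orientation (degrees) of the binarized pointer blob."""
    bw = gray_roi < threshold                  # assume a dark pointer on a light dial
    ys, xs = np.nonzero(bw)
    if xs.size == 0:
        return 0, (None, None), 0.0            # no foreground pixels found
    area = xs.size                             # number of ON pixels
    cx, cy = xs.mean(), ys.mean()              # centroid (center of mass)
    # Second-order central moments give the orientation of the equivalent ellipse.
    mu20 = np.mean((xs - cx) ** 2)
    mu02 = np.mean((ys - cy) ** 2)
    mu11 = np.mean((xs - cx) * (ys - cy))
    orientation = 0.5 * np.degrees(np.arctan2(2.0 * mu11, mu20 - mu02))
    return area, (cx, cy), orientation


def statistical_features(gray_roi):
    """Mean, standard deviation, and variance of the ROI pixels, as in (16)-(18)."""
    p = gray_roi.astype(float)
    mean = p.mean()
    std = np.sqrt(np.mean(p * p) - mean ** 2)
    return mean, std, std ** 2
```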
TABLE I
IMPACT OF SLIDING WINDOW SIZE ON PERFORMANCE OF RTVMS
(Sizes of sliding window in descending order: W1 > W2 > W3 > W4 > W5)

Parameter                      W1            W2            W3           W4            W5
Accuracy                       82.00%        89.50%        95.80%       98.30%        99.1%
Execution Time                 (0.01–0.1) s  (0.05–0.4) s  (0.1–0.6) s  (0.45–0.8) s  (0.6–1.0) s
Robustness                     Less          Average       Average      Good          V. Good
Intricacy                      Less          Less          More         More          More
Reliability                    Less          Average       Average      Good          V. Good
Probability of False Reading   More          More          Less         Less          V. Less
E. Feature Matching

Principal component analysis (PCA) similarity factors, as described in [61], [62], are applied for the matching of image feature matrices in the RTVMS, because these matrices satisfy the condition of having the same number of columns. The PCA first extracts the principal components of each matrix, identifies the first p principal components heuristically, and then computes the similarity between them. Let two matrices M_1 and M_2 with the same number of columns be under inspection for similarity measurement using PCA; then the similarity factor is evaluated as:

S_i(M_1, M_2) = Trace(A B^T B A^T) = Σ_{q=1}^{z} Σ_{r=1}^{z} cos² ∅_{qr}          (19)

where A and B are the matrices containing the first p principal components of M_1 and M_2, and ∅_{qr} is the angle between the qth principal component of M_1 and the rth principal component of M_2. The range of the similarity factor spans from 0 to z.

If the segmented image (I_c) belongs to R_n, then the image features of the segmented image are matched with the feature matrices (LI_nk) associated with R_n, and each feature matrix has an explicit result value to be displayed, as shown in Fig. 10. Further, the recognized values are compared with the set values and, if the recognized value is greater than the set value, an alarm signal is generated and controlling signals are initiated. The feature matrices are developed and defined as follows:

R_0 = {LI_01, LI_02, . . . , LI_0k}
R_1 = {LI_11, LI_12, . . . , LI_1k}
.
.
R_n = {LI_n1, LI_n2, . . . , LI_nk}

Fig. 11. Flow chart of the RTVMS.
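A compact NumPy illustration of the similarity factor in (19) is given below. It is an interpretation of the PCA similarity measure of [61], [62] rather than the authors' code: the first p principal directions of each (column-wise centered) feature matrix are taken from an SVD, and the similarity is the sum of squared cosines of the angles between the two sets of directions, so it ranges from 0 to p.

```python
import numpy as np


def pca_similarity(m1, m2, p=3):
    """PCA similarity factor between two feature matrices with equal column counts."""
    def principal_directions(m, p):
        # Rows of vt are the (orthonormal) principal directions of the
        # column-wise centered matrix.
        _, _, vt = np.linalg.svd(np.asarray(m, float) - np.mean(m, axis=0),
                                 full_matrices=False)
        return vt[:p]                          # p x n_columns
    a = principal_directions(m1, p)
    b = principal_directions(m2, p)
    # trace(A B^T B A^T) equals the sum of squared cosines of the angles between
    # the q-th direction of M1 and the r-th direction of M2, cf. (19).
    return np.trace(a @ b.T @ b @ a.T)
```

In such a scheme, the feature matrix of the segmented window would be compared against each stored matrix LI_nk of the active ROI R_n, and the stored result value with the highest similarity factor would be taken as the recognized reading.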
The flow chart of the complete functional algorithm of the RTVMS is shown in Fig. 11. It includes various stages such as the setting of parameters, initialization of ports, image preprocessing, segmentation, image feature calculation, feature matching, display of the result, and controlling actions.

IV. IMPLEMENTATION AND RESULTS

The proposed real time visual monitoring system for the monitoring of indicating type instruments is simulated in MATLAB 7.0 and tested on more than 100 sets of readings. A Logitech webcam with a resolution of five megapixels is used to capture the images at regular intervals. The number of frames per second can be adjusted according to the requirements and sensitivity of the system. The minimum number of frames depends upon the processor speed, the distance of the camera, the type of protocol, and the network structure.
Fig. 12. Graphical user interface model for the RTVMS.

Fig. 13. Monitoring of a voltmeter using the RTVMS.

Fig. 14. Monitoring of an ammeter using the RTVMS.

Fig. 15. Impact of change in sliding window size on image features.
The proposed algorithm for the RTVMS is simulated on a personal computer with a 1.80 GHz Intel Core 2 Duo processor. The results obtained from the RTVMS are compared with the readings of the indicating meters taken directly, and the accuracy of the RTVMS is found to be very good. The simulation is repeated for different window sizes, and a comparison for five different window sizes is tabulated in Table I. From this table, it is observed that the accuracy increases as the window size reduces; however, the execution time also increases. A graphical user interface (GUI) window is developed to make the RTVMS user friendly, as shown in Fig. 12. This GUI window shows the maximum and minimum values of the measured parameters, the present values, the set values, and the running or tripping conditions of the electrical systems. The original and segmented images of a voltmeter and an ammeter with different pointer positions, used to calculate the image features, are shown in Figs. 13 and 14, respectively. The various image features have been computed for image recognition. The effect of the window size (W_i) on the image features is represented graphically in Fig. 15. In this figure, the sharp valley at W4 reflects the degree of correctness of the measured value; this sharp valley can be achieved by selecting a sliding window size with the pointer's image as the diagonal of the window.
V. CONCLUSION

In this paper, a novel dynamic sliding window algorithm has been developed for the segmentation of the meter pointer's image from the captured image. The proposed algorithm gives very good results in real time and under robust conditions. The wavelet transform is used for image de-noising and for the computation of the various image features, and the Canny edge detector is used for the detection of edges in the region of interest (ROI). A principal component analysis similarity factor measurement algorithm is used to make the RTVMS more sensitive to the detection of small parametric changes in real time. A study of the effect of window size on the performance of the proposed algorithm for the RTVMS has been made. From the simulation results, it is observed that a smaller window size gives better results in terms of accuracy; however, it increases the computational complexity and in turn reduces the speed of the system. Hence, the window size needs to be optimized. The proposed RTVMS is a nondestructive and noninvasive technique for the real time monitoring of electrical parameters. The proposed system may also be used as a third eye for monitoring the speed, fuel, and temperature gauges of automobiles and aircraft. An artificial intelligence based decision making algorithm may be added to further enhance the reliability and widen the application areas of the RTVMS in the field of automation.

REFERENCES

[1] J. Datta, S. Chowdhuri, J. Bera, and G. Sarkar, "Remote monitoring of different electrical parameters of multi-machine system using PC," Measurement, vol. 45, no. 1, pp. 118–125, Jan. 2012.
[2] G. G. Acosta, C. J. Verucchi, and E. R. Gelso, "A current monitoring system for diagnosing electrical failures in induction motors," Int. J. Mech. Syst. Signal Process., vol. 20, no. 4, pp. 953–965, May 2006.
[3] H. S. Liu, B. Y. Lee, and Y. S. Tarng, "Monitoring of drill fracture from the current measurement of a three-phase induction motor," Int. J. Mach. Tools Manuf., vol. 36, no. 6, pp. 729–738, Jun. 1996.
[4] R. L. Cárdenas, L. P. S. Fernández, O. Progrebnyak, and Á. A. C. Montiel, “Inter-turn short circuit and unbalanced voltage pattern recognition for three-phase induction motors,” in Progress in Pattern Recognition, Image Analysis and Applications (Lecture Notes in Computer Science), vol. 5197. Berlin, Germany: Springer-Verlag, 2008, pp. 470–478. [5] M. R. Mamat, M. Rizon, and M. S. Khanniche, “Fault detection of 3-phase VSI using wavelet-fuzzy algorithm,” Amer. J. Appl. Sci., vol. 3, no. 1, pp. 1642–1648, Feb. 2006. [6] S. Seifvand and A. Vahedi, “Monitoring electrical parameters of the synchronous generator using DSP,” in Proc. 6th Int. Conf. Electr. Mach. Syst., vol. 2. Beijing, China, 2003, pp. 841–843. [7] R. T. Chin, “Automated visual inspection techniques and applications: A bibliography,” Int. J. Pattern Recognit., vol. 15, no. 4, pp. 343–357, Nov. 1982. [8] M. L. Baird, “GAGESIGHT: A computer vision system for automatic inspection of instrument gauges,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 5, no. 6, pp. 618–621, Nov. 1983. [9] R. Sablatnig and W. G. Kropatsch, “Automatic reading of analog display instruments,” in Proc. 12th IAPR Int. Conf. Pattern Recognit., Jerusalem, Israel, Oct. 1994, pp. 794–797. [10] R. Sablatnig, “Visual inspection of water meters used for automatic calibration,” in Image Analysis Applications and Computer Graphics (Lecture Notes in Computer Science), vol. 1024. Berlin, Germany: Springer-Verlag, 1995, pp. 518–519. [11] R. Sablatnig, “Flexible automatic visual inspection based on the separation of detection and analysis,” in Proc. IEEE Int. Conf. Pattern Recognit. Vienna, Austria, 1996, pp. 944–948. [12] D. M. Siegel, “Remote and automated inspection: Status and prospects,” in Proc. Joint DoD/FAA/NASA Conf. Aging Aircraft, Ogden, UT, Jul. 1997, pp. 1–12. [13] R. Sablatnig, “Increasing flexibility for automatic visual inspection: The general analysis graph,” J. Mach. Vis. Appl., vol. 12, no. 4, pp. 158–169, Dec. 2000. [14] I. Antoniadis, A. Hountras, and G. Glossitis, “Using LabVIEW and PXI for online mechanical parameter monitoring system for a diesel generator unit set,” Dept. Mech. Eng., National Technical Univ. Athens, Athens, Greece, Tech. Rep. 2002_530_821_123_8.5x10.875.qxd, 2002. [15] J. Lan, X. Wei, and Z. Bai, “Automatic calibration system for analog instruments based on DSP and CCD sensor,” Proc. SPIE, vol. 7156, pp. 71560Q-1–71560Q-10, Nov. 2008. [16] S. G. Liu, M. Y. Liu, and Y. He, “Checking on the quality of gauge panel based on wavelet analysis,” in Proc. Int. Conf. Mach. Learn. Cybern., vol. 2. 2002, pp. 763–767. [17] C. R. Dyer, “Gauge inspection using Hough transforms pattern analysis and machine intelligence,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 5, no. 6, pp. 621–623, Nov. 1983. [18] Z. A. Jaffery and A. K. Dubey, “Real time visual inspection system (RTVIS) for calibration of mechanical gauges,” in Proc. IEEE Recent Adv. Intell. Comput. Syst., Jun. 2011, pp. 841–846. [19] G. Ming and X. Yangsheng, “An intelligent online monitoring and diagnostic system for manufacturing automation,” IEEE Trans. Autom. Sci. Eng., vol. 5, no. 1, pp. 127–139, Jan. 2008. [20] I. Edinbarough, R. Balderas, and S. Bose, “A vision and robot based on-line inspection monitoring system for electronic manufacturing,” Comput. Ind., vol. 56, nos. 8–9, pp. 986–996, Dec. 2005. [21] N. S. S. Mar, P. K. D. V. Yarlagadda, and C. Fookes, “Design and development of automatic visual inspection system for PCB manufacturing,” Robot. Comput.-Integr. 
Manuf., vol. 27, no. 5, pp. 949–962, Oct. 2011. [22] A. R. Gonzalo, A. E. Pablo, and A. R. Pablo, “Automated visual inspection system for wood defect classification using computational intelligence techniques,” Int. J. Syst. Sci., vol. 40, no. 2, pp. 163–172, Feb. 2009. [23] A. Kumar, “Computer vision based fabric defect detection: A survey,” IEEE Trans. Ind. Electron., vol. 55, no. 1, pp. 348–363, Jan. 2008. [24] G. S. Kumar, U. Natarajan, and S. S. Ananthan, “Vision inspection system for the identification and classification of defects in MIG welding joints,” Int. J. Adv. Manuf. Technol., vol. 61, pp. 1–11, Dec. 2011. [25] E. Lughofer, J. E. Smith, M. A. Tahir, P. C. Solly, C. Eitzinger, D. Sannen, and M. Nuttin, “Human-machine interaction issues in quality control based on online image classification,” IEEE Trans. Syst., Man Cybern. A, Syst. Humans, vol. 39, no. 5, pp. 960–971, Sep. 2009. [26] F. Pernkopf and P. O’Leary, “Image acquisition techniques for automatic visual inspection of metallic surfaces,” NDT&E Int., vol. 36, no. 8, pp. 609–617, Dec. 2003.
[27] T. H. Sun, C. C. Tseng, and M. S. Chen, “Electric contacts inspection using machine vision,” Image Vis. Comput., vol. 28, no. 6, pp. 890–901, Jun. 2010. [28] A. Gonzalez, M. A. Garrido, D. F. Llorca, M. Gavilan, J. P. Fernandez, P. F. Alcantarilla, I. Parra, F. Herranz, L. M. Bergasa, M. A. Sotelo, and P. R. Toro, “Automatic traffic signs and panels inspection system using computer vision,” IEEE Trans. Intell. Transp. Syst., vol. 12, no. 2, pp. 485–499, Jun. 2011. [29] M. Shirvaikar, “Trends in automated visual inspection,” J. Real-Time Image Process., vol. 1, no. 1, pp. 41–43, Oct. 2006. [30] H. C. Garcia, J. R. Villalobos, R. Pang, and G. C. Runger, “A novel feature selection methodology for automated inspection systems,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 7, pp. 1338–1344, Jul. 2009. [31] H. C. Garcia and J. R. Villalobos, “Automated refinement of automated visual inspection algorithms,” IEEE Trans. Autom. Sci. Eng., vol. 6, no. 3, pp. 514–524, Jul. 2009. [32] T. M. Silberberg, L. Davis, and D. Harwood, “An iterative Hough procedure for three-dimensional object recognition,” J. Pattern Recognit., vol. 17, no. 6, pp. 621–629, Mar. 1984. [33] A. K. Dubey, “WAVELETS: A novel approach for 1D and 2D image analysis and synthesis,” in Proc. IEEE Int. Adv. Comput. Conf., Punjab, India, Jul. 2009, pp. 725–730. [34] Z. A. Jaffery, A. K. Dubey, and R. P. Singh, “Effect of two dimensional image compression on statistical features of image using wavelet approach,” in Proc. IJCA Int. Conf. Workshop Emerg. Trends Technol., vol. 3. Mumbai, India, 2011, pp. 1–6. [35] A. K. Dubey, A. Q. Ansari, and R. P. Singh, “Analysis of two dimensional image to obtain unique statistical features for developing image recognition techniques using wavelet approach,” in Proc. Int. Conf. Workshop Emerg. Trends Technol., vol. 1. Mumbai, India, 2011, pp. 66–70. [36] S. S. Huang, L. C. Fu, and P. Y. Hsiao, “Region-level motion-based background modeling and subtraction using MRFs,” IEEE Trans. Image Process., vol. 16, no. 5, pp. 1446–1456, May 2007. [37] S. Jayaraman, S. Esakkirajan, and T. Veerakumar, Digital Image Processing. New Delhi, India: McGraw-Hill, 2010, pp. 243–407. [38] A. K. Jain, Fundamental of Digital Image Processing. New Delhi, India: PHI Learning, 2010, pp. 342–421. [39] A. Sharma, G. Sheoran, Z. A. Jaffery, and S.-H. Moinuddin, “Improvement of signal-to-noise ratio in digital holography using wavelet transform,” Opt. Lasers Eng., vol. 46, no. 1, pp. 42–47, Jan. 2008. [40] R. M. Haralick and L. G. Shapiro, “Image segmentation techniques,” Comput. Vis., Graph., Image Process., vol. 29, no. 1, pp. 100–132, Jan. 1985. [41] N. R. Pal and S. K. Pal, “A review on image segmentation techniques,” Pattern Recognit., vol. 26, no. 9, pp. 1277–1294, Sep. 1993. [42] J. Melendez, M. A. Garcia, D. Puig, and M. Petrou, “Unsupervised texture-based image segmentation through pattern discovery,” Comput. Vis. Image Understand., vol. 115, no. 8, pp. 1121–1133, Aug. 2011. [43] S. Zhu, X. Xia, Q. Zhang, and K. Belloulata, “An image segmentation algorithm in image processing based on threshold segmentation,” in Proc. 3rd Int. IEEE Conf. Signal-Image Technol. Internet-Based Syst., Shanghai, China, Mar. 2007, pp. 673–678. [44] H. Mobahi, S. R. Rao, A. Y. Yang, S. S. Sastry, and Y. Ma, “Segmentation of natural images by texture and boundary compression,” Int. J. Comput. Vis., vol. 95, no. 1, pp. 86–98, Oct. 2011. [45] L. Grady, “Random walks for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 
28, no. 11, pp. 1–17, Nov. 2006. [46] L. Grady and E. L. Schwartz, “Isoperimetric graph partitioning for image segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 3, pp. 469–475, Mar. 2006. [47] O. J. Tobias and R. Seara, “Image segmentation by histogram thresholding using fuzzy sets,” IEEE Trans. Image Process., vol. 11, no. 12, pp. 1455–1465, Dec. 2002. [48] M. R. Rezaee, P. M. J. van der Zwet, B. P. E. Lelieveldt, R. J. van der Geest, and J. H. C. Reiber, “A multiresolution image segmentation technique based on pyramidal segmentation and fuzzy clustering,” IEEE Trans. Image Process., vol. 9, no. 7, pp. 1238–1248, Jul. 2000. [49] L. G. Ugarriza, E. Saber, S. R. Vantaram, V. Amuso, M. Shaw, and R. Bhaskar, “Automatic image segmentation by dynamic region growth and multi-resolution merging,” IEEE Trans. Image Process., vol. 18, no. 10, pp. 2275–2288, Oct. 2009. [50] G. Liu, Z. Lin, Y. Yu, and X. Tang, “Unsupervised object segmentation with a hybrid graph model (HGM),” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 5, pp. 910–924, May 2010.
[51] S. Thilagamani, M. Kumarasamy, and N. Shanthi, “A novel recursive clustering algorithm for image over segmentation,” Eur. J. Sci. Res., vol. 52, no. 3, pp. 430–436, May 2011. [52] C. R. Jung, “Unsupervised multi-scale segmentation of color images,” Pattern Recognit. Lett., vol. 28, no. 4, pp. 523–533, Mar. 2007. [53] K. L. Chung, Z. W. Lin, S. T. Huang, Y. H. Huang, and H. Y. M. Liao, “New orientation-based elimination approach for accurate linedetection,” Pattern Recognit. Lett., vol. 31, no. 1, pp. 11–19, Jan. 2010. [54] G. Acciani, G. Brunetti, and G. Fornarelli, “Application of neural networks in optical inspection and classification of solder joints in surface mount technology,” IEEE Trans. Ind. Inf., vol. 2, no. 3, pp. 200–209, Aug. 2006. [55] J. Oliver and M. P. Malumbres, “On the design of fast wavelet transform algorithms with low memory requirements,” IEEE Trans. Circuits Syst. Video Technol., vol. 18, no. 3, pp. 237–248, Mar. 2008. [56] G. Beylkin, R. Coifman, and V. Rokhlin, “Fast wavelet transforms and numerical algorithms I,” Commun. Pure Appl. Math., vol. 44, pp. 141–183, Oct. 1991. [57] S. Attallah and M. Najim, “A fast wavelet transform-domain LMS algorithm,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., vol. 3. Atlanta, GA, Mar. 1996, pp. 1343–1346. [58] K. Teelen and P. Veelaert, “Computing regions of interest for geometric features in digital images,” Discrete Appl. Math., vol. 157, no. 16, pp. 3457–3472, Aug. 2009. [59] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. New Delhi, India: Pearson Education, 2011, pp. 501–510. [60] A. Torralba and A. Oliva, “Statistics of natural image categories,” Netw., Comput. Neural Syst., vol. 14, pp. 391–412, May 2003. [61] K. Yang and C. Shahabi, “A PCA-based similarity measure for multivariate time series,” in Proc. 2nd ACM Int. Workshop Multimedia Databases, Washington, DC, 2004, pp. 65–74. [62] W. Krzanowski, “Between-groups comparison of principal components,” J. Amer. Stat. Assoc., vol. 74, no. 367, pp. 703–707, Sep. 1979.
Zainul Abdin Jaffery received the B.Tech. and M.Tech. degrees in electronics and communication engineering from Aligarh Muslim University, Aligarh, India, in 1987 and 1989, respectively, and the Ph.D. degree from Jamia Millia Islamia, New Delhi, India, in 2004. He is currently a Professor with the Department of Electrical Engineering, Faculty of Engineering and Technology, Jamia Millia Islamia. His research work focuses on the applications of soft computing techniques in signal and image processing.
Ashwani Kumar Dubey received the M.Tech. degree in instrumentation and control engineering from Maharshi Dayanand University, Haryana, India, in 2007. He is currently pursuing the Ph.D. degree with the Department of Electrical Engineering, Faculty of Engineering and Technology, Jamia Millia Islamia, New Delhi, India. His current research interests include hardware implementation of real-time vision algorithms and machine vision applications.