doi:10.1088/0957-0233/18/7/035
Meas. Sci. Technol. 18 (2007) 2048–2058
Image processing algorithm for automated monitoring of metal transfer in double-electrode GMAW

Zhen Zhou Wang and Yu Ming Zhang

Electrical and Computer Engineering Department, University of Kentucky, Lexington, KY 40508, USA

E-mail: [email protected]
Received 9 January 2007, in final form 16 April 2007
Published 21 May 2007
Online at stacks.iop.org/MST/18/2048

Abstract
Controlled metal transfer in gas metal arc welding (GMAW) implies controllable weld quality. To understand, analyse and control the metal transfer process, the droplet should be monitored and tracked. To process the metal transfer images in double-electrode GMAW (DE-GMAW), a novel modification of GMAW, a brightness-based algorithm is proposed to locate the droplet and compute the droplet size automatically. Although this algorithm can locate the droplet with adequate accuracy, its accuracy in droplet size computation needs improvement. To this end, the correlation among adjacent images due to the droplet development is taken advantage of to improve the algorithm. Experimental results verified that the improved algorithm can automatically locate the droplets and compute the droplet size with adequate accuracy.

Keywords: image processing, computer vision, welding
(Some figures in this article are in colour only in the electronic version)
1. Introduction

Image processing plays a critical role in extracting useful information from visual scenes and has been used in the welding industry for seam tracking and in welding research for process monitoring [1–4]. Because skilled welders make quality welds primarily based on their visual feedback, it is believed that image processing and computer vision will also play a critical role in the development of the next generation of intelligent welding machines, especially for relatively complex processes such as the most widely used gas metal arc welding (GMAW) and its modifications, where visual information can greatly enhance the capability of process monitoring and control.

Figure 1 illustrates the GMAW process. In this process, the wire is fed to the contact tube, which is connected to the positive terminal of the power supply and is thus positively charged. The nozzle restricts the shield gas to a local area surrounding the wire. When the wire touches the negatively charged work piece (connected to the negative terminal of the power supply), the tip of the wire is rapidly burnt and an arc is ignited in the gap between the wire and the work piece. The arc
melts the wire and the melted metal forms a droplet at the tip of the wire; after the droplet is detached, a new droplet starts to form and a new cycle starts. This metal melting, droplet forming and droplet detaching process is referred to as the metal transfer process. The metal transfer process determines the welding quality in a critical way because the stability of the arc, the amount of spatter and the heat and mass inputs are all related to the metal transfer process. Controlled metal transfer [5–11] thus implies controllable weld quality. It is apparent that the metal transfer process can be monitored to better understand and control the GMAW process. In fact, to understand and study this relatively rapid process, high speed cameras have been used to image and record the dynamic developments of the droplets [12, 13]. However, their analysis has only been done manually. To scientifically study the dynamic metal transfer process, effective image processing algorithms are needed to automatically process the large number of high speed images to obtain accurate measurements of the droplets without human interference. For example, to examine how the droplet size is affected by the welding current, varying welding current can
be applied and the resultant large quantity of accurate data about the dynamic change of the droplet size can be used to establish an accurate dynamic model for the metal transfer process. Because of its importance in determining the weld quality, the metal transfer must also be accurately monitored to provide the needed feedback in the next generation of intelligent GMAW machines. Unfortunately, to date, effective automatic image processing of metal transfer has not been developed, possibly due to the level of difficulty involved for welding researchers. To establish the foundations for the development of the next generation of intelligent welding machines, this study addresses automated processing of metal transfer images in a modified GMAW referred to as double-electrode GMAW, where the metal transfer is equally important as in conventional GMAW. While this paper focuses on automated computation of the size and position of the droplets, the algorithms proposed in this paper, as will be discussed later, may be readily transferred into a high speed system (with parallel processors if needed) to obtain the speed needed for the control of the metal transfer process. In this paper, the authors used a high frame rate CCD camera (a high-speed Olympus i-SPEED camera) which can capture images at 33 000 frames per second to record the metal transfer video.

The paper is organized as follows. Section 2 describes the problem addressed, and section 3 analyses the characteristics of the images to be processed and discusses the use of the correlation method to process them. In section 4, an image processing algorithm is proposed for effective computation of the size and position of the droplets. Based on the algorithm proposed in section 4, a further improved algorithm is proposed in section 5. In section 6, the effectiveness of the algorithm proposed in section 5 is demonstrated through experimental results, and its potential for real-time [15–18] processing is also evaluated. Finally, conclusions are drawn.

Figure 1. Illustration of the GMAW process [14].
2. Problem description

The novel DE-GMAW process [19] has been implemented by adding a non-consumable tungsten electrode to a conventional GMAW system to form a bypass loop, as shown in figure 2(a). Because of the bypass loop, only part of the current which melts the electrode flows to the base metal. The melting current I is thus divided into two branches: the bypass current I_bp from the GMAW wire to the bypass electrode and the base metal current I_bm from the GMAW wire to the base metal (figure 2(a)). While the total (melting) current I can be controlled by adjusting the wire feed speed, the University of Kentucky has developed a control system [19] to adjust the base metal current to a desired level by adjusting the resistance of the variable power resistor (figure 2(a)). As a result, the base metal current and bypass current can be controlled freely and independently. This controllability gives DE-GMAW the capability to reduce the base metal heat input (determined primarily by the base metal current) while achieving the needed deposition rate (determined by the wire feed speed, or primarily by the total current). In addition, it also gives DE-GMAW the capability to control the metal transfer process by adjusting the distribution of the total current between the base metal and bypass currents. Experiments have revealed, for example, that when a fixed total current below the so-called transition current, which
Figure 2. DE-GMAW process. In (b), the horizontal and vertical axes of the image are y and x respectively and both are given in pixels. (a) DE-GMAW system, (b) typical metal transfer image with two droplets.
is approximately 225 A, is used, the transfer changes from the undesirable globular mode to the desired spray mode after the bypass current increases to 50 A. In addition, the diameter of the droplets decreases as the bypass current increases, but the transfer rate (number of droplets per second) increases. The stability and metal transfer of DE-GMAW can thus be accurately monitored using high speed images and be controlled by adjusting the current distribution, or the waveform of the current distribution, using the high speed images as the feedback. Hence, the problem of controlling the size of the droplet (and thus the transfer rate of the droplet, because the melting speed is given) in real time arises correspondingly. Since a high frame rate CCD camera can be used to capture the metal transfer video in real time, an effective image processing algorithm which can compute the size and position of droplets is thus needed for advanced control of metal transfer in DE-GMAW.

Figure 2(b) shows a typical metal transfer image in DE-GMAW with two droplets existing simultaneously, captured and stored at a frame rate of 3000 frames per second. Such a high frame rate must be used in order to reduce the exposure so that the arc radiation is sufficiently reduced and the droplets can be clearly imaged. Hence, the resolution and effective area of the stored images are reduced. As can be seen, the centre part of the image is the welding arc, on which a bright droplet is identifiable. The image processing algorithm to be developed in this paper needs to locate the centre of the droplet in the image as the droplet position and calculate the number of pixels the droplet occupies in the image as the droplet size.

To be used in on-line control, a sequence of images can be acquired first. After the image acquisition is completed, the sequential images can be processed by the proposed algorithm, and the resultant positions and sizes of the droplets in sequential images can be used to determine the trajectory and average droplet size as the process feedback. Finally, the control algorithm can use the process feedback to determine how the welding parameters should be adjusted to achieve the desired trajectory and droplet size. Please note that (1) it is not desirable that the welding parameters be adjusted rapidly and (2) the images do not need to be processed at the same speed as they are acquired in order to be used in on-line control. The needed image processing speed should typically be much lower than that of the image acquisition when the control scheme mentioned above is used. To demonstrate how to determine the required image processing speed, let us consider a typical example where (1) the transfer rate is 100 Hz, (2) five frames are used to track the life span of a droplet in order to accurately calculate the trajectory and size of the droplet and (3) the control rate is 2 Hz. In this typical example, a 20 ms time period can be used to acquire ten images to assure that at least one droplet's entire life span is completely recorded. The image acquisition rate should be 500 Hz. For the 2 Hz control rate, the allowed image processing time for the ten frames can be up to 470 ms because the control algorithm computation time can be negligible. Hence, the image processing speed needed would be approximately 47 ms per frame.
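As a concrete check of this timing budget, the short sketch below reproduces the arithmetic. The numbers are the example values quoted above; the 10 ms reserved for the control computation is the writer's assumption to match the quoted 470 ms.

```python
# Illustrative timing budget for the on-line control scheme described above.
transfer_rate_hz = 100        # droplet transfer rate (example value)
frames_per_lifespan = 5       # frames used to track one droplet life span
control_rate_hz = 2           # welding-parameter update rate

lifespan_s = 1.0 / transfer_rate_hz                        # 10 ms per droplet
acquisition_rate_hz = frames_per_lifespan / lifespan_s     # 500 Hz
acquisition_window_s = 2 * lifespan_s                      # 20 ms captures a full life span
frames_acquired = int(acquisition_window_s * acquisition_rate_hz)   # 10 frames

control_period_s = 1.0 / control_rate_hz                   # 500 ms per control update
# Reserve ~10 ms for the control computation and overhead (assumption).
processing_budget_s = control_period_s - acquisition_window_s - 0.010   # ~470 ms
per_frame_budget_s = processing_budget_s / frames_acquired              # ~47 ms

print(f"{frames_acquired} frames, {per_frame_budget_s * 1e3:.0f} ms per frame")
```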
Based on the above analysis, the authors identified the problem to be solved in this study as the development of an image processing algorithm which has the potential to be implemented to extract the droplets from the DE-GMAW metal transfer image and perform the related computations at a speed of 47 ms or less per frame.

Figure 3. Defined droplet model.

Figure 4. Correlation response.
3. Correlation analysis

The traditional way of tracking an object like a droplet needs to define a model first. Then the model is tracked by correlation with the equation given below:
$$f(x, y) \circ h(x, y) = \frac{1}{MN} \sum_{m=-(M-1)/2}^{(M-1)/2} \; \sum_{n=-(N-1)/2}^{(N-1)/2} h^{*}(m, n)\, f(x+m, y+n), \qquad (1)$$
where h is the function of the defined model and h* denotes its complex conjugate. In our case, h is a real function (image); thus h* = h. f is the captured image that contains the metal transfer droplets. M and N indicate the size of the defined model. Since the droplet changes from frame to frame, the model is difficult to determine exactly, and the authors choose M = 9 and N = 9 here. For the experiment, the authors define the model as a disc with radius equal to 4, as shown in the centre of figure 3, based on observing the average size of the droplets. Figure 4 shows the resultant image of the correlation. According to correlation theory [20, 21], the point where the correlation response reaches its peak, (132, 214), is the position of the droplet.

Unfortunately, the droplet might not be the brightest part of the entire image and the brightness of the droplet is not uniformly distributed. It is thus difficult to define an effective droplet model for the droplets in all the images. As a result, the authors have not been successful in using this method to obtain adequate accuracy. For the metal transfer image shown in figure 2(b), the correlation peak is at (132, 214), while the actual position of the bottom droplet is at (143, 215) and the actual position of the top droplet is at (130, 218). This computed peak position does not match either of the droplets. It is possible that the edge correlation method might improve the accuracy [22, 23]. However, the resolution of the captured image is relatively poor. Hence, the authors decided not to use the correlation method.
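For reference, the correlation of equation (1) can be evaluated with standard numerical tools. The minimal sketch below (NumPy/SciPy, with an assumed disc model like that of figure 3) illustrates the approach that was evaluated and set aside here; it is not the authors' implementation, and the constant 1/MN factor is omitted because it does not move the peak.

```python
import numpy as np
from scipy.signal import correlate2d

def droplet_model(radius=4, size=9):
    """Disc-shaped droplet model h (cf. figure 3)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    return (x**2 + y**2 <= radius**2).astype(float)

def locate_by_correlation(frame, model):
    """Return the (row, col) peak of the cross-correlation of equation (1).
    The 1/MN normalization is dropped since it does not change the peak."""
    response = correlate2d(frame.astype(float), model, mode='same',
                           boundary='fill', fillvalue=0)
    return np.unravel_index(np.argmax(response), response.shape)
```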
4. Brightness-based separation algorithm

Based on the facts that droplets are brighter than most of the welding arc in the image and that adjacent frames are highly correlated, two algorithms are proposed to compute the position and size of the droplet. Before evaluating the proposed algorithms, the authors would like to introduce the idea of the histogram-based [24, 25] thresholding method.

Suppose that an image contains only two principal grey-level regions. Let z denote grey-level values. These values can be viewed as random quantities, and their histogram can be considered as an estimate of their probability density function (PDF) p(z). This overall density function is the sum or mixture of two densities, one for the light and the other for the dark regions in the image. Furthermore, the mixture parameters are proportional to the relative areas of the dark and light regions. If the form of the densities is known or assumed, it is possible to determine an optimal threshold T (in terms of a minimum error) for segmenting the image into two distinct regions. Figure 5 shows two probability density functions.

Figure 5. Mixture density function [26].

The mixture probability density function describing the overall grey-level variation in the image is
$$p(z) = P_1 p_1(z) + P_2 p_2(z), \qquad (2)$$
where P_1 and P_2 are the probabilities of occurrence of the two classes of pixels, respectively. They have the following relationship:
$$P_1 + P_2 = 1. \qquad (3)$$
The probability of erroneously classifying a point belonging to p_2(z) as a point belonging to p_1(z) is
$$E_1(T) = \int_{-\infty}^{T} p_2(z)\, dz. \qquad (4)$$
Similarly, the probability of erroneously classifying a point belonging to p_1(z) as a point belonging to p_2(z) is
$$E_2(T) = \int_{T}^{\infty} p_1(z)\, dz. \qquad (5)$$
Then the overall probability of error is
$$E(T) = P_2 E_1(T) + P_1 E_2(T). \qquad (6)$$
To find the threshold value for which this error is minimal requires differentiating E(T) with respect to T (using Leibniz's rule) and equating the result to zero. The result is
$$P_1 p_1(T) = P_2 p_2(T). \qquad (7)$$
This equation is solved for T to find the optimum threshold. The authors treat each density as a Gaussian density, which is completely characterized by two parameters: the mean and the variance. In this case,
$$p(z) = \frac{P_1}{\sqrt{2\pi}\,\sigma_1} \exp\!\left[-\frac{(z - \mu_1)^2}{2\sigma_1^2}\right] + \frac{P_2}{\sqrt{2\pi}\,\sigma_2} \exp\!\left[-\frac{(z - \mu_2)^2}{2\sigma_2^2}\right], \qquad (8)$$
where µ1 and σ1² are the mean and variance of the Gaussian density of p1(z) and µ2 and σ2² are the mean and variance of p2(z), respectively. To simplify the computation, the variances are assumed to be equal:
$$\sigma^2 = \sigma_1^2 = \sigma_2^2. \qquad (9)$$
Combining equations (7)–(9), the optimal threshold is computed as
$$T = \frac{\mu_1 + \mu_2}{2} + \frac{\sigma^2}{\mu_1 - \mu_2} \ln\frac{P_2}{P_1}. \qquad (10)$$
This histogram-based thresholding method will be used in the proposed algorithms with P1 and P2 obtained in advance.

The first algorithm proposed makes use of the high brightness of the droplets. It can be divided into six steps. The metal transfer image shown in figure 2(b) is used to illustrate the step-by-step results.

Step 1. Pre-processing

This step separates the objects of interest completely from the background, which is completely dark. To this end, the image is binarized:
$$f(x, y) = \begin{cases} 1, & \text{if } f(x, y) > T \\ 0, & \text{if } f(x, y) < T \end{cases} \qquad (11)$$
where the threshold T is determined using equation (10). Figure 6(b) shows the binarized image. The binarized image is then filtered to eliminate noise and ease the computation of the original position of the droplet. In this step, the largest bright area is selected as S and the filtered image f(x, y) is determined as
$$f(x, y) = \begin{cases} f(x, y), & \text{if } (x, y) \in S \\ 0, & \text{if } (x, y) \notin S \end{cases} \qquad (12)$$
The filtered image is shown in figure 6(c).
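As an illustration of equations (10)–(12), the following sketch (NumPy/SciPy) binarizes a frame with the optimal threshold and keeps the largest bright area. The function names, and the assumption that the class statistics µ1, µ2, σ and the priors P1, P2 are supplied in advance, are the writer's; this is not the authors' MATLAB implementation.

```python
import numpy as np
from scipy import ndimage

def optimal_threshold(mu1, mu2, sigma, p1, p2):
    """Equation (10): minimum-error threshold for two equal-variance
    Gaussian classes (mu1: dark-class mean, mu2: bright-class mean)."""
    return 0.5 * (mu1 + mu2) + (sigma ** 2 / (mu1 - mu2)) * np.log(p2 / p1)

def preprocess(image, mu1, mu2, sigma, p1, p2):
    """Step 1 sketch: binarize with equation (11) and keep only the
    largest bright area S, as in equation (12)."""
    T = optimal_threshold(mu1, mu2, sigma, p1, p2)
    binary = image > T                               # equation (11)
    labels, n = ndimage.label(binary)                # connected bright areas
    if n == 0:
        return np.zeros_like(image)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    largest = labels == (int(np.argmax(sizes)) + 1)  # the region S
    return np.where(largest, image, 0)               # equation (12)
```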
Figure 6. Pre-processing of the example image: (a) original image, (b) after binarizing, (c) after filtering.

Figure 7. Binarization towards droplet identification.

Figure 8. Filtered binarized image.
Figure 9. Binarizing and dilation example for a one-droplet image.

Step 2. Identification of the wire tip
The droplet is formed at and detached from the tip of the wire. In the image, it is located at the top of the welding arc. To identify it, the coordinates of the topmost point of the welding arc need to be calculated. Since the image has been binarized, the computation appears straightforward. Its position, denoted as p, can be computed using the equation below:
$$p = (\min(x),\ y) \quad \text{for } \hat{f}(x-1, y) = 0 \ \text{and} \ \hat{f}(x, y) = 1. \qquad (13)$$
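Equation (13) simply scans the binarized image for its topmost bright pixel. A minimal sketch (assuming x indexes image rows, as in figure 2(b)) might look as follows.

```python
import numpy as np

def wire_tip(binary):
    """Topmost bright point of the binarized arc, cf. equation (13).
    `binary` is the binarized image with values 0/1; x indexes rows."""
    xs, ys = np.nonzero(binary)
    i = int(np.argmin(xs))        # smallest row index = topmost pixel
    return xs[i], ys[i]
```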
Step 3. Binarization for droplet identification

In this step, the original image in the arc area is re-binarized towards the identification of the droplet. To this end, equation (11) is used again but with a new, higher threshold which is determined using equation (10) based on the histogram of the arc area. Figure 7 shows the resultant binarized image for the arc area in the example image shown in figure 2(b).

Step 4. Image filtering

Because multiple droplets may exist and the number of droplets is unknown, a pre-specified threshold is used to determine whether a particular bright area in the binarized image is a droplet or noise. The union of all bright areas which are larger than the threshold is defined as S, and equation (12) is then used to filter the image. The filtering result for figure 7 is demonstrated in figure 8.

Step 5. Image dilating [26–28]

At this point, the droplet has been identified approximately. The next step is to approximate the computed droplet as closely
as possible to the real droplet through dilation. Before dilation is discussed, three basic definitions are introduced.

The reflection of a set B, denoted as B̂, is defined as
$$\hat{B} = \{\, w \mid w = -b, \ \text{for } b \in B \,\}. \qquad (14)$$
The translation of a set A by a point z = (z_1, z_2), denoted as (A)_z, is defined as
$$(A)_z = \{\, c \mid c = a + z, \ \text{for } a \in A \,\}. \qquad (15)$$
The complement of a set A is the set of elements not contained in A:
$$A^c = \{\, w \mid w \notin A \,\}. \qquad (16)$$
The dilation of A by B, denoted as A ⊕ B, is defined as
$$A \oplus B = \{\, z \mid (\hat{B})_z \cap A \ne \emptyset \,\}, \qquad (17)$$
where A is the separated droplet in the filtered and binarized images and B is commonly referred to as the structuring element in dilation. This equation is based on obtaining the reflection of B about its origin and shifting this reflection by z. The dilation of A by B is then the set of all displacements, z, such that B̂ and A overlap by at least one element. Based on this interpretation, equation (17) can be rewritten as
$$A \oplus B = \{\, z \mid (\hat{B})_z \cap A \subseteq A \,\}. \qquad (18)$$
Since the object to be dilated is a disc-like droplet, the following B is used [29–31]:
$$B = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}.$$
The results of binarization and dilation are shown in figures 9 and 10, respectively. Figure 9 is an example for the one-droplet case and figure 10 is an example for the two-droplet case.
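A sketch of this dilation step, using SciPy's binary_dilation with the disc-like structuring element B, is given below; it is illustrative only and not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

# Cross-shaped (disc-like) structuring element B from Step 5.
B = np.array([[0, 1, 0],
              [1, 1, 1],
              [0, 1, 0]], dtype=bool)

def dilate_droplet(droplet_mask, iterations=1):
    """Dilate the separated droplet mask (set A) by B, equation (17)."""
    return ndimage.binary_dilation(droplet_mask, structure=B,
                                   iterations=iterations)
```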
Step 6. Size and position computation

The following equation is used to compute the droplet position (x_c, y_c) from the dilated image:
$$x_c = \frac{x_{\max} + x_{\min}}{2}, \qquad y_c = \frac{y_{\max} + y_{\min}}{2}, \qquad (19)$$
where x_min and x_max are the minimum and maximum values of the x coordinate of one droplet and y_min and y_max are the minimum and maximum values of the y coordinate of the same droplet in the dilated image. The equation used for computing the size, denoted as S_area, is given below:
$$S_{\mathrm{area}} = 0; \qquad S_{\mathrm{area}} = S_{\mathrm{area}} + 1, \quad \text{if } (x, y) \in S, \qquad (20)$$
where S represents the separated droplet in the dilated image.

The computed position of the droplet from the last image in figure 9 is (137, 216) and the actual position is (135, 215). The computed size is 88 and the actual size is 90. The computed positions for the top and bottom droplets shown in the last image in figure 10 are (131, 219) and (144, 218) respectively, and the actual positions for the top and bottom droplets are (131, 218) and (144, 215) respectively. However, the computed sizes for the top and bottom droplets are 86 and 51, respectively, while the actual sizes of the corresponding droplets are 50 and 47.

Figure 10. Binarizing and dilation example for a two-droplet image.

5. Brightness- and subtraction-based separation algorithm

The brightness-based separation algorithm has a major drawback because the brightness of the droplet is not uniformly distributed. This makes it difficult to compute the droplet size with high accuracy. To resolve this issue, the authors have tried to enhance the difference between the droplets and the other parts of the image. To this end, several popular image enhancement algorithms [32–46], e.g. histogram equalization [38–41], the Laplacian [42–45] and homomorphic filtering [46], have been tested. Unfortunately, all these algorithms failed due to the poor resolution of the captured image.

The second algorithm proposed is based on the first one, but it makes full use of the high correlation between adjacent frames. It is composed of eight steps. The first two steps are the same as those of the first algorithm. The remaining six steps are as follows.

Step 3. Subtracting

This step subtracts a reference frame, which can be chosen as the first frame of a life-cycle series of the metal transfer, from every frame. It is obvious that the adjacent frames are very similar and that the development of the droplets is the major cause of their small difference. If the difference from the reference image can be obtained, the development of the droplet can be monitored. To this end, the following equation has been defined to perform the subtraction:
$$\mathrm{Diff}_i(x, y) = f_i(x, y) - f_R(x, y), \qquad (21)$$
where f_i and f_R denote image frame i and the reference frame, respectively. Figure 11 shows an example of Diff_i. Please note that the reference frame should be chosen with the least droplet on it, e.g. the first frame in the life span of the droplet, to guarantee that the value of the droplet part is positive after subtraction.

Figure 11. Example of image Diff_i.

Step 4. Binarizing

In this step, Diff_i is binarized. Equation (11) is used again but with a new, higher threshold which is determined based on the histogram of the part of Diff_i with values greater than 0. Figure 12 shows the resultant binarized image. As can be seen, after binarization the droplet becomes much more obvious and the noise in Diff_i is reduced significantly.

Figure 12. Binarized Diff_i.
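Steps 3 and 4, together with the position and size computations of equations (19) and (20), can be sketched as follows (NumPy; the threshold is assumed to have been obtained from the histogram of the positive part of Diff_i as described above, and the function names are the writer's).

```python
import numpy as np

def droplet_difference(frame_i, frame_ref, threshold):
    """Equation (21) followed by Step 4: subtract the reference frame and
    keep pixels whose positive difference exceeds the given threshold."""
    diff = frame_i.astype(int) - frame_ref.astype(int)
    return diff > threshold

def droplet_position_and_size(mask):
    """Equations (19) and (20): bounding-box centre and pixel count of the
    separated droplet region `mask` (x indexes rows, y indexes columns)."""
    xs, ys = np.nonzero(mask)
    xc = (xs.max() + xs.min()) / 2.0
    yc = (ys.max() + ys.min()) / 2.0
    return (xc, yc), int(mask.sum())
```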
Figure 13. Filtering result of binarized Diff_i.

Figure 14. After region filling.

Figure 15. B1 dilated result.

Figure 16. B2 dilated result.
Step 5. Binarized image filtering

As can be seen, the droplet can now be clearly observed in the binarized image. In this step, the binarized Diff_i is filtered using equation (12). The result is shown in figure 13.

Step 6. Region filling and bilinear interpolating

There might be holes or cracks inside the filtered droplet. At this point, a region filling algorithm can be used to fill the holes and cracks. To this end, the part of the droplet with value 1 is defined as the boundary and the part with value 0 is defined as the region to be filled. Beginning with a point p inside the boundary, the objective of filling is to fill the entire region with 1s. If the convention that all non-boundary (background) points are labelled 0 is adopted, value 1 can be assigned to p to begin. The following procedure then fills the region with 1s:
$$X_k = (X_{k-1} \oplus E) \cap D^c, \qquad k = 1, 2, 3, \ldots, \qquad (22)$$
where X_0 = p, D denotes the filtered image and E is the symmetric structuring element. In our case, E is chosen as
$$E = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}.$$
This algorithm terminates at iteration step k if X_k = X_{k-1}; the set union of X_k and D then contains the filled set and its boundary. Figure 14 shows the droplet after region filling.

The bilinear interpolation method [47, 48] can also be used for filling the holes and cracks; its equation is defined as
$$f(x, y) = ax + by + cxy + d, \qquad (23)$$
where x and y denote the coordinates of the point on the droplet, and a, b, c and d are the coefficients. Using the four neighbours of the point in question, one can obtain the coefficients. Then, after substituting the coordinates of the point in question into equation (23), its pixel value f(x, y) can be computed. After every step of bilinear interpolation, if the pixel value is greater than 0.5, it is set to 1.

Step 7. Dilating

In this step, the separated droplets are dilated as much as possible towards the actual ones. Equation (18) is used again. In addition to adopting the previous disc structuring element
$$B_1 = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix},$$
another structuring element B_2 = 1 is introduced. The droplet is dilated separately using these two different structuring elements. Figure 15 shows the droplet dilated using the structuring element B_1 and figure 16 shows the droplet dilated using the structuring element B_2.

Step 8. Adaptive computing

The sizes of the dilated droplets computed with the two structuring elements B_1 and B_2, denoted as S_1 and S_2 respectively, are compared with the pre-knowledge of the size of the droplet. Here the authors choose the pre-knowledge of the size as 70.

• If S_2 < 70 < S_1: S = (S_1 + S_2)/2.   (24)
• If S_1 < 70: S = S_1.   (25)
• If S_2 > 70: S = S_2.   (26)
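A rough sketch of Steps 6–8 is given below. It substitutes SciPy's binary_fill_holes for the iteration of equation (22) (the same morphological idea, not the authors' code) and uses the example prior size of 70 pixels quoted above.

```python
import numpy as np
from scipy import ndimage

B1 = np.array([[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]], dtype=bool)   # disc-like structuring element
B2 = np.ones((1, 1), dtype=bool)         # single-point structuring element

def adaptive_droplet_size(droplet_mask, prior_size=70):
    """Steps 6-8 sketch: fill holes (cf. equation (22)), dilate with B1 and
    B2, then pick the size estimate following equations (24)-(26)."""
    filled = ndimage.binary_fill_holes(droplet_mask, structure=B1)
    s1 = int(ndimage.binary_dilation(filled, structure=B1).sum())
    s2 = int(ndimage.binary_dilation(filled, structure=B2).sum())
    if s2 < prior_size < s1:
        return (s1 + s2) / 2.0    # equation (24)
    if s1 < prior_size:
        return float(s1)          # equation (25)
    return float(s2)              # equation (26)
```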
Figure 17. Fifteen adjacent frames.

Figure 18. Frame 8 processing results.

Figure 19. Frame 9 processing result.
If there is no pre-knowledge of the size of the droplet, equation (23) is used directly to compute the droplet's size. A much more precise result can still be achieved. Equation (19) is used again to compute the position of the droplet. Here only the droplet dilated with the structuring element B1 is used to compute the position, since the result is the same no matter whether B1 or B2 is used. The computed position and size of the dilated droplet shown in figure 11 are (146, 208) and 66 respectively, while the actual position is (146, 208.5) and the actual size is 67.

5.1. Results and discussion

The authors have tested the second algorithm proposed in section 5 using a series of adjacent frames of metal transfer images. The series used is shown in figure 17. It contains 15 frames which describe the life span of a droplet. The first frame is the image when the droplet has just detached, and the following frames record the travel of the detached droplet. As can be seen, in addition to the travel from the wire towards the work piece, the droplet also shifts in the horizontal direction. In conventional GMAW, the droplet travels from the anode wire towards the cathode work piece because the detaching electromagnetic force points from the anode towards the cathode. In DE-GMAW, both the work piece and the bypass electrode are cathodes and there are two components of
detaching electromagnetic forces pointing from the wire to the bypass electrode and the work piece respectively. Hence, in DE-GMAW, a horizontal motion can be observed for the travelling droplet.

The image processing results for frame 8 are shown in figure 18. The first image in this figure is the reference image which is chosen as the first image in the image sequence. The separated droplet is shown in the last picture. Since the droplet is enhanced and separated, it is natural that it retains the original position. The error in position computation using equation (19) can only be caused by the shape of the droplet. Even if the size is not the same as the original one, the position can be computed correctly as long as the algorithm is symmetric in recovering the size. It is apparent that the separated droplet not only retains the original position, but also retains the shape. It can be seen that the obtained droplet shape is very similar to the original one. As the experimental results verified, the size of the droplet can also be restored correctly.

The image processing results for frames 9–13 in figure 17 are shown in figures 19–23, where the first image in each figure is the reference image, i.e. the first image in the image sequence. It can also be seen that all the separated droplets preserve their actual shapes well. This indicates that the size of the droplet can be recovered accurately. Table 1 shows these frames' actual positions and sizes against their computations. It is seen that the positions and sizes can be recovered with almost no error. Please note that the sizes in frames 12 and 13 are smaller because the droplet is landing on the work piece in these frames.
Table 1. Comparison of computed and actual droplets.

Index of frames   Size of droplet   Coordinates of droplet   Computed size   Computed coordinates   Error of size   Error of coordinates
Frame 8           67                (144.5, 209)             66              (144.5, 209)           1               (0, 0)
Frame 9           67                (146, 208.5)             68              (146, 208)             1               (0, 0.5)
Frame 10          69                (147.5, 208)             70              (147, 208)             1               (−0.5, 0)
Frame 11          75                (149, 207)               76              (149, 207)             1               (0, 0)
Frame 12          61                (149, 206.5)             61              (149, 207)             0               (0, 0.5)
Frame 13          36                (149.5, 205.5)           36              (149.5, 206)           0               (0, 0.5)
Figure 20. Frame 10 processing result.

Figure 21. Frame 11 processing result.

Figure 22. Frame 12 processing result.

Figure 23. Frame 13 processing result.

Testing shows that the proposed brightness- and subtraction-based separation algorithm needs less than 3 s to process one frame in MatLab (the computer used is a notebook computer with an Intel Pentium M 1.86 GHz processor and 1 Gbyte of RAM). Since the final real-time processing system will be programmed in C, the processing time will be significantly reduced. Luo et al [49] reported an over 100 times speed improvement of compiled C over interpreted MatLab. In [50], Tommiska et al stated 'This indicated compiled (i.e., C source) code runs approximately 400 times faster than interpreted (i.e., MatLab) code' in their case. In addition, a multiple parallel processor system can be used to further reduce the processing time if necessary. Hence, the proposed algorithm appears to have the potential to be used in on-line control of the metal transfer process.

6. Conclusions
Understanding, analysis and control of the GMAW process and its modifications rely on the availability of effective algorithms which can automatically process metal transfer images. This paper aims at developing an automated algorithm to process the metal transfer image in the DE-GMAW process. It was found that the traditional correlation method has difficulties in achieving reasonable accuracy from metal transfer images. The proposed brightness-based separation method can locate the droplets with adequate accuracy, but its accuracy in droplet size computation is affected by the fact that the brightness of the droplets is typically not uniform. To improve the accuracy, the algorithm is modified by taking advantage of the correlation of adjacent images. Experimental results show that the revised algorithm which is referred to as the brightness- and subtraction-based separation algorithm can achieve adequate accuracy for both the droplet location and droplet size. In addition, it is found that the proposed
algorithm has the potential to provide an adequate processing speed for use in real-time metal transfer control. Future work will focus on optimizing the algorithm and integrating it into a real-time metal transfer control system.
Acknowledgment

This research is funded by the National Science Foundation under grant CMMI-0355324.
References

[1] Saeed G and Zhang Y M 2003 Mathematical formulation and simulation of specular reflection based measurement system for gas tungsten arc weld pool surface Meas. Sci. Technol. 14 1671–82
[2] Kim J S et al 1996 A robust visual seam tracking system for robotic arc welding Mechatronics 6 141–63
[3] Tsai C H, Hou K H and Chuang H T 2006 Fuzzy control of pulsed GTA welds by using real-time root bead image feedback J. Mater. Process. Technol. 176 158–67
[4] Zhang G J, Yan Z H and Wu L 2006 Reconstructing a three-dimensional P-GMAW weld pool shape from a two-dimensional visual image Meas. Sci. Technol. 17 1877–82
[5] Zhang Y M and Li P J 2001 Modified active control of metal transfer and pulsed GMAW of titanium Weld. J. 80 54S–61S
[6] Amin M 1983 Pulse current parameters for arc stability and controlled metal transfer in arc welding Metal Constr. 15 272–8
[7] Jones L A, Eagar T W and Lang J H 1998 A dynamic model of drops detaching from a gas metal arc welding electrode J. Phys. D: Appl. Phys. 31 107–23
[8] Wang G, Huang P G and Zhang Y M 2004 Numerical analysis of metal transfer in gas metal arc welding under modified pulsed current conditions Metall. Mater. Trans. B 35 857–66
[9] Chakraborty S 2005 Analytical investigations on breakup of viscous liquid droplets on surface tension modulation during welding metal transfer Appl. Phys. Lett. 86 174104
[10] Wu C S, Chen M A and Lu Y F 2005 Effect of current waveforms on metal transfer in pulsed gas metal arc welding Meas. Sci. Technol. 16 2459–65
[11] Jones L A, Eagar T W and Lang J H 1992 Investigation of drop detachment control in gas metal arc welding Proc. 3rd Int. Conf. on Trends in Welding Research
[12] Yudodibroto B Y B et al 2006 Pendant droplet oscillation during GMAW Sci. Technol. Weld. Joining 11 308–14
[13] Rhee S and Kannatey-Asibu E 1992 Observation of metal transfer during gas metal arc welding Weld. J. 71 S381–6
[14] O'Brien R L 1991 Welding Handbook 8th edn, vol 2 (Miami, FL: American Welding Society)
[15] Kovacevic R and Zhang Y M 1997 Real-time image processing for monitoring of free weld pool surface ASME J. Manuf. Sci. Eng. 119 161–9
[16] Wren C, Azarbayejani A, Darrell T and Pentland A 1997 Pfinder: real-time tracking of the human body IEEE Trans. Pattern Anal. Mach. Intell. 19 780–5
[17] Beymer D and Konolige K 1998 Real-time tracking of multiple people using continuous detection SRI International Report, Artificial Intelligence Center
[18] Azarbayejani A and Pentland A 1996 Real-time self-calibrating stereo person tracking using 3D shape estimation from blob features Proc. ICPR, IEEE
[19] Li K, Chen J S and Zhang Y M 2007 Double-electrode GMAW process and control Weld. J. at press
[20] Wang D L and Terman D 1995 Image segmentation based on oscillatory correlation Proc. World Congress on Neural Net pp 521–5
[21] Savvides M, Vijaya Kumar B V K and Khosla P 2002 Face verification using correlation filters Proc. 3rd IEEE Automatic Identification Advanced Technologies pp 56–61
[22] Qian R J and Huang T S 1996 Optimal edge detection in two-dimensional images IEEE Trans. Image Process. 5 1215–20
[23] Marr D and Hildreth E 1980 Theory of edge detection Proc. R. Soc. Lond. B 207 187–217
[24] Coltuc D and Bolon Ph 1998 An inverse problem: histogram equalization Signal Processing IX, Theories and Applications, EUSIPCO'98 vol 2, pp 861–4
[25] Coltuc D and Bolon Ph 1999 Watermarking by histogram specification IEEE Proc. Image Processing 2 336–9
[26] Gonzalez R C and Woods R E 2002 Digital Image Processing 2nd edn (Reading, MA: Addison-Wesley)
[27] Nadadur D and Haralick R M 2000 Recursive binary dilation and erosion using digital line structuring elements in arbitrary orientations IEEE Trans. Image Process. 9 749–59
[28] Gil J Y and Kimmel R 2002 Efficient dilation, erosion, opening and closing algorithms IEEE Trans. Pattern Anal. Mach. Intell. 24 1606–17
[29] Razaz M and Hagyard D M P 1996 Morphological structuring element decomposition: implementation and comparison Signal Processing VIII, Theories & Applications vol 1, pp 288–91
[30] Zhuang X and Haralick R M 1983 Morphological structuring element decomposition IEEE Comput. Vision Graphics Image Proc. 35 370–82
[31] DeFatta D J, Lucas J G and Hodgkiss W S 1988 Digital Signal Processing: A System Design Approach (New York: Wiley) pp 305–15
[32] Woods R E and Gonzalez R C 1981 Real-time digital image enhancement Proc. IEEE 69 643–54
[33] Prewitt J M S 1970 Object enhancement and extraction Picture Processing and Psychopictorics ed B S Lipkin and A Rosenfeld (New York: Academic)
[34] Narendra P M and Fitch R C 1981 Real-time adaptive contrast enhancement IEEE Trans. Pattern Anal. Mach. Intell. 3 655–61
[35] Highnam R and Brady M 1997 Model-based image enhancement of far infrared images IEEE Trans. Pattern Anal. Mach. Intell. 19 410–5
[36] Gonzalez R C and Fittes B A 1977 Gray-level transformations for interactive image enhancement Mech. Mach. Theor. 12 111–22
[37] Centeno J A S and Haertel V 1997 An adaptive image enhancement algorithm Pattern Recognit. 30 1183–9
[38] Zhu H, Chan F H Y and Lam F K 1999 Image contrast enhancement by constrained local histogram equalization Comput. Vis. Image Underst. 73 281–90
[39] Stark J A 2000 Adaptive image contrast enhancement using generalizations of histogram equalization IEEE Trans. Image Process. 9 889–96
[40] Shafarenko L, Petrou M and Kittler J 1998 Histogram-based segmentation in a perceptually uniform color space IEEE Trans. Image Process. 7 1354–8
[41] Hummel R A 1974 Histogram modification techniques Technical Report TR-329. F-44620–72C-0062 (Computer Science Center, University of Maryland, College Park, MD)
[42] Piech M A 1990 Decomposing the Laplacian IEEE Trans. Pattern Anal. Mach. Intell. 12 830–1
[43] Kim J K, Park J M, Song K S and Park H W 1997 Adaptive mammographic image enhancement using first derivative and local statistics IEEE Trans. Med. Imag. 16 495–502
[44] Gunn S R 1998 Edge detection error in the discrete Laplacian of a Gaussian Proc. Int. Conf. Image Process. 2 515–9
[45] Gunn S R 1999 On the discrete representation of the Laplacian of a Gaussian Pattern Recognit. 32 1463–72
[46] Cannon T M 1974 Digital image de-blurring by non-linear homomorphic filtering PhD Thesis University of Utah
[47] Marion B, Rummel S and Anderberg A 2004 Current–voltage curve translation by bilinear interpolation Prog. Photovolt., Res. Appl. 12 1–16
[48] Knutsson H and Westin C F 1993 Normalized and differential convolution: methods for interpolation and filtering of incomplete and uncertain data Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition pp 515–23
[49] Luo F, Wu S J, Jiao L C and Zhang L R 2000 Implementation of de-noise DWT chip based on adaptive soft-threshold Int. Conf. Signal Process. Proc. 1 614–8
[50] Tommiska M, Loukola M and Koskivirta T 1999 An FPGA-based simulation and implementation of AAL type 2 receiver J. Commun. Netw. 1 63–7