A VBM SYSTEM FOR INTELLIGENT ROBOTIC WELDING

Cristiano Rafael Steffens∗, Bruno Quaresma Leonardo∗, Sidnei Carlos da Silva Filho∗, Marcio Rozante Aguiar∗, Valquiria Hüttner∗, Ygor Quadros de Aguiar∗, Silvia Silva da Costa Botelho∗, Vagner Santos Rosa∗
∗Center of Computational Science, Federal University of Rio Grande, Rio Grande, RS, Brazil
Emails:
[email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected],
[email protected]

Abstract— During the past few years, there have been many attempts to automate the welding process in a wide range of applications. Due to the rising demand in shipbuilding, automating the welding process is necessary to improve the productivity and quality of shipyards. Another rising trend in intelligent systems is the development of vision-based methods for automation, known as Vision-Based Measurement (VBM) systems. The present paper proposes a VBM system for the automation of intelligent robotic welding applications. The proposed system is based on a single visual sensor for image acquisition, a CMOS camera, and an FPGA (Field-Programmable Gate Array) development board, responsible for the adjustment of the robot parameters in real time. The image processing and the extraction of the welding parameters are performed on a standard computer. The paper presents the full stack development, discussing the implementation and results of the proposed solution.

Keywords— welding robots, automation, VBM, computer vision, image processing
1 Introduction
Many robot models are used in industrial environments for different kinds of applications, such as cargo transportation and surveillance, among other specific tasks. These applications are increasingly explored due to technological improvements, robot versatility and the decreasing cost of these robots (Leal, 2005)(Eda Turan and Ü., 2011). It is well known that Latin American shipyards are far behind their Asian competitors in terms of quality, production and safety levels (Rhodes and McDonald, 2015). One of the reasons is the lack of technology employed when compared to the emerging countries. One of the latest trends in technology, which can be used in a wide variety of automated applications, is Vision-Based Measurement (VBM) systems. Advances in both hardware and software technologies enable the development of ever cheaper, faster, higher-quality and smaller cameras and electronic devices. This means that vision-based methods can be implemented more easily and affordably than ever by using a camera and its associated operation units. Because it relies on electronic devices and computational intelligence, VBM, besides the level of automation it introduces into the application, is also typically faster and more accurate than manual or human-based techniques (Shirmohammadi and Ferrero, 2014). The Brazilian shipbuilding industry mostly uses analog robots for welding large steel plates. The main reasons are the high cost of upgrading the equipment and the inability of shop-floor workers to operate robots with more advanced technology, due to the lack of training and skilled people. In a shipyard, these robots are often used in ship block construction, for SAW (Submerged Arc Welding) tasks involving linear steel plates.
This process is performed by a longitudinal movement while following the groove between the plates. The control parameters of the welding equipment are adjusted during the operation (Corporation, 2014), which demands a high level of attention and skill from the operators, exposing them to the unhealthy environment where the welding process takes place. The quality of electric arc welds is highly dependent on the equipment configuration: voltage, current, tractor speed, torch positioning, wire feeding speed and torch weaving, among other parameters. When these parameters are not properly configured, the result may be plate warping, weld spatter, weld slag and/or fumes. One solution is to provide a higher level of control of the process, giving more independence to the operator. There is a clear need for a method that automatically provides all the necessary data from the process, applying the right settings for each welding operation. This work proposes a VBM system to recognize the groove geometry for welding systems. Using computer vision, the system estimates the 3D volume of the welding groove by mapping the groove geometry, acquiring the related parameters and gauging the control settings for the welding process. The system is tested and validated with an operational welding robot, the BUG-O Matic Weaver. The paper is organized as follows. In Sec. 2, we provide a quick overview of Vision-Based Measurement systems and present some of the previously proposed approaches for the welding automation problem. In Sec. 3 we present the implementation proposal, providing a detailed description of each of the image processing steps in the VBM system. In Sec. 4 we present a case study on a Bug-O MDS-1005 welding robot, highlighting the attached devices and their functionalities.
We provide a short overview of the signal acquisition and conditioning, the control module and the image acquisition using an FPGA board. In Sec. 5 the results of the implementation are presented. In Sec. 6 we discuss the current results and the future challenges of the proposed solution.

2 Robot-based architectures for Welding Processes
Automating the welding process is a challenging task. Many architectures for acquiring the welding plate characteristics and position have been proposed (Zhang et al., 2014). Min Young et al. (2000) propose a system in which the sensing apparatus is composed of three laser stripes and two lipstick cameras to detect the welding positions and to recognize the shape of the welding plates, using multi-structured light based on the optical triangulation method. Donghun et al. (2011) propose a system whose sensing apparatus contains a 3-DoF shock sensor, a laser distance sensor with an on/off sealing cap and a straight-type welding torch on the end-effector of the proposed 3P3R manipulator. As the industrial robot works in hazardous environments, the shock sensor can detect sudden impacts and prevent critical damage to the robot. Using the laser distance sensor, rather than a touch sensor, for the initial sensing of the V-shaped trajectories helps to reduce the required time, leading to a rise in efficiency and productivity. Other prior works in the area also use a laser beam projector to obtain a small line or a grid, as in Kawahara (1983), Drews et al. (1986) and Zhang et al. (2014).

3 A Computer Vision System for Welding Groove Identification
A VBM system usually consists of a visual sensor plus an operations unit (Fig. 1 presents a high-level architecture of a generic VBM system). The visual sensor can be a camera, laser scanner, x-ray scanner, or any other sensor from which an image of the physical scene containing the measurand can be obtained, and the operations unit can be implemented in software or hardware (Shirmohammadi and Ferrero, 2014). The proposed system uses image processing and machine vision to detect the welding groove and adjust the correct voltage, current and speed. As the system is implemented in hardware, the algorithms are carefully selected to maximize accuracy and performance. The machine vision implementation is structured in 4 consecutive steps:

1. Contrast enhancement
2. Noise removal
3. Straight line detection
4. Heuristics
Contrast enhancement

Image normalization allows us to improve the image contrast. Considering an 8-bit representation of a gray-scale image, we use min-max normalization to perform a linear transformation on the original data. Min-max normalization preserves the relationships among the original pixel values (Han et al., 2011). In our case, the darkest pixel in the image is mapped to 0 and the brightest pixel to 255. Once the input image is normalized, histogram equalization is performed to enhance its contrast and correct some image aberrations that occur due to uneven lighting across the scene. Histogram equalization spreads pixel values between a floor and a ceiling using a contrast remapping function, with the goal of creating a histogram with approximately equal bin counts, approaching a straight-line distribution (Krig, 2014).

Noise removal

While the normalization and histogram equalization produce good results in improving the overall contrast, they also accentuate the small imperfections already present in the observed scene. Therefore, a noise removal step is necessary. One of the simplest approaches to overcome the small imperfections observed on the steel plate being welded, and to avoid misdetections of the groove edges, is a linear mean filter. The mean filter gives equal coefficients to all the pixels in a defined neighbourhood. The image is, in this step, processed as a matrix of rows and columns, and the filter kernel is a square. The mean filter is very simple to implement and effectively removes the imperfections on the plate. As a drawback, nonetheless, it also acts as a low-pass filter, removing the high-frequency image details that are important for the line detection step. A larger kernel produces better noise suppression but results in further degradation of the image quality and edge blurring. Given that the edges play an important role in the groove detection, an alternative is the median filter. The median filter overcomes the main limitations of the mean filter at the expense of a greater computational cost, since, by definition, it requires the pixels in the neighbourhood to be ordered by value. As each pixel is addressed, it is replaced by the statistical median of its neighbourhood. In our application, the results using the median filter are superior to those given by the mean filter, removing the metal imperfections while preserving the high-frequency details.
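As an illustration of the two preprocessing steps described above, the sketch below chains min-max normalization, histogram equalization and median filtering with OpenCV. It is a minimal prototype, not the deployed implementation: the function name and the 5x5 median kernel size are assumptions made for the example.

```python
# Minimal preprocessing sketch (assumed OpenCV/NumPy environment on the host PC;
# the 5x5 median kernel is an illustrative choice, not the system's actual value).
import cv2
import numpy as np

def preprocess(gray: np.ndarray, ksize: int = 5) -> np.ndarray:
    """Contrast enhancement followed by median filtering on an 8-bit gray image."""
    # Min-max normalization: darkest pixel -> 0, brightest pixel -> 255.
    norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Histogram equalization spreads the intensity distribution across the full range.
    equalized = cv2.equalizeHist(norm)
    # Median filter: suppresses small plate imperfections while preserving edges.
    return cv2.medianBlur(equalized, ksize)

# Example usage:
# gray = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)
# filtered = preprocess(gray)
```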
Figure 1: High-level architecture of a Vision-Based Measurement system. Adapted from Shirmohammadi and Ferrero (2014).

Straight line detection

The camera position is orthogonal to the observed surface. Once the image is preprocessed, we can work on the line detection to determine the groove properties. Among previous approaches that focus on standard computer vision algorithms, such as Ma et al. (2010), Hou and Liu (2012) and Xu et al. (2012), the combination of a Canny edge detector and a Hough transform is the most common route. In order to identify straight lines, the Hough transform associates each pixel, in a polar parameter space, with the bundle of lines passing through it. However, as we intend to implement the machine vision step on low-cost equipment with real-time performance, we have to consider the algorithms' complexity and use a solution that minimizes the computational cost while still achieving good results. As evidenced in prior work by Risse (1989) and later by Hollitt (2009), the computational cost of Canny edge detection plus the cost of the classical Hough line transform can be over O(n^4), where n is the number of pixels in the analysed image. Using some simplification steps, the authors were able to reduce the complexity down to O(n^3 log n) (Hollitt, 2009), which is still not appropriate given the system restrictions. Therefore, the LSD algorithm, proposed in Von Gioi et al. (2010) and Von Gioi et al. (2012), has been chosen. LSD works on gray-scale images, detecting lines formed by edges, which in our case are given by surfaces with sudden light changes. LSD was designed not to require parameter adjustment (although the parameters do determine the algorithm's behavior), because their default values have been carefully tuned to work on a wide variety of images. Another advantage of LSD is its O(n log n) complexity, which makes it fast to execute (Von Gioi et al., 2012).
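A minimal sketch of how LSD can be invoked is shown below. It assumes an OpenCV build that ships createLineSegmentDetector (the detector has been absent from some OpenCV releases); the reference implementation by Von Gioi et al. could be wrapped instead, and the helper name is hypothetical.

```python
# Line segment detection sketch (assumes an OpenCV build that includes LSD;
# some releases omit createLineSegmentDetector, in which case the authors'
# reference implementation can be used instead).
import cv2
import numpy as np

def detect_segments(gray: np.ndarray) -> np.ndarray:
    """Return detected line segments as an (N, 4) array of (x1, y1, x2, y2)."""
    lsd = cv2.createLineSegmentDetector()
    lines, _width, _precision, _nfa = lsd.detect(gray)
    if lines is None:                     # no segments found
        return np.empty((0, 4), dtype=np.float32)
    return lines.reshape(-1, 4)

# Example usage (building on the preprocessing sketch above):
# segments = detect_segments(preprocess(gray))
```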
According to Von Gioi et al. (2012), LSD operates in three stages:

1. Divide the image into smaller regions (support regions), grouping connected pixels that share the same gradient angle, within a certain tolerance.
2. Find the line segments that best fit these smaller regions.
3. Validate or reject each line segment based on the image information supporting its region.

We use a scale factor, as proposed in Badali et al. (2005), to find the adequate configuration for the welding equipment, which depends on the groove width and bevel angle. The scale factor S is given by Eq. 1, where d_object represents the physical dimension of a given object in millimeters and d_image represents the size of the corresponding object in the image, in pixels.

S = d_{object} / d_{image} [mm/pixel]    (1)

Heuristics

Fig. 2 gives an insight into how the presented VBM system applies to the welding groove. First, a greedy non-maximum suppression (NMS) is used to determine which of the lines are the most likely edges. Once the possible groove edges are known, some heuristics are used to avoid false positives and false negatives that may occur when the welding robot is used on the shop floor. It is known that the steel plate thickness can vary from 13 mm up to 20 mm and that the groove angle can vary within 45-55 degrees. Also, the gap between the plates being welded (Gap B in the figure) has to be between 3 mm and 9 mm. Considering that the camera position is orthogonal to the groove, we can assume that both sides are symmetrical and therefore the distances of each point to the groove center have to be almost equal. Using this application-domain information we can establish threshold values to filter the outputs of the vision algorithms.
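To make the conversion and the heuristic filtering concrete, the sketch below applies Eq. 1 and the domain limits quoted above to a candidate groove. The gap and angle ranges come from the text; the symmetry tolerance and all function names are illustrative assumptions.

```python
# Sketch of the Eq. 1 scale conversion plus the domain heuristics used to reject
# implausible groove candidates. Gap and angle limits follow the text; the
# symmetry tolerance is an assumed illustrative value.
GAP_MM = (3.0, 9.0)        # allowed gap between the plates (Gap B), in mm
ANGLE_DEG = (45.0, 55.0)   # allowed groove angle, in degrees
SYM_TOL = 0.15             # assumed maximum relative left/right asymmetry

def to_mm(pixels: float, scale_mm_per_px: float) -> float:
    """Eq. 1: convert an image distance in pixels to millimeters."""
    return pixels * scale_mm_per_px

def plausible_groove(gap_px: float, angle_deg: float,
                     left_px: float, right_px: float,
                     scale_mm_per_px: float) -> bool:
    """Accept a groove candidate only if it satisfies the application-domain limits."""
    gap_mm = to_mm(gap_px, scale_mm_per_px)
    symmetric = abs(left_px - right_px) <= SYM_TOL * max(left_px, right_px, 1e-6)
    return (GAP_MM[0] <= gap_mm <= GAP_MM[1]
            and ANGLE_DEG[0] <= angle_deg <= ANGLE_DEG[1]
            and symmetric)
```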
Figure 2: Welding groove
Figure 5: Delta Sigma Modulator and robot speed setpoints
Figure 3: Welding robot BUG-O-MDS 1005 Weaver.
Figure 4: Equipment disposition layout.
4 Case study: Application of the VBM System on a Bug-O Matic Weaver
The VBM approach is used to control a linear welding robot. In this context, the proposed system is embedded in a hardware/software architecture associated with an MDS 1005 welding robot, shown in Fig. 3. Fig. 4 shows the physical disposition of the required equipment on the shop floor. In the proposed architecture the image acquisition is performed by a CMOS camera acting as the visual sensor. The operations unit is composed of an FPGA development board (Altera DE0-Nano) and a standard computer. Together, they are responsible for receiving the image acquired by the visual sensor, performing the image processing and adjusting the robot's parameters in real time. The camera (Terasic D5M) uses a standard Bayer color filter on the sensor. The Bayer-to-RGB and RGB-to-gray-scale conversions are performed on the FPGA board. The motor control of the robot is performed by an analog PID with feedback given by two different types of sensors: an incremental encoder for the longitudinal motion and a Hall sensor for the transverse one. The robot parametrization is provided by the adjustment of a set of potentiometers available to the weld operator. This demands one skilled person per welding robot to monitor its work, lowering the applicability and directly exposing the operator to the harsh environment.
From the robot construction point of view, many intrinsic limitations can be mentioned. Firstly, the need for an experienced weld operator fixed at each robot station, which in this case comprises only one robot, results in a low level of automation. Secondly, the robot parametrization is limited because of the imprecise small adjustments allowed by the set of potentiometers, requiring highly skilled operators to improve it. Last, but not least, the robot is unable to provide feedback on the quality of the weld: it gives no guarantees about the process being executed, relying only on the operator's visual analysis and at times resulting in the loss of entire workpieces. The acquisition of the speed set points for the weaver and tractor is performed by the 12-bit AD chip available on the FPGA board, and the encoder data are decoded using an 8-bit quadrature decoder module designed in VHDL. The speed set-point generation is performed by an 8-bit ∆Σ modulator, processing the data for the weaver and the tractor in parallel. The clock frequency is 1 MHz and the value assignments are shown in Fig. 5. For the tractor, the direction is set with an extra pin, so the full resolution of the modulator is used. For the weaver, the speed value already encodes the direction of the movement, so half of the resolution is used for each side.
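The mapping from an 8-bit control reference to an average output voltage can be illustrated with a small behavioral model of a first-order ∆Σ modulator. This is a simulation sketch only, not the VHDL running on the board, and the full-scale voltage V_REF is an assumed value chosen to approximate the measured outputs reported later in Tab. 1.

```python
# Behavioral model of an 8-bit first-order delta-sigma modulator (simulation
# sketch only; the real modulator is implemented in VHDL on the DE0-Nano).
V_REF = 3.3  # assumed full-scale voltage after the external low-pass filter

def delta_sigma_average(control_ref: int, cycles: int = 4096) -> float:
    """Average output voltage produced for an 8-bit control reference (0-255)."""
    acc = 0
    ones = 0
    for _ in range(cycles):
        acc += control_ref
        if acc >= 256:       # accumulator overflow emits a '1' pulse
            acc -= 256
            ones += 1
    # Averaging the pulse density approximates the analog low-pass filter.
    return V_REF * ones / cycles

# Example: delta_sigma_average(128) is close to V_REF / 2, in line with the
# mid-scale value measured in Tab. 1.
```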
4.1 DE0-Nano FPGA Board
The FPGA architecture consists of a set of programmable logic blocks and interconnection resources. The major characteristic of FPGA technology is the possibility of reconfiguration through the generation of a new bitstream, which is, in fact, a reallocation of the LUT contents and of the interconnections between the logic blocks. Besides the versatility provided by reconfiguration, an FPGA-based system also has the advantage of providing parallelism at run time, which ensures correct sampling instants of the acquired signals in constant time, similarly to a Real-Time Operating System (RTOS). The DE0-Nano FPGA board is a development platform designed for applications involving robots and portable projects. It features a Cyclone IV FPGA containing about 22 thousand logic elements, 594 Kbits of embedded memory, 66 18x18 multipliers and 153 I/O pins.
Table 1: Control reference values and generated speed set points

Control ref. | ∆Σ 8-bit input (binary) | Analog value
          0  | 00000000                | 0.008 V
          8  | 00001000                | 0.110 V
         32  | 00100000                | 0.389 V
         64  | 01000000                | 0.805 V
        128  | 10000000                | 1.64 V
        192  | 11000000                | 2.46 V
        224  | 11100000                | 2.89 V
        255  | 11111111                | 3.16 V

Figure 6: Overview of the Digital Control System.
The programming of this board is done with Altera's Quartus II software. One of several advantages of using the DE0-Nano board combined with the Terasic D5M camera is that, in a physical implementation, the assembly is designed to occupy minimal space in the receptacle of the robot.

4.2 Control module
The overview of the digital control system is shown in Fig. 6. The control module is the central unit of the FPGA controller. All tasks involving the control and parametrization of the camera, the Modbus module, the AD chip and the ∆Σ modulators are handled by this module. The execution order is to first read all the data signals from the decoders and any message received through the Modbus port. Based on that, actions are taken, such as stopping the camera acquisition, accelerating the robot, changing the reference trajectory, stopping or starting the manipulation of the robot, changing camera parameters or sending a message to the Modbus serial line.
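The execution order described above can be summarized with a high-level behavioral sketch. The real control module is written in VHDL; the objects and message kinds below (decoders, modbus, camera, robot) are hypothetical placeholders used only to illustrate the read-then-act sequence.

```python
# Behavioral sketch of one control-module iteration: read inputs first, then act.
# All names are illustrative placeholders; the actual module is VHDL on the FPGA.
def control_cycle(decoders, modbus, camera, robot):
    speeds = decoders.read()          # quadrature and Hall decoder values
    message = modbus.poll()           # pending Modbus request, if any
    if message is None:
        return
    if message.kind == "STOP_CAMERA":
        camera.stop()
    elif message.kind == "SET_SPEED":
        robot.set_speed(message.value)      # forwarded to the delta-sigma modulators
    elif message.kind == "SET_TRAJECTORY":
        robot.set_reference(message.value)  # change the reference trajectory
    elif message.kind == "SET_CAMERA_PARAM":
        camera.configure(message.value)
    else:
        modbus.reply(speeds)                # report current decoder readings
```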
5 Experimental Results
The FPGA input signals, such as the signals from the robot encoders, are simulated using the ModelSim tool. One of the experiments was the validation of the quadrature decoder values for the tractor forward movement. The speed values for the tractor and the weaver are generated by an 8-bit ∆Σ modulator. Tab. 1 presents the relation between the control reference, with values from 0 to 255, the ∆Σ values generated by the control module and the analog values used to set the velocities on the robot. Fig. 7 shows the results of the image processing and computer vision steps. Once the input image is acquired from the camera, the contrast enhancement and noise filtering steps are performed, resulting in an image that is then used to extract the line positions. The LSD algorithm is parameterless and adapts to different image characteristics. The line detection results are stable and satisfy the accuracy requirements, allowing the extraction of the welding groove angle.
Figure 7: Results of the image processing step, plus line segment detection and heuristics (b).
Once the line positions are known, the system uses the scale factor to convert the pixel-wise data into metric data, which is used to adjust the proper welding equipment settings. The groove properties are obtained using trigonometric functions.
6 Conclusion
This work presented a VBM approach for the dimensional measurement of metal bevels and its implementation with common hardware. The approach is based on image acquisition and processing using computer vision algorithms. The system processes and filters the image through normalization, histogram equalization and median filtering. The groove modeling is achieved through the use of the LSD algorithm to obtain straight lines, combined with knowledge of the plate thickness. The mapping of the groove model into welding settings was obtained through empirical testing. The robot control system was implemented on an FPGA board. The proposal has been tested and validated in a case study with the linear welding robot Bug-O Matic Weaver. To do so, the robot control was customized and integrated with the VBM. Tests involving the VBM and the MDS-1005 robot were performed, validating the proposal and its potential use.

Acknowledgment

The authors would like to thank CNPq (National Council for Scientific and Technological Development),
CAPES (Coordination for the Improvement of Higher Education Personnel) and FINEP (Funding Authority for Studies and Projects) for their financial support.

References

Badali, A. P., Zhang, Y., Carr, P., Thomas, P. J. and Hornsey, R. I. (2005). Scale factor in digital cameras, Photonics North 2005, International Society for Optics and Photonics, pp. 59692B–59692B.

Corporation, B.-O. S. (2014). Instructions and parts manual: Modular drive system. URL: http://www.bugo.com/administrator/files/downloadables/MDS_ipm_2_15_1423511987.pdf

Donghun, L., Namkug, K., Tae-Wan, K., Kyu-Yeul, L. and Youg-Shuk, S. (2011). Development and application of an intelligent welding robot system for shipbuilding, Robotics and Computer-Integrated Manufacturing 27: 377–388.

Drews, P., Frassek, B. and Willms, K. (1986). Optical sensor systems for automated arc welding, Robotics 2(1): 31–43.

Eda Turan, T. K. and Ü., K. (2011). Welding technologies in shipbuilding industry, The Online Journal of Science and Technology, TOJSAT 1(4): 24–30.

Han, J., Kamber, M. and Pei, J. (2011). Data Mining: Concepts and Techniques, The Morgan Kaufmann Series in Data Management Systems, Elsevier Science. URL: https://books.google.com.br/books?id=pQws07tdpjoC

Hollitt, C. (2009). Reduction of computational complexity of Hough transforms using a convolution approach, Image and Vision Computing New Zealand, 2009. IVCNZ'09. 24th International Conference, IEEE, pp. 373–378.

Hou, X. and Liu, H. (2012). Welding image edge detection and identification research based on Canny operator, Computer Science & Service System (CSSS), 2012 International Conference on, IEEE, pp. 250–253.

Kawahara, M. (1983). Tracking control system using image sensor for arc welding, Automatica 19(4): 357–363.

Krig, S. (2014). Ground truth data, content, metrics, and analysis, Computer Vision Metrics, Springer, pp. 283–311.

Leal, R. D. G. (2005). Impactos sociais e econômicos da robotização: Estudo de caso do projeto Roboturb.

Ma, H., Wei, S., Sheng, Z., Lin, T. and Chen, S. (2010). Robot welding seam tracking method based on passive vision for thin plate closed-gap butt welding, The International Journal of Advanced Manufacturing Technology 48(9-12): 945–953.

Min Young, K., Kuk-won, K., Hyung, S. C. and Jaehoon, K. (2000). Visual sensing and recognition of welding environment for intelligent shipyard welding robots, International Conference on Intelligent Robots and Systems.

Rhodes, T. and McDonald, G. (2015). Brazilian shipyards: industry in crisis or growing pains? URL: http://www.lexology.com/library/detail.aspx?g=1e86cb7b-f8b1-4e8a-8bfd-c97cb7839b4d

Risse, T. (1989). Hough transform for line recognition: complexity of evidence accumulation and cluster detection, Computer Vision, Graphics, and Image Processing 46(3): 327–345.

Shirmohammadi, S. and Ferrero, A. (2014). Camera as the instrument: the rising trend of vision based measurement, Instrumentation & Measurement Magazine, IEEE 17(3): 41–47.

Von Gioi, R. G., Jakubowicz, J., Morel, J.-M. and Randall, G. (2012). LSD: a line segment detector, Image Processing On Line 2(3): 5.

Von Gioi, R., Jakubowicz, J., Morel, J.-M. and Randall, G. (2010). LSD: A fast line segment detector with a false detection control, Pattern Analysis and Machine Intelligence, IEEE Transactions on 32(4): 722–732.

Xu, Y., Yu, H., Zhong, J., Lin, T. and Chen, S. (2012). Real-time seam tracking control technology during welding robot GTAW process based on passive vision sensor, Journal of Materials Processing Technology 212(8): 1654–1662.

Zhang, L., Ke, W., Ye, Q. and Jiao, J. (2014). A novel laser vision sensor for weld line detection on wall-climbing robot, Optics & Laser Technology 60: 69–79.