Visual Programming of Robots

Gabor Sziebig, Peter Zanaty
Department of Automation and Applied Informatics
Budapest University of Technology and Economics
[email protected],
[email protected]
Abstract. Robot programming methodologies are moving from the traditional "teach" and "offline" approaches towards more human-friendly, rapid and flexible programming alternatives. In this paper a new Visual Robot Programming Methodology is introduced. The proposed programming methodology physically disconnects the robot programmer from the robot itself. Thus, the operator can move freely around in the production facilities and identify necessary robot paths on individual work-pieces. Furthermore, the methodology is human-friendly and rapid: it can automatically simulate the robot path and check for any errors (singularities). The first experimental results of applying the proposed methodology to the remote control of a mobile robot are also presented.

Keywords: Robot Programming; 3D Visualization; Augmented Reality; Motion Capture
1 Introduction

Typically, industrial robots are used as transporting devices (material handling of work-pieces between machines) or in some kind of additive (e.g. welding, painting) or subtractive (e.g. grinding, deburring) manufacturing process. The industrial robot controller also has good I/O communication capabilities and often acts as cell controller in a typical set-up of a flexible manufacturing cell or system. Thus, in advanced manufacturing systems the industrial robot serves as a key component for coordinated control and effective utilisation of the complete production unit.

In manufacturing engineering, man-machine interaction has moved from typical online programming techniques to virtual-reality-based offline programming methodologies [1]. Today, a wide range of offline software tools is used to imitate, simulate and control real manufacturing systems [2][3][4][5]. However, these new methodologies also lack capability when it comes to human-machine communication. Today, communication with the virtual environment is typically done via a keyboard/mouse interface, while feedback to the operator is given on the computer screen. These desktop reality systems use only a limited spectrum of human senses, and the sensation of "being inside" the system is quite low. Programming is still done on the premises of the machines. As humans receive most of their information visually, and a significant part of the neurons in the human brain is dedicated to visual processing, probably the best way of programming robots is visual programming, without any restrictions.
Previously, visual programming was only used in computer programming [6][7], but with new series of robots and robot controllers a new era in robot programming has started [8]. As mentioned above, these solutions are usually limited to keyboard/mouse input and feedback [9]. Thus, this article focuses on a new interactive human robot programming methodology where the knowledge and flexibility of the human operator is combined with information from CAD models and simulations of real robot movements. This 3D interactive environment is especially suitable for planning robot machining operations. The organization of the paper is as follows: Section 2 gives an overview of the programming methodology, Section 3 presents the very first steps of introducing the new robot programming methodology together with experimental results, and Section 4 concludes the paper.
2 Overview

By introducing modern technologies, such as motion capturing and augmented reality principles, with the goal of adding human representations to the programming environment, it becomes possible to create a new programming concept where the human operator can interact with machines in a cognitive manner. Thus, the key idea of the proposed methodology is to capture the knowledge of a skilled operator and make an automatic knowledge transfer to a robot system. Figure 1 summarizes the conceptual overview of the proposed methodology.
Figure 1: Overview of methodology
The proposed methodology allows a human operator to freely move around in an industrial environment (e.g. a shop-floor) and identify the work-pieces that need to be modified by some robot action. As mentioned before, a typical situation could be a robot grinding process following a molding process where the work-pieces suffer from some irregularities (burrs). By sight, the human operator can very easily identify the problem area, but cannot exactly quantify the error in terms of the material removal necessary to reach the ideal final geometry. As will be seen, the proposed interactive robot programming methodology accounts for both of these issues. The proposed methodology consists of the following steps:
Step I: After work-piece identification, the next step in interactive robot programming is to let the operator decide what kind of machining action is necessary. Naturally, larger modifications with several machining passes require a more detailed analysis and often lead to selecting machinery other than a robot for the machining process (e.g. a milling or turning machine). On the other hand, if the cutting depth and material removal are moderate, the robot can serve as an excellent alternative.
Step II: Assuming robot machining is selected, the next step is to capture the machining path. Here it is proposed to use a motion tracking suit (e.g. the ShapeWrap© III Motion Capture System from Measurand Inc.). A motion capture suit is easy to calibrate, wireless, and can be operated wherever a wireless network is available. For industrial usage, the freedom to move around in an ever-changing environment like a shop-floor is rated above the (possibly) higher accuracy of a camera-based motion tracking system. In the proposed methodology the human operator can simply move his finger, hand, or arm over the region of interest while the motion capture system stores all the movements, as shown in Section 2.1.
Step III: The path from the motion tracking system must be transferred onto the ideal work-piece (e.g. a CAD model) in order to verify that the path is actually located as expected. The path is automatically projected (as a line) onto the surface of the CAD model. Transferring the robot path directly onto the CAD model effectively defines what the final end-geometry should look like after the manufacturing process; this reduces the risk of cutting into the final geometry. Further, the line from the CAD model is fed back to the operator either through a (handheld) computer screen or, as suggested, via an augmented reality system in the form of a Head Mounted Display (HMD). This instant feedback makes it possible for the operator to relocate his path according to what he sees on the real work-piece. In this paper the usage of a see-through HMD device for feedback is focused upon. By placing the operator "into the loop" of adjusting/relocating the generated path, the accuracy requirements of the motion capturing system are minimized, as shown in Section 2.3.
Step IV: Before machining can start, the work-piece base coordinate system must be established with respect to the robot base coordinate system. The accuracy of the machining process will greatly depend on the quality of this work. Simulations of the complete robot motions can also be undertaken in order to check for singularities, out-of-range limitations, and possible collisions, as shown in Section 2.4.
Step V: Finally the machining process can be executed.
2.1 Motion Capture

A motion capturing system is used to capture the movement of the human operator. The result of the motion capturing is a 3D curve, represented as points, in the manufactured work-piece's coordinate system $K_w$. To achieve this, the relationship between the work-piece coordinate system and the motion capture suit must first be established. The endpoint of the operator's index finger is stored as input for all calculations. However, the elastic properties of human fingers require further processing of the collected data. Typically, for the most significant (largest) geometries, substitute (ideal) geometries are created based on finger point measurements. The creation of such a substitute geometry is done by letting an operator identify what kind of ideal geometry should be created (plane, cylinder, sphere, cone etc.). Then, based on a minimum number of measurement points, the geometry is created in such a manner that the squared sum of the distances to each measurement point is minimized, as shown in Eq. 1 (Gaussian least squares methodology) [10]. These substitute geometries are given a vector description with a position and a directional (if applicable) vector.

$$\sum_{i=1}^{m} l_i^2 \rightarrow \min \qquad (1)$$
where $i$ is the index of the measurement point and $l_i$ is the distance from the measurement point to the geometry. When establishing $K_w$, typically a directional vector from the most widespread geometry is selected as one of the directional cosines $X_w$, $Y_w$, or $Z_w$. The next axis can be defined by the intersection line of two substitute geometries, while the final third is found as the cross product of the first two directional cosines. The origin of $K_w$ is typically located at the intersection of three substitute geometries. Figure 2 demonstrates this methodology for a given work-piece. Once $K_w$ is established, the operator can turn to defining the path where the machining action should take place. All the finger movements are calculated and stored with reference to the work-piece coordinate system.
Figure 2: Defining a plane
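To illustrate the least-squares fitting of Eq. 1, the following is a minimal Python sketch for the plane case: it fits a substitute plane to a set of fingertip measurement points by minimizing the squared point-to-plane distances. The SVD-based solution and all names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to N >= 3 fingertip measurement points (N x 3 array).

    Minimizes the sum of squared point-to-plane distances (Eq. 1) and
    returns the plane as (centroid, unit normal).
    """
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered point cloud is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def residuals(points, centroid, normal):
    """Signed point-to-plane distances l_i from Eq. 1."""
    return (points - centroid) @ normal

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Noisy fingertip samples roughly on the plane z = 0.1
    pts = rng.uniform(0, 1, size=(20, 3))
    pts[:, 2] = 0.1 + 0.002 * rng.standard_normal(20)
    c, n = fit_plane(pts)
    print("normal:", n, "sum l_i^2:", (residuals(pts, c, n) ** 2).sum())
```

Cylinders, spheres and cones require nonlinear least squares, but the plane case above already shows the principle of Eq. 1.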
2.2 Operator feedback

The generated 3D path is automatically projected (as a line) onto the surface of the CAD model. This line is shown to the operator via an HMD unit or via a handheld screen. The key point is to let the operator see both the real and the ideal work-piece at the same time and let him modify the path based on his visual feedback. Figure 3 depicts the line projection on a given surface.
Figure 3: Line projection over CAD model
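As a sketch of the projection step, the snippet below projects captured path points onto a locally planar patch of the surface; a real implementation would project onto the triangulated CAD mesh instead. The planar assumption and the function name are illustrative.

```python
import numpy as np

def project_path_onto_plane(path, point_on_plane, normal):
    """Project N x 3 captured path points onto a plane along its normal."""
    n = normal / np.linalg.norm(normal)
    # Subtract the normal component of each point's offset from the plane.
    offsets = path - point_on_plane
    return path - np.outer(offsets @ n, n)
```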
2.3 Task execution

Before machining can start, the path must be re-stored with reference to the robot base coordinate system $K_0$. A transformation matrix is used to describe the relationship between the two coordinate systems. The matrix $T_w^0$ consists of the directional cosines $x_w$, $y_w$, $z_w$ and the position vector $p_w^0$ representing the location of the origin of $K_w$ in $K_0$ coordinates. The $x_w$, $y_w$ and $z_w$ are defined from a selection of well-known basic geometric elements (on the ideal work-piece), and these elements are re-found on the actual work-piece by using the robot as the measuring or probing device, following in general the same methodology as in Section 2.1. Figure 4 shows such a probing procedure.
Figure 4: Probing the manufactured work-piece
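A minimal sketch of this coordinate change, assuming the probed unit vectors $x_w$, $y_w$, $z_w$ and the origin $p_w^0$ are already expressed in $K_0$; the helper names are illustrative.

```python
import numpy as np

def make_T_w0(x_w, y_w, z_w, p_w0):
    """Build the 4x4 homogeneous transform mapping K_w points into K_0."""
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x_w, y_w, z_w  # directional cosines as columns
    T[:3, 3] = p_w0                               # origin of K_w in K_0
    return T

def to_robot_base(path_w, T_w0):
    """Transform an N x 3 path from work-piece to robot base coordinates."""
    homog = np.hstack([path_w, np.ones((len(path_w), 1))])
    return (homog @ T_w0.T)[:, :3]
```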
2.4 Supervision

In order to check for singularities, out-of-range limitations, and possible collisions, the proposed methodology suggests simulating the robot path in an offline programming environment. If any of the above-mentioned constraints occurs, the work-piece should be relocated. To speed up the relocation process it is suggested to use a turntable that can relocate the work-piece by a predefined value, making it possible to automatically keep track of the movement of the work-piece coordinate system. After relocation a new simulation should be carried out. Figure 5 shows a robot simulation of a motion-captured path.
3 Mobile robot control in Mixed Reality

Before applying the methodology to a complex machine (e.g. an industrial robot, which has six degrees of freedom), as a first step the control of a mobile robot, which has only two degrees of freedom, was implemented and evaluated. Already at this stage, the following problems occurred and had to be compensated:
• Positioning error of the real robot (the robot position is calculated from different measurement devices)
Figure 5: Simulating robot task
• Time delay in the underlying network connection

• Positioning error of the virtual robot, caused by the time delay

As mentioned before, humans receive most of their information visually, and 3D visualization of information is probably the best form of feedback to the operator. This can be achieved using active stereoscopic rendering, where the computer system generates one image per eye in order to enhance the depth perception in the virtual world [11]. This type of rendering is supported by special graphics hardware and software, which can draw into four buffers instead of the usual two and can display eye-specific images in sync with LCD shutter glasses. In order to utilize the capabilities of this special hardware, a well-known open-source graphical engine for image rendering, called OGRE 3D [12], was modified and adapted. There are several projects that use OGRE for 3D visualization, but the framework has no native support for stereo rendering. In order to enable the stereo rendering mode, the OpenGL rendering subsystem of OGRE was altered and quad-buffer support was introduced [13]. The application developed for providing feedback to the operator is the virtual equivalent of the Hashimoto laboratory in Tokyo, which also hosts iSpace [14]. The real laboratory and its virtual equivalent, rendered in Hungary, are shown in Figure 6. Remote control experiments were executed between Hungary and Japan during March 2008.
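The quad-buffer mechanism described above can be sketched as follows in PyOpenGL/GLUT rather than the authors' modified OGRE code base; draw_scene() and the eye separation value are illustrative assumptions.

```python
from OpenGL.GL import (GL_BACK_LEFT, GL_BACK_RIGHT, GL_COLOR_BUFFER_BIT,
                       GL_DEPTH_BUFFER_BIT, GL_MODELVIEW, glClear,
                       glDrawBuffer, glLoadIdentity, glMatrixMode, glTranslatef)
from OpenGL.GLUT import (GLUT_DEPTH, GLUT_DOUBLE, GLUT_RGB, GLUT_STEREO,
                         glutInit, glutInitDisplayMode, glutSwapBuffers)

EYE_SEPARATION = 0.065  # assumed inter-ocular distance in metres

def init_stereo_context():
    # Request a quad-buffered visual: double buffering for BOTH eyes,
    # i.e. four colour buffers in total (front/back x left/right).
    glutInit([])
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO)

def render_stereo_frame(draw_scene):
    # Render one eye-specific image into each back buffer; the swap then
    # presents both halves in sync with the LCD shutter glasses.
    for buffer, x_offset in ((GL_BACK_LEFT, -EYE_SEPARATION / 2),
                             (GL_BACK_RIGHT, +EYE_SEPARATION / 2)):
        glDrawBuffer(buffer)
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        glTranslatef(-x_offset, 0.0, 0.0)  # shift the camera for this eye
        draw_scene()
    glutSwapBuffers()
```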
3.1 Position prediction
The network connection between Hungary and Japan has an average round-trip latency of 300 milliseconds.
Figure 6: (a) Real laboratory in Japan (b) Virtual laboratory in Hungary

In order to compensate for this delay, the virtual robot's position must be predicted based on the parameters of its motion. Without this compensation, the network delay would result in an incoherent and fuzzy image of the virtual robot in Hungary. Before this compensation takes place, the positional data from the real robot's positioning system needs to be smoothed. To compensate for measurement noise, and thus the positional error of the ultrasonic positioning system and the wheel encoders, an Extended Kalman Filter (EKF) was used; its implementation is described in detail in [15]. The combination of different sensor data (ultrasonic positioning system, laser range finders and the robot's wheel encoders) is required in order to reduce position measurement errors. Raw sensor data is collected and processed (applying the EKF) by different components and is sent to the main computer, which handles the communication over the internet. Network latency resulted in unacceptable visualization and inadequate control, mainly due to the uncertainty in the time delay, which was considerably high because of the wireless subnet of the real robot.
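One predict/update cycle of such an EKF can be sketched as below, for the unicycle motion model introduced in Eq. 2 with an (x, y) position measurement. This is a heavily simplified stand-in for the sensor fusion detailed in [15]; the noise matrices and structure are assumptions.

```python
import numpy as np

def ekf_step(state, P, v, omega, z, dt, Q, R):
    """One EKF cycle. state = [x, y, theta], z = measured [x, y]."""
    x, y, theta = state
    # Predict with the discretized unicycle motion model.
    state_pred = np.array([x + v * np.cos(theta) * dt,
                           y + v * np.sin(theta) * dt,
                           theta + omega * dt])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1.0, 0.0, -v * np.sin(theta) * dt],
                  [0.0, 1.0,  v * np.cos(theta) * dt],
                  [0.0, 0.0,  1.0]])
    P_pred = F @ P @ F.T + Q
    # Update with the position measurement; H selects x and y.
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    state_new = state_pred + K @ (z - H @ state_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return state_new, P_new
```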
The following state equations were introduced to describe the position and the orientation of the robot:

$$\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} v\cos(\theta) \\ v\sin(\theta) \\ \omega \end{bmatrix} \qquad (2)$$
where $x$ and $y$ are the 2D coordinates and $\theta$ is the position-angle of the robot at any given time in the coordinate system tied to the floor of the room. Variable $v$ denotes the speed of the robot and $\omega$ is the angular velocity around the center of the robot. Solving Eq. 2 in small discrete time intervals can predict the position of the robot, provided that the change in speed and angular velocity is not abrupt. In our case this latter condition holds, since commands are sent to the robot over the internet in discrete time intervals, and the position of the local (virtual) robot is predicted during these short intervals when the speed and the angular velocity are not changing. The discrete form of Eq. 2 can be integrated under this assumption. During a time-step $\Delta t$ both the speed and the angular velocity remain constant, therefore the position-angle for $\tau \in [0, \Delta t]$ can be calculated as:
$$\theta(t+\tau) = \theta(t) + \omega(t)\tau = \theta(t) + \omega\tau \qquad (3)$$

Using Eq. 3, the integration of Eq. 2 is straightforward:

$$\begin{bmatrix} x(t+\Delta t) \\ y(t+\Delta t) \\ \theta(t+\Delta t) \end{bmatrix} = \begin{bmatrix} x(t) + \frac{v}{\omega}\left(\sin(\theta(t)+\omega\Delta t) - \sin(\theta(t))\right) \\ y(t) - \frac{v}{\omega}\left(\cos(\theta(t)+\omega\Delta t) - \cos(\theta(t))\right) \\ \theta(t) + \omega\Delta t \end{bmatrix} \qquad (4)$$
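The closed-form step of Eq. 4 is easy to get wrong (the sign of the $y$ term in particular), so a quick numerical check against fine Euler integration of Eq. 2 is worthwhile; the values below are purely illustrative.

```python
import numpy as np

def step_closed_form(x, y, theta, v, omega, dt):
    """One step of Eq. 4 (omega assumed non-zero)."""
    return (x + v / omega * (np.sin(theta + omega * dt) - np.sin(theta)),
            y - v / omega * (np.cos(theta + omega * dt) - np.cos(theta)),
            theta + omega * dt)

def step_euler(x, y, theta, v, omega, dt, n=10000):
    """Integrate Eq. 2 with n small Euler sub-steps."""
    h = dt / n
    for _ in range(n):
        x += v * np.cos(theta) * h
        y += v * np.sin(theta) * h
        theta += omega * h
    return x, y, theta

print(step_closed_form(0, 0, 0, v=0.3, omega=0.5, dt=0.3))
print(step_euler(0, 0, 0, v=0.3, omega=0.5, dt=0.3))
```

Both calls print (numerically) identical positions, confirming the integration.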
Based on Eq. 4, a position predictor can be constructed which predicts the robot's real response to the given commands. The predictor maintains the state of the virtual robot based on synchronisation messages from the real robot in Japan and on the issued control commands. It also estimates the time delay based on the packet round-trip time and only modifies the state of the virtual robot after the estimated time has elapsed. Commands that no longer affect the prediction are removed from the list, and the estimated effect time of the first command is set equal to the time of the synchronisation. The proposed predictor is summarized in Figure 7.
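A sketch of such a predictor is given below: it queues issued commands, applies each one only after the estimated one-way delay, and advances the virtual robot with the Eq. 4 step. The class structure, attribute names and default delay are illustrative assumptions, not the paper's code.

```python
import time
from math import sin, cos

class PositionPredictor:
    """Dead-reckoning predictor for the virtual robot (cf. Figure 7)."""

    def __init__(self, x=0.0, y=0.0, theta=0.0, delay=0.15):
        self.x, self.y, self.theta = x, y, theta
        self.v, self.omega = 0.0, 0.0
        self.delay = delay    # estimated one-way latency in seconds
        self.pending = []     # (effect_time, v, omega) commands in flight

    def send_command(self, v, omega):
        # A command issued now is assumed to take effect on the real
        # robot only after the estimated one-way delay.
        self.pending.append((time.time() + self.delay, v, omega))

    def synchronise(self, x, y, theta, v, omega):
        # Overwrite the local state from a synchronisation message of the
        # real robot; commands already reflected in it are dropped.
        self.x, self.y, self.theta = x, y, theta
        self.v, self.omega = v, omega
        self.pending = [c for c in self.pending if c[0] > time.time()]

    def advance(self, dt):
        # Apply any command whose estimated effect time has passed, then
        # integrate Eq. 4 over dt to move the virtual robot forward.
        now = time.time()
        while self.pending and self.pending[0][0] <= now:
            _, self.v, self.omega = self.pending.pop(0)
        if abs(self.omega) > 1e-9:
            self.x += self.v / self.omega * (
                sin(self.theta + self.omega * dt) - sin(self.theta))
            self.y -= self.v / self.omega * (
                cos(self.theta + self.omega * dt) - cos(self.theta))
        else:  # straight-line limit of Eq. 4 as omega -> 0
            self.x += self.v * cos(self.theta) * dt
            self.y += self.v * sin(self.theta) * dt
        self.theta += self.omega * dt
```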
Figure 7: Simple position predictor

Two different types of path were executed in order to verify the behaviour of the predictor. In the following figures the real and the virtual paths are compared. The first test case is shown in Figure 8(a). It demonstrates the mobile robot following a variable curve. From the achieved results it can be clearly seen that the virtual robot reacts faster to commands than the real robot. This is caused by the fact that the applied predictor does not take into account the acceleration model of the real robot, and also by the prediction of the time delay of robot command execution. However, the overall mean deviation is lower than 74.1 mm, which is an acceptable error for a very fast and simple predictor. In the second test case (see Figure 8(b)) the robot was driven to the predefined positions (−1, −0.9), (−1.2, −1.7), (0.1, −0.6), (0.6, −0.8) and (1, −1.4), where the first coordinate is the x and the second is the y coordinate, respectively.
Figure 8: (a) First test case (variable curve following of the mobile robot) (b) Second test case (destination approach of the mobile robot)

It can be observed that in this case the predictor works better, which is the result of the mainly linear motion. The mean deviation in this case is lower than 44 mm. The current system setup only considers delays in one direction: the robot is shown at its estimated current position in real time, but the control is still delayed according to the image and the one-way latency.
4 Conclusion

In this paper the Visual Robot Programming Methodology has been presented. The proposed system makes robot programming human-friendly, rapid and flexible. The methodology uses motion capturing and 3D feedback to the operator, which enables the operator to immediately adjust his actions. The generated path can be simulated before machining in order to account for any limitations in the real robot movements. The operator is physically disconnected from the robot and can move freely around in any shop-floor environment. Preliminary results of the methodology show that 3D visual feedback is highly efficient in interactive environments.
Acknowledgments

The authors would like to express their thanks to Prof. Peter Korondi, Prof. Bjørn Solvang and Prof. Zsolt Frei for their support as scientific advisors. This work has been supported by the National Science Research Fund (OTKA K62836) and by the Norwegian Governmental Scholarship Fund.
References

[1] A. Sett and K. Vollmann, Computer based robot training in a virtual environment, in Proc. of IEEE International Conference on Industrial Technology (ICIT'02), vol. 2, pp. 1185–1189, Dec 2002.

[2] A. Jaramillo-Botero, A. Matta-Gomez, J. Correa-Caicedo, and W. Perea-Castro, Robomosp, IEEE Robotics & Automation Magazine, vol. 13, no. 4, pp. 300–302, 2006.

[3] K. Chang-Sei, H. Keum-Shik, H. Yong-Sub, K. Soo-Ho, and K. Soon-Chang, PC-based off-line programming using VRML for welding robots in shipbuilding, in Proc. of IEEE Conference on Robotics, Automation and Mechatronics, vol. 2, pp. 949–954, Dec 2004.

[4] G. Biegelbauer, A. Pichler, M. Vincze, C. Nielsen, H. Andersen, and K. Haeusler, The inverse approach of FlexPaint [robotic spray painting], IEEE Robotics & Automation Magazine, vol. 12, no. 3, pp. 24–34, 2005.

[5] M. Bruccoleri, C. D'Onofrio, and U. La Commare, Off-line programming and simulation for automatic robot control software generation, in Proc. of 5th IEEE International Conference on Industrial Informatics, pp. 491–496, June 2007.

[6] T. Okamura, B. Shizuki, and J. Tanaka, Execution visualization and debugging in three-dimensional visual programming, in Proc. of 8th International Conference on Information Visualisation (IV2004), pp. 167–172, July 2004.

[7] W. Broll, J. Herling, and L. Blum, Interactive bits: Prototyping of mixed reality applications and interaction techniques through visual programming, in Proc. of IEEE Symposium on 3D User Interfaces, pp. 109–115, March 2008.

[8] K. S. Han and J. J. Wook, Programming LEGO Mindstorms NXT with visual programming, in Proc. of International Conference on Control, Automation and Systems (ICCAS'07), pp. 2468–2472, Oct. 2007.

[9] T. Lourens, TiViPE - Tino's visual programming environment, in Proc. of the 28th Annual International Computer Software and Applications Conference (COMPSAC 2004), vol. 2, pp. 10–15, Sept. 2004.

[10] J. Cosmas and R. Hibberd, Geometrical testing of three-dimensional objects with the aid of pattern recognition, IEE Proceedings - Computers and Digital Techniques, vol. 138, pp. 250–254, July 1991.

[11] J. Nurre and E. Hall, Positioning quadric surfaces in an active stereo imaging system, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, pp. 491–495, May 1991.

[12] Open source: OGRE 3D. http://www.ogre3d.org/. Online.

[13] OpenGL handbook. http://www.ogre3d.org/. Online.

[14] J. Lee and H. Hashimoto, Intelligent space - concept and contents, Advanced Robotics, vol. 16, no. 3, pp. 265–280, 2002.

[15] D. Brscic, T. Sasaki, and H. Hashimoto, Implementation of mobile robot control in intelligent space, in Proc. of SICE-ICASE International Joint Conference, pp. 1228–1233, Oct 2006.