applied sciences
Article

Design of Demonstration-Driven Assembling Manipulator

Qianxiao Wei, Canjun Yang *, Wu Fan and Yibing Zhao

State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou 310027, China; [email protected] (Q.W.); [email protected] (W.F.); [email protected] (Y.Z.)
* Correspondence: [email protected]; Tel.: +86-571-87951271 (ext. 6318)

Received: 26 March 2018; Accepted: 10 May 2018; Published: 16 May 2018
Abstract: Currently, a mechanical arm or manipulator needs to be programmed by humans in advance to define its motion trajectory before practical use. However, such programming is tedious and costly, which prevents these manipulators from performing a variety of tasks easily and quickly. This article focuses on the design of a vision-guided manipulator that requires no explicit human programming. The proposed demonstration-driven system mainly consists of a manipulator, a control box, and a camera. Instead of programming the detailed motion trajectory, one only needs to show the system manually how to perform a given task. Based on internal object recognition and motion detection algorithms, the camera captures the information of the task to be performed and generates the motion trajectories that allow the manipulator to copy the human demonstration. The movement of the manipulator joints is given by a trajectory planner in the control box. Experimental results show that the system can imitate humans easily, quickly, and accurately for common tasks such as sorting and assembling objects. Teaching the manipulator how to complete the desired motion helps eliminate the complexity of programming for motion control.

Keywords: automatic control of manipulator; identification of target object; assembly system; manipulator learning; robotic imitation learning
1. Introduction

Assembly is required when a machine is built from many individual parts. In the past few decades, assembly has been completed manually by workers [1]. Thanks to the rapid development of industrial automation, more and more of this work is now completed by machines [2]. Peg-hole assembly, flat panel parts assembly, and many other automatic assembly machines have been invented for this purpose [3,4]. Although these assembly machines can accomplish the desired task satisfactorily, they are usually complicated and can only be used in specific scenarios. In a large-scale automobile assembly system, flexible and adaptable assembly technologies and strategies include robotic fixtureless assembly, self-reconfigurable assembly systems, and increased modularity in the assembly process [5]. In flexible manufacturing, the assembly machine is required to perform various tasks easily and quickly. However, it is expensive to adjust an existing assembly line, since the equipment needs to be replaced to perform different tasks. Besides, different tasks have different requirements for the robot's initial and final postures as well as for the placement locations of the workpieces. The traditional assembly system not only has difficulties in sorting and assembling various workpieces, but may also fail to grasp workpieces whose required positions change dynamically between tasks, which negatively affects production efficiency [6,7]. To further improve the universality of assembly machines, 6-DOF (Degree Of Freedom) robot manipulators are widely used in automated assembly because of their flexibility. However, how to control these manipulators efficiently, accurately, and easily remains a difficult and urgent problem.
Vision-guided manipulators have been proposed to tackle this problem [8–10]. To improve the performance of object recognition in such systems, Laptev proposed a method of boosted histograms [11]. Influential research by Aivaliotis and Zampetis presented a method for the visual recognition of parts using machine learning algorithms to enable the manipulation of parts [12]. Chunjie Zhang came up with an image representation method using boosted random contextual semantic spaces [13]. Panagiota Tsarouchi proposed an online two-dimensional (2D) vision system combined with offline data from CAD (Computer Aided Design) files for the computation of three-dimensional (3D) POI (Points of Interest) coordinates [14].

In robotic assembly with a vision system, many new technologies are emerging. Corner detection is quite efficient in part identification and quick enough to be used for online applications [15]. Part recognition using vision and an ultrasonic sensor installed at the end-effector of the robot has been used to recognize objects for a robotic assembly system [16]. Two coordinated cameras capture images for golf club head production; that system adopts automated mechanical arms matched with cameras for 3D space alignment [17]. For large-scale components, a robotic assembly system guided by two vision sensors and three one-dimensional (1D) laser sensors was realized to automatically localize and search for the best 3D position and orientation between the object and the installation location [18].

This article focuses on a demonstration-driven industrial manipulator based on object recognition and a vision-guided method.
One can demonstrate manually to the manipulator in advance how to assemble the workpieces, and then the manipulator can understand the task based on visual information processing. Teaching the manipulator how to complete tasks by simple demonstration eliminates the complexity of explicit programming of manipulator control for different tasks.
2. Methods

The proposed system shown in Figure 1 works in two phases: the demonstration phase and the automatic assembly phase. First, in the demonstration phase, one manually shows the manipulator each step (e.g., moving one workpiece on top of another) of a specific task. The 3D camera obtains the spatial positions of the workpiece dynamically during the demonstration and then transforms these positions into the coordinate system of the industrial manipulator. According to the inverse kinematic model of the industrial manipulator, the control information of each manipulator joint is obtained for each position of the workpiece. Then, in the auto-assembly phase, the manipulator is driven to complete the task automatically based on the control information generated above.
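The coordinate transformation step above can be illustrated with a minimal sketch (not the authors' implementation). It assumes the hand-eye calibration between the 3D camera and the manipulator base is already available as a rotation matrix and a translation vector; the names R_cam2base and t_cam2base and all numerical values are placeholders.

```python
import numpy as np

# Hand-eye calibration result (placeholder values): pose of the camera
# frame expressed in the manipulator base frame.
R_cam2base = np.array([[0.0, -1.0, 0.0],
                       [1.0,  0.0, 0.0],
                       [0.0,  0.0, 1.0]])          # 3x3 rotation
t_cam2base = np.array([0.30, 0.05, 0.42])          # translation, meters

def camera_to_base(p_cam):
    """Map a 3D point measured by the camera into the robot base frame."""
    return R_cam2base @ np.asarray(p_cam) + t_cam2base

# One demonstrated waypoint of the workpiece, as reported by the 3D camera.
p_cam = np.array([0.12, -0.04, 0.55])
p_base = camera_to_base(p_cam)
print("waypoint in base frame [m]:", p_base)
```

Each waypoint recorded during the demonstration is mapped this way before the inverse kinematics of the manipulator is evaluated.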
Figure 1. Route chart of teaching a manipulator by vision guidance, without programming the device.
The proposed automatic assembly system consists of a manipulator, a control box, and a camera, as Figure 2 shows. NVIDIA Jetson TK1 was chosen as the image processing module for this system.
YOLO (You Only Look Once) [19,20] was adopted and implemented for object detection. It is a fast algorithm based on a convolutional neural network that can detect the target of interest in the input image accurately, with a detection speed of up to 45 frames per second. The robot control circuit uses an STM32F103 embedded structure to control the movements of seven servomotors: six identical joint servomotors and a seventh claw servomotor. The execution time of the control algorithm on the STM32 microcontroller is about 3 ms. The parameters of the robot dynamics of the six joint servomotors are shown in Table 1.
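As an illustration of the detection step, the following sketch runs a Darknet-format YOLO model through OpenCV's DNN module and returns the class and pixel center of each detection. It is not the authors' TK1 deployment, and the cfg/weights file names are placeholders; the pixel center would then be paired with the 3D camera's depth measurement to obtain the spatial position of the workpiece.

```python
import cv2
import numpy as np

# Load a Darknet-format YOLO model (placeholder file names).
net = cv2.dnn.readNetFromDarknet("yolo.cfg", "yolo.weights")

def detect(image, conf_threshold=0.5):
    """Return (class_id, confidence, center_x, center_y) for each detection."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    detections = []
    for output in outputs:
        for row in output:            # row = [cx, cy, bw, bh, objectness, class scores...]
            scores = row[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > conf_threshold:
                cx, cy = int(row[0] * w), int(row[1] * h)   # normalized -> pixels
                detections.append((class_id, confidence, cx, cy))
    return detections
```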
Figure 2. The structure of the automatic assembly system.

Table 1. The parameters of the robot dynamics of the six joint servomotors.
Parameter      Numerical Value
Type           X20-8.4-50
Torque         38 kg·cm
Speed          0.2 s/60°
Load           130 g
Frequency      333 Hz
Voltage        DC 6.0 V
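As a quick consistency check (not part of the paper's code), the rated torque in Table 1 can be converted to SI units; with g = 9.81 m/s² this reproduces the value of the maximum output torque M* used in Equation (1) below.

```python
# Convert the rated torque of the X20-8.4-50 servo from kg·cm to N·m.
# 1 kg·cm = 0.01 m * g; with g = 9.81 m/s^2 this matches M* in Equation (1).
g = 9.81                      # m/s^2 (assumed standard value)
torque_kg_cm = 38.0           # rated torque from Table 1
M_star = torque_kg_cm * 0.01 * g
print(f"M* = {M_star:.4f} N·m")   # -> 3.7278 N·m
```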
The theoretical safety factor can be obtained as follows:

$$N = \frac{M^*}{M} = \frac{3.7278\ \mathrm{N\cdot m}}{2.082\ \mathrm{N\cdot m}} = 1.79 \quad (1)$$
Here, M* is the maximum output torque of the X20-8.4-50 steering gear and M is the theoretical total torque calculated from the actual mass and length. The safety factor therefore provides sufficient allowance for the uncalculated friction and non-inertial forces of the mechanism.

The visual signal processing hardware (NVIDIA TK1), together with the 3D camera, can recognize the objects in real time and capture their motions. According to the signals given by the TK1, the manipulator controller can grasp the object, move it to a specified place (as demonstrated by a human), and release it. A wireless communication module is used to increase the flexibility of the manipulator. The structure of the control box is shown in Figure 3.

The underlying models for robot control are shown in Figure 4 below. The boxes with solid lines next to the green arrows represent the input image, which contains the target objects. The blue boxes represent the sensor and controller components (including the camera, upper computer, lower computer, and rudders). The boxes with solid lines next to the orange arrows represent the output of each component. The boxes with dotted lines next to the green arrows represent the transmission link.

The control interface of the upper computer is written in the C# language. It takes the position and shape of the target object as input. Given the arm length of the manipulator and the initial value of each joint, the rudder angles of each joint are solved by the D-H algorithm and are then sent to the lower machine. The upper computer can initialize and configure the minimum and maximum rotation angles of each joint rudder to ensure that there is no collision with the parts or the environment. The upper computer also has several other functions which enable fast adjustment of the manipulator configuration. First, the rotation angle of each joint rudder can be adjusted and the initial values
can be set manually. Second, aided by a human's demonstration, the manipulator can move along a specific trajectory. The angle values of the joint rudders are sent to the lower computer through the wireless module, and the rotation of each joint rudder is controlled through PWM (Pulse Width Modulation), which works at 10 Hz. The angle of each transformation can be set by the upper computer, and the velocity range is 1°/s–180°/s. In order to prevent jitter, progressive acceleration and deceleration are used when starting and stopping.
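The lower-computer behaviour described above (clamping to configured joint limits, smooth motion toward a target, and PWM commands at 10 Hz) can be sketched as follows. This is illustrative only, not the STM32 firmware: the 500–2500 µs pulse range and the simple speed-limited ramp used in place of full acceleration/deceleration profiling are assumptions, not values from the paper.

```python
# Minimal sketch of the joint-command path: clamp the commanded angle,
# ramp towards it to avoid jitter, and convert the angle to a pulse width.
PULSE_MIN_US, PULSE_MAX_US = 500, 2500      # assumed servo pulse range
ANGLE_RANGE_DEG = 180.0
UPDATE_RATE_HZ = 10                         # PWM command rate from the paper

def angle_to_pulse(angle_deg):
    """Map an angle in [0, 180] deg to a pulse width in microseconds."""
    angle_deg = max(0.0, min(ANGLE_RANGE_DEG, angle_deg))
    span = PULSE_MAX_US - PULSE_MIN_US
    return PULSE_MIN_US + span * angle_deg / ANGLE_RANGE_DEG

def ramp_towards(current_deg, target_deg, max_speed_deg_s=90.0):
    """Move progressively towards the target, limited by the joint speed."""
    max_step = max_speed_deg_s / UPDATE_RATE_HZ
    delta = max(-max_step, min(max_step, target_deg - current_deg))
    return current_deg + delta

# Example: one 10 Hz control tick for a single joint.
current, target = 30.0, 120.0
current = ramp_towards(current, target)
print(current, angle_to_pulse(current))
```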
Figure 3. The structure of the control box.
Figure 4. The underlying models for robot control.
3. Implementation

3.1. Kinematics Modeling of Manipulator

The D-H kinematics model (as shown in Figure 5) of the 6-DOF manipulator is solved, and the forward and inverse kinematics equations are obtained, which provide the design parameters for intelligent control of the manipulator. According to the actual grasping situation, the degrees of freedom of the manipulator are simplified and the complex mathematical model is transformed into a simple kinematic equation. According to the parameters of the D-H linkage model of the manipulator joints [21,22], the transformation matrix between adjacent joints is obtained. The pose expression of the end-effector (Equation (3)) is easily obtained according to Equation (2). The degrees of the joints according to the D-H coordinate system are shown in Table 2.

$$T_i^{i-1} = R_Z(\theta_i)\, D_Z(d_i)\, D_X(a_i)\, R_X(\alpha_i) \quad (2)$$

$$A = T_2^1\, T_3^2\, T_4^3\, T_5^4\, T_6^5\, T_e^6 \quad (3)$$
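Equations (2) and (3) can be written as a short routine: one function builds the homogeneous transform of a single joint, and chaining the per-joint transforms yields the end-effector pose A. This is an illustrative sketch rather than the authors' C# implementation; the example at the end uses the Joint 1 row of Table 2 with the joint variable β1 set to 0 purely for demonstration.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of Equation (2): Rz(theta) Dz(d) Dx(a) Rx(alpha).
    Angles are in radians, lengths in millimetres."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def end_effector_pose(dh_rows):
    """Chain the per-joint transforms as in Equation (3) to obtain the pose A."""
    A = np.eye(4)
    for theta, d, a, alpha in dh_rows:
        A = A @ dh_transform(theta, d, a, alpha)
    return A

# Joint 1 row of Table 2 (alpha = 90 deg, A = 28.57 mm, d = 101.59 mm,
# theta = 90 deg + beta1), with beta1 = 0 chosen here for illustration.
beta1 = 0.0
T_12 = dh_transform(np.deg2rad(90.0 + beta1), 101.59, 28.57, np.deg2rad(90.0))
print(T_12)
```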
Figure 5. Kinematics modeling of the manipulator.
Table 2. Degree of joints in D-H coordinates.

Joint                          No.      αi (°)   Ai (mm)   di (mm)   θi (°)     βi
Joint 1: Adduction/Abduction   2(1–2)   90       28.57     101.59    90 + β1    −90 < β1 < 90
Joint 2: Flexion/Extension     3(2–3)   0        120.05    0         β2         −90 < β2 < 90
Joint 3: Flexion/Extension     4(3–4)   –        122.35    –         β3         −90 < β3 < …