Implementation of Position-Force Control in MRROC++

Tomasz Winiarski

Cezary Zieliński

[email protected]  [email protected]
Warsaw University of Technology, Institute of Control and Computation Engineering, Faculty of Electronics and Information Technology, ul. Nowowiejska 15/19, 00-665 Warsaw, POLAND

Abstract

The paper concentrates on the way that position-force control has been implemented in the MRROC++ robot programming framework. Moreover, a position-force control based application is presented. Besides executing the control task, the controller collected experimental data enabling the evaluation of the implementation of the control scheme.

1 Introduction

MRROC++ [10] is an object-oriented programming framework [5] enabling the implementation of single- and multi-robot system controllers executing diverse tasks. Its structure was derived by formalizing the description of the general structure of multi-robot controllers [11]. Position-force control algorithms are necessary both for diverse industrial applications, e.g. milling and polishing, and for the creation of effective service robots [3, 9, 6, 4]. In this paper the traditional name "position-force" control will be used, although it will pertain to the end-effector pose (position and orientation) and the force/torque exerted by the end-effector on objects in the environment.

2 Position-force controller

2.1 General structure

MRROC++ is based on facilities (i.e. threads, processes, inter-thread communication) provided by the QNX 6.3 real-time operating system (RTOS) [1]. It is coded in C++. Fig. 1 presents a general MRROC++ based system structure.

1 This work was supported by the Polish Ministry of Science and Information Technology, grant 4 T11A 003 25.

Figure 1: A MRROC++ based position-force control system structure with the inter-thread communication diagram (processes: UI with threads UI_MASTER, UI_SRP and UI_COMM; MP; ECP; EDP with threads EDP_MASTER, EDP_TRANS, EDP_VSP_I, EDP_READER, EDP_SERVO and EDP_FORCE; VSP with threads VSP_MASTER and VSP_CREAD; communication mechanisms: message passing, semaphore, pulse, interrupt and shared memory; hardware: manipulator motors and the force/torque transducer)

The Master Process MP is the system coordinator. Each subsystem is composed of an Effector Control Process ECP and an Effector Driver Process EDP, and is supplemented by Virtual Sensor Processes VSP. There are as many such subsystems as there are independent effectors (i.e. mechanical devices influencing the state of the environment) in the system. Both the EDP and the VSPs depend on the associated hardware, whereas the MP and the ECPs are hardware independent, but depend on the task that is to be executed by the system. The User Interface Process UI is responsible for presenting the system status to the operator (UI_SRP thread) and enables the operator to start, pause, resume or terminate the execution of the code responsible for the realization of the task (UI_MASTER thread).

Force measurements in the applications described below are utilized in two ways. Firstly, they are used for continuous control of the force exerted by the end-effector on the surrounding objects. Secondly, they are used to detect certain events, e.g. sudden changes in the exerted force (impacts). In the first case the force sensor is treated as a proprioceptor, enabling continuous position-force control of the limb. In the second case this sensor is treated as an exteroceptor detecting events in the environment. Each of those roles imposes a different structure on the system, and that had to be reflected in the structure of the EDP.

The EDP process consists of six threads. The EDP_MASTER thread handles the communication with the ECP and the UI. The EDP_TRANS thread is responsible for the distribution of the position and force control components and prepares the position setpoint for the EDP_SERVO. The EDP_SERVO consists of six position controllers for the manipulator motors. EDP_READER is an extra thread created for gathering experimental measurements from the other threads of the EDP; it does not take part in the control of the system. It is included just for collecting experimental data, but in industrial systems it could be used for self-diagnostic purposes. EDP_FORCE reads force and torque measurements from the transducer. EDP_VSP_I transmits force readings to the VSP. The VSP process consists of two threads. The VSP_CREAD thread obtains the measurements from EDP_VSP_I and then processes them to detect events. VSP_MASTER communicates with the ECP or the MP, informing them about the detected events.
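For illustration only, the thread partitioning of the EDP described above can be sketched as follows. This is not the MRROC++ source; it merely assumes POSIX threads (available under QNX Neutrino), and all function names are hypothetical placeholders for the real thread bodies.

// Illustrative sketch of the EDP thread layout (not MRROC++ code).
#include <pthread.h>
#include <cstdio>

// Hypothetical entry points, one per EDP thread described in the text.
void* edp_trans_loop(void*)  { /* placeholder: would distribute position/force components */ return nullptr; }
void* edp_servo_loop(void*)  { /* placeholder: would run six motor position controllers   */ return nullptr; }
void* edp_force_loop(void*)  { /* placeholder: would read the force/torque transducer     */ return nullptr; }
void* edp_vsp_i_loop(void*)  { /* placeholder: would pass force readings to the VSP       */ return nullptr; }
void* edp_reader_loop(void*) { /* placeholder: would log experimental data only           */ return nullptr; }

int main() {
    // EDP_MASTER runs in the main thread and spawns the remaining five.
    pthread_t trans, servo, force, vsp_i, reader;
    pthread_create(&trans,  nullptr, edp_trans_loop,  nullptr);
    pthread_create(&servo,  nullptr, edp_servo_loop,  nullptr);
    pthread_create(&force,  nullptr, edp_force_loop,  nullptr);
    pthread_create(&vsp_i,  nullptr, edp_vsp_i_loop,  nullptr);
    pthread_create(&reader, nullptr, edp_reader_loop, nullptr);

    // EDP_MASTER would now serve ECP/UI commands; here we simply join.
    std::printf("EDP_MASTER: threads started\n");
    pthread_join(trans, nullptr);
    pthread_join(servo, nullptr);
    pthread_join(force, nullptr);
    pthread_join(vsp_i, nullptr);
    pthread_join(reader, nullptr);
    return 0;
}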

2.2 Communication between ECP and EDP

The EDP is commanded by its ECP. If the type of the robot is changed, a new EDP has to be supplied, but the remaining part of the system (i.e. the task dependent part) does not have to be altered, because a standard communication protocol was devised. The EDP is treated as a server interpreting commands issued by the associated ECP and by the UI (in the case of manual motions). Being a server, the EDP waits for the client to issue a command. Once the command is delivered, it is interpreted and executed. The list of all commands is presented in fig. 2. If a reply from the EDP is required, another command (i.e. QUERY) has to be issued by the client. This is so because the execution of a command by the electro-mechanical part of the system takes considerable time.

Figure 2: Effector Driver Process commands and their arguments (arrows with bullets represent EITHER-OR, those with open circles – OR, and those without any symbol – AND); the commands are SET, SET_GET, GET, SYNCHRO and QUERY, with ARM (POSE or POSE_FORCE_TORQUE) and ROBOT_MODEL (kinematic model, tool, servo algorithm, binary inputs/outputs) arguments

There are two main commands: SET and GET. The former influences the state of the EDP, and thus the robot, while the latter causes the EDP to read the current state of the manipulator controller embedded in the EDP. Sometimes the user needs to exert simultaneous influence on the robot and to read its current state, so a SET_GET command has been defined, which causes simultaneous execution of a SET and a GET command. As the majority of robots have incremental position measuring devices, it is required that prior to task execution the robot determines its current pose (position and orientation) in the work-space. This is usually done only once, by moving the arm to a known base location at system initiation, and is caused by the SYNCHRO command.

The SET command can: set the arm pose, i.e. cause the robot to move to the desired location; redefine the tool affixed to the arm; change the set of parameters or the local corrector of the kinematic model; switch the servo-control algorithm of any or all of the arm motors; alter the parameters of the servo algorithms; or set the binary outputs of the robot controller. The GET command can read: the current pose of the arm, the currently used tool, the kinematic model with its corrector and the servo algorithm parameters, or the binary inputs to the robot controller. Switching kinematic model parameters and correctors enables the improvement of the local precision of arm motions. Modification of the servo algorithms or their parameters can improve tracking ability. This switch can be performed when a significant load modification is anticipated. Both the tool and the arm positions and orientations can be defined in terms of homogeneous transforms or Cartesian coordinates, with orientation specified either as Euler angles or in the angle and axis convention. Moreover, poses can be specified in terms of joint angles or motor shaft angular increments. The arm pose argument of the command can be regarded as an absolute or a relative value.

Each motion command SET ARM is treated as a macro-step. An extra argument specifies into how many interpolation steps (servo sampling periods) it should be divided. Because the incremental position measurement is delivered simultaneously with commanding the new PWM value for the motors, to obtain a continuous motion without stopping the reading has to be delivered to the upper control layers a few steps before the interpolated motion terminates. The user controls this by specifying in which step number the reading is required. If this value is one more than the number of interpolation steps, the reading is delivered after the motion stops. For uninterrupted trajectory segment transition it suffices if it is one less than the number of interpolation steps.

For the purpose of position-force control the POSE_FORCE_TORQUE argument was introduced. The POSE_FORCE_TORQUE command takes as one of its arguments a local reference frame. This frame can be related either to the tool frame (relative mode) or to the global frame (absolute mode). The three remaining arguments, pos_xyz_rot_xyz, force_xyz_torque_xyz and the selection vector, are vectors with six components each. The selection vector determines along or about which axes of the local reference frame position/orientation is controlled and along/about which axes force/torque is controlled. The pos_xyz_rot_xyz and force_xyz_torque_xyz vectors determine the set values of translation/rotation and force/torque along/about the selected axes. The translation/rotation components in the directions selected as force/torque controlled are irrelevant, and vice versa, thus out of the twelve components of the two vectors (pos_xyz_rot_xyz and force_xyz_torque_xyz) only six are relevant.
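The ECP-EDP command set described above can be summarised, for illustration only, by the following data-structure sketch. It is not the actual MRROC++ protocol definition; the type and field names (EdpCommand, PoseForceTorque, etc.) are hypothetical and merely mirror the arguments listed in fig. 2 and in the text.

// Hypothetical sketch of the ECP->EDP command structure (not MRROC++ source).
#include <array>

enum class EdpInstruction { SET, GET, SET_GET, SYNCHRO, QUERY };

// Arguments of the POSE_FORCE_TORQUE variant of the ARM argument.
struct PoseForceTorque {
    bool relative_to_tool_frame;                 // local reference frame: tool (relative) or global (absolute)
    std::array<double, 6> pos_xyz_rot_xyz;       // translation/rotation set values
    std::array<double, 6> force_xyz_torque_xyz;  // force/torque set values
    std::array<bool, 6> selection_vector;        // illustrative convention: true = pose-controlled axis, false = force/torque-controlled axis
};

struct EdpCommand {
    EdpInstruction instruction;  // SET, GET, SET_GET, SYNCHRO or QUERY
    int number_of_steps;         // macro-step length in servo sampling periods
    int value_in_step_no;        // step in which the reading is returned to the upper layers
    PoseForceTorque arm;         // ARM argument (position-force variant)
};

int main() {
    EdpCommand cmd{};
    cmd.instruction = EdpInstruction::SET;
    cmd.number_of_steps = 8;                      // macro-step of 8 servo periods
    cmd.value_in_step_no = 7;                     // reply one step early: uninterrupted segment transition
    cmd.arm.relative_to_tool_frame = true;
    cmd.arm.selection_vector = {true, true, false, true, true, true};  // Z axis force-controlled
    cmd.arm.force_xyz_torque_xyz[2] = 1.0;        // desired force along Z (arbitrary units)
    return 0;
}

In the actual framework these arguments are of course embedded in the EDP communication buffers; the sketch only fixes the vocabulary used in the next subsection.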

2.3 Control algorithm

Figure 3: Position-force controller, where: Interpret – the ECP command interpretation, Reply – sends the status to the ECP, DKin – direct kinematic problem, Select – pose and force/torque coordinate selection, StGen – step generator of the pose component, F/T CL – force/torque control law, IKin – inverse kinematic problem, F/T trans – force/torque transducer, F/T conv – force/torque reading conversion; the diagram distinguishes macrostep data flow from step data flow

Fig. 3 presents the structure of the position-force controller. The figure contains two parts: the macrostep execution part and the step execution part. The ECP sends the SET ARM command with the POSE_FORCE_TORQUE argument (M) to the EDP_MASTER thread of the EDP. This command contains the definition of the macrostep that must be executed. The definition specifies: the pos_xyz_rot_xyz and force_xyz_torque_xyz vectors, the number of servo sampling periods (motion steps) defining the duration of the macrostep, the selection vector and the local reference frame. The local reference frame is represented by the matrix ${}^G_L T$ relating the local to the global reference frame. In the following, the left superscript denotes the reference frame (L – local reference frame, G – global) of the associated vector quantity. The right subscript in parentheses is the control step number, counted from the beginning of the macrostep execution.

The pos_xyz_rot_xyz vector defines $\Delta {}^L P_p$, the end-effector pose increment for the current macrostep. This increment is subdivided by the StGen block into pose increments for each step within the macrostep, $\Delta {}^L P_{p(k)}$:

$$\Delta {}^L P_{p(k)} = \frac{\Delta {}^L P_p}{n} \qquad (1)$$

where $n$ is the number of steps in the macrostep and $k$ is the current step number. In each step the force/torque error vector ${}^L F_{e(k)}$ is the difference between the desired force/torque vector ${}^L F_d$ (defined by force_xyz_torque_xyz) and the measured force/torque vector ${}^L F_{c(k)}$:

$$ {}^L F_{e(k)} = {}^L F_d - {}^L F_{c(k)} \qquad (2)$$

The vector $\Delta {}^L P_{f(k)}$ is obtained from the force/torque control law:

$$\Delta {}^L P_{f(k)} = K \, {}^L F_{e(k)} \qquad (3)$$

where $K$ is the viscosity. The end-effector pose setpoint for the current step of the macrostep execution is described by the vector ${}^L P_{(k)}$:

$$ {}^L P_{(k)} = {}^L P_{(k-1)} + \Delta {}^L P_{p(k)} + \Delta {}^L P_{f(k)} \qquad (4)$$

It is transformed into the global reference frame:

$$ {}^G P_{(k)} = {}^G_L T \, {}^L P_{(k)} \qquad (5)$$

The new motor position vector $\theta_{d(k)}$ is the solution of the inverse kinematic problem for ${}^G P_{(k)}$. The vector $\theta_{d(k)}$ is a part of the command sent to the motor controllers implemented in the EDP_SERVO thread. The motor controllers force the motors to move in the direction of the setpoint by delivering the vector $\tau_{(k)}$ consisting of the values of the PWM duty cycles.

The end-effector pose that is delivered to the ECP, for the purpose of high-level feedback, can originate either directly from the encoder measurements, subsequently transformed by the procedure solving the direct kinematics problem (variant 1), or it is simply the rewritten set value for step $n$ (variant 2). The EDP_FORCE thread transforms the raw force/torque measurement vector $F_{(k)}$ into the vector ${}^L F_{c(k)}$.
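A minimal sketch of one control step, implementing equations (1)-(5), is given below. It is illustrative only: it treats the pose and force/torque vectors as plain 6-vectors, replaces the frame transformation and the IKin block with trivial stubs, and reads the Select block as a per-axis gating by the selection vector; all identifiers are hypothetical rather than taken from the MRROC++ sources.

// Illustrative single-step computation of the position-force control law,
// equations (1)-(5); not the MRROC++ implementation.
#include <array>
#include <cstddef>

using Vec6 = std::array<double, 6>;

// Stub standing in for the transformation by the ^G_L T matrix; a real
// controller would apply a homogeneous-transform based conversion here.
Vec6 local_to_global(const Vec6& p_local) { return p_local; }

// Stub standing in for the manipulator's inverse kinematics (IKin block).
std::array<double, 6> inverse_kinematics(const Vec6& pose_global) { return pose_global; }

struct MacrostepState {
    Vec6 dP_p{};         // Delta^L P_p : pose increment for the whole macrostep
    Vec6 F_d{};          // ^L F_d      : desired force/torque
    Vec6 P_prev{};       // ^L P_(k-1)  : pose setpoint of the previous step
    std::array<bool, 6> pose_selected{};  // selection vector: true = pose axis, false = force axis
    int n = 1;           // number of steps in the macrostep
};

// One control step; returns theta_d(k) for the EDP_SERVO motor controllers.
std::array<double, 6> control_step(MacrostepState& s, const Vec6& F_c, double K) {
    Vec6 P_k{};
    for (std::size_t i = 0; i < 6; ++i) {
        const double dPp_k = s.dP_p[i] / s.n;    // (1) per-step pose increment
        const double Fe_k  = s.F_d[i] - F_c[i];  // (2) force/torque error
        const double dPf_k = K * Fe_k;           // (3) force control law, K = viscosity
        // (4): on each axis only the increment matching the selection vector
        // is relevant (the Select block of fig. 3).
        P_k[i] = s.P_prev[i] + (s.pose_selected[i] ? dPp_k : dPf_k);
    }
    s.P_prev = P_k;                              // becomes ^L P_(k-1) in the next step
    const Vec6 P_global = local_to_global(P_k);  // (5) ^G P_(k) = ^G_L T ^L P_(k)
    return inverse_kinematics(P_global);         // theta_d(k)
}

int main() {
    MacrostepState s{};
    s.n = 8;                                                   // macrostep of 8 servo periods
    s.pose_selected = {true, true, false, true, true, true};   // Z axis force-controlled
    s.F_d[2] = 20.0;                                           // desired vertical force (transducer units)
    const Vec6 F_c{};                                          // pretend measurement: no contact yet
    const auto theta_d = control_step(s, F_c, 1e-4);
    (void)theta_d;
    return 0;
}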

3 Copying drawings

Figure 4: Copying drawings – the reproduction phase

An IRp-6 robot mounted on a track and equipped with a force/torque sensor has been applied to the execution of three benchmark tasks: copying drawings, rotating a crank and following an unknown contour. All tasks use the same EDP position-force controller and the same communication structures between the ECP and the EDP, but differ in the task dependent part of MRROC++: the ECP and VSP processes. Only the first task will be presented here. Drawing in a human-robot cooperative system has attracted the attention of other researchers [7].

In our investigation the force sensor is used to manually guide the robot holding a pen through the motions producing a drawing. During that phase the robot is fully compliant and poses are recorded at certain intervals of time – this is the teach-in phase. Besides continuous force control, the sensor is also used for detecting the vertical jerks signaling the necessity of lifting the pen off or lowering it onto the surface of the paper, in order to transfer it to another area of the drawing. Once the drawing is complete, the pen held by the robot is displaced to a new location and the mode of operation is switched to partially compliant – the reproduction phase. In that phase the robot is position controlled in the plane of the drawing and force controlled along the axis normal to the drawing. In this way the pen adjusts itself to the unevenness of the surface on which the drawing paper rests. The force sensor thus plays a dual role. On the one hand it is engaged in the continuous limb control, so it is a proprioceptor; on the other hand it detects events occurring in the environment, so it behaves as an exteroceptor. The latter behaviour requires the addition of a VSP to the system.
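The event-detection role of the VSP in this task can be illustrated with the following sketch. It is only an assumption about how such a detector might look: a simple threshold on the change of the vertical force reading between consecutive samples signals a jerk. The paper does not give the actual detection rule, so the threshold value and the names are hypothetical.

// Hypothetical jerk detector of the kind VSP_CREAD could run on the force
// readings received from EDP_VSP_I; not the actual MRROC++ VSP code.
#include <cmath>
#include <cstdio>

class VerticalJerkDetector {
public:
    explicit VerticalJerkDetector(double threshold) : threshold_(threshold) {}

    // Returns true when the change of the vertical force between two
    // consecutive samples exceeds the threshold (a lift-off/lowering jerk).
    bool update(double f_z) {
        const bool jerk = have_previous_ && std::fabs(f_z - previous_) > threshold_;
        previous_ = f_z;
        have_previous_ = true;
        return jerk;
    }

private:
    double threshold_;
    double previous_ = 0.0;
    bool have_previous_ = false;
};

int main() {
    VerticalJerkDetector detector(/*threshold=*/40.0);            // arbitrary value in transducer units
    const double samples[] = {18.0, 21.0, 19.0, 95.0, 22.0};      // simulated vertical force readings
    for (double f_z : samples) {
        if (detector.update(f_z))
            std::printf("jerk detected at Fz = %.1f\n", f_z);     // VSP_MASTER would signal the ECP/MP
    }
    return 0;
}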

Figure 5: Three dimensional motion trajectory (X, Y, Z [m]) during copying drawings: the teach-in phase

Figure 6: Three dimensional motion trajectory (X, Y, Z [m]) during copying drawings: the reproduction phase

Figs. 5 and 6 present the three dimensional trajectories of the end-effector motion during the teach-in and reproduction of the six feathers of the arrow (fig. 4). A visible difference between the graphs is the way the pen moves up and down. The operator makes unconstrained moves, hence the trajectory above the paper is uneven in the vertical direction. The reproduction algorithm produces exact vertical motions, thus the trajectories above the paper are horizontal. This is evident in graphs 7 and 8.

Figure 7: The Z coordinate of the end-effector pose (Z [m] versus the step number) during copying drawings: the teach-in phase

Figure 8: The Z coordinate of the end-effector pose (Z [m] versus the step number) during copying drawings: the reproduction phase

There are four segments, marked a, b, c and d, in graphs 7, 8, 9 and 10. All four segments occur while drawing each feather of the arrow: a – motion on the paper surface, b – pen lift-off, c – motion above the paper, d – pen lowering. The symbol "*" points to the impact of the pen on the surface of the paper. During the whole teach-in phase and in segment a of the reproduction phase, the controller is commanded to exert a vertical force of 20 u (about 1 N); however, in segments b, c and d of the reproduction phase the motion is purely position controlled.
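For illustration, the switch between the two control modes of the reproduction phase could be expressed through the selection vector of the POSE_FORCE_TORQUE argument roughly as follows. The sketch reuses the hypothetical naming convention of the command structure sketched in section 2.2 and is repeated here in a self-contained form; it is not taken from the application code.

// Sketch of the two reproduction-phase modes expressed via the selection
// vector; hypothetical types, not the MRROC++ command definition.
#include <array>

struct PoseForceTorqueArgs {
    std::array<bool, 6> selection_vector;        // true: pose-controlled axis, false: force-controlled
    std::array<double, 6> force_xyz_torque_xyz;  // force/torque set values (transducer units)
};

int main() {
    // Segment a: position control in the drawing plane (X, Y and rotations),
    // force control along the axis normal to the paper (Z), set to 20 u (~1 N).
    PoseForceTorqueArgs on_paper{};
    on_paper.selection_vector = {true, true, false, true, true, true};
    on_paper.force_xyz_torque_xyz[2] = 20.0;

    // Segments b, c, d: purely position controlled motion (pen off the paper).
    PoseForceTorqueArgs off_paper{};
    off_paper.selection_vector = {true, true, true, true, true, true};

    (void)on_paper; (void)off_paper;
    return 0;
}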

Figure 9: Force applied in the vertical direction (the Z force coordinate in transducer units [u], versus the step number) during copying drawings: the teach-in phase

Figure 10: Force applied in the vertical direction (the Z force coordinate in transducer units [u], versus the step number) during copying drawings: the reproduction phase

Experiments show that the applied algorithms are robust enough to execute the whole task correctly, even with the force loop of the position-force controller simplified to the utmost extent (proportional control), with only the viscous component considered.

4 Conclusions

The conducted experiments verified that the MRROC++ robot programming framework is a suitable tool not only for the implementation of applications requiring position-force control, but also one that can provide, in parallel with the executed task, the experimental data [8] necessary for the evaluation of the control strategy. Three benchmark tasks have been implemented: copying drawings, presented in this paper, turning a crank, and following an unknown contour. Although in all three cases the obtained results were satisfactory, the control strategy can be improved by introducing a more elaborate control law [2]. This will be the subject of further research. Moreover, the study of impact and friction will be at the focus of attention. The drawing copying benchmark task is especially well suited for such investigations, as it includes all the important factors of position-force control: continuous position-force servoing, friction of the pen over the paper surface, impact of the pen on the paper, and detection of events through monitoring the force readings (jerks caused by the human operator lifting the pen off the surface of the paper).

References

[1] QNX Neutrino reference manual, 2004.

[2] H. Bruyninckx and J. De Schutter. Specification of Force-Controlled Actions in the "Task Frame Formalism": A Synthesis. IEEE Transactions on Robotics and Automation, 12(4):581-589, August 1996.

[3] J. Craig. Introduction to Robotics: Mechanics and Control. Addison-Wesley, 1986.

[4] S. Huang and J. M. Schimmels. Sufficient Conditions Used in Admittance Selection for Force-Guided Assembly of Polygonal Parts. IEEE Transactions on Robotics and Automation, 19(4):737-742, August 2003.

[5] M. E. Markiewicz and C. J. P. Lucena. Object oriented framework development. ACM Crossroads, 7(4), 2001.

[6] C. Natale. Interaction Control of Robot Manipulators: Six Degrees of Freedom Tasks. Springer Tracts in Advanced Robotics, vol. 3, 2003.

[7] T. Tsumugiwa, R. Yokogawa, and K. Hara. Variable impedance control based on estimation of human arm stiffness for human-robot cooperative calligraphic task. In Proceedings of the 2002 IEEE Conference on Robotics and Automation, pages 644-650, May 2002.

[8] T. Winiarski and C. Zieliński. Position-force controller experimental station (in Polish). In National Conference on Robotics, June 23-25, 2004.

[9] G. Zeng and A. Hemami. An overview of robot force control. Robotica, 15:473-482, 1997.

[10] C. Zieliński. The MRROC++ System. In First Workshop on Robot Motion and Control, RoMoCo'99, pages 147-152, June 28-29, 1999.

[11] C. Zieliński. By How Much Should a General Purpose Programming Language be Extended to Become a Multi-Robot System Programming Language? Advanced Robotics, 15(1):71-96, 2001.
