International Conference on Systems and Processing Information ICSIP’09 May 2-4, 2009, Guelma
Agent-Based Coordinated Control of Mobile Manipulators: Study on Hardware System of a Mobile Manipulator

(1) A. Hentout, (2) B. Bouzouia, (3) I. Akli, (4) R. Toumi and (5) Z. Toukal

(1), (2), (3), (5) Advanced Technologies Development Centre (CDTA), BP 17, Baba Hassen, Algiers 16303, Algeria
(4) University of Sciences and Technology Houari Boumediene (USTHB), BP 32, El Alia, Bab Ezzouar, Algiers 16111, Algeria
(1) [email protected], [email protected]

Abstract— A mobile manipulator consists of a manipulator arm mounted upon a wheeled mobile base. A suitable control architecture for such a robot should preserve the characteristics of each sub-system while providing, at the same time, a basis for cooperation and coordination. In this paper, an agent-based architecture for the coordinated control of mobile manipulators is described. It consists of six agents organized in three hierarchical levels. The High-level receives the mission to be carried out and decides on its feasibility. The Intermediate-level defines the desired positions for each dof of the robot. The Low-level provides the driving torque inputs to the motors of the mobile base and the manipulator arm.

Key-Words— Agent-based architecture, Coordinated control, Mobile manipulator, Kinematics analysis, RobuTER/ULM.
I. INTRODUCTION
A mobile manipulator, which has a manipulator arm mounted upon a wheeled mobile base, combines the navigation of a mobile base with the control of a static manipulator arm. The mobility of the base enlarges the workspace and increases the operational capability and flexibility of the manipulator arm [1]. This mobility allows the arm to perform manipulation tasks in a much larger workspace than that of a fixed-base manipulator arm. Such robots permit the most common robotic missions, which require both locomotion and manipulation abilities [2]. However, the many degrees of freedom (dof) present new challenges, and the coordinated control of the robot becomes very difficult. What makes the coordinated control of mobile manipulators difficult? There are three main points [3]:
• Combining a mobile base and a manipulator arm creates redundancy [4]. A particular point in the workspace may be reached by moving the manipulator arm, by moving the mobile base, or by a combined motion of both.
• The manipulator arm and the mobile base dynamically interact with each other, and the mobile base has a slower dynamic response than the manipulator arm.
• The wheeled mobile base is often subject to nonholonomic constraints, while the manipulator arm is usually unconstrained [5].
These three issues must be taken into account in developing a coordinated controller for such robots.
In recent years, a number of researchers have studied the coordinated control of mobile manipulators. This research has led to two types of approaches [3]: decentralized control approaches and centralized control approaches. In decentralized approaches, the mobile base and the manipulator arm are controlled separately, neglecting the dynamic interaction between them. Such a strategy is appropriate when the coupled dynamics is not significant (e.g., when the robot moves at low speed) [3]. The authors in [6] implement a behavior-based controller on a mobile manipulator to make it able to open a door. The locomotion control system, the manipulator control system and the sensor systems cooperate in order to realize such behavior. Petersson et al. [7] propose an architecture that enables the integration of the manipulator arm into a behavior-based control structure of the mobile base. This architecture combines existing techniques for navigation and mobility with a flexible control system for the manipulator arm. In centralized approaches, the mobile manipulator is regarded as a redundant robot where the redundancy is introduced by the motion of the mobile base [3]. Erden et al. [8] describe a multi-agent control system for a service mobile manipulator that interacts with a human during an object delivery and hand-over task in two dimensions. The identified agents of the system are controlled using fuzzy control, and the membership functions of the fuzzy controller are tuned using genetic algorithms. The authors in [3] propose a three-level neural-network-based hierarchical controller. The bottom level controls each joint motor independently. The middle level consists of a neural network and two sub-controllers. The high level is a task-planning unit that defines the desired motion trajectories of each dof. The approach described in this paper differs from both the decentralized and centralized approaches exposed above.
The unified model of the mobile manipulator is derived from the sub-models describing the mobile base and the manipulator arm. The controller takes into account the kinematic constraints on the mobile base motion (e.g., nonholonomic constraints) and the constraints on the end-effector (e.g., following imposed trajectories). First of all, the studied mobile manipulator is described. Then, the kinematics and differential kinematics of both the mobile base and the manipulator arm are analyzed. Next, the agent-based coordinated control architecture of mobile manipulators is described. After that,
preliminary results on the execution of a typical mission are exposed. Finally, conclusions and future works are presented.

II. THE STUDIED SYSTEM
The RobuTER/ULM [9] mobile manipulator, shown in Fig. 1, is composed of a mobile base surmounted by a manipulator arm. The robot is controlled by an on-board industrial PC and by four MPC555 microcontrollers. The first one controls the mobile base. The second and the third control the first three and the last three joints of the arm. The last MPC555 controls the effort sensor. The rectangular mobile base comprises two driving wheels ensuring its mobility and two castor wheels ensuring its stability. The mobile base has 24 ultrasonic sensors placed around it, a laser measurement system at the front (SICK200) and an odometric sensor on each driving wheel. A six-dof ultra-light manipulator (ULM) arm with a two-finger electric gripper is mounted upon the mobile base. All joints are revolute. The ULM is equipped with an incremental encoder on each joint. Also, a six-dof effort sensor and a CCD camera are installed on the gripper.
• Manipulator arm reference frame R_M = (O_M, x_M, y_M, z_M): assigned to the manipulator arm base, in which the position coordinates and the orientation of the end-effector are calculated. R_M is represented in R_B by the position of its origin O_M.

B. Kinematics analysis of the manipulator arm
For the kinematics modeling of the ULM manipulator arm, the position coordinates and the orientation of the end-effector are calculated using the Modified Denavit-Hartenberg (MDH) representation [10]. The location of the end-effector is calculated in R_M by (1):

$$^{M}T_E = {}^{M}T_2 \, {}^{2}T_3 \, {}^{3}T_4 \, {}^{4}T_5 \, {}^{5}T_6 \, {}^{6}T_E \quad (1)$$

^{M}T_E is the transformation matrix defining the end-effector frame R_E in R_M, ^{M}T_2 is the transformation matrix defining R_2 in R_M, ^{6}T_E is the transformation matrix defining R_E in R_6, and ^{k-1}T_k is the transformation matrix defining R_k in R_{k-1}. ^{M}T_2, ^{6}T_E and ^{k-1}T_k (k = 3…6) are given by (2) [11]:

$$^{k-1}T_k = \begin{bmatrix}
\cos\theta_k & -\sin\theta_k & 0 & a_k \\
\cos\alpha_k \sin\theta_k & \cos\alpha_k \cos\theta_k & -\sin\alpha_k & -d_k \sin\alpha_k \\
\sin\alpha_k \sin\theta_k & \sin\alpha_k \cos\theta_k & \cos\alpha_k & d_k \cos\alpha_k \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (2)$$
Figure 1. The RobuTER/ULM mobile manipulator

Figure 2. MDH parameters of the ULM manipulator arm
III. KINEMATICS ANALYSIS OF A MOBILE MANIPULATOR
The kinematics analysis of mobile manipulators has three parts: the first is the manipulator arm kinematics; the second is the mobile base kinematics; the last is the hybrid kinematics of the mobile manipulator.
A. Reference frames
The kinematics analysis of mobile manipulators relies on three main reference frames:
• Absolute reference frame R_A = (O_A, x_A, y_A, z_A): fixed in the plane of motion. The position coordinates and orientation of the robot are expressed along the x_A, y_A and z_A axes respectively.
• Mobile base reference frame R_B = (O_B, x_B, y_B, z_B): y_B is along the common axis of the two driving wheels; x_B is perpendicular to y_B and passes through the mid-point of the line connecting the two driving-wheel centers. R_B is represented in R_A by the position of its origin O_B.
The different parameters αk, dk, θk and ak of the ULM manipulator arm, shown in Fig. 2, are given in Table 1 [12].

Table 1. MDH parameters of the ULM manipulator arm
k   αk (°)   dk (mm)        θk   ak (mm)    QMin (°)   QMax (°)
1   0        d1 = 290       θ1   0          −95        96
2   90       d2 = 108.49    θ2   0          −24        88
3   −90      d3 = 113       0    a3 = 402   –          –
4   90       0              θ3   0          −2         160
5   90       d4 = 389       θ4   0          −50        107
6   −90      0              θ5   0          −73        40
7   90       deff = 220     θ6   0          −91        91
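As a sanity check, the chain of MDH transforms in (1)-(2) with the parameters of Table 1 can be sketched in Python. The mapping of the table rows to code is our reading of the table (angles converted to radians, row k = 3 is a fixed frame); the function names are ours:

```python
import numpy as np

def mdh_transform(alpha, d, theta, a):
    """Homogeneous transform between consecutive frames,
    Modified Denavit-Hartenberg convention (equation (2))."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a],
        [ca * st,  ca * ct, -sa, -d * sa],
        [sa * st,  sa * ct,  ca,  d * ca],
        [0.0,      0.0,     0.0,  1.0]])

def arm_fk(q):
    """M_T_E, equation (1): end-effector pose in the arm frame R_M.
    q holds the six joint angles theta_1..theta_6 in radians;
    the rows below follow Table 1 (alpha in rad, d and a in mm)."""
    rows = [(0.0,         290.0,   q[0],   0.0),
            (np.pi / 2,   108.49,  q[1],   0.0),
            (-np.pi / 2,  113.0,   0.0,  402.0),   # fixed frame, k = 3
            (np.pi / 2,     0.0,   q[2],   0.0),
            (np.pi / 2,   389.0,   q[3],   0.0),
            (-np.pi / 2,    0.0,   q[4],   0.0),
            (np.pi / 2,   220.0,   q[5],   0.0)]
    T = np.eye(4)
    for alpha, d, theta, a in rows:
        T = T @ mdh_transform(alpha, d, theta, a)
    return T
```

The returned matrix carries the end-effector position in its fourth column and its orientation in the first three, as described below.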
The position of the end-effector is given by the fourth column of the matrix ^{M}T_E. The orientation of the end-effector can be obtained from the first three columns, which are the normal direction vector, the orientation vector and the approach direction vector, respectively [3].
C. Kinematics analysis of the mobile base
Assuming that the robot moves in the plane, the kinematics model of the mobile base is described by three parameters: (Xb, Yb), the Cartesian coordinates of the mobile base, and θ, its orientation angle. The mobile base must know its current location in order to move to new ones. During navigation, the mobile
base calculates, by odometry, its position and orientation angle in real time. Three cases are distinguished and studied below, where (see Fig. 3):
• Pk, Pk+1: current and next position of the mobile base.
• Δθk: elementary rotation of the mobile base.
• VLk, VRk: left and right wheel linear velocities at position Pk.
• Vk: velocity of the mobile base at position Pk.
• ωLk, ωRk: left and right wheel angular velocities at position Pk.
• ωk: rotation velocity of the mobile base.
• ΔDLk, ΔDRk: left and right elementary advances.
• ΔDk: elementary displacement of the mobile base.
• φk: yaw angle at Pk.
• Rk: steering radius of the mobile base at Pk.
• L: half distance between the two driving wheels.
• r: radius of the driving wheels.
Figure 3. Anticlockwise motion of the mobile base
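The odometric position update derived in this section (equations (5)-(8)) amounts to a few lines of code. The following sketch, with function and argument names of our choosing, assumes the elementary wheel advances ΔDLk and ΔDRk are already available from the encoders:

```python
import math

def odometry_step(xb, yb, theta, dD_L, dD_R, half_track_L):
    """One odometric update of the mobile base pose.
    dD_L, dD_R: elementary advances of the left and right wheels;
    half_track_L: half distance L between the driving wheels."""
    d_theta = (dD_R - dD_L) / (2.0 * half_track_L)   # (5)
    theta_next = theta + d_theta                      # (6)
    dD = (dD_R + dD_L) / 2.0                          # (7)
    xb_next = xb + dD * math.cos(theta_next)          # (8)
    yb_next = yb + dD * math.sin(theta_next)
    return xb_next, yb_next, theta_next
```

Equal advances give a straight-line step (case 3 below), opposite advances give a pure rotation, and any other combination follows an arc of radius Rk.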
1) Clockwise motion: To calculate the location of the mobile base, the traveled distances can be represented by:

$$\begin{pmatrix} \Delta D_{Lk} \\ \Delta D_{Rk} \end{pmatrix} = \begin{pmatrix} R_k + L \\ R_k - L \end{pmatrix} \varphi_k \quad (3)$$

From this equation, the steering radius and the yaw angle can be expressed as:

$$\begin{pmatrix} R_k \\ \varphi_k \end{pmatrix} = \begin{pmatrix} L \, \dfrac{\Delta D_{Lk} + \Delta D_{Rk}}{\Delta D_{Lk} - \Delta D_{Rk}} \\[2mm] \dfrac{\Delta D_{Lk} - \Delta D_{Rk}}{2L} \end{pmatrix} \quad (4)$$

Also:
$$\Delta\theta_k = \frac{\Delta D_{Rk} - \Delta D_{Lk}}{2L} \quad (5)$$

So, the next orientation angle is given by:
$$\theta_{k+1} = \theta_k + \Delta\theta_k \quad (6)$$

The elementary displacement of the mobile base:
$$\Delta D_k = \frac{\Delta D_{Rk} + \Delta D_{Lk}}{2} \quad (7)$$

Finally, the next position of the mobile base is given by:
$$\begin{pmatrix} Xb_{k+1} \\ Yb_{k+1} \end{pmatrix} = \begin{pmatrix} Xb_k \\ Yb_k \end{pmatrix} + \Delta D_k \begin{pmatrix} \cos\theta_{k+1} \\ \sin\theta_{k+1} \end{pmatrix} \quad (8)$$

To calculate the differential kinematics model of the mobile base:
$$V_{Rk} = -r\,\omega_{Rk} = (R_k - L)\,\omega_k \quad (9)$$
$$V_{Lk} = r\,\omega_{Lk} = (R_k + L)\,\omega_k \quad (10)$$

From (9) and (10), ωk and Vk are given as follows:
$$\omega_k = r\,\frac{\omega_{Lk} + \omega_{Rk}}{2L} \quad (11)$$
$$V_k = \frac{V_{Rk} + V_{Lk}}{2} = r\,\frac{\omega_{Lk} - \omega_{Rk}}{2} \quad (12)$$

Finally, the differential kinematics model of the mobile base is given by (13):
$$\begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{pmatrix} = \begin{pmatrix} \cos\theta_k & 0 \\ \sin\theta_k & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} V_k \\ \omega_k \end{pmatrix} \quad (13)$$

2) Anticlockwise motion: In this second case, the distances traveled by the robot can be represented by:
$$\begin{pmatrix} \Delta D_{Lk} \\ \Delta D_{Rk} \end{pmatrix} = \begin{pmatrix} R_k - L \\ R_k + L \end{pmatrix} \varphi_k \quad (14)$$

So:
$$\begin{pmatrix} \varphi_k \\ R_k \end{pmatrix} = \begin{pmatrix} \dfrac{\Delta D_{Rk} - \Delta D_{Lk}}{2L} \\[2mm] L \, \dfrac{\Delta D_{Rk} + \Delta D_{Lk}}{\Delta D_{Rk} - \Delta D_{Lk}} \end{pmatrix} \quad (15)$$

Δθk and θk+1 are given by (5) and (6) respectively. The elementary displacement of the mobile base is given by (7) and the next position is given by (8). In anticlockwise motion:
$$V_{Rk} = r\,\omega_{Rk} = (R_k + L)\,\omega_k \quad (16)$$
$$V_{Lk} = -r\,\omega_{Lk} = (R_k - L)\,\omega_k \quad (17)$$

The differential kinematics model of the mobile base is given by (13), where ωk is given by (11) and Vk is given by:
$$V_k = \frac{V_{Rk} + V_{Lk}}{2} = r\,\frac{\omega_{Rk} - \omega_{Lk}}{2} \quad (18)$$

3) Straight line motion: The elementary displacement of the mobile base:
$$\Delta D_k = \Delta D_{Rk} = \Delta D_{Lk} \quad (19)$$
Also, Δθk = 0, so:
$$\theta_{k+1} = \theta_k \quad (20)$$
Finally, the next position of the mobile base is given by (8). The differential kinematics model is given by (13), where ωk = 0 and:
$$V_k = V_{Rk} = V_{Lk} = r\,\omega_{Rk} = r\,\omega_{Lk} \quad (21)$$

D. Kinematics analysis of the mobile manipulator
For the coordinated control of a mobile manipulator, a hybrid kinematics analysis is needed. It involves the interaction between the mobile base and the manipulator arm. The location of the end-effector is given in R_A by (23):
$$^{A}T_E = {}^{A}T_B \, {}^{B}T_M \, {}^{M}T_E \quad (23)$$
^{A}T_E is the transformation matrix defining R_E in R_A, ^{A}T_B is the transformation matrix defining R_B in R_A, ^{B}T_M is the transformation matrix defining R_M in R_B, and ^{M}T_E is the transformation matrix defining R_E in R_M, obtained by (1). ^{A}T_B is given by (24):
$$^{A}T_B = \begin{bmatrix}
\cos\theta & -\sin\theta & 0 & X_B \\
\sin\theta & \cos\theta & 0 & Y_B \\
0 & 0 & 1 & Z_B \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (24)$$
^{B}T_M is given by (25):
$$^{B}T_M = \begin{bmatrix}
1 & 0 & 0 & X_M \\
0 & 1 & 0 & Y_M \\
0 & 0 & 1 & Z_M \\
0 & 0 & 0 & 1
\end{bmatrix} \quad (25)$$
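Under the assumption of planar motion, the composition (23)-(25) can be written out numerically. The sketch below uses the RobuTER/ULM offsets quoted in the text (Z_B = 120 mm, X_M = 30 mm, Z_M = 520 mm, converted to millimeters) as default values; the function names are ours:

```python
import numpy as np

def base_transform(xb, yb, theta, zb=120.0):
    """A_T_B, equation (24): pose of the mobile base in R_A (mm, rad)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, xb],
                     [s,  c, 0.0, yb],
                     [0.0, 0.0, 1.0, zb],
                     [0.0, 0.0, 0.0, 1.0]])

def arm_mount_transform(xm=30.0, ym=0.0, zm=520.0):
    """B_T_M, equation (25): fixed offset of the arm base in R_B."""
    T = np.eye(4)
    T[:3, 3] = [xm, ym, zm]
    return T

def end_effector_in_world(xb, yb, theta, M_T_E):
    """A_T_E = A_T_B . B_T_M . M_T_E, equation (23)."""
    return base_transform(xb, yb, theta) @ arm_mount_transform() @ M_T_E
```

With θ = 0 and an identity arm pose, the end-effector position in R_A is simply the base position shifted by the fixed mounting offsets, which makes the composition easy to check by hand.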
As shown in Fig. 4, (X_B, Y_B, Z_B) are the Cartesian coordinates of O_B in R_A. Also, (X_M, Y_M, Z_M) are the Cartesian coordinates of O_M in R_B. In the case of the RobuTER/ULM, Z_B = 12 cm, X_M = 3 cm, Y_M = 0 cm and Z_M = 52 cm. (X_B, Y_B) are the position coordinates of the mobile base and θ its orientation angle; these last three parameters are calculated by (8) and (6) respectively.

Figure 4. Kinematics parameters of the RobuTER/ULM

IV. AGENT-BASED CONTROL ARCHITECTURE

The proposed agent-based control architecture of mobile manipulators, shown in Fig. 5, was first presented in [13]. Next, it was extended for the remote control of the RobuTER/ULM in [14]. The architecture is developed with the purpose of making the robot partially or fully autonomous. The agent-based controller consists of six agents organized in three levels. Their basic functions are the following:

A. The High-level
The Supervisory agent (SA) constitutes the High-level. SA receives the mission to be carried out by the robot and decides on its feasibility. If the mission is accepted, SA interprets it into a plan of operations. Each operation is then distributed to the corresponding agent for execution.

B. The Intermediate-level
The High Mobile Robot (HMRA), the High Manipulator Robot (HARA) and the Vision System (VSA) agents constitute the Intermediate-level. They cooperate in order to define the desired positions for each dof of the robot.
1) High Mobile Robot agent (HMRA): HMRA cooperates with the other agents (HARA, VSA) and sends commands to LMRA for execution. It also receives reports on the execution of operations and sensor information from LMRA in order to maintain a correct representation of the environment in which the robot evolves.
2) High Manipulator Robot agent (HARA): This agent cooperates with the other agents (HMRA, VSA) in order to reach a compromise on the execution of the assigned mission and sends the orders to LARA for execution. It also receives sensor information and reports on the execution of the sent operations.
3) Vision System agent (VSA): VSA observes the environment through the camera installed on the robot and extracts useful information for the execution of operations (filtering, binarisation, segmentation, and computation of 2D and 3D coordinates of objects).
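Each agent in Fig. 5 combines Supervision, Perception, Decision, Action and Communication modules. A minimal sketch of this decomposition follows; the class and method names are ours, not the authors' implementation:

```python
import queue

class ControlAgent:
    """One agent of the architecture, decomposed into the modules of
    Fig. 5: Communication (inbox), Perception, Decision and Action."""

    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()   # Communication module: incoming messages
        self.world = {}              # internal representation of the environment

    def perceive(self, sensor_data):
        # Perception module: fold new sensor readings into the world model.
        self.world.update(sensor_data)

    def decide(self, request):
        # Decision module: map a request to a concrete command,
        # e.g. HMRA turning "Move Base" into "Move(New Position)".
        return ("Move", request)

    def act(self, command, lower_agent):
        # Action module: forward the command to the agent one level down.
        lower_agent.inbox.put((self.name, command))
```

The same skeleton can stand for any of the six agents; only the body of `decide` differs per level.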
Figure 5. Agent-based coordinated control architecture
C. The Low-level
The Low Mobile Robot (LMRA) and the Low Manipulator Robot (LARA) agents constitute the Low-level. This level controls each joint motor independently by providing the driving torque inputs to the motors of the mobile base wheels and of the manipulator arm joints.
1) Low Mobile Robot agent (LMRA): LMRA ensures the local control of the mobile base by executing the commands offered by HMRA. In addition, this agent scans the various proprioceptive and exteroceptive sensors equipping the mobile base and sends useful information to HMRA in order to maintain an up-to-date representation of the environment.
2) Low Manipulator Robot agent (LARA): This agent controls the motion of the manipulator arm by executing the movements offered by HARA. It also collects information from the sensors equipping the manipulator arm and sends them to HARA for processing.

V. EXECUTION OF A TYPICAL MISSION
The coordinated control of a mobile manipulator is illustrated here through an example of a typical mission. The mission consists of grasping an object at a Target position with the end-effector of the robot [15].

A. Execution diagram of the mission
First of all, SA verifies whether the object to be grasped can be reached by the end-effector of the robot. In such a case, a request (Grasp Object (Target)) is sent to HARA. Otherwise, a message is displayed to the operator (Mission impossible). If the mission is accepted, HARA tests whether the object belongs to its current workspace.
• The object belongs to the current workspace of the manipulator arm: HARA calculates Qi (i = 1…dof) by using the Inverse Kinematics Model (IKM) of the ULM manipulator arm and sends a request (Move(Q1, .., Qdof), Open Gripper) to LARA. Receiving this request, LARA opens the gripper, executes the movement and, when finished, informs HARA that the Gripper is opened and the End-effector is in Target position. HARA then sends a request to Close the gripper. LARA closes the gripper and responds with Gripper closed. Finally, HARA sends Object grasped to SA.
• The object does not belong to the current workspace of the manipulator arm: HARA sends a request (Move Base) to HMRA. The latter calculates a New Position of the mobile base (the New Position is calculated so that the object belongs to the new workspace of the manipulator arm) and sends a request (Move (New Position)) to LMRA. LMRA moves towards the New Position while avoiding possible obstacles on its trajectory. On arrival, LMRA informs LARA that the New Position is reached. Now, the object belongs to the new workspace of the manipulator arm, and the process continues as in the preceding case.
At the end of the mission, SA displays a message (Mission accomplished). The process explained previously is shown in Fig. 6 [15].

B. Experimental results
For this experiment, the following assumptions are considered:
• All positions are given in cm and in degrees.
• The initial position of the mobile base (Fig. 7a) is (0, 0, 0°). The initial position of the manipulator arm, given in R_M (Fig. 7a), is (51.63, –10.85, 66.5, –90°, –90°, –90°).
• The Target position of the end-effector, given in R_B (Fig. 7b), is (–517, –392.5, 25.30, –152°, –128°, –76°).
When HARA receives the (Grasp Object (Target)) request, it realizes that the object does not belong to its current workspace. So, it sends a request (Move Base) to HMRA.
1. HMRA calculates the New Position of the mobile base: New Position = (–517, –321, 65°).
Figure 6. Grasping an object by the end-effector of the mobile manipulator
2. HMRA sends a request (Move (New Position)) to LMRA.
3. LMRA moves towards the New Position while avoiding possible obstacles on its trajectory (Fig. 8).
4. On arrival, LMRA informs LARA that the New Position is reached.
5. HARA calculates Qi (i = 1…6) by using the IKM of the manipulator arm: Q = (–53°, 9°, 30°, 30°, 14°, –10°).
6. HARA sends a request (Move (Q1, .., Q6), Open Gripper) to LARA.
7. LARA opens the gripper and executes the movement (Fig. 9a).
8. When the movement is executed, LARA informs HARA that the Gripper is opened and the End-effector is in Target position.
9. HARA sends a request to Close the gripper. LARA closes the gripper and responds with Gripper closed (Fig. 9b).
10. HARA sends Object grasped to SA.
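The message sequence of steps 1-10 can be replayed as a simple trace. The agent and message names follow the text; the dispatch helper and function name below are ours, for illustration only:

```python
def run_grasp_mission(target_in_workspace):
    """Replay the inter-agent messages of the grasping mission.
    Returns the ordered list of (sender, receiver, message) tuples."""
    log = []
    def send(src, dst, msg):
        log.append((src, dst, msg))

    send("SA", "HARA", "Grasp Object (Target)")
    if not target_in_workspace:
        # Base repositioning branch (steps 1-4 of the experiment).
        send("HARA", "HMRA", "Move Base")
        send("HMRA", "LMRA", "Move (New Position)")
        send("LMRA", "LARA", "New Position reached")
    # Arm motion and grasping (steps 5-10).
    send("HARA", "LARA", "Move(Q1, .., Q6), Open Gripper")
    send("LARA", "HARA", "Arm in position, Gripper opened")
    send("HARA", "LARA", "Close Gripper")
    send("LARA", "HARA", "Gripper closed")
    send("HARA", "SA", "Object grasped")
    return log
```

Calling `run_grasp_mission(False)` reproduces the experiment above, where the object initially lies outside the arm workspace.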
Figure 7. Initial and final position of the RobuTER/ULM
Figure 8. Moving towards the New Position while avoiding obstacles
Figure 9. Grasping the object by the gripper of the robot

VI. CONCLUSION
The coordinated control of mobile manipulators has been investigated in this paper. Firstly, the hardware system of the studied mobile manipulator has been described. Secondly, the kinematics sub-models of both the manipulator arm and the wheeled mobile base have been analyzed; the unified kinematics of the mobile manipulator is established from these two sub-models. Thirdly, the agent-based coordinated control architecture of mobile manipulators has been described. Finally, preliminary results have been presented. The controller consists of six agents organized in three hierarchical levels. The High-level receives the mission to be carried out and decides whether or not it can be executed by the robot. The Intermediate-level calculates the desired positions for each dof of the robot. The Low-level controls each joint motor by providing the driving torque inputs to the motors of the robot. The validation of the proposed coordinated controller is being carried out on the RobuTER/ULM robot and will be presented in future works.

REFERENCES
[1] P. Nebot, G. Saintluc, B. Berton, E. Cervera. Agent-based Software Integration for a Mobile Manipulator. 2004 IEEE International Conference on Systems, Man and Cybernetics, Vol. 7, pp. 6167-6172, 10-13 October 2004.
[2] B. Bayle, J.Y. Fourquet, M. Renaud. Manipulability of Wheeled Mobile Manipulators: Application to Motion Generation. International Journal of Robotics Research, 22(7), pp. 565-581, 2003.
[3] Y. Chen, L. Liu, M. Zhang, H. Rong. Study on Coordinated Control and Hardware System of a Mobile Manipulator. Proceedings of the 6th World Congress on Intelligent Control and Automation, Dalian, China, pp. 9037-9041, June 21-23, 2006.
[4] Y. Yamamoto, X. Yun. Control of mobile manipulators following a moving surface. Proceedings of the International Conference on Robotics and Automation, Atlanta, USA, Vol. 3, pp. 1-6, May 1993.
[5] Y. Yamamoto, X. Yun. Coordinating locomotion and manipulation of a mobile manipulator. IEEE Transactions on Automatic Control, 39(6), pp. 1326-1332, June 1994.
[6] B.J.W. Waarsing, M. Nuttin, H. Van Brussel. Behaviour-based mobile manipulation: the opening of a door. Proceedings of ASER'03, Bardolino, Italy, pp. 168-175, 2003.
[7] L. Petersson, M. Egerstedt, H.I. Christensen. A hybrid control architecture for mobile manipulation. Proceedings of IROS'99, Kyongju, Korea, pp. 1285-1291, 1999.
[8] M.S. Erden, K. Leblebicioglu, U. Halici. Multi-Agent System-Based Fuzzy Controller Design with Genetic Tuning for a Mobile Manipulator Robot in the Hand Over Task. Journal of Intelligent and Robotic Systems, 39(3), pp. 287-306, 2004.
[9] http://www.robosoft.fr
[10] W. Khalil, J.F. Kleinfinger. A new geometric notation for open and closed-loop robots. Proceedings of the IEEE International Conference on Robotics and Automation, USA, pp. 1175-1180, 1986.
[11] E. Dombre, W. Khalil. Modeling, Performance Analysis and Control of Robot Manipulators. Control Systems, Robotics and Manufacturing Series, Hermes Science Publishing, 2007.
[12] A. Hentout, B. Bouzouia, R. Toumi, Z. Toukal. Implementation of basic functionalities of a driving architecture of mobile manipulator robots. The Scientific Research Outlook & Technology Development in the Arab World (SRO'05), Fez, Morocco, 25-30 October 2008.
[13] A. Hentout, B. Bouzouia, Z. Toukal. Multi-agent Architecture Model for Driving Mobile Manipulator Robots. International Journal of Advanced Robotic Systems, 5(3), pp. 257-268, 2008.
[14] A. Hentout, B. Bouzouia, R. Toumi, Z. Toukal. Multi-agent Remote Control of the RobuTER/ULM Mobile Manipulator Robot. The 5th IEEE International Conference on Mechatronics (ICM'09), Málaga, Spain, 14-17 April 2009.
[15] A. Hentout, B. Bouzouia, R. Toumi, Z. Toukal. Agent-based Control of Remote Mobile Manipulator Robotic Systems. 8ème Congrès International de Génie Industriel (CIGI2009), Bagnères de Bigorre, France, 10-12 June 2009.