Proceedings of the 2001 IEEE International Conference on Robotics & Automation Seoul, Korea • May 21-26, 2001

Design and Implementation of Remotely Operation Interface for Humanoid Robot

Satoshi Kagami, James J. Kuffner Jr., Koichi Nishiwaki, Tomomichi Sugihara, Takashi Michikata, Takuma Aoyama, Masayuki Inaba, Hirochika Inoue

Dept. of Mechano-Informatics, Univ. of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan. Email: [email protected]

Abstract

This paper describes the design of a remote operation interface for a humanoid robot with the following three functions: 1) a body-DOF control interface operating through a 3D robot model in a virtual environment, backed by two types of online stabilizing software for maintaining body balance; 2) an environment recognition and display interface for sensors, sound, and 3D vision; 3) a voice and face recognition interface for interacting with the operator and with a human in front of the robot. The implementation and experiments on our humanoid robot H6 are then described.

1 Introduction

Recently, research on humanoid robots has become an active field in the robotics community, and many elemental functions have been proposed. Bipedal dynamic walking, soft skin, 3D vision, and motion planning in particular are progressing rapidly. However, in order to achieve a humanoid robot that works in the human world together with human beings, not only the elemental functions but also their integration becomes an important problem. At present many humanoid robots have been developed, but almost all of them are designed for bipedal locomotion experiments. To achieve both locomotion and high-level behavior by integrating tactile sensing, 3D-vision-based perception, and motion, a robot needs good mechanism, hardware, and software functionality. Full-body behavior with many contact points to the environment in particular is an important problem; however, such motion requires a more sophisticated body design. We have developed the child-size full-body humanoid "H6" (height: 1370mm, weight: 51kg) for research on vision/tactile and motion coupling behavior [1].



Dynamically stable walking pattern generation, motion planning, and 3D vision functions have been studied using H6 [2, 3]. However, these functions still remain too primitive for autonomous behavior of a humanoid robot. We therefore adopt a network operation task, because it requires low-level autonomy for stability and task execution. The Humanoid Robot Project (HRP: MITI, Japan) currently has a sub-project on operating a humanoid robot through a network [4]; however, instead of developing low-level autonomy, it depends strongly on virtual reality technology and on the availability of a human operator controlling the robot. Low-level autonomy will be required for research on higher-level autonomous behavior of humanoid robots. Therefore, beyond tele-operated applications, this autonomy will also be useful for developing everything from low-level functions to high-level autonomous behavior in various environments. In this paper, the humanoid robot H6 is controlled remotely in order to achieve a humanoid robot that works in the human world together with human beings.

2 Network Operation Interface for Humanoid Robot

There exist three major requirements for a network-operated humanoid robot interface.

The first requirement is an interface for controlling the robot body joints. When controlling a humanoid robot remotely, it is impossible to control the robot by adjusting each joint manually. Even with a master-slave controller, it is very hard to control the robot while satisfying the dynamic stability condition. Therefore, low-level autonomy and a sophisticated robot DOF control interface are required.

The second requirement is an interface for recognizing the environment. When the robot is controlled through a network in a real environment, the camera has a relatively narrow view angle, so it is hard to obtain a 3D understanding of the environment. Several methods have been proposed for tele-operated vision-based robots: i) reconstructing a 3D environment [5], ii) using virtual reality technology [4], and so on.

The third requirement is an interface for the human beings who are a) in front of the remotely controlled humanoid robot, and b) in front of the remote operation interface. Several interface methods have been proposed so far; however, none has been examined on a tele-operated humanoid robot.

Therefore, the requirements for a network operation interface of a humanoid robot are as follows:

- Robot control interface
- Environment interface
- Human interaction interface

3 Robot Control Interface

The robot control interface is divided into two major functions. The former is an interface for working mostly using the hands. For this purpose, we propose a layered system as follows: 1) a virtual puppet interface for intuitively controlling the DOFs, and 2) "AutoBalancer", which compensates a given motion into a dynamically stable one online. The latter is an interface for walking, mostly using the legs. Here we propose an online walking pattern generation method that combines pre-calculated discrete patterns.


3.1 Virtual Puppet Interface

Since a humanoid robot has many degrees of freedom, it is hard to maintain dynamic stability by directly controlling its joints. Many approaches have been proposed, especially for robot arms; however, relatively few have been proposed for a human-shaped robot. We previously proposed the master-slave humanoid robot interface "Puppet", which has the same DOF arrangement as the real robot. Such master-slave methods have the advantage of intuitive control; however, both bilateral and unilateral methods are difficult to apply to a human-shaped body, since in the bilateral case the body configuration may change dramatically, and in the unilateral case the device configuration may be completely different from the real body. Instead of using a real device to control the robot body, we propose the virtual puppet interface concept. The robot configuration and its behavior can be simulated together with an environment model and displayed on current 3D CG hardware. Fig. 1 shows the virtual puppet interface in a virtual environment; the human operator can control end-effector position and orientation with the mouse.

Figure 1: H6 Control Interface
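To make the dragging concrete, the following is a minimal sketch of how a mouse-dragged end-effector target can be turned into joint angles with damped least-squares inverse kinematics. The planar 3-link arm, its link lengths, and the solver parameters are illustrative assumptions, not H6's actual kinematics or code.

```python
# Minimal sketch: drag an end-effector target, solve joints by damped
# least-squares IK. The 3-link planar arm is a hypothetical stand-in.
import numpy as np

LINK_LENGTHS = [0.3, 0.25, 0.15]  # hypothetical segment lengths [m]

def forward_kinematics(q):
    """End-effector (x, y) of a planar chain with joint angles q."""
    x = y = total = 0.0
    for L, qi in zip(LINK_LENGTHS, q):
        total += qi
        x += L * np.cos(total)
        y += L * np.sin(total)
    return np.array([x, y])

def jacobian(q, eps=1e-6):
    """Numerical Jacobian of end-effector position w.r.t. joints."""
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        dq = np.zeros(len(q)); dq[i] = eps
        J[:, i] = (forward_kinematics(q + dq) - forward_kinematics(q - dq)) / (2 * eps)
    return J

def drag_to(q, target, iters=100, damping=0.1):
    """Damped least-squares IK loop: follow a mouse-dragged target."""
    for _ in range(iters):
        err = target - forward_kinematics(q)
        J = jacobian(q)
        # dq = J^T (J J^T + lambda^2 I)^(-1) err  (damped pseudo-inverse)
        q = q + J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), err)
    return q

q = drag_to(np.zeros(3), target=np.array([0.4, 0.3]))
print(q, forward_kinematics(q))   # joints reaching close to the target
```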

3.2 AutoBalancer

"AutoBalancer" reactively generates a stable motion for a standing humanoid robot online from a given motion pattern [6]. The system consists of two parts: a planner for state transitions based on the relationship between the legs and the ground, and a dynamic balance compensator that solves the balance problem as a second-order nonlinear programming optimization under several conditions. The latter can compensate the centroid position and the tri-axial moments of any standing motion, using all joints of the body in real time. The complexity of AutoBalancer is O((p + c)^3), where p is the number of DOFs and c is the number of condition equations. Any motion input from the virtual puppet interface is therefore dynamically compensated by AutoBalancer.
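The following is a minimal sketch of the compensation idea as a constrained optimization: stay as close as possible to the commanded posture while keeping the (here linearized) COG over the support region. The random COG Jacobian, support width, and the SciPy solver are illustrative assumptions; the real AutoBalancer solves a second-order nonlinear program over all joints with additional tri-axial moment conditions.

```python
# Minimal sketch of balance compensation as constrained optimization.
# The linearized COG model and all constants are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N_DOF = 10
J_cog = rng.normal(size=(2, N_DOF)) * 0.05   # hypothetical COG Jacobian (x, y)
cog0 = np.array([0.08, 0.0])                 # COG of commanded posture: off balance
SUPPORT = 0.05                               # half-width of support region [m]

def compensate(q_cmd):
    """Find joints near q_cmd whose (linearized) COG stays inside support."""
    cog = lambda q: cog0 + J_cog @ (q - q_cmd)
    cons = {"type": "ineq",
            "fun": lambda q: np.concatenate([SUPPORT - cog(q), cog(q) + SUPPORT])}
    res = minimize(lambda q: np.sum((q - q_cmd) ** 2), q_cmd,
                   constraints=cons, method="SLSQP")
    return res.x

q_cmd = rng.normal(scale=0.3, size=N_DOF)    # some commanded posture
q_bal = compensate(q_cmd)
print(np.round(q_bal - q_cmd, 3))            # small whole-body correction
```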

3.3 Walk Interface

3.3.1 Walk Trajectory Generation

For walking, it is also hard to control every DOF interactively. We previously proposed an offline dynamically stable trajectory generation method for a humanoid robot [2]. From a given input motion and the desired ZMP trajectory, the algorithm generates a dynamically stable trajectory using the relationship between the robot's center of gravity (COG) and the ZMP. A simplified robot model representing this relationship is introduced, and it is then shown that a horizontal shift of the torso satisfies the given desired ZMP trajectory.

Let the z axis be the vertical axis, and the x and y axes be the components of the sagittal and lateral planes, respectively. First, we model the humanoid robot by representing the motion and rotation of the COG. Let the total mass of the robot be $m_{total}$, the COG be $r_{cog} = (r_{cog_x}, r_{cog_y}, r_{cog_z})$, and the total force acting on the robot be $f = (f_x, f_y, f_z)$. The ZMP $p_{cog} = (p_{cog_x}, p_{cog_y})$ around a point $p = (p_x, p_y, h)$ on the horizontal plane $z = h$ is defined as the point where the moment around $p$ is $T = (0, 0, T_z)$. Then the following differential equation is obtained:

    p^{err}_{cog}(t) = r^{err}_{cog}(t) - \frac{m_{total}\, r_{cog_z}(t)}{f_z(t)} \ddot{r}^{err}_{cog}(t)    (1)

Here $p^{err}_{cog}$ is the error between the ideal ZMP $p^{*}_{cog}$ and the current ZMP $p_{cog}$, and $r^{err}_{cog}$ is the error between the ideal COG trajectory $r^{*}_{cog}$ and the current trajectory $r_{cog}$. Finally, a convergence method is adopted to eliminate the approximation errors arising from the simplified model.
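As an illustration of how Eq. (1) can be solved, the sketch below discretizes the COG acceleration with finite differences, which turns the equation into a tridiagonal linear system for the horizontal COG trajectory. The constant COG height, $f_z = m g$ (so the mass cancels), the toy desired ZMP profile, and the pinned boundary conditions are simplifying assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of solving Eq. (1) for the horizontal COG/torso shift.
# Assumptions: constant COG height z_c, f_z = m*g, pinned endpoints.
import numpy as np

g, z_c = 9.8, 0.6          # gravity, assumed constant COG height [m]
dt, T = 0.01, 2.0          # control period and horizon [s]
n = int(T / dt)
t = np.arange(n) * dt
p_des = 0.05 * np.sign(np.sin(2 * np.pi * t / 0.8))  # toy desired ZMP_x [m]

# p_i = r_i - (z_c/g)(r_{i+1} - 2 r_i + r_{i-1})/dt^2  ->  tridiagonal A r = p
k = z_c / (g * dt * dt)
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 1.0 + 2.0 * k
    if i > 0:
        A[i, i - 1] = -k
    if i < n - 1:
        A[i, i + 1] = -k
# Boundary conditions: pin the COG at both ends of the horizon.
A[0, :] = 0; A[0, 0] = 1
A[-1, :] = 0; A[-1, -1] = 1
p = p_des.copy(); p[0] = 0.0; p[-1] = 0.0

r_cog_x = np.linalg.solve(A, p)   # horizontal COG trajectory realizing p_des
print(r_cog_x[:5])
```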

3.3.2 Online Mixture and Connection of Pre-designed Motions

In order to control the robot online, however, the offline method cannot be adopted. Therefore, online generation of the desired walking pattern is proposed. Utilizing the characteristics of the ZMP, a dynamically stable mixture of pre-designed motions is carried out to generate the desired walking motion. Eleven pre-designed translation motions are generated with the offline trajectory generation method above. The mixture requires low calculation cost, so the desired walking pattern can be generated in real time. The user only has to designate the direction and speed of the motion with a pointing device. Fig. 2 shows a joystick control experiment with humanoid H6.

Figure 2: H6 Joystick Control Interface
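A minimal sketch of the mixing step is given below, assuming each pre-designed motion is stored as a one-cycle joint trajectory and that a convex combination of dynamically stable patterns stays close to stable thanks to the approximately linear ZMP relationship. The three unit patterns and the unsigned weight normalization are illustrative stand-ins for the eleven offline-generated motions.

```python
# Minimal sketch of online mixture of pre-designed walking patterns.
# Random trajectories stand in for the offline-generated motions.
import numpy as np

N_PHASE, N_DOF = 50, 12
rng = np.random.default_rng(1)
patterns = {   # hypothetical one-cycle joint trajectories (phase x joint)
    "forward": rng.normal(size=(N_PHASE, N_DOF)),
    "side":    rng.normal(size=(N_PHASE, N_DOF)),
    "turn":    rng.normal(size=(N_PHASE, N_DOF)),
}

def blend(vx, vy, wz):
    """Convex mixture of unit patterns weighted by the joystick command."""
    w = np.abs(np.array([vx, vy, wz], dtype=float))
    w = w / max(w.sum(), 1e-9)        # normalize so the weights sum to 1
    names = ["forward", "side", "turn"]
    return sum(wi * patterns[n] for wi, n in zip(w, names))

step = blend(0.8, 0.1, 0.0)   # mostly forward with slight sideways drift
print(step.shape)             # -> (50, 12): the next cycle's joint targets
```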

4 Environment Interface

In order to control the humanoid robot remotely, not only the body-DOF interface but also an environment presentation interface is important. The humanoid robot must work with its hands and walk in a complex environment, and may be disturbed by the environment (or by human beings). There are two kinds of environment information. One is the robot's internal sensor information, which indicates the robot's current physical state. The other is 3D vision information, which is important for the operator in controlling the robot. With vision recognition software, the operator only has to say or designate, for example, "grasp this object" or "climb this step".

4.1 Sensor Interface

The internal state of the robot, tactile information, and sound data should be presented to the operator. Using the virtual puppet interface described in the previous section, this information is overlaid on the virtual robot (Fig. 3).

Figure 3: Environment Interface of H6

4.2 Depthmap Generation System

Real-time 3D vision functions are fundamentally important for a robot that behaves in the real world. Recently, several real-time 3D depth map generation systems have been proposed in the computer vision field (e.g. [7, 8]), and some commercial products are also available (e.g. [9]). However, these solutions require special hardware. Since an onbody real-time system is required for mobile robot (or other moving-camera) applications, it is hard to build an onbody system using such extra hardware. To solve this problem, we proposed a real-time depth map generation system that uses only standard PC hardware and a simple image capture card [10]. Four key techniques are adopted to achieve real-time processing and accurate range data: 1) a recursive (normalized) correlation technique, 2) cache optimization, 3) an online consistency checking method, and 4) the MMX/SSE multimedia instruction sets. Experimental results are described in [10].
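In the same spirit, the following sketch shows correlation-based matching with a left-right consistency check on a synthetic image pair. Plain SAD block matching stands in for the recursive normalized correlation, no cache or MMX/SSE optimization is attempted, and all names and parameters are illustrative assumptions.

```python
# Minimal sketch: SAD block-matching stereo + left-right consistency check.
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_d=16, win=7):
    """Winner-take-all block matching with sum-of-absolute-differences cost."""
    h, w = left.shape
    best = np.full((h, w), np.inf)
    disp = np.zeros((h, w), dtype=int)
    for d in range(max_d):
        cost = np.full((h, w), np.inf)
        # Aggregate pixel differences over a win x win window (box filter).
        cost[:, d:] = uniform_filter(np.abs(left[:, d:] - right[:, :w - d]), size=win)
        better = cost < best
        best[better] = cost[better]
        disp[better] = d
    return disp

def lr_consistent(left, right, max_d=16):
    """Reject pixels whose L->R and R->L disparities disagree."""
    dl = sad_disparity(left, right, max_d)
    dr = sad_disparity(right[:, ::-1], left[:, ::-1], max_d)[:, ::-1]
    h, w = left.shape
    rows, cols = np.mgrid[0:h, 0:w]
    target = np.clip(cols - dl, 0, w - 1)      # where each left pixel matched
    ok = np.abs(dr[rows, target] - dl) <= 1
    return np.where(ok, dl, -1)                # -1 marks rejected pixels

rng = np.random.default_rng(2)
right = rng.random((64, 96))
left = np.roll(right, 5, axis=1)               # synthetic pair, ~5 px disparity
print(np.median(lr_consistent(left, right)[:, 10:-10]))   # ~5.0
```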


4.3 Plane Segment Finder

3D plane information is very useful in artificial environments. In order to find 3D planes, we proposed the Plane Segment Finder, which combines depth map generation with a 3D Hough transformation [11]. The process comprises: 1) precise depth map generation, 2) a 3D Hough transformation to find plane segment candidates, 3) fitting the candidates to the depth map so that plane and non-plane regions can be distinguished, and 4) tracking the segmented planes to achieve real-time plane segmentation. A 3D plane is described by the parametric notation

    \rho = (x_0 \cos\theta + y_0 \sin\theta) \cos\phi + z_0 \sin\phi    (2)

The algorithm requires O(M^3) calculation cost; however, by limiting the search space (for example, the orientation of the plane), the cost is reduced. Fig. 4 shows the step finding experiment.

Figure 4: H6 Finds a Step Using the Plane Segment Finder while Walking
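A minimal sketch of the Hough voting step, using the parametrization of Eq. (2) on a synthetic point cloud, is shown below. The bin resolutions are illustrative assumptions, and the candidate-fitting and tracking stages of the real system are omitted.

```python
# Minimal sketch of 3D Hough voting for a dominant plane, per Eq. (2).
import numpy as np

def hough_plane(points, n_theta=36, n_phi=18, n_rho=60, rho_max=2.0):
    """Vote each 3D point into (theta, phi, rho) bins; return the best plane."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(-np.pi / 2, np.pi / 2, n_phi)
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=int)
    x, y, z = points.T
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            # Eq. (2): rho = (x cos th + y sin th) cos ph + z sin ph
            rho = (x * np.cos(th) + y * np.sin(th)) * np.cos(ph) + z * np.sin(ph)
            bins = np.clip(((rho + rho_max) / (2 * rho_max) * n_rho).astype(int),
                           0, n_rho - 1)
            acc[i, j] += np.bincount(bins, minlength=n_rho)
    i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[i], phis[j], (k + 0.5) / n_rho * 2 * rho_max - rho_max

# Synthetic cloud: a horizontal step surface at z = 0.15 m plus noise.
rng = np.random.default_rng(3)
plane = np.column_stack([rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500),
                         np.full(500, 0.15)])
noise = rng.uniform(-1, 1, (200, 3))
theta, phi, rho = hough_plane(np.vstack([plane, noise]))
print(theta, phi, rho)   # expect phi near pi/2 and rho near 0.15
```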

5 Human Interaction Interface

In order to make a humanoid robot work in the human world under a human operator, the human interaction interface is important. Two interfaces are required: a) for people in front of the remotely controlled humanoid robot, and b) for the operator in front of the remote operation interface. Two functions are adopted: A) a voice function and B) a face recognition function.

5.1 Voice Interface

Since a humanoid robot has many degrees of freedom, it generates noise while working. Therefore, the voice recognition software should be robust against this noise. The adopted voice recognition software was developed by Dr. Hayamizu at ETL; its advantages are that it runs on the onbody processor (under Linux) and that the programmer can easily manage its dictionary. Using this advantage, task-based dictionaries that each contain only several words are prepared, which makes recognition robust against noise. The speech synthesis software is a commercial product (Fujitsu) that also runs on Linux. Fig. 5 shows a voice-command-based walking experiment.

Figure 5: H6 Voice Control
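The benefit of a small task-based dictionary can be sketched as follows: only a handful of words are legal in each task, so a noisy hypothesis is either snapped to the nearest entry or rejected. The word lists and the edit-distance matching below are illustrative assumptions, not the ETL recognizer's actual interface.

```python
# Minimal sketch of task-based small dictionaries for noise robustness.
import difflib

TASK_DICTIONARIES = {   # hypothetical per-task vocabularies
    "walk":  ["forward", "back", "left", "right", "stop"],
    "grasp": ["grasp", "release", "up", "down"],
}

def interpret(hypothesis, task, cutoff=0.6):
    """Map a (possibly noisy) recognized word onto the task's dictionary."""
    matches = difflib.get_close_matches(hypothesis, TASK_DICTIONARIES[task],
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None   # None = reject as noise

print(interpret("forwad", "walk"))   # -> "forward"
print(interpret("banana", "walk"))   # -> None (rejected)
```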

5.2 Human Finding and Face Recognition

In order to find and recognize a human being, human finding software and face recognition software were developed. The system finds a human by segmenting the depth image produced by the depthmap generation software. Fig. 6 shows a human segmentation result. The human region image is then sent to commercial neural-net-based face recognition software (MIRO), so the robot recognizes a human automatically. Fig. 7 shows a face recognition experiment.

Figure 6: H6 Finds a Human from the Depth Map while Walking. Left: Depth Map, Center: Distance Labeling, Right: Bounding Box

Figure 7: H6 Interacts with a Human Being
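A minimal sketch of the depth-based segmentation step described above: threshold the depth map to a near-range band, label connected regions, and keep person-sized blobs as candidate human regions. The range band, size gate, and synthetic depth map are illustrative assumptions.

```python
# Minimal sketch of human finding by depth segmentation + bounding box.
import numpy as np
from scipy.ndimage import label, find_objects

def find_human(depth, near=0.5, far=2.5, min_pixels=300):
    """Bounding boxes of large connected regions within a near depth band."""
    mask = (depth > near) & (depth < far)
    labeled, _ = label(mask)
    boxes = []
    for i, sl in enumerate(find_objects(labeled), start=1):
        if sl is not None and (labeled[sl] == i).sum() >= min_pixels:
            boxes.append(sl)          # (row_slice, col_slice)
    return boxes

depth = np.full((120, 160), 4.0)      # synthetic map: far background...
depth[30:100, 60:100] = 1.5           # ...with one near, person-sized blob
print(find_human(depth))              # -> [(slice(30, 100), slice(60, 100))]
```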

6 Conclusion

In this paper, a remote operation interface for a humanoid robot is discussed and, motivated by the development of low-level autonomy for humanoid robots, three key issues are described: 1) the control interface, 2) the environment interface, and 3) the interaction interface. For the control interface, the virtual puppet interface combined with AutoBalancer, together with the walk interface, are described. The operator can make the humanoid robot manipulate objects and walk in a desired direction. Every designated motion is filtered by the online dynamic stabilization functions, and their effectiveness is shown through experiments with our humanoid robot H6. For the environment interface, the internal robot state display and 3D vision functions are described. With this interface, the operator can simply designate the target object to grasp or the steps to climb.

Finally, for the interaction interface, the voice recognition/speech and human finding/face recognition functions are described. The operator can react to the human beings around the robot without spending much effort on communication, and can concentrate on the robot's task. We examined this remote operation interface using the humanoid robot H6, and confirmed its effectiveness for an object handling task and for walking in a complex environment. Using this interface, a PC console with a microphone and a joystick is enough to control the humanoid robot, even though the robot has many degrees of freedom and must maintain dynamic balance. Remote operation of humanoid robots has applications in hazardous environments; we also believe, however, that the remote operation task is a good driver for developing low-level autonomy in humanoid robotics.

Acknowledgments

This research has been supported by a Grant-in-Aid for the Research for the Future Program of the Japan Society for the Promotion of Science, "Research on Micro and Soft-Mechanics Integration for Bio-mimetic Machines" (JSPS-RFTF96P00801), and by several Grants-in-Aid for Scientific Research.

References

[1] K. Nishiwaki, T. Sugihara, S. Kagami, F. Kanehiro, M. Inaba, and H. Inoue. Design and development of research platform for perception-action integration in humanoid robot: H6. In Proc. of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'00) (to appear), 2000.
[2] S. Kagami, K. Nishiwaki, T. Kitagawa, T. Sugihara, M. Inaba, and H. Inoue. A fast generation method of a dynamically stable humanoid robot trajectory with enhanced ZMP constraint. In Proc. of IEEE International Conference on Humanoid Robotics (Humanoid2000), 2000.

[3] J. J. Kuffner, S. Kagami, M. Inaba, and H. Inoue. Dynamically-stable motion planning for humanoid robots. In Proc. of IEEE International Conference on Humanoid Robotics (Humanoid2000), 2000.

[4] H. Inoue, S. Tachi, K. Tanie, K. Yokoi, S. Hirai, H. Hirukawa, K. Hirai, S. Nakayama, K. Sawada, T. Nishiyama, O. Miki, T. Itoko, H. Inaba, and M. Sudo. HRP: Humanoid Robotics Project of MITI. In Proc. of IEEE International Conference on Humanoid Robotics (Humanoid2000), 2000.

[5] M. Maimone, L. Matthies, J. Osborn, E. Rollins, J. Teza, and S. Thayer. A Photo-Realistic 3D Mapping System for Extreme Nuclear Environments: Chornobyl. In Proc. of International Conference on Intelligent Robots and Systems (IROS'98), Vol. 3, pp. 1521-1527, 1998.

[6] S. Kagami, F. Kanehiro, Y. Tamiya, M. Inaba, and H. Inoue. AutoBalancer: An online dynamic balance compensation scheme for humanoid robots. In Proc. of Fourth Intl. Workshop on Algorithmic Foundations on Robotics (WAFR'00), pp. SA-79-SA-89, 2000.

[7] K. Konolige. Small Vision Systems: Hardware and Implementation. In Y. Shirai and S. Hirose, editors, Robotics Research: The Eighth International Symposium, pp. 203-212. Springer, 1997.

[8] T. Kanade, A. Yoshida, K. Oda, H. Kano, and M. Tanaka. A Stereo Machine for Video-rate Dense Depth Mapping and Its New Applications. In Proc. of the 1996 International Conference on Computer Vision and Pattern Recognition, pp. 196-202, Jun 1996.

[9] Point Grey Research Inc. Triclops Stereo Vision System. http://www.ptgrey.com.

[10] S. Kagami, K. Okada, M. Inaba, and H. Inoue. Design and implementation of onbody real-time depthmap generation system. In Proc. of International Conference on Robotics and Automation (ICRA'00), pp. 1441-1446, 2000.

[11] S. Kagami, K. Okada, M. Inaba, and H. Inoue. Plane Segment Finder. In 5th Robotics Symposia, pp. 381-386, 2000.
