First International Conference on Industrial and Information Systems, ICIIS 2006, 8 - 11 August 2006, Sri Lanka
Futuristic Humanoid Robots: An Overview

Parul Gupta1, Vineet Tirth2, R.K. Srivastava3

2Department of Mechanical Engineering, Moradabad Institute of Technology (Uttar Pradesh Technical University), Moradabad, Uttar Pradesh, India
3Department of Mechanical Engineering, Motilal Nehru National Institute of Technology (Deemed University), Allahabad, Uttar Pradesh, India

parul_gupta_197@yahoo.com, [email protected], [email protected]
Abstract - Like never before, technology can bring imagination to life. Humanoid robots are without question a hot topic in research today. But will they really be the next breakthrough invention that changes the face of the world? For decades, popular culture has been enthralled with the possibility of robots that act and look like humans. We are promised by film, fiction and television that humanoids will cook for us, clean for us, become our best friends, teach our children, and even fall in love with us. Recently, the media has covered a surprising number of new humanoid robots emerging on the commercial market. Like many new technologies, these early generations of commercially available humanoids are costly curiosities, useful for entertainment. Yet, in time, they will accomplish a wide variety of tasks in homes, battlefields, nuclear plants, government installations, factory floors, and even space stations. Humanoids may prove to be the ideal robot design to interact with people. Humanoid Robotics also offers a unique research tool for understanding the human brain and body. Already, humanoids have provided revolutionary new ways for studying cognitive science. Using humanoids, researchers can embody their theories and take them to task at a variety of levels. Aside from their traditional roles, humanoid robots can be used to explore theories of human intelligence. This paper reviews a wide variety of humanoid robots in use throughout the world, explains their typical applications, and discusses the challenges that developers of humanoid robots may encounter in such endeavours. It reviews successes and failures in the field where humanoid research began. Further, an extrapolation of recent developments is given, indicating where they may take us in the future. Finally, this paper discusses how these technological developments have affected, and will continue to affect, the ways in which researchers understand the field.

Keywords - Humanoid, robots.

I. INTRODUCTION

Humanoid robots found in research and commercial use today typically lack the ability to operate in unstructured and unknown environments. Force sensing and compliance at each robot joint can allow the robot to safely act in these environments. However, these features can be difficult to incorporate into robot designs. Today's robots are not able to manipulate objects with the skill of even a small child. For robots to gain general utility in areas such as space exploration, small-parts assembly, agriculture, and even in our homes, they must be able to intelligently manipulate unknown objects in unstructured environments. There are many interesting robot projects. Robo Monkey is a great example - a robot built to emulate the gibbon. This incredibly agile robot can swing from bar to bar (fixed distance) by using its body to give it the correct momentum. The robot learns from its mistakes and adapts accordingly. Perhaps the most interesting projects are those conducted at MIT's robotics laboratory (Fig. 3), where two major robot projects, Cog and Kismet, have been in progress for many years. A humanoid robot is an autonomous robot because it can adapt to changes in its environment or itself and continue to reach its goal. This is the main difference between humanoids and other kinds of robots, such as industrial robots, which perform tasks in highly structured environments. Like other mechanical robots, humanoids comprise the same basic components: sensing, actuation, and planning and control. Since they try to simulate the human structure and behaviour, and since they are autonomous systems, humanoid robots are usually more complex than other kinds of robots.

II. EARLY ENDEAVORS

With the rise of the computer, people immediately began to envision the potential for encoding human intelligence into textual programs, but soon discovered that static programs and rule-based logic cannot capture the true essence of human intelligence. Early attempts to create artificial intelligence produced information-processing machines that operated on high-level human concepts, but had difficulty relating those concepts to actions and perceptions in the external world. Once embodied in real robots, such programs were confounded by noisy and all-too-often inconsistent data streaming in and out from a host of real-world sensors and actuators.[13] Accepting what they believed to be one of the greatest engineering challenges of
1-4244-0322- 7/06/$20. 00 C2006 IEEE
all time, a few intrepid mechanical and electrical engineers began to build the world's first humanoid robots. In 1973, construction of a human-like robot was started at Waseda University in Tokyo under the direction of the late Ichiro Kato. He and his group developed WABOT-1 (Fig. 5), the first full-scale anthropomorphic robot in the world. It consisted of a limb-control system, a vision system and a conversation system. WABOT-1 was able to communicate with a person in Japanese and to measure distances and directions to objects using external receptors, artificial ears and eyes, and an artificial mouth. WABOT-1 walked on its lower limbs and was able to grip and transport objects with touch-sensitive hands. At the time, it was estimated that WABOT-1 had the mental faculty of a one-and-a-half-year-old child. In 1985, Kato and his research group at Waseda University built WASUBOT (Fig. 6), a humanoid musician (WAseda SUmitomo roBOT), developed with Sumitomo Electric Industry Ltd. WASUBOT could read a musical score and play a repertoire of 16 tunes on a keyboard instrument. Since these early successes, the Japanese electronics and automotive industries have played a key role in the emergence of humanoids by developing robots capable of walking over uneven terrain, kicking a soccer ball, climbing stairs and performing dexterous tasks such as using a screwdriver and juggling.[12] At the present time, we have full-scale humanoid robots that roughly emulate the physical dynamics and mechanical dexterity of the human body.
III. WHAT IS A HUMANOID ROBOT?

Humanoid Robotics [12] includes a rich diversity of projects where perception, processing and action are embodied in a recognizably anthropomorphic form in order to emulate some subset of the physical, cognitive and social dimensions of the human body and experience. Humanoid Robotics is not an attempt to recreate humans. The goal is not, nor should it ever be, to make machines that can be mistaken for or used interchangeably with real human beings. Rather, the goal is to create a new kind of tool, fundamentally different from any we have yet seen, because it is designed to work with humans as well as for them. Humanoids will interact socially with people in typical, everyday environments. We already have robots to do tedious, repetitive labor for specialized environments and tasks. Instead, humanoids will be designed to act safely alongside humans, extending our capabilities in a wide variety of tasks and environments.

At present, Humanoid Robotics (Fig. 3) is not a well-defined field, but rather an underlying impulse driving collaborative efforts that crosscut many disciplines. Mechanical, electrical and computer engineers, roboticists, computer scientists, artificial intelligence researchers, psychologists, physicists, biologists, cognitive scientists, neurobiologists, philosophers, linguists and artists all contribute and lay claim to the diverse humanoid projects around the world. For example, some researchers are most interested in using the human form as a platform for machine learning and online adaptation, while others claim that machine intelligence is not necessary. How can we characterize such a broad range of efforts? Defining a humanoid robot is a lot like defining what it means to be human. Most likely, you will know one when you see it, yet have trouble putting the characteristics on paper. The physical constitution of the body is clearly crucial. Not surprisingly, some have chosen to define a humanoid robot as any robot with two arms, two legs and a human-like head. Unfortunately, such a definition says nothing about the ability of the robot to receive information, process it and respond. Eventually, a fully-fledged humanoid robot will incorporate work from each of the areas below.

A. Perception

This area includes computer vision as well as a great variety of other sensing modalities including taste, smell, sonar, IR, haptic feedback, tactile sensors, and range-of-motion sensors. It also includes implementation of unconscious physiological mechanisms such as the vestibulo-ocular reflex, which allows humans to track visual areas of interest while moving. Lastly, this area includes the attentional, sensor-fusion and perceptual-categorization mechanisms which roboticists implement to filter stimulation and coordinate sensing.

B. Human-robot interaction

This area includes the study of human factors related to the tasking and control of humanoid robots. How will we communicate efficiently, accurately, and conveniently with humanoids? Another concern is that many humanoids are, at least for now, large and heavy. How can we ensure the safety of humans who interact with them? Much work in this area is focused on coding or training mechanisms that allow robots to pick up visual cues such as gestures and facial expressions that guide interaction. Lastly, this area considers the ways in which humanoids can be profitably and safely integrated into everyday life.

C. Learning and adaptive behavior

For robots to be useful in everyday environments, they must be able to adapt existing capabilities to cope with environmental changes. Eventually, humanoids will learn new tasks on the fly by sequencing existing behaviors. A spectrum of machine learning techniques will be used, including supervised methods, where a human trainer interacts with the humanoid, and unsupervised learning, where a built-in critic is used to direct autonomous learning. Learning will not only allow robust, domain-general behavior, but will also facilitate tasking by hiding the complexity of task decomposition from the user. Humanoids should be told what to do rather than how to do it.
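The contrast between trainer-supplied targets and a built-in critic can be sketched in a few lines of Python. Everything here is illustrative: the linear "behavior", the target mapping y = 2x + 1 and the critic itself are invented for the example, not taken from any of the systems above.

```python
import random

random.seed(0)

def behavior(params, x):
    # A toy "behavior": a linear response to a single sensor reading.
    return params[0] * x + params[1]

# Supervised learning: a human trainer supplies the desired response,
# and the error between output and target drives the weight update.
def supervised_step(params, x, target, lr=0.05):
    error = behavior(params, x) - target
    return [params[0] - lr * error * x, params[1] - lr * error]

# Unsupervised (critic-driven) learning: no targets, only a built-in
# critic that scores how well the current behavior performs.
def critic(params):
    # Hypothetical critic: rewards behaviors close to the mapping y = 2x + 1.
    return -sum((behavior(params, x) - (2 * x + 1)) ** 2 for x in range(5))

def critic_step(params, noise=0.05):
    candidate = [p + random.uniform(-noise, noise) for p in params]
    # Keep a random perturbation only if the critic scores it higher.
    return candidate if critic(candidate) > critic(params) else params

trained = [0.0, 0.0]
for _ in range(2000):
    x = random.uniform(0.0, 4.0)
    trained = supervised_step(trained, x, 2 * x + 1)

adapted = [0.0, 0.0]
for _ in range(2000):
    adapted = critic_step(adapted)
```

Both routes improve the behavior, but only the supervised one needs a trainer in the loop; the critic-driven version learns autonomously, at the cost of slower, trial-and-error progress.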
D. Legged locomotion

For humanoids to exploit the way in which we have structured our environment, they will need to have legs. They must be able to walk up stairs and steep inclines and over rough, uneven terrain. The problem is that walking is not simply a forwards-backwards mechanical movement of the legs, but a full-body balancing act that must occur faster than real time. The best approaches look closely at the dynamics of the human body for insight.

E. Arm control and dexterous manipulation

Around the world, researchers are working on dexterous tasks including catching balls, juggling, chopping vegetables, performing telesurgery, and pouring coffee. From a mechanical point of view, robot arms have come a long way, even in the last year or so. Once large and heavy with noisy, awkward hydraulics, some humanoids now have sleek, compliant limbs with high strength-to-weight ratios. While mechanical innovation will and should continue, the really hard problem is how to move from brittle, hard-coded dexterity toward adaptive control where graceful degradation can be realized. The humanoid body functions as a whole and, consequently, small errors in even one joint can drastically degrade the performance of the whole body.

IV. HUMANOID ROBOTICS: PAST PROBLEMS WITH "THINKING ROBOTS"

In their zeal to make robots "think like humans," early humanoid researchers focused on high-level cognition and provided no mechanism for building control from the bottom up. Although intended to model humans, most of the systems did not, like humans, acquire their knowledge through interaction with the real world. When situated in the real world, these robots possessed little mastery over it. Even in the fortunate event that sensors could accurately connect internal 'archetypes' to real-world objects, robots could only extend the knowledge thrust upon them in rudimentary, systematic ways. Such robots carried out preconceived actions with no ability to react to unforeseen features of the environment or task. Realizing the limitations of hard-coded, externally derived solutions, many within the AI community decided to look to fields such as neuroscience, cognitive psychology, and biology for new insight. Before long, the multidisciplinary field of cognitive science drove home the notion that the planning and high-level cognition humans are consciously aware of represent only the tip of a vast neurological iceberg.[4] The mainstay of human action, it was argued, derives from motor skills and implicit behavior encodings that lie beneath the level of conscious awareness. Borrowing on this understanding, Agre and Chapman argued that robots should likewise spend less time deliberating and more time responding to a world in constant flux.[5] A new, behavior-based view of intelligence emerged which transferred the emphasis from intelligent processing to robust real-world action.

Neurobiology provided compelling evidence for a behavior-based approach with studies on the behavioral architecture of simple animals. In one experiment, scientists severed the connection between a frog's spine and brain, effectively removing the possibility of centralized, high-level control. They then stimulated particular points along the spinal cord and found that much of the frog's behavior was encoded directly into the spine.[6] For instance, stimulating one location prompted the frog to wipe its head, whereas another location encoded jumping behavior. It was this implicit, reactive control layer that classical AI methods had ignored.

V. HUMANOID ROBOTICS: BUILDING INTELLIGENCE FROM THE BOTTOM-UP

Today, the question for Humanoid Robotics is how best to impart these primitive behaviors to robots. Many researchers find it ineffective to directly hard-code such low-level behavior with imperative languages like C or C++ and instead use a more biologically motivated technique such as artificial neural networks. Artificial neural networks allow a 'supervised' learning approach where a designer trains a system's response to stimulation by adjusting weights between nodes of a network. The rise of artificial neural networks (ANNs) brought much optimism. Researchers believed that they could use ANNs to simulate the distributed, parallel nature of computation in the brain, allowing skills and knowledge to be conditioned as implicit generalizations of repeated experience. As it turns out, ANNs fail to capture the recursive power of the human brain. Unlike an ANN, where the structure of the network is usually fixed, the brain's highly integrated, well-ordered structure emerges through competition between separately evolving collectives of neurons. Critics argue that ANNs' lack of such an architecture prohibits meta-level learning - the ability to not only generalize, but also extend acquired knowledge beyond the frontiers of experience. Although
ANNs do not accurately model cognitive capacities of the human cortex, they do offer a truly unique and effective way to encode motor-skills and low-level behavior. It may be that, like the cerebellum and other, older structures of the brain, ANNs can provide a foundation on which high-level learning can be built. In any case, they have provided powerful insight into understanding both machine and biological learning. [10]
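The supervised weight-adjustment idea can be made concrete with the smallest possible network: a single sigmoid "neuron" trained by the delta rule. The task (the logical OR function), learning rate and iteration count are arbitrary choices for this sketch, not drawn from any system described in this paper.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: the logical OR function (linearly separable).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# One neuron: two input weights and a bias, adjusted by gradient descent.
w = [random.uniform(-0.5, 0.5) for _ in range(2)]
b = 0.0
lr = 0.5

for _ in range(2000):
    for x, target in data:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Delta rule: nudge each weight against the prediction error,
        # scaled by the sigmoid's slope y * (1 - y).
        grad = (y - target) * y * (1 - y)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b -= lr * grad

predictions = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
```

After training, the rounded outputs reproduce OR exactly. The point of the sketch is the one in the text: the skill is never written down as rules; it is conditioned into the weights as a generalization of repeated experience.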
VI. HUMANOID ROBOTICS: ARM CONTROL AND DEXTEROUS MANIPULATION

The University of Tokyo Department of Mechano-Informatics has developed a humanoid robot, Saika (Fig. 4), with a two-DOF neck, dual five-DOF upper arms, a torso and a head. Saika is able to dribble a bouncing ball and catch a thrown ball. It can grope for and grasp unknown objects. Whereas many humanoids are heavy and require large, unwieldy off-board apparatus for actuation, Saika is designed to be lightweight and has almost all the motors built into the arms and torso. The head, torso and arms together weigh less than 17 pounds.[3] The majority of industrial robots are position-controlled devices that move exactly as they are told. Some of these arms can easily lift over 2,000 lbs. For machines intended to interact with people, those that move with such force are a definite danger. For humanoids, it is crucial to develop arms that monitor the interacting forces between the robot and the many parts of the environment with which the arm may come in contact. Ideally, humanoid arms will be lightweight manipulators that can provide strength while exhibiting compliant motion. Compliance includes the ability to "give" when the arm encounters impedance. For example, if a robot is reaching to pick up an object and a human gets in the way, the force exerted by the arm should give accordingly. Seeking a compliant yet strong robot arm, Waseda developed a seven-degree-of-freedom (DOF) anthropomorphic manipulator consisting of a shoulder, elbow and wrist. Instead of using an active (motor-driven) approach to compliance, where performance is limited by the response of servo motors, Waseda uses a passive compliance control method in which linear spring and brake systems are used to dynamically adjust the "give" in each arm. The result is a force-controlled robot that can safely cooperate with humans while carrying out advanced dexterous manipulation tasks.[11]

NASA (Fig. 2) has engineered a dexterous humanoid robot that will deploy, maintain and operate a wide variety of shuttle and space-station components. Currently, astronauts must perform a variety of extremely dangerous and costly dexterous tasks. The objective of the Robonaut project is to develop a space robot with dexterous capability exceeding that of a suited astronaut. This will reduce response time, high costs and some of the dangers associated with sending an astronaut into space. To accomplish these aims, Robonaut must be able to efficiently assist astronauts by sharing their space and tools. Robonaut will need to respond to natural communication and learn through what NASA JSC calls virtual-reality coaching, where the human effectively takes control of the robot and guides it through certain movements and behaviors. To accomplish much of this work, JSC plans to use advances in learning to coordinate vision, tactile and proprioceptive sensing and action. Currently, Robonaut can be teleoperated by a human using a VR interface. Motion-sensitive apparel helps map human actions to the body of the robot. To facilitate the interface, stereoscopic cameras provide the user with a panoramic view to give the feeling of being in the robot's body. The VR interface is so natural that even first-time users can learn to control Robonaut in minutes. Even greater skill will be enabled once haptic forces can be applied to the hand of the human operator. This work has proved to be an incredible integration effort. NASA reports that approximately 50 percent of the work has been spent integrating various components into a single body.

VII. ROBOT-HUMAN INTERACTION

For robots to be profitably integrated into the everyday lives of humans within military, commercial, educational or domestic contexts, robots must be able to interact with humans in more meaningful, natural ways. As artificial agents inundate our lives, it will be increasingly important to enable multi-modal, intuitive modes of communication that include speech, gesture, movement, affect, tactile stimulation and context. Body dictates behavior, and if we want a robot to relate with and learn from humans, it must be able to map its body to our own.[9] An ambitious project at MIT is based on the premise that humanlike intelligence requires humanoid interactions with the world. These researchers are developing a robot they call Cog (Fig. 7) as a set of sensors and actuators that tries to approximate the sensory and motor dynamics of a human body. Cog is equipped with a sophisticated visual system capable of saccades, smooth pursuit, vergence, and coordinating head and eyes through modeling of the human vestibulo-ocular reflex. Cog responds not only to visual stimulation, but also to sounds and to the ways people move Cog's body parts. By exploiting its ability to interact with humans, Cog can learn a diverse array of behaviors, including everything from playing with a slinky to using a hammer. For imitative learning techniques to succeed, the robot must have some way of knowing which aspects of the environment it should attend to and precisely which actions it should try to reproduce. For instance, the robot should not imitate a cough or an itch when being shown how to turn a crank. To guide robots through the process of imitative
learning, we must give them the ability to recognize and respond to natural cues we give unconsciously via body language. Another MIT project, called Kismet, is training a robot head with eyebrows, eyelids, ears and mouth, etc., to discern social cues such as nodding or eye contact which are crucial in correctly guiding interaction.
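Deciding which stimulus deserves attention can be caricatured as weighted cue scoring, in the spirit of the attention systems used in this line of work [11]. The cue names and weights below are invented for illustration; the idea is simply to boost socially relevant cues such as eye contact and pointing while damping incidental ones such as a cough.

```python
def attention_score(cues, weights):
    # Sum each perceived cue's strength, scaled by its (hand-set) weight.
    return sum(weights.get(cue, 0.0) * value for cue, value in cues.items())

def most_salient(stimuli, weights):
    return max(stimuli, key=lambda s: attention_score(s["cues"], weights))

# Hypothetical weights; a real system would learn or modulate these.
weights = {"eye_contact": 2.0, "motion": 1.0, "pointing": 1.5, "noise": 0.2}

stimuli = [
    {"name": "crank", "cues": {"motion": 0.3, "pointing": 0.9}},
    {"name": "cough", "cues": {"noise": 1.0}},
    {"name": "face",  "cues": {"eye_contact": 1.0, "motion": 0.2}},
]

winner = most_salient(stimuli, weights)
```

With these weights the teacher's face wins attention and the cough scores lowest, echoing the point that a cough should not be treated as part of the demonstration.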
VIII. SERVICE & ENTERTAINMENT ROBOTS

Human-robot interaction plays a crucial role in the burgeoning market for intelligent personal, service and entertainment robots. Applications include everything from robots that assist the elderly and severely disabled to entertainment robots at amusement parks. Increasingly, robots that can serve as mobile, autonomous tour guides and information kiosks will grace public places. One encouraging example is Minerva, a popular tour guide at the Smithsonian National Museum of American History, which uses a rich repertoire of interactive capabilities to attract people and guide them through the museum. Minerva's facial features and humanoid form have had a profound effect on the way in which people respond to it.

HARIS (Fig. 10), a robotic arm and human interface, is designed to help disabled people move and fetch objects. With the human hand as a model, researchers created a robotic manipulation system capable of tasks such as picking up a coffee cup, grasping an egg, dialing a telephone, and holding a coffee bottle. HARIS comprises three separate arm segments and a hand. The arm has three joints and 8 DOF. The hand has five fingers, 178 tactile sensors and 17 DOF. The mechanical arm itself, however, is only the first problem that must be solved before service robots can be truly useful. The robotic system must also include a scene-understanding system, a 3D-vision system, a real-time motion-scheduling system, an arm-control system and a knowledge architecture that allows it to capture and use information about its environment. To be truly useful as a service robot, HARIS must understand simple relationships between the elements of its environment. It must know, for example, that tea and coffee go into cups but not into plates. It must know that cups go on saucers and that they are easier to move when empty than when full. It is difficult for us to conceive just how much knowledge we draw upon even when we do simple tasks. In fact, the hard problem for service robots is not the vision system, mechatronics or the natural language processing component, but rather the need for knowledge engineering. HARIS uses a semantic network to store names, roles, attributes and relationships for each object in the environment.[13]

WENDY (Fig. 8) is a human symbiotic robot that consists of two anthropomorphic arms, a head and a torso. It has wheels instead of legs. WENDY is designed to work with humans, often in the same working space, carrying out physical, informational and psychological interaction with humans. For human symbiotic robots, safety is a key issue. To ensure impact safety, the joints are equipped with force sensors that detect collision. Also, reliable shock absorption is accomplished by covering the arms with special material. Robust, dexterous handling is accomplished using a mechanism for pressure adjustment based on human fingertips. After modeling the way human fingers work to pick up very difficult objects, the researchers built realistic fingertips that can apply pressure much like a human. The fingers even include fingernails for picking up small, flat objects. The robot hand can accomplish a number of real tasks such as chopping vegetables and grasping very small coins.

Hadaly-2 (Fig. 9) is a new humanoid robot designed by Waseda University for the purpose of interactive communication with humans. Hadaly-2 has an environmental recognition system that uses vision and voice recognition to remain aware of the presence and actions of people around it. Like WENDY, it uses a compliant motion system and can achieve mobility using electric wheels. Hadaly-2 uses these capabilities to communicate with humans not only informationally, but also physically. The robot is about 6 feet tall and weighs over 600 pounds.[13]

Humanoid robots are also surfacing in the entertainment industry. "Ursula" the Female Android is a remote-controlled, full-size robot that walks, talks, dances, plays music and more. "Ursula" makes for incredible entertainment and is an effective communicator of special messages that can captivate any crowd. Each electromechanical android has a distinct look and unique personality that can be customized for any event. Special features of "Ursula" include fiber-optic hair, remote-controlled water guns and onboard video cameras. Sarcos, a Utah-based company with considerable experience in entertainment engineering, has developed some of the world's most sophisticated humanoid robots and virtual reality interfaces. Sarcos entertainment robots are constructed not only to be high performance, but also to be sensitive and graceful. Sarcos has placed a great deal of emphasis on the aesthetics of its humanoids as well as the engineering. Concept development and graphic renderings are supported by a complete sculpting facility, where high-performance skins and other coverings are produced. A number of smaller, commercially available humanoid robots have recently emerged on the scene from some of the biggest names in the Japanese computer and electronics industry, including Honda, Sony and Fujitsu. Competition amongst these players is fierce, as the companies feel that humanoids represent perhaps the greatest untapped commercial market of the future.
IX. ANTHROPOPATHIC ROBOTS

Whereas anthropomorphic robots have bodies that look and physically act like the human body, anthropopathic robots are able to emote. The robots discussed in this section not only perceive and respond to human emotion, but are themselves possessed of an intrinsic emotional system that permeates their control architecture. For these humanoids, emotional state is not merely an outward expression, but can be used to influence the actions and behavior of the robot. The robot Kismet is capable of using emotional modeling to guide interaction with humans. Researchers at MIT have worked with many children and adults from different cultures to study how effectively Kismet can engage them through social interactions. Kismet responds not only to speech, but also to a variety of multi-modal body language including body posture, the distance of the human from the robot, the movements of the human, and the tone, volume and prosody of their speech. One of the underlying premises of the Kismet project is that emotion is necessary to guide productive learning and communication in general.[13]

X. HUMANOID ROBOTICS: WHAT DOES THE FUTURE HOLD?

The future will bring humanoids designed to take part in the drama of chaos, inconsistency and error we know fondly as the real world. Such humanoids will not be hindered by complexity and complication, but will embrace it and thrive on it. Many of the traits we consider uniquely human stem not from our strength, reliability or the precision with which we execute tasks. In fact, we do quite poorly in these arenas. This is not coincidence, but a vestige of our adaptability and ingenuity. Optimality brings stasis and hinders versatility. It is a concept that has little to do with the flux of change in our real world. Unlike classroom computer science, the algorithms of human intelligence are neither provable nor constant. When we move from algorithms and virtual worlds into the real-world arena of noisy sensors, humidity and slippery floors, the notion of one perfect solution will give way to the more compelling possibilities of flexibility and adaptation. Perhaps the most "human" thing of all is our amazing ability to be error-prone and inconsistent and yet cope. This ability derives, at least in part, from our ability to recognize imperfection and even exploit it, using the arbitrary fluctuations in ourselves and our environment to drive learning, creativity, humor and inventiveness. Humanoids will not encroach on these human attributes, but rather bolster them, allowing us to move further toward the heart of what it means to be human. Humans have always been eager to project emotion into machines. Our imaginations seem to have little trouble rising to the task. Recently, Hasbro has enlisted the help of top roboticists at iRobot to develop robot infants that can be sold as toys.[12]

XI. CONCLUSION

The futuristic humanoid robot will be able to do housework and provide assistance to the elderly. It will also be capable of displaying natural facial expressions similar to those of humans. Although scientific research usually takes credit as the inspiration for science fiction, it is possible that with AI and robotics, fiction led the way. However, over the past few years, humanoid robotics has become the focus of many research groups, conferences, and special issues. While outpacing the imagination of science-fiction writers might be difficult, our work does indicate one possible future. Robots will be able to interact with humans in humanlike ways, and people will find this normal and natural. At the same time, we will continue to learn more about the nature of our own intelligence by building these systems.

REFERENCES
[1] D. Michie, "Machine Learning in the Next Five Years", Proc. of the Third European Working Session on Learning, Glasgow, 1988, pp. 107-122.
[2] H.L. Dreyfus, What Computers Can't Do: The Limits of Artificial Intelligence, Harper Colophon Books, New York, 1979.
[3] G.B. Kleindorfer and J.E. Martin, "The Iron Cage, Single Vision, and Newton's Sleep", Research in Philosophy and Technology, Vol. 3, 1983, pp. 127-142.
[4] D.L. Schacter, C.Y.P. Chiu and K.N. Ochsner, "Implicit Memory: A Selective Review", Annual Review of Neuroscience, Vol. 16, 1993, pp. 159-182.
[5] P.E. Agre and D. Chapman, "What are Plans For?", Robotics and Autonomous Systems, Vol. 6, 1990, pp. 17-34.
[6] S.F. Giszter, F.A. Mussa-Ivaldi and E. Bizzi, "Convergent force fields organized in the frog's spinal cord", J. Neuroscience, Vol. 13, 1993, pp. 467-491.
[7] R.A. Brooks, "Intelligence Without Reason", A.I. Memo No. 1293, MIT AI Laboratory, April 1991.
[8] T. Inamura, M. Inaba and H. Inoue, "Acquisition of Probabilistic Behavior Decision Model Based on the Interactive Teaching Method", Proc. of the 9th International Conference on Advanced Robotics, Tokyo, 1999, pp. 523-528.
[9] M. Williamson, Robot Arm Control Exploiting Natural Dynamics, doctoral thesis, Massachusetts Institute of Technology, Dept. of Electrical Eng. and Computer Science, Cambridge, Mass., 1999.
[10] J. Wolfe, "Guided Search 2.0: A Revised Model of Visual Search", Psychonomic Bull. and Rev., Vol. 1, No. 2, June 1994, pp. 202-238.
[11] C. Breazeal and B. Scassellati, "A Context-Dependent Attention System for a Social Robot", Proc. 16th Int'l Joint Conf. Artificial Intelligence (IJCAI 99), Morgan Kaufmann, San Francisco, 1999, pp. 1146-1153.
[12] D. Bruemmer and M. Swinson, "Humanoid Robots: A New Universal Tool", Encyclopedia of Physical Science and Technology, Third Edition, Academic Press, 2001.
[13] http://www.inl.gov/index.shtml, dated 25-3-06.
[14] M. Swinson and D. Bruemmer, "The Expanding Frontiers of Humanoid Robotics", IEEE Intelligent Systems, July 2000.
Some figures of humanoid robots (Courtesy of David Bruemmer)
Fig. 1 Sony has developed small but remarkable robots that can dance and sing for entertainment purposes
Fig. 2 Robonaut: A robot developed at NASA Johnson Space Center to inhabit the space station.
Fig. 3 An early version of Cog, developed under Rodney Brooks at the MIT AI Laboratory.
Fig. 4 Saika: a lightweight, full-sized humanoid robot developed at the University of Tokyo.
Fig. 5 WABOT-1 Humanoid Project at Waseda University.

Fig. 6 WASUBOT (WABOT-2), an anthropomorphic robot musician.

Fig. 7 Cog, a humanoid developed at the MIT AI Laboratory, learns to recognize and respond to animate agents.

Fig. 8 WENDY, S. Sugano Laboratory, Waseda University.

Fig. 9 Hadaly-2 Humanoid Project, Waseda University.

Fig. 10 Dexterous hand of HARIS, a robotic arm and human interface. National Center for Science Information Systems, Tokyo, Japan.