Intelligenza Artificiale 7 (2013) 139–152 DOI 10.3233/IA-130055 IOS Press

Development of intelligent service robots

Luca Iocchia,∗, Emanuele Menegattib, Andrea Bonarinic, Matteo Matteuccic, Enrico Pagellob, Luigia Carlucci Aielloa, Daniele Nardia, Fulvio Mastrogiovannid, Antonio Sgorbissad, Renato Zaccariad, Rosario Sorbelloe, Antonio Chellae, Marcello Giardinae, Primo Zingarettif, Emanuele Frontonif, Adriano Mancinif, Grazia Cicirellig, Alessandro Farinellih and Domenico G. Sorrentii

a Sapienza University of Rome, Italy
b University of Padova, Italy
c Politecnico di Milano, Milan, Italy
d University of Genova, Italy
e Università degli Studi di Palermo, Italy
f Università Politecnica delle Marche, Italy
g Institute of Intelligent Systems for Automation (ISSIA), National Research Council of Italy (CNR), Italy
h University of Verona, Italy
i Università degli Studi di Milano - Bicocca, Italy

Abstract. The creation of intelligent robots has been a major goal of Artificial Intelligence since the early days and has provided many motivations to Artificial Intelligence researchers. A large body of research has therefore been carried out in this field, and many relevant results have shown that the integration of Artificial Intelligence and Robotics techniques is a viable approach towards this goal. This article summarizes the efforts and the achievements of several Italian research groups in the development of intelligent robotic systems characterized by a suitable integration of Artificial Intelligence and Robotics techniques. The contributions collected in this article show the long history of this research stream, the impact of the developed approaches in the scientific community, and the efforts towards actual deployment of the developed systems.

1. Introduction

Developing intelligent robots has been a major goal of Artificial Intelligence since the very beginning, and a large body of research in AI, such as knowledge representation and reasoning, planning, perception, and learning, takes motivation from this goal. In the last decade, the development and deployment of real robotic systems executing complex tasks in the real world has shown the feasibility and the advantages of integrating AI techniques with robotic technology and other disciplines, in order to build complex systems able to exhibit intelligent behavior in noisy and uncertain real-world scenarios. This article summarizes the major achievements of several Italian research groups in developing methods and techniques for intelligent robotic systems. The contributions collected in this article show some important features: on the one hand, the long history of this research stream and the extensive collaborations among the research groups; on the other hand, the impact of the developed approaches in the scientific community, and the efforts towards actual deployment of intelligent autonomous systems in the real world.

∗ Corresponding author: Luca Iocchi, Sapienza University of Rome, Italy. E-mail: [email protected].

1724-8035/13/$27.50 © 2013 – IOS Press and the authors. All rights reserved

2. Modular robot HW/SW development1

Since 1973, when the Artificial Intelligence and Robotics Laboratory (AIRLab) was established by Marco Somalvico and colleagues at Politecnico di Milano, its research activities have been devoted to the development of robots that effectively merge Artificial Intelligence with a physical body. Super Sigma, the first Italian robot in a university lab, was developed at the AIRLab in 1973 as an improvement of an Italian industrial robot, both from the hardware point of view, including re-designed modular electronics to control the motors, and on the "intelligence" side, which included image interpretation and planning. A few years later, the Mo.Ro. (Mobile Robot), one of the first fully autonomous mobile robots, was developed, exploiting on board the "full power" of a Digital Microvax interfaced to a large number of sensors; since 1984, it was able to communicate and interact with other robots (through a wireless serial connection at 9600 baud) in what was later called an "agency", where code and inferential rules were already exchanged among robots, implementing on autonomous mobile entities the concept of mobile code. Since these very first realizations, HW and SW modularity has always been at the center of the AIRLab research activity. For instance, both Fuzzy Cat and Spot (1991) included HW modules that made them able to navigate with poor sensors, and to interact in one of the first multi-robot autonomous vacuum-cleaner applications. The controller of Fuzzy Cat was learnt by a Learning Fuzzy Classifier System, and the data interpretation was implemented by an ARTMAP Neural Network. In 1998, AIRLab started to participate in the RoboCup robot competitions, first as part of the Italian national team Azzurra, then with a complete team: the Milan RoboCup Team (MRT). The robots in the team were developed by following a modular approach for both HW and SW.
Modules for different aspects were developed, including high-level functionalities such as cognitive reasoning for object tracking, and self-localization. MrBrian, a complete, modular, behavior-based control architecture, was developed to combine different behavior modules using a hierarchical approach based on fuzzy logic. All these software modules were "glued" into the so-called Modular Robotic Toolkit (MRT) [9] by using a programming framework based on a publish/subscribe middleware; the same approach nowadays seems to be the de-facto standard for developing robotics applications through ROS. The architecture was so flexible and effective that it could be used in many different applications, including a robotic wheelchair (RWC [7]), a robot for exhibitions (E-2?), several robogames [39], and a balancing robot (TiltOne), where machine learning was used to develop the appropriate controller and, again, MRT was the development framework. Figure 1 shows a few pictures of the AIRLab robots from past years; from top to bottom and left to right we have Mo.Ro., the first robot of the AIRLab, Ribaldo and Robaldo, two of the players of the MRT RoboCup team, TiltOne, the balancing robot, Roby WheelChair (RWC), an autonomous wheelchair with multimodal interfaces, E-2?, an entertaining robot, and RoboWII, the main character of one of the AIRLab robogames. In recent years, the concept of modularization has been pushed back towards HW, with the implementation of R2P (Robot Rapid Prototyping [8]), an HW/firmware architecture where HW modules that implement basic functionalities for a robot can be linked on a CAN bus and interact with each other exploiting a distributed real-time publish/subscribe system. The reason for pushing the approach back to hardware has mainly been the interest in exploiting low-cost, modern, powerful microprocessors, delegating to them low-level, real-time tasks, and, on the other hand, fostering the reuse not only of SW modules, but also of HW modules across different projects.

1 Artificial Intelligence and Robotics Lab, Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano.
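The fuzzy behavior arbitration at the heart of controllers such as the one of Fuzzy Cat, and of hierarchical architectures like MrBrian, can be conveyed with a minimal sketch; all names, membership functions, and numbers below are hypothetical illustrations, not taken from the actual MRT code:

```python
# Minimal sketch of fuzzy behavior arbitration (hypothetical names and
# values, not the actual MrBrian/MRT implementation).

def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class Behavior:
    def __init__(self, name, activation, command):
        self.name = name
        self.activation = activation  # fuzzy predicate: state -> [0, 1]
        self.command = command        # crisp proposal: state -> turn rate

def arbitrate(behaviors, state):
    """Weighted (fuzzy) blending of the commands proposed by active behaviors."""
    proposals = [(b.activation(state), b.command(state)) for b in behaviors]
    total = sum(w for w, _ in proposals)
    if total == 0.0:
        return 0.0  # no behavior active: keep going straight
    return sum(w * cmd for w, cmd in proposals) / total

# Two toy behaviors: avoid a nearby obstacle, and steer towards a goal.
avoid = Behavior("avoid_obstacle",
                 lambda s: tri(s["obstacle_dist"], 0.0, 0.3, 1.0),
                 lambda s: -1.0)            # steer right, away from obstacle
seek = Behavior("seek_goal",
                lambda s: 1.0,              # always weakly active
                lambda s: s["goal_bearing"])

state = {"obstacle_dist": 0.3, "goal_bearing": 0.5}
turn = arbitrate([avoid, seek], state)  # blended turn-rate command
```

A hierarchical scheme such as MrBrian's stacks arbitration layers, letting higher-level fuzzy predicates gate which lower-level behaviors are allowed to propose commands.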
Fig. 1. The robots of the AIRLab (from top to bottom, left to right): Mo.Ro., Ribaldo and Robaldo, TiltOne, RWC, E-2?, and RoboWII.

Nowadays the development of robot platforms at AIRLab follows a well-settled approach, where the physical part is composed of R2P modules implementing basic capabilities (e.g., motor control, kinematics, odometry), while high-level functionalities are provided by SW modules in the ROS environment (to which MRT modules such as MrBrian have also been ported) running on standard PCs. To push this approach even further, a firmware implementing the message-passing protocol needed to exchange messages with ROS SW modules has been developed (µROS2), so that the HW modules implemented with R2P can interact directly with SW ROS modules on a standard PC.

2 https://github.com/openrobots-dev/uROSnode
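The publish/subscribe pattern on which MRT, R2P, and ROS-style middleware all rely can be sketched with a toy in-process broker (an illustrative sketch, not the ROS or µROS API):

```python
# Toy publish/subscribe broker in the spirit of the middleware described
# above; an illustrative sketch, not the ROS or uROS API.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Decouples producers from consumers: the publisher does not know
        # which modules (HW or SW) will react to the message.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
log = []
broker.subscribe("odometry", lambda msg: log.append(msg))
broker.publish("odometry", {"x": 1.0, "y": 2.0})
# log now holds the odometry message delivered to the subscriber
```

The key property exploited by both MRT and R2P is this decoupling: a motor-control HW module and a planning SW module only need to agree on topic names and message layouts, not on each other's existence.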

3. Integrating perception and actions for Intelligent Autonomous Systems3

The application of AI methodologies to Robotics in the Padua scientific community goes back to July 1972, when E. Pagello at the Industrial Electronics Lab of CNR contacted Prof. M. Somalvico of Politecnico di Milano and Prof. B. Meltzer of Edinburgh University. Thanks also to a fruitful scientific interaction with Bob Kowalski, in 1974 it was possible to address the robot plan formation problem using a first-order predicate calculus [19], and to implement and test in Italy the first two versions of the PROLOG language in cooperation with the MP-AI Project Group of Politecnico di Milano. Subsequently, a new intermediate language for robot programming, working in a distributed environment and enriched by a solid modeller for manipulating objects with a robot, was developed in Padova and Milano, with M. and G. Gini [27, 52]. Task planning also required smart collision-avoidance algorithms, like the original algorithm described in [48], designed thanks to the cooperation with C. Mirolo of Udine University. From 1980 until about 1998, a research group on AI and Robotics was active at CNR to develop robot languages and algorithms for motion and task planning. In 1994, Pagello started a collaboration with Prof. T. Arai at the University of Tokyo on their multi-robot systems. Later on, in 1997, for the first time a team of Italian students from the University of Padova participated in the Soccer Robot Games organized by the International RoboCup Federation, competing in the Simulation League at RoboCup-1997 in Nagoya. The scientific attention in Padua then shifted towards the design and practice of autonomous multi-robot systems (MRS) [1], stressing the relevance of coordination issues for MRS, in cooperation with some Keio University scientists and with A. D'Angelo of Udine University. In July 2000, the 6th IAS Conference, sponsored by the Int. Society for Intelligent Autonomous Systems, was held in Venice and Padua, and in 2003 RoboCup was held for the first time in Italy [54], supported by PadovaFiere. The time was ripe to investigate the role of robot perception in developing autonomous and intelligent robots [53]. In 2002 E. Menegatti started a collaboration with Prof. H. Ishiguro of Osaka University on omnidirectional vision [43] and humanoid robot perception based on touch for interacting with humans [36]. We understood how practicing humanoids inside academic scientific laboratories would definitively change the horizon of robotics research for the years to come [44]. As a consequence, two new series of international conferences were launched by the IAS-Lab of the Dept. of Information Eng. In 2006, in cooperation with Prof. C. Zhou of Singapore Polytechnic, the First Int. Workshop on Humanoid Soccer Robots was co-located with the IEEE-Humanoids Conf., and in 2008 the First Int. Conf. on Simulation, Modeling, and Programming for Autonomous Robots (SIMPAR) was held in Venice and Padua. Nowadays, IAS-Lab is working on tracking humans [51], understanding their actions [50], and modeling human motions to design a new generation of elastic humanoids and exoskeletons by modeling the neuromusculoskeletal structure. This research was started with the social aim of contributing to the rehabilitation of people through robotics.

3 Intelligent Autonomous Systems Laboratory (IAS-Lab), University of Padova, Italy.

Fig. 2. A gallery of autonomous robots used at IAS-Lab.

4. Robotic scientific competitions4

In the last years the organization of robotic scientific competitions has constantly increased. Competitions have proven to be very effective for: defining standard testbeds for many Robotics and Artificial Intelligence

tasks; attracting different research groups to compare their solutions on a common testbed; and promoting collaborative solutions to a specific problem. Among the many notable examples of robotic competitions, RoboCup5 is one of the largest world-wide in terms of team participation. RoboCup was established in 1997 [2, 34] with the long-term goal of realizing a fully autonomous team of robotic soccer players that can compete in 2050 with the human world champions. In recent years, in addition to soccer, RoboCup has also included other leagues aiming at aspects of the development of intelligent robots that have a more direct impact on industry and society. For example, RoboCup Rescue (since 2001) aims at developing robots that can help in emergency and disaster scenarios, and RoboCup@Home (since 2006) aims at developing intelligent domestic and service robots that can help people in everyday activities. The participation of Italian researchers in the RoboCup competitions and in their organization has been very significant. The first participation of the Italian community in RoboCup, in 1998, was very successful: seven Italian universities joined together to form the Azzurra Robot Team (ART)6. Following this successful experience, many other teams were developed. A detailed description of the participation and results of Italian research groups is summarized on the Italian RoboCup National Committee web site7. In addition to participating in the competitions, an important effort has been made to organize them and coordinate the activities of the many people involved in the organization and dissemination activities. Enrico Pagello has been Founding Trustee of the RoboCup Federation, Vice-President, and General Chair of RoboCup 2003 in Padova, Italy. Daniele Nardi is currently President of the Federation, after serving as Trustee and Vice-President. Andrea Bonarini and Luca Iocchi (now Trustee) have been involved in the Executive Committees of the Soccer Middle-Size League and the @Home League [65]. The above-mentioned people and Domenico G. Sorrenti have also contributed as co-chairs of the RoboCup Symposium in different years and as co-chairs of RoboCup-related workshops, such as the International Workshop on Humanoid Soccer Robots, on Domestic Service Robots, etc. Moreover, the experience gained within the RoboCup competitions is being used to create a new set of robotic competitions within the EU-funded project RoCKIn8, in which Sapienza University of Rome and Politecnico di Milano are involved. Finally, but of the highest importance as well, these competitions have motivated a large part of research in many fields of Artificial Intelligence. In the remainder of this section we sketch some relevant results that were motivated by RoboCup scenarios and have then been applied to other domains as well.

4 Lab RoCoCo, Sapienza University of Rome, Italy.
5 www.robocup.org
6 www.dis.uniroma1.it/ART/
7 www.dis.uniroma1.it/robocup-national-committee-italy/
8 rockinrobotchallenge.eu

Fig. 3. Robot platforms used in RoboCup: ART (Middle-Size), SPQR (Standard Platform 4-Legged and Humanoid).

The soccer domain in RoboCup involves teams of autonomous robots playing soccer, and for this purpose team coordination is a fundamental ability. Figure 3 shows some robotic platforms that have been used in different leagues: the ART team with middle-size wheeled robots, and the SPQR9 team in the Standard Platform League with AIBO quadruped robots and NAO humanoid robots. The robotic soccer challenge provides a very demanding setting for multi-robot coordination in dynamic, uncertain, and adversarial environments. Coordination of heterogeneous soccer players was first adopted by the Azzurra Robot Team in RoboCup [30] and then used also in other teams and for different tasks, such as exploration and surveillance. Moreover, the definition of complex behaviors has been addressed by defining a high-level plan representation formalism based on Petri Nets: Petri Net Plans (PNP) [66]. PNP was first developed to address the complexity of soccer behaviors in a dynamic, uncertain, and adversarial environment, and has since been used for other tasks. Robot learning is also a key feature for improving the performance of a robot through experience. Learning techniques are particularly important because of the lack of detailed and precise knowledge about the models of the robots and of the environments. In particular, we showed how learning techniques allow a soccer robot to improve its performance in locomotion and in executing complex compositions of behaviours (e.g., [17]). In addition, LearnPNP [35] introduces learning into the representation of high-level plans through PNP. Most of the research motivated by the RoboCup soccer domain has been reused in other application fields. The most relevant example is ARGOS [6], a system for monitoring naval traffic in Venice that has been developed by exploiting tracking and data-fusion techniques previously developed for the soccer robots. In addition to soccer, RoboCup also provides rescue and domestic scenarios. Within the RoboCup Rescue scenario, we have studied, for example, techniques for multi-robot exploration [10] and the use of micro UAVs for emergency situations (see, for example, the contributions collected in [29]). The RoboCup@Home setting, in turn, has been a testbed for several research projects related to service robotics, starting from elderly assistance in RoboCare10 (3rd position in RoboCup@Home 2006) to semantic mapping through human-robot interaction [55]. In conclusion, robotic competitions, and RoboCup in particular, are undoubtedly having a strong impact on the development and deployment of Artificial Intelligence and Robotics technology.

9 spqr.dis.uniroma.it
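The core intuition behind a Petri-net-based plan representation like PNP can be conveyed with a minimal marking/firing sketch. This is a generic Petri net executor with a toy soccer plan, not the actual PNP library or its operator set:

```python
# Minimal Petri net executor: places hold tokens, and a transition fires
# when all its input places are marked. Generic sketch, not the PNP API.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)  # place -> token count
        self.transitions = {}         # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        assert self.enabled(name), f"transition {name} not enabled"
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A toy soccer plan: approach the ball, and only once the ball is reached
# does the kick action become possible; transitions model action events.
net = PetriNet({"idle": 1})
net.add_transition("start_approach", ["idle"], ["approaching"])
net.add_transition("ball_reached", ["approaching"], ["at_ball"])
net.add_transition("kick", ["at_ball"], ["idle"])

net.fire("start_approach")
net.fire("ball_reached")
assert net.enabled("kick")  # kick is only enabled once the robot is at the ball
```

The appeal of the formalism is that sequencing, interrupts, and concurrency of behaviors all reduce to the same token-game semantics, which can be checked and learned over (as in LearnPNP).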

5. Robots situated in their ecosystem11

The main goal of the research agenda of the Laboratorium lab, pursued since 2003, is to extend the scope of service robots from well-defined, structured environments to everyday, human-populated environments. The key idea behind this approach is the notion of artificial ecosystem, introduced by [58]. In Ecology, an ecosystem is a community of living organisms and non-living components of their environment which, as a whole, interact as a system. The artificial ecosystem extends this notion to mobile robots, to intelligent components of the environment (i.e., elevators, automated doors and conveyor belts), as well as to sensors distributed in the environment itself, all of which interact as an artificial, distributed system. The rationale is that, whereas everyday environments have been purposely designed and built for humans to use, robots should

have at their disposal an appropriate infrastructure, specifically designed for them, if they are to carry out tasks with the same level of performance as humans do. It is noteworthy that the notion of artificial ecosystem developed in conjunction with two important related paradigm shifts in Robotics and Artificial Intelligence, namely Ubiquitous Robotics and Ambient Intelligence [41]. On the one hand, the Ubiquitous Robotics paradigm envisaged robotic systems where the software part (the mind) can have multiple hardware instances (the body), which may or may not be embodied in a unique physical device. On the other hand, Ambient Intelligence refers to environments which are sensitive and responsive to the presence of people, with a specific emphasis on context awareness, adaptivity and pro-activeness. From 2001 onwards, many instances of artificial ecosystems have been deployed in real-world environments, indoor as well as outdoor. After the initial design of Staffetta, which worked at the Gaslini hospital, Genoa, Italy, for the transportation of goods and materials between different wards [58], there are two current working implementations, namely ANSER (Fig. 4, top) and Merry Porter (Fig. 4, bottom), which have been developed in a joint effort with Genova Robot SRL12. The robot ANSER [11] has been used for surveillance purposes at the Villanova d'Albenga airport, Albenga, Italy, the main uses being patrolling the landing site, checking hangar doors, and looking for unattended luggage in the arrival station. The robot Merry Porter [42] has been installed at the Polyclinic of Modena, Italy, where it has served for automated and on-demand transportation of waste and other materials between different floors of the building. Merry Porter interacts with other intelligent devices in the environment (e.g., automated conveyor belts for loading and unloading waste boxes, and elevators) in order to carry out the required transportation tasks, whereas it autonomously navigates in the environment using on-board sensors and algorithms to localize itself and avoid obstacles [59]. Moreover, all systems developed so far rely on ETHNOS, an open-source, POSIX-based framework for real-time robotic applications initially developed for the Italian RoboCup team ART13.

10 http://robocare.istc.cnr.it/
11 University of Genova, Italy.
12 Please refer to: www.genovarobot.com
13 Videos of ANSER and Merry Porter, and RoboCup and ETHNOS source code, are at http://ethnos.sourceforge.net/

Fig. 4. Top: ANSER. Bottom: Merry Porter.

The artificial ecosystem approach has been used to investigate the adoption of Artificial Intelligence techniques in real-world robotic systems, in particular the

use of logic-based approaches to context assessment, situation understanding and reasoning. From 2008 onwards, a number of logic-based solutions have been designed and developed to provide robots, as well as their whole ecosystem, with the ability of making sense of what happens in a given environment. Heterogeneous information from distributed sources is used to update in real time an ontology which acts as a lumped representation, i.e., the state of the whole ecosystem. The collected information is organized in logic constructs (i.e., concepts and relationships between concepts) representing events and situations whose occurrence is to be monitored. The ontology itself is implemented in OWL 214 using Description Logics [3]. Situation recognition is achieved by checking whether actual facts in the ontology (i.e., individuals that are added at run time depending, e.g., on distributed sensor readings) are instances of selected logic constructs (i.e., descriptions of situations to be monitored) [57]. Finally, it is noteworthy that situation recognition constitutes the basis for implementing scheduling policies in the recently developed software framework SeART. SeART extends the capabilities of real-time middlewares for Robotics15 by enabling robots to schedule, on the on-board processor, only those tasks which are strictly necessary to deal with the current situation [42].

14 http://www.w3.org/TR/owl2-overview/
15 SeART has been ported to different real-time operating systems: http://sartos.sourceforge.net/

6. Social human-humanoid interaction using the Telenoid robot16

16 RoboticsLab, Università degli Studi di Palermo, Italy.

Fig. 5. Telenoid robot.

Since humanoid robots are going to be part of human lives, we have investigated, in

collaboration with Hiroshi Ishiguro and his research group, emotional and social features related to Human-Humanoid Interaction (HHI) using the Telenoid android robot (Fig. 5) from the Hiroshi Ishiguro Laboratory, Osaka University, Japan. The goal of our research is the analysis and comprehension of the benefits of two social dimensions (Perception and Believability) for increasing natural social behaviour between users and humanoid robots. In particular, we are interested in the study of the sense of a person being present in a remote environment with a robot ("Telepresence") and of the sense of a person being present in a common environment with a robot [5], where humans and humanoid are "accessible, available and subject to one another" [4]. All of the tests for the demonstration of experimental results [31] have been conducted using Telenoid, a special humanoid robot with a minimalistic human appearance, designed to look and behave like a human. We want to assess the degree of Believability of interaction along dimensions that can reasonably be taken as meaningful indicators of social interaction, both in free and task-directed conditions. These perceptual and cognitive features [16] underlie ordinary human-human interaction in daily-life contexts and can therefore be exploited in human-humanoid interaction, because they can be seen as a sort of specialized implicit competence. The research focused on the social dimensions that in general allow a successful interaction [32]. We found that the sense of a shared environment is essential for obtaining a satisfying interaction, where the distances between agents are construed as perceptual-motor proxies of the regions where intentions and actions are available at a glance. One crucial result of our research is that subjects are significantly inclined to perceive the Telenoid as cooperative and competent, even though this favorableness is not associated with a clear-cut implicit non-perceptual criterion for ascribing such behaviour to the Telenoid. Since the users perceived the behaviour of the Telenoid as coherent and consistent, our results suggest that they are favorably inclined to accept the humanoid behaviour as tuned to theirs and moved by the commitment to meet their demands. This research can be taken as a first basis to build a scale to assess the efficacy of human-humanoid interaction, as well as its natural-like character, in connection with free or task-driven conditions. Some findings also emerge that emphasize the dimensions that make a human-humanoid interaction socially acceptable and efficient in task-driven contexts, and that are relevant for future research. The favorable acceptance of Telenoid behaviors by human users has recently oriented our research towards the use of this android robot as a co-therapist for the siblings of children with autism, for the acceptance of diversity.

7. Perception for UAVs: terrain classification and vision-based navigation17

Over the last years mobile robotics, for both aerial and ground autonomous systems, has largely evolved. Unmanned Aerial Vehicles (UAVs) have deeply modified the concepts of surveillance, Search & Rescue, aerial photogrammetry, mapping, etc. Mobile Mapping Systems (MMSs) allow 2D/3D environment modeling with very dense point clouds. The kinds of missions grow continuously, and missions are in most cases performed by a fleet of cooperating autonomous and heterogeneous vehicles. These systems are really complex, and both robust classification approaches and simulation schemes become fundamental. The Vision, Robotics and Artificial Intelligence (VRAI) research group at the Università Politecnica delle Marche in Ancona addressed the robust classification requirement in two applications: building detection and thematic mapping. Five methods for automated building detection in aerial images and laser data at different spatial resolutions were compared, using features extracted at both pixel level and object level. The results show a better performance of the Dempster-Shafer and AdaBoost methods, although these two methods also yield a number of unclassified pixels [33]. A hybrid classification scheme was developed for the automation of thematic mapping from a set of remotely sensed data (multi-spectral/LiDAR) acquired with aerial/satellite platforms. In particular, the automatic Land Cover / Land Use (LCLU) thematic map building method, developed with the collaboration of the spin-off company SI2G srl, gives superior results in terms of reduced costs and production times with respect to the usual approach based on human photo-interpretation. Besides, the results do not suffer from the subjectivity of human operators, while being comparable in terms of accuracy [37].

17 Dipartimento di Ingegneria dell'Informazione, Università Politecnica delle Marche, Italy.
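The Dempster-Shafer method used in the building-detection comparison fuses "mass" assignments from independent evidence sources via Dempster's rule of combination. A minimal sketch over a two-class frame of discernment follows; the mass values are toy numbers for illustration, not results from [33]:

```python
# Dempster's rule of combination on a small frame of discernment.
# Hypotheses are sets over {"building", "ground"}; masses from two
# independent sources (e.g., image features and laser features) are fused.
# Toy numbers for illustration, not values from the cited study.
from itertools import product

B, G = frozenset({"building"}), frozenset({"ground"})
BG = B | G  # the whole frame: total ignorance

m1 = {B: 0.6, G: 0.1, BG: 0.3}  # evidence from image features
m2 = {B: 0.5, G: 0.2, BG: 0.3}  # evidence from laser features

def combine(m1, m2):
    """Dempster's rule: intersect hypotheses, renormalize away conflict."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to contradictory hypotheses
    return {h: w / (1.0 - conflict) for h, w in fused.items()}

fused = combine(m1, m2)
# fused[B] aggregates all agreeing evidence for "building"
```

Pixels where too much mass remains on the ignorance hypothesis `BG` can be left unclassified, which matches the behavior of the Dempster-Shafer method reported above.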


As concerns the second requirement, the VRAI research group developed a framework for simulation and testing of UAVs in cooperative scenarios, allowing an easy switch from simulated to real environments and thus reducing testing and debugging times, especially in a training context [38]. A vision-based approach for guidance and safe landing of an Unmanned Aerial Vehicle (UAV) was then developed [15]. The guidance system allows a remote user to define target areas from a high-resolution aerial or satellite image to determine the waypoints of the navigation trajectory or the landing area. A feature-based image-matching algorithm finds the natural landmarks and gives feedback to the control system for autonomous navigation and landing. Vision allows estimating the position and velocity of a set of features with respect to the UAV. The results obtained show the appropriateness of the vision-based approach, which does not require any artificial landmark (e.g., a helipad) and is quite robust to occlusions, light variations and seasonal changes. Recently, the VRAI research group explored new opportunities for photogrammetric applications arising from the combination of aerial and terrestrial photogrammetric recording methods. A UAV can cover both the aerial and quasi-terrestrial image acquisition methods, and can be equipped with an on-board high-resolution camera and a priori knowledge of the operating area where photogrammetric tasks are to be performed. In this general scenario our approach [14] proposes vision-based techniques for localizing a UAV. Natural landmarks provided by a feature-tracking algorithm are considered, without the help of visual beacons or landmarks with known positions. The novel idea is to perform global localization, position tracking and localization failure recovery (kidnapping) based only on visual matching between the current view and available georeferenced satellite images. The matching is based on SIFT features, and the system estimates the position of the UAV and its altitude on the basis of the reference image. The vision system replaces the GPS signal, combining position information from visual odometry and georeferenced imagery. Georeferenced satellite or aerial images must be available on board beforehand or downloaded during the flight. The growing availability of high-resolution satellite images (e.g., provided by Google Earth or other local information sources) makes this topic very interesting and timely. Experiments with both synthetic (i.e., taken from satellites or datasets and pre-processed) and real-world images have been performed to test the accuracy and robustness of our method. The results of the proposed Visual Global Positioning System (VGPS) show sufficient performance compared with common GPS systems. We also obtained good performance in altitude estimation, although in this last case there are only preliminary results.
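The core of the VGPS idea, matching features of the current view against a georeferenced reference image and deriving a position estimate, can be sketched with a toy nearest-neighbour matcher and a robust translation estimate. The descriptors and coordinates below are hypothetical stand-ins for the actual SIFT pipeline:

```python
# Toy visual localization: match descriptors of the current view against a
# georeferenced reference set, then estimate the offset as the median
# displacement of matched features. Hypothetical data; the real system
# uses SIFT features extracted from satellite imagery.
from statistics import median

def dist(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match(query, reference, max_dist=0.5):
    """Nearest-neighbour matching between (position, descriptor) lists."""
    pairs = []
    for q_pos, q_desc in query:
        best = min(reference, key=lambda r: dist(q_desc, r[1]))
        if dist(q_desc, best[1]) <= max_dist:
            pairs.append((q_pos, best[0]))
    return pairs

def estimate_offset(pairs):
    """Median displacement: robust against a few wrong matches."""
    dx = median(r[0] - q[0] for q, r in pairs)
    dy = median(r[1] - q[1] for q, r in pairs)
    return dx, dy

# Reference features in the georeferenced frame; the current view sees the
# same features shifted by (5, -2).
reference = [((10, 10), (1.0, 0.0)),
             ((20, 15), (0.0, 1.0)),
             ((30, 5), (1.0, 1.0))]
query = [((5, 12), (1.0, 0.0)),
         ((15, 17), (0.0, 1.0)),
         ((25, 7), (1.0, 1.0))]

pairs = match(query, reference)
offset = estimate_offset(pairs)  # position of the view in the map frame
```

In the real system the translation estimate would be replaced by a full homography or pose estimation from SIFT correspondences, but the matching-then-robust-estimation structure is the same.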

8. Mobile robots for surveillance18

Recently, the increasing need for automated surveillance systems in indoor environments such as airports, warehouses, production plants, etc. has stimulated the development of intelligent surveillance systems based on mobile sensors. The main objective of the ISSIA Robotics Laboratory is the development of a cooperative multi-agent monitoring system that integrates a network of fixed calibrated cameras with a team of autonomous mobile robots equipped with various sensors [47]. The final aim is to use the fixed agents for surveillance tasks in transit areas and a network of mobile agents for some local and specific surveillance tasks. In this section part of this project is presented, with particular focus on the mobile-agent tasks. The robots are provided with a 2D map of the environment, built by using the laser sensor on board the robots and augmented with passive RFID tags [18]. Based on such a map, the robots are able to navigate throughout the environment and to reach goal points. Goal positions are either areas of the environment that are not in the field of view of the fixed agents, or specific goal positions estimated by the static agents after a particular event detection. The former refer to predefined regions of the environment marked with RFID tags that enable the robot to detect and monitor specific objects, or provide it with additional information about the objects and regions they are attached to. In the last years RFID technology has quickly gained great attention in the robotics community for several reasons: contactless identification, cheapness, and reading effectiveness (i.e., no need for line of sight between tags and reader). Figure 6 shows one robot at ISSIA-Lab used for surveillance purposes. It is based on the PeopleBot platform by MobileRobots Inc. and is equipped with an RFID system composed of a SICK RFI641 UHF reader and two circularly polarized antennas. Passive UHF Gen2 DogBone tags by UPM Raflatac are placed at strategic positions with a twofold role: they are used both to label specific goal positions that the robot has to reach in order to inspect the surroundings, and for navigational purposes [46], such as localization or getting information for performing specific activities. New tags can be added to the map, as they can be localized automatically by the robot [18]. In addition, the robot can localize itself not only by using the laser data and the map, but also by using an RFID-based localization algorithm [62]. This is useful each time the robot needs to globally localize itself in the map, or when the robot gets lost due to very crowded environmental situations that make the laser-based algorithm ineffective. Once the robot reaches a goal position it surveys the scene, searching for abandoned or removed objects. In this case visual and laser information is analyzed using various filtering, clustering, and pre-processing algorithms, run in parallel. The output of the various algorithms is then passed to a fuzzy logic component, which performs sensor fusion and decides whether scene changes have actually occurred. The robot is also able to detect and follow people moving in its surroundings, using a leg detection and tracking algorithm, which detects and tracks legs based on typical human leg shape and motion characteristics extracted from laser data [45].

18 Institute of Intelligent Systems for Automation (ISSIA), National Research Council of Italy (CNR).

Fig. 6. Picture of the robot detecting an RFID tag.

9. Decentralized coordination for multi-robot systems19

19 University of Verona, Italy.

When multiple robots are used in domestic or work environments, their collaboration in achieving common goals is a fundamental issue, especially when the robots are heterogeneous (e.g., developed

by different providers). More specifically, when multiple robots must execute high level complex tasks they must coordinate their actions so to avoid possible negative influences and to maximize the effectiveness of the system. This is typically the case in application scenarios that require cooperative exploration or cooperative information gathering, where robots should avoid hindering other robots’ movements and acquire information that are most important for the systems given what is already known to team mates. There is a large body of work focusing on coordination for Multi-Robot System (MRS) and the literature offers a rich wealth of approaches ranging from market based methods to biologically inspired techniques [22]. In the last years, there has been a growing interest for coordination techniques that can exploit problem decomposition to provide effective and efficient solutions that can be applied to large scale systems (i.e., many robots, many tasks, large environments). Such problem decomposition naturally arises from locality of perception, actions and interactions typically present in MRS. Within this context, our work proposes a novel perspective for MRS and Multi-Agent System (MAS) coordination focusing on Distributed Constraint Optimization Problems (DCOPs) [25] and probabilistic graphical models [28]. Specifically, we proposed the use of the max-sum algorithm for coordination [24]. The max-sum algorithm belongs to the Generalized Distributive Law framework which is frequently used for various inference tasks in the probabilistic graphical model community. In more detail, in [23, 24] we show how this algorithm can be used to coordinate the operations of low power devices, such as sensors in a sensor networks, showing that the max-sum algorithm is able to provide better solutions than previous sub-optimal coordination approaches, while maintaining a low coordination and communication overhead and being more robust to message loss. 
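As an illustration only (not code from the cited systems), the flavor of max-sum coordination can be sketched on a toy two-robot task-assignment problem: each robot has a utility for each task, a pairwise factor discourages both robots from picking the same task, and each robot maximises its own utility plus a message summarising the best attainable utility of its neighbour. All robot names, task names and utility values below are hypothetical.

```python
# Hypothetical utilities: robot i choosing task t (not data from the paper).
pref = {
    "r1": {"taskA": 5.0, "taskB": 2.0},
    "r2": {"taskA": 4.0, "taskB": 3.0},
}

def overlap_penalty(t1, t2):
    # pairwise factor: discourage both robots picking the same task
    return -10.0 if t1 == t2 else 0.0

def max_sum_two_robots(pref):
    tasks = list(pref["r1"].keys())
    # message to r1: for each choice t1, the best utility r2 can still achieve
    m_to_r1 = {t1: max(pref["r2"][t2] + overlap_penalty(t1, t2)
                       for t2 in tasks) for t1 in tasks}
    m_to_r2 = {t2: max(pref["r1"][t1] + overlap_penalty(t1, t2)
                       for t1 in tasks) for t2 in tasks}
    # each robot maximises its local belief = own utility + incoming message
    a1 = max(tasks, key=lambda t: pref["r1"][t] + m_to_r1[t])
    a2 = max(tasks, key=lambda t: pref["r2"][t] + m_to_r2[t])
    return a1, a2

print(max_sum_two_robots(pref))  # → ('taskA', 'taskB')
```

On a tree-structured factor graph such as this one, a single message exchange already yields the optimal joint assignment; on cyclic graphs max-sum is iterated and is only approximate, which motivates the bounded variants studied in the cited work.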
While in many practical applications optimal solutions for coordination are not achievable, it is often important to characterize solution quality, so as to avoid pathological behaviours that cannot be accepted in critical scenarios. To this end, the max-sum technique has been extended to provide a bound on the optimal value of the solution, by operating on a relaxed tree-structured version of the problem [56]. Along this line, in [64] we also provide important theoretical results on the quality of solutions at fixed points of the max-product and max-sum algorithms. This line of research has been further explored in several subsequent contributions focusing on mobile robots.

Specifically, in [63] we apply the max-sum algorithm to a cooperative information gathering task, where robots must provide the most accurate information on a scalar phenomenon that varies in space and time (e.g., temperature, gas concentration, etc.). Results show that the approach provides good solutions while enforcing crucial constraints on robot movements (e.g., maintaining a connected network throughout the whole simulation). In addition, in [26] we provide a methodology for applying the max-sum approach to robotic applications, and validate it on a system composed of a set of UAVs providing video streams of interest points inserted by a set of users. Recently, we focused on the problem of coalition formation, i.e., building groups of robots that must cooperate on complex tasks. Along this line we proposed a decentralised coalition formation algorithm based on the max-sum approach, which was validated in the RoboCup Rescue agent simulator [56]. Moreover, we devised a fast, low-overhead approach for coalition formation in highly dynamic environments, which was applied and validated in the context of the RoboCup soccer Middle Size League [21].

10. On the parameterizations of 3D point features in visual SLAM20

From a single image it is impossible to recover both the range and the bearing of a 3D point, but cameras are attractive sensors in terms of cost, weight and power consumption. This has made visual SLAM (VSLAM) a widely studied topic. The EKF is a well-established technique and is known to be able to deal with VSLAM in large-scale environments, thanks to the adoption of sub-mapping; its widespread usage is today questioned by approaches based on non-linear optimization. The case of a single camera is known as the Monocular SLAM problem. In the seminal work by Davison [20], 3D points are represented by their Euclidean coordinates; the proposal relies on a delayed initialization of the points, as shown by Solà et al. [61].
20 IRAlab of Università degli Studi di Milano - Bicocca, Italy and AIRlab of Politecnico di Milano, Italy.

Differently, Montiel et al. [49] proposed the Unified Inverse Depth (UID) parametrization, the first instance of an inverse parametrization of point features; they demonstrated that it is possible to correctly represent the feature uncertainty with a Gaussian by changing the representation of the scene feature. This allowed an un-delayed initialization of features, while still permitting the usage of the well-known EKF machinery.

Nowadays, to the best of our knowledge, the problem of un-delayed feature initialization in EKF-based Monocular SLAM has five solutions, all related to how 3D points are represented in the filter state, and all based on inverse representations: UID [49], Inverse Scaling (IS) [40], Anchored Homogeneous Point (AHP) [60], Framed Homogeneous Point (FHP) [12], and its simplified variant Framed Inverse Depth (FID) [13]. All these parametrizations make it possible to represent a depth uncertainty that is skewed and extends up to infinity using a standard Gaussian distribution. It is important to note that, while a Euclidean point requires 3 parameters, these representations require (the storage and processing of) more parameters, i.e., more variables in the filter state. This makes the EKF slower, because the EKF complexity, for a filter dominated by the size of the state, is O(n^2.4), n being the size of the state vector.

The UID parameterization requires a 6-element vector: the camera projection center at the time the point was first observed (called the anchor in the following), the azimuth and elevation of the line through the image point, defined in a world-aligned reference frame, and the inverse of the point depth along this line. The IS parameterization requires a 4-element vector; IS is a scaled version of the triangle with a vertex in the projection centre, whose sides are the focal length and the line from the image centre to the image of the point; as noted by [60], this is also a Homogeneous Point (HP), i.e., a 3D point in homogeneous coordinates.
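As a hedged illustration of how an inverse-depth representation encodes a 3D point, the sketch below converts a UID 6-element vector back to Euclidean coordinates. The direction-vector convention is one common formulation (azimuth around the vertical axis, elevation from the horizontal plane) and may differ from the implementations cited above.

```python
import numpy as np

def uid_to_euclidean(uid):
    """Map a 6-element UID feature [x, y, z, theta, phi, rho] to a 3D point:
    anchor (camera position at first observation), azimuth/elevation of the
    viewing ray in a world-aligned frame, and inverse depth rho along it."""
    x, y, z, theta, phi, rho = uid
    # directional vector of the viewing ray (one common convention)
    m = np.array([np.cos(phi) * np.sin(theta),
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([x, y, z]) + m / rho

# a point straight ahead of the anchor at depth 2 (rho = 0.5)
p = uid_to_euclidean([0.0, 0.0, 0.0, 0.0, 0.0, 0.5])  # → array([0., 0., 2.])
```

Note that as rho approaches 0 the point recedes to infinity with finite, Gaussian-friendly parameters, which is exactly what the inverse representations above exploit.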
The AHP parameterization requires a 7-element vector; it takes from UID the idea of representing the Euclidean position of the optical center at initialization time, i.e., the anchor, and from IS the use of homogeneous coordinates to represent a 3D point. The FHP parameterization extends the anchor point, used in AHP and UID, to a complete anchor frame, i.e., it uses both the position and the orientation of the camera at the time the point was first observed. This parametrization results in a 9-element vector, or 10 when representing the orientation with a quaternion. FID relies on the approximation that the initial point on the image plane does not change during the updates, a reasonable hypothesis with a properly calibrated camera, high image resolution and low noise intensity. This reduces the representation to a 7-element vector, or 8 if a quaternion is used for the camera orientation.
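In the same illustrative spirit (again a sketch, not the cited implementations), an anchored homogeneous point can be mapped back to Euclidean coordinates by displacing the anchor along the stored viewing ray:

```python
import numpy as np

def ahp_to_euclidean(ahp):
    """Map a 7-element AHP feature [x0, y0, z0, mx, my, mz, rho] to a 3D
    point: anchor position, homogeneous viewing ray, inverse distance rho."""
    anchor = np.asarray(ahp[:3], dtype=float)
    ray = np.asarray(ahp[3:6], dtype=float)
    rho = float(ahp[6])
    return anchor + ray / rho

# anchor at the origin, ray along z, inverse distance 0.25 → depth 4
p = ahp_to_euclidean([0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.25])
```

Compared with UID, the ray is stored directly as three homogeneous components rather than two angles, which is what brings the vector from 6 to 7 elements.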

The advantage of anchored parametrizations in VSLAM is the higher quality of the reconstructed maps. In UID and AHP the anchor point encodes the position of the camera, while the viewing ray carries mixed information about the orientation of the camera and the viewing ray in the camera frame. FHP and FID have the advantage of keeping these sources of information separated, and of filtering/smoothing selected past camera poses, which is somewhat similar to non-EKF-based methods such as FrameSLAM or GraphSLAM.

11. Conclusion

In this article we have presented different research topics addressed by several Italian research groups, tied together by the common goal of developing intelligent robotic systems through a proper integration of Artificial Intelligence and Robotics techniques. As shown in the previous sections, many significant results demonstrate the feasibility of the approaches, the effectiveness of the developed methods, and the high relevance of deploying the developed systems in the real world. However, the overall goal is still very challenging and the research problems to be addressed are still compelling. In particular, the integration of Artificial Intelligence and Robotics must be further extended, and should evolve from being applied to single specific tasks into a general application-independent framework. We believe that the integration of the methodologies, techniques and results obtained by the research groups working in this field, and the collaboration in developing more and more complex systems operating in complex scenarios, will speed up progress towards the creation and actual deployment of intelligent service robots.

References

[1] T. Arai, E. Pagello and L.E. Parker, Guest editorial for: Advances in multirobot systems, IEEE Trans on Robotics and Automation 18(5), 2002.
[2] M. Asada, H. Kitano, I. Noda and M. Veloso, RoboCup: Today and tomorrow - what we have learned, Artificial Intelligence 110(2), 1999.
[3] F. Baader, D. Calvanese, D.L. McGuinness, D. Nardi and P.F. Patel-Schneider, editors, The Description Logic Handbook: Theory, Implementation, and Applications, Cambridge University Press, 2003.
[4] G. Balistreri, S. Nishio, R. Sorbello, A. Chella and H. Ishiguro, Natural human robot metacommunication through the integration of android's sensors with environment embedded sensors, Frontiers in Artificial Intelligence and Applications 233, IOS Press, 2011.
[5] G. Balistreri, S. Nishio, R. Sorbello and H. Ishiguro, Integrating built-in sensors of an android with sensors embedded in the environment for studying a more natural human-robot interaction, AI*IA 2011: Artificial Intelligence Around Man and Beyond, 2011, pages 432–437.
[6] D.D. Bloisi and L. Iocchi, ARGOS - a video surveillance system for boat traffic monitoring in Venice, Int Journ of Pattern Recognition and Artificial Intelligence 23(7), 2009.
[7] A. Bonarini, S. Ceriani, G. Fontana and M. Matteucci, On the Development of a Multi-Modal Autonomous Wheelchair, Medical Information Science Reference (an imprint of IGI Global), Hershey PA, 2013.
[8] A. Bonarini, M. Matteucci, M. Migliavacca and D. Rizzi, R2P: An open source hardware and software modular approach to robot prototyping, Robotics and Autonomous Systems, 2013.
[9] A. Bonarini, M. Matteucci and M. Restelli, MRT: Robotics Off-the-Shelf with the Modular Robotic Toolkit, volume 30 of Springer Tracts in Advanced Robotics, Springer Verlag, Berlin, 2007.
[10] D. Calisi, A. Farinelli, L. Iocchi and D. Nardi, Multiobjective exploration and search for autonomous rescue robots, Journ of Field Robotics, Spec Iss on Quantitative Performance Evaluation of Robotic and Intelligent Systems 24, 2007.
[11] F. Capezio, F. Mastrogiovanni, A. Sgorbissa and R. Zaccaria, Robot-assisted surveillance in large environments, Journ of Computing and Information Technology 17(1), 2009.
[12] S. Ceriani, D. Marzorati, M. Matteucci, D. Migliore and D.G. Sorrenti, On feature parameterization for EKF-based monocular SLAM, in Proc of 18th World Congress of the Int Feder of Automatic Control, 2011.
[13] S. Ceriani, D. Marzorati, M. Matteucci and D.G. Sorrenti, Single and multi camera simultaneous localization and mapping using the extended Kalman filter: On the different parameterizations for 3D point features, Journ of Mathematical Modelling and Algorithms, 2013.
[14] A. Cesetti, E. Frontoni, A. Mancini, A. Ascani, P. Zingaretti and S. Longhi, A visual global positioning system for unmanned aerial vehicles used in photogrammetric applications, Journ of Intelligent & Robotic Systems 61, 2011.
[15] A. Cesetti, E. Frontoni, A. Mancini, P. Zingaretti and S. Longhi, A vision-based guidance system for UAV navigation and safe landing using natural landmarks, Journ of Intelligent & Robotic Systems 57(1-4), 2010.
[16] A. Chella, C. Lebiere, D. Noelle and A. Samsonovich, On a roadmap to biologically inspired cognitive agents, Frontiers in Artificial Intelligence and Applications 233, IOS Press, 2011.
[17] A. Cherubini, F. Giannone, L. Iocchi, D. Nardi and P.F. Palamara, Policy gradient learning for quadruped soccer robots, Robotics and Autonomous Systems 58(7), 2010.
[18] G. Cicirelli, A. Milella and D. Di Paola, RFID tag localization by using adaptive neuro-fuzzy inference for mobile robot applications, Industrial Robot: An International Journal 39(4), 2012.
[19] M. Colombetti and E. Pagello, A logical framework for robot programming, in Theory and Practice of Robots and Manipulators, A. Morecki, K. Kedzior (Eds.), 1977.
[20] A. Davison, Real-time simultaneous localisation and mapping with a single camera, in Proc of IEEE Int Conf on Computer Vision, 2003.
[21] A. Farinelli, H. Fujii, N. Tomoyasu, M. Takahashi, A. D'Angelo and E. Pagello, Cooperative control through objective achievement, Robotics and Autonomous Systems 58(7), 2010.
[22] A. Farinelli, L. Iocchi and D. Nardi, Multi robot systems: A classification focused on coordination, IEEE Transactions on Systems, Man and Cybernetics, Part B 34(5) (2004), 2015–2028.
[23] A. Farinelli, A. Rogers and N.R. Jennings, Agent-based decentralised coordination for sensor networks using the max-sum algorithm, Autonomous Agents and Multi-Agent Systems, 2013.
[24] A. Farinelli, A. Rogers, A. Petcu and N.R. Jennings, Decentralised coordination of low-power embedded devices using the max-sum algorithm, in AAMAS, 2008.
[25] A. Farinelli, M. Vinyals, A. Rogers and N. Jennings, Multiagent Systems, chapter Distributed Constraint Handling and Optimization, MIT Press, 2013.
[26] F.M. Delle Fave, A. Farinelli, A. Rogers and N.R. Jennings, A methodology for deploying the max-sum algorithm and a case study on unmanned aerial vehicles, in IAAI, 2012.
[27] G. Gini, M. Gini, E. Pagello and G. Trainito, Distributed robot programming, in Proc 10th Int Symp on Industrial Robots, 1980.
[28] C.E. Guestrin, D. Koller and R. Parr, Multiagent planning with factored MDPs, in Advances in Neural Information Processing Systems, Vancouver, Canada, 2002.
[29] L. Iocchi and D. Nardi, editors, Proc of Workshop on Mini and Micro UAV for Security and Surveillance, 2008.
[30] L. Iocchi, D. Nardi, M. Piaggio and A. Sgorbissa, Distributed coordination in heterogeneous multi-robot systems, Autonomous Robots 15(2), 2003.
[31] H. Ishiguro, S. Nishio, A. Chella, R. Sorbello, G. Balistreri, M. Giardina and C. Calì, Investigating perceptual features for a natural human-humanoid robot interaction inside a spontaneous setting, Biologically Inspired Cognitive Architectures, 2013.
[32] H. Ishiguro, S. Nishio, A. Chella, R. Sorbello, G. Balistreri, M. Giardina and C. Calì, Perceptual social dimensions of human-humanoid robot interaction, Intelligent Autonomous Systems, 2013.
[33] K. Khoshelham, C. Nardinocchi, E. Frontoni, A. Mancini and P. Zingaretti, Performance evaluation of automated approaches to building detection in multi-source aerial data, ISPRS Journ of Photogrammetry and Remote Sensing 65, 2010.
[34] H. Kitano, M. Asada, Y. Kuniyoshi, I. Noda, E. Osawa and H. Matsubara, RoboCup: A challenge problem for AI, AI Magazine 18(1), 1997.
[35] M. Leonetti and L. Iocchi, LearnPNP: A tool for learning agent behaviors, in RoboCup 2010: Robot Soccer World Cup XIV (LNCS 6556), 2011, pages 418–429.
[36] F. Dalla Libera, T. Minato, I. Fasel, H. Ishiguro, E. Pagello and E. Menegatti, A new paradigm of humanoid robot motion programming based on touch interpretation, Robotics and Autonomous Systems 57(8), 2009.
[37] E.S. Malinverni, A.N. Tassetti, A. Mancini, P. Zingaretti, E. Frontoni and A. Bernardini, Hybrid approach for land use/land cover mapping using high resolution imagery, Int Journ of Geographical Information Science 25, 2011.
[38] A. Mancini, A. Cesetti, A. Iualè, E. Frontoni, P. Zingaretti and S. Longhi, A framework for simulation and testing of UAVs in cooperative scenarios, Journ of Intelligent & Robotic Systems 54, 2009.
[39] D. Martinoia, D. Calandriello and A. Bonarini, Physically interactive robogames: Definition and design guidelines, Robotics and Autonomous Systems, 2013.
[40] D. Marzorati, M. Matteucci, D. Migliore and D.G. Sorrenti, On the use of inverse scaling in monocular SLAM, in Proc of IEEE Int Conf on Robotics and Automation, 2009.
[41] F. Mastrogiovanni and N.Y. Chong, Handbook of Research on Ambient Intelligence: Trends and Perspectives, 2010.
[42] F. Mastrogiovanni, A. Paikan and A. Sgorbissa, Semantic-aware real-time scheduling in robotics, IEEE Trans on Robotics 29(1), 2013.
[43] E. Menegatti, A. Pretto and E. Pagello, Testing omnidirectional vision-based Monte-Carlo localization under occlusion, in Proc IEEE/RSJ Int Conf on Intelligent Robots and Systems, 2004.
[44] E. Menegatti, G. Silvestri, E. Pagello, N. Greggio, A. Cisternino, F. Mazzanti, R. Sorbello and A. Chella, 3D models of humanoid soccer robot in USARSim and Robotics Studio simulators, Int Journ of Humanoid Robotics, Spec Iss on Humanoid Soccer Robots, C. Zhou, E. Pagello (Eds.), 5(3), 2008.
[45] A. Milella, C. Dimiccoli, G. Cicirelli and A. Distante, Laser-based people-following for human-augmented mapping of indoor environments, in IASTED Int Conf on Artificial Intelligence and Applications, 2007.
[46] A. Milella, D. Di Paola, G. Cicirelli and T. D'Orazio, RFID tag bearing estimation for mobile robot localization, in Int Conf on Advanced Robotics, Munich, Germany, 2009.
[47] A. Milella, D. Di Paola, P.L. Mazzeo, P. Spagnolo, M. Leo, G. Cicirelli and T. D'Orazio, Active surveillance of dynamic environments using a multi-agent system, in 7th IFAC Symp on Intelligent Autonomous Vehicles, Lecce, Italy, 2010.
[48] C. Mirolo, S. Carpin and E. Pagello, Incremental convex minimization for computing collision translations of convex polyhedra, IEEE Trans on Robotics 23(3), 2007.
[49] J. Montiel, J. Civera and A.J. Davison, Unified inverse depth parametrization for monocular SLAM, in Proc of Robotics: Science and Systems, 2006.
[50] M. Munaro, G. Ballin, S. Michieletto and E. Menegatti, 3D flow estimation for human action recognition from colored point clouds, Biologically Inspired Cognitive Architectures 5, 2013.
[51] M. Munaro, F. Basso and E. Menegatti, Tracking people within groups with RGB-D data, in IEEE/RSJ Int Conf on Intelligent Robots and Systems, 2012.
[52] E. Pagello, P. Bison, C. Mirolo, G. Perini and G. Trainito, A message passing approach to robot programming, Computers in Industry 7(3), 1986.
[53] E. Pagello, A. D'Angelo and E. Menegatti, Cooperation issues and distributed sensing for multirobot systems, Proceedings of the IEEE 94(7), 2006.
[54] E. Pagello, E. Menegatti, A. Bredenfel, P. Costa, T. Christaller, A. Jacoff, D. Polani, M. Riedmiller, A. Saffiotti, E. Sklar and T. Tomoichi, RoboCup-2003: New scientific and technical advances, AI Magazine 25(2), 2004.
[55] G. Randelli, T.M. Bonanni, L. Iocchi and D. Nardi, Knowledge acquisition through human-robot multimodal interaction, Intelligent Service Robotics 6, 2013.
[56] A. Rogers, A. Farinelli, R. Stranders and N.R. Jennings, Bounded approximate decentralised coordination via the max-sum algorithm, Artificial Intelligence 175(2), 2011.
[57] A. Scalmato, A. Sgorbissa and R. Zaccaria, Describing and recognizing patterns of events in smart environments with description logic, IEEE Trans on Cybernetics, in press, 2013.
[58] A. Sgorbissa and R. Zaccaria, Robot Staffetta in its natural environment, in Proc of the 8th Int Conf on Intelligent Autonomous Systems (IAS-8), 2004.
[59] A. Sgorbissa and R. Zaccaria, Planning and obstacle avoidance in mobile robotics, Robotics and Autonomous Systems 60(4), 2012.
[60] J. Solà, Consistency of the monocular EKF-SLAM algorithm for three different landmark parametrizations, in Proc of IEEE Int Conf on Robotics and Automation, 2010.
[61] J. Solà, A. Monin, M. Devy and T. Lemaire, Undelayed initialization in bearing only SLAM, in IEEE/RSJ Int Conf on Intelligent Robots and Systems, 2005.
[62] E. Stella, G. Cicirelli, F.P. Lovergine and A. Distante, Position estimation for a mobile robot using data fusion, in IEEE Int Symp on Intelligent Control, 1995.
[63] R. Stranders, A. Farinelli, A. Rogers and N.R. Jennings, Decentralised coordination of mobile sensors using the max-sum algorithm, in IJCAI, 2009.
[64] M. Vinyals, J. Cerquides, A. Farinelli and J.A. Rodríguez-Aguilar, Worst-case bounds on the quality of max-product fixed-points, in NIPS, 2010.
[65] T. Wisspeintner, T. van der Zant, L. Iocchi and S. Schiffer, RoboCup@Home: Scientific competition and benchmarking for domestic service robots, Interaction Studies 10(3), 2009.
[66] V.A. Ziparo, L. Iocchi, P. Lima, D. Nardi and P. Palamara, Petri Net Plans - A framework for collaboration and coordination in multi-robot systems, Autonomous Agents and Multi-Agent Systems 23(3), 2011.
