IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, VOL. 6, NO. 4, OCTOBER 2009
Guest Editorial
Introducing Automation and Engineering for Ambient Intelligence
THE term Ambient Intelligence was coined in Europe in 2001 [1]–[3] to denote a new paradigm for the design of intelligent and smart environments. The methodology behind Ambient Intelligence is to move from the passive monitoring and interpretation of an environment of interest to the implementation of active and proactive systems, engineered to support the user. Ambient Intelligence requires complex technologies and state-of-the-art research to implement active and proactive intelligent environments. The intelligent environment must be able to understand automatically the behavior and intentions of its users, for safety and security purposes. The underlying methodology includes the deployment of an array of sensors in the environment, the capture of the signature of the evolving scene, its interpretation using signal processing algorithms, and the packaging of the resulting scene description for immediate (online) or later (offline) use. To implement these processes, each sensor must be calibrated and its stream captured and processed to extract information, and the multimodal streams must then be merged to create a single description/interpretation of the evolving scene.

Ambient Intelligence can trace its links to the intelligent robotics and automation fields, not in its overall scope but in some of the underlying concepts and tools. Innovations from these disciplines, especially those dealing with sensory data interpretation, sensor-based control, task planning, distributed systems, and real-time architectures, all play a significant role in the design and deployment of ambient intelligent systems. A number of early efforts in intelligent spaces and environments came out of groups working in intelligent robotics. These efforts, initiated in the late 1990s, showed the exciting promise and potential utility of incorporating intelligence into our rooms and buildings. Examples are The Intelligent Room [5] and The KidsRoom [6] at MIT, Microsoft's EasyLiving project [4], and the AVIARY project at the University of California at San Diego [7].

In recent years, we have seen significant advances in the field of intelligent environments. These advances are made possible in part by progress in sensor, device, network, and computer technology, as well as in machine perception, systems architecture, and design. A number of research projects, and a smaller number of fielded systems, addressing issues of intelligent environments have been reported. These include the Intelligent Dormitory iDorm [8], the Interactive Room iRoom [9], the HyperMedia Studio [10], the MavHome project [11], the MICASA project [12], the Evidence-Based Nursing Care Support System [13], and the Sensing Room at the University of Tokyo [14], just to list a few.
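Before turning to these systems in more detail, the following sketch illustrates, in schematic Python form, the sensing pipeline outlined above: calibrating each sensor, capturing its stream, extracting features, and merging the multimodal streams into a single scene description. It is a minimal illustration only; all class and function names (SensorStream, Observation, fuse) are hypothetical and do not correspond to any of the systems cited here.

```python
# Minimal sketch of the Ambient Intelligence sensing pipeline described above:
# calibrate each sensor, capture its stream, extract per-sensor features, and
# fuse the multimodal features into a single scene description. All names are
# hypothetical; a real system would replace the stubs with actual device
# drivers and signal processing algorithms.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Observation:
    sensor_id: str
    modality: str          # e.g., "video", "audio", "rfid"
    features: Dict[str, float]


class SensorStream:
    """Stands in for one calibrated sensor and its data stream."""

    def __init__(self, sensor_id: str, modality: str):
        self.sensor_id = sensor_id
        self.modality = modality
        self.offset = 0.0  # calibration parameter (placeholder)

    def calibrate(self, reference: float = 0.0) -> None:
        # Real calibration would estimate intrinsic/extrinsic parameters.
        self.offset = reference

    def capture(self) -> Observation:
        # Stub: a real implementation reads from hardware or a network feed.
        raw = {"activity_level": 0.5 + self.offset}
        return Observation(self.sensor_id, self.modality, raw)


def fuse(observations: List[Observation]) -> Dict[str, float]:
    """Merge multimodal observations into a single scene description
    (here a naive per-feature average; real systems use richer fusion)."""
    totals: Dict[str, float] = {}
    counts: Dict[str, int] = {}
    for obs in observations:
        for name, value in obs.features.items():
            totals[name] = totals.get(name, 0.0) + value
            counts[name] = counts.get(name, 0) + 1
    return {name: totals[name] / counts[name] for name in totals}


if __name__ == "__main__":
    sensors = [SensorStream("cam0", "video"), SensorStream("mic0", "audio")]
    for s in sensors:
        s.calibrate()
    description = fuse([s.capture() for s in sensors])
    print(description)  # online use; could also be stored for offline analysis
```

In a deployed system, the fusion step would be replaced by the signal processing and information fusion algorithms discussed below.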
Ambient Intelligence is by its very nature a multidisciplinary field. It has started creating synergy among different research disciplines, including computer vision, speech understanding, intelligent and adaptive networks, human-machine and human-robot interfaces, machine learning, artificial intelligence, and pattern recognition. There is a large amount of literature on signal-level processing and analysis, but not a great deal on information fusion for the automatic semantic interpretation of activities in complex scenes. The main focus of this Special Issue is to bring together solutions from research and engineering for the automatic understanding of a complex scene via a multimodal array of sensors, with automation of adaptation as the theme connecting the various research areas. As ambient intelligent systems become more capable, issues dealing with shared control between humans and machines [15] and with human-centered design and interfacing [16] will become increasingly important. As these systems become more pervasive, the protection of users' privacy [17] and other issues that are typically not considered engineering or technical issues, such as those dealing with psychology, cognitive science, and sociological impacts, will also gain importance [18]–[20]. Finally, it should be noted that ambient intelligent systems are not restricted to rooms, buildings, offices, or hospitals; they are also finding their niche in mobile environments such as automobiles, where intelligent driver support systems promise to enhance safety, comfort, and convenience [21].

I. SCANNING THE ISSUE

This Special Issue includes papers that deal with single and multimodal sensor information, attempting to interpret an environment that may be inhabited by one or more people. Assisted living, for instance, tends to focus on single occupancy when the application is for the elderly (for instance, a single person at home, aided by the system). However, an assisted living solution could also be deployed in a home where several persons share the same space. The environment could be augmented by robotic platforms introduced to help and inform the user. The introduction of robotic platforms may seem a daring and futuristic solution, although it is already widely accepted in eastern Asian countries such as Japan and Korea. An intelligent system cannot work if no intelligent infrastructure exists to support the underlying processing. Such infrastructure ought to optimize the information flow and the existing, most likely distributed, resources, and to provide all modules of the system with
an adaptive, self-adjusting, and flexible seamless communication layer. Radio frequency, video, and audio are the most common sensory suites for providing a rich representation of an evolving scene and for supporting the Ambient Intelligence paradigm.

Micheloni and Foresti propose an active tuning of camera parameters that can be applied to static and mobile cameras. The idea is to devise an automatic strategy for tuning video acquisition in monitoring applications. The proposed solution demonstrates how tuning camera parameters is particularly effective when illumination within the image is not homogeneous. In particular, while current techniques apply postprocessing filters to enhance object quality, the proposed solution tunes the acquisition parameters on the basis of the object of interest, which improves low-level tasks such as object segmentation. This shows that improving acquisition quality is better than improving image quality after the fact. Thus, the proposed work extends the active vision notion, introducing the concept of how to see the object rather than simply what to see.

Brdiczka et al. propose a multimodal information fusion approach for a more complete interpretation of a smart environment occupied by one or more people. They attempt to recognize human behavior from audio-video sensory data and report promising results in experiments involving two persons in an instrumented room. Their methodology could be employed in assisted living solutions to interpret people's behavior and their interactions.

Lu and Fu consider the problem of human activity recognition in a space instrumented with wireless sensor networks. They propose a Bayesian network for the fusion of multimodal information and for the interpretation of activity. Their experiments, involving a single person in the room, indicate that their approach is effective and tolerant of some sensory noise and failures.

Ren and Meng propose a game-theoretic model of joint topology control and power scheduling for a wireless heterogeneous sensor network, used to analyze the decentralized interactions between heterogeneous sensors. They propose a joint power and topology control game. Reliability, connectivity, and power efficiency are considered desirable characteristics in the design of the power and topology control game. The strategies played by the nodes reflect the tradeoff between node preferences, including frame success rate, node degree, and power consumption. The authors prove the existence of both Nash equilibria and perfect Bayesian Nash equilibria, and give the sufficient and necessary conditions to derive them. The decentralized decision making of sensor nodes may result in either separating or pooling equilibria, so a more stringent game perfection is analyzed to refine the equilibria.

Ahn and Yu study an engineered solution that makes use of radio frequency technology to localize an object or person in an environment. Two algorithms, called LFRN and LMRN, were implemented using a network of ZigBee sensors. LFRN is more accurate than an existing localization system based on the Chipcon CC2431, while LMRN appears to estimate positions faster and works better in open spaces. The authors conclude that LFRN can be employed to estimate the position of stationary objects in an indoor environment, whereas LMRN is better suited to robotic applications in outdoor environments.
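To give a flavor of radio-frequency localization of the kind discussed above, the sketch below estimates a position as an RSSI-weighted centroid of known anchor positions. This is a generic, textbook-style illustration, not the LFRN or LMRN algorithm of Ahn and Yu; the anchor layout, the log-distance path-loss parameters, and all function names are assumptions made for the example.

```python
# Generic RSSI-based localization sketch (weighted centroid), illustrating the
# kind of radio-frequency positioning discussed above. This is NOT the LFRN or
# LMRN algorithm from the Special Issue; path-loss parameters and names are
# illustrative assumptions only.
from typing import Dict, Tuple

# Known anchor node positions in meters (hypothetical layout).
ANCHORS: Dict[str, Tuple[float, float]] = {
    "node_a": (0.0, 0.0),
    "node_b": (6.0, 0.0),
    "node_c": (0.0, 6.0),
    "node_d": (6.0, 6.0),
}


def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -40.0,
                     path_loss_exp: float = 2.5) -> float:
    """Invert a log-distance path-loss model: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))


def weighted_centroid(rssi_readings: Dict[str, float]) -> Tuple[float, float]:
    """Estimate (x, y) as a centroid of anchors weighted by inverse distance."""
    wx = wy = wsum = 0.0
    for node, rssi in rssi_readings.items():
        d = max(rssi_to_distance(rssi), 1e-3)  # avoid division by zero
        w = 1.0 / d
        x, y = ANCHORS[node]
        wx += w * x
        wy += w * y
        wsum += w
    return wx / wsum, wy / wsum


if __name__ == "__main__":
    # Hypothetical RSSI readings (dBm) received from the four anchors.
    readings = {"node_a": -52.0, "node_b": -60.0, "node_c": -63.0, "node_d": -70.0}
    print("Estimated position:", weighted_centroid(readings))
```

A real deployment would, of course, calibrate the path-loss model per environment and cope with multipath and sensor failures, which is precisely where engineered solutions such as those in this issue go beyond this naive sketch.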
Yu et al. present a robotic service framework in which a ubiquitous robotic space, combining conventional infrastructure with a robotic platform, is used for a monitoring application in an office environment. The authors describe how a robot can achieve enhanced perception, recognition, decision, and execution capabilities using mobility-supporting algorithms, precision localization networks, dynamic reconfiguration, and virtual world modeling. Their framework is sufficiently flexible to accommodate the different knowledge required by different robotic services. Their method is tested in a robotic security application.

As illustrated by the collection of papers in this Special Issue, although the state-of-the-art in intelligent environments is advancing rapidly, there is still a full range of challenging unsolved problems. These problems span a wide range of topics in the computational sciences and systems engineering. Successful design of practical ambient intelligent systems requires the resolution of problems related to the robustness of the proposed algorithms, their reliability under different environmental conditions, and their stability over long periods of time. It will be important to have real-world deployments of well-engineered ambient intelligent system prototypes for careful and systematic experimental evaluation. Such experiments will provide rich and important data for ongoing research in usability and user acceptance. Given the rapidly growing interest in safety (assisted living) and security (intelligent surveillance), we believe this Special Issue provides a valuable source of information on state-of-the-art techniques that will hopefully inspire novel methods and research in the growing area of Ambient Intelligence.

We are indebted to all the authors who contributed to our Special Issue, and we wish to thank all the reviewers for their time and useful comments.

PAOLO REMAGNINO, Guest Editor
Faculty of Computing, Information Systems and Mathematics
Kingston University
Kingston Upon Thames, KT1 2EE, U.K.

DOROTHY N. MONEKOSSO, Guest Editor
Centre for Adaptive Wireless Systems
Cork Institute of Technology
Cork, Ireland

YOSHINORI KUNO, Guest Editor
Department of Information and Computer Sciences
Saitama University
Saitama, Japan

MOHAN MANUBHAI TRIVEDI, Guest Editor
Computer Vision and Robotics Research Laboratory
University of California at San Diego
La Jolla, CA 92093 USA

HOW-LUNG ENG, Guest Editor
Institute for Infocomm Research
Singapore 119613, Singapore
REFERENCES

[1] P. Remagnino and G. L. Foresti, "Ambient intelligence: A new multidisciplinary paradigm," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 35, no. 1, pp. 1–7, Jan. 2005.
[2] Scenarios for Ambient Intelligence in 2010, IST Program Advisory Group (ISTAG), European Commission, Final Rep., Feb. 2001.
[3] Ambient Intelligence: From Vision to Reality, IST Program Advisory Group (ISTAG), European Commission, Final Rep., Sep. 2003.
[4] B. Brumitt, B. Meyers, J. Krumm, M. Hale, S. Harris, and S. Shafer, "EasyLiving: Technologies for intelligent environments," in Proc. 2nd Int. Symp. Handheld and Ubiquitous Computing (Lecture Notes in Computer Science, vol. 1927), 2000, pp. 12–29.
[5] M. Coen, B. Phillips, N. Washawsky, L. Weisman, S. Peters, and P. Finnin, "Meeting the computational needs of intelligent environments: The Metaglue system," in Proc. MANSE '99, Dublin, Ireland, 1999, pp. 201–212.
[6] A. Bobick, S. Intille, J. Davis, F. Baird, C. Pinhanez, L. Campbell, Y. Ivanov, A. Schutte, and A. Wilson, "The KidsRoom: A perceptually interactive story environment," Presence, vol. 4, no. 4, pp. 367–391, Aug. 1999.
[7] M. Trivedi, K. Huang, and I. Mikic, "Intelligent environments and active camera networks," in Proc. IEEE Int. Conf. Syst., Man, Cybern., Sep. 2000. Related video: AVIARY—Developing Intelligent Spaces. [Online]. Available: http://www.youtube.com/watch?v=JNDpbKqpg3E
[8] H. Duman, H. Hagras, and V. Callaghan, "Intelligent association selection in embedded agents in intelligent inhabited environments," Pervasive and Mobile Computing, vol. 3, no. 2, 2007. See also the iDorm project home page, Essex University, U.K. [Online]. Available: http://iieg.essex.ac.uk/idorm.htm
[9] J. Borchers, M. Ringel, J. Tyler, and A. Fox, "Stanford interactive workspace: A framework for physical and graphical user interface prototyping," IEEE Wireless Commun., vol. 9, no. 6, pp. 64–69, Dec. 2002.
[10] E. Mendelowitz and J. Burke, "Kolo and Nebesko: A distributed media control framework for the arts," in Proc. Int. Conf. Distrib. Frameworks for Multimedia Appl., Feb. 6–9, 2005, pp. 113–120.
[11] D. Cook, M. Youngblood, E. Helerman, K. Gopalratnam, S. Rao, A. Litvin, and F. Khawaja, "MavHome: An agent-based smart home," in Proc. IEEE Int. Conf. Pervasive Comput. Commun., Mar. 23–26, 2003, pp. 521–524.
[12] M. M. Trivedi, K. S. Huang, and I. Mikic, "Dynamic context capture and distributed video arrays for intelligent spaces," IEEE Trans. Syst., Man, Cybern. A, vol. 35, no. 1, pp. 145–163, Jan. 2005.
[13] T. Hori, Y. Nishida, and S. Murakami, "A pervasive sensor system for nursing care support," in Ambient Intelligence Techniques and Applications. Springer-Verlag, 2008. See also the CARE project home page. [Online]. Available: http://www.elite-care.com/oatfield-tech.html
[14] T. Mori, A. Takada, H. Noguchi, T. Harada, and T. Sato, "Behavior prediction based on daily-life record database in distributed sensing space," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst. (IROS), Aug. 2–6, 2005, pp. 1703–1709.
[15] T. Sheridan and R. Parasuraman, "Human-automation interaction," Rev. Human Factors and Ergonomics, vol. 1, pp. 89–129, 2006.
[16] M. Pantic, A. Pentland, and A. Nijholt, Guest Eds., Special Issue on Human Computing, IEEE Trans. Syst., Man, Cybern. B, Feb. 2009.
[17] D. Fidaleo, H. Nguyen, and M. M. Trivedi, "The networked sensor tapestry (NeST): A privacy enhanced software architecture for interactive analysis and processing of data in video sensor networks," in Proc. ACM Int. Workshop on Video Surveillance Sensor Networks, 2004, pp. 46–53.
[18] E. D. Mynatt, A.-S. Melenhors, A. D. Fisk, and W.-A. Rogers, "Aware technologies for aging in place: Understanding user needs and attitudes," IEEE Pervasive Comput., pp. 46–53, Apr.–Jun. 2004.
[19] P. H. Kahn, H. Ishiguro, B. Friedman, T. Kanda, N. G. Freier, R. L. Severson, and J. Miller, "What is a human? Toward psychological benchmarks in the field of human-robot interaction," Interaction Studies, vol. 8, no. 3, pp. 363–390, 2007.
[20] M. C. Mozer, "Lessons from an adaptive home," in Smart Environments. New York: Wiley, 2005, pp. 271–294.
[21] M. M. Trivedi and S. Y. Cheng, "Holistic sensing and active displays for intelligent driver support systems," IEEE Computer, vol. 40, no. 5, pp. 60–68, May 2007.
Paolo Remagnino (M'98) received the Ph.D. degree from the University of Surrey, Surrey, U.K., in 1993.

He is a Reader with the Faculty of Computing, Information Systems and Mathematics, Kingston University, Kingston Upon Thames, U.K. He is a researcher in image and video understanding and one of the main promoters of intelligent and autonomous environments. He has published more than 85 papers in scientific journals and conferences and is the editor of four book collections, published with Springer, in the fields of video surveillance and ambient intelligence. His research interests include image processing, computer vision, machine learning, artificial intelligence, and robotics.

Dr. Remagnino is a member of nine journal editorial boards, including Machine Vision and Applications, Expert Systems, the International Journal of Robotics and Automation, Pattern Analysis and Applications, and the IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING.
Dorothy N. Monekosso (M'09) received the M.Sc. degree in satellite engineering and the Ph.D. degree in spacecraft autonomy from the University of Surrey, Surrey, U.K., in 1992 and 1999, respectively.

She is currently a Senior Scientist in the Department of Electronic Engineering at Cork Institute of Technology (CIT), Cork, Ireland, working in the area of intelligent systems and robotics. Previously, she was a Senior Lecturer at Kingston University and a Senior R&D Engineer at Surrey Satellite Technology, Ltd., in the U.K. She has published more than 40 papers in scientific journals and conferences and is the editor of a book collection, published with Springer, in the field of ambient intelligence.

Dr. Monekosso is a member of two journal editorial boards, the International Journal of Robotics and Automation (IJRA) and the International Journal of Advanced Intelligence Paradigms (IJAIP). She is a Guest Editor of this Special Issue on "Automation and Engineering for Ambient Intelligence" (2009) of the IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING.
Yoshinori Kuno (S'80–M'83) received the B.S., M.S., and Ph.D. degrees from the University of Tokyo, Tokyo, Japan, in 1977, 1979, and 1982, respectively, all in electrical and electronics engineering.

In 1982, he joined Toshiba Corporation. From 1987 to 1988, he was a Visiting Scientist at Carnegie Mellon University. In 1993, he moved to Osaka University as an Associate Professor in the Department of Computer-Controlled Mechanical Systems. Since 2000, he has been a Professor in the Department of Information and Computer Sciences, Saitama University. His research interests include computer vision, human interface, and human-robot interaction.
Mohan Manubhai Trivedi (F'09) was born on October 4, 1953, in Wardha, India. He received the B.E. (Honors) degree in electronics from the Birla Institute of Technology and Science, Pilani, India, in 1974, and the M.E. and Ph.D. degrees in electrical engineering from Utah State University, Logan, in 1976 and 1979, respectively.

He is a Professor of Electrical and Computer Engineering at the University of California at San Diego (UCSD). He has a broad range of research interests in computer vision, intelligent transportation systems, active safety and driver assistance systems, intelligent ("smart") environments, and human-machine interfaces. He established the Computer Vision and Robotics Research Laboratory and the Laboratory for Intelligent and Safe Automobiles (LISA), promoting multidisciplinary research at UCSD. Currently, he and his team are pursuing research in human movement, gesture, affect, activity, and interaction learning and recognition; human-centered intelligent driver support systems; robust multimodal systems; and active safety systems for automobiles. His team has played a key role in several major collaborative research initiatives, including an autonomous robotic team for Shinkansen track maintenance, a human-centered collision avoidance system, a panoramic vision system for incident detection and driver assistance, and a vision-based occupant protection system for "smart airbags." His team also designed and deployed the "Eagle Eyes" system on the U.S.-Mexico border in 2006. He serves regularly as a consultant to industry and government agencies in the U.S. and abroad. He has given over 45 keynote and plenary talks at major conferences. His research and interviews appear regularly in prominent local, national, and international broadcast and print media.

Prof. Trivedi is a Fellow of SPIE. He has received the Distinguished Alumnus Award from Utah State University and the Pioneer (Technical Activities) and Meritorious Service Awards from the IEEE Computer Society. He is a coauthor of a number of papers that have won Best Paper Awards. He serves as the General Chair of the IEEE Intelligent Vehicles Symposium (IV 2010). He is an Editor of the IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS and was the Editor-in-Chief of the Machine Vision and Applications journal for seven years. He served on a panel dealing with the legal and technology issues of video surveillance organized by the Constitution Project in Washington, D.C., as well as at the Computers, Freedom and Privacy Conference. He is serving as an Expert Panelist for the Strategic Highway Research Program of the Transportation Research Board of the National Academy of Sciences.
How-Lung Eng (M'03) received the B.Eng. and Ph.D. degrees from Nanyang Technological University, Singapore, in 1998 and 2002, respectively, both in electrical and electronic engineering.

He is currently a Senior Research Fellow with the Institute for Infocomm Research, Singapore. His research interests include real-time vision, pattern classification, and machine learning for abnormal event detection. He has made several PCT filings related to video surveillance applications and has published actively in these areas.

Dr. Eng received the 2000 Tan Kah Kee Young Inventors' Award (Silver, Open Section) for his Ph.D. work, the 2002 TEC Innovator Award, and the 2006 and 2008 IES Prestigious Engineering Awards for his work in the area of visual surveillance.