A decoupled three-layered architecture for service robotics in intelligent environments

Robin Rasch ([email protected]), Aljoscha Pörtner ([email protected]), Martin Hoffmann ([email protected]), Matthias König ([email protected])
University of Applied Sciences Bielefeld, Campus Minden, Artilleriestraße 9, 32427 Minden, Germany
ABSTRACT
To enable the usage of service robots in a smart environment, we have developed a decoupled three-layered architecture. The architecture is separated into three parts: a hardware abstraction layer, a domain layer, and a control and collaboration layer. The hardware abstraction layer comprises hardware drivers and elementary functions, such as inverse kinematics for manipulators. The domain layer wraps the hardware in autonomous agents, which offer features to the system, communicate with each other, and are responsible for local task planning. The third layer enables global task planning and allows the system to solve more complex tasks that are shared among single agents. This layer is based on an ontology to reduce the effort of implementing and incorporating new agents. The purpose of this work is to build a slim, reliable and effective architecture that can work with heterogeneous robot and smart-home hardware and dynamically solve tasks with the available resources.
CCS Concepts
• Computer systems organization → n-tier architectures

Keywords
Smart environments; Architecture; Agents; Ontology

EISE'16, November 16, 2016, Tokyo, Japan. © 2016 ACM. ISBN 978-1-4503-4555-2/16/11. DOI: http://dx.doi.org/10.1145/3008028.3008032

1. INTRODUCTION

There is a tendency towards connecting distributed, autonomous service robots with smart environments. These environments are composed of several sensors and actuators, which can be dynamically added to or removed from the running system. The same applies to service robotic devices. Furthermore, the working environment of service robots is close to humans. As a consequence, robot systems need dynamic, safe and precise adaptation to their environment. Examples of simple service robots are lawnmower robots, vacuum cleaner robots and carrier robots. These robots are usually developed for a single use case. Complex domestic tasks require collaboration between individual robots. If, for example, a cup falls to the ground, multiple robots are involved in cleaning it up. First, a sensor robot has to check whether the cup is broken. Afterwards, a robot with a gripper has to clean up the shards before a vacuum cleaner robot can vacuum the remains. This collaboration requires planning and depends on the type of system. Such systems are called multi-robot systems [1] and differ in distribution and autonomy. This work considers multi-robot systems interacting with smart environments. A starting point, with a statement of the relevance and a postulation of its challenges and research directions, is given by Cook [2]. An interesting case is the collaboration of a smart home and robots, where a robot needs data or actions from the smart home to solve a complex task. Instances of this case are Intelligent Spaces [3], Ubiquitous Robots [4], and PEIS ecologies [5]. In this case, a particular issue is the request for an action in the face of uncertainty about the action itself or about the executing actuator or sensor. For example, if a robot with a camera needs to find an object in the dark, the robot does not request that a specific light be turned on. The robot requests that more light is required at a specific point in the world frame. The most suitable light agent, e.g. a light bulb in the same room, then turns on and provides the light for the robot.

Approaches to this topic are the works by Huberman and Clearwater [6], who use a multi-agent system with a market-based approach for thermal regulation in office buildings, and by Lesser et al. [7] with their Simple Home Agent Resource Protocol (SHARP), which enables the interaction and coordination of agents within a simulated intelligent environment. A more recent article is the work by Iñigo-Blasco et al. [8]. They state that the intersections between multi-robot systems and multi-agent systems are large and that designing multi-robot systems based on agent technologies and architectures is a good choice in terms of reusability, scalability, flexibility, parallelism and robustness. Such systems are usually called multi-agent robot systems (MARS). The architecture of this work is mostly inspired by the work of Reinisch et al. [9], who propose ThinkHome, a project targeting an ontology-based multi-agent system for building automation. In this paper, we propose requirements for a software system that connects a multi-robot system with a smart environment. Furthermore, we present a slim three-layered architecture that connects heterogeneous hardware for distributed robot agents in a smart environment. The contribution of this work is the proposal of a scalable, distributed system that is able to accept and incorporate new, unknown agents using defined interfaces and ontology-based planning. Our implemented system shows how these agents work together to solve domestic problems that are close to real life. After this introduction, Section 2 proposes requirements for a software system that can be used for a multi-robot system in interaction with a smart environment. Section 3 presents the three-layered architecture, which connects conventional concepts of multi-robot systems with approaches from smart environments. In Section 4, we present a system built on the architecture in a real-life scenario. In Section 5, we test the functionality of the system against the requirements. Finally, conclusions are summarized in Section 6.
2. REQUIREMENTS OF A SYSTEM FOR SERVICE ROBOTS IN AN INTELLIGENT ENVIRONMENT
To develop a system for service robotics in intelligent environments, we need to define the requirements for such a system. The first requirement is the conformity of all components in the system. Every coherent computerized device that is able to perceive or manipulate the environment is an independent agent. This includes simple light bulbs with a digital switch, camera systems with visual and auditory sensors, and robots with grippers. Every agent has to be clearly identifiable in order to be reachable by other agents in the system. Furthermore, the identifiers of the agents are used to validate conflicting data from different agents. Every agent has a number of features with which it interacts with the environment, and each single feature is also clearly distinguishable. All agents have to share their knowledge with the system so that every other agent has access to these data. For example, a camera system tracks the position of an object; the pose of this object is saved in a collective knowledge base. If a robot wants to move this object, it can access the collective knowledge base to get its position. After moving the object, the robot can store the new position in the collective knowledge base.

The second requirement is a common language. For collaboration between single agents, the agents have to communicate in the same way. This applies to active and passive communication. Active communication takes place explicitly between two agents: features can be requested by one agent and executed by every other agent in the system. Passive communication happens indirectly through the knowledge base. All data must be organized in a way that is understandable and interpretable by every agent.

The third requirement is the decomposability of the system. Every agent in the system has to work in a distributed and autonomous way. If an agent fails in its action, due to a technical failure or the inaccessibility of a planned goal, the system has to evaluate this failure and renew the plan for the task. We differentiate between global and local plans, analogous to path planning in navigation. The global plan leads to the goal of the task, e.g. to transport one object to another object. The first step in the global plan is the detection of the objects; after the detection, the system has to manipulate the position of the first object. Every agent has a local plan to solve its individual task. The local plan is made by the agent itself and is only known to the executing agent. For example, if a robot gets the instruction to move an object, it decides itself how to pick up the object, plans its own path, and decides how to place the object.

Finally, the fourth requirement is the modular expandability of the system. The running system needs the ability to accept new agents as well as to remove current agents. The system has to recognize such changes and renew global plans if necessary, for example if an agent leaves the system while it is part of the global plan. New agents should get access to the knowledge base and should be able to understand its context and content.
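The shared knowledge base from the first requirement can be illustrated with a short sketch. The following Python snippet is a minimal, hypothetical encoding of the read/write cycle described above; the class and method names are our own and not part of the system:

    # Minimal sketch of a collective knowledge base. Assumption: agents are
    # identified by a unique id and object poses are stored as (x, y, theta).
    class CollectiveKnowledgeBase:
        def __init__(self):
            self._facts = {}  # key: object id, value: (pose, reporting agent id)

        def store_pose(self, object_id, pose, agent_id):
            # The reporting agent id is kept so conflicting data from
            # different agents can be validated later.
            self._facts[object_id] = (pose, agent_id)

        def get_pose(self, object_id):
            pose, _ = self._facts[object_id]
            return pose

    kb = CollectiveKnowledgeBase()
    kb.store_pose("cup", (1.0, 2.5, 0.0), agent_id="camera-1")  # camera tracks the cup
    pose = kb.get_pose("cup")                                   # robot reads the pose
    kb.store_pose("cup", (3.0, 0.5, 0.0), agent_id="robot-1")   # robot stores the new pose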
3. THREE-LAYERED ARCHITECTURE
To fulfill our requirements for a system for service robots in a smart environment, we applied a decoupled three-layered architecture. Figure 1 shows the three layers and the communication channels between them. The bottom layer is the hardware abstraction layer (HAL). This is the driver layer, which encapsulates control signals to the hardware in atomic functions. The second layer is the domain layer (DL). It represents the single agents with their features. This layer is closely linked to the hardware abstraction layer because it encapsulates the atomic functions in usable features. The local plan is calculated at this layer. Every agent consists of one hardware abstraction layer and one domain layer. The third layer is the control and collaboration layer (CCL). It enables global planning and hosts the collective knowledge base. It is decoupled from layers one and two. This layer possesses an abstraction for every agent that represents the underlying physical agent. Furthermore, this layer provides a human control interface for the definition of tasks. Communication between the layers is only possible vertically between two adjacent layers and horizontally within the domain layer and the control and collaboration layer.
3.1 Hardware Abstraction Layer
Figure 1: Block diagram of the three-layered architecture for a multi-robot system in a smart environment. Lines between single components show communication channels; dashed lines show communication between agents.

The hardware abstraction layer is a common hardware interface. It ranges from the digital switch of a light to the motor control unit of a manipulator. The layer offers an interface to the layer on top of it and reduces the different control signals to specific functions. For example, a mobile robot with four omni-directional wheels needs control signals for every wheel. These signals have to be calculated to move the mobile robot in a certain direction with a defined velocity. The HAL reduces these signals to one function, which only needs a velocity command to move the mobile robot; it calculates the motor voltages that are needed to drive the wheels. By abstracting the hardware in this way, the layer reduces the complexity for developers in higher layers: a robot engineer has to serve the interface and provide simple and clear functions that can be used outside of the HAL.
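To make this concrete, the following sketch shows what such a HAL function could look like for a base with four mecanum wheels, as on the KUKA youBot used later in this paper. The geometry constants and the function name are assumptions for this example; the equations are the standard inverse kinematics for mecanum platforms:

    # Sketch of a HAL function: one velocity command in, four wheel speeds out.
    # Assumed geometry: wheel radius R, half wheelbase LX, half track width LY.
    R, LX, LY = 0.05, 0.25, 0.15  # meters, illustrative values

    def set_base_velocity(vx, vy, wz):
        """Translate a body velocity command (m/s, m/s, rad/s) into
        angular velocities (rad/s) of the four mecanum wheels."""
        k = LX + LY
        w_front_left  = (vx - vy - k * wz) / R
        w_front_right = (vx + vy + k * wz) / R
        w_rear_left   = (vx + vy - k * wz) / R
        w_rear_right  = (vx - vy + k * wz) / R
        # A real driver would now convert these set points into motor voltages.
        return w_front_left, w_front_right, w_rear_left, w_rear_right

    # Drive forward at 0.2 m/s while rotating slowly.
    print(set_base_velocity(0.2, 0.0, 0.1))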
3.2 Domain Layer - RATS
The domain layer sits on top of the HAL. This layer abstracts the hardware into agents with features. It enables the communication between different agents as well as the planning and execution of local plans. Our domain layer is called RATS, which stands for Robot Action and Task System. The focus of this system is to organize a single agent with its several features. In our work, features are divided into actions and tasks. Actions are atomic features of an agent, for example moving to a pose, opening the gripper or switching on the light. Tasks are sequences of actions that a single agent can execute. They are more complex and need more planning. An example is picking up an object: the agent needs to move to the object, move the manipulator to the right position, open the gripper and grasp the object.
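As a minimal illustration of this distinction, a task can be encoded as an ordered sequence of atomic actions; the names below are ours, not the system's:

    # Hypothetical encoding: actions are atomic features, tasks are sequences.
    ACTIONS = {"moveTo", "moveArm", "openGripper", "closeGripper", "switchLight"}

    PICK_UP_OBJECT = ["moveTo", "moveArm", "openGripper", "closeGripper"]

    assert all(a in ACTIONS for a in PICK_UP_OBJECT)  # tasks decompose into actions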
3.2.1 RATS Member - The Agents
Every agent in RATS is called a RATS member. RATS unifies different distributed systems such as simple sensors (e.g. brightness sensors or switches), complex sensors (e.g. camera systems or a radar system), simple actuators (e.g. light bulbs) and complex actuators (e.g. robots). Some agents are combined systems; for example, a camera system that is mounted on a robot and the robot act as one agent. RATS denotes every agent r of a dynamic set R. r is defined as:

r = ⟨Id, F, Z, z, Zs⟩    (1)

Every agent r in RATS is represented by a unique id Id, a finite set of features F, a finite set of possible states Z, a current state z and a finite set of safe states Zs ⊆ Z. Id is necessary for direct communication and task allocation. The set of features F represents the actions the agent is capable of executing (see Section 3.2.2). Single features of the set can be disabled, caused by the current state of the robot or by other running tasks. The state set Z is necessary to get feedback about the system and its condition. Every state in this set is an abstract representation and characterizes the robot; the states are used by the agent for local planning. Possible states are, for example, busy, standby, arm-in-movement and so on. The agent is always in exactly one of these states. States can change while executing a feature, through detected changes in the environment or through temporal conditions. Some of the states, at least one, have to be safe states. Every safe state shall prevent damage to the environment or to the agent itself. These states can be used to control the agent directly; this procedure aims to ensure the safety of user and machine.

The system distinguishes between single-task agents and multi-task agents. Single-task agents are limited to executing one action at a time; multi-task agents are capable of executing multiple tasks. This multi-tasking needs more planning and is limited by the physical properties of the agents. For example, a mobile robot is capable of moving its manipulator and its mobile base at the same time, but not of driving in two different directions at once. Tasks that cannot be executed because of a blocked component are saved in a task queue and executed sequentially. Every task in the waiting queue can be canceled by timeout or by an explicit call. This enables two aspects of behavior control, impatience and acquiescence, as in the ALLIANCE system by Parker [10]. If an agent notices that it can execute a task, which is currently being executed by another agent, more effectively, the agent can assume the task; this behavior is called impatience. The behavior acquiescence describes that an agent releases a task because it notices its own failure. The exit strategy on canceling a task is predefined by the agent and has to end in a safe state.
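Definition (1) maps naturally onto a small data structure. The sketch below is our own illustrative encoding, not the actual RATS implementation; it also encodes the invariant that canceling a task must end in a safe state:

    from dataclasses import dataclass

    @dataclass
    class RatsMember:
        """Agent r = <Id, F, Z, z, Zs> from Equation (1)."""
        agent_id: str
        features: set      # F: feature ids the agent offers
        states: set        # Z: all possible states
        state: str         # z: current state
        safe_states: set   # Zs, a subset of Z

        def __post_init__(self):
            assert self.safe_states <= self.states, "Zs must be a subset of Z"
            assert self.state in self.states

        def cancel_task(self):
            # The exit strategy is agent-specific, but must end in a safe state.
            self.state = next(iter(self.safe_states))

    rose = RatsMember("rose", {"move", "pick", "place"},
                      {"standby", "busy", "arm-in-movement"},
                      state="busy", safe_states={"standby"})
    rose.cancel_task()
    assert rose.state in rose.safe_states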
3.2.2 Actions and Tasks
The notion of a feature of an agent is defined with regard to Lundh [11]. Every feature f of RATS follows the pattern:

f = ⟨Id, r, I, O, Φ, Pr, Po, Freq, Cost⟩    (2)

where Id is the unique identifier of the feature, r is the executing agent, I is the set of all input parameters and O is the set of all output parameters. Depending on the type of agent, I = ∅ holds for sensor agents and O = ∅ for actuator agents. Φ denotes the transition between input and output parameters. Furthermore, the execution of a feature depends on the state of the environment s, which includes the current states of all agents. The state is s ∈ S, where S represents all possible states of the environment; these states constitute the preconditions for all features. For example, a robot needs the position of an object to pick it up, and the camera system requires a defined light level to find an object. Pr ⊆ S is the set of initial states of the environment which enable the execution of a feature. The change of the environment is represented by the function Po : Pr × I → S. It describes, for example, the manipulation of an object; sensor features change the environment by adding or refreshing information in the knowledge base. The Freq property encloses information about the execution frequency of the feature. For some features the frequency is constant, for example laser localization; for other features the given frequency is an upper limit and can be reduced; and some features cannot be executed periodically at all. The last property is the cost value. It can be constant or dynamic, depending on the agent and the environmental state. This value is used by the global planner to allocate the cheapest feature sequence to solve a task: it is requested from an agent with a parameter set during the planning stage, and the agent calculates its costs. The control and collaboration layer then decides which agent will be used, with its own scheduling algorithm that can vary depending on the system.
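To make the role of the cost value concrete, the following sketch shows how a global planner could query candidate agents for the cost of a feature and allocate the cheapest offer. All names are illustrative, and the actual scheduling algorithm of the CCL may differ:

    # Feature f = <Id, r, I, O, Phi, Pr, Po, Freq, Cost> from Equation (2),
    # reduced here to the parts the planner needs for allocation.
    class FeatureOffer:
        def __init__(self, feature_id, agent_id, cost_fn):
            self.feature_id = feature_id
            self.agent_id = agent_id
            self.cost_fn = cost_fn  # dynamic cost: depends on the parameter set

    def allocate(offers, feature_id, params):
        """Pick the agent with the cheapest offer for the requested feature."""
        candidates = [o for o in offers if o.feature_id == feature_id]
        return min(candidates, key=lambda o: o.cost_fn(params))

    # Two mobile robots offer "move"; here the cost is the Manhattan distance
    # from each robot's assumed position to the requested goal.
    offers = [
        FeatureOffer("move", "rose", lambda p: abs(p["x"] - 1.0) + abs(p["y"] - 2.0)),
        FeatureOffer("move", "jim",  lambda p: abs(p["x"] - 5.0) + abs(p["y"] - 5.0)),
    ]
    best = allocate(offers, "move", {"x": 0.0, "y": 0.0})
    print(best.agent_id)  # -> "rose", the cheaper agent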
3.2.3 Communication
RATS needs a uniform communication model for calling features between agents. Every message c is denoted by:

c = ⟨Id, r, f, p, t⟩    (3)

The Id identifies the message in the waiting queue and is used by the caller to listen for return values or error messages. r is the invoked agent and f the requested feature. The agent field can be left blank to call all agents that have this feature to execute it. The p property is a key-value set and represents the parameters that are necessary for the execution. This property is dynamic and can consist of data, like a goal position, and pointers to the knowledge base, like an object id, to get information about position or appearance. The t property sets the timeout for the feature before it has to be canceled.

The communication is based on a bus topology. All agents listen to the same topic, on which new feature calls are sent. The calling agent saves the id of the message in order to find the result set. The listening agents evaluate the field r of the message. If the field corresponds to the Id of the agent or is blank, the agent checks whether the field f of the message matches a feature in its feature set. If one feature matches, the message is pushed to the execution queue. After the execution, the agent publishes the result values to the result topic. All agents listen to this result topic and compare their saved message ids with the ids of the result set. The same applies to failures and timeouts. The communication of the RATS system requires that the requesting agent knows the feature that should be executed. To decouple the single RATS members, we designed a third layer, the control and collaboration layer.
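A minimal sketch of this bus-based dispatch under the message pattern of Equation (3); the callback, the dictionary encoding and the field values are assumptions made for illustration:

    import queue

    execution_queue = queue.Queue()
    MY_ID, MY_FEATURES = "rose", {"move", "pick", "place"}

    def on_feature_topic(msg):
        """Every agent listens on the shared feature topic and filters
        messages c = <Id, r, f, p, t> addressed to it (or broadcast)."""
        if msg["r"] not in ("", MY_ID):   # a blank r addresses all capable agents
            return
        if msg["f"] not in MY_FEATURES:   # no matching feature offered
            return
        execution_queue.put(msg)          # executed sequentially later

    # A caller requests feature "move" and remembers the message Id so it can
    # match the reply on the result topic (and handle failures or timeouts).
    on_feature_topic({"Id": "msg-42", "r": "", "f": "move",
                      "p": {"x": 1.0, "y": 2.0}, "t": 30.0})
    result = {"Id": "msg-42", "status": "succeeded"}  # published on the result topic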
3.3 Control and Collaboration Layer
The control and collaboration layer (CCL) is called MAESTRO, which is short for Multi-Agent System for Service Robotics in Intelligent Environments. The term maestro is the Italian word for master and denotes the director of an orchestra, which describes the function of this layer: it controls, or rather guides, the several heterogeneous entities in the system; it orchestrates.
3.3.1 General Architecture
The design of MAESTRO is based on and mainly influenced by the underlying framework JADE [12]. JADE is designed as a highly distributed system consisting of platforms and containers, where every platform and container can be located on a different host. Every platform must have a main container, which is the administrative node of the system. Besides the classical agents, it contains two special agents: the Agent Management System (AMS), the only agent which can perform platform management actions, and the Directory Facilitator (DF), which provides the yellow pages service for finding agents and their offered services. Additionally, the main container manages the Local Agent Descriptor Table (LADT), the Global Agent Descriptor Table (GADT) and the Container Table (CT). The LADT is a local repository containing the agents living in the container and their current status. If a message should be delivered by the system, the main container examines the LADT for the recipient. If the recipient is not present, because the agent is located in another container, it browses the GADT. This is a global version of the LADT, but with an additional entry for the location of each agent. Every other container of the platform is registered with the main container, also holds an LADT as well as a cached version of the GADT, and can be the home of several classical agents. The communication between two containers within one platform is realized via the FIPA-compliant Internal Message Transport Protocol (IMTP). The communication between two or more platforms and other FIPA-compliant systems is done using the Message Transport Protocol (MTP).
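The lookup order for message delivery can be summarized in a few lines. The sketch below mimics the LADT-then-GADT resolution described above; it models the behavior and is not JADE's actual Java API:

    # Illustrative model of JADE's agent-descriptor lookup, not the real API.
    LADT = {"CCA": "running"}                      # local agents and their status
    GADT = {"CCA": ("MainContainer", "running"),   # all agents plus their location
            "Rose": ("AbstractionContainer1", "running")}

    def resolve(recipient):
        """Find where a message recipient lives: local table first, then global."""
        if recipient in LADT:
            return "local", LADT[recipient]
        if recipient in GADT:
            location, status = GADT[recipient]
            return location, status
        raise LookupError(f"unknown agent: {recipient}")

    print(resolve("Rose"))  # -> ('AbstractionContainer1', 'running')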
3.3.2 MAESTRO Agents
To enable the interaction with the DL, the RATS members are abstracted by generic agents, which can be added, deleted and configured at runtime. The configuration options are extensive, so that every generic agent can be injected with domain-specific configurations to fulfill its role as a robot agent as well as a light agent or door agent. Topically similar agents are pooled in special containers to allow a straightforward extension of the platform, e.g. with agents for home appliances. Every virtual agent is responsible for one or many physical entities in the environment. According to the specification of JADE, a MAESTRO platform is composed of one or many Abstraction Containers (see Figure 2). Every Abstraction Container registers with one Main Container. A single MAESTRO agent lives in one Abstraction Container, and every Abstraction Container can host zero or many agents.
3.3.3 Communication
Figure 2: Abstract architecture of the MAESTRO system.

Corresponding to the communication between RATS members, there is also communication between the abstract agents of MAESTRO. This communication is realized via an asynchronous message passing paradigm, which is provided by the JADE platform. Messaging in JADE, respectively MAESTRO, is realized according to the FIPA-ACL specification. The general message structure consists of an envelope with transport information and the payload, which includes the encoded message. The encoded message itself is separated into the message parameters and the proper content of the message. The following example shows an ACL message in the MAESTRO system for the purpose of requesting a task execution. In this case, the agent Rose requests the Central Coordination Agent (CCA) to initiate the contract net protocol to search for an object called GreenObject:

    (request
      :sender   (agent-identifier :name Rose@RAC)
      :receiver (agent-identifier :name CCA@MC)
      :ontology WorldModel
      :language MAESTRO-SL
      :protocol fipa-request
      :content  "(initCNP (:goal (canSee ?agent GreenObject)))")

In this example, the ontology refers to the underlying model, respectively representation, that should be used. The language describes the intentional semantics of the message and is derived from the FIPA SL Content Language Specification. Here, the receiver is requested to initialize a contract net protocol (CNP) with the specified goal. The protocol parameter refers to the protocol used; it is also named request but is independent of the communicative act, since it specifies the communication procedure, not the act, and is defined in the FIPA Request Interaction Protocol Specification [13]. Every MAESTRO agent is connected with its corresponding RATS member. The communication is implemented with a MDM connection. This connection is published by the MAESTRO system and subscribed to by the RATS member. The communication model used corresponds to the RATS communication model.
4. EXPERIMENTAL SCENARIO
In this section, we present our implemented scenario, which is shaped by a typical domestic use case. The goal of the scenario is the detection and localization of an object defined by its color. The detection is implemented with a camera system, which needs a bright environment to see everything. After localization, the system shall move the object to another object.
4.1 Setup
The scenario is located in a laboratory which models a typical installation for the envisaged use case. We use a single robot together with several sensors and actuators that are typical for a smart environment. The robot is a KUKA youBot with an omni-directional mobile base. The mounted gripper is based on the jamming of granular material [14]. The sensors and actuators of the smart environment, shown in Figure 3, are:
1. Three ceiling-mounted cameras. Their field of vision covers the whole scenario.
2. Six downlights. Every downlight can be controlled individually; together they allow the simulation of a typical light installation in a living space.
3. An illumination sensor to ensure that the cameras have enough light to detect and localize the object.
All cameras are Microsoft LifeCams with a 75 degree field of view and a resolution of 1080p. The frame rate is limited to 30 frames per second. The downlights are Philips Hue Phoenix spots, which can emit four different white shades. The luminosity sensor is a TAOS TSL2561 equipped with two photodiodes supporting a measurement range of 0.1 to 40,000 lux. It is especially designed for exposure control in cameras.
4.2 Scenario Description
The evaluation is built upon a typical scenario identified in an interdisciplinary preliminary discussion among researchers from the fields of health science, electrical engineering and computer science. The scenario is as follows:
The elderly person Bob is sitting in room A and wants a cup which is in room B. There is a robot, called Rose, which is also in room B. The doors of rooms A and B are open. The light in room B is off, while the light in room A is on. Both rooms are equipped with some sort of building automation system which can control the light as well as measure its intensity. Additionally, a camera system is installed in room A as well as in room B. Bob utters the instruction that he wants the cup next to him. The system tries to locate Bob as well as the cup. Due to the low illumination of room B, the cup cannot be recognized. The system analyzes the situation and tries to solve the problem by turning on the light in room B. Subsequently, it can identify the cup, which is located in room B. After the system has located the object, it tries to find a solution for the problem of how to bring the cup to Bob. The system identifies the robot as the appropriate agent to solve the task and instructs it to pick up the object, carry it across and place it next to Bob.

We abstracted the scenario to avoid mechanical problems regarding the process of gripping and to allow repeatability on the one hand as well as a great spatial extent on the other hand. Therefore, the cup as well as Bob are substituted by two colored plastic rectangles, one red and one green (see Figure 4). The position of both objects is limited to the evaluation area. This allows the system to operate without a human being located in the evaluation area. The corresponding task is as follows: bring the green respectively red object next to the other one. The procedures described in the original example remain the same.

Figure 3: Top view of the three-dimensional model of the evaluation area. The colored objects are the cameras and the colored surfaces are their corresponding areas.

Figure 4: Abstracted scenario task: bring the green respectively red object next to the opposite one.

4.3 Scenario Agents

Corresponding to the hardware, the system consists of several agents. The robot, the camera system and the brightness sensor are implemented as RATS members based on ROS. The light system is connected via ZigBee over an external gateway. All agents are configured in MAESTRO. The agents and their implemented features are as follows:
• RobotAgent: ⟨move, pick, place⟩
• CameraAgent: ⟨observeObject, stopObserve⟩
• IlluminationAgent: ⟨measureIllumination⟩
• LightAgent: ⟨adjustLight⟩
The robot has the abilities to move its mobile base, to pick up an object using an on-board camera system, and to place a carried object on the ground. The camera agent is able to observe an object periodically. The observation of an object is started with the feature observeObject and can be stopped with stopObserve. The agent is capable of observing multiple objects at a time. It writes the position data of the observed object into the knowledge base. The same applies to the IlluminationAgent, which measures the luminosity and saves it. This knowledge is used by the LightAgent to regulate the light if necessary.
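For illustration, the scenario agents and a possible global plan can be written down as a configuration mapping. The encoding below is our own hypothetical sketch, not the actual MAESTRO configuration format; the 72 lux threshold anticipates the camera threshold measured in Section 5:

    # Hypothetical encoding of the scenario agents, their features, and a
    # global plan for the task "move the green object to the red object".
    SCENARIO_AGENTS = {
        "RobotAgent":        ["move", "pick", "place"],
        "CameraAgent":       ["observeObject", "stopObserve"],
        "IlluminationAgent": ["measureIllumination"],
        "LightAgent":        ["adjustLight"],
    }

    GLOBAL_PLAN = [
        ("IlluminationAgent", "measureIllumination", {}),           # check precondition
        ("LightAgent",        "adjustLight",  {"min_lux": 72}),     # enable detection
        ("CameraAgent",       "observeObject", {"color": "green"}), # localize object
        ("RobotAgent",        "pick",  {"object": "GreenObject"}),
        ("RobotAgent",        "move",  {"goal": "RedObject"}),
        ("RobotAgent",        "place", {"object": "GreenObject"}),
    ]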
5. EXPERIMENTAL RESULTS
To evaluate the system, we measured its performance with respect to the goal. The evaluation is divided into two parts: the interaction with the building and the manipulation of the environment by the robots.
5.1 Interaction with the Smart Environment
This part discusses in particular the optimization of the environmental parameters with respect to object detection. The object detection itself depends on the luminance of the environment: in a very dark or very bright environment, the detection process can be disturbed by the different colorization of the objects under different lighting conditions. In Figures 5 and 6, we see that the detection of the green object is prevented by the lighting situation, where we have a minimal luminance of approx. 16 lux and a maximal luminance of approx. 19 lux.
Figure 5: Image from the camera without separate illumination. The system is not able to detect the red object. (This is not the real brightness of the environment; the image was brightened afterwards for presentation.)

Figure 6: Scatter and line plot of the object detection rate and the corresponding lux values for a problematic lighting situation.

In the previously discussed use case, the system detects such situations, and the deliberation process of the IlluminationAgent yields a plan to solve the problem by increasing the minimal luminance. In Figures 7 and 8, we see that the illumination of the room increases, in succession of the adjustment, from approx. 40 lux to approx. 75 lux. In this situation, the critical threshold of the camera is approx. 72 lux, which is achieved by the adjustment of the room lighting. The object detection rate of the green object is thereby stabilized.

Figure 7: Image with activated illumination. The system is able to detect both objects.

Figure 8: Scatter and line plot of the object detection rate and the corresponding lux values for the desired lighting situation.
5.2 Manipulation of the Environment by the Robots
This part of the system evaluation discusses the performance of the robotic agents, respectively their physical counterparts. For this purpose, an iterative evaluation of 20 separate runs was performed. The first 10 runs were executed with the target of moving the green object to the red object, and the other half were executed with the target of moving the red object to the green object. Figure 9 shows such a run during execution, and Figure 10 shows the corresponding evaluation over the 20 iterations. Here, we can see that the average execution time is approx. 42 s with little deviation. Additionally, the execution of the task is very reliable, with individual exceptions in the second half, where the goal is defined as MoveObject(RedObject).

Figure 9: Image of the execution of the task Move object.

Figure 10: Evaluation of the task execution for Move object. The green and red bars represent the different start/goal combinations, the blue line the average execution time, the grey line the execution time per iteration, and the dark grey bars indicate iterations with failures.

The failures can also be observed in the heatmaps of the robot's location within the iterative evaluation (see Figures 11 and 12). These heatmaps show the movement paths of the mobile robot summed over all iterations. Whereas the path of the robot in Figure 11 is absolutely consistent, the path of the robot in the second half of the evaluation indicates problems in the movement of the robot (see Figure 12). The wrong path of the robot is indicated by the red color of the bars and shows that the robot tried to move to an occupied position.
Figure 11: Bivariate histogram of the location data recorded in the first half of the performance analysis.
Figure 12: Bivariate histogram of the location data recorded in the second half of the performance analysis.
This behavior can be explained by the state of the environment in such situations and by the object detection algorithm in general. The object detection algorithm is based on a color-based matching process in our illustrative scene: every object in the field of view with the right color is a potential candidate for selection. In the case of both failures, a change in the environmental situation triggered the selection of the wrong object. This resulted from a problematic brightness situation in which the illumination of the object was too high. Such a situation cannot yet be handled by the system. A possible solution to this problem would be the incorporation of the shutter system of the building, which is planned but not yet implemented.
6. CONCLUSION
In this paper, we addressed the development of an architecture for a multi-robot system in a smart environment. We presented a three-layered architecture which unifies heterogeneous hardware. The architecture makes it possible to add and remove agents at run time. A further advantage we noticed is the decreased development time for new agents: it is not necessary to change existing agents or to publish the function data and addresses of new agents. The only changes necessary are to the planner, and the uniform communication and the usage of an ontology reduce the changes to the planner as well. The evaluation of the whole system reveals that the usage of the architecture increases the system's performance in terms of flexibility and reliability. The envisaged scenario and its demands on the system can be fulfilled with the currently available components and agents.
ACKNOWLEDGMENT
This work is financially supported by the Federal Ministry of Education and Research (BMBF), funding number 03FH006PX5.
7. REFERENCES
[1] Pedro U. Lima and Luis M. Custódio. Multi-Robot Systems, pages 1–64. Springer Berlin Heidelberg, 2005.
[2] Diane J. Cook. Multi-agent Smart Environments. J. Ambient Intell. Smart Environ., 1(1):51–55, January 2009.
[3] J.H. Lee and H. Hashimoto. Intelligent space - concept and contents. Advanced Robotics, 16(3):265–280, 2002.
[4] J.H. Kim, Y.D. Kim, and K.H. Lee. The third generation of robotics: Ubiquitous robot. In Proc. of the 2nd International Conference on Autonomous Robots and Agents, December 2004.
[5] A. Saffiotti, M. Broxvall, M. Gritti, K. LeBlanc, R. Lundh, J. Rashid, B.S. Seo, and Y.J. Cho. The PEIS-ecology project: Vision and results. In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2008.
[6] B. Huberman and S. H. Clearwater. A Multi-Agent System for Controlling Building Environments. In Proceedings of the 1st International Conference on Multiagent Systems (ICMAS), pages 171–176, 1995.
[7] Victor Lesser, Michael Atighetchi, Brett Benyo, Bryan Horling, Régis Vincent, Anita Raja, Thomas Wagner, Ping Xuan, and Shelley XQ. Zhang. A Multi-Agent System for Intelligent Environment Control. In Proceedings of the 3rd International Conference on Autonomous Agents, pages 291–298, New York, USA, 1998. Association for Computing Machinery.
[8] Pablo Iñigo-Blasco, Fernando Diaz-del-Rio, Ma Carmen Romero-Ternero, Daniel Cagigas-Muñiz, and Saturnino Vicente-Diaz. Robotics Software Frameworks for Multi-agent Robotic Systems Development. Robot. Auton. Syst., 60(6):803–821, 2012.
[9] C. Reinisch, M. J. Kofler, and W. Kastner. ThinkHome: A smart home as digital ecosystem. In Proceedings of the 4th IEEE International Conference on Digital Ecosystems and Technologies, pages 256–261, April 2010.
[10] Lynne E. Parker. Evaluating success in autonomous multi-robot teams: experiences from ALLIANCE architecture implementations. Journal of Experimental & Theoretical Artificial Intelligence, 13(2):95–98, 2001.
[11] Robert Lundh. Plan-Based Configuration of a Group of Robots. PhD thesis, Örebro University, 2006.
[12] Fabio Bellifemine, Agostino Poggi, and Giovanni Rimassa. JADE - A FIPA-compliant agent framework. In Proceedings of the 4th International Conference and Exhibition on the Practical Application of Intelligent Agents and Multi-agents, pages 97–108, London, 1999.
[13] FIPA Request Interaction Protocol Specification, 2002.
[14] Eric Brown, Nicholas Rodenberg, John Amend, Annan Mozeika, Erik Steltz, Mitchell R. Zakin, Hod Lipson, and Heinrich M. Jaeger. Universal robotic gripper based on the jamming of granular material. Proceedings of the National Academy of Sciences, 107(44):18809–18814, 2010.