A Framework for Cooperative Multi-Robot Surveillance Tasks Isai Roman-Ballesteros and Carlos F. Pfeiffer Computer Science Department, ITESM, Campus Monterrey Monterrey, NL Mexico, C.P. 64849 Email:
[email protected],
[email protected]
Abstract— In recent years, security incidents such as terrorist attacks have opened new research areas focused on surveillance, and the automation of this task using robotic platforms and fixed sensors is an important line of research. The use of a team of robots and sensors has many advantages over a single robot platform. In this paper we propose a framework for developing cooperative multi-robot surveillance systems that can support the construction of surveillance platforms following this approach. A formal definition of the surveillance problem is presented, followed by the sub-problem decomposition of surveillance for cooperating agents. Finally, a case study of a surveillance problem is presented to show the application of the framework.
I. INTRODUCTION

There is no universal definition of surveillance; it depends entirely on the scope that needs to be covered. Webster's Dictionary defines surveillance as a close watch kept over someone or something. In recent years, security incidents such as terrorist attacks have opened new research areas focused on these security issues. Growing attention is being given to applications where robots deal with surveillance tasks. The literature agrees in naming this new approach Robotic Surveillance [1], and an issue still under research is a formal definition of surveillance [1]. Existing definitions can be seen as very general, and this is critical if the goal is to automate the task. Robotic surveillance basically refers to the automation of surveillance tasks, and there are several approaches around it: some oriented to the formalization of the problem, like [1], [2], [3], and some oriented to real implementations of surveillance, like [4], [5], [6] and [7]. In [2], surveillance is defined as a close watch focused on relevant events covered by the scope of the surveillance task. We agree with this approach, but note that these relevant events must be formally defined. Another important issue is that the surveillance task includes several activities, and different kinds of sub-tasks can be distinguished. This classification helps define the problem precisely, because not all surveillance sub-tasks will be included in a given instance of the surveillance problem, and it supports both the automation of surveillance and the use of cooperative approaches.
Cooperative robotics [8] suggests a new approach to the surveillance problem, because a platform with several robots dealing with these tasks has clear advantages over a platform with just one robot, such as the coverage area over a specified time, or several cheap robots instead of one expensive one. However, the use of several robots raises further issues of great importance that must be addressed before cooperative surveillance platforms can be successfully implemented. All these aspects are explained further below. The paper is organized as follows: Section 2 proposes a formal definition for surveillance events, agents and environment; Section 3 presents the problem of surveillance with multiple robotic agents and proposes a way of cooperative problem solving for surveillance tasks; and Section 4 presents a case study where the proposed framework is applied.

II. SURVEILLANCE FORMAL DEFINITION

A. Surveillance Events

Basically, the surveillance task environment is a model of different levels of surveillance events and of the heterogeneous group of sensors that detect the relevant events, according to the following hierarchy. We define the following kinds of events: Low-level events, High-level events, Action events and Diagnose events.
Fig. 1. Sensor as an input/output system; x refers to an energy input and y is the measured quantity output.
Sensors play an important role in the surveillance task environment in the sense that they generate the surveillance low-level events. A sensor can be defined as an input transducer. A transducer is a device that receives a certain kind of energy from a system as an input, measures it, and outputs a signal (normally an electric signal) specific to the measured quantity [9]. In other words, a sensor can be thought of as an input/output system where x is the input and y is the output or measured quantity, expressed as a function s(x) that will vary depending on the sensor measure
Proceedings of the Electronics, Robotics and Automotive Mechanics Conference (CERMA'06) 0-7695-2569-5/06 $20.00 © 2006
(see figure 1). We define the Sensor Result Set as the set of all the relevant measures. Formally:

S = {s_xi | s_xi = s_i(x)}
(1)
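To make equation (1) concrete, a minimal sketch (our own illustration, not from the paper) treats each sensor as a function s_i(x) from an input quantity to a measured output, and builds the Sensor Result Set from the current inputs:

```python
# Minimal sketch of the sensor model of equation (1): each sensor is a
# function s_i(x) mapping an input quantity x to a measured output.
# Names and thresholds here are illustrative, not from the paper.

def make_threshold_sensor(threshold):
    """Binary sensor: outputs 1 when the input quantity exceeds a threshold."""
    return lambda x: 1 if x >= threshold else 0

sensors = {
    1: make_threshold_sensor(50.0),   # e.g. a temperature alarm at 50 C
    2: lambda x: round(x, 1),         # e.g. an analog sensor reporting x directly
}

def result_set(inputs):
    """S = {s_i(x)} for the current inputs, keyed by sensor index i."""
    return {i: s(inputs[i]) for i, s in sensors.items()}

S = result_set({1: 63.0, 2: 21.37})
print(S)  # {1: 1, 2: 21.4}
```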
In equation (1), x is the measured quantity and i is the sensor. A Low-Level Event occurs when a sensor fulfills one of two specific cases: the sensor gives a measurement defined either by a specific value, a continuous interval, or a binary signal; or, in the second case, a historical tendency of the sensor over time t. We define the set of all Low-Level Events in the environment as:

L = {l | l is a Low-Level Event}
The into-mapping ls : S → L is defined as s → l, where s denotes the sensor measure and l the low-level event. This mapping entails the relation between a specific sensor in the environment and the Low-Level Event triggered according to the sensor measurement. A High-Level Event or HLE refers to a set of Low-Level Events triggering an alarm state condition. The set of all High-Level Events, H, is defined as:

H = {h | h is a HLE}
(3)
In this case, the into-mapping hl : L → H, defined as l → h, defines how a low-level event l is related to a high-level event h. One important role of high-level events is to decide whether or not an Alarm state is raised in the system; an Alarm state is a condition in the system that announces or alerts imminent danger. Some high-level events can be managed by the agents of the robotic system. High-level events can be mapped to events known as Action Events or AE. An Action Event is composed of one or more High-Level Events. We define the set of all Action Events as:

A = {a | a is an AE}
(4)
The into-mapping ah : H → A, defined as h → a, denotes how high-level events h are related to action events a. Action events return a defined state of the system that could stop alarm state conditions raised by intruders in the surveillance environment. Diagnose Events or DE refer to the decision-taking process after High-Level Events or alarms are detected or Action Events have failed. The set of all Diagnose Events is defined as:

D = {d | d is a Diagnose Event}  (5)
Fig. 2. Surveillance Hierarchical-Event Definition.
(da ◦ ah ◦ hl ◦ ls) : S → D defined by s → da(ah(hl(ls(s))))

(6)
Diagnose events are not easily handled by a robotic agent and would normally be managed directly by an expert, either a human or an expert system. If this level is completely delegated to humans, the surveillance event definition can be simplified from definition (6) to

(ah ◦ hl ◦ ls) : S → A defined by s → ah(hl(ls(s)))  (7)

taking diagnose events out of the model and leaving the surveillance process defined with three event levels.

B. Surveillance Agent Formal Definition

The Surveillance Agent is a central piece of a surveillance system. Roughly speaking, surveillance agents are primarily responsible for the surveillance tasks in the environment and for the decision-taking processes in cases where special attention is needed. Eventually, they handle the entire surveillance task on their own in the environment, and if a failure case arises, they delegate the process to the expert. The importance of the agent in a surveillance system is therefore central, and a precise definition is required. The formalization of surveillance agents is based on [10]. First, the environment E is a finite set of discrete, instantaneous states:

E = {e1, e2, ..., ei} where i ∈ N
(8)
Then, this robotic agent has a set of different actions that in some way change the environment, for example move, activate sensor, store measurements, etc. The set of all actions of the robotic agent, AC, is defined as:
The into-mapping da : A → D, defined as a → d, denotes how action events a are related to diagnose events d. These four event levels constitute the Surveillance Hierarchical Event Definition shown in Figure 2. The definitions above are joined in the composite mapping of equation (6).
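As an illustrative sketch (function names follow the paper's ls, hl, ah and da mappings; the event contents are our own), the hierarchy can be expressed as composable functions, with the full pipeline matching equation (6):

```python
# Hedged sketch: the mappings ls, hl, ah, da of the event hierarchy as
# Python functions, composed as in equation (6). Event values are illustrative.

def ls(sensor_measure):          # S -> L : sensor measure to low-level event
    return f"low[{sensor_measure}]"

def hl(low_event):               # L -> H : low-level event to high-level event
    return f"high[{low_event}]"

def ah(high_event):              # H -> A : high-level event to action event
    return f"action[{high_event}]"

def da(action_event):            # A -> D : action event to diagnose event
    return f"diagnose[{action_event}]"

def compose(*fs):
    """Right-to-left composition, so compose(da, ah, hl, ls)(s) = da(ah(hl(ls(s))))."""
    def composed(x):
        for f in reversed(fs):
            x = f(x)
        return x
    return composed

surveillance = compose(da, ah, hl, ls)   # equation (6): S -> D
print(surveillance("smoke=1"))           # diagnose[action[high[low[smoke=1]]]]
```

Dropping da from the composition yields the three-level simplification in which diagnosis is delegated to a human expert.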
AC = {a | a is an action}
(9)
This agent interacts with the environment starting in some state; the agent chooses a surveillance action to perform in the environment, after which the environment can change into many different states, but only one will result. Finally,
the loop closes and the surveillance agent chooses a new action to perform given the new state of the environment. This process is known as a run and is defined as a sequence of interleaved environment states and actions [10]. Let R be the set of all such runs that end with an environment state. This is the way in which an agent behaves in an environment. A surveillance agent can be defined, following the agent definition in [10], as a general mapping RAG : R → AC. With this definition the surveillance agent can be described as a black box with sensory inputs and a set of actions. Depending on the sub-problem decomposition, these sensory inputs and actions will vary, and the group of surveillance agents could range from homogeneous to heterogeneous.
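The run-based agent definition can be sketched as a simple interaction loop. Everything below is a toy illustration of the loop (the environment transition and the patrol-style policy are our own assumptions, not the paper's implementation):

```python
# Sketch of a "run": interleaved environment states and agent actions,
# following the agent mapping R_AG : R -> AC. The environment and policy
# are illustrative stand-ins.

def environment_step(state, action):
    """Deterministic toy environment: an action extends the state history."""
    return state + (action,)

def agent_policy(run):
    """R_AG: choose the next action from the run so far (here: cycle through actions)."""
    actions = ("move", "activate_sensor", "store_measurements")
    return actions[len(run) % len(actions)]

def execute(initial_state, steps):
    """Build a run: e0, a0, e1, a1, ..., always ending with an environment state."""
    run = [initial_state]
    state = initial_state
    for _ in range(steps):
        action = agent_policy(run)
        run.append(action)
        state = environment_step(state, action)
        run.append(state)
    return run

run = execute(("e0",), 2)
print(run)
```

The returned list alternates states and actions and ends with a state, matching the definition of a run in R.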
C. Surveillance Agent Architecture

In [11], an agent architecture based on cooperative agents is proposed. The purpose of this architecture is to cooperate efficiently by supplying the agent with a goal and, based on the actions the agent can take, generating plans to achieve that goal. Based on this definition, some capabilities are required: 1) Agent Sensors and actuators: the agent needs sensing capabilities to capture the relevant events for the surveillance process, and corrective actions for the events the surveillance agents can handle themselves. 2) Communication: support for the communication needs that arise when cooperative agents deal with surveillance tasks. 3) World Model: the surveillance hierarchical event model is the main part, including the problem decomposition, to support the agent's actions and accurate planning of the surveillance tasks.

D. Environment Model

The surveillance task must be done in a specific environment, and this environment limits the surveillance task. It is therefore very important to define a model of the environment to work on. Consider a Euclidean space W, called the workspace, represented as R^N with N = 4, where the first three dimensions represent space and the fourth is time t. The geometry of the space is accurately known, so uncertainty is minimal and can be disregarded. With N = 3, the surveillance process is simplified if no flying platforms are used in the team; the surveillance space can then be represented as a plane (see figure 3). We are aware that certain sensors have ranges over a three-dimensional space, but this space can be projected onto the plane without important loss of information for the surveillance tasks. A continuous Euclidean space could be used, but a discretization of the space, similar to the Potential Gradient Method used in the Path Planning problem [12], can help in the path planning of mobile robotic agents.

The most important aspect to take into account for this task is the sensor range of the mobile robotic agents. If we assume sensor heterogeneity, different types of sensors have different sensor ranges. So, one important point is environment segmentation according to the sensor range. Massios in [2] defines a process known as Environment Graph Construction, where the environment is segmented according to its specific topological definition. A better criterion for segmentation is the sensor range, because it describes the full area a sensor can cover at a specific instant of the surveillance process.
Fig. 3. A two-dimensional environment segmentation example.
Dealing with several surveillance agents at the same time requires considerations focused on the location of each agent. It is important to remember that two surveillance agents cannot be at the same location at the same time, and that their accessible locations are shared by several agents that could select the same location. The problem is easily handled by correct path planning in the space. This issue is mentioned in [8], and some solutions to it are proposed in [12].

III. COOPERATION IN ROBOTIC SURVEILLANCE

The definition of surveillance hierarchical events gives the surveillance problem a different perspective, in the sense that the granularity of the problem is refined. In [2] the events are defined as a single kind, and this can work well for many instances of the surveillance problem where a single agent does the job, but issues arise when several surveillance agents interact in the system to get the task done. Some approaches in this area use robot teams, like [4] and [7]. However, past research only deals with the lower-level tasks of the surveillance problem, like robot behaviors, distributed exploration or navigation [5], or robotic architectures, as in [6].

A. Decomposition of the Surveillance Problem

Our approach is based on the surveillance hierarchical event definition, considering that many instances of the surveillance problem clearly cannot be handled by a single robotic agent. A clear example of this situation is the fact that a robotic agent cannot be in two places at the same time; in [2], concepts like location and time are treated as central to surveillance tasks. In such instances, surveillance tasks must be distributed among several surveillance agents cooperating to solve the problem. The approach of Smith and Davis in [13] suggests a three-stage process (Problem Decomposition, Sub-Problem Solution and Solution Synthesis) to solve a problem with cooperation.
Our approach is the decomposition of the surveillance problem for cooperative solving using the hierarchical surveillance event definition. This definition provides an adequate level for problem decomposition, because sub-problems of the original surveillance task can be viewed through the different events. The problem decomposition is based on different criteria, such as surveillance strategies, agent capabilities, environment limitations, agent location or time. The delegation process to a member of the robotic team takes into account the specific robotic agent's capabilities, and even two different members of the robotic team may be responsible for the same sub-problem, to provide redundancy of functions. The first step is the definition of different sets, denoted Pi, each containing one or more events defined in the system. Formally,

Pi = {a | a ∈ L ∪ H ∪ A} where i = 1, 2, 3, ...
(10)
Figure 4 shows an arbitrary problem decomposition to illustrate the idea. There are four different sub-problems in the figure, each marked with a different type of dashed line. Each sub-problem is delegated to a specific agent that will eventually solve it. The sub-problems group different events, both in quantity and in type. For example, agent4 is responsible for event a2, while agent2 takes care of three different events.
Fig. 4. Composite mapping of the Hierarchical Event Definition of Surveillance, showing an arbitrary problem decomposition. Dashed lines represent the sub-problem decomposition.
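The decomposition idea above can be sketched as assigning event subsets P_i to agents. The particular grouping below is arbitrary, mirroring the arbitrary example in figure 4 (agent4 handling a single action event, agent2 handling three events):

```python
# Sketch of problem decomposition: each sub-problem P_i groups events drawn
# from L ∪ H ∪ A and is delegated to an agent. The grouping is arbitrary,
# as in the figure; redundancy (two agents sharing a sub-problem) is allowed.

subproblems = {
    "P1": {"l1", "l2", "h1"},     # mixed low- and high-level events
    "P2": {"h2", "a1", "a3"},     # three events of mixed type
    "P3": {"l3"},
    "P4": {"a2"},                 # a single action event
}

assignment = {"agent1": "P1", "agent2": "P2", "agent3": "P3", "agent4": "P4"}

def events_for(agent):
    """Events an agent is responsible for under the current assignment."""
    return subproblems[assignment[agent]]

print(events_for("agent4"))        # {'a2'}
print(len(events_for("agent2")))   # 3
```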
IV. CASE STUDY: SURVEILLANCE SYSTEM FOR AUTOMATIC INDOOR FIRE DETECTION

Fire detection is a critical problem in many different areas. In order to reduce fire risks, many organizations have protection programs focused on educating people about it. But there are situations where automatic fire detection by a system is required. Such a system deals with the task of identifying a fire emergency and taking actions on the situation, like alerting the key people
concerning this issue. In [14] some processes are identified as critical in fire detection: 1) The system identifies the fire emergency by either manual or automatic methods. 2) The system alerts the humans present in the area by means of an alarm condition. 3) The system transmits the alarm to the fire department, police or another emergency organization. 4) Other important actions may be taken, like electrical shutdown or the initiation of automatic suppression systems. Fire detection is thus a critical surveillance task, normally focused on indoor environments like buildings or office basements. Several works on intelligent buildings, like [15], [16] and [17], use the same arguments to single out the fire detection system as one of the most important in this kind of design. There are different approaches to this problem; most of them are centralized systems focused on detection by different kinds of sensors (see [18], [19]). We think, however, that robotic surveillance will bring better results in this area, so this problem is analyzed next in terms of the proposed framework.

A. Fire Events Definition

In this section the fire detection problem is addressed according to the surveillance event definition proposed before. As mentioned, one of the key parts of fire detection is correct identification by manual or automatic methods. There are several works focused on automatic fire detection by sensors, so the surveillance low-level events can be defined according to the sensors. According to [14], there are three important kinds of sensors used for fire detection: smoke and combustion gas sensors, temperature sensors and flame sensors. Each of these sensors must therefore be included in the definition. Because of the environment definition (see the Environment Model section), at least one of these sensors per room is required; however, flame sensors are normally expensive, and with a mobile platform only one is required.
Another important type of sensor is the camera, but cameras are also expensive, so only one is required. The complete sensor definition is shown in Table I. The specifications and relevant measures may vary depending on the type of sensor selected. The relevant measures defined in Table I define the low-level events, whose main function is to identify possible emergency situations. For example, low-level event l1 is triggered when sensor s1 registers a measure of smoke presence, while l5 is triggered by an s5 measurement of high temperature in the area. Table II shows the corresponding low-level events. The sub-index on the sensor, representing the number of the relevant measure, is kept just to follow the proposed definition, since each sensor in this case is mapped to just one measure. High-level events depend on the low-level events triggered and basically represent alarm conditions. Table III shows some high-level events that could be found in the fire detection problem, like the presence of fire in some room if all
TABLE I. SENSOR DEFINITION USED IN THE FIRE DETECTION PROBLEM.

Sensor type      Specifications                         Sensor ID   Location   Relevant Measure
Smoke Sensor     Photoelectric Detector                 s1          Room 1     Smoke Presence
                                                        s2          Room 2     Smoke Presence
                                                        s3          Room 3     Smoke Presence
                                                        s4          Corridor   Smoke Presence
Thermal Sensor   MCP9800 Thermal Sensor PICtail         s5          Room 1     51-70 C
                                                        s6          Room 2     51-70 C
                                                        s7          Room 3     51-70 C
                                                        s8          Corridor   51-70 C
Flame Sensor     FL3110 UV/IR Unitized Flame Detector   s9          -          4000-7700 Å
Camera           Surveillance CCD Camera                s10         -          Fire Recognition

TABLE IV. ACTION EVENTS, ACTIONS TAKEN AND MAPPED HIGH-LEVEL EVENTS.

Action Event   Actions taken                Mapped High-Level Events
a1             Move Agent1 to Room 1        {h5}
a2             Move Agent1 to Room 2        {h6}
a3             Move Agent1 to Room 3        {h7}
a4             Move Agent1 to Corridor      {h8}
a5             Agent4 raises alarm state    {h9}
a6             Agent4 raises alarm state    {h10}
a7             Agent4 raises alarm state    {h2}
a8             Agent4 raises alarm state    {h1}
a9             Agent4 raises alarm state    {h3}
a10            Agent4 raises alarm state    {h4}
a11            Agent3 turns on sprinklers   {h1, h2, h3, h4}
actions could be taken.

TABLE II. LOW-LEVEL EVENTS AND THEIR RELEVANT MEASURES.

Low-Level Event   Relevant Measure ID   Relevant Measure
l1                s1,1                  Smoke Presence
l2                s2,1                  Smoke Presence
l3                s3,1                  Smoke Presence
l4                s4,1                  Smoke Presence
l5                s5,1                  51-70 C
l6                s6,1                  51-70 C
l7                s7,1                  51-70 C
l8                s8,1                  51-70 C
l9                s9,1                  4000-7700 Å
l10               s10,1                 Fire Recognition
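As a sketch of how the low-level events of Table II could be triggered from raw sensor readings (the predicates and readings below are illustrative stand-ins for the relevant measures):

```python
# Sketch: trigger Table II's low-level events from raw sensor readings.
# Predicates are illustrative interpretations of the relevant measures.

def smoke_present(reading):
    return reading is True

def high_temperature(celsius):
    return 51 <= celsius <= 70   # the 51-70 C band from Table I

# sensor id -> (low-level event, predicate on the reading)
low_level_map = {
    "s1": ("l1", smoke_present), "s2": ("l2", smoke_present),
    "s3": ("l3", smoke_present), "s4": ("l4", smoke_present),
    "s5": ("l5", high_temperature), "s6": ("l6", high_temperature),
    "s7": ("l7", high_temperature), "s8": ("l8", high_temperature),
}

def triggered_events(readings):
    """Return the low-level events whose relevant measure is observed."""
    return {event for sid, (event, pred) in low_level_map.items()
            if sid in readings and pred(readings[sid])}

events = triggered_events({"s1": True, "s5": 63.0, "s6": 20.0})
print(sorted(events))  # ['l1', 'l5']
```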
B. Environment Model

The environment of an indoor fire detection system could be any building, but in this case it is defined as the map shown in figure 5. The floor is composed of three different office rooms and the corridor connecting them. As mentioned in the previous section, each room has different fixed sensors to deal with fire detection. The segmentation of the space takes the sensor capabilities into account. All the sensors used in this case have wide ranges, so segmentation by rooms is a good level of segmentation, because each room has two different sensors. The segmentation for the floor is shown in figure 5.
the sensors have triggered low-level events (see h1, h2, h3 and h4), or events that could indicate fire, like smoke in a room, where fire cannot be completely assured.

TABLE III. HIGH-LEVEL EVENTS, MEANING AND MAPPED LOW-LEVEL EVENTS.

High-Level Event   Meaning                      Low-Level Events
h1                 Fire on Room 1               {l1, l5, l9, l10}
h2                 Fire on Room 2               {l2, l6, l9, l10}
h3                 Fire on Room 3               {l3, l7, l9, l10}
h4                 Fire on Corridor             {l4, l8, l9, l10}
h5                 Smoke Presence on Room 1     {l1}
h6                 Smoke Presence on Room 2     {l2}
h7                 Smoke Presence on Room 3     {l3}
h8                 Smoke Presence on Corridor   {l4}
h9                 Fire on Room 1               {l1, l9, l10}
h10                Fire on Room 2               {l2, l9, l10}
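The mapping in Table III can be sketched as a subset test: a high-level event is raised when all of its mapped low-level events have been triggered. Treating the mapping as a conjunction is our simplifying assumption; the paper lists which low-level events map to each HLE but does not fix the combination rule:

```python
# Sketch: raise a high-level event when all of its mapped low-level events
# are present. The conjunction semantics is an assumption on our part.

high_level_map = {
    "h1": {"l1", "l5", "l9", "l10"},   # Fire on Room 1 (all sensors agree)
    "h5": {"l1"},                      # Smoke Presence on Room 1
    "h9": {"l1", "l9", "l10"},         # Fire on Room 1 (mobile sensors confirm)
}

def raised_high_level(triggered):
    """High-level events whose mapped low-level events are all triggered."""
    return {h for h, required in high_level_map.items()
            if required <= triggered}          # subset test

print(sorted(raised_high_level({"l1", "l9", "l10"})))  # ['h5', 'h9']
print(sorted(raised_high_level({"l1"})))               # ['h5']
```

Note how smoke alone raises only the tentative h5, while camera and flame confirmation (l9, l10) additionally raises h9, matching the confirm-then-alarm behavior described for the action events.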
Action Events take the corresponding actions according to the triggered high-level events. Some action examples for fire detection are shown in Table IV. For example, if there is fire in Room 1, the best action is to raise the alarm state and evacuate people from the area (events a5 or a6), but if only smoke is detected, it is better to confirm this event with other sensors (see events a1 or a2). As we have shown, the surveillance event definition is a good way to model surveillance events.
Fig. 5. Segmentation for the floor map.
Since in this case we are using just one robotic agent that moves, space conflicts are not a big issue, and the path planning for this robotic agent is the simplest case, as discussed in [12].

C. Agent Definition and Architecture

Due to sensor and environment requirements, it is important to define the agents required to deal with the task, and also the architecture required. For the fire detection problem presented, it is important to have an agent controlling the fixed sensors, because it will specifically handle all the sensor requirements. This agent can be represented either by a computer or by a control center. Because the flame sensor and
the camera are important to define the events and there is only one of each, a mobile robotic platform is used, so these sensors can move all over the environment, following predefined paths or action events triggered by an alarm state. Finally, another agent is required to control the automatic sprinkler system and the alarm devices. Table V shows all the agents required for fire detection in the example space, as well as their capabilities.

TABLE V. AGENT DEFINITION FOR FIRE DETECTION.

Agent    Description                                        Sensors                             Actuators
Agent1   Mobile robotic agent                               Flame Sensor and Camera             Movement actuators
Agent2   Fixed agent to control smoke and thermal sensors   Smoke Sensors and Thermal Sensors   -
Agent3   Fixed agent to control sprinklers system           -                                   Sprinklers
Agent4   Fixed agent to control the alarm system            -                                   Alarms
Given the agents in Table V, the architecture defined in the framework can be used. The World Model has all the event definitions instantiated and takes all the decisions relevant to each agent's responsibilities. The Communication component deals with all the communication issues with the other members of the surveillance team. Finally, the Sensors and actuators component encloses all the sensors and actuators of each agent.

D. Cooperation in Fire Detection

After the agent definition, cooperation can be defined for the fire detection problem. Taking the hierarchy of events defined before, each agent deals with specific events according to the sub-problem decomposition. Table VI shows all the agents in the surveillance team and the surveillance events each of them handles.

TABLE VI. AGENT PROBLEM DECOMPOSITION FOR COOPERATION.

Agent    Low-Level Events                   High-Level Events                           Action Events
Agent1   {l9, l10}                          {h1, h2, h3, h4, h9, h10}                   {a1, a2, a3, a4}
Agent2   {l1, l2, l3, l4, l5, l6, l7, l8}   {h1, h2, h3, h4, h5, h6, h7, h8, h9, h10}   -
Agent3   -                                  -                                           {a11}
Agent4   -                                  -                                           {a5, a6, a7, a8, a9, a10}
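The decomposition of Table VI can be sketched as an event dispatch table, routing each raised event to the agents responsible for it. The sets below are an abridged illustration of the table; note how redundant responsibilities (Agent1 and Agent2 both handling the fire HLEs) fall out naturally from the one-to-many lookup:

```python
# Sketch: route raised events to responsible agents per the Table VI
# decomposition (abridged). Redundancy is represented by an event appearing
# in more than one agent's responsibility set.

responsibilities = {
    "Agent1": {"l9", "l10", "h1", "a1", "a2", "a3", "a4"},
    "Agent2": {"l1", "l2", "h1", "h5"},
    "Agent3": {"a11"},
    "Agent4": {"a5", "a6"},
}

def responsible_agents(event):
    """All agents that must handle a given event (may be more than one)."""
    return sorted(a for a, events in responsibilities.items() if event in events)

print(responsible_agents("h1"))   # ['Agent1', 'Agent2']  (redundant coverage)
print(responsible_agents("a11"))  # ['Agent3']
```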
V. CONCLUSIONS AND FURTHER WORK
The framework proposed for developing cooperative multi-agent robotic surveillance systems is a tool that can greatly help in the development of surveillance platforms in several domains. As seen in the case study, it is fully applicable to a real surveillance problem like fire detection. Our next step is to apply this framework in the development of a real implementation of a platform for indoor surveillance, using mobile robotic platforms and fixed sensors controlled by software agents. There are, however, certain known issues, specifically with the event definitions, like the treatment of time in event triggering, that need resolution before the implementation.

REFERENCES

[1] N. A. Massios, "Decision-theoretic robotic surveillance," Ph.D. dissertation, Faculty of Science, University of Amsterdam, January 2002.
[2] N. Massios and F. Voorbraak, "Hierarchical decision theoretic planning for autonomous robotic surveillance," in EUROBOT'99 3rd European Workshop on Advanced Robotics, 1999, pp. 219-226. [Online]. Available: citeseer.ist.psu.edu/287442.html
[3] F. Voorbraak and N. Massios, "Decision-theoretic planning for autonomous robotic surveillance," in ECAI-98 Workshop Decision Theory meets AI - Qualitative and Quantitative Approaches, August 1998, pp. 23-32.
[4] P. Rybski, S. Stoeter, M. Erickson, M. Gini, D. Hougen, and N. Papanikolopoulos, "A team of robotic agents for surveillance," in Proceedings of the Fourth International Conference on Autonomous Agents, C. Sierra, M. Gini, and J. S. Rosenschein, Eds. Barcelona, Catalonia, Spain: ACM Press, 2000, pp. 9-16. [Online]. Available: citeseer.ist.psu.edu/article/rybski00team.html
[5] H. Everett and D. Gage, "From laboratory to warehouse: security robots meet the real world," 1999. [Online]. Available: citeseer.ist.psu.edu/everett99from.html
[6] H. Everett, "Robotic security systems," IEEE Instrumentation and Measurement Magazine, vol. 6, no. 4, pp. 30-34, December 2003.
[7] "Centibots: Very large scale distributed robotic teams," in International Symposium on Experimental Robotics (ISER-04), 2004.
[8] Y. Cao, A. Fukunaga, and A. B. Kahng, "Cooperative mobile robotics: Antecedents and directions," pp. 1-23, 1997.
[9] R. Pallas-Areny and J. Webster, Sensors and Signal Conditioning. USA: John Wiley and Sons, Inc., 1991.
[10] M. Wooldridge, An Introduction to MultiAgent Systems. Chichester, West Sussex, England: John Wiley and Sons, Ltd., 2004.
[11] H. Haugeneder and D. Steiner, "Co-operating agents: Concepts and applications," in Agent Technology. Foundations, Applications and Markets, N. Jennings and M. Wooldridge, Eds. London, UK: Springer, 2002, pp. 175-202.
[12] J. Latombe, Robot Motion Planning. Dordrecht, The Netherlands: Kluwer Academic Publishers, 1991.
[13] R. Smith and R. Davis, "Frameworks for cooperation in distributed problem solving," 1980.
[14] An Introduction to Fire Detection, Alarm and Automatic Fire Sprinklers, 3rd ed. Northeast Document Conservation Center, 1999, ch. 3.
[15] A. Kujuro and H. Yasuda, "Systems evolution in intelligent buildings," IEEE Communications Magazine, vol. 31, no. 10, pp. 22-26, Oct. 1993.
[16] M. Finley, A. Karakura, and R. Nbogni, "Survey of intelligent building concepts," IEEE Communications Magazine, vol. 29, no. 4, pp. 18-23, Apr. 1991.
[17] B. Flax, "Intelligent buildings," IEEE Communications Magazine, vol. 29, no. 4, pp. 24-27, Apr. 1991.
[18] R. Luo, S. Kuo, and H. Kuo, "Fire detection and isolation for intelligent building system using adaptive sensory fusion method," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '02), vol. 2, May 2002, pp. 1777-1781.
[19] J. Edgar, "The effectiveness of fire detection and fire sprinkler systems in the central office environment," in Conference Proceedings of the Eleventh International Telecommunications Energy Conference (INTELEC '89), vol. 2, Oct. 1989, pp. 21.4/1-21.4/5.