Multi Agent System Based Interface for Natural Disaster

Zahra Sharmeen1, Ana Maria Martinez-Enriquez2, Muhammad Aslam1, Afraz Zahra Syed1, and Talha Waheed1

1 Department of Computer Science & Engineering, UET, Lahore, Pakistan
[email protected], {maslam,twaheed,afrazsyed}@uet.edu.pk
2 Department of Computer Science, CINVESTAV-IPN, D.F. Mexico
[email protected]
Abstract. Natural disasters cause devastation in society due to their unpredictable nature. Whether the damage is minor or severe, emergency support should be provided in time. Multi-agent systems have been proposed to cope efficiently with emergency situations. A great deal of work has been done on maturing the core functionality of these systems, but little attention has been given to their user interfaces. The world is moving towards an era where humans and machines work together to complete complex tasks. The management of such emergent situations is improved by combining superior human intelligence with the efficiency of multi-agent systems. Our goal is to design and develop an agent-based interface that facilitates humans not only in operating the system but also in mobilizing resources such as ambulances and fire brigades, to reduce loss of life and property. This enhancement improves system adaptability and speeds up the relief operation by saving the time a human agent would otherwise spend dealing with a complex computer interface.

Keywords: Disaster management, multi agent system, system user interface, emergency situation, human machine interface.
1 Introduction
A natural disaster is an unavoidable situation [1]. It may take the form of a flood, a tsunami, a hurricane, an earthquake, a wildfire, and, in its most recent form, a suicide attack. In 2012, a total of 6,771 terrorist attacks occurred worldwide [2]. In Syria, 133 terrorist attacks claimed 657 lives and injured 1,787 people. There were 1,404 terrorist attacks in Pakistan the same year, which claimed 1,848 lives and injured over 3,643 people. Humans have made efforts to somehow avoid such disaster situations, for instance, the use of metal detectors at entrances, the installation of CCTV cameras, and other security measures, but there has been no success so far. Disaster deprives people of their homes, their assets, and even their loved ones. It leaves the affected people with nothing but the hope of getting help from society and welfare organizations. Disaster management is a long activity spanning three phases [3]. The first one is the prediction phase. Despite many efforts, avoiding these disasters by predicting them beforehand has not been a great success.
D. Ślȩzak et al. (Eds.): AMT 2014, LNCS 8610, pp. 299–310, 2014. © Springer International Publishing Switzerland 2014
We can only take a few
precautionary measures, like warnings and announcements for the relocation of people, to avoid loss, but when these natural catastrophes hit a region, society is affected. The loss of lives, homes, and everyday needs leaves people stressed. Preventing the loss completely is still an ideal. Emergency support, the second phase, requires instantaneous action from different institutions. The process of gathering up all units and measures is very time consuming. If proper actions are not taken at the hour of need, the loss of life and property increases, resulting in casualties that could have been prevented. The third step is the rehabilitation of the sufferers, which is a slow and gradual process. Rebuilding the infrastructure requires time and money. It usually proves to be a great challenge for any government. Reducing the stress of disaster management by automating this process is required for the proper handling of emergency situations [4]. This is where multi agent systems (MAS) step in. A MAS has a number of intelligent agents performing different tasks and interacting with each other to handle the disaster situation efficiently. Very comprehensive architectures, like Disasters2.0 [5], FMSIND [6], and ALADDIN [7], exist that address the core functionality of MAS [3]. However, little attention has been given to the user interface. An emergency relief operation is a race against time. A good system designed for managing the disaster situation facilitates the human agent in interacting with the system with minimum effort. Humans are naturally more inclined towards a system that is easy to use [8]. In a disaster situation everyone is busy. There is no room for waiting or dealing with a poorly designed system interface. It is crucial for a disaster management system to provide a suitable interface. As an example, if a MAS provides only a typing facility but a situation arises where there is smoke in the environment, then the user will have difficulty operating the system in that locality.
The user thus needs to change location or wait for the smoke to clear. Such an interface degrades the performance of the user. Human agents play an important role by cooperating and communicating with disaster management institutions [9]. In countries where bomb blasts and suicide attacks happen every other day, the disaster situation is difficult to manage. A lot of the emergency institutions' time is wasted in learning about an incident as well as in identifying the spot of the incident. The number of casualties due to such attacks is growing day by day. In a MAS, human input proves extremely beneficial for managing the disaster situation. People can inform the institutions about a disaster situation nearby. Today, the world is moving towards an era of human-machine cooperation [8]. We want to benefit from both the efficiency of automated systems and human intelligence. Being part of a society, it is our moral responsibility to help the rescue institutions in dealing with such disasters. The disaster situation becomes more manageable when common people cooperate with the organizations. People living near the place of an incident can inform the MAS about the disaster via telephone call, text message, pictures, maps, or any other input mode that is feasible for a common person. The system then notifies the rescue teams with the details and a disaster management plan to take instant action. These timely alerts minimize the extent of loss by saving the time consumed in identifying the location of the incident. Our motivation for conducting this research is to eradicate the mismanagement created by the difficulty rescue teams face while operating the system in a hurry.
The MAS interface accommodates different possible scenarios and adapts to the situation accordingly. The easier the user interface, the more readily it will be adopted by its users. We target the second phase of the disaster management process, that is, emergency support. We aim to improve the MAS architecture by proposing a user interface behind which multiple agents seamlessly perform several tasks in an easy and human-friendly way. This enhancement improves human-to-system interaction, resource mobilization, and the response time of the MAS. It accommodates human input not only in decision making but also in managing the situation effectively. Section 2 reports some of the existing disaster management solutions. Section 3 presents our proposed improvement to the MAS interface. Section 4 validates the results of our solution with the help of a case study. In Section 5, we conclude our work and give future directions.
2 Related Work
Disasters2.0 [5] presents a novel approach to human and agent collaboration during disaster management. It focuses on the social aspect by integrating user-generated information about the disaster. The web client of the system provides a real-time summary of an activity by displaying important buildings. Users can view agents moving in the environment. Multi-Agent Situation Management for Supporting Large-Scale Disaster Relief Operations [10] presents solutions to problems arising in complex situations of a disaster. It focuses on distributed, solution-driven management and scalability of the system. The architecture extends a belief-desire-intention (BDI) system with situation awareness capability. A peer-to-peer overlay combined with semantic discovery mechanisms is used for large-scale relief operation management. The disaster situation management constructs a real-time, constantly refreshed situational model, which is used for operation planning and updating. The decision support system is used for scheduling. Communication between different BDI-SM agents is established using the Foundation for Intelligent Physical Agents (FIPA) standards. A Framework of Multi-Agent Systems Interaction during Natural Disaster (FMSIND) [6] focuses on interaction among different system agents. The Java Agent DEvelopment Framework (JADE) is used for agent communication. Sensors are used for detecting a disaster situation. The decision support system uses learning agents, neural networks, and data warehousing components to decide the number of resources required by a site. This helps in devising the plan, which is sent to the agent as well as to the disaster management repository to keep information up to date. The Autonomous Learning Agents for Decentralised Data and Information Networks (ALADDIN) [7] project focuses on multiple intelligent agent design, decentralized system architecture, and applications. Sensors are used for situation awareness and autonomous data collection.
Research includes developing tools and techniques for intelligent agents, making progress in problematic areas such as delayed data, missing data, spurious data, and decision making. The goal of this project is to achieve robustness in multi agent architectures for disaster management.
Autonomous Notification and Situation Reporting for Flood Disaster Management [1] facilitates autonomous notification and generates situation reports for disasters using intelligent agents. A web-based tool for flood management is proposed that uses the multi agent concept. The water level is continuously monitored to detect any disaster situation. The system signals the notification agent to send alerts. The existing MAS for disaster management focus on system design and do not discuss the user interface of the system. Therefore, we design and develop an agent-based user interface for a disaster management MAS.
3 Agent Based User Interface for Disaster Management (AID)
A good user interface takes into consideration the different possible situations arising from the uncertain nature of the disaster environment [8]. In this section, we present an agent-based user interface for improving the usability of MAS. The more human-like the system is, the better it will be at understanding and interacting with humans.

3.1 Abstract Architecture
The focus of AID is on building a good team [9]. Effective team management is a core desirable feature, since a well-managed team achieves better results than an unorganized one. Fig. 1 shows an abstract architecture of AID.
Fig. 1. Abstract architecture of AID
There is a central control agent (CCA), like a manager, administrator, or coordinator in an organization. The CCA is an intelligent software agent that works in parallel with human entities for better decision making. The CCA directly communicates with all agents, whether intelligent software entities or humans performing real-time activities, to stay aware of their current status. A good management system ensures effective communication within the team. AID facilitates both peer-to-peer communication and centralized communication through the CCA. The inter-agent communication helps the system and the humans understand each other, which results
in better decision making. AID provides different alternatives for input and output [11]. The system considers the environment of the human agent and continuously switches to the mode that best fits the agent's needs. It also accommodates manual requests from a human agent for a particular user interface, on higher priority.

3.2 System Input
AID supports different forms of input. Providing only one form of input is not enough, as the disaster environment changes instantaneously. There is always a possibility of problems arising in operating the system due to the uncertainty of the environment. Suicide attacks, bomb blasts, or extremist attacks normally happen in sensitive areas visited by many foreigners. In such a community, people find it difficult to communicate in the formal way of texting or calling due to problems like language barriers, strict protocols, and the unavailability of contact numbers. AID is designed keeping in mind user needs and all such environmental factors.
[Fig. 2 depicts the input pipeline: each input mode at the disaster site — text, touch, speech, 2D picture, 2D video, 3D video, and gestures — is read by a dedicated agent (text agent, touch/command agent, speech agent, 2D picture agent, 2D video agent, 3D video agent, gestures agent) that converts it into the standard system format for the CCA.]

Fig. 2. AID input user interface architecture
Fig. 2 shows the architecture of the input user interface, where input data is read based on the input mode of the system. The role of the different input agents is to read data and convert it to a standard system format. The input mode is used for activating or deactivating the input agents. This data is passed to the CCA for further processing. Text Agent. Situations often arise where vocal, visual, or touch input is not feasible. For example, when entering an agent's record into the computer, speech may not be accurate due to noise, and a visual or touch interface may not be appropriate at all on account of flashing lights. The suitable input mode here is text, which may be simple information, a message, an alert, a request, a command, etc. The text agent reads the data, parses instructions if required, and takes action accordingly. To save typing effort, built-in messages are provided, such as suicide attack, bomb blast, and building collapsed, into which the user inserts the location at runtime.
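As an illustration of this pipeline, the normalization step can be sketched as follows; all class, field, and template names here are our own assumptions, not part of AID:

```python
# Hypothetical sketch of AID's input normalization: every input agent
# converts its raw input into one standard system-format record, which is
# all the CCA ever has to understand.
from dataclasses import dataclass, field
import time

@dataclass
class SystemMessage:
    source_mode: str                       # "text", "speech", "touch", ...
    content: str                           # normalized payload
    timestamp: float = field(default_factory=time.time)

class InputAgent:
    mode = "abstract"
    def __init__(self):
        self.active = False                # toggled by the mode selector
    def read(self, raw):
        raise NotImplementedError
    def to_system_format(self, raw):
        return SystemMessage(self.mode, self.read(raw))

class TextAgent(InputAgent):
    mode = "text"
    # built-in messages save typing; the user supplies the location at runtime
    TEMPLATES = {"suicide_attack": "Suicide attack reported at {loc}",
                 "building_collapsed": "Building collapsed at {loc}"}
    def read(self, raw):
        if isinstance(raw, tuple):         # (template_key, location)
            key, loc = raw
            return self.TEMPLATES[key].format(loc=loc)
        return str(raw)

msg = TextAgent().to_system_format(("building_collapsed", "sector B"))
```

Whatever the input mode, the CCA receives only such normalized records, so adding a new input agent never changes the CCA's parsing logic.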
Touch Agent. In a disaster situation where everyone is in a hurry, the user has no time to stop and type. AID provides an easy touch interface for giving commands with minimum effort. The most frequently used commands are the most visible and accessible. The touch interface uses universal icons that are well understood and self-descriptive, such as a directions icon for the map and a flash icon for light, so that less thinking is involved. Speech Agent. There may be smoke from a sudden fire, which limits visibility, thus ruling out typing, touching, and capturing snapshots or video. People normally run in panic, and buildings may collapse. In such a situation the suitable mode of input is speech. The speech agent gets input in the form of speech signals using a voice recognition application. Noise filtering and other issues are handled by the application integrated with AID for voice recognition. The speech input is validated by confirming the interpreted data with the user; it is later shared over the network and parsed into text or commands for taking appropriate action. Speech is also used for one-to-one communication between human agents, like a telephone call. In many situations, this proves fruitful for team members in describing their problem effectively. AID also supports vocal message sharing among different agents for better coordination. Typing consumes more time and effort than speaking [12]. It requires the agent to stop for a while and operate the system physically, whereas speech removes this limitation by allowing the agent to continue the activity in parallel. Built-in commands are provided for the user's convenience. 2D Picture/Video Agent. The human agent needs to share pictures or video at times. AID provides the ability to take pictures and record videos, as well as easy sharing of them with the CCA and other agents. This type of input is required for better visualization of the environment at the agent's end [13].
Better visualization means better understanding, which leads to better decision making. AID builds up a map of the agent's environment from pictures or videos and uses it for guiding the agent in taking appropriate measures. As an example, AID suggests different exit routes to the agent and helps form better strategies for the rescue operation. This mode of input gives the system insight into the environment. It is particularly useful for giving environment-specific instructions instead of general instructions to the agent. It saves the time a human agent would spend conveying environment details. 3D Video Agent. 3D is a better approach than 2D for getting precise environment information [14]. A 3D view of the environment has more detail. This agent supports rotating cameras in the agent's environment that are capable of changing the view, like a person turning his neck to shift his gaze. The left and right images are sent to the CCA, which renders them into a 3D image using stereoscopy and then analyzes the environment on its own in a short time. The CCA builds a knowledge base and obtains an expert opinion on behalf of the agent for working out an efficient rescue plan.
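The paper does not give the rendering details; as a hedged aside, once a left/right pair from a calibrated rig has been matched, depth recovery reduces to the standard pinhole-stereo relation depth = focal length × baseline / disparity, which can be sketched as:

```python
# Standard depth-from-disparity relation for a calibrated stereo rig.
# focal_px: focal length in pixels; baseline_m: camera separation in metres;
# disparity_px: horizontal shift of a feature between left and right images.
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Scene depth in metres for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature 40 px apart, with cameras 0.1 m apart and f = 800 px, is 2 m away.
d = depth_from_disparity(40, 800, 0.1)
```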
Gestures Agent. Body language is a crucial part of communication [15]. AID uses live streaming for getting human gestures and body language as input. Much communication is misinterpreted simply because of a lack of gestures. AID improves human-agent communication by providing the chance of a direct face-to-face talk.

3.3 System Output
AID continuously updates the human agent about the situation and the strategies to cope with environmental changes. The system has an effective output mechanism that gives the human agent a sense of team coordination. As with input, the system has different output modes for the user's ease [12]. AID provides both a visual display and audio output, that is, the ability to speak up.
Fig. 3. AID output user interface architecture
Fig. 3 shows the architecture of the output user interface; data is sent to the user device in the standard format. The audio and visual agents are activated or deactivated via the output mode. These agents receive content from the CCA in the system format, parse it into the proper output format, and turn on the appropriate interface for it. Audio Agent. Research shows that hearing a message requires less time and processing for humans than reading it visually. In many situations a visual display is not a good option. The user may need to hear instructions while working. The audio agent therefore helps the user continue his task without stopping to read new updates. Visual Agent. AID supports different visual formats and adjusts its interface according to the content. It is well known that visualization develops better understanding than simply hearing something. The human agent need not waste time trying to form a mental image of the described situation; the user can simply view it. The display is natural, well organized, and easy to understand. The better the display, the more pleasing it will be for its user.
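A minimal sketch of this output dispatch, with hypothetical agent names and string-returning stubs standing in for real text-to-speech and display devices:

```python
# Hypothetical sketch: the CCA sends content in the system format and the
# currently active output agent parses it into its own presentation form.
class OutputAgent:
    def render(self, content):
        raise NotImplementedError

class AudioAgent(OutputAgent):
    def render(self, content):
        return f"[speak] {content}"        # stand-in for text-to-speech

class VisualAgent(OutputAgent):
    def render(self, content):
        return f"[display] {content}"      # stand-in for on-screen layout

AGENTS = {"audio": AudioAgent(), "visual": VisualAgent()}

def dispatch(content, output_mode):
    """Activate the interface matching the current output mode."""
    return AGENTS[output_mode].render(content)

out = dispatch("Route blocked, use north exit", "audio")
```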
3.4 Input and Output Mode Selection
The combined architecture of mode selector agents is shown in Fig. 4.
Fig. 4. AID Input/output mode selector agent architecture
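The selection policy of the Fig. 4 architecture — the user's preference wins within a validity window unless a newer system detection supersedes it — can be sketched as follows; the window length and function names are illustrative assumptions, not from the paper:

```python
# Illustrative sketch of the IO mode selector; timestamps are in seconds.
PREFERENCE_WINDOW = 300.0    # assumed validity window for a user choice

def select_mode(user_pref, pref_time, system_mode, detect_time, now):
    """Return (mode, alert_user): alert_user flags an IO-mode change notice."""
    pref_valid = (user_pref is not None
                  and now - pref_time <= PREFERENCE_WINDOW
                  and detect_time <= pref_time)   # no newer system detection
    mode = user_pref if pref_valid else system_mode
    alert_user = user_pref is not None and mode != user_pref
    return mode, alert_user

# Smoke detected at t=150 forces speech even though the user chose text at t=100:
mode, alert = select_mode("text", 100, "speech", 150, now=200)
```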
The agent selects the mode after considering both the user preference and the system-detected mode. The system input/output mode detector (SMD) agent detects the input and output modes by using sensors to read the state of the environment. The user preference is given higher priority than the system-detected mode within a time frame. A time window is used for validating the user's IO mode preference. If the system mode was detected after the user preference was set, then the system mode is used instead of the user preference. An alert about the change of IO mode is sent to the user to avoid confusion.

3.5 Adding or Removing an Agent
AID provides an option for adding a new agent. All team members are informed about the new agent and its role to keep the team updated. If the new agent has skills that meet the needs of an existing agent, the latter sends a help request to the newcomer. This creates a real-world, team-like environment where everyone plays their role and mentors their peers in achieving designated tasks. The system user interface is therefore team oriented. AID also accepts requests from common people for participation in the rescue operation. A human agent can send a join request to AID via its centralized service. AID validates the request and the agent profile, creates a new agent, locates any existing team working nearby, adds the new agent to the team, alerts the team, and guides the agent in joining his team. AID also provides the ability to safely remove any human agent as well as any malfunctioning system agent. The unavailability of an agent does not halt the system completely. If any agent is inactive for a long period of time, the system removes that agent from the team. Fault tolerance is a highly desirable feature for any real-time system, as it determines its flexibility and scalability. The system takes the necessary measures to restore its performance to the minimum satisfaction level. The system alerts the CCA about a decrease in performance, its cause, and the possible way to restore it. This is useful for sending timely requests for resources and support to restore the system to a healthy state.
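A toy sketch of the roster bookkeeping described above; the inactivity threshold, agent names, and the notification list (a stand-in for real broadcasts) are our assumptions:

```python
# Illustrative team roster kept by the CCA: joining broadcasts the newcomer's
# role, and agents silent longer than INACTIVE_LIMIT are dropped.
INACTIVE_LIMIT = 600.0   # assumed seconds of silence before removal

class TeamRegistry:
    def __init__(self):
        self.members = {}          # name -> (role, last_seen)
        self.notifications = []    # stand-in for broadcast messages

    def add_agent(self, name, role, now):
        self.members[name] = (role, now)
        self.notifications.append(f"{name} joined as {role}")

    def heartbeat(self, name, now):
        role, _ = self.members[name]
        self.members[name] = (role, now)

    def prune_inactive(self, now):
        dead = [n for n, (_, seen) in self.members.items()
                if now - seen > INACTIVE_LIMIT]
        for n in dead:
            del self.members[n]
            self.notifications.append(f"{n} removed (inactive)")
        return dead

team = TeamRegistry()
team.add_agent("medic1", "medical officer", now=0)
team.add_agent("vol7", "volunteer", now=0)
team.heartbeat("medic1", now=650)          # medic1 is still responsive
removed = team.prune_inactive(now=700)     # vol7 has been silent too long
```

Removal never halts the system: the remaining roster and the notification log let the CCA re-plan around the missing agent.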
3.6 Physical Proximity
In the real world, the rescue team needs to work in different, distributed physical locations [4]. AID provides a sense of physical proximity to the human agents through features for exploring and connecting with agents working nearby. AID keeps all agents up to date about the status of the other agents, that is, any new agent joining or leaving the team at that spot. Agents explore and view other agents, communicate with them, and help them if required. This boosts the confidence of human agents and gives them a feeling of togetherness with a physical team in a real-time distributed environment. The agents share their knowledge and experience by communicating directly with each other. This induces the true essence of a team.

3.7 Centralized Services
AID provides centralized services, like an emergency service center, a police center, intelligence agencies, etc., whose service numbers are broadcast to common people via mobile phone and other multimedia services. The GPRS system is used by AID for validating the service user and the information they convey. GPRS is also used for agent identification. This validation is required for filtering out invalid requests and avoiding misuse of the system.
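A hedged sketch of the request-filtering step, with a hard-coded registry standing in for the GPRS-based identification the paper mentions; the numbers and roles are invented for illustration:

```python
# Illustrative filter: reports are accepted only from identifiable callers.
REGISTERED = {"+92-300-0000001": "citizen",     # invented sample entries
              "+92-321-0000002": "rescue"}

def validate_request(caller_id, report):
    """Return the report tagged with the caller's role, or None if unknown."""
    role = REGISTERED.get(caller_id)
    if role is None:
        return None                 # filtered out as possible misuse
    return {"role": role, "report": report}

ok = validate_request("+92-300-0000001", "blast near the market")
bad = validate_request("unknown-caller", "spam")
```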
4 Case Study
We validate our solution with a scenario of a terrorist attack. Despite many measures by security agencies, like the installation of security cameras and the security checking of individuals and vehicles in sensitive areas, the situation is not controlled. One of the reasons is negligence due to the unexpected nature of these events. We created a simulated environment to perform various experiments for demonstration and obtained successful results in saving lives and reducing property losses. In the simulated disaster, 2 buildings collapsed, 50 people were killed, and over 140 people were injured due to a suicide attack. There were 35 Android devices, 78 Symbian devices, 26 Apple devices, and 6 Blackberry devices at the disaster site. It usually takes a lot of time to acknowledge an incident and locate the disaster spot. Using AID, the response time is reduced due to the participation of common people in the rescue operation. A woman living near the disaster site spontaneously informs AID about the attack and its location via telephone call. AID locates the people present at the disaster site by tracking their mobile devices. It starts gathering their information, such as profiles and medical history, in order to coordinate with rescue agents and common people onsite for initial measures via text message. AID determines the status of the people and the extent of the damage using services like GPRS, maps, and satellite view. AID alerts the rescue organization by sending notifications. It also provides the map of the disaster location to the rescue agents on their registered mobile clients. AID estimates the required number of resources, such as fire brigades and ambulances, and dispatches the available resources immediately. Initially, a rescue team of 15 persons was dispatched for the emergency support. There were 2 senior medical officers
whereas the other members had basic-level training in providing emergency aid. This team of 15 persons was divided into 2 sub-teams in order to work at spots A and B simultaneously. A lack of coordination between teams often leads to unorganized activities, raising the death toll. With AID, they stay continuously in touch. The work plan and status of the human agents are monitored by the CCA. If any agent faces difficulty in performing the assigned task, he requests a mentor from the CCA. The CCA notifies all team members about this requirement and connects the agent with an experienced agent via video chat or voice call for instructions. The system allows the agents to communicate directly with each other while carefully conducting their assigned tasks. The medical officer can view the human agent performing the action in order to provide better guidance. Following instructions received directly from the medical officer is very convenient for an agent, as both parties are continuously in sync and monitoring the impact of their commands. The agent in this case does not need to waste time asking for help; AID takes care of resolving such dependencies for the team and thus ensures a smooth rescue operation.
Fig. 5. AID Mobile client for Android
The Android mobile interface in Fig. 5 shows a bottom bar with touch icons. The interface switches to visual mode on selecting the screen (or display) icon, which activates the visual agent. Similarly, the speaker icon turns the audio mode on. Touching the keyboard icon changes the input mode to text, whereas the mic icon enables voice input. Similarly, picture or video mode is turned on by touching the camera or video icon, respectively. Other icons allow the user to communicate with AID by sending commands like get directions, update task status, share environment situation, request medical aid, report blocker, request help, etc. The directions icon requests from AID the map of the disaster
location and the best route with respect to the agent's current location. The location coordinates are received in response from the CCA and are opened in a map application for better visualization. The AID interface provides features for staying connected with the team via chat, live streaming, telephone calls, viewing their status, responding to their help requests, sharing information with them, and so on. The mobile agent continuously refreshes its state in order to stay in sync with the CCA, which shares the rescue plan with the team and updates it as required. The CCA not only assigns each agent a task but also prioritizes the tasks after considering the environmental factors. Table 1 shows a comparison of AID with other MAS.

Table 1. A comparison of AID with other multi agent systems (MAS)

Interface Features                 | AID | Disasters2.0   | ALADDIN | FMSIND
Multiple input/output (IO) modes   | Yes | No             | No      | No
Team oriented                      | Yes | To some extent | No      | No
Resource mobilization              | Yes | No             | No      | Yes
Web/Mobile clients                 | Yes | Yes            | No      | No
Services for common people         | Yes | To some extent | No      | To some extent
AID is team oriented to improve the disaster management process. It considers the interaction of team members in the real world, which makes the AID user interface more human friendly. AID not only improves the resource mobilization process but also facilitates the human agent by providing different input and output modes.
5 Conclusion and Future Work
A great deal of work is being done on designing robust MAS for disaster management, but existing solutions lack a powerful user interface. The usability of a system is highly dependent on its user interface. Therefore, we have improved the MAS architecture by designing a human-friendly interface. With an interface like AID, even nontechnical and less educated people can take part in the rescue operation to minimize the extent of loss. We plan to extend this work by incorporating the latest technologies into the system architecture and designing AID-specific system components as well as web and mobile clients for different platforms in order to facilitate the users.
References

1. Ku-Mahamud, K.R., Norwawi, N.M., Katuk, N., Deris, S.: Autonomous Notification and Situation Reporting for Flood Disaster Management. Computer and Information Science 1(3), 20 (2008)
2. National Consortium for the Study of Terrorism and Responses to Terrorism (a Department of Homeland Security Science and Technology Center of Excellence, based at the University of Maryland): Annex of Statistical Information, Country Reports on Terrorism 2012 (May 2013)
3. Basak, S., Modanwal, N., Mazumdar, B.D.: Multi-Agent Based Disaster Management System: A Review. International Journal of Computer Science & Technology 2(2), 343–348 (2011)
4. U.S. Department of Health and Human Services: A Guide to Managing Stress in Crisis Response Professions. DHHS Pub. No. SMA 4113. Center for Mental Health Services, Substance Abuse and Mental Health Services Administration, Rockville, MD (2005)
5. Puras, J.C., Iglesias, C.A.: Disasters2.0: Application of Web 2.0 technologies in emergency situations. In: Proceedings of ISCRAM (2009)
6. Aslam, M., Pervez, M.T., Muhammad, S.S., Mushtaq, S., Enriquez, A.M.M.: FMSIND: A Framework of Multi-Agent Systems Interaction during Natural Disaster. Journal of American Science 6(5), 217–224 (2010), ISSN 1545-1003
7. Adams, M.F., Gelenbe, E., Hand, D.J.: The Aladdin Project: Intelligent Agents for Disaster Management. In: IARP/EURON Workshop on Robotics for Risky Interventions and Environmental Surveillance (2008)
8. Flemisch, F., Kelsch, J., Löper, C., Schindler, A.S.J.: Automation spectrum, inner/outer compatibility and other potentially useful human factors concepts for assistance and automation. In: Human Factors for Assistance and Automation, pp. 1–16 (2008)
9. Carver, L., Turoff, M.: Human-computer interaction: The human and computer as a team in emergency management information systems. Communications of the ACM 50(3), 33–38 (2007)
10. Buford, J.F., Jakobson, G., Lewis, L.: Multi-agent situation management for supporting large-scale disaster relief operations. International Journal of Intelligent Control and Systems 11(4), 284–295 (2006)
11. Jaimes, A., Sebe, N.: Multimodal human–computer interaction: A survey. Computer Vision and Image Understanding 108(1), 116–134 (2007)
12. Benoit, C., et al.: Audio-visual and multimodal speech systems. In: Handbook of Standards and Resources for Spoken Language Systems – Supplement, vol. 500 (2000)
13. Wang, Y., Liu, Z., Huang, J.-C.: Multimedia content analysis using both audio and visual clues. IEEE Signal Processing Magazine 17(6), 12–36 (2000)
14. Yasakethu, S.L.P., et al.: Quality analysis for 3D video using 2D video quality models. IEEE Transactions on Consumer Electronics 54(4), 1969–1976 (2008)
15. Pavlovic, V.I., Sharma, R., Huang, T.S.: Visual interpretation of hand gestures for human-computer interaction: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(7), 677–695 (1997)