TASK-BASED GUIDANCE OF MULTIPLE DETACHED UNMANNED SENSOR PLATFORMS IN MILITARY HELICOPTER OPERATIONS

Johann Uhrmann, Ruben Strenzke & Axel Schulte
Universität der Bundeswehr München, GERMANY
{johann.uhrmann|ruben.strenzke|axel.schulte}@unibw.de
Keywords: unmanned sensor platforms, task-based guidance, mission management, operator assistant system.
Abstract

The guidance of multiple Uninhabited Aerial Vehicles (UAVs) as sensor platforms in military helicopter missions has been addressed by the development and integration of so-called Artificial Cognitive Units (ACUs) aboard the UAVs and into the airborne operator station. The ACUs have been given knowledge of the Manned-Unmanned Teaming (MUM-T) domain, which allows the guidance of the UAVs based upon high-level tasks. Knowledge about resources, the tactical situation and the required reconnaissance information enables the UAVs to improve mission management and sensor deployment tactics. First experimental investigations have been conducted using a laboratory prototype integrated into a complex MUM-T simulation. In future experiments, the human UAV operator will additionally be aided by a cognitive assistant system.
1 Introduction

UAVs in use today typically follow a pre-programmed mission that can be altered manually in-flight by a crew of at least two human operators. With the advent of multi-UAV [11] and MUM-T scenarios [17], the operator-to-vehicle ratio shall be inverted in future applications [9]. Therefore, the Institute of Flight Systems at the Universität der Bundeswehr München conducts research on semi-autonomous UAVs [11] guided on the level of abstract tasks [18] using cognitive automation. These tasks will not only be given to the UAVs, but also be understood by a cognitive assistant system designed to aid the human operator [6]. The formulation of these tasks shall rely as far as possible on a purely symbolic description, which is why the UAV guidance system as well as the operator assistant system are based upon symbol-processing methods of artificial intelligence, i.e. knowledge-based systems. In contrast to waypoint-based guidance, the task-based guidance approach promises a reduction of operator workload and an enhancement of situation awareness [18].
2 Integration and test environment

In military helicopter operations, UAVs are intended to serve as detached airborne sensor platforms. These assets
retrieve valuable information about weather, terrain, buildings and hostile forces in real-time during the mission. Utilizing such information, missions are expected to become safer and more flexible and to have a higher probability of success. The Institute of Flight Systems integrates UAV guidance systems into a research helicopter simulator cockpit (figure 1) in order to evaluate prototypes in an operational context with the human in the loop.
Figure 1: Research helicopter simulator cockpit

2.1 Test environment

The test environment provides the simulation of several UAVs as well as a cockpit of a manned helicopter. The simulated UAV on-board systems include a thermal camera, a video data link and a command data link. The manned helicopter is operated by two crewmembers, i.e. the pilot in command (PIC) and the pilot flying. The PIC is also in charge of the UAV mission management. The simulation also provides computer-generated neutral, friendly and hostile forces. The latter are ground-based air defence units and may attack the manned helicopter and the UAVs.

2.2 Mission

The test environment allows the simulation of a full-scale military helicopter mission. The missions used in the evaluations were air assault missions derived from training missions of the German Army. Modifications were made to introduce UAVs for route reconnaissance and for observing the designated landing sites prior to the approach of the manned helicopter [17]. Starting at a home base, troops had to be carried to a combat area by the manned helicopter in order
to secure an objective, supported by manned and unmanned assets providing reconnaissance information. After the troops had been dropped, the helicopter and one UAV were to monitor the landing site and the environment of the troops' destination and provide sensor information. More details about the mission design can be found in [17, 18].

2.3 Evaluation of the waypoint-based guidance approach

Currently, most UAV guidance systems use a waypoint-based guidance approach. The management of multiple UAVs using an individual guidance system per UAV has been tested in experiments [17, 18] with military helicopter pilots of the German Army as subjects in order to provide a baseline study. Although the subjects rated the overall performance as generally fair, they strongly exhibited so-called self-adaptive strategies to mitigate otherwise excessive subjective workload caused by the combination of UAV guidance and command of the manned helicopter. [5] provides a behavioural study utilizing eye-tracking techniques. Self-adaptive strategies included deliberately neglecting tasks or sub-tasks, re-scheduling tasks, and using suboptimal but faster-to-configure sensor settings. To avoid these effects, a higher automation level for the guidance of UAVs was introduced. This new level combines the UAV guidance and the payload management into mission-specific high-level tasks.
3 MUM-T system concept

3.1 Requirements

Based on the evaluations in chapter 2, we conclude that higher-level automation is required for multi-UAV guidance. This automation shall combine the tactical guidance and the payload management of each UAV. Moreover, the operator should be able to interact with the UAV system on a guidance level similar to that used for guiding a manned helicopter, e.g. tasking the pilot flying or a wingman.
The first row of figure 2 depicts the commanding of another manned helicopter. The role of the pilot is to understand and interpret the tasks from the commander with respect to the current situation. The pilot transforms each task into parameters to control the aircraft, i.e. either manually steering the air vehicle or using automation such as an autopilot. It is up to the pilot how the task is executed in detail. The second row depicts the guidance of a UAV on the basis of parameters as evaluated in chapter 2.3.

3.2 Artificial Cognitive Units

To allow the guidance of one or multiple UAVs in a manner similar to the guidance of a manned helicopter, the UAV shall be furnished with a system that is able to interpret tasks with respect to the current mission and tactical situation and to act upon those tasks in a situation-specific way. The systems we develop use artificial cognition to interpret, act upon and communicate abstract commands, i.e. tasks expressed as conceptual notions rather than numeric parameters. The required information processing method is based on a cognitive system architecture under development at the Institute of Flight Systems [3]. This cognitive system architecture is the baseline for creating so-called Artificial Cognitive Units (ACUs). As depicted in the third row of figure 2, each UAV is equipped with one ACU to allow the task-based guidance of the UAV. Chapter 4 provides details about using ACUs for task-based guidance.

3.3 Dealing with automation complexity

The introduction of highly complex automation aboard the UAV confronts the human operator with the serious challenge of handling that automation. Experience with automation systems for manned aircraft shows that complex automation can lead to automation-induced errors [2, 14] and may raise the human workload in situations where the workload is already at a critical level. For manned aircraft applications, this problem has successfully been addressed with the introduction of so-called assistant systems [13]. As part of this research work, an assistant system will be developed in order to support the UAV operator. As assistant systems also use artificial cognition, they are ACUs, in this case located aboard the manned helicopter to support the human operator in guiding the UAVs. The assistant system and its planning mechanisms are covered in detail in chapter 5.
Figure 2: Guidance on task level vs. parametric guidance

3.4 Work system
The system design of MUM-T and the integration of the ACUs into the system are driven by the concept of the work system as described in detail in [11, 14]. The work system is defined by its work objective, which in our case is "perform MUM-T mission". The work result produced by the work system consists of the actions and results aiming towards the fulfilment of the work objective. The work system is exposed to the environment, i.e. all external influences and information besides the work objective.
The work system contains one or more entities that know the work objective; these constitute the operating force. The tools, work site settings and automation used by the operating force constitute the operation supporting means of the work system.

3.5 Roles and Responsibilities

In the work system depicted in figure 3, the responsibility of the human UAV operator is the pursuit of the overall work objective, i.e. supporting mission accomplishment by providing reconnaissance information about the flight path and the mission-relevant objects ahead of the manned helicopter.
Figure 3: MUM-T work system

Therefore, the operator can task the UAVs and receives information from the UAV-ACU (link 1 in figure 3) as well as from the other systems of the UAV and its payload (not shown in figure 3). The task of the assistant system is to support the operator by guiding his attention to the most important task at hand and by providing suggestions to the operator [13]. Thus, the assistant system needs to monitor the interaction between the human operator and the UAV (link 2), and it has to monitor and communicate with the human operator (link 3). Because of its knowledge and pursuit of the overall work objective, the assistant system is part of the operating force and forms a team with the human operator. The ACUs on board the UAVs possess the capability to understand and cooperatively execute tasks with respect to the environment and the tasks assigned to other UAVs. For the task execution, the UAV-ACU has full control of the flight system of the UAV as well as its payload and communication system (link 4). The UAV-ACU follows the assigned tasks but is not aware of the overall work objective. Therefore, it is part of the operation supporting means as depicted in figure 3.

3.6 Knowledge of the ACUs

An ACU is completely defined by its knowledge. The knowledge of an ACU consists of environment models, desires, action alternatives and schedules (instruction models) [15]. Instances of these knowledge classes constitute the dynamic knowledge of the ACU about the current situation. In contrast to designs that use specialized agents [1], ACUs hold all dynamic knowledge as well as all static knowledge in one unit. This allows creating behaviours based on all aspects of the situation and the a-priori knowledge available in the ACU. To fulfil its role of aiding the human operator in reaching mission accomplishment, the assistant system ACU needs knowledge about the mission objective, the currently planned
actions of the human operator, the actions necessary in the future, the current tactical situation and the possibilities of interaction with the human operator. The ACUs aboard the UAVs hold knowledge about task descriptions, machine problem solving, the current tactical situation and cooperation with other UAVs, enabling each UAV to execute the tasks given by the human operator.
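As an illustration of these knowledge classes, the following minimal sketch shows how they could be rendered as simple data structures. All class and attribute names are our own illustrative assumptions and are not taken from the actual cognitive system architecture [3, 15].

```python
# Illustrative sketch only: hypothetical data structures for the four
# knowledge classes of an ACU; instances of these classes would form the
# ACU's dynamic knowledge about the current situation.
from dataclasses import dataclass, field

@dataclass
class EnvironmentModel:
    """Belief about one element of the current situation."""
    name: str                                    # e.g. "sensor-hotspot"
    attributes: dict = field(default_factory=dict)

@dataclass
class Desire:
    """A condition the ACU strives to keep satisfied."""
    name: str                                    # e.g. "follow-task-agenda"
    fundamental: bool = True                     # cf. goal kinds in chapter 4.1

@dataclass
class ActionAlternative:
    """Model of an action the UAV can execute to change the environment."""
    name: str                                    # e.g. "configure-ir-sensor"
    preconditions: list = field(default_factory=list)
    effects: list = field(default_factory=list)

@dataclass
class InstructionModel:
    """A schedule, e.g. the task agenda commanded by the operator."""
    tasks: list = field(default_factory=list)
```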
4 Task based guidance

Because of the requirements shown in chapter 3.1, the design of a task-based guidance interface is influenced by the notion of a wingman being tasked by the commander of a flight formation. Thus, a task is specified by its abstract meaning rather than by coordinates, and in mission-level terms instead of parameters. Therefore, symbolic representations conveying the meaning shall be preferred to representations purely relying on numeric values such as waypoint lists.

4.1 Machine problem solving

A task that is assigned to a UAV is modelled as a problem to be solved by the ACU aboard that UAV. Thus, every task is interpreted by the ACU and transformed into a representation of its desired result. The instances of those desires are called goals and contain information about the state of the environment that has to be achieved by the task execution. Therefore, all actions of the ACU aboard the UAV are directed towards the fulfilment of the active goals. Goals exist at different levels of abstraction, e.g. from "be at a certain location" to "follow task agenda". The two kinds of goals in the cognitive system are fundamental goals and instrumental goals [8]. A fundamental goal always instantiates when the corresponding desire is violated, e.g. if there is a task agenda but the UAV is not following the current task on the agenda, then the desire "follow task agenda" will create a goal to follow the agenda. Fundamental goals define the behaviour of ACUs because the cognitive system is always working towards the satisfaction of those goals. An instrumental goal instantiates only to fulfil the prerequisite of another plan or goal, e.g. the goal of being in sensor range of a certain location will instantiate if there is an active plan to observe an object at that position. Action alternatives are models of actions that the UAV is able to execute to change the environment. Based on the current situation known to the ACU of the UAV, action alternatives are proposed and selected to achieve the goals to be fulfilled. Action alternatives may propose the use and configuration of the avionics of the UAV, the sensors or the communication systems, or change an internal state of the ACU, e.g. update the task agenda. Using goals and action alternatives instead of pre-defined "action scripts" as the basis of the task execution generates dynamic and flexible behaviour, e.g. actions are not executed if they are infeasible or not directed towards an active goal.
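To make the goal mechanism concrete, the following is a minimal sketch of the instantiation logic described above. The names and the dictionary-based situation representation are our own illustrative assumptions, not the actual cognitive system implementation.

```python
# Illustrative sketch only (hypothetical names): a fundamental goal appears
# whenever its desire is violated; an instrumental goal appears only as a
# prerequisite of an active plan.
from collections import namedtuple

Desire = namedtuple("Desire", ["name", "fundamental"])

def instantiate_goals(desires, situation, active_plans):
    """Return the names of goals active in the current situation."""
    goals = set()
    # Fundamental goals: instantiate whenever the corresponding desire is violated.
    for desire in desires:
        if desire.fundamental and not situation.get(desire.name, False):
            goals.add(desire.name)
    # Instrumental goals: instantiate only as prerequisites of active plans.
    for plan in active_plans:
        for prerequisite in plan["prerequisites"]:
            if not situation.get(prerequisite, False):
                goals.add(prerequisite)
    return goals

# Example: the agenda is not being followed, and an active "observe" plan
# requires being in sensor range of the object's location.
desires = [Desire("follow-task-agenda", fundamental=True)]
situation = {"follow-task-agenda": False, "in-sensor-range": False}
plans = [{"name": "observe-object", "prerequisites": ["in-sensor-range"]}]
print(instantiate_goals(desires, situation, plans))
# {'follow-task-agenda', 'in-sensor-range'}
```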
4.2 Detection and use of opportunities

The combination of knowledge about the current tactical situation, including the resources of the UAV team and the current demand on those resources, was used to model the concept of opportunity. This concept models the chance to optimize system and sensor usage for the mission without violating the task agenda commanded by the operator. The knowledge of an opportunity is implemented as an environment model and can be seen as the combination of the needed resources, the set of situational elements that must be present and the set of situational elements that must not be present. The desire to use a detected opportunity is implemented as a fundamental goal, i.e. the UAV will use any possible opportunity. One implemented example of an opportunity is to use the on-board infrared sensor to take a close-up picture of a previously detected, but currently still unidentified hotspot, the location of which is known either from data provided by the sensors aboard the UAV or from a corresponding message from other UAVs or from the operator. The needed resource in this case is the infrared sensor. The situation elements that must be present are: a sensor hotspot exists, and the hotspot is in sensor range. The opportunity is void if the following situation elements are present: the sensor hotspot is already identified, or a close-up picture from the current direction to the hotspot already exists. In a multi-UAV scenario, the implementation of this opportunity along with its pursuit results in a cooperative real-time generation of infrared images from different angles by multiple UAVs as soon as a sensor hotspot is located either by the operator or by the automated target recognition system of a UAV (a sketch of this check is given after figure 4).

4.4 Human-machine interface

Depending on the focus of the information that is to be handled in the interaction of the human operator with the UAV guidance system, there are several possible approaches to a human-machine interface. If the focus is on the spatial representation of the tasks, then tasks shall be displayed and manipulated on a map. Figure 4 shows the task interface, which allows the human operator to see and manipulate the task agenda of the UAVs on a touch-screen in the cockpit. The task agenda of the currently selected UAV is shown in orange; the currently executed task is highlighted. Tasks in the agenda can be skipped by activating a specific task that should be executed immediately. Alternative concepts of task representation and manipulation focus on the temporal representation of the task agenda, i.e. the use of timetables or schedules. Another concept is the use of speech interfaces with a focus on the causal relation of tasks, because speech output and commands reference previous or following tasks.
Figure 4: UAV task interface
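Returning to the opportunity concept of chapter 4.2, the following minimal sketch shows the precondition check for the infrared close-up example. The predicate names and the set-based situation representation are illustrative assumptions only.

```python
# Illustrative sketch only: an opportunity combines a needed resource, a set
# of situation elements that must be present, and a set that must be absent.

def opportunity_available(situation, free_resources,
                          needed_resource, must_hold, must_not_hold):
    """Check an opportunity model against the current situation."""
    return (needed_resource in free_resources
            and all(element in situation for element in must_hold)
            and not any(element in situation for element in must_not_hold))

# The infrared close-up example: a hotspot is detected and in sensor range,
# not yet identified, and no picture from the current direction exists.
situation = {"hotspot-detected", "hotspot-in-sensor-range"}
print(opportunity_available(
    situation,
    free_resources={"infrared-sensor"},
    needed_resource="infrared-sensor",
    must_hold={"hotspot-detected", "hotspot-in-sensor-range"},
    must_not_hold={"hotspot-identified", "close-up-from-current-direction"}))
# True: the fundamental goal to use the opportunity would instantiate.
```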
5 UAV Operator Assistant System

The definition of the term assistant system can be found in [14]. [16] has shown that cognitive assistant systems have already been used successfully to support pilots of military aircraft. The approach to a system assisting UAV operators in MUM-T is described in [6]. The focus of this chapter is on the recently added planning capability of this assistant system.

5.1 Requirements

To aid the human operator in guiding the UAVs, the cognitive assistant system needs to decide whether, when and how to inform the human operator about any problems concerning the mission progress. This information can be conveyed by visual displays or by synthesized speech warnings. Thereby, the operator is advised that specific tasks have to be given to the UAVs or have to be activated at a certain point in time. It is also possible to inform the operator that the combination of tasks he has given to the UAVs does not lead to a feasible overall plan (e.g. mission time constraints are violated). Therefore, the assistant system shall be able to anticipate which tasks the human operator should perform in order to achieve the mission goal. During the execution of the MUM-T mission, the assistant system has to check the feasibility and completeness of the operator-given plan. This has to be done upon any UAV task agenda change (by operator input or by UAV actions), tactical situation change (e.g. a new vehicle enters the scenario) or mission order change (new mission objectives received or mission objectives already met).

5.2 Concept and Design

For the reasons mentioned above, the assistant system has to perform a planning task, in which the actions of the aircraft are generated and concatenated in an adequate way to transform
the situation parameters (e.g. vehicle locations) in accordance with the requirements set by the mission goals. The MUM-T Mission Planner (MMP) performs this planning function. The assistant system ACU uses certain constraints for the planning process that are given by the mission order and the operator's input. In consideration of these constraints, the MMP generates a cost-optimized and time-stamped plan that forms the basis for the decisions of the assistant system whether, when and how to inform the operator about any problems. As shown in figure 5, the MMP receives the current tactical situation (e.g. the positions of all aircraft and mission-relevant sites), the mission order and the UAV task constraints (i.e. the tasks that the operator has given to the UAVs) as input data from the MUM-T assistant system core. The mission order constraints include the work objective as well as additional hard mission constraints (e.g. times for takeoff clearance) and soft mission constraints (e.g. specification of the preferred drop zone).
5.3 MUM-T Mission Planner Implementation

The planning functionality mentioned above has been implemented as a laboratory prototype of the MMP. Aerospace mission planning systems often use Hierarchical Task Network (HTN) planning approaches, e.g. [12] or [4]. In the application described here, the planner is used to calculate a plan that transforms the current world state into the desired world state. This desired world state can change, i.e. when the work objective changes, but the action alternatives that can be used to generate a plan never change. This favours a PDDL (Planning Domain Definition Language) approach over an HTN approach, because the goal state that has to be reached can be freely defined in PDDL, while the use of HTN implies the definition of actions that have to be accomplished, which also limits the capacity for innovative solutions. The strong and still growing focus of PDDL on temporal planning also argues for the selection of a PDDL planner. It was decided to use the PDDL 2.2 planner [7] LPG-td 1.0 [10] as the core of the MMP, especially for its advanced temporal planning functionalities.
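As an illustration of this design decision, the following sketch shows how the MMP might assemble a PDDL 2.2 problem, including a timed initial literal for a hard mission constraint, before handing it to an external temporal planner such as LPG-td. The domain, predicate and object names are our assumptions, not those of the actual MMP implementation.

```python
# Illustrative sketch only: render a PDDL 2.2 problem string from the current
# tactical situation and the mission order; the domain file is assumed given.

def build_problem(uavs, sites, clearance_end):
    """Build a PDDL 2.2 problem for observing mission-relevant sites."""
    objects = " ".join(uavs) + " - uav " + " ".join(sites) + " - site"
    init = "\n    ".join(f"(at-base {u})" for u in uavs)
    # PDDL 2.2 timed initial literal expressing a hard mission constraint,
    # e.g. the takeoff clearance expiring at a given time.
    init += f"\n    (at {clearance_end} (not (takeoff-clearance)))"
    goals = "\n      ".join(f"(observed {s})" for s in sites)
    return (f"(define (problem mumt-recon)\n"
            f"  (:domain mumt)\n"
            f"  (:objects {objects})\n"
            f"  (:init\n    {init})\n"
            f"  (:goal (and\n      {goals}))\n"
            f"  (:metric minimize (total-time)))")

print(build_problem(["uav1", "uav2"], ["landing-site", "route-segment"], 1800))
```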
Figure 5: MUM-T Assistant System interfaces

There are two MMP instances working in parallel. One ("slave") receives all constraints from the assistant system, including the UAV task constraints, and is therefore able to check whether the UAV tasks entered by the operator lead to a feasible and complete plan. The other ("free") instance receives the same input except the UAV task constraints, allowing a comparison of the current plan against a freely generated, possibly more cost-effective plan (e.g. shorter flight times, better reconnaissance etc.). It is also possible that the slave MMP does not find any solution while the free MMP still finds one. In that case it can be concluded that the infeasibility solely results from the UAV task constraint the operator entered last. The design can be seen as a mixed-initiative planning approach, because the machine (as "slave") is able to complete the plan of the human, and the machine is also able to check whether the "free" plan it generates is better than the human-driven one. In this case it can start a negotiation process with the human to make him reconsider his decisions.
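The decision logic that this dual-planner setup enables can be sketched as follows. The function and message names are hypothetical and merely illustrate the comparisons described above.

```python
# Illustrative sketch only (hypothetical names): the "slave" MMP plans under
# the operator's task constraints, the "free" MMP without them; comparing the
# two results drives the assistant system's warnings and suggestions.

def assess_operator_plan(slave_plan, free_plan, cost):
    """Classify the planning outcome for the assistant system core."""
    if slave_plan is None and free_plan is None:
        return "warn: mission infeasible regardless of operator input"
    if slave_plan is None:
        # Only the operator's constraints block a solution; most likely
        # the task constraint entered last is the culprit.
        return "warn: the last entered UAV task makes the plan infeasible"
    if free_plan is not None and cost(free_plan) < cost(slave_plan):
        # Mixed initiative: propose the cheaper plan and start negotiation.
        return "suggest: a more cost-effective plan exists"
    return "ok: operator plan is feasible and competitive"

# Example with plan cost measured simply as plan length for illustration.
print(assess_operator_plan(["takeoff", "recon-route", "observe-site"],
                           ["takeoff", "observe-site"], cost=len))
# suggest: a more cost-effective plan exists
```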
6 Discussion and future work

First preliminary simulator experiments with German Army helicopter pilots have shown that task-based guidance is accepted by the operators, even if only a limited set of tasks is available [18]. To prevent errors caused by the complex automation aboard the UAVs, the assistant system described in chapter 5 shall be developed further to support the human operator in using the unmanned vehicles with respect to the overall work objective. The assistant system shall be evaluated in future experiments with helicopter pilots of the German Army. The current design specifies either full control over the systems of the UAVs by the ACU (task-based guidance mode) or full control over the subsystems of the UAV by the human operator, i.e. waypoint-based guidance with separate payload guidance. In future projects, mixtures of these guidance modes will be developed and analysed in order to allow low-level access to the UAV subsystems without a complete deactivation of the high-level, knowledge-based guidance and assistance from the ACUs.
References

[1] J. W. Baxter, G. S. Horn. "Controlling Teams of Uninhabited Air Vehicles", AAMAS '05: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 27-32, (2005)

[2] C. E. Billings. "Aviation Automation - the Search for a Human-Centered Approach", ISBN: 978-0805821277, Lawrence Erlbaum Associates, (1997)

[3] S. Brüggenwirth, R. Strenzke, A. Matzner, A. Schulte. "A Generic Cognitive System Architecture applied to the Multi-UAV Flight Guidance Domain", International Conference on Agents and Artificial Intelligence (ICAART 2010), pp. 292-298, Valencia, Spain, (2010)
[4] P. Bonasso. "Issues in Providing Adjustable Autonomy in the 3T Architecture", AAAI Technical Report SS-99-06, (1999)

[5] D. Donath. "Verhaltensbasierte Analyse der Beanspruchung des Operateurs in der Multi-UAV-Führung", Dissertation, Universität der Bundeswehr, Neubiberg, (in prep.)

[6] D. Donath, A. Rauschert, A. Schulte. "Cognitive assistant system concept for multi-UAV guidance using human operator behaviour models", Conference on Humans Operating Unmanned Systems (HUMOUS'10), Toulouse, France, (2010)

[7] S. Edelkamp, J. Hoffmann. "PDDL2.2: The Language for the Classical Part of the 4th International Planning Competition", Technical Report 195, Freiburg, Germany, (2004)

[8] F. Eisenführ, M. Weber, T. Langer. "Rational Decision Making", ISBN: 978-3642028502, Springer-Verlag, (2010)

[9] J. Franke, V. Zaychik, T. Spura, E. Alves. "Inverting the Operator/Vehicle Ratio: Approaches to Next Generation UAV Command and Control", Association for Unmanned Vehicle Systems International and Flight International, Baltimore, USA, (2005)

[10] A. Gerevini, A. Saetti, I. Serina. "Planning in PDDL2.2 Domains with LPG-TD", International Planning Competition, 14th Int. Conference on Automated Planning and Scheduling (ICAPS-04), abstract booklet of the competing planners, (2004)

[11] C. Meitinger, A. Schulte. "Cognitive Machine Cooperation as Basis for Guidance of Multiple UAVs", NATO RTO HFM Symposium on Human Factors of Uninhabited Military Vehicles as Force Multipliers, Biarritz, France, (2006)
[12] C. A. Miller, R. P. Goldman, H. B. Funk, P. Wu, P. Bate. "A playbook approach to variable autonomy control: Application for control of multiple, heterogeneous unmanned air vehicles", Proc. American Helicopter Society 60th Annual Forum, pp. 2146-2157, (2004)

[13] R. Onken, A. Walsdorf. "Assistant systems for aircraft guidance: cognitive man-machine cooperation", Aerospace Science and Technology, volume 5, pp. 511-520, (2001)

[14] R. Onken, A. Schulte. "System-Ergonomic Design of Cognitive Automation", Studies in Computational Intelligence, ISBN: 978-3-642-03134-2, Springer-Verlag, (2010)

[15] H. Putzer, R. Onken. "COSA - A generic cognitive system architecture based on a cognitive model of human behavior", Cognition, Technology & Work, volume 5, pp. 140-151, (2003)

[16] A. Schulte, P. Stütz. "Evaluation of the cockpit assistant military aircraft CAMA in simulator trials", NATO RTO System Concepts and Integration Panel Symposium on Sensor Data Fusion and Integration of the Human Element, volume 13, (1998)

[17] J. Uhrmann, R. Strenzke, A. Rauschert, A. Schulte. "Manned-unmanned teaming: Artificial cognition applied to multiple UAV guidance", NATO SCI-202 Symposium on Intelligent Uninhabited Vehicle Guidance Systems, Neubiberg, Germany, (2009)

[18] J. Uhrmann, R. Strenzke, A. Schulte. "Human Supervisory Control of Multiple UAVs by use of Task Based Guidance", Conference on Humans Operating Unmanned Systems (HUMOUS'10), Toulouse, France, (2010)