Cognitive assistant system concept for multi-UAV guidance using human operator behaviour models

Diana Donath, Andreas Rauschert & Axel Schulte
Universität der Bundeswehr München (UBM), Department of Aerospace Engineering, Institute of Flight Systems (LRT-13), 85577 Neubiberg, Germany
[diana.donath|andreas.rauschert|axel.schulte]@unibw.de
Abstract. Future military missions require the simultaneous guidance of multiple UAVs by one or few operators from aboard a manned aircraft. This paper describes an approach to supplementing the human operator(s) with a cognitive and cooperative assistant system. Based on general considerations about the work process, behaviour requirements for such a system are stated. By using a cognitive implementation approach, further requirements are derived. The detection of the actual operator task and of critical operator workload is investigated using models of human behaviour. To gather individual, task-specific, workload-dependent human behaviour patterns, an experiment has been performed in which human behaviour within different task situations, operator performance and human error during task processing were observed. This paper describes the theoretical considerations, the experiment and the modelling approach.
Keywords: assistant system, knowledge-based, subjective workload, adaptive automation, behaviour model, eye movements
1 Introduction
In future military helicopter missions a critical function is to get flexible real-time intelligence during the mission without exposing humans to possible threats. Therefore, direct control of one or more unmanned aerial vehicles (UAVs) from aboard the manned platform, i.e. as detached sensors, is investigated [1]. Since there is only a limited number of operators aboard the manned aircraft, UAV guidance with a vehicle-to-operator ratio equal to or larger than one is required. Studies in other areas of UAV guidance show that accidents are caused not only by technical malfunctions but also by human error [2], resulting from typical causes such as unbalanced workload conditions, interface handling problems, reduced situation awareness, degraded operator attention, vigilance decrements or complacency. This paper proposes the development of an assistant system for the human UAV operator(s) to help manage the guidance task.
To begin with, general characteristics and behaviour guidelines for an assistant system obtained from earlier research will be considered. These statements are used to derive the required knowledge of an assistant system aiming for a knowledge-based implementation. By introducing a cognitive implementation method, the required knowledge can be structured into more detailed knowledge classes. Within this concept, the experimental gathering of knowledge about the human operator's workload and current task will be investigated further.
2 Assistant system requirements
At the UBM the approach of so-called cognitive operator assistant systems has been followed since the early 1990s. During that extended period of research, various prototypes aiming for cockpit assistance have been developed and successfully field tested (cf. CASSY, e.g. [3]; CAMA, e.g. [4]). Based on this experience, Onken and Schulte [5] describe the general approach to assistant systems in a broader context. Additionally, a design and implementation method for assistant systems mimicking rational aspects of human cognition is proposed there, which has not yet been applied to the above mentioned implementations. The aim of the current work is to refine the general approach towards the application of multi-UAV guidance using cognitive automation methods. Since these methods assume a knowledge-based implementation, the next section will focus on deriving the necessary knowledge of an assistant system.

2.1 The Work System
To be able to derive the characteristics of the proposed assistant system, multi-UAV guidance is described from a more abstract, process-oriented point of view. Figure 1 shows the physical entities of such a work process in a work system and the specific characteristics of the assistant system, which can be derived from its embedding into the operating force (cf. [5]).
Figure 1: Characteristics of the assistant system in the work system

The work system as such is defined by the work objective, which is to be accomplished by the work process.
The work system itself consists of the Operating Force (OF, left in the figure) and the Operation Supporting Means (OSM, right in the figure). Constraining factors on the work process are the environmental conditions. The OF is the high-end decision component of the work system; in Figure 1 it is enlarged by the assistant system. The OF is the only component which pursues the complete work objective. The OF applies the OSMs (e.g. automation) to accomplish the work objective. On this level the interaction between the OF and the OSMs can be described with the supervisory control paradigm [6]. Onken and Schulte [5] characterise the role and integration of the assistant system (cf. robot head in Figure 1) by characteristics resulting from its integration into the work system, i.e. the assistant system shall, also as part of the OF, do its best by its own initiative for the fulfilment of the work objective. Therefore, it needs knowledge and understanding of the work objective, the application domain, the human operator and the cooperation with him, and the operation supporting means (cf. Figure 1). The next section will explain how the above mentioned cooperation could be made feasible for an assistant system.

2.2 General guidelines for assistant systems
To be able to cooperate with the human operator and to support him during mission and task accomplishment similar to a human team-mate, Onken and Schulte [5] suggest the following general guidelines for the expected behaviour of cognitive assistant systems:
- Present the full picture of the work situation.
- Ensure that the operator's attention is placed on the most urgent task.
- If the human operator is overtaxed in performing the most urgent task, automatically transfer the situation into one manageable by the operator.
- If task risk or costs are likely to be too high, or the operator is not capable of performing the task, take over or reallocate the task to an OSM.
Figure 2: Required knowledge of the assistant system as part of the work system
According to these guidelines, the needed knowledge about goals for the necessary actions (cf. 2.1) may be specified as knowledge about how to guide the attention of the operator and how to balance his workload. Furthermore, knowledge about how to moderate the costs is necessary. Figure 2 illustrates the different knowledge areas of an assistant system which have been derived in sections 2.1 and 2.2. The derived knowledge shown in Figure 2 is still quite unstructured and needs to be described in more detail for an implementation. Therefore, the next section will introduce a knowledge-based implementation method, also proposed in [5], which allows a more structured and detailed modelling of the knowledge.
3 Knowledge-based implementation with the Cognitive Process
The Cognitive Process (CP) has been developed as a model of human information processing which is suitable for generating human-like rational behaviour [7] in technical systems. One of its main characteristics is that it allows a unitary architecture with one knowledge base instead of many separated modules. As the generated behaviour is driven by explicit goals, as required in section 2.1, the CP is well suited for the development of an assistant system.

3.1 The Cognitive Process as a blueprint for knowledge-based systems
Figure 4 shows the CP consisting of the body (inner part) and the transformers (outer extremities). The body contains all knowledge which is available to the Cognitive Process in order to generate behaviour. There are two kinds of knowledge: the a-priori knowledge, which is given to the CP by the developer of an application during the design process and which specifies the behaviour of the CP, and the situational knowledge, which is created at runtime by the CP itself by using information from the environment and the a-priori knowledge. The above-mentioned transformers read information from mainly one area of the situational knowledge, use a-priori knowledge to process it and write the output to a designated area of the situational knowledge. To generate an overall observable behaviour, the following a-priori knowledge classes need to be specified: ENVIRONMENT MODELS help to interpret the input data, so the CP has an understanding of the current situation (belief). DESIRES describe which goals are to be achieved in the current situation. ACTION ALTERNATIVES offer possible plans to achieve the instantiated goals. INSTRUCTION MODELS are then needed to schedule the steps required to execute the plan, resulting in instructions. For further information on the CP refer to [7]. The above mentioned knowledge classes may now be used to detail the needed application-independent a-priori knowledge for an assistant system.
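To make the interplay of these knowledge classes more tangible, the following Python fragment sketches the separation into a-priori and situational knowledge and one pass over the transformers. It is purely illustrative; all class, field and function names are assumptions of this sketch and do not refer to an existing CP implementation such as COSA [7].

```python
from dataclasses import dataclass, field

@dataclass
class APrioriKnowledge:
    """Design-time knowledge specifying the behaviour of the Cognitive Process."""
    environment_models: dict   # name -> function interpreting the input data into a belief
    desires: list              # goals to pursue, ordered by priority
    action_alternatives: dict  # goal -> candidate plan step
    instruction_models: dict   # plan step -> list of concrete instructions

@dataclass
class SituationalKnowledge:
    """Runtime knowledge created by the Cognitive Process itself."""
    input_data: dict = field(default_factory=dict)
    belief: dict = field(default_factory=dict)
    goals: list = field(default_factory=list)
    plan: list = field(default_factory=list)
    instructions: list = field(default_factory=list)

def cognitive_step(ap: APrioriKnowledge, sit: SituationalKnowledge) -> list:
    """One pass over the transformers: interpretation, goal determination,
    planning and scheduling, each reading one area of the situational
    knowledge and writing its result to the next."""
    # Interpretation: apply the environment models to the raw input data.
    sit.belief = {name: f(sit.input_data) for name, f in ap.environment_models.items()}
    # Goal determination: instantiate every desire not yet satisfied
    # (in this toy sketch a belief entry named like the desire marks it as satisfied).
    sit.goals = [d for d in ap.desires if not sit.belief.get(d, False)]
    # Planning: pick an action alternative for each instantiated goal.
    sit.plan = [ap.action_alternatives[g] for g in sit.goals if g in ap.action_alternatives]
    # Scheduling: expand each plan step into executable instructions.
    sit.instructions = [step for p in sit.plan for step in ap.instruction_models.get(p, [p])]
    return sit.instructions
```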
3.2 Cognitive modelling of assistant system knowledge
In this section the required knowledge of an assistant system stated in section 2 is structured according to the Cognitive Process (cf. 3.1). The aim is to derive specific knowledge classes for a cognitive implementation. The application of the different knowledge classes to an assistant system should start with a definition of the DESIRES to be achieved [7]. Figure 3 shows a possible goal hierarchy. According to the characterisation from section 2.1, the most abstract desire is to support the human operator. This desire is achieved if the operator's workload is balanced, the operator works on the most urgent task and the task costs are acceptable (cf. 2.2). To be able to check whether the operator's workload is balanced, it is essential to know the operator's workload. To check whether the operator works on the most urgent task, it is necessary to know the most urgent task and the actual operator task. For knowledge of the most urgent task and for predicting the task costs, the assistant system has to know which tasks are to be performed for the fulfilment of the work objective, in terms of a task agenda.
Figure 3: Desires of assistant system core knowledge package

The next step in deriving knowledge classes is to figure out which ACTION ALTERNATIVES could be applied to fulfil the desires. In case the operator workload is unbalanced, the assistant system should transfer the situation by offering (partial) automation to the operator. If the operator is not working on the most urgent task, the assistant system should guide the attention of the operator. If the task costs become unacceptable, a reallocation of the task to automation is a possible action alternative. The execution of these action alternatives requires an output of the system either to the operator or to the OSM, as already stated in section 2.1. An action alternative should always imply sending a message to the operator for his information. Another possible output to the operator may be to offer (partial) automation of the task by using the available interfaces. A reallocation of the task would result in executing the task with an OSM. To be able to check whether the above mentioned desires are met, and to generate behaviour in the first place, an assistant system would need ENVIRONMENT MODELS about
the most urgent task, the actual operator task, the operator workload and the likely task costs. Figure 4 illustrates the above described knowledge classes, which represent the application-independent behaviour of an assistant system. Especially for obtaining the environment models of the Cognitive Process, more research has to be done. Due to the limited scope of this paper, the following sections will focus on an approach for the detection of the actual operator task and workload.
Figure 4: Top-level knowledge models of assistant system
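The goal hierarchy of Figure 3 and the action alternatives described above can also be paraphrased as a small decision sketch. The state fields, thresholds and action strings below are invented for illustration only; they are not part of the proposed system.

```python
from dataclasses import dataclass

@dataclass
class OperatorState:
    """Beliefs the assistant system holds about the current situation (illustrative)."""
    workload: float        # estimated subjective workload, 0..1
    actual_task: str       # task the operator is currently working on
    most_urgent_task: str  # derived from the task agenda / work objective
    task_costs: float      # predicted costs of the current task
    cost_limit: float      # what is still considered acceptable
    workload_redline: float = 0.8  # assumed threshold, cf. the "red-line" of workload

def select_actions(state: OperatorState) -> list[str]:
    """Check the three sub-desires and return the action alternatives that
    should be triggered to restore the basic desire 'support operator'."""
    actions = []
    if state.workload >= state.workload_redline:
        # Desire 'operator workload balanced' violated -> transfer the situation.
        actions.append("transfer situation: message to operator, offer (partial) automation")
    if state.actual_task != state.most_urgent_task:
        # Desire 'operator works on most urgent task' violated -> guide attention.
        actions.append("guide attention: message to operator")
    if state.task_costs > state.cost_limit:
        # Desire 'task costs acceptable' violated -> reallocate the task to an OSM.
        actions.append("reallocate task: automate task, message to operator")
    return actions

# Example: operator overloaded and not working on the most urgent task.
print(select_actions(OperatorState(0.9, "systems management", "classify hotspot", 0.2, 0.5)))
```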
4 Concept for detection of actual operator task and workload
For the implementation of the above mentioned requirements of assistant systems it is necessary, on the one hand, to detect the task the human operator is currently working on and, on the other hand, to identify whether the human operator is currently overtaxed during task accomplishment. Here, the usage of individual, task-specific human behaviour models, representing human behaviour during task execution, will be investigated as a possible solution for both requirements. To justify this behaviour-based approach, some further hypotheses have been stated: The behaviour human operators exhibit in work situations in terms of interactions (information gathering and control, cf. [14]), although being heavily task dependent, is rather stable in normal workload conditions, which means they behave in an analogous manner within the same task situation.
Human performance degrades at high subjective workload levels, i.e. the probability of human erroneous action increases when coming close to what some authors call the red-line of workload [13]. The behaviour human operators exhibit under higher subjective workload conditions (i.e. approaching the red-line) changes, to an extent to be determined, from normal behaviour. This change is caused by so-called (self-)adaptive strategies [8] [9] and appears, for example, as a change of task prioritising, disregard of subtasks, changes in task execution, or altered attention allocation [10]. Assuming the availability of human behaviour models representing task-specific, individual human operator behaviour within standard as well as within overload conditions, an assistant system would be able to continuously compare the expected human behaviour, represented by the human behaviour models for the current task situation, with the actual human behaviour in the form of pilot interactions in the current situation (a minimal sketch of this comparison is given after the following cases). Thereby, the following cases may occur:
Match: The actual human behaviour matches one of the available human behaviour models within the database for the current situation. Depending on the identified model, this might be a model belonging to a standard situation, or a model representing an unbalanced high workload condition. In the latter case this match will be a trigger for the assistant system to intervene according to the general requirements for assistant systems. In general, a match would allow the robust identification of the current task being performed by the individual human operator at that very moment.
Mismatch: There might be two reasons if there is no match of the actual human behaviour with the behaviour models available for the current situation: (1) The operator is working on a different or additional task not known to the assistant system. (2) The operator is working on the expected task but may suffer from psycho-physiological degradation such as fatigue, resulting in a change of the operator behaviour.
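The resulting comparison logic can be sketched as follows, under the assumption of a hypothetical model library with a similarity score per behaviour model. The scoring interface and the threshold are placeholders of this sketch, not the modelling approach developed in the remainder of this paper.

```python
def classify_behaviour(observed_sequence, model_library, expected_tasks, threshold=0.7):
    """Compare an observed interaction sequence with all behaviour models
    available for the current situation and interpret the result.

    model_library: dict mapping (task, workload_condition) to a model object
    with a score(sequence) method returning a similarity in [0, 1]
    (a placeholder interface assumed for this sketch).
    """
    candidates = {key: model for key, model in model_library.items()
                  if key[0] in expected_tasks}
    if not candidates:
        return "mismatch", None

    scores = {key: model.score(observed_sequence) for key, model in candidates.items()}
    best_key = max(scores, key=scores.get)
    if scores[best_key] < threshold:
        # Mismatch: unknown/additional task, or degraded operator behaviour (e.g. fatigue).
        return "mismatch", None

    task, workload_condition = best_key
    if workload_condition == "high":
        # Match with an overload model: trigger for the assistant system to intervene.
        return "match_high_workload", task
    # Match with a standard model: robust identification of the current operator task.
    return "match_standard", task
```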
5 Experimental Model Acquisition
Ideally, human operator behaviour models should be made available for each individual operator working with the system and for each possibly occurring task, in standard as well as in overload conditions. Those models, therefore, have to be gathered by dedicated measurements, preferably in real-world task situations. In order to develop the respective methods and to obtain exemplary task-specific and subjective-workload-related operator behaviour models, experiments were conducted in which human operator behaviour, i.e. manual and visual interactions, was observed and recorded. The experiment referred to a MUM-T scenario, i.e. the guidance of multiple UAVs by a human UAV-operator located in a helicopter cockpit. The UAVs were used as remote sensor platforms for real-time reconnaissance information during a simplified military air assault mission. For further information see [11] [15].
5.1 Experimental Design
In the experiment, a high-level task of the UAV-operator was the reconnaissance of the helicopter ingress and egress route and of the helicopter operation area (HOA), with two possible landing sites (LS1, LS2) to drop the onboard troops. The UAVs took off five minutes ahead of the manned helicopter. They were equipped with a thermal camera and a video data link to provide reconnaissance information and were primarily guided along pre-planned routes. To expose the subjects to reproducible task situations, in this case an object identification task, three civilian objects were located at approximately equal distances along the ingress route and two additional hostile objects were placed in the helicopter operation area. The latter provoked a situation which required a re-planning of the helicopter route to an alternate landing zone in case of a detected hot landing site. For the accomplishment of the reconnaissance task the UAV-operator had continuous access to ortho-photos which were taken by the UAVs along their flight path. If an object was recorded by one of the UAV cameras, a hotspot became visible in the ortho-photo and was thereby made available to the UAV-operator during his scanning of the ortho-photos. To identify and classify the detected objects as either civilian or hostile, the operator furthermore had the ability to analyse a camera live stream provided by the currently selected UAV.
Figure 5: UAV-operator control station panel in the simulator helicopter cockpit

Figure 5 shows the UAV-operator control station panel, consisting of two touch displays for the accomplishment of the reconnaissance task. Figure 6 shows the available display modes [15] for the UAV-operator to accomplish his task.
Figure 6: Available display modes for FMS-based guidance (FMS-/RECCE-/CAM-Mode)
To provoke an increase in human operator subjective workload within the same task situation (the target identification task), which according to the previously stated assumption should be accompanied by a change in human behaviour in case of an experienced high subjective workload, the experiment was conducted in two different system configurations, i.e. guidance of one UAV and guidance of three UAVs.

5.2 Experimental Procedure
For the experiments, a fixed-base helicopter cockpit and multi-UAV simulator was used, with a UAV-operator workstation integrated into the workstation of the pilot in command. Furthermore, the simulator was equipped with faceLAB, a contact-free, video-based eye-movement measurement system, to observe and record online the visual behaviour of the UAV-operator during task execution. Subjects were four military pilots: three of them with an average age of about 27 years and little flying experience (290 h on average as helicopter pilots), and the fourth aged 33 years with more flying experience (835 h and 300 h as commander). To become familiarised with the UAV-operator station layout, with the handling of the FMS-based guidance of the UAVs [11], and with the reconnaissance task as UAV-operator, all subjects received a full day of training. During the experimental mission various performance parameters as well as operator interactions (manual and visual) were recorded. Furthermore, the operators filled out NASA-TLX subjective workload ratings after completion of each target identification task.

5.3 Experimental findings
In the following sections subjective workload, performance parameters and human behaviour will be discussed. The material refers to the target-identification-task.

5.3.1 Description of the task

The development of modelling concepts shall be based upon the self-contained target-identification-task. As depicted in Figure 7, this task can be further subdivided into mainly three subtasks. The subtask “tag hotspot” represents the period between the recognition of the hotspot by the UAV-operator and the tagging of the hotspot in the map, which is often accompanied by centring the camera of the UAV on the hotspot. “Identify hotspot” means the identification of the previously detected hotspot with a live video stream delivered by the selected UAV. Finally, “classify hotspot” is the insertion of the identification result into the system. “Search for hotspot” and “systems management” are added for the sake of completeness. These tasks are not an essential part of the target-identification-task but they appear during its accomplishment. The subtask “search for hotspot” is the period prior to the detection of the hotspot, while “systems management” refers to the actions necessary to get the system back into the initial conditions for the next search for a hotspot.
Figure 7: Task subdivision of the target-identification-task
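Such a task subdivision can be kept in a simple nested structure, which later also defines the possible levels of detail for behaviour models (cf. section 5.4). The structure below is a sketch only and merely reproduces the subtasks named in the text; the placement of the supplementary tasks within the hierarchy is an assumption.

```python
# Hypothetical representation of the task hierarchy used in the analysis.
TASK_HIERARCHY = {
    "reconnaissance": {
        "search for hotspot": {},   # period prior to the detection of a hotspot
        "target-identification": {
            "recognize & tag hotspot": {},  # from recognition to tagging in the map
            "identify hotspot": {},         # identification via the UAV live video stream
            "classify hotspot": {},         # insertion of the result into the system
        },
        "systems management": {},   # restoring initial conditions for the next search
    }
}

def subtasks_of(task: str, tree: dict = TASK_HIERARCHY) -> list[str]:
    """Return the direct children of a task in the hierarchy (empty if leaf/unknown)."""
    for name, children in tree.items():
        if name == task:
            return list(children)
        found = subtasks_of(task, children)
        if found:
            return found
    return []

print(subtasks_of("target-identification"))
# ['recognize & tag hotspot', 'identify hotspot', 'classify hotspot']
```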
Each of the above mentioned subtasks can be broken down into several sub-subtasks, each of which is characterised by certain manual and visual interactions (see Figure 8 for the subtask recognize & tag hotspot).
Figure 8: Sub-subtasks of the subtask recognize & tag hotspot including typical manual and visual interactions

5.3.2 Research questions

In the subsequent sections the experiments shall be examined according to the following questions: Do the operators behave in an equal manner in similar task situations? Is there an increase of subjective workload to be observed? Is there a noticeable change in behaviour during task accomplishment? If this is the case, how can this behaviour deviation be described and quantified?
5.3.3 Subjective Workload

The subjective workload of the UAV-operators was assessed after completion of each target-identification-task. Figure 9 shows a slight increase in workload with the larger number of UAVs.
Figure 9: Influence of the number of UAVs and of the difference between ingress route (non-hostile objects) and HOA (hostile object) on the workload

The first three objects were non-hostile and located on the helicopter ingress route, while the fourth object was hostile and positioned within the helicopter operation area (HOA). This also had an impact on the subjective workload (see Figure 9).

5.3.4 Performance Data

According to the basic assumption that there is a change of human behaviour prior to the occurrence of grave performance decrements [12], several performance parameters referring to the target-identification-task were investigated. Each performance parameter was analysed with regard to FMS-based guidance with one/three UAVs, for ingress route and HOA, as well as for low and “high” workload conditions. Here, the time required for the accomplishment of the target-identification-task and erroneous actions of the operator were considered, i.e. three different types of errors:
- Permanent errors refer to a wrong hotspot classification or an abortion of the target identification. These errors have an impact on mission accomplishment.
- Temporary errors refer to minor errors with possibly no impact on mission accomplishment, e.g. if the operator temporarily neglects to switch a camera back to MAP-GND mode, which may result in a coverage gap. Another temporary error might be a wrong classification followed by an immediate correction.
- Near misses refer to almost-errors which could only be discovered by means of eye-tracking. In these cases, visual fixations could be observed indicating that the operator had already focused his attention on a subsequent task while the current task was not yet fully completed.
Due to the low number of subjects, the analysis of the material showed no significant changes in error frequency, neither between the conditions FMS-based guidance with one or three UAVs nor between the conditions ingress route and HOA. Specifically, the following observations concerning errors can be stated: Only one permanent error occurred during all target-identification-tasks. Near misses were observed several times, but only for one subject and in fairly low workload conditions.
There was no significant difference in the occurrence of temporary errors for the guidance of one or three UAVs, on the ingress route or in the HOA, nor between the subjects. The more experienced subject made fewer errors overall than the less experienced one. Both the subjective workload and the assessed performance parameters lead to the assumption that the workload stimulation, primarily caused by an increase in the number of UAVs, was not enough to achieve a persistently high workload condition, which might have resulted in a decrease of the performance parameters. Besides the occurrence of possible errors, the time required for the accomplishment of the whole target-identification-task as well as of the above mentioned subtasks “tag hotspot”, “identify hotspot” and “classify hotspot” was investigated. Similar to the previous performance analysis, there was no trend or significant difference in the times required for the accomplishment of tasks/subtasks observable between the guidance of one or three UAVs or on the ingress route versus the HOA. But there was a great difference/variability between the subjects, as depicted in Figure 10.
Figure 10: Average time required for the accomplishment of the subtasks of the target-identification-task, subject 1 compared with subject 4

While subject 4 was very fast in the task accomplishment, subject 1 needed more than twice the time for the same task. Subject 1 was the one with the most experience and the fewest errors. Compared with the audio recordings of the experiments, it seems that subject 1 reached his decisions and adapted his procedures/strategies on the basis of his experience and against the background of his tactical knowledge, while the overall goal of subject 4 seemed to be to accomplish each task as fast as possible, without any consideration of possible tactical influences. Here an important aspect becomes apparent: the need for individual operator models, representing individual behaviour.

5.3.5 Human-System Interactions

Concerning human-system interactions, the sequence of visual scanning for particular sub-subtasks was investigated. On the macroscopic level of subtasks there were no sequence variations possible, given the system operation design. Hence, the analysis was extended to the sub-subtask level. The following aspects were investigated.

(1) Equal behaviour of one subject within the same task situation. Figure 11 depicts a typical micro-sequence of interactions for two different task situations of the type classify hotspot (subject 4).
Figure 11: Interaction sequence in performing the subtask classify (observed in two subsequent target-identification-tasks)

(2) Inter-individual differences in the task accomplishment. The most apparent difference between the two subjects could be identified on a macroscopic level, expressed in the different usage of the two displays of the operator control station. While subject 4 only used one display (the right one) during the whole target-identification-task, with changing display modes, subject 1 used both displays in differently configured modes and switched between these displays during the execution of the target-identification-task.

(3) Differences within a subject due to different workload conditions. Concerning this aspect, the experiments could not provide persistently high workload conditions. Consequently, there were no significant differences visible within the interaction sequences of each subject. In general, these results must still be regarded as unsatisfactory concerning the detection of subjective-workload-induced behaviour changes. Considering the only small rise in subjective workload which could be induced by the experimental conditions, nothing else could be expected. However, earlier experiments [15] indicated that much higher workload levels can be reached.
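Before turning to the stochastic modelling concept of the next section, it may be noted that behaviour deviations between interaction sequences could already be described and quantified with a simple measure such as the edit (Levenshtein) distance. The following sketch and its example sequences are illustrative only; they are not the method proposed in this paper.

```python
def edit_distance(seq_a: list[str], seq_b: list[str]) -> int:
    """Minimum number of insertions, deletions and substitutions needed to
    turn one interaction sequence into the other (Levenshtein distance)."""
    # dp[i][j]: distance between the first i symbols of seq_a and the first j of seq_b
    dp = [[0] * (len(seq_b) + 1) for _ in range(len(seq_a) + 1)]
    for i in range(len(seq_a) + 1):
        dp[i][0] = i
    for j in range(len(seq_b) + 1):
        dp[0][j] = j
    for i in range(1, len(seq_a) + 1):
        for j in range(1, len(seq_b) + 1):
            cost = 0 if seq_a[i - 1] == seq_b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(seq_a)][len(seq_b)]

# Two made-up micro-sequences for the subtask "classify hotspot" (cf. Figure 11).
run_1 = ["look at hotspot", "change to RECCE-mode", "classify hotspot", "switch to hotspot"]
run_2 = ["look at hotspot", "change to RECCE-mode", "switch to hotspot", "classify hotspot"]
print(edit_distance(run_1, run_2))  # 2: the last two interactions are swapped
```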
5.4 Modelling Concept
In order to determine the current task the human operator is working on, a set of human behaviour models has to be built as a next step, representing the normal as well as the modified behaviour of each human operator during task accomplishment. Again, human behaviour comprises manual and visual interactions with the system. Figure 12 shows a typical task sequence.
Figure 12: Typical operator behaviour, i.e. manual and visual interactions

One established method for modelling sequential processes is Hidden Markov Models (HMM). HMMs have already been used earlier in eye movement research, e.g. for segmenting low-level eye movement signals to detect the focus of attention, for implementing models of cognitive processing, for analysing attention switching [16][17], and for human supervisory control behaviour models in the context of UAV guidance [18]. Here it is intended to use HMMs in order to infer the current task an operator is working on, which is not directly visible, from the observable human behaviour, i.e. visual and manual interactions with the system. This is similar to the HMM notion in which the hidden states must be inferred from observable symbols.
Figure 13: Pre-processing of human operator interactions

To get a sequence of observable symbols, in our case a sequence of basis interactions of the human operator, a pre-processing of the human operator's interactions with the operator control station has to be performed (Figure 13). The next step will be defining the model, i.e. the number and meaning of states and the topology. Therefore, variations in the level of detail, for example on task level, subtask level and sub-subtask level, as well as different topologies will be investigated in more detail in future work (Figure 14).
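The pre-processing of Figure 13 essentially merges the two object-assigned raw data streams into one time-ordered sequence of basis interactions. A minimal sketch of such a merge is given below; the event format, the symbol naming and the example data are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class RawEvent:
    timestamp: float   # seconds since start of the mission
    kind: str          # "visual" (fixation) or "manual" (button/touch input)
    target: str        # display object assigned to the raw event, e.g. "SEL-TARGET-button"

def to_symbol(event: RawEvent) -> str:
    """Map a raw, object-assigned event to a basis interaction symbol,
    i.e. the observable symbol later fed into the HMM."""
    prefix = "fixate" if event.kind == "visual" else "click"
    return f"{prefix}:{event.target}"

def basis_interaction_sequence(visual: list[RawEvent], manual: list[RawEvent]) -> list[str]:
    """Merge visual and manual raw data into one chronological symbol sequence."""
    merged = sorted(visual + manual, key=lambda e: e.timestamp)
    symbols = [to_symbol(e) for e in merged]
    # Collapse immediate repetitions (e.g. consecutive fixations on the same object).
    return [s for i, s in enumerate(symbols) if i == 0 or s != symbols[i - 1]]

# Hypothetical snippet of recorded data during "recognize & tag hotspot".
visual = [RawEvent(10.2, "visual", "hotspot"), RawEvent(10.9, "visual", "hotspot"),
          RawEvent(11.4, "visual", "SEL-TARGET-button")]
manual = [RawEvent(11.8, "manual", "SEL-TARGET-button")]
print(basis_interaction_sequence(visual, manual))
# ['fixate:hotspot', 'fixate:SEL-TARGET-button', 'click:SEL-TARGET-button']
```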
Figure 14: Observable state transitions specific to certain subtasks
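To illustrate how a task-specific HMM with a given topology could be used for inference, the following sketch decodes the most likely sequence of hidden subtask states from an observed symbol sequence with the Viterbi algorithm. States, symbols, transition and emission probabilities are invented for illustration; they are not the topologies or parameters that remain to be determined in future work.

```python
import numpy as np

# Hidden states: subtasks of the target-identification-task (cf. Figure 7).
states = ["recognize & tag hotspot", "identify hotspot", "classify hotspot"]
# Observable symbols: coarse basis interactions (illustrative only).
symbols = ["fixate:hotspot", "click:ADD-OBJ", "fixate:video", "click:RECCE", "click:classify"]

# Left-to-right topology: a subtask can only be kept or followed by the next one.
A = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.7, 0.3],
              [0.0, 0.0, 1.0]])
# Emission probabilities P(symbol | state); each row sums to one.
B = np.array([[0.45, 0.45, 0.05, 0.03, 0.02],
              [0.10, 0.05, 0.55, 0.25, 0.05],
              [0.05, 0.05, 0.15, 0.15, 0.60]])
pi = np.array([1.0, 0.0, 0.0])  # the task always starts with "recognize & tag hotspot"

def viterbi(obs_idx, A, B, pi):
    """Most likely hidden state sequence for an observed symbol index sequence."""
    n_states, T = A.shape[0], len(obs_idx)
    delta = np.zeros((T, n_states))        # best path probability ending in each state
    psi = np.zeros((T, n_states), dtype=int)  # backpointers
    delta[0] = pi * B[:, obs_idx[0]]
    for t in range(1, T):
        trans = delta[t - 1][:, None] * A  # (i, j): prob of being in i and moving to j
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) * B[:, obs_idx[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return list(reversed(path))

observed = ["fixate:hotspot", "click:ADD-OBJ", "fixate:video", "click:RECCE", "click:classify"]
obs_idx = [symbols.index(s) for s in observed]
print([states[i] for i in viterbi(obs_idx, A, B, pi)])
# ['recognize & tag hotspot', 'recognize & tag hotspot', 'identify hotspot',
#  'identify hotspot', 'classify hotspot']
```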
6 Conclusions and Future Work
This paper presented a concept for the development of a knowledge-based, cognitive, cooperative operator assistant system for multi-UAV guidance. The main focus of this concept paper was to infer the knowledge necessary to meet the general requirements of an assistant system cooperating with a human team-mate. This knowledge, which is an essential part of the subsequent implementation of such a system, can be subdivided into four domains, comprising the knowledge to understand and pursue the overall work objective and all associated sub-goals, the knowledge necessary to understand the surrounding environment, the knowledge about the availability of OSMs and their handling, and, as an essential part of the cooperation with a human team-mate, the knowledge about the human operator himself. The latter should comprise the actual operator state, represented by the task the operator is currently working on, the actually perceived workload and his still available/occupied resources. Human behaviour models were proposed as an approach to determine the current operator task and possible overload conditions. To obtain such human behaviour models, an experiment was conducted to gather individual, task-specific, workload-dependent human behaviour data, and a concept for human behaviour models using Hidden Markov Models was proposed. To realise such models, a new experiment has to be conducted in order to get sufficient data for each task situation and for the change of human behaviour in overload conditions, which could not yet be achieved with the above mentioned experiment. Consequently, the next step will be an adaptation of the experimental design, to provoke a more explicit
change in human behaviour as a consequence of excessive subjective workload, and to obtain a sufficient test and training data set of human behaviour within standard and high workload conditions for the implementation of the presented modelling concept.
References

[1] Carlile, C.B., Larese, W.S.: Manned-Unmanned Aircraft Teaming – Making the Quantum Leap, Army Aviation, 2009.
[2] United States Air Force – Accident Investigation Board (AIB): Class A Aerospace Mishaps, http://usaf.aib.law.af.mil/, last visited Mar 3rd, 2010.
[3] Prévôt, T., Gerlach, M., Ruckdeschel, W., Wittig, T., Onken, R.: Evaluation of intelligent on-board pilot assistance in in-flight field trials, 6th IFAC/IFIP/IFORS/IEA Symposium on Analysis, Design and Evaluation of Man–Machine Systems, 1995.
[4] Frey, A., Lenz, A., Putzer, H., Walsdorf, A., Onken, R.: In-Flight Evaluation of CAMA – The Crew Assistant Military Aircraft, German Aeronautics and Astronautics Congress, Hamburg, 2001.
[5] Onken, R., Schulte, A.: System-Ergonomic Design of Cognitive Automation – Dual-Mode Cognitive Design of Vehicle Guidance and Control Work Systems, Springer, 2010.
[6] Sheridan, T.B.: Telerobotics, Automation, and Human Supervisory Control, MIT Press, Cambridge, 1992.
[7] Putzer, H., Onken, R.: COSA – A generic cognitive system architecture based on a cognitive model of human behaviour, Cognition, Technology & Work, 5, 140–151, 2003.
[8] Canham, L.S.: Handbook of Human Factors Testing and Evaluation, Chapter: Operability Testing of Command, Control, Communications, Computers and Intelligence (C4I) Systems, Mallory International, 2001.
[9] Parasuraman, R., Hancock, P.A.: Stress, Workload and Fatigue, Chapter: Adaptive Control and Mental Workload, Human Factors in Transportation, 2000.
[10] Veltman, J.A., Jansen, C.: The role of operator state assessment in adaptive automation, TNO-DV3 2005 A245, 2006.
[11] Uhrmann, J., Strenzke, R., Schulte, A.: Human Supervisory Control of Multiple UAVs by Use of Task Based Guidance, HUMOUS'10, Toulouse, 2010.
[12] Donath, D., Schulte, A.: Behavior model based recognition of critical pilot workload as trigger for cognitive operator assistance, HCI, San Diego, USA, 2009.
[13] de Waard, D.: The Measurement of Drivers' Mental Workload, PhD Thesis, University of Groningen, Netherlands, 1996.
[14] Flemisch, F.O., Onken, R.: Human Factors Tool caSBAro: Alter Wein in neuen Schläuchen, Anthropotechnik gestern – heute – morgen, DGLR-Bericht 98-02, 1998.
[15] Uhrmann, J., Strenzke, R., Rauschert, A., Meitinger, C., Schulte, A.: Manned-unmanned teaming: Artificial cognition applied to multiple UAV guidance, NATO SCI-202 Symposium, Neubiberg, 2009.
[16] Hayashi, M., Oman, C.M., Zuschlag, M.: Hidden Markov Models as a Tool to Measure Pilot Attention Switching during Simulated ILS Approaches, 12th International Symposium on Aviation Psychology, Dayton, OH, 2006.
[17] Hayashi, M.: Hidden Markov Models for Analysis of Pilot Instrument Scanning and Attention Switching, PhD Thesis, MIT, 2004.
[18] Boussemart, Y., Cummings, M.L.: Behavioral Recognition and Prediction of an Operator Supervising Multiple Heterogeneous Unmanned Vehicles, HUMOUS'08, Brest, France, 2008.