12th International Conference on Information Fusion Seattle, WA, USA, July 6-9, 2009
978-0-9824438-0-4 ©2009 ISIF

Monitoring of Reliability in Bayesian Identification

Max Krüger and Nane Kratzke
Industrieanlagen-Betriebsgesellschaft mbH (IABG)
Einsteinstraße 20, D-85521 Ottobrunn, Germany
e-mail: [email protected]
Abstract – Identification of tracked objects is a key capability of surveillance and information systems for air, surface (maritime), ground, and space environments. It improves situational awareness and offers decision support to operational users. Bayesian-style identification processes provide an identity as a result. As input for taking further operational action, judgement of the remaining uncertainty is important. An operational user needs to know how much he can rely upon the provided identification result. In this contribution, a typical Bayesian identification process is described and analyzed with respect to supporting an operational user in judging the reliability of an identification result. Measures proposed in the literature do not completely fulfill the requirements of operational users; existing approaches only deal with selected aspects of reliability. Therefore, relevant aspects of the overall reliability are considered in this paper. In addition, a set of appropriate measures corresponding to these aspects is proposed.
Keywords: Identification, Bayesian Fusion, Monitoring of Reliability, Uncertainty, Operational User

1 Introduction

Identification is the characterization of a tracked object by assignment of an identity (ID). This identity describes object features, e.g. allegiance or intent, which are necessary to know when performing military missions. In a military context, the identities Friend, Assumed Friend, Neutral, Suspect, Hostile, and Unknown are often used as a standard. Differing identities might be defined and applied depending on the application context. As a major capability of surveillance and information systems for air, surface (maritime), ground, and space environments, identification extends the traditional positional and kinematic information on tracks. For each tracked object, an identity is calculated by fusion of uncertain information provided by different sources, e.g. based on track behavior, IFF equipment, and adherence to air traffic regulations. In such systems, identification provides a significant benefit for situational awareness and decision support. Identities are input for taking further action. Therefore, the remaining uncertainty of an identification result is of high relevance. An operational user needs to know how much he can rely upon the provided result, even without having technical knowledge of the underlying identification process. This paper is intended to provide technical means to support an operational user in judging the overall reliability of an identification result by providing adequate measures for specific aspects of reliability. We use a typical Bayesian identification process as a basis for our considerations. The results are expected to be easily transferable to other Bayesian-based approaches.

Outline of this work: In section 2 we describe the exemplary Bayesian identification process, providing a common frame for the subsequent treatment of reliability. Ergonomic requirements and the definition of appropriate reliability measures are provided in section 3. Section 4 shows some examples, and section 5 outlines conclusions and future work.

2 Bayesian Identification

The major objective of identification is the provision of an identity, based on measured object features, for each tracked object. Instead of relying on only one particular source, all available information on a tracked object is fused to provide a more reliable identification result. Nevertheless, identification is a process where uncertainties of results have to be carefully considered. In addition to a resulting identity, the related reliability measures of this result must be presented to an operational user. In the literature as well as in practice, there are a number of techniques for fusion in combination with identification, see e.g. [1], [2], [3] with references therein. Apparently, rule-based ([3], [4]), Dempster-Shafer ([5], [1], [3]), and Bayesian approaches ([1], [3], [2], [6]) are most commonly used. Figure 1 shows a typical Bayesian identification process ([1, pp. 214-220], [3, pp. 220-222]). This exemplary process assigns an identity to each tracked object based on all measured emissions. In principle, all considerations made in this paper are valid for tracked objects in air, surface (maritime), ground, and space environments. Note that the process performs identification on a track-by-track basis.

Figure 1: Bayesian identification process
Particular problems of tracking are not considered here, and a proper association of data and information to the corresponding track is assumed to be given; for details see [1], [2], and [3]. Subsequently we describe the process shown in figure 1 in detail as the basis for our analysis of reliability aspects in section 3. Primarily, our description follows [2, pp. 496-497] and [7].
2.1 ID Sources

Bayesian identification facilitates the integration of manifold information sources. These ID sources can be grouped into source types with similar functionality and output type. Figure 2 shows a selection of ID source types, taken from [8].
Figure 2: Selected ID source types

Every object of interest has its specific combination of attributes, consisting of technical properties, particular capabilities, and behavioral characteristics. E.g., a fighter aircraft has a typical radar signature, may be able to fly at supersonic speed, and occasionally performs certain characteristic attack profiles. As illustrated in figure 1, object attributes induce emissions, which can be measured by suitable sensors. An ID source consists of a sensor and a corresponding evaluation component. The sensor provides raw data based on measured object emissions to the evaluation component, e.g. a measured radar signature. The evaluation component tries to reconstruct the underlying attribute of the object by using additional source-specific context data. For a measured radar signature, this context data might consist of a collection of all known radar signatures stored in a database. Note that non-technical ID sources are also possible, e.g. aerial image analysts and field observers. Each ID source provides a declaration to the subsequent Bayesian fusion process as output of the evaluation process. A declaration is the statement of the ID source on a specific attribute of the considered object, based on the measured emissions. Consider as an example the declaration object follows a civil air route. Every ID source selects an appropriate declaration out of a source-dependent finite set of possible declarations. An ID source may also provide no declaration on a tracked object, e.g. if there are no measurements available. We assume that each of the N ID sources provides a declaration d_i ∈ D_i with i ∈ {1, ..., N} to the fusion process. D_i := {d_{i,1}, ..., d_{i,N_i}} denotes the finite set of all declarations of ID source i, where N_i is the number of all possible declarations of ID source i. An ID source without a declaration on a particular track is temporarily removed from the model and from the corresponding identification processing cycle for this tracked object at that time.

2.2 Bayesian Fusion
An object belongs to exactly one out of M possible Operational Object States. The finite set of all possible states is given by OOS := {O_1, ..., O_M}, e.g. OOS = {OF, NA, EF} with own forces (OF), neutral allegiance (NA), and enemy forces (EF). We will use this OOS later on for our example. The purpose of the fusion process is the calculation of the Posterior Likelihood Vector PLV = ( p(O_j | d_1, ..., d_N) )_{j=1,...,M} for the combined declaration (d_1, ..., d_N) of all ID sources, i.e., a likelihood vector on the operational states of the object under the condition that each ID source i has contributed the declaration d_i. As a first processing step, Conversion is performed for each ID source i, where the Source Likelihood Vector SLV_i = ( p(d_i | O_j) )_{j=1,...,M} is assigned to the declaration d_i. The SLV_i values have been determined for each ID source i within the configuration activities of the operation. Next, Fusion of declarations is carried out by Bayesian
means, providing the Combined Likelihood Vector

CLV = ( p(d_1, ..., d_N | O_j) )_{j=1,...,M} = ( ∏_{i=1}^{N} p(d_i | O_j) )_{j=1,...,M}.   (1)

Note that the combination in equation (1) assumes conditional independence as a model-provided precondition; for more details see [2, pp. 496-497]. The CLV expresses declaration-based condensed information without consideration of any a priori expectations on the operational states of the tracked object in the considered operational environment. A priori expectations are provided to the process within configuration by specifying probability values p(O_j) for each j ∈ {1, ..., M}. Within the subsequent Bayes step, the Posterior Likelihood Vector

PLV = ( p(O_j | d_1, ..., d_N) )_{j=1,...,M} = ( p(d_1, ..., d_N | O_j) · p(O_j) / ( ∑_{k=1}^{M} p(d_1, ..., d_N | O_k) · p(O_k) ) )_{j=1,...,M}   (2)

is calculated by application of the Theorem of Bayes. The PLV is a major intermediate result of the identification process. It contains the declaration-based condensed information and additionally the a priori expectation in the considered operational environment.
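To make the fusion step concrete, the following minimal Python sketch computes the CLV and the PLV according to equations (1) and (2). It is only an illustration of the formulas above; the function name, the argument layout, and the example numbers are assumptions of this sketch and are not taken from the paper's configuration.

```python
import numpy as np

def fuse_declarations(slvs, priors):
    """Bayesian fusion of ID source declarations (equations (1) and (2)).

    slvs   : list of Source Likelihood Vectors, one per contributing ID source;
             slvs[i][j] = p(d_i | O_j) for the declaration actually made by source i.
    priors : a priori expectation p(O_j), j = 1, ..., M.
    Returns (CLV, PLV) as numpy arrays of length M.
    """
    priors = np.asarray(priors, dtype=float)
    if len(slvs) == 0:
        clv = np.ones_like(priors)                        # no contributing declarations
    else:
        clv = np.asarray(slvs, dtype=float).prod(axis=0)  # equation (1), conditional independence
    unnormalized = clv * priors
    plv = unnormalized / unnormalized.sum()               # equation (2), Theorem of Bayes
    return clv, plv

# Usage with purely illustrative numbers (not the values of the paper's example):
if __name__ == "__main__":
    slv_iff = [0.80, 0.15, 0.05]   # hypothetical p(d_IFF | OF), p(d_IFF | NA), p(d_IFF | EF)
    slv_esm = [0.60, 0.30, 0.10]
    priors = [0.3, 0.6, 0.1]
    clv, plv = fuse_declarations([slv_iff, slv_esm], priors)
    print(clv, plv)
```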
2.3 Identity Decision

The PLV as a probability distribution on the possible operational states of the considered tracked object is an intermediate result, which needs to be transformed into an appropriate form to be presented to an operational user. They need a definite decision for an identity on the basis of the PLV. Typically, the following identities are used: Friend (F) expresses that the object is considered to have the operational state OF, Neutral (N) stands for the state NA, and Hostile (H) for the state EF. Assumed Friend (AF) and Suspect (S) also refer to the states OF and EF, respectively, but indicate more uncertainty. The identity Unknown (U) reflects the situation that no operational proposal can be given based on a particular PLV. By ID := {F, AF, N, S, H, U} we denote the set of all possible Identities. As decision rule, a minimax approach is used within the ID Decision step ([1, pp. 213-214]). It minimizes the maximum risk of expected negative consequences. Risk values r(O_j, id) with O_j ∈ OOS and id ∈ ID numerically express the consequences if a tracked object is in state O_j and is assigned the identity id. Risk values have been determined within the configuration preparation of an operation. Given a PLV, the ID Decision id*(PLV) is defined by

risk(PLV, id) := ∑_{j=1}^{M} p(O_j | d_1, ..., d_N) · r(O_j, id)   (3)

and

id*(PLV) = argmin_{id ∈ ID} risk(PLV, id).   (4)
In cases where equation (4) does not define a unique identity id*(PLV), predefined rules for the selection of one of the ID candidates must be applied.
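As an illustration of the decision rule, a minimal sketch of the risk computation and the subsequent argmin selection could look as follows; the dictionary-based risk table layout is an assumption of this sketch, not the data format used by the authors.

```python
import numpy as np

def identity_decision(plv, risk_table):
    """ID Decision step (equations (3) and (4)).

    plv        : Posterior Likelihood Vector of length M.
    risk_table : mapping identity -> list of risk values r(O_j, id), length M.
    Returns the chosen identity and the expected risk per identity.
    """
    plv = np.asarray(plv, dtype=float)
    # Equation (3): expected negative consequences of assigning each candidate identity.
    risks = {ident: float(np.dot(plv, r)) for ident, r in risk_table.items()}
    # Equation (4): choose the identity with minimal expected risk. Ties would be
    # broken by predefined operational rules; here the first minimum found is taken.
    chosen = min(risks, key=risks.get)
    return chosen, risks
```

Called with a PLV and a risk table r(O_j, id) such as the one defined in figure 4, the function returns the identity of equation (4) together with the expected risks of all candidate identities.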
3 Monitoring of Reliability

In this paper, reliability in an ID case is considered. For a given tracked object at a given time, the underlying declarations, the Bayesian processing steps, the intermediate results, and the final identity all together form a particular ID case. Since the identification result is input for further action by operational users, judgement of the overall reliability of this ID case is important. For this purpose of judging the overall reliability, adequate reliability measures need to be provided. Based on these reliability measures, a user might decide, for example, whether or not to engage a contact which has been assessed as a hostile track. Reliability inversely corresponds to uncertainty. In the literature, in the context of Bayesian identification, entropy-based approaches seem to be most commonly used to measure the uncertainty of fusion results ([2, p. 502], [9, p. 173]). Another interesting aspect of uncertainty discussed in the literature is the occurrence of conflicting source information. For Bayesian networks, several approaches to define conflicts are available ([10, pp. 99, 179, 190-191], [11]). Measures proposed in the literature do not completely fulfill the requirements of operational users, because these approaches each only focus on one selected aspect of the overall reliability. We intend to give a set of indicators and alerts to describe the relevant reliability aspects of an ID case more comprehensively. Uncertainties based on inaccurate configuration data due to improper retrieval or missing knowledge are excluded from our treatment. For details of appropriate data acquisition we refer to [12].
3.1 Ergonomic Requirements

In military systems, the final identification authority should be left to the user for several operational and legal reasons. However, an automated identification support function offers far-reaching assistance. Every operational user acts within an assigned role. Due to page limitations, we can only sketch typical roles related to identification, exemplarily for a naval unit, e.g. a frigate. Role allocation in other military contexts is similar. The Sensors & Sources Operator controls and monitors the sensors and ID sources of the ship. Operation of the overall identification process is the duty of the Identification Officer. There are two major consuming roles, the Commanding Officer (CO) and the Warfare Commander (WFC). They have similar duties but act on different echelons: the CO is responsible for the deployment of his ship, whereas the WFC manages the mission of the task force for the given regional and warfare area. When providing measures on reliability to operational users, certain ergonomic aspects need to be taken into account.
For the visual presentation of information on a display, the ISO 9241 standard recommends in part 12 the following ergonomic principles: clarity, discriminability, conciseness, consistency, detectability, legibility, and comprehensibility; for details see [13, p. 7]. In this subsection we only consider ergonomic principles that are relevant to the definition of reliability measures. For aspects and details of the design of corresponding operational interfaces we refer to [14] and [15]. The reliable detection of irregularities and of routine cases in the provision of identities by the identification process is the main operational objective of defining measures. Compliance with the ergonomic principles listed at the beginning of this subsection supports this objective by providing usability. The usage of appropriate wording from the operational expert's language for the description of measures, measured aspects, and context is essential for all named ergonomic principles. An operational user needs measures that are intuitively understandable, applicable, and appropriate, regardless of his technical understanding of process details. In particular, this is due to the principles of discriminability, clarity, and comprehensibility. These three ergonomic principles, together with conciseness and detectability, imply that there should be as many as necessary and as few as possible different measures to describe the overall reliability. Obviously, the selection of measures to be presented to a user also depends on his operational role. To attain consistency and legibility we have the following requirement: the presentation of the content of different reliability measures should be as similar as possible. Subsequently, in order to describe the different aspects of reliability, we will use only two types of measures: indicators and alerts. An indicator is a continuous measure that provides a result within [0, 1]. For all indicators, the value 0 represents the worst, the value 1 the best possible rating of the considered reliability aspect. Conversely, alerts are measures that can provide only the two ratings OK and Warning. Based on the previous considerations, we define indicators and alerts for different aspects of reliability in the following subsections. The necessity of these measures is discussed in section 4. The Operational ID process model given in figure 3 provides a simple context for an intuitive understanding of these reliability aspects. This means that an operator who is familiar with this operational model should be able to understand the introductory, non-technical explanations of the considered aspects, provided at the beginning of each of the following subsections. In this sense, the Operational ID process model enables the correct interpretation and application of indicators and alerts by operational users.

Figure 3: Operational ID process model

3.2 Combined ID Sources Information
The reliability aspect Combined ID Sources Information describes the information content of all fused ID source declarations. In figure 3, this measure is retrieved at label 1, after the fusion step in the Operational ID process model. The underlying uncertainty originates from measurement interpretation uncertainties on the one hand, and from sensor measurement errors and inaccuracies on the other hand. Concerning the former, consider the exemplary question of how to interpret an ID source declaration stating a certain type of aircraft that is flown by neutral as well as opponent forces. Obviously, an operator's judgement on the reliability of an identification result strongly depends on the number of contributing sources and the significance of their declarations. In section 2.2 the fused information of all ID source declarations and its uncertainty is expressed by stochastic means, i.e., the Combined Likelihood Vector CLV, see equation (1). Assuming a uniform a priori distribution on Operational Object States, i.e., p̃(O_j) := 1/M instead of p(O_j), and p̃(d_i | O_j) := p(d_i | O_j) for all i ∈ {1, ..., N}, the operationally unbiased distribution (p̃(O_j | d))_{j=1,...,M} with d := (d_1, ..., d_N) can be calculated by application of the Theorem of Bayes. Based on the well-known concept of entropy ([2], [9]), the Combined ID Sources Information Indicator I_CISI(CLV) for the information content of the CLV is defined as

I_CISI(CLV) := 1 − ( − ∑_{j=1}^{M} p̃(O_j | d) · log_2(p̃(O_j | d)) ) / log_2(M).   (5)
Due to the properties of entropy, I_CISI(CLV) ∈ [0, 1] for all CLV. In case of no source information, i.e., no declarations by any ID source or mutual extinction, we have I_CISI = 0. In the opposite case with a maximum of source information (i.e., a CLV with p̃(d | O_j) = 0 for all but one j ∈ {1, ..., M}) we have I_CISI = 1. Therefore, values between 0 and 1 of the Combined ID Sources Information Indicator give a grading of the available information provided by the ID sources, with 0 representing the worst and 1 the best reliability for this aspect.
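A minimal sketch of how I_CISI could be computed from the Source Likelihood Vectors of the contributing ID sources is given below. Treating the absence of declarations and mutual extinction as special cases returning 0 follows the description above; the numerical guard eps is an implementation assumption of this sketch.

```python
import numpy as np

def combined_id_sources_information(slvs, num_states, eps=1e-12):
    """Combined ID Sources Information Indicator I_CISI (equation (5)).

    slvs       : list of Source Likelihood Vectors p(d_i | O_j) of the contributing sources.
    num_states : number M of Operational Object States.
    Returns a value in [0, 1]; 0 = no information, 1 = maximal information.
    """
    if len(slvs) == 0:
        return 0.0                               # no declarations at all
    clv = np.asarray(slvs, dtype=float).prod(axis=0)
    if clv.sum() <= eps:
        return 0.0                               # mutual extinction of the declarations
    # Operationally unbiased posterior: the uniform prior 1/M cancels in the normalization.
    p_tilde = clv / clv.sum()
    entropy = -np.sum(p_tilde * np.log2(np.clip(p_tilde, eps, 1.0)))
    return float(1.0 - entropy / np.log2(num_states))
```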
3.3 ID Decision Stability

The reliability aspect ID Decision Stability is measured at label 2 in the Operational ID process model in figure 3.
Considering the ID Decision step, this aspect describes the robustness of the selection of the final identity against (small) fluctuations in the fused information of the ID source declarations. Given a small stability, the final identity might change even due to an alteration of a declaration by the least significant ID source. With a high stability, on the other hand, only a significant ID source can cause a change of identity on its own. Consequently, for judging an identity as reliable, amongst other criteria, a sufficiently high stability is required. The ID Decision Stability Indicator for a given PLV is defined as the (adequately scaled) distance to the closest PLV which yields a substantially different identity. Here, the expression 'substantially different identity' means that, for the definition of this indicator, the identities Friend and Assumed Friend are considered to be one (combined) identity; the same applies to the identities Hostile and Suspect. Heading towards a more precise definition, we denote the set of all Posterior Likelihood Vectors by

∆ := { (x_1, ..., x_M) ∈ IR^M | x_j ≥ 0, ∑_{j=1}^{M} x_j = 1 }   (6)

and provide the following denotations:

∆[F] := ∆[AF] := {p ∈ ∆ | id*(p) ∈ ID \ {F, AF}},
∆[N] := {p ∈ ∆ | id*(p) ∈ ID \ {N}},
∆[H] := ∆[S] := {p ∈ ∆ | id*(p) ∈ ID \ {H, S}},
∆[U] := {p ∈ ∆ | id*(p) ∈ ID \ {U}}.

Then the ID Decision Stability Indicator is defined by

I_IDS(PLV) := ( min_{p ∈ ∆[id*(PLV)]} |PLV − p| ) / ( max_{q ∈ ∆} min_{p ∈ ∆[id*(q)]} |q − p| )   (7)

with |.| denoting the Euclidean norm. The denominator in equation (7) is only for scaling. Equation (7) also yields I_IDS(PLV) ∈ [0, 1] for all PLV. Therefore, the ID Decision Stability Indicator provides a grading of the tendency of the resulting identity to switch due to small changes of the contributing declarations of ID sources. An indicator value close to 0 reflects an unstable decision, a value near 1 the best possible stability.

3.4 ID Decision Adequacy

ID Decision Adequacy is a reliability aspect that measures how accurately the result of the ID Decision step (i.e., the provided final identity) represents the information given by the fusion of the ID source declarations. The measurement of this aspect in the Operational ID process model in figure 3 takes place at label 3. In certain situations, the information provided on the tracked object by the fusion of all ID source declarations exactly describes all relevant operational properties. But there might be other situations where the selected identity is no appropriate representation of this combination of operational properties. Consider as an example a civil airliner of a hostile nation, which travels on a civil air route under air traffic regulations, but departed from an opponent's airport. Information on this track correctly refers to both neutral allegiance and enemy status, but neither the Neutral nor the Suspect or Hostile identity is an appropriate selection. On the other hand, Hostile is an adequate identity for a bomber aircraft attacking own forces. Obviously, adequacy is low in the first example, and high in the second one. For a given PLV, risk and adequacy of an identity inversely correspond to each other. As described in subsection 2.3, the identity decision in the Bayesian identification process selects the identity with the lowest risk, i.e., with the minimal expected (negative) consequences, and therefore with the best adequacy. Due to the ergonomic considerations in subsection 3.1, we choose adequacy instead of risk to describe this particular aspect of reliability. Nevertheless, the definition of the corresponding indicator is based upon the calculated risk provided by the Bayesian identification process: With the minimal risk r_min := min_{p ∈ ∆} risk(p, id*(p)) and the maximal risk r_max := max_{p ∈ ∆} risk(p, id*(p)), the ID Decision Adequacy Indicator is defined by

I_IDA(PLV) := 1 − ( risk(PLV, id*(PLV)) − r_min ) / ( r_max − r_min ).   (8)

Again, we have I_IDA(PLV) ∈ [0, 1]. Therefore, this indicator gives a grading between 0 and 1, stating how well the combined information on the operational object state of the considered tracked object is described by the resulting identity. A value close to 0 reflects the worst possible compliance, whereas an optimal compliance is reported by an indicator value near 1.
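Both decision-related indicators involve minima and maxima over the whole simplex ∆ of Posterior Likelihood Vectors. The sketch below approximates these extrema by uniform random sampling of ∆; this sampling scheme, the sample count, and the risk-table layout are assumptions of the sketch and not the computation method of the paper, which leaves the evaluation of equations (7) and (8) open.

```python
import numpy as np

# Identities whose distinction is ignored for the stability indicator
# (Friend/Assumed Friend merged, Hostile/Suspect merged).
MERGED = {"F": {"F", "AF"}, "AF": {"F", "AF"},
          "H": {"H", "S"},  "S": {"H", "S"},
          "N": {"N"}, "U": {"U"}}

def id_star(p, risk_table):
    """Risk-minimizing identity id*(p) of equations (3) and (4)."""
    return min(risk_table, key=lambda ident: float(np.dot(p, risk_table[ident])))

def decision_indicators(plv, risk_table, samples=2000, seed=0):
    """Monte-Carlo approximation of I_IDS (equation (7)) and I_IDA (equation (8))."""
    rng = np.random.default_rng(seed)
    plv = np.asarray(plv, dtype=float)
    pts = rng.dirichlet(np.ones(len(plv)), size=samples)  # random points of the simplex ∆
    ids = [id_star(p, risk_table) for p in pts]
    own = id_star(plv, risk_table)

    # I_IDA: risk of the actual decision, rescaled between the extreme risks over ∆.
    risks = np.array([float(np.dot(p, risk_table[i])) for p, i in zip(pts, ids)])
    r_min, r_max = risks.min(), risks.max()
    r_plv = float(np.dot(plv, risk_table[own]))
    i_ida = 1.0 - (r_plv - r_min) / (r_max - r_min)

    # I_IDS: distance to the closest sampled point carrying a substantially different
    # identity, scaled by the largest such distance found anywhere on ∆.
    masks = {ident: np.array([i not in MERGED[ident] for i in ids])
             for ident in risk_table}

    def dist_to_other(point, ident):
        mask = masks[ident]
        return np.linalg.norm(pts[mask] - point, axis=1).min() if mask.any() else 0.0

    numer = dist_to_other(plv, own)
    denom = max(dist_to_other(q, iq) for q, iq in zip(pts, ids))
    i_ids = numer / denom if denom > 0 else 1.0
    return i_ids, i_ida
```

An exact evaluation would instead exploit the piecewise-linear geometry of the identity regions visible in figure 5; the sampling version is merely meant to make the definitions tangible.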
3.5 ID Source Conflict

The reliability aspect ID Source Conflict is measured at label 4 in the Operational ID process model in figure 3. This aspect describes the situation that (at least) one ID source provides information to the Fusion step that apparently contradicts the fused information provided by the other ID sources, with both contradicting sides having a high likelihood. According to [10, p. 99], the coherence of ID source information in such situations is disturbed due to flawed declarations, due to facing a rare case, or due to having a situation not covered by the underlying model of identification. As an example, consider a bomber attacking own forces (hostile declaration) which shows a valid IFF mode 4 response (friendly declaration). This can be due to an unlikely case of fratricide, or due to the disregarded problem of modeling compromised IFF mode 4 codes. For the definition of the ID Source Conflict Alert we use a Bayesian network related approach, following [10, pp. 99, 175-176] and [11]: Correct declarations from a coherent situation covered by the model support each other, and are expected to be positively correlated.
Slightly differing from [10], we use the uniform distribution instead of the a priori probabilities p(O_j), because we do not want a technically oriented alert criterion for source conflicts to be biased by operational a priori expectations. Let us denote p̃(d) := ∑_{j=1}^{M} p(d | O_j) · 1/M for all single declarations d = d_i ∈ D_i with i ∈ {1, ..., N} and for a combined declaration d = (d_1, ..., d_N). Then, the ID Source Conflict Alert A_ISC(d_1, ..., d_N) is defined to have the value Warning iff the inequality

log_2( ( ∏_{i=1}^{N} p̃(d_i) ) / p̃(d_1, ..., d_N) ) > ε_conf   (9)

holds for a given threshold ε_conf > 0, which is intended to suppress small fluctuations. Otherwise, A_ISC(d_1, ..., d_N) has the value OK. Therefore, the ID Source Conflict Alert reports whether the provided declarations of different ID sources are consistent (A_ISC = OK) or contain possible inconsistencies (A_ISC = Warning).
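A direct transcription of equation (9) into code could look as follows; as before, the slvs argument holds the Source Likelihood Vectors p(d_i | O_j) of the sources that actually contributed a declaration, and the parameter names are assumptions of this sketch.

```python
import numpy as np

def id_source_conflict_alert(slvs, eps_conf):
    """ID Source Conflict Alert A_ISC (equation (9)).

    slvs     : list of Source Likelihood Vectors p(d_i | O_j) of the contributing sources.
    eps_conf : conflict threshold ε_conf > 0 from the configuration.
    Returns "Warning" if the declarations appear mutually inconsistent, else "OK".
    """
    slvs = np.asarray(slvs, dtype=float)
    m = slvs.shape[1]
    # p̃(d_i): likelihood of each single declaration under a uniform prior 1/M.
    p_single = slvs.mean(axis=1)
    # p̃(d_1, ..., d_N): likelihood of the combined declaration under a uniform prior.
    p_combined = slvs.prod(axis=0).sum() / m
    conflict = np.log2(np.prod(p_single) / p_combined)
    return "Warning" if conflict > eps_conf else "OK"
```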
3.6 Unexpected ID Situation

Label 5 in the Operational ID process model in figure 3 shows the point of retrieval of the reliability aspect Unexpected ID Situation. Such a situation is given if the information measured by the ID sources deviates significantly from the operational a priori expectation, i.e., the tracked object shows operational properties that are expected to be rare. A deviation from expected patterns can be due to specific object characteristics, but can also result from flawed identity processing. Therefore, it is relevant for judging the reliability of a calculated identity. In addition, objects with unexpected patterns of operationally relevant behavior and properties are per se interesting from the operational perspective. Given the ID source declarations in case of an Unexpected ID Situation, the probability of this combination of declarations is significantly reduced if a meaningful a priori expectation (p(O_j))_{j=1,...,M} is applied instead of the uniform distribution. Using the denotation of p̃(d) introduced in subsection 3.5, the Unexpected ID Situation Alert A_UIS(d) with d := (d_1, ..., d_N) is defined as the following inequality being true:

p̃(d) / p(d) = ( ∑_{j=1}^{M} p(d | O_j) · 1/M ) / ( ∑_{j=1}^{M} p(d | O_j) · p(O_j) ) > 1 + ε_unex   (10)

with a threshold ε_unex > 0 to suppress minor fluctuations. Note in equation (10) that this alert can only occur if a meaningful a priori expectation (p(O_j))_{j=1,...,M} is given, which significantly differs from the uniform distribution. This alert points out the occurrence of a rare combination of operational properties to the operational user by showing the value Warning. In most cases, these rare combinations are of particular operational interest. For more likely combinations, the value OK is given.
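Analogously, equation (10) can be evaluated directly from the Source Likelihood Vectors and the configured a priori expectation; again, the function signature is an assumption of this sketch.

```python
import numpy as np

def unexpected_id_situation_alert(slvs, priors, eps_unex):
    """Unexpected ID Situation Alert A_UIS (equation (10)).

    slvs     : list of Source Likelihood Vectors p(d_i | O_j) of the contributing sources.
    priors   : operational a priori expectation p(O_j).
    eps_unex : threshold ε_unex > 0 from the configuration.
    Returns "Warning" for a rare (unexpected) combination of declarations, else "OK".
    """
    slvs = np.asarray(slvs, dtype=float)
    priors = np.asarray(priors, dtype=float)
    combined = slvs.prod(axis=0)                  # p(d | O_j) for the combined declaration d
    p_uniform = combined.mean()                   # p̃(d): uniform prior 1/M
    p_expected = float(np.dot(combined, priors))  # p(d): operational a priori expectation
    return "Warning" if p_uniform / p_expected > 1.0 + eps_unex else "OK"
```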
4 Examples

In our examples, a frigate provides air surveillance of the airspace above a littoral area. Due to disputed territorial claims, a crisis has been provoked by Opland, a small nation in this region, which has attacked allied forces on several occasions. By means of its Combat Direction System (CDS), the frigate tracks all air contacts in this region and reports an identity for each tracked object to a higher echelon command center. The air force of Opland is very small but includes some new-generation attack aircraft. There is a significant presence of own forces in the area. Own civil air traffic avoids passage of this region, but civil and military neutral traffic is going on. To perform its surveillance task, the frigate is equipped with IFF Mode 3 and IFF Mode 4 devices, as well as Electronic Support Measures (ESM) which can discriminate friendly, neutral, and hostile emission types. In addition, the
CDS is capable of detecting track origins (Identification by Origin, IDBO) and Hostile Acts (HA), e.g. the release of a bomb. The configuration data of the Bayesian identification process, which is part of the CDS, are given in figure 4: For each ID source i, the tables on the left side show the feasible declarations d_i ∈ D_i = {d_{i,1}, ..., d_{i,N_i}} with N_i = 2 for i ∈ {1, 2, 5} and N_i = 3 for i ∈ {3, 4}, as well as the conditional probabilities p(d_i | O_j) for every Operational Object State, i.e., O_1 = OF, O_2 = NA, and O_3 = EF. The a priori expectation p(O_j) is given in the upper middle, and the risk values r(O_j, id) are listed in the table at the lower right for id ∈ {F, AF, N, S, H, U}. At the upper right, the two thresholds ε_conf and ε_unex are defined. Based on these data, the Bayesian identification process provides identities of tracked objects, which are computed as described in section 2. In addition, the indicators and alerts of reliability are calculated by the CDS as defined in subsections 3.2 to 3.6.

Figure 4: Exemplary configuration of a Bayesian identification process
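The following sketch shows one possible way to hold such a configuration in memory. The data layout is an assumption, and all numbers are illustrative placeholders only; they are not the values shown in figure 4, which is not reproduced in this text.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class IdentificationConfig:
    states: List[str]                              # Operational Object States, e.g. OF, NA, EF
    slv_tables: Dict[str, Dict[str, List[float]]]  # per ID source: declaration -> p(d | O_j)
    priors: List[float]                            # a priori expectation p(O_j)
    risk: Dict[str, List[float]]                   # identity -> r(O_j, id)
    eps_conf: float = 1.0                          # threshold of the ID Source Conflict Alert
    eps_unex: float = 1.0                          # threshold of the Unexpected ID Situation Alert

# Placeholder configuration for two of the five ID sources (values are invented):
config = IdentificationConfig(
    states=["OF", "NA", "EF"],
    slv_tables={
        "IFF3": {"d1,1": [0.7, 0.6, 0.1], "d1,2": [0.3, 0.4, 0.9]},
        "ESM":  {"d3,1": [0.8, 0.1, 0.1], "d3,2": [0.1, 0.8, 0.1], "d3,3": [0.1, 0.1, 0.8]},
    },
    priors=[0.4, 0.5, 0.1],
    risk={"F": [0.0, 0.4, 1.0], "AF": [0.1, 0.3, 0.9], "N": [0.5, 0.0, 0.8],
          "S": [0.9, 0.3, 0.1], "H": [1.0, 0.5, 0.0], "U": [0.5, 0.5, 0.5]},
)
```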
Figure 5: Illustration of the PLV probability space

Figure 5 illustrates the 3-dimensional probability space with the components OF, NA, EF of all possible Posterior Likelihood Vectors PLV by a bijective projection into the OF-NA-plane. Every PLV can be mapped into the triangle by registering its first and second probability distribution components and omitting its third. The vertices OF, NA, and EF of the triangle represent PLV = (1.0, 0, 0), PLV = (0, 1.0, 0), and PLV = (0, 0, 1.0). Additionally, figure 5 graphically displays the risk values of the exemplary configuration defined in figure 4 by showing identity areas, in which a given PLV is mapped to the corresponding identity. In the following, we discuss six different ID cases: five with reliability problems and one case with excellent reliability. For each ID case, the Posterior Likelihood Vector is illustrated in figure 5. The results of the ID cases, the values of indicators and alerts, and the underlying ID source declarations are summarized in table 1. Each of the columns 2-7 represents an ID case. In the upper part (rows 2-6), the contributing declarations for this ID case are listed, where d_{i,j} denotes the j-th declaration of ID source i according to figure 4, and '-' stands for no declaration. Remember that an ID source is temporarily removed from the processing if it does not provide a declaration. For every indicator and alert, the calculated values for the ID cases are given in the lower part of table 1 (rows 7-11). The last row shows the resulting identity for the ID cases.
ID Case:    case 1    case 2    case 3    case 4    case 5    case 6
IFF 3       d1,1      d1,1      -         -         d1,2      d1,1
IFF 4       -         -         d2,2      d2,1      d2,2      -
ESM         -         d3,1      d3,3      -         d3,3      d3,2
IDBO        -         -         -         d4,1      d4,3      d4,2
HA          -         -         -         d5,1      -         -
I_CISI      0.055*    0.270     0.624     0.638     0.961     0.616
I_IDS       0.332     0.001*    0.231     0.399     0.656     0.863
I_IDA       0.629     0.552     0.037*    0.845     0.936     0.929
A_ISC       OK        OK        OK        Warning*  OK        OK
A_UIS       OK        OK        OK        OK        Warning*  OK
Identity    N         AF        S         F         H         N

Table 1: Summarized results of the ID case examples (critical values are marked with *)
In our experience as researchers with some knowledge of the operational background, for this exemplary scenario a numerical value of an indicator of less than 0.15 has to be judged as critical, offering not enough reliability. In table 1, critical values of indicators and alerts are marked with an asterisk. Indicator values within the range [0.15, 0.3] are sufficiently acceptable, and the range [0.3, 0.5] contains good values. In case of a value above 0.5, the underlying reliability aspect is considered to be excellent in that ID case. Obviously, to a certain degree these values are subject to different operational appraisals and need to be customized for given technical and operational environments.

ID case 1 must be considered a case with insufficient information for reliable identification: According to column case 1 in table 1, the ID source IFF Mode 3 provides the declaration d1,1: "Valid IFF Mode 3 Code". All other ID sources do not contribute a declaration. The calculated identity of the tracked object is Neutral. Since the only underlying declaration contains no significant information, the corresponding Combined ID Sources Information Indicator I_CISI has a critical value: I_CISI = 0.055. The indicator values of ID Decision Stability I_IDS and ID Decision Adequacy I_IDA are good (I_IDS = 0.332) and excellent (I_IDA = 0.629). The alerts of an ID Source Conflict A_ISC and an Unexpected ID Situation A_UIS both indicate OK. By this combination of indicator and alert values, the particular reliability problem of no significant source information is correctly indicated.

In ID case 2 the identity Assumed Friend is provided. The critical value of the ID Decision Stability Indicator (I_IDS = 0.001) shows that the resulting identity is very likely to switch to another identity if there is only a slight change in the source declarations. This can also be seen in figure 5, where the PLV of ID case 2 is close to the borderline between the Unknown and the Assumed Friend area.

A critical value I_IDA = 0.037 of the ID Decision Adequacy in ID case 3 indicates that the identity Suspect does not adequately reflect the underlying fused ID source declarations d2,2: "No or Invalid Response" of the ID source IFF Mode 4 and d3,3: "Hostile Emission Type" of the ID source ESM. This is due to the fact that the ID decision according to equations (3) and (4) is based on a high risk in this ID case.

The ID declarations of ID case 4 are ambiguous. A declaration d2,1: "Valid Response" from the ID source IFF Mode 4 and a declaration d4,1: "Friendly Area of Origin" from the ID source IDBO imply a friendly nature of this track. This strongly conflicts with the declaration d5,1: "Performed Hostile Act" from the ID source Hostile Act. Nevertheless, the identity Friend is calculated due to the significance of the ID source IFF Mode 4. The reliability problem of an ID Source Conflict is reported by the alert A_ISC, which states Warning.

All indicators in ID case 5 have excellent values and no ID Source Conflict is present, but the Unexpected ID Situation Alert A_UIS states Warning. This is due to the fact that the resulting identity Hostile does not meet the a priori expectation, which states that in the given operational setting a contact with hostile characteristics is unlikely compared to a contact with friendly or neutral characteristics. ID case 6 is an example without any reliability problem and with excellent indicator values.

The analysis of the six ID cases suggests that the judgement of the overall reliability of an identification result by operational users requires all of the proposed indicators and alerts. This is due to the fact that in each of ID cases 1 to 5 a problem with one reliability aspect was present, whereas all other reliability measures showed at least sufficient values. Omitting an indicator or alert would imply that one of the problems of reliability in ID cases 1 to 5 could not be detected. Therefore, every indicator and alert is indispensable, because they all measure different particular aspects of the overall reliability of the ID case, which are not completely covered by any other indicator or alert.
5 Conclusions

In this paper, aspects of the reliability of resulting identities have been analyzed for a typical Bayesian identification process. Based on ergonomic requirements, we have defined and discussed a set of indicators and alerts to measure the different aspects of the overall reliability. An operational user needs to decide whether he can rely on the provided identity, even without having technical knowledge of the underlying identification process. The proposed measures support this judgement of a user. For other Bayesian-based identification processes, the defined indicators and alerts are expected to be easily transferable. Further research will address the question whether the combination of the proposed indicators and alerts covers all relevant aspects of the overall reliability from an operational viewpoint. Additionally, we will consider operational roles related to identification and provide an allocation of indicators and alerts to operational roles. Future work will include a prototypical implementation to be able to perform an extended evaluation of the practical value and additional benefit of the proposed reliability measures.
References

[1] D. L. Hall and S. A. McMullen, Mathematical Techniques in Multisensor Data Fusion, 2nd ed. Boston, London: Artech House Publishers, 2004.
[2] S. Blackman and R. Popoli, Design and Analysis of Modern Tracking Systems. Boston, London: Artech House Publishers, 1999.
[3] E. Waltz and J. Llinas, Multisensor Data Fusion. Boston, London: Artech House Publishers, 1990.
[4] P. van Gosliga and H. Jansen, "A Bayesian Network for Combat Identification," in RTO IST Symposium on 'Military Data and Information Fusion' (Prague, Czech Republic, 20-22 October 2003), ser. RTO Meeting Proceedings MP-IST-040. NATO Research & Technology Organization, March 2004.
[5] H. Leung and J. Wu, "Bayesian and Dempster-Shafer target identification for radar surveillance," IEEE Transactions on Aerospace and Electronic Systems, vol. 36, no. 2, pp. 432-447, April 2000.
[6] C. Stroscher and F. Schneider, "Comprehensive Approach to Improve Identification Capabilities," in RTO IST Symposium on 'New Information Processing Techniques for Military Systems' (Istanbul, Turkey, 9-11 October 2000), ser. RTO Meeting Proceedings RTO-MP-049. NATO Research & Technology Organization, April 2001.
[7] J. Ziegler, "Objektcharakterisierung im heterogenen Sensorverbund," in GI Jahrestagung 'Informatik LIVE' (Bd. 2), ser. Lecture Notes in Informatics. Gesellschaft für Informatik e.V. (GI), 2005, pp. 312-316 (in German).
[8] S. Gelsema, "The Desirability of a NATO-Central Database for Non-Cooperative Target Recognition of Aircraft," in RTO SET Symposium on 'Target Identification and Recognition Using RF Systems' (Oslo, Norway, 11-13 October 2004), ser. RTO Meeting Proceedings MP-SET-080. NATO Research & Technology Organization, October 2004.
[9] Éloi Bossé, J. Roy, and S. Wark, Concepts, Models, and Tools for Information Fusion. Boston, London: Artech House, 2007.
[10] F. V. Jensen and T. D. Nielsen, Bayesian Networks and Decision Graphs. New York: Springer Science, 2007.
[11] K. B. Laskey, "Conflict and surprise: Heuristics for model revision," in Proceedings of the 7th Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers, 1991, pp. 197-204.
[12] M. Krüger and J. Ziegler, "User-Oriented Bayesian Identification and Its Configuration," in Proceedings of the 11th International Conference on Information Fusion (Cologne, Germany, 30 June - 3 July 2008). ISIF, July 2008.
[13] Deutsches Institut für Normung e.V., "Teil 12: Informationsdarstellung," in DIN EN ISO 9241: Ergonomische Anforderungen für Bürotätigkeiten mit Bildschirmgeräten. Berlin: Beuth Verlag GmbH, 1998 (German version of the ISO 9241-12 standard).
[14] C. M. Burns and J. R. Hajdukiewicz, Ecological Interface Design. Boca Raton, London, New York, Washington D.C.: CRC Press, 2004.
[15] L. Schmidt, C. M. Schlick, and J. Grosche (Eds.), Ergonomie und Mensch-Maschine-Systeme. Berlin, Heidelberg: Springer-Verlag, 2008 (in German).