Approved for public release; distribution is unlimited.

Shared Situation Awareness For Army Applications

June 2003

William J. Farrell III, Stephen Jameson, and Craig Stoneking
Lockheed Martin Advanced Technology Laboratories
Cherry Hill, New Jersey 08002
[wfarrell, sjameson, cstoneki]@atl.lmco.com

ABSTRACT

The real-time manned-unmanned teaming of on-the-move Army assets will provide mobile commanders and warfighters with improved situation awareness through the sharing and fusion of information from heterogeneous distributed data sources. Lockheed Martin Advanced Technology Laboratories (ATL) is improving situation awareness through three ATL-developed technologies: adaptive, modular, multi-sensor information fusion; Grapevine agent-based intelligent data dissemination; and agent-based data discovery. These technologies address the three key challenges to shared situation awareness: how to consume data, when to disseminate data, and where to access data. In particular, they focus on the general task of real-time distributed heterogeneous information fusion in highly dynamic ad hoc networked environments. In this paper we describe the three technologies that enable real-time manned-unmanned teaming for shared situation awareness, as well as active research and development of new algorithms and architectures for these technologies.

1. Introduction

The Army research community is promoting the development of technologies that provide a shared situation awareness capability to support the real-time teaming of manned-unmanned Army assets. This teaming is critical to the overall Army objective of seeing first, understanding first, acting first, and finishing decisively. The Army’s objective is exemplified by the Airborne Manned-Unmanned Systems Technology Demonstration (AMUST-D) and the Hunter Standoff Killer Team (HSKT) Advanced Concept Technology Demonstration (ACTD) programs, which are led by the U.S. Army Aviation Applied Technology Directorate (AATD). These programs provide airborne warfighters and mobile commanders with improved situation awareness from the sharing and fusion of information provided by heterogeneous distributed data sources. This shared situation awareness capability provides a Common Relevant Operational Picture (CROP) that is tailored to the individual warfighter or mobile commander. As a result, the warfighter is able to obtain timely, accurate, mission-specific information derived from a variety of distributed Army assets. The key challenges to shared situation awareness include: how to consume data, when to disseminate data, and where to access data. These challenges are amplified in manned-unmanned teams

where the source, type, and fidelity of information are more diverse. To address these challenges, Lockheed Martin Advanced Technology Laboratories (ATL), currently under contract to AATD, is combining three ATL-developed technologies: adaptive, modular, multi-sensor information fusion; Grapevine agent-based intelligent data dissemination; and agent-based data discovery. These technologies are distributed among a team consisting of (Figure 1): an Apache Longbow, an Army Airborne Command and Control System (A2C2S) Blackhawk, and a UAV sensor platform. This paper describes the current state of ATL-developed technologies being applied to the AMUST-D/HSKT programs. In addition, active research and development efforts are presented and their benefits explored in detail.

Figure 1. AMUST-D/HSKT Shared Situation Awareness Architecture

2. Applied Technologies

The Artificial Intelligence Laboratory at ATL has been developing information fusion and situation awareness capabilities for more than 10 years, including the Army’s successful Rotorcraft Pilot’s Associate (RPA) program. In particular, our recent developments in intelligent information agents have resulted in additional technology for information dissemination, retrieval, and monitoring on the digital battlefield. The Grapevine Information Dissemination Architecture, a specialized application of intelligent agents, was developed to support opportunistic exchange of relevant data between distributed forces over bandwidth-limited networks.

2.1 Adaptive, Modular, Multi-Sensor Information Fusion

From 1993 to 1999, ATL participated in the Army’s RPA Advanced Technology Demonstration, sponsored by AATD. ATL developed a Multi-Sensor Information Fusion system [1] that provides a common fused track picture to the RPA pilot and the RPA decision aiding systems. In the RPA fusion system, data representing as many as 200 battlefield entities, from 14 different types of onboard and offboard sensors, is correlated and fused in real-time to form a consolidated picture of the battlespace. The RPA system, including ATL’s fusion system, was successfully flight tested on an AH-64/D in August 1999. This fusion system (Figure 2) consists of four main components: Fusion Dispatch, Fusion Control, Fusion Kernel, and Track Management.

Figure 2. ATL’s Adaptive, Modular, Multi-Sensor Information Fusion System


This fusion system adapts to data inputs and real-time needs through the use of Fusion Dispatch and Fusion Control components. These components control the algorithms being applied to the input data and ensure that the fusion system is meeting real-time and resource usage requirements. The Fusion Dispatch component evaluates incoming data, determines which algorithm set – embodied in a Fusion Kernel component – should be applied to fuse the data, and dispatches the appropriate kernel to process the data set. The Fusion Control component monitors the performance and resource usage of the fusion processes, and applies control measures such as prioritization or down-sampling of input data when fusion processes begin to exceed resource limitations (such as memory and CPU usage). The separation of top-level control (provided by Fusion Dispatch and Fusion Control) from fusion algorithms (contained within the Fusion Kernel) allows the fusion system to be readily configured to meet a variety of performance and resource usage requirements in different applications.

The fusion system is modular through the use of Source-Specific Input and Output modules. These modules manage real-time interfaces with the sensor systems and other fusion-related components. The input modules provide pre-processing of sensor data (Level 0 Fusion), transforming this data into a common format for consumption by subsequent fusion processing components. The output modules provide the fused track database to clients, using a client-specific protocol and client-specific messaging format. The use of these input/output modules facilitates portability and flexibility of the fusion system. For example, in the AMUST-D application, the same core fusion processes are used; however, the input/output modules are adapted to the specific Apache/A2C2S systems. A Track Management component stores all track data and maintains relationships among individual source tracks and fused tracks.
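The dispatch/control separation described above can be sketched as follows. This is a minimal illustration with hypothetical class and method names; the fielded system is far more elaborate.

```python
# Minimal sketch (hypothetical names) of the dispatch/control split:
# top-level control routes data to type-specific fusion kernels and
# down-samples input when resource limits would be exceeded.

class FusionKernel:
    """Base class: one kernel per input data type (e.g. MTI, track, intel)."""
    def fuse(self, batch):
        raise NotImplementedError

class MTIKernel(FusionKernel):
    def fuse(self, batch):
        # Placeholder for the MTI algorithm set.
        return [("fused", obs) for obs in batch]

class FusionDispatch:
    def __init__(self):
        self._kernels = {}

    def register(self, data_type, kernel):
        self._kernels[data_type] = kernel

    def dispatch(self, data_type, batch):
        # Select the kernel embodying the algorithm set for this input type.
        return self._kernels[data_type].fuse(batch)

class FusionControl:
    def __init__(self, max_batch):
        self.max_batch = max_batch

    def regulate(self, batch):
        # Down-sample input when it would exceed resource limits.
        if len(batch) <= self.max_batch:
            return batch
        stride = len(batch) // self.max_batch
        return batch[::stride][: self.max_batch]

dispatch = FusionDispatch()
dispatch.register("MTI", MTIKernel())
control = FusionControl(max_batch=100)
fused = dispatch.dispatch("MTI", control.regulate(list(range(500))))
```

Because the control policy lives outside the kernels, either side can be swapped without touching the other, which is the portability property claimed above.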
Additional contextual databases provide access to a variety of data about contributing platform and sensor characteristics used by the fusion system. A set of Fusion Kernel modules performs the heart of the correlation and fusion processing. Just as with the Input/Output modules, the modular nature of the fusion processing makes it possible to encapsulate algorithms tailored to a specific type of input data. As various input types are received, the appropriate Fusion Kernel is applied to that data, with the resulting output passed to the Track Management component for updating the fused track database.

Each Fusion Kernel generally consists of the same processing steps; however, the algorithms contained within each step may differ. As an example, consider the fusion process for Moving Target Indicator (MTI) data (Figure 3). Given the fused track database at time “t”, the first step is Prediction. The Prediction function performs temporal alignment across the sets of data being fused to ensure valid computations in subsequent processing. The next step, Clustering, breaks the track database into geographically distinct clusters to reduce the number of feasible observation-to-track associations. For each time-aligned cluster, Cost Matrices define the relative cost of assigning each sensor observation to each fused track. The Assignment step uses the Jonker-Volgenant-Castanon (JVC) algorithm to assign observations to tracks based upon the costs provided by the cost matrix. Once these assignments are computed, the final step is to perform Fusion of the observation-track pairs in order to update the fused track database.

Figure 3. Fusion Kernel Functional Flow
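The predict/cluster/cost/assign/fuse flow can be sketched as below. This is an illustrative stand-in: SciPy's `linear_sum_assignment` replaces the JVC algorithm, a constant-velocity model replaces the real predictor, and a plain Euclidean cost replaces the system's kinematic/attribute costs.

```python
# Sketch of the kernel's functional flow on toy 2-D data.
import numpy as np
from scipy.optimize import linear_sum_assignment

def predict(tracks, dt):
    # Temporal alignment: propagate each track state [x, y, vx, vy] to t+dt.
    out = tracks.copy()
    out[:, :2] += tracks[:, 2:] * dt
    return out

def cluster(tracks, obs, gate=50.0):
    # Feasibility mask: only nearby track/observation pairs may associate.
    d = np.linalg.norm(tracks[:, None, :2] - obs[None, :, :2], axis=2)
    return d < gate

def assign(tracks, obs):
    # Cost matrix = Euclidean distance; optimal assignment (JVC stand-in).
    cost = np.linalg.norm(tracks[:, None, :2] - obs[None, :, :2], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))

def fuse(tracks, obs, pairs, w=0.5):
    # Update each assigned track position toward its observation.
    out = tracks.copy()
    for ti, oi in pairs:
        out[ti, :2] = (1 - w) * tracks[ti, :2] + w * obs[oi, :2]
    return out

tracks = np.array([[0.0, 0.0, 1.0, 0.0], [100.0, 0.0, 0.0, 0.0]])
obs = np.array([[1.2, 0.1], [99.0, 0.5]])
aligned = predict(tracks, dt=1.0)
mask = cluster(aligned, obs)
pairs = assign(aligned, obs)
updated = fuse(aligned, obs, pairs)
```

In the real kernel the clustering step partitions the problem so that assignment runs per cluster rather than globally, which is what keeps the cost matrices tractable.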


One key aspect of this fusion system is the integration of kinematic (position and velocity) information with Class (vehicle type), ID (specific vehicle type), and IFF (friend/hostile) information. The integration of attribute and kinematic information into the cost matrices and fusion processing enhances track maintenance as well as situation awareness, providing an accurate battlespace picture for the warfighter. The cost matrices and fusion algorithms compare and combine Class and ID information expressed in a class hierarchy (Figure 4). Each sensor input or fused track has a class representation that specifies the confidence for each node of the hierarchy. A set of Modified Bayesian Evidence Combination algorithms developed by ATL is used to compare, combine, and summarize this information. ATL’s development in this area [2] represents a major advance in information fusion technology.

Figure 4. Example of Class and ID Fusion
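A toy illustration of combining class confidences from two sensors over hierarchy nodes is shown below. The naive product-and-renormalize rule is only a stand-in for the Modified Bayesian Evidence Combination algorithms [2], and the node names are loose paraphrases of Figure 4.

```python
# Illustrative stand-in for fusing per-node class confidences; the actual
# Modified Bayesian Evidence Combination algorithms [2] are more sophisticated.
NODES = ["Air", "Land.Wheeled", "Land.Tracked.Armor", "Land.Tracked.AirDefense"]

def combine(conf_a, conf_b):
    # Naive Bayes-style fusion: multiply per-node confidences, renormalize.
    prod = {n: conf_a[n] * conf_b[n] for n in NODES}
    total = sum(prod.values())
    return {n: p / total for n, p in prod.items()}

# One sensor reports "tracked vehicle"; another leans toward air defense.
sensor1 = {"Air": 0.05, "Land.Wheeled": 0.15,
           "Land.Tracked.Armor": 0.40, "Land.Tracked.AirDefense": 0.40}
sensor2 = {"Air": 0.05, "Land.Wheeled": 0.05,
           "Land.Tracked.Armor": 0.10, "Land.Tracked.AirDefense": 0.80}

fused = combine(sensor1, sensor2)
best = max(fused, key=fused.get)
```

Two weak, consistent reports reinforce each other, so the fused confidence concentrates on the tracked air defense node, mirroring the "Resulting Class" behavior in Figure 4.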

2.2 Intelligent Agents for Information Retrieval and Dissemination

Since 1995, ATL has been developing technology for intelligent information agents to support dissemination, retrieval, and monitoring of information on the digital battlefield [3]. An intelligent agent is a persistent software construct that is able to interact with its environment to perform tasks on behalf of the user. Intelligence, in this context, implies that the agent is imbued with some degree of knowledge of its environment or subject domain that allows it to make decisions that affect its behavior in response to its changing environment or problem.

Many applications use mobile agents, which are able to travel between nodes of a network to use resources not locally available. ATL has developed the Extendable Mobile Agent Architecture (EMAA) [4] that provides an infrastructure for the deployment of lightweight intelligent mobile agents. An agent is launched at a processing node with a set of instructions contained in an itinerary, a control construct that permits highly flexible control over agent behavior. Based on conditions or information it encounters, the agent may need to migrate to another node to continue performing its task or locate needed information. EMAA makes use of the portability inherent in the Java™ language to migrate the agent from its current processor to the target platform and execute it on that processor.

The original applications of EMAA involved the use of mobile agents for search, retrieval, and dissemination of intelligence information in battlefield networks. Later applications exploited persistent sentinel agents for monitoring data in distributed information systems to alert a user or client when certain conditions or events occurred. To meet the challenge of supporting information pull as well as push, we began in 1999 to investigate the use of intelligent information agents to provide supplemental data fusion [5] information.
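The itinerary concept described above can be illustrated with a small analogue. EMAA itself is Java-based; the class and method names below are hypothetical and the "migration" is only simulated.

```python
# Illustrative analogue of an agent executing an itinerary of tasks across
# nodes; all names are hypothetical, and migration is simulated rather than
# performed via Java serialization as in EMAA.
class Task:
    def __init__(self, node, action):
        self.node = node      # where this task must run
        self.action = action  # callable performed at that node

class Agent:
    def __init__(self, itinerary):
        self.itinerary = list(itinerary)
        self.location = None
        self.results = []

    def migrate(self, node):
        # Stand-in for code/state transfer to another network node.
        self.location = node

    def run(self):
        for task in self.itinerary:
            if self.location != task.node:
                self.migrate(task.node)
            self.results.append(task.action())
        return self.results

agent = Agent([
    Task("ASAS",  lambda: "query remote intel database"),
    Task("A2C2S", lambda: "deliver results to data fusion"),
])
outcome = agent.run()
```

The itinerary is just data, so the same agent machinery can be re-tasked by handing it a different task list, which is the flexibility the text attributes to EMAA itineraries.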
We focused on the problem of improving the Common Operational Picture (COP) by identifying areas in the fused track picture that could or should be improved through the application of additional data from other non-reporting sources. An example of this is illustrated in Figure 5. In this system, the output of data fusion is collected to form the basis of the COP. A persistent sentinel agent examines the COP to determine areas where additional information is needed. This analysis is based on several criteria, including: (1) areas in which data accuracy or latency does not meet requirements, possibly due to reliance on a source with high latency or large positional error; and (2) areas where no data are present, but where tactical requirements, expressed in the tactical plan, indicate a need for information.

When the sentinel agent identifies a need for additional information, it dispatches an information agent to search for the needed information in a remote data source, such as the All-Source Analysis System (ASAS). The results of the investigation are converted into a format usable by data fusion for incorporation into the COP. This approach has been investigated in an internal research and development program and integrated into the ACT II Battle Commander’s Decision Aid at the US Army Air Maneuver Battle Lab (AMBL), an experimentation branch of the Directorate of Combat Developments (DCD), at Ft. Rucker, Alabama.
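The sentinel agent's two criteria can be sketched as a simple scan over the COP. The record fields and thresholds below are illustrative assumptions, not the fielded interface.

```python
# Hedged sketch of the sentinel-agent criteria: flag COP tracks with poor
# accuracy/latency, and planned areas with no reporting source at all.
def find_information_needs(cop_tracks, plan_areas,
                           max_error_m=200.0, max_latency_s=30.0):
    needs = []
    for trk in cop_tracks:
        # Criterion 1: accuracy or latency does not meet requirements.
        if trk["pos_error_m"] > max_error_m or trk["latency_s"] > max_latency_s:
            needs.append(("refine", trk["id"]))
    covered = {trk["area"] for trk in cop_tracks}
    for area in plan_areas:
        # Criterion 2: tactically required area with no data present.
        if area not in covered:
            needs.append(("search", area))
    return needs

cop = [{"id": "T1", "area": "NAI-1", "pos_error_m": 450.0, "latency_s": 5.0},
       {"id": "T2", "area": "NAI-2", "pos_error_m": 80.0, "latency_s": 4.0}]
needs = find_information_needs(cop, plan_areas=["NAI-1", "NAI-2", "NAI-3"])
```

Each returned need would, in the real system, trigger dispatch of an information agent to a remote source such as ASAS.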


Figure 5. Sentinel Agent to Augment COP

2.3 Grapevine Data Dissemination

The Grapevine architecture [6] was originally developed by ATL for use on DARPA's Small Unit Operations (SUO) program. It uses bandwidth efficiently for information sharing by providing each node with a description of the information needs of its peers, so each node can selectively transmit only the information it understands to be of real value to its neighbors. By sharing relevant sensor data, each participant can build a common tactical picture that is consistent between participants, and is as complete as the participants’ information sources can make it.

The implementation of the Grapevine architecture (Figure 6) builds upon our previous work combining multi-sensor data fusion with intelligent agents. Each node in the architecture contains a data fusion process that fuses locally obtained data (from local sensors and data sources) and data received from other peer nodes. The Grapevine manager at each node manages the interchange of data with peer nodes. Each peer node is represented by a Grapevine proxy agent that represents the information needs and capabilities of that peer node.

Figure 6. Grapevine Data Dissemination Architecture

As the sensors or other sources on the platform generate local information, each Grapevine agent evaluates that information against the needs of the peer platform it represents for factors such as:

• Sensor type: Data from remote sensors, e.g. JSTARS, is sent only if the recipient does not already have access to that data.
• Mission: The peer platform’s mission may or may not require the propagation of friendly tracks.


• Location: The peer platform may only need information within a geographic or temporal/geographic radius.
• Coverage: The peer platform may need information from beyond the coverage of its own sensor platform.

In addition, the Grapevine agents are aware of the processing and bandwidth limitations of the peer nodes and communication links. Data identified as relevant to a peer node based on the above criteria may be down-sampled or prioritized to meet resource limitations. Each Grapevine agent propagates the needed information to the peer platform it represents, providing an intelligent push of data through the network.

At the same time, the Grapevine manager has a representation of the local platform’s information needs and capabilities, expressed in terms of available sensors and data sources, mission, location, and sensor coverage. A sentinel agent within the Grapevine manager monitors the local fused picture to identify information needs not met by the local picture. Based on this, it sends updated configuration data for the local platform to the Grapevine manager on peer platforms. This is used to update the Grapevine agents on the peer platforms representing the local platform. This propagation of information effects an intelligent pull of data to meet the changing information needs of the local platform.

There are several distinctive features of the Grapevine architecture. First, it is a peer-to-peer network. Propagation of data occurs between peer nodes in the network (although in practice this would probably be implemented as an extension to a hierarchical command and control system). Second, propagation is needs-based. Peer-to-peer data propagation includes only data known to be of use to the recipient node, limiting the required processing and bandwidth. Third, the architecture is extensible. It can accommodate the addition of peer nodes merely by reconfiguring nearby nodes to reflect the addition of the new nodes. Fourth, it is survivable: there is no single point of failure. Since, in general, each node will have multiple peers, data can be spontaneously rerouted around missing nodes, and the loss of any single node will only result in the loss of the data sources local to that node.

The result of this capability is to permit, in the face of stringent bandwidth and processing constraints, the creation of a Common Relevant Operational Picture (CROP) across all warfighter platforms. The CROP is a shared picture of the battlefield, with all warfighters having a consistent view of the world, and each seeing that portion of the picture relevant to their needs. In the case of infinite processing and bandwidth capabilities, this can scale to become a true Common Operational Picture (COP), with all participants seeing the same complete picture. In the case of significant limitations on the ability to exchange and process information, as is the case now and for the near future, the intelligent dissemination capability of the Grapevine ensures that all participants receive the most relevant information.
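A proxy agent's relevance filtering might be sketched as follows, assuming simplified track records and criteria mirroring the sensor-type/mission/location factors above; the fielded C++ agents are considerably more involved.

```python
# Sketch of a Grapevine proxy agent's relevance test for one peer platform.
import math

class ProxyAgent:
    def __init__(self, peer_sensors, wants_friendly, center, radius_km):
        self.peer_sensors = set(peer_sensors)  # feeds the peer already has
        self.wants_friendly = wants_friendly   # mission-driven policy
        self.center = center                   # peer's area of interest
        self.radius_km = radius_km

    def relevant(self, track):
        # Sensor type: skip data the peer already receives directly.
        if track["source"] in self.peer_sensors:
            return False
        # Mission: the peer may not need friendly tracks propagated.
        if track["friendly"] and not self.wants_friendly:
            return False
        # Location: only data within the peer's geographic radius.
        dx = track["pos"][0] - self.center[0]
        dy = track["pos"][1] - self.center[1]
        return math.hypot(dx, dy) <= self.radius_km

agent = ProxyAgent(peer_sensors={"JSTARS"}, wants_friendly=False,
                   center=(0.0, 0.0), radius_km=50.0)
tracks = [
    {"source": "JSTARS", "friendly": False, "pos": (10.0, 10.0)},  # duplicate
    {"source": "FCR",    "friendly": False, "pos": (30.0, 40.0)},  # send
    {"source": "FCR",    "friendly": False, "pos": (80.0, 0.0)},   # too far
]
to_send = [t for t in tracks if agent.relevant(t)]
```

A down-sampling or prioritization stage, not shown, would then trim `to_send` to fit the link's bandwidth budget.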

3. Manned-Unmanned Teaming

On the in-progress AMUST-D program and its successor, the HSKT ACTD, ATL is developing the shared situation awareness capability that will support the other functions of pilot and commander decision aiding and manned-unmanned teaming. AMUST-D is developing two decision aiding systems: the Warfighter’s Associate (WA), to be used on the AH-64D Longbow Apache aircraft, and the Mobile Commander’s Associate (MCA), to be integrated with the A2C2S residing on the UH-60 Blackhawk aircraft. Both systems include data fusion to provide a fused picture for situation awareness (Figure 7), and both include the capability to provide Level 4 control of an Unmanned Air Vehicle (UAV), with both waypoint-level control of the UAV from the aircraft and direct feed of UAV sensor data to the aircraft. In addition, the WA provides decision aiding in support of the Apache pilot, including route planning and attack planning, while the MCA provides decision aiding in support of a maneuver commander, including situation awareness display, team route planning, and plan monitoring.



Figure 7. AMUST-D Shared Situation Awareness Architecture

On the WA aircraft, data fusion will receive and fuse data from the Apache onboard sensor suite, including a Fire Control Radar (FCR) and a Radio Frequency Interferometer (RFI), teammate aircraft, UAVs under control of the WA, and offboard sources such as the Joint Surveillance Target Attack Radar System (JSTARS). On the MCA aircraft, data fusion will receive and fuse data from UAVs under control of the MCA and offboard sources such as JSTARS.

In addition, the MCA will include the intelligent agent-based data discovery system. This system will retrieve relevant blue force and red force entity data from the Joint Common Database (JCDB) in the A2C2S system and provide it to Data Fusion for incorporation in the fused picture. Data discovery will augment fused tracks generated by data fusion with additional information available from the JCDB, such as plan and status information in the case of friendly entities and sensor and weapon capability information in the case of hostile entities.

The most recent integrated enhancement to the data fusion algorithm set includes passive track initiation, passive correlation, and fusion of the WA’s RFI sensor data. Initial evaluation of these algorithms illustrates the ability to quickly obtain fire-control-quality solutions on stationary threats under typical conditions. In addition, rapid track initiation and targeting can be performed using RFI data shared among multiple WA platforms. The ability to rapidly obtain an accurate targeting solution by fusing passive sensor data is made possible by the Unscented Filter [7], developed as a replacement for the more commonly applied Extended Kalman Filter [8].

The Grapevine agent system used in AMUST-D represents a specialized implementation of ATL’s intelligent agent technology in two ways.
First, the Grapevine agents are implemented in C++ rather than in Java, to facilitate deployment in operational systems with stringent performance and resource requirements. Second, the Grapevine agents are being adapted to operate over non-TCP/IP networks, to facilitate use with existing tactical data links. On AMUST-D, the Grapevine implementation uses the AFAPD message set over the Improved Data Modem (IDM) link to exchange data between peer aircraft.
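The Unscented Filter [7] mentioned above avoids the EKF's linearization by propagating deterministically chosen sigma points through the nonlinear measurement function. Below is a minimal sketch of the underlying unscented transform (the standard textbook formulation, not ATL's implementation), applied to a bearing-only measurement of the kind an RFI provides.

```python
# Minimal unscented transform: propagate a Gaussian position estimate
# through a nonlinear bearing measurement (standard formulation).
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    n = len(mean)
    # Sigma points: mean plus/minus columns of the scaled matrix square root.
    sqrt_cov = np.linalg.cholesky((n + kappa) * cov)
    sigma = [mean] + [mean + c for c in sqrt_cov.T] + [mean - c for c in sqrt_cov.T]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigma])
    y_mean = w @ ys
    diff = ys - y_mean
    y_cov = (w[:, None] * diff).T @ diff
    return y_mean, y_cov

def bearing(state):
    # Nonlinear measurement: bearing (radians) from the origin to the target.
    return np.array([np.arctan2(state[1], state[0])])

mean = np.array([1000.0, 1000.0])     # target position estimate (m)
cov = np.diag([100.0, 100.0])         # position covariance (m^2)
z_mean, z_cov = unscented_transform(mean, cov, bearing, kappa=1.0)
```

In a full Unscented Filter this transformed mean and covariance feed the usual Kalman update, capturing the measurement nonlinearity to second order without computing Jacobians.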


4. Research and Development Efforts

Building upon the success of the RPA and AMUST-D programs, which have clearly illustrated the benefits of ATL-developed situation awareness technologies, ATL continues research and development efforts to improve shared situation awareness capabilities. In particular, ATL is actively performing research and development in three areas:

• “Redundant” Data Fusion: Avoiding the fusion of data more than once due to redundant communications in a loosely connected network.
• Terrain/Feature Data Exploitation: Real-time incorporation of terrain/feature databases to improve track prediction, track maintenance, and correlation.
• Information Theoretic Data Dissemination: Incorporating available bandwidth into data dissemination decisions.

4.1 “Redundant” Data Fusion

Redundant information is a serious obstacle when distributed data fusion is executed in a loosely connected ad hoc network. In particular, sensor information from multiple sources cannot be properly combined using most currently applied filtering algorithms. This is due to the underlying assumption that the input data is statistically independent or has known cross-correlation; that is, the algorithms assume that a particular sensor’s data is only processed once. In practice, data can be processed more than once because of redundant communication of that data. For example, if two platforms within a distributed fusion network are receiving and fusing JSTARS data from an airborne relay, the fused track database may contain redundant data. In such instances, the fused track database will become corrupted with over-confident estimates of targets within the battlespace.

Currently, there are two approaches to avoiding redundant information propagation throughout the distributed fusion network. First, elaborate routing and data distribution protocols can be applied to ensure that data is only consumed once throughout the entire network. Second, cross-correlations can be maintained between all data sources and used in slightly more complex fusion algorithms to properly account for redundant data. While the latter solution requires an O(n²) increase in computational complexity as well as an O(n) increase in bandwidth for “n” contributing sensors, the former solution is at least tractable. However, even the former solution requires negotiation of reporting responsibilities among network participants.

In recent years, a nondivergent estimation algorithm for use in the presence of unknown correlations has been developed [9]. In particular, a method called Covariance Intersection (CI) provides a provably consistent fused estimate in the presence of redundant data without the need for estimating cross-correlations.
However, the algorithm alone does not solve the problem of redundant information. A modified architecture (Figure 8) is required to fully benefit from the CI algorithm.

Figure 8. Hybrid Fusion with Covariance Intersection (onboard data is fused with a Kalman Filter; offboard data and predicted tracks are combined via Covariance Intersection, and the fused track is communicated to adjacent nodes)

This architecture has several benefits. First, it always assumes that offboard data may be correlated and therefore processes it using Covariance Intersection. Second, it exploits the fact that onboard data is uncorrelated, since sensors are typically assumed to provide statistically independent observations. Finally, this architecture accommodates the possibility of receiving fused track data that may already contain the local platform’s input; this is handled by the second application of Covariance Intersection. As a result, the connectivity of the distributed fusion network is irrelevant and does not impact the accuracy or legitimacy of the COP.
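The Covariance Intersection update itself forms a convex combination of the two information matrices. In the sketch below the weight ω is chosen by a coarse scan minimizing the trace of the fused covariance, which is a common (though not the only) criterion.

```python
# Covariance Intersection: fuse two estimates (xa, Pa) and (xb, Pb) with
# unknown cross-correlation. The convex weight omega is chosen by a coarse
# scan minimizing trace(P); closed-form and gradient methods also exist.
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, steps=100):
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    best = None
    for w in np.linspace(0.01, 0.99, steps):
        # P^{-1} = w * Pa^{-1} + (1 - w) * Pb^{-1}
        P = np.linalg.inv(w * Pa_inv + (1 - w) * Pb_inv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * Pa_inv @ xa + (1 - w) * Pb_inv @ xb)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Two estimates that are accurate in complementary directions.
xa, Pa = np.array([10.0, 0.0]), np.diag([4.0, 1.0])
xb, Pb = np.array([12.0, 0.0]), np.diag([1.0, 4.0])
x_ci, P_ci = covariance_intersection(xa, Pa, xb, Pb)
```

Unlike a naive Kalman combination, the CI result remains consistent even if the two inputs share an unknown amount of common information, which is exactly the property the hybrid architecture in Figure 8 relies on for offboard data.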

4.2 Terrain/Feature Data Exploitation

Typically, data fusion algorithms do not incorporate information about the battlespace environment, namely terrain and feature data, when performing the fundamental fusion steps illustrated in Figure 3. For this reason, results may be infeasible when the battlespace environment is considered. For example, tracks may be incorrectly clustered (as in Figure 3) when they are actually separated by severe terrain obstructions.

ATL has developed a Mobility Constrained Prediction (MCP) algorithm that incorporates local terrain/feature data into the track prediction step of Figure 3. Figure 9 illustrates the MCP procedure, which is broken down into four steps. First, the standard (Kalman) prediction algorithms are employed to produce a Temporally Aligned Track State Probability Density Function (TATS-PDF). Next, given the local terrain/feature data, a Mobility Probability Density Function (M-PDF) is constructed based upon elevation differences and trafficability factors. The TATS-PDF and M-PDF are then convolved to obtain the Mobility Constrained Probability Density Function (MC-PDF). Finally, a Moment Matched Probability Density Function (MM-PDF) is constructed using the mean and covariance of the MC-PDF. It is this MM-PDF that is used in subsequent fusion processing.

Figure 9. Mobility Constrained Prediction: (a) Temporally Aligned Track State PDF; (b) Local Terrain/Feature Based Mobility PDF; (c) Mobility Constrained (Convolved) PDF; (d) Moment Matched PDF
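The four MCP steps can be sketched on a 1-D grid. For brevity this sketch applies the mobility PDF as a pointwise mask rather than the convolution described above, and the terrain values are invented.

```python
# 1-D grid sketch of Mobility Constrained Prediction (MCP). Simplification:
# the mobility PDF is applied as a pointwise mask, not a true convolution.
import numpy as np

x = np.arange(-10.0, 10.0, 0.1)

def gaussian(x, mu, sigma):
    g = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return g / g.sum()

# (a) Temporally aligned track state PDF from the standard (Kalman) predictor.
tats_pdf = gaussian(x, mu=0.0, sigma=2.0)

# (b) Mobility PDF: zero trafficability over an (invented) terrain obstruction.
mobility = np.ones_like(x)
mobility[x > 3.0] = 0.0              # e.g., an impassable ridge to the east
m_pdf = mobility / mobility.sum()

# (c) Mobility-constrained PDF: combine and renormalize.
mc_pdf = tats_pdf * m_pdf
mc_pdf /= mc_pdf.sum()

# (d) Moment-matched PDF: Gaussian with the MC-PDF's mean and variance.
mu = (x * mc_pdf).sum()
var = ((x - mu) ** 2 * mc_pdf).sum()
mm_pdf = gaussian(x, mu, np.sqrt(var))
```

Removing probability mass over the obstruction pulls the predicted mean away from it and shrinks the variance, which is why MCP tightens the feasible association region in the examples that follow.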

In addition to the MCP algorithm, intervisibility is exploited throughout the fusion system. First, intervisibility is considered in the clustering step of Figure 3: during clustering, tracks that are not visible from the sensor platform cannot become part of a cluster. In addition, tracks that are not visible from a sensor platform may not be deleted due to observing “N” missed updates. As a result, intervisibility simplifies Cost Matrices, reduces Assignment computational complexity, and allows obscured tracks to be maintained longer.

To illustrate the dramatic benefits of terrain/feature incorporation, compare the fused COP results in Figures 10 and 11. Figure 10 illustrates a tracking scenario without considering terrain/feature information. At a time “t”, there are four tracks in the track database (blue). At time “t+1”, a sensor (red donut) makes two observations (green) after the fused tracks have been temporally aligned (yellow). As a result, the fusion system clusters the two tracks in the lower left-hand corner, “coasts” the track nearest to the sensor, and deletes the track to the far right due to “N” missed detections.

Contrary to the fused COP result in Figure 10, Figure 11 shows the results when terrain/feature data is incorporated for the same tracking scenario. In this case, high elevation regions are shaded in black and regions with good intervisibility are shaded in red. In Figure 11, the tracks existing at time “t” (blue) are temporally aligned to time “t+1” (yellow) using the MCP algorithm shown in Figure 9. As a result,

the two tracks that were previously clustered (in Figure 10) are now geographically separated and therefore no longer clustered. In addition, since the track nearest the sensor is approaching a “fork in the road”, the MCP prediction properly inflates the predicted error beyond a deletion threshold, and this track is deleted. Finally, the track to the far right is not visible from the sensor location and therefore is not deleted due to a missed detection.

The results obtained in Figure 10 are quite different from those obtained in Figure 11. A track that was not deleted in Figure 10 is deleted in Figure 11 due to the incorporation of terrain via the MCP algorithm. A track that was deleted in Figure 10 is no longer deleted in Figure 11 due to the incorporation of intervisibility. The tracks previously clustered in Figure 10 are no longer clustered in Figure 11, eliminating previously entertained assignment possibilities.

Figure 10. Tracking without Terrain Data
Figure 11. Tracking with Terrain Data

4.3 Information Theoretic Data Dissemination

ATL’s Grapevine technology has enabled remarkable improvements in the efficiency of data dissemination for use in distributed data fusion applications. However, due to the heuristic nature of the data dissemination rules, there is nothing preventing a proxy agent (Figure 6) from deciding to send more data than the communications network can support. That is, even after the Grapevine proxy agent “filters” the data, there may still be too much data to transmit. This may occur when the density of tracks increases in a region of interest. Although heuristic rules may be employed to prioritize the dissemination of hostile target data over friendly target data, it is not clear that this is a desirable doctrine in all cases. Furthermore, the impact on fusion performance is not considered in the current Grapevine technology.

From a fusion performance perspective, there are several reasons why data dissemination may be valuable. For example:

• Significant Track Deviation: Information about a particular track has not been distributed for some time and it is recognized that this track’s state (kinematic or otherwise) has significantly changed since it was last disseminated.
• Closely Spaced Objects: Two (or more) tracks are closely spaced and the probability of miscorrelation is increased.

Both of these situations will degrade fusion performance if the appropriate information is withheld from the distributed sensor participants. As a result, these events need to be recognized by any data dissemination mechanism.


4.3.1 Significant Track Deviation

Previous attempts have been made to deal with the effects of “Significant Track Deviation.” However, these attempts are not generalized and define the meaning of “significant” quite differently (Figure 12).

Figure 12. Measures of Significant Deviation (top: “Covariance Comparison” of data dissemination at times “t” and “t+3” with increasing time; bottom: “Track State Comparison” of track position at the same times)

In one approach, the covariance matrix for a track is compared against the covariance matrix of that track predicted forward from the time of the last data dissemination. Essentially, this approach compares covariance matrices and ignores deviations in the actual track state. A second approach computes differences in the track state, but does not incorporate the growth of the track’s covariance matrix. The “Covariance Comparison” method distributes data based upon how long it has been since data was disseminated. The “Track State Comparison” method distributes data based upon a target’s behavior, such as a maneuver. As an alternate approach, we can view the data dissemination problem as a Hypothesis Test with the following hypotheses:
• H_0: The observed measurements are the result of target motion given by the current updated track state.
• H_1: The observed measurements are the result of target motion given by the extrapolation of the previously disseminated track state.
Applying the Neyman-Pearson Lemma [10] to this Hypothesis Test, we have the following Log-Likelihood Ratio Test:

\[ 2\ln\!\left[\frac{L_{H_0}}{L_{H_1}}\right] \;\overset{H_1}{\underset{H_0}{\lessgtr}}\; \alpha \tag{1} \]

where L is the Likelihood function (Probability Density Function) for each hypothesis and α is a Neyman-Pearson threshold. In the context of target tracking, these well-known Likelihood functions lead to a simplified form of Equation 1:

\[ \ln\frac{|S_1|}{|S_0|} + d_1^2 - d_0^2 \;\overset{H_1}{\underset{H_0}{\lessgtr}}\; \alpha \tag{2} \]

where S_i is the innovation covariance for Hypothesis i and d_i² is the statistical distance for Hypothesis i. The Log-Likelihood Ratio Test specified in Equation 2 may be used as a direct test for data dissemination decisions. However, the Neyman-Pearson threshold α is arbitrary and therefore does not constrain the amount of data disseminated to neighboring fusion participants. For example, if the threshold is too large, hypothesis H_1 will always be selected and the data will not be disseminated. Conversely, if the threshold is too small, data will always be transmitted. In addition, the methods for data dissemination presented in Figure 12 are special cases of Equation 2. If the statistical distance terms are ignored, Equation 2 reduces to the “Covariance Comparison” method. If the covariances are assumed equal, the first term in Equation 2 vanishes and the result is the “Track State Comparison” test of Figure 12. How should the Neyman-Pearson threshold α be selected? How can this threshold be related to communications bandwidth? A patent-pending technique [11] answers these questions by relating the transmission of physical information (track states) to the transmission of statistical information via the Kullback-Leibler Measure [10]:

\[ KL = E\!\left[\ln\frac{L_{H_0}}{L_{H_1}}\right] \tag{3} \]

where E represents the statistical expectation taken over the observation space. As a result, Equation 3 quantifies the expected amount of statistical information obtained for discriminating between H_0 and H_1 per sensor observation. In the context of our data dissemination problem, the Kullback-Leibler Measure provides the expected number of sensor observations (for a given track) required before the test selects dissemination of track data. From this quantity, the Neyman-Pearson threshold can be recursively computed so the expected rate at which tracks are disseminated equals the available communications bandwidth. This approach is exemplified (Figure 13) with a single airborne track that undergoes a maneuver. The available bandwidth can only sustain 0.5 tracks/second while the sensor updates the track once per second. Figure 13 illustrates that the nominal rate of track dissemination remains at 0.5 tracks/second even during the maneuvering portions of the target’s trajectory. This is due to the adaptation of the Neyman-Pearson threshold α, which is constantly adjusted to assure proper bandwidth usage.

Figure 13. Information Theoretic Adaptive Data Dissemination: (a) Log-Likelihood Ratio, (b) Neyman-Pearson Threshold, (c) Kullback-Leibler Measure, (d) Times of Data Dissemination
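A minimal sketch of this machinery, assuming Gaussian likelihoods: `llr_statistic` evaluates the left-hand side of Equation 2, and `update_threshold` is an illustrative stand-in for the recursive threshold computation (the patent-pending update law of [11] is not specified in this paper, so a simple multiplicative rate-feedback rule is assumed).

```python
import numpy as np

def llr_statistic(innovation0, S0, innovation1, S1):
    """Left-hand side of Equation 2 for Gaussian likelihoods.

    Hypothesis 0: innovation against the current updated track state.
    Hypothesis 1: innovation against the extrapolation of the previously
    disseminated track state. S_i is the innovation covariance and d_i^2
    the statistical (Mahalanobis) distance under hypothesis i.
    """
    d0_sq = innovation0 @ np.linalg.solve(S0, innovation0)
    d1_sq = innovation1 @ np.linalg.solve(S1, innovation1)
    return np.log(np.linalg.det(S1) / np.linalg.det(S0)) + d1_sq - d0_sq

def update_threshold(alpha, sent_rate, bandwidth_rate, gain=0.1):
    """Illustrative recursive threshold adaptation: raise alpha when the
    observed dissemination rate exceeds the sustainable bandwidth, and
    lower it when bandwidth is going unused."""
    return alpha * np.exp(gain * (sent_rate - bandwidth_rate))
```

Under this sketch, track data would be disseminated whenever `llr_statistic(...) > alpha` (the test rejects H_1), with `alpha` nudged each cycle so the long-run dissemination rate tracks the available bandwidth.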


4.3.2 Closely Spaced Objects

The presence of Closely Spaced Objects (CSOs) within a surveillance region degrades fusion performance for a variety of reasons. First, the probability of mis-association increases, degrading the accuracy of the tracking filter output, since the filter will incorporate incorrect measurements into the track state. Second, depending upon the data association algorithm, track merging can occur. If an all-neighbors approach is used, such as a Probabilistic Data Association (PDA) or Joint PDA (JPDA) method, CSOs can result in track merging [12]. By contrast, a one-to-one assignment algorithm, such as JVC, will experience increased mis-associations or “measurement swapping.” As a result, the detection of CSOs is an important feature of any fusion system. To avoid fused track corruption in a distributed fusion network, track data representing CSOs should be disseminated more frequently. More information will reduce the likelihood of a sensor participant incorrectly associating an observation with a track. The data dissemination decision for CSOs can still employ the technique outlined in the previous section, based upon Equations 1 through 3. The only difference in the dissemination criteria for CSOs is the definition of the likelihood functions required by Equation 1. A likelihood ratio test for CSO hypothesis testing may use likelihood functions derived from “Observation Merging” probability functions [13]. Another approach may use a Gaussian Mixture of likelihood functions to form a CSO hypothesis test. The precise form of the CSO hypothesis test (likelihood functions) requires further investigation. However, once appropriate likelihood functions are defined, the method outlined in Equations 1 through 3 still applies. Therefore, a specified bandwidth could be allocated for the dissemination of CSO track data.
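As a simple stand-in for the CSO likelihood functions discussed above, closely spaced track pairs can be flagged with a chi-square proximity test on the statistical distance between track states. The gate value, the tuple track representation, and the function name below are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def cso_pairs(tracks, gate=9.21):
    """Flag Closely Spaced Object (CSO) candidate pairs.

    tracks: list of (x, P) tuples with state vector x and covariance P.
    gate: chi-square threshold (9.21 is the 99% point for 2 degrees of
    freedom, used here purely for illustration).
    """
    pairs = []
    for (i, (xi, Pi)), (j, (xj, Pj)) in combinations(enumerate(tracks), 2):
        dx = xi - xj
        S = Pi + Pj                       # combined uncertainty of the pair
        if dx @ np.linalg.solve(S, dx) < gate:
            pairs.append((i, j))          # statistically close: CSO candidate
    return pairs
```

Tracks appearing in any flagged pair would then be scheduled for more frequent dissemination under the bandwidth allocation reserved for CSO data.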

5. Conclusions

The development of shared situation awareness, critical to the support of mobile command of distributed forces, faces numerous challenges. It requires the ability to integrate information from all available sources and share information between warfighters to the maximum extent possible, yielding a Common Relevant Operational Picture (CROP). In this paper, we have described three technologies developed at ATL that enable the creation of a CROP:
• Adaptive, Modular, Multi-Sensor Information Fusion
• Grapevine Agent-Based Intelligent Data Dissemination
• Agent-Based Data Discovery
We have described ongoing work to provide further validation of these core technologies as well as an environment to develop future technology enhancements. Recent advances under the AMUST-D contract include 1) collaborative fusion of Fire Control Radar (FCR) data among a group of Apache warfighters, 2) passive multi-lateration and tracking using Radio Frequency Interferometer (RFI) observations, and 3) the incorporation of UAV-based sensor observations into the CROP. These advances illustrate successful progress towards future manned-unmanned teaming on the battlefield. In addition to the current ATL-developed capabilities, we have presented research and development efforts to enhance the Grapevine and Multi-Sensor Fusion capabilities. We have presented three new areas of research:
• A hybrid fusion architecture to accommodate redundant information in a distributed ad hoc networking environment.
• Exploitation of terrain/feature data for improved track prediction, track maintenance, and correlation.
• Information theoretic data dissemination criteria through the application of the Kullback-Leibler Measure.
This ongoing work provides a more robust solution to the shared situation awareness problem by addressing the subtle lessons learned from ATL’s past fusion endeavors.

6. ACKNOWLEDGEMENTS

This research was partially funded by the Aviation Applied Technology Directorate under agreement No. DAAH10-01-2-0008. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation thereon. The discussion of these research and development efforts has not been previously published.

7. DISCLAIMERS

The views and conclusions contained in this document are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of the Aviation Applied Technology Directorate or the U.S. Government.

8. REFERENCES

[1] Malkoff, D. and Pawlowski, A., “RPA Data Fusion,” 9th National Symposium on Sensor Fusion, Vol. 1, Infrared Information Analysis Center, pp. 23-36, September 1996.
[2] Hofmann, M., “Multi-Sensor Track Classification in Rotorcraft Pilot’s Associate Data Fusion,” American Helicopter Society 53rd Annual Forum, Virginia Beach, Virginia, April 29-May 1, 1997.
[3] Whitebread, K. and Jameson, S., “Information Discovery in High-Volume, Frequently Changing Data,” IEEE Expert/Intelligent Systems & Their Applications, Vol. 10, No. 5, October 1995.
[4] Lentini, R., Rao, G., and Thies, J., “EMAA: An Extendable Mobile Agent Architecture,” AAAI Workshop on Software Tools for Developing Agents, July 1998.
[5] Pawlowski, A. and Stoneking, C., “Army Aviation Fusion of Sensor-Pushed and Agent-Pulled Information,” American Helicopter Society 57th Annual Forum, Washington, DC, May 9-11, 2001.
[6] Jameson, S.M., “Architectures for Distributed Information Fusion To Support Situation Awareness on the Digital Battlefield,” Fourth International Conference on Data Fusion, Montreal, Canada, August 7-10, 2001.
[7] Vijayakumar, C. and Rajagopal, R., “Passive Target Tracking by Unscented Filters,” Proceedings of the IEEE International Conference on Industrial Technology 2000, Vol. 2, 2000, pp. 129-134.
[8] Gelb, A., Applied Optimal Estimation, Cambridge, MA: M.I.T. Press, 1974.
[9] Hurley, M.B., “An Information Theoretic Justification for Covariance Intersection and Its Generalization,” Proceedings of the International Society of Information Fusion, Vol. 1, 2002, pp. 505-511.
[10] Kullback, S., Information Theory and Statistics, Mineola, NY: Dover Publications, Inc., 1968. ISBN: 0-486-69684-7.
[11] Farrell, W.J. III, “Method and Apparatus for Collaborative Inferential Information Maintenance Among a Plurality of Contributors Over a Finite Bandwidth Communications Network,” United States Patent and Trademark Office, patent pending.
[12] Fitzgerald, R.J., “Development of a Practical PDA Logic for Multitarget Tracking by Microprocessor,” in Multitarget-Multisensor Tracking: Advanced Applications, Vol. I, Y. Bar-Shalom (ed.), Norwood, MA: Artech House, 1990.
[13] Blackman, S.S. and Popoli, R., Design and Analysis of Modern Tracking Systems, Norwood, MA: Artech House, 1999, pp. 377-385.
