Summary of Tracking and Identification Methods

Erik Blasch 1, Chun Yang 2, Ivan Kadar 3

1 Air Force Research Laboratory, Information Directorate, Rome, NY 13441
2 Sigtem Technology, Inc., 1343 Parrott Drive, San Mateo, CA 94402
3 Interlink Systems Sciences, Inc., Lake Success, NY 11042

ABSTRACT

Over the last two decades, many solutions have arisen to combine target tracking estimation with classification methods. Target tracking includes developments from linear to non-linear and Gaussian to non-Gaussian processing. Pattern recognition includes detection, classification, recognition, and identification methods. Integrating tracking and pattern recognition has resulted in numerous approaches, and this paper seeks to organize them. We discuss the terminology so as to have a common framework for various standards, such as the NATO STANAG 4162 - Identification Data Combining Process. In a use case, we provide a comparative example highlighting that location information, together with additional mission objectives from geographical, human, social, cultural, and behavioral modeling, is needed to determine identification, as classification alone does not allow determining identification or intent.

Keywords: Belief Functions, Classification, Recognition, Identification, Tracking, NATO STANAG 4162, DSmT, PCR5, Simultaneous Tracking and Identification

1. INTRODUCTION

Over the last couple of decades, there has been significant work in tracking and detection, tracking and classification, and tracking and identification. For each of these methods, the target knowledge differs: measurements, attributes, and allegiance, respectively. For classification and identification, the labeling of the target is a key component. Benefits of track labeling include target recognition [1], increased track life [2], and richer information for the user [3, 4, 5]. One of the challenging issues for simultaneous tracking and identification is the differing uses of the terminology. Thus, this paper seeks to review some of the literature and to define tracking and identification more carefully. Figure 1 highlights the use of the terms detection, classification, recognition, and identification that are standard across academic, industry, and government discussions, versus individual author presentations.

Figure 1: Target tracking as related to the Information Fusion Process levels [6].

Signal Processing, Sensor/Information Fusion, and Target Recognition XXIII, edited by Ivan Kadar, Proc. of SPIE Vol. 9091, 909104 · © 2014 SPIE · CCC code: 0277-786X/14/$18 · doi: 10.1117/12.2050260

There are three assumptions of the analysis: (1) the NATO STANAG 4162, as well as other international standards, provides a unifying definition of what is meant by "identification" in a tracking context; (2) "identification" comes from a variety of sensing sources, such as radar, electro-optical systems, and semantic descriptions, whose definitions have been consistent for multiple decades; and (3) passing references to identification, where the author meant classification, are discounted. Such misuse of the term "identification" when classification was meant is understandable if a researcher was not focused on the semantic description of their methods; traditionally, the distinction between classification and identification was not well defined. Hence, this paper revisits the discussion of research in tracking and identification.

The simultaneous tracking and identification (STID) method is a portion of the Data Fusion Information Group model [7, 8, 9, 10], which organizes the processes of information fusion into various stages. Level 0 (registration), Level 1 (object assessment), Level 2 (situation assessment), and Level 3 (threat assessment) include the estimation techniques. The control functions, Level 4 (process refinement), Level 5 (user refinement), and Level 6 (mission management), provide feedback methods toward the collection of measurement data [11]. There are three versions of tracking techniques:

Level 1: Tracking and Detection; Tracking and Classification (to include feature-aided tracking)
Level 2: Tracking and Object-association (to include track group)
Level 3: Tracking and Identification

Numerous textbooks [12, 13, 14, 15, 16, 17] and background papers [18, 19, 20, 21] on target tracking techniques have been published. The needs of tracking and recognition/classification/identification have been the subject of many panel discussions [22, 23], from which various authors have provided differing approaches and techniques toward solving the issues. In 2006 [24], the link between object assessment tracking and situation assessment was discussed, with a focus on knowledge representation. In 2007 [25, 26], the implications of STID were discussed in relation to threat assessment and sensor management. In 2008, tracking performance analysis [27] was highlighted with ATR sensor management [28]. Finally, other panels have discussed tracking and identification robustness [29], combination with text labeling [30], social-cultural behavioral modeling [31], and big data analysis. The paper is organized as follows: Section 2 brings together many definitions of classification, recognition, and identification toward a common use of the terms. Section 3 discusses the STANAG 4162: Identification Data Combining Process. Section 4 is a literature review, and Section 5 discusses a systems approach to tracking and identification. An example is shown in Section 6, metrics in Section 7, and conclusions in Section 8.

2. CLASSIFICATION, RECOGNITION, AND IDENTIFICATION DEFINITIONS

The definitions of classification, recognition, and identification differ slightly based on the use of the terms in different communities. Here, we overview these definitions in the hope of solidifying an understanding for the community. Many communities, such as biology, engineering, pattern recognition, and the military, use the terms in similar ways but mean different things. To start, we begin with the dictionary.

2.1 Dictionary definitions of Classification, Recognition, and Identification

Classification is defined as (1) the act or process of putting people or things into groups based on ways that they are alike, or (2) an arrangement of people or things into groups based on ways that they are alike. Classifying is thus a systematic arrangement into groups or categories according to established criteria (e.g., a taxonomy). The categories could be object behaviors (e.g., movement), appearances (e.g., size), and affiliations (e.g., group). In many cases, the information that supports a class decision comes from signatures (1D, 2D, 3D), features, and combined attributes.

Recognition is (1) the act of accepting that something is true or important or that it exists, or (2) the act of knowing who or what someone or something is because of previous knowledge or experience. Recognizing is an acknowledgement or feeling that something or someone is present, alluding to a special notice or attention from the sensing and encoding by a machine. Given a structured approach to many man-made systems, examples include optical character recognition, automatic target recognition, and face recognition. What is common about these definitions is that there is some defined template, model, or set of features that has been trained and linked to an object type (such as printed letters or


2.3 Military definitions of Classification, Recognition, and Identification

For classification, a standard is the National Imagery Interpretability Rating Scale (NIIRS), which is also used for civilian applications. The NIIRS is based on the Johnson Criteria [35]. Johnson's criteria determine the minimum resolution, in line pairs across a target, that puts the target into the following categories, where that resolution gives a 50 percent probability for an observer (or machine) to discriminate an object to the specified level:

• Detection: an object is present (1.0 +/- 0.25 line pairs)
• Orientation: symmetrical, asymmetric, horizontal, or vertical (1.4 +/- 0.35 line pairs)
• Recognition: the type of object can be discerned, e.g., a person versus a car (4 +/- 0.8 line pairs)
• Identification: a specific object can be discerned, e.g., a woman versus a man, or the specific car (6.4 +/- 1.5 line pairs)
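The 50-percent thresholds above can be turned into a simple resolution check. In the sketch below, the line-pair counts come from the list, while the helper name, the example target size, and the ground sample distance (GSD) are illustrative assumptions; one line pair spans two pixels, so the number of line pairs across a target is its size divided by twice the GSD.

```python
# Hypothetical sketch: map sensor resolution to Johnson-criteria levels.
# The line-pair thresholds are the 50%-probability values from the text;
# target size and GSD in the example are assumptions.

JOHNSON_LINE_PAIRS = [           # (level, line pairs across the critical dimension)
    ("identification", 6.4),
    ("recognition", 4.0),
    ("orientation", 1.4),
    ("detection", 1.0),
]

def discrimination_level(target_size_m: float, gsd_m: float) -> str:
    """Return the highest Johnson level achievable at 50% probability.

    One line pair (cycle) spans two pixels, so the number of line pairs
    across the target is size / (2 * ground sample distance).
    """
    line_pairs = target_size_m / (2.0 * gsd_m)
    for level, required in JOHNSON_LINE_PAIRS:
        if line_pairs >= required:
            return level
    return "none"

# A 2.3 m-wide vehicle imaged at 0.15 m GSD gives ~7.7 line pairs.
print(discrimination_level(2.3, 0.15))  # -> identification
```

The same vehicle at 0.5 m GSD yields only about 2.3 line pairs, enough for orientation but not recognition.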

From these definitions, established in 1948, classification was not noted, as image resolution does not imply that things were put into classes. However, two systems emerged and became known as Automatic Target Recognition (ATR) and Combat Identification (CID). ATR looks at an object type, such as a tank. On the other hand, CID is based on the Identify Friend, Foe, or Neutral (IFFN) paradigm for cooperative ID. As above, a unique signature in communications can give a signal as to the exact system. With non-cooperative ID, however, multiple sensors are needed to confirm the exact entity being observed. The sensors then identify the entity by using measurements to detect features that indicate the class to which it belongs. The problem is that recognition only narrows the choices to a subset of options within a class. From there, other information, such as allegiance, behavior, location, and cultural (e.g., political) information, is needed for further disambiguation of the entity within the class. More specifically, there is a need for a standard, which is being defined by the STANAG 4162: Identification Data Combining Process.

3. STANAG 4162: IDENTIFICATION DATA COMBINING PROCESS

STANAG 4162 recognizes that identification of an object and the assessment of its affiliation and threat potential are essential capabilities in civil and military surveillance systems [36]. The standard supports interoperability through consistent and reliable processing; is scalable to any kind of sensor modality; is flexible for information management and distributed processing; and supports operators through workload reduction. The Identification Data Combining Process (IDCP) approach uses a Naive-Bayes analysis in three stages:

• Characterization of Identification Interest: The IDCP approach models mission-based operational identification and discrimination aspects from classes (Friend, Assumed Friend, Neutral, etc.), with operational attributes such as the allegiances Own, Neutral, and Enemy Forces as examples.

• Description of ID Sources: Sources include the real and modeled uncertainties and inaccuracies resulting from the measurements, using conditional probabilities. Technical attributes could include radar frequency type, imaging resolution, and entity adherence to a requested heading change.

• Operational Interpretation of Source Information: Linking the operational interpretation of source information to technical attributes accounts for uncertainties of the transformation processing. Again, the interpretation is described by means of conditional probabilities such as valid IFF modes, image resolution, or standard/structured communication.

Current techniques being proposed for the IDCP include a Bayesian formulation in which the ID has to be determined from a set of possibilities narrowed by a classification as non-friendly. Kruger and Ziegler [37] have formulated the problem through understanding the source data (user, platform and sensor pedigree, and target behavior); future sources would include environmental and cultural factors. Assuming IFFN capabilities, they address the non-cooperative ID scenario for an airport analysis divided into the following classes, summarized below (with the associated a priori use of IFFN communications):

              Own                Neutral            Hostile
           IFFN   No IFFN     IFFN   No IFFN     IFFN   No IFFN
Military    90%     1%         50%     1%         10%     1%
Civilian    10%    99%         50%    99%         90%    99%

1 http://en.wikipedia.org/wiki/Johnson's_criteria


The key point is that IFFN is only available for own-ship analysis, with additional communication for neutral actors. Essentially, if the ID is not available, then there is a need to follow through the detection, classification, recognition, and identification paradigm with advanced attributes of kinematic and appearance information from multi-intelligence sources. Kruger and Ziegler [38] improved on their Bayesian model with a Bayesian ID network that classifies many types of maritime vessel affiliations. Additional efforts could include the path and behavior of travel to distinguish the objects [39]. What is needed, then, is further analysis from tracking information to afford identification.
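As a minimal illustration of the Bayesian ID formulation discussed above, the sketch below performs one naive-Bayes allegiance update from an IFFN observation. The priors and reply likelihoods are hypothetical stand-ins, not values from STANAG 4162 or from Kruger and Ziegler; only the idea that a missing IFFN reply shifts belief away from "own" follows the text.

```python
# A minimal Bayesian allegiance update in the spirit of the IDCP.
# All numbers below are illustrative assumptions, not standardized values.

PRIOR = {"own": 0.3, "neutral": 0.5, "hostile": 0.2}

# Assumed P(valid IFFN reply | allegiance): cooperative ID is
# mostly limited to own-ship, per the discussion in the text.
P_IFFN = {"own": 0.95, "neutral": 0.4, "hostile": 0.02}

def update(prior, likelihood, observed_reply: bool):
    """One Bayes step: posterior(a) is proportional to prior(a) * P(obs | a)."""
    post = {a: prior[a] * (likelihood[a] if observed_reply else 1.0 - likelihood[a])
            for a in prior}
    z = sum(post.values())  # normalizing constant
    return {a: p / z for a, p in post.items()}

# No IFFN reply shifts belief away from "own" toward "neutral"/"hostile".
posterior = update(PRIOR, P_IFFN, observed_reply=False)
print(posterior)
```

A second observation (e.g., adherence to a requested heading change) would be folded in by calling `update` again with that source's conditional probabilities, which is exactly the stage-wise combination the IDCP describes.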

4. TRACKING AND IDENTIFICATION RESEARCH

Over the years, there have been many proposed methods for tracking and identification. Figure 3 showcases these elements, of which we are concerned with the Level 1 processing for tracking and identification. It is noted that the IDCP is focused on the link between Level 1 and Level 3 processing of identification, leading toward impact assessment.

Figure 3: Information Fusion Functions.

4.1 Tracking and Identification

Having sorted and clarified the definitions of tracking and identification, this literature review looks at key papers as well as those that addressed the topic of simultaneous tracking and identification. The key developments in the 1980s began with numerous discussions on radar development and the various designs for identification [40]. A seminal paper by Mori, Chong, and Wishner in 1986 [41] characterized the classification-versus-identification issue by understanding that, without identification information (e.g., IFFN labels), there was a need to detect and classify targets. Researchers in the early 1990s looked at Bayesian approaches to solving the problem from the available sensors (e.g., video and radar). Durrant-Whyte et al. [42] used terminology similar to (and with the same ambiguity as) the Johnson criteria and developed a decentralized approach for tracking and identification by detecting targets with multiple sensors; however, there were no implications of the target allegiance, only labels for the target type (i.e., classification). Likewise, O'Sullivan and Jacobs [43] used the terminology by tracking and recognizing the target type from high-range resolution radar (HRR) profiles; no discussion is made of classification from which one could potentially infer target allegiance. Other methods were used to classify tracks [44]. In 1997, others, while implying identification, used classification for target track discrimination analysis in clutter [45] and target determination [46, 47, 48]. These works used a confusion matrix, which assumed some a priori training, to start with a sensor-based classification update to the target track state. If the target's classification is related to allegiance, then identification can be attributed to the target recognized from kinematic behavior.
Using the ability to perform tracking and classification, multiple sensors could be used in the analysis to determine the target identification [49, 50]. By having knowledge of the a priori training of sensor measurements, a classifier could be trained on a specific set of target classes such that knowing the exact target type and location could infer the allegiance and identity of the target (e.g., IFFN) [51]. For example, with detailed analysis of HRR profiles, targets aligned to known target types with specific behaviors could be discriminated between sets of friend, foe, and neutral targets [52, 53, 54], which was reported as simultaneous tracking and identification (STID). Another approach was to use the results from Electronic Support Measures (ESM) sensors for the target classifier to update a tracker [55, 56]. The use of tracking, classification, and identification has many advantages. By knowing the identity of a target, tracking could be updated to improve track maintenance [57, 58]. Likewise, identification information supports sensor management [59] and group tracking [60, 61]. Furthermore, since most STID results are presented to an operator, the user could refine the identification estimate based on context [62].

4.2 Tracking and Identification using Radar

Classification information can aid target tracking. In fact, with multiple sensor modes, such as Synthetic Aperture Radar (SAR) for stationary targets and HRR for moving targets, the STID methods were applied to moving and stationary target analysis [63]. The coupling with IFFN sensors improved the kinematic state [64, 65, 66]. Using the features, attributes, and categories supports mutually aided STID [67, 68]. Another approach used Ground Moving Target Indicator (GMTI) [69] data for maneuvering targets and HRR analysis to get ID when the target was moving with enough constancy to classify and identify the target [70, 71, 72, 73, 74]. Simulated approaches include the Probabilistic Multi-Hypothesis Tracker (PMHT) [75], the interacting multiple model (IMM) [76, 77], evidential reasoning [1, 78, 79, 80], the Particle Filter (PF) [81], the Unscented Kalman Filter (UKF) [82], feature matching [83], and the mean-shift classifier [84]. Together, a system could determine intent [6].

4.3 Tracking and Identification with EO/IR systems

Classification of targets can come from multiple looks of 1D information, such as radar, or from different sensors, such as imaging. Electro-optical/infrared (EO/IR) sensing provides complementary information in that a target could be classified and associated with IFFN sensors.
EO/IR tracking and identification includes air-to-ground targeting [85], video security [86], people tracking with heat signatures [87] and biometrics [88, 89], and chemicals in clouds [90]. Multi-object tracking and identification includes vehicles [91], pedestrian gait [92], and leaders in groups [93]. The computer vision community has numerous challenge problems to detect, track, classify, and determine the behavior/intent of entities [94]. Other methods include fusing the EO/IR imagery [95] and then doing tracking and identification with Short-Wave Infrared (SWIR) [96] or polarimetric imaging [97].

4.4 Tracking and Identification for Ballistic Target Tracking

Ballistic target tracking is inherently tracking and identification, as the action is assumed to be threatening. The missile itself leads to the identification from its detected location [98, 99, 100].

5. TRACKING AND IDENTIFICATION PROCESS

5.1 Tracking and Identification for Evaluation

An important component of tracking and identification design is evaluation. We highlighted a design-of-experiments approach to testing tracking and identification systems [101], but there is also a need to consider the systems-level aspects of the analysis, such as the tracking method [102, 103, 104], metrics [105, 106], the use of classification information [107, 108], and performance improvement for tracking through target movement [109] and sensor management [110].

5.2 Tracking and Identification System

The future of STID comes from the various features, attributes, and classifications combined with kinematic estimates that lead to intent. Key system enablers include human-computer interfaces (HCIs) that allow a user to confirm, link, or associate this information to identify an object. Additional supporting information is needed from various models (sensor, environment, and target) to aid real-time STID, as shown in Figure 4. The STID supporting information includes data management, user interaction, and models for enhanced analysis. Data management was discussed at the SPIE 2013 panel for big data analytics, for which methods are using cloud computing [111]. User interaction has also been featured as a subject of High-Level Information Fusion [112]. The internal STID modeling includes kinematic model updates [113], target model refinement [114], and resource management [115]. The external STID modeling includes road tracking [116], occlusion detection [117], external signals for referencing [118], and activity analysis [119].


Figure 4: System Architectures.

6. EXAMPLE OF TRACKING AND IDENTIFICATION

From the previous applications, where we use various kinematic tracking (Bayes, Dempster-Shafer, and PCR5) [1, 120, 121] and classification methods [122, 123, 124], we highlight the use of classification for identification. Assuming that we have made a classification of an entity, which we know is a vehicle but have no communication with, STID has to determine whether the vehicle is friendly or hostile. Something could be hostile if it is in the wrong location, moving at suspicious speeds, and/or has hostile positioning. In this case, we have a classification of a vehicle, but are unsure of its intent. For our scenario, the object is on a road and its designation is neutral or potentially hostile, as shown in Figure 5a. We assume that if the vehicle is friendly, we have cooperative ID. However, with no communication, we have to perform identification from the available information to determine whether it is hostile or neutral. The IFFN confusion matrix is CM = [0.7 0.3; 0.3 0.7], due to the inability to correctly classify the target's intent, and we use the Joint Belief Probability Data Association (JBPDAF) tracker [1] with Proportional Conflict Redistribution rule 5 (PCR5). The analysis shown in Figure 6, which compares the Bayesian approach with the PCR5 approach and highlights the changing identification (hostile), requires advanced methods for inclusion in the STANAG 4162. From Figure 5b, the area designated in red (after joint tracking and classification) is the vehicle position near a restricted area (such as in front of a building).
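As a sketch of the two combination rules compared in this example, the code below contrasts a Bayes update with a two-hypothesis PCR5 fusion over {hostile, neutral}. The 0.7/0.3 values mirror the confusion matrix CM above; the uniform prior masses are illustrative assumptions, and this is not the JBPDAF implementation of [1], only the single-step combination rules.

```python
# Sketch: Bayes vs. PCR5 combination over the frame {hostile, neutral}.
# The 0.7/0.3 likelihoods mirror the CM = [0.7 0.3; 0.3 0.7] confusion
# matrix in the text; the uniform prior is an illustrative assumption.

def bayes_update(prior, likelihood):
    """Posterior(i) is proportional to prior(i) * likelihood(i), normalized."""
    post = [p * l for p, l in zip(prior, likelihood)]
    z = sum(post)
    return [p / z for p in post]

def pcr5(m1, m2):
    """PCR5 fusion of two mass functions with singleton focal elements.

    The conflicting mass m1[i]*m2[j] (i != j) is redistributed back to
    the two singletons in proportion to the masses that generated it.
    """
    k = len(m1)
    out = [m1[i] * m2[i] for i in range(k)]  # conjunctive (agreeing) part
    for i in range(k):
        for j in range(k):
            if i == j:
                continue
            if m1[i] + m2[j] > 0:  # partial conflict between m1[i] and m2[j]
                out[i] += m1[i] ** 2 * m2[j] / (m1[i] + m2[j])
                out[j] += m2[j] ** 2 * m1[i] / (m1[i] + m2[j])
    return out

prior = [0.5, 0.5]       # [hostile, neutral], uniform starting belief
evidence = [0.7, 0.3]    # sensor report favoring "hostile" (CM row)

print(bayes_update(prior, evidence))  # -> [0.7, 0.3]
print(pcr5(prior, evidence))          # hostile mass ~0.648; masses sum to 1
```

Run per scan, the Bayes chain commits more aggressively to the reported class, while PCR5 keeps more mass on the alternative, which is the qualitative difference visible in Figure 6.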

Figure 5: Simulation: (a) Object is on a road and (b) results after identification where red is hostile and green is neutral.


Figure 6: Analysis showing the identification: FFN belief versus scan number for the Proportional Conflict Redistribution (PCR5) approach and the Bayes approach (legend: ground truth, neutral, foe).

7. DISCUSSION

STID methods require additional metrics, which extend Measures of Performance (MOPs) toward Measures of Effectiveness (MOEs) [125]. Below, we highlight the needs for additional metrics as related to STANAG 2511 for reliability and credibility classification [126] and STANAG 4067 for tracking.

Metrics        | Level 1 Data/Object (MOP)                                    | Level 2 Situation | Level 3 Threat | Level 4 Process                     | Level 5 User (MOE)         | Level 6 Mission
Track          | Relevant, Reliability, Credibility, Accuracy                 | Patterns          | Intent         | Covariance, Timeliness, Computation | Confidence                 | Efficient
Identification | Provenance, Pedigree, Detection, Recognition, Classification | Activity Score    | Identification | Conflict                            | Attention, Workload, Trust | Effective

8. CONCLUSIONS

In this paper, we summarized the methods in tracking and identification by conducting a literature review and highlighting the developments of the last two decades. The paper highlighted the need for a common terminology so as to support interoperability between systems sharing detection, classification, and identification information from tracking systems. The various NATO STANAGs as a common standard are needed for consistency in terminology and developments. As a use case example, we simulated the effects of Bayesian and belief-based tracking to show that determining the identification (friend, foe, neutral) of a hostile moving entity requires information on the location, as classification alone does not determine the identification or intent.

REFERENCES

[1] Blasch, E., [Derivation of a Belief Filter for Simultaneous High Range Resolution Radar Tracking and Identification], Ph.D. Thesis, Wright State University, (1999).
[2] Hanselman, P., Lawrence, C., Fortunano, E., Tenney, R., et al., "Dynamic Tactical Targeting," Proc. of SPIE, Vol. 5441, (2004).

Proc. of SPIE Vol. 9091 909104-8 Downloaded From: http://spiedigitallibrary.org/ on 01/26/2015 Terms of Use: http://spiedl.org/terms

[3] Blasch, E. P., “Assembling a distributed fused Information-based Human-Computer Cognitive Decision Making Tool,” IEEE Aerospace and Electronic Systems Magazine, Vol. 15, No. 5, pp. 11-17, May (2000).
[4] Blasch, E., “Situation, Impact, and User Refinement,” Proc. of SPIE, Vol. 5096, (2003).
[5] Blasch, E., “Level 5 (User Refinement) issues supporting Information Fusion Management,” Int. Conf. on Info Fusion, (2006).
[6] Blasch, E., “Modeling Intent for a target tracking and identification Scenario,” Proc. of SPIE, Vol. 5428, (2004).
[7] Blasch, E., Plano, S., “DFIG Level 5 (User Refinement) issues supporting Situational Assessment Reasoning,” Int. Conf. on Info Fusion, (2005).
[8] Blasch, E., Kadar, I., Salerno, J., Kokar, M. M., Das, S., Powell, G. M., Corkill, D. D., Ruspini, E. H., “Issues and Challenges in Situation Assessment (Level 2 Fusion),” J. of Advances in Information Fusion, Vol. 1, No. 2, pp. 122-139, Dec. (2006).
[9] Blasch, E., Bosse, E., Lambert, D. A., [High-Level Information Fusion Management and Systems Design], Artech House, Norwood, MA, (2012).
[10] Blasch, E., Steinberg, A., Das, S., Llinas, J., Chong, C.-Y., Kessler, O., Waltz, E., White, F., “Revisiting the JDL model for information Exploitation,” Int’l Conf. on Info Fusion, (2013).
[11] Blasch, E., “Sensor, User, Mission (SUM) Resource Management and their interaction with Level 2/3 fusion,” Int. Conf. on Info Fusion, (2006).
[12] Waltz, E., Llinas, J., [Multisensor Data Fusion], Artech House, (1990).
[13] Bar-Shalom, Y., Li, X. R., [Multitarget-Multisensor Tracking: Principles and Techniques], YBS, New York, (1995).
[14] Blackman, S., Popoli, R., [Design and Analysis of Modern Tracking Systems], Artech House, Boston, (1999).
[15] Ristic, B., Arulampalam, S., Gordon, N., [Beyond the Kalman Filter: Particle Filters for Tracking Applications], Artech House, (2004).
[16] Bar-Shalom, Y., Willett, P. K., Tian, X., [Tracking and Data Fusion: A Handbook of Algorithms], YBS Publications, (2011).
[17] Mallick, M., Krishnamurthy, V., Vo, B.-N., [Integrated Tracking, Classification, and Sensor Management: Theory and Applications], Wiley-IEEE Press, (2012).
[18] Kadar, I., “Passive Multisensor Multitarget Feature-aided Unconstrained Tracking: A Geometric Perspective,” International Conference on Information Fusion, (2000).
[19] Li, X. R., Jilkov, V. P., “Survey of maneuvering target tracking. Part I. Dynamic models,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 39, Issue 4, 1333-1364, Oct. (2003).
[20] Li, X. R., Jilkov, V. P., “Survey of maneuvering target tracking. Part II. Motion Models of Ballistic and Space Targets,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 46, Issue 1, 96-119, (2010).
[21] Li, X. R., Jilkov, V. P., “Survey of maneuvering target tracking. Part V. Multiple Model Methods,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 41, Issue 4, 1255-1321, (2005).
[22] Kadar, I., “Issues in Tracking and Fusion: Geometrical, Dependency and Adaptivity/Quality/Robustness Aspects,” Invited Panel Session: Unsolved, Difficult and Misunderstood Problems and Approaches in Fusion Research, Proc. SPIE, Vol. 5429, (2004).
[23] Kadar, I., “Invited Panel Discussion: Issues in Nonlinear Filtering with Applications to Real-World Problems,” Proc. SPIE, Vol. 5809, (2005).
[24] Blasch, E., Kadar, I., Salerno, J., Kokar, M. M., Das, S., Powell, G. M., Corkill, D. D., Ruspini, E. H., “Issues and challenges of knowledge representation and reasoning methods in situation assessment (Level 2 Fusion),” Proc. of SPIE, Vol. 6235, (2006).
[25] Kadar, I., “Research Challenges: Dependence Issues in Feature/Declaration Data Fusion,” Proc. SPIE, Vol. 6567, (2007).
[26] Blasch, E., Kadar, I., Hintz, K., Biermann, J., Chong, C.-Y., Salerno, J., Das, S., “Issues and challenges in resource management and its interaction with levels 2/3 fusion with applications to real-world problems: an annotated perspective,” Proc. of SPIE, Vol. 6567, (2007).
[27] Kadar, I., Blasch, E., Yang, C., Drummond, O. E., Blair, W. D., Chong, C.-Y., Li, X. R., Mahler, R., Kirubarajan, T., “Panel Discussion: Issues and Challenges in Performance Assessment of Multitarget Tracking Algorithms with Applications to Real-World Problems,” Proc. SPIE, Vol. 6968, (2008).
[28] Blasch, E., Moses, R., Castanon, D., Willsky, A., Hero, A., “Integrated Fusion, Performance prediction, and sensor management for Automatic Target recognition,” Int. Conf. on Info Fusion, (2008).
[29] Kadar, I., “On Robust Estimation: Methods, Applications and Challenges,” Proc. SPIE, Vol. 7336, (2009).
[30] Kadar, I., Antony, R., Blasch, E., Chong, C.-Y., Hall, D., Kirubarajan, T., Llinas, J., Mahler, R. P. S., “Invited Panel Discussion: Real-World Issues and Challenges in Hard and Soft Fusion,” Proc. SPIE, Vol. 8050, (2011).
[31] Kadar, I., Salerno, J. J., Blasch, E., Endsley, M., Fenstermacher, L., Grewe, L., Yang, S. J., “Invited Panel Discussion: Real-World Issues and Challenges in Social/Cultural Modeling with Application to Information Fusion,” Proc. SPIE, Vol. 8392, (2012).
[32] Duda, R. O., Hart, P. E., Stork, D. G., [Pattern Classification (2nd Edition)], Wiley, (2000).
[33] Costa, P. C. G., Laskey, K. B., Blasch, E., Jousselme, A.-L., “Towards Unbiased Evaluation of Uncertainty Reasoning: The URREF Ontology,” International Conference on Information Fusion, (2012).
[34] Blasch, E., “Level 5 (User Refinement) issues supporting Information Fusion Management,” Int. Conf. on Info Fusion, (2006).
[35] Kahler, B., Blasch, E., “Predicted Radar/Optical Feature Fusion Gains for Target Identification,” Proc. IEEE Nat. Aerospace Electronics Conf. (NAECON), (2010).


[36] Stroscher, C., Schneider, F., “Comprehensive Approach to Improve Identification Capabilities,” RTO MP-049 Symposium on Processing Techniques for Military Systems, (2000).
[37] Krüger, M., Ziegler, J., “User-Oriented Bayesian Identification and Its Configuration,” Int’l Conf. on Information Fusion, (2008).
[38] Krüger, M., Ziegler, J., Heller, K., “A Generic Bayesian Network for Identification and Assessment of Objects in Maritime Surveillance,” International Conference on Information Fusion, (2012).
[39] Blasch, E., Banas, C., Paul, M., Bussjager, B., Seetharaman, G., “Pattern Activity Clustering and Evaluation (PACE),” Proc. SPIE, Vol. 8402, (2012).
[40] Holben, R. D., “Moving Target Identification (MTI) with passive sensors,” Proc. SPIE, Vol. 219, (1980).
[41] Mori, S., Chong, C.-Y., Tse, E., Wishner, R. P., “Tracking and classifying multiple targets without a priori identification,” IEEE Transactions on Automatic Control, Vol. 31, Issue 5, 401-409, (1986).
[42] Rao, B. S. Y., Durrant-Whyte, H. F., “A decentralized Bayesian algorithm for identification of tracked targets,” IEEE Transactions on Systems, Man and Cybernetics, Vol. 23, Issue 6, 1683-1698, (1993).
[43] O'Sullivan, J. A., Jacobs, S. P., Miller, M. I., Snyder, D. L., “A likelihood-based approach to joint target tracking and identification,” Asilomar Conference on Signals, Systems and Computers, (1993).
[44] Schubert, J., [Cluster-based Specification Techniques in Dempster-Shafer Theory for an Evidential Intelligence Analysis of Multiple Target Tracks], Ph.D. thesis, TRITA-NA-9410, Dept. of Numerical Analysis and Computing Sci., Royal Institute of Tech., (1994).
[45] Salmond, D., Fisher, D., Gordon, N., “Tracking and Identification for closely spaced objects in clutter,” European Control Conf., (1997).
[46] Chang, K. C., Fung, R. M., “Target Identification with Bayesian Networks in a Multiple Hypothesis Tracking System,” SPIE Optical Engineering Journal, Vol. 36, No. 3, pp. 684-691, (1997).
[47] Leung, H., Li, Y., Bossé, E., Blanchette, M., Chan, K. C. C., “Improved multiple target tracking using Dempster-Shafer Identification,” Proc. SPIE, Vol. 3068, (1997).
[48] Blasch, E., Hong, L., “Simultaneous Tracking and Identification,” IEEE Conference on Decision Control, pp. 249-256, (1998).
[49] Blasch, E. P., “Fusion of HRR and SAR information for Automatic Target Recognition and Classification,” Int. Conf. on Info Fusion, (1999).
[50] Blasch, E., Hong, L., “Sensor Fusion Cognition using belief filtering for tracking and identification,” Proc. SPIE, Vol. 3719, (1999).
[51] Mitchell, R. A., Westerkamp, J. J., “Robust Statistical Feature-based aircraft identification,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 35, No. 3, 1077-1094, (1999).
[52] Blasch, E., Hong, L., “Simultaneous Feature-based Identification and Track Fusion,” IEEE Conf. on Dec. Control, (1998).
[53] Blasch, E. P., Westerkamp, J. J., Layne, J. R., Hong, L., Garber, F. D., Shaw, A., “Identifying moving HRR signatures with an ATR Belief Filter,” Proc. SPIE, Vol. 4053, (2000).
[54] Blasch, E., Hong, L., “Data Association through Fusion of Target track and Identification Sets,” Int. Conf. on Info Fusion, (2000).
[55] Li, J., Luo, Z.-Q., Wong, K. M., Bossé, E., “Convex optimization approach to identity fusion for multisensor target tracking,” IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 31, pp. 172-178, (2001).
[56] Challa, S., Pulford, G. W., “Joint target tracking and classification using radar and ESM sensors,” IEEE Trans. on Aerospace and Electronic Systems, Vol. 37, Issue 3, 1039-1055, (2001).
[57] Farina, A., Lombardo, P., Marsella, M., “Joint tracking and identification algorithms for multisensor data,” IEE Proceedings Radar, Sonar and Navigation, Vol. 149, Issue 6, (2002).
[58] Blasch, E., Connare, T., “Improving Track maintenance Through Group Tracking,” Proc. of the Workshop on Estimation, Tracking, and Fusion: A Tribute to Yaakov Bar-Shalom, Monterey, CA, 360-371, May (2001).
[59] Mahler, R. P. S., “Detecting, Tracking, and Classifying Group Targets: A Unified Approach,” Proc. SPIE, Vol. 4380, (2001).
[60] Blasch, E., Connare, T., “Improving track accuracy through Group Information Feedback,” Int. Conf. on Info Fusion, (2001).
[61] Connare, T., Blasch, E., Schmitz, J., Salvatore, F., Scarpino, F., “Group IMM tracking utilizing Track and Identification Fusion,” Proc. of the Workshop on Estimation, Tracking, and Fusion: A Tribute to Yaakov Bar-Shalom, 205-220, (2001).
[62] Blasch, E. P., Plano, S. B., “JDL Level 5 Fusion model ‘user refinement’ issues and applications in group tracking,” Proc. SPIE, Vol. 4729, (2002).
[63] Blasch, E., “Information-Theory-based feature-aided tracking and identification algorithm for tracking moving and stationary targets through high-turn maneuvers using fusion of SAR and HRR information,” Proc. of SPIE, Vol. 4727, (2002).
[64] Schuck, T. M., Shoemaker, B., Willey, J., “Identification friend-or-foe (IFF) sensor uncertainties, ambiguities, deception and their application to the multi-source fusion process,” IEEE National Aerospace and Electronics Conference, (2000).
[65] Kirubarajan, T., Bar-Shalom, Y., Pattipati, K. R., Kadar, I., “Ground Target Tracking with Variable Structure IMM Estimator,” IEEE Transactions on Aerospace and Electronic Systems (AES), January (2000).
[66] Blasch, E., Connare, T., “Feature-Aided JBPDAF group tracking and classification using an IFFN sensor,” Proc. SPIE, Vol. 4728, (2002).


[67] Minvielle, P., Marrs, A. D., Maskell, S., Doucet, A., “Joint target tracking and identification - Part I: sequential Monte Carlo model-based approaches,” International Conference on Information Fusion, (2005).
[68] Yang, C., Blasch, E., “Mutual Aided Target Tracking and Identification,” Proc. of SPIE, Vol. 5099, (2003).
[69] Drummond, O. E., “On Categorical Feature Aided Tracking,” Proc. SPIE, Vol. 5203, (2004).
[70] Wu, S., Hong, L., Layne, J. R., “2D rigid-body target modelling for tracking and identification with GMTI/HRR measurements,” IEE Proceedings Control Theory and Applications, Vol. 151, Issue 4, (2004).
[71] Blasch, E., Yang, C., “Ten methods to Fuse GMTI and HRRR Measurements for Joint Tracking and Identification,” Int’l Conf. on Info Fusion, (2004).
[72] Anderson, S. J., “Target Classification, Recognition, and Identification with HF radar,” RTO-MP-080, (2004).
[73] Lancaster, J., Blackman, S., Taniguchi, E., “Joint IMM/MHT tracking and identification with application to ground target tracking,” Proc. SPIE, Vol. 5913, (2005).
[74] Lancaster, J., Blackman, S., “Joint IMM/MHT Tracking and Identification for Multi-Sensor Ground Target Tracking,” Int’l Conf. on Information Fusion, (2006).
[75] Davey, S., Gray, D., Streit, R., “Tracking, Association, and Classification: A Combined PMHT Approach,” Digital Signal Processing, 12, 372-382, (2002).
[76] Yang, C., Blasch, E., “Pose Angular-Aiding for Maneuvering Target Tracking,” Int. Conf. on Info Fusion, (2005).
[77] Lancaster, J., Blackman, S., “Joint IMM/MHT tracking and identification with confusers and track stitching,” Proc. SPIE, Vol. 6236, (2006).
[78] Angelova, D., Mihaylova, L., “Joint Tracking and Classification with Particle Filtering and Mixture Kalman Filtering using Kinematic Radar Information,” Digital Signal Processing, (2006).
[79] Khairnar, D. G., Nandakumar, S., Merchant, S. N., Desai, U. B., “Nonlinear Target Identification and Tracking Using UKF,” IEEE Conf. on Granular Computing, (2007).
[80] Dezert, J., Tchamova, A., Smarandache, F., Konstantinova, P., “Target Type Tracking with PCR5 and Dempster's rules: A Comparative Analysis,” Int’l Conf. on Information Fusion, (2006).
[81] Pannetier, B., Dezert, J., Pollard, E., “Improvement of Multiple Ground Targets Tracking with GMTI Sensor and Fusion of Identification Attributes,” IEEE Aerospace Conference, (2008).
[82] Kouemou, G., Neumann, C., Opitz, F., “Exploitation of track accuracy information in fusion technologies for radar target classification using Dempster-Shafer Rules,” Int’l Conf. on Information Fusion, (2009).
[83] Chen, H., Chen, G., Blasch, E., Schuck, T., “Robust Track Association and Fusion with Extended Feature Matching,” invited chapter in Optimization & Cooperative Control Strategies, M. J. Hirsch et al. (Eds.), LNCIS 381, pp. 319-354, Springer-Verlag, Berlin Heidelberg, (2009).
[84] He, X., Tharmarasa, R., Pelletier, M., Kirubarajan, T., “Two-level automatic Multiple target joint tracking and classification,” Proc. SPIE, Vol. 7698, (2010).
[85] Blasch, E., Kahler, B., “Multi-resolution EO/IR Tracking and Identification,” Int. Conf. on Info Fusion, (2005).
[86] Snidaro, L., Piciarelli, C., Foresti, G. L., “Activity Analysis for Video Security Systems,” IEEE International Conference on Image Processing, (2006).
[87] Hao, Q., Hu, F., Xiao, Y., “Multiple Human Tracking and Identification With Wireless Distributed Pyroelectric Sensor Systems,” IEEE Systems Journal, Vol. 3, Issue 4, (2009).
[88] Li, X., Chen, G., Blasch, E., Hzu, H., McKenna, T., “A non-cooperative long-range biometric image tracking and recognition (BITAR) method for maritime surveillance,” Proc. of SPIE, Vol. 6979, (2008).
[89] Li, X., Chen, G., Ji, Q., Blasch, E., “A non-cooperative long-range biometric system for maritime surveillance,” IEEE Int. Conf. on Pattern Recognition, Dec. (2008).
[90] Chen, H., Li, X. R., “Joint identification and tracking of multiple CBRNE clouds based on sparsity pursuit,” International Conference on Information Fusion, (2010).
[91] Ling, H., Bai, L., Blasch, E., Mei, X., “Robust Infrared Vehicle Tracking Across Target Pose Change using L1 regularization,” Int’l Conf. on Info Fusion, (2010).
[92] Liu, Y., Wang, X., Yang, J., Yao, L., “Multi-objects tracking and online identification based on SIFT,” International Conference on Multimedia Technology (ICMT), (2011).
[93] Carmi, A. Y., Mihaylova, L., Septier, F., Pang, S. K., Gurfil, P., Godsill, S. J., “MCMC-based tracking and identification of leaders in groups,” IEEE Int’l Conf. on Computer Vision Workshops, (2011).
[94] Blasch, E., Ling, H., Wu, Y., Seetharaman, G., Talbert, M., Bai, L., Chen, G., “Dismount Tracking and Identification from Electro-Optical Imagery,” Proc. SPIE, Vol. 8402, (2012).
[95] Liu, Z., Blasch, E., Xue, Z., Langaniere, R., Wu, W., “Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Survey,” IEEE Trans. Pattern Analysis and Machine Intelligence, 34(1):94-109, (2012).
[96] Lemoff, B. E., Martin, R. B., Sluch, M., Kafka, K. M., McCormick, W., Ice, E., “Long-Range Night/Day Human Identification using Active-SWIR Imaging,” Proc. SPIE, Vol. 8704, (2013).
[97] Bartlett, B. D., Rodriguez, M. D., “Snapshot Spectral and Polarimetric Imaging; Target Identification with Multispectral Video,” Proc. SPIE, Vol. 8743, (2013).


[98] Draper, J. S., Perlman, S., Chuang, C. K., Hanson, M., Lillard, L., Hibbeln, B., Sene, D., “Tracking and identification of distant missiles by remote sounding,” IEEE Aerospace Conference, (1999).
[99] Wei, M., Chen, G., Pham, K., et al., “Game Theoretic Target Assignment Approach to Ballistic Missile Defense,” Proc. of SPIE, Vol. 6969, (2008).
[100] Li, X., Chen, G., et al., “Detecting missile-like flying target from a distance in sequence images,” Proc. of SPIE, Vol. 6968, (2008).
[101] Blasch, E., Yang, C., Kadar, I., Chen, G., Bai, L., “Overview of Design of Experiments Performance Assessment of Multitarget Tracking Algorithms,” Proc. SPIE, Vol. 8392, (2012).
[102] Chang, K. C., Zhi, T., Saha, R. K., “Performance Evaluation of Track Fusion with Information Matrix Filter,” IEEE Trans. Aerospace and Electronic Systems, Vol. 38, No. 2, April (2002).
[103] Pannetier, B., Dezert, J., “Extended and Multiple Target Tracking: Evaluation of a Hybridized Solution,” Int’l Conf. on Info Fusion, (2011).
[104] Straka, O., Duník, J., Šimandl, M., Blasch, E., “Randomized Unscented Transform in State Estimation of non-Gaussian Systems: Algorithms and Performance,” Int. Conf. on Info Fusion, (2012).
[105] Blasch, E., Straka, O., Duník, J., Šimandl, M., “Multitarget Tracking Performance Analysis Using the Non-Credibility Index in the Nonlinear Estimation Framework (NEF) Toolbox,” Proc. IEEE Nat. Aerospace Electronics Conf. (NAECON), (2010).
[106] Blasch, E., Valin, P., “Track Purity and Current Assignment Ratio for Target Tracking and Identification Evaluation,” Int. Conf. on Info Fusion, (2011).
[107] Blasch, E., Straka, O., Yang, C., Qiu, D., Šimandl, M., Ajgl, J., “Distributed Tracking Fidelity-Metric Performance Analysis Using Confusion Matrices,” Int. Conf. on Info Fusion, (2012).
[108] Pannetier, B., Dezert, J., “Track Segment Association with Classification Information,” Workshop on Sensor Data Fusion: Trends, Solutions, Applications, 60-65, (2012).
[109] Kahler, B., Blasch, E., “Decision-Level Fusion Performance Improvement from Enhanced HRR Radar Clutter Suppression,” J. of Advances in Information Fusion, Vol. 6, No. 2, Dec. (2011).
[110] Senlap, E. T., “Coordination of sensor platforms for tracking and identifying objects: Performance Evaluations,” International Conference on Information Fusion, (2013).
[111] Liu, B., Blasch, E., Chen, Y., Aved, A. J., Hadiks, A., Shen, D., Chen, G., “Information Fusion in a Cloud Computing Era: A Systems-Level Perspective,” IEEE Aerospace and Electronic Systems Magazine, (2014).
[112] Blasch, E. P., Lambert, D. A., Valin, P., Kokar, M. M., Llinas, J., Das, S., et al., “High Level Information Fusion (HLIF): Survey of Models, Issues, and Grand Challenges,” IEEE Aerospace and Elec. Sys. Mag., Vol. 27, No. 9, Sept. (2012).
[113] Yang, C., Blasch, E., “Kalman Filtering with Nonlinear State Constraints,” IEEE Trans. Aerospace and Electronic Systems, Vol. 45, No. 1, 70-84, Jan. (2009).
[114] Wu, Y., Wang, J., Cheng, J., Lu, H., Blasch, E., Bai, L., Ling, H., “Real-Time Probabilistic Covariance Tracking with Efficient Model Update,” IEEE Trans. on Image Processing, 21(5):2824-2837, (2012).
[115] Yang, C., Kaplan, L., Blasch, E., “Performance Measures of Covariance and Information Matrices in Resource Management for Target State Estimation,” IEEE Transactions on Aerospace and Electronic Systems, Vol. 48, No. 3, pp. 2594-2613, (2012).
[116] Yang, C., Blasch, E., “Fusion of Tracks with Road Constraints,” J. of Advances in Information Fusion, Vol. 3, No. 1, 14-32, June (2008).
[117] Mei, X., Ling, H., Wu, Y., Blasch, E., Bai, L., “Efficient Minimum Error Bounded Particle Resampling L1 Tracker with Occlusion Detection,” IEEE Trans. on Image Processing (T-IP), Vol. 22, Issue 7, 2661-2675, (2013).
[118] Yang, C., Nguyen, T., Blasch, E., “Field Testing and Evaluation of Mobile Positioning with Fused Mixed Signals of Opportunity,” IEEE Aerospace and Electronic Systems Magazine, (2014).
[119] Hammoud, R. I., Sahin, C. S., Blasch, E. P., Rhodes, B. J., “Multi-Source Multi-Modal Activity Recognition in Aerial Video Surveillance,” IEEE International Computer Vision and Pattern Recognition Conference, (2014).
[120] Blasch, E., Dezert, J., Pannetier, B., “Overview of Dempster-Shafer and Belief Function Tracking Methods,” Proc. SPIE, Vol. 8745, (2013).
[121] Blasch, E., Garcia Herrero, J., Snidaro, L., Llinas, J., Seetharaman, G., Palaniappan, K., “Overview of contextual tracking approaches in information fusion,” Proc. SPIE, Vol. 8747, (2013).
[122] Dezert, J., Smarandache, F., [Advances and Applications of DSmT for Information Fusion (Collected Works)], Vols. 1-3, American Research Press, (2004-2009). http://www.gallup.unm.edu/~smarandache/DSmT.htm
[123] Djiknavorian, P., Grenier, D., Valin, P., “Approximation in DSm theory for fusing ESM reports,” Int. Workshop on Belief Functions 2010, Brest, France, April (2010).
[124] Blasch, E., Dezert, J., Valin, P., “DSmT Applied to Seismic and Acoustic Sensor Fusion,” IEEE Nat. Aerospace Elect. Conf., (2011).
[125] Blasch, E., Breton, R., Valin, P., “Information Fusion Measures of Effectiveness (MOE) for Decision Support,” Proc. SPIE, Vol. 8050, (2011).
[126] Blasch, E., Laskey, K. B., Jousselme, A.-L., Dragos, V., Costa, P. C. G., Dezert, J., “URREF Reliability versus Credibility in Information Fusion (STANAG 2511),” Int’l Conf. on Info Fusion, (2013).
