
Visualization of Multi-INT Fusion Data using Java Viewer (JVIEW)

Erik Blasch, Alex Aved, James Nagy, Stephen Scott
Air Force Research Laboratory, Information Directorate, Rome, NY 13441

ABSTRACT

Visualization is important for multi-intelligence (multi-INT) fusion, and we demonstrate issues in presenting physics-derived (i.e., hard) and human-derived (i.e., soft) fusion results. Physics-derived solutions (e.g., imagery) typically involve sensor measurements that are objective, while human-derived solutions (e.g., text) typically involve language processing. Both kinds of results can be geographically displayed for user-machine fusion. The attributes of an effective and efficient display are not well understood, so we demonstrate issues and results for filtering, correlation, and association of data for users, be they operators or analysts. Operators require near-real-time solutions, while analysts can apply non-real-time solutions for forensic analysis. In a use case, we demonstrate examples using the JVIEW concept, which has been applied to piloting, space situation awareness, and cyber analysis. Using the open-source JVIEW software, we showcase a big data solution for a multi-intelligence fusion application, specifically context-enhanced information fusion.

Keywords: Multi-INT fusion, JVIEW, User Defined Operating Picture, UDOP, Level 5 fusion, tracking

1. INTRODUCTION

Three concepts for human interaction with multi-intelligence (multi-INT) information fusion systems are situation awareness (SAW) [1], information management [2], and visualization [3]. Visualization is key for any fusion system, providing for interactive human-in-the-loop (HIL) or human-on-the-loop (HOL) development, appreciation, use, and systems-level performance. For example, HIL includes pilots with local SAW [4], whereas HOL includes ground operators such as air traffic controllers with global SAW [5]. Other approaches include staff-in-the-loop, which brings visualizations as a user associate [6]. SAW information can come from textual reports and regulations providing social and cultural persistent SAW [7], as well as from visual reporting of capabilities [8]. Advances in visualization, such as JVIEW, developed at the Air Force Research Laboratory (AFRL) and led by Jason Moore, Aaron McVay, and Chad F. Salisbury, support SAW. Since 1999 [9], JVIEW has provided 3D visualization for aircraft flight [10, 11, 12] and air traffic management [13].

1.1 Information Fusion

The Data Fusion Information Group (DFIG) model (see Figure 1) includes Level 5 "User Refinement". The levels of information fusion are divided between low-level information fusion (LLIF) and high-level information fusion (HLIF) [14], as shown in Figure 2, where the levels have a duality between man and machine processing.

Figure 1: DFIG Information Fusion model (L = Level).

Figure 2: LLIF versus HLIF.

LLIF (L0-L1) comprises data registration (Level 0 [L0]) [15] and explicit object assessment (L1), such as an aircraft location and identity [16, 17, 18]. HLIF (L2-L6) comprises much of the open discussion of the last decade. The levels, denoting processing, include situation (L2) and impact (L3) assessment [19], along with resource (L4) [20], user (L5) [21], and mission (L6) refinement [22]. Here we focus on Level 5 fusion through effective visualization with a User Defined Operating Picture (UDOP).

In order to provide SAW, there is a need to leverage developments in big data processing such as machine, visual, and text analytics [23]. These developments would enable operators to better understand the plethora of information available about the environment (e.g., weather [24]), airspace/airports (e.g., other aircraft), and things on the ground (e.g., aircraft takeoffs and landings) [25]. The visualization of all of this information has to be pragmatically displayed to a user for safety, timeliness, accuracy, and confidence of emerging events. Together, these attributes constitute the need for developments spanning L2-L5 situation awareness and user refinement for cognitive readiness [26, 27].

Three emerging situation awareness topics are SAW evaluation, information management, and visualization. SAW has been attributed to pilots for effective understanding of their surroundings in such applications as takeoffs and landings [28]. Likewise, SAW efforts include airport management [29] and communications evaluation [30]. SAW is also developed in connection with threat prediction [31] and uncertainty reduction [32]. A recent survey looks at the metrics, evaluation, and methods that support SAW techniques, which are usually based on clustering [33].

Information management methods and architectures are needed for future information fusion systems [34]. Information fusion includes many concepts such as target tracking, data monitoring, and an integrated SAW picture for interactive user analysis. Future displays will seek methods for text, audio, and visual analytics of the information unfolding in a scene [35]. Human-derived text and sensor-derived visual analytics need to be matched with machine analytics for effective visualization.

Visualization of information is important for user interaction with the data, which is a HLIF decision support challenge [36]. For example, icons representing data analysis are important [37]. A recent example focuses on cockpit icon degradation [38]. Icons displaying uncertainty could improve safety in air traffic collision avoidance systems, mark impending traffic hazards, and provide warnings of critical situations. Visualization efforts need to be tested with operators for usability, workload, attention, and trust [39]. For many aerospace systems there has been a need for visualization in the cockpit and on the ground. In this paper, we highlight the developments in visualization using JVIEW as a UDOP for SAW [9-12] for operators on the ground.

1.2 Dynamic Data-Driven Application Systems (DDDAS)

One emerging technology for big data processing is DDDAS [40, 41], which is consistent with information fusion [42]. DDDAS seeks to leverage modeling with measurement updates through advanced software methods. Most DDDAS applications are data-intensive, such as visualizing an environment developed from theoretical models and real-time measurements.
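To make the DDDAS feedback loop concrete, the following minimal Java sketch shows a running model periodically corrected by incoming measurements, with the residual available to steer future collection. All names and numbers are hypothetical for exposition; they are not drawn from JVIEW or any particular DDDAS toolkit.

    // Minimal sketch of a DDDAS feedback loop (illustrative only).
    public final class DddasLoopSketch {
        public static void main(String[] args) {
            double modelState = 0.0;      // e.g., predicted storm-cell position (1-D for brevity)
            double modelVelocity = 1.0;   // simple constant-rate dynamics model
            double gain = 0.3;            // blending weight for measurement updates

            double[] measurements = {0.9, 2.1, 2.8, 4.2, 5.1}; // stand-in sensor feed

            for (double z : measurements) {
                modelState += modelVelocity;          // (1) advance the theory/model
                double residual = z - modelState;     // (2) compare with a live measurement
                modelState += gain * residual;        // (3) dynamically correct the model
                System.out.printf("corrected state = %.2f (residual %.2f)%n",
                                  modelState, residual);
                // (4) in a full DDDAS system, the residual would also steer the
                //     sensors, e.g., re-tasking collection toward high-error regions.
            }
        }
    }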
DDDAS concepts align with the information fusion levels: (L1) object assessment, (L2) situation assessment, (L3) threat assessment, (L4) process refinement, (L5) user refinement, and (L6) mission management, as shown in Figure 3.

Figure 3: DDDAS Aligned with Information Fusion.


As an example of a complex avionics system, the National Aeronautics and Space Administration (NASA) has an effort called the Airspace Concept Evaluation System (ACES) [43] to explore air traffic management (ATM). ACES seeks to reduce flight delays, increase capacity, and mitigate risks in air transportation within the National Airspace System (NAS). ACES has focused on simulation modeling, data integration, and user actions within a modeling framework for community understanding. One example is weather assessment for ATM [44]. Currently, efforts are being developed for visualization [12] and for integration of flight physics, airspace configurations, airport layouts [45], weather modeling [46], and scheduling in the ACES system [47].

Using the aerospace example to demonstrate the relationship between DDDAS and information fusion, the application is a scenario of a safe flight mission (L6). While not a one-to-one mapping, it can be assumed that mission management, driven by scenarios, avoids identified threats (L3) such as severe weather. The theory and measurements come from the models of aircraft behavior and dynamic models (L1), which use computational methods to support situation awareness (L2) visualization. The user interacts with the machine through its displays and interactive capabilities (L5), in coordination with the advanced software computations, to manage resources, control aircraft motors and sensors (L4), and collect new measurements. The key developments from advances in the last decade include distributed, faster, and more reliable communication systems that enable such processing and coordination between operators and their machines.

Big data presents numerous challenges, organized by volume, velocity, variety, and veracity (the four V's), as shown in Figure 4. For multi-INT fusion, the challenges include scalability across distributed resources [48]. The volume of the data can be large, as with wide-area motion imagery (WAMI) [49, 50, 51]. The variety of data could include text and imagery as multimodal data [52, 53]. The velocity includes the speed of the data, such as streaming video [54, 55]. Finally, the veracity includes the uncertainty of the data that requires assessment [56, 57, 58].

Figure 4: Four V's of data: volume (scale of data), variety (types of data), velocity (speed of data), and veracity (uncertainty of data).

To solve the big data problem, there are developments in data management such as cloud computing [59] which could provide benefits to user interaction with multi-INT data. For example, the Watson Foundations big data capabilities include1:

• Data Management & Warehouse: Increase database performance across multiple workloads while lowering storage costs, and realize extreme speed with capabilities optimized for deep analytics.
• Hadoop System: Bring the power of Apache Hadoop to the enterprise with application accelerators, analytics, visualization, development tools, and performance and security features.
• Stream Computing: Efficiently deliver real-time analytic processing on constantly changing data in motion, and enable descriptive and predictive analytics to support real-time decisions.
• Content Management: Enable comprehensive content lifecycle and document management with cost-effective control of existing and new types of content, with scale, security, and stability.
• Information Integration & Governance: Build confidence in big data with the ability to integrate, understand, manage, and govern data appropriately across its lifecycle.

1 http://www-01.ibm.com/software/data/bigdata/


New technology in computational theory for big data analytics allows users to interact with the system software and visualizations based on the mission needs. To do this, they need a User Defined Operating Picture (UDOP), in contrast with a Common Operating Picture (COP) (see [1], Section 3.3). A COP is just the visualization of exploited information [60] that does not allow user interaction, whereas a UDOP allows tailoring of the display to the user's needs.

The rest of the paper is as follows: Section 2 discusses the UDOP, Section 3 details JVIEW with examples, Section 4 presents multi-INT fusion results in JVIEW, and Section 5 presents conclusions.

2. USER-DEFINED OPERATING PICTURE

The UDOP concept has arisen as a response to observed limitations of current military 'picturing' capabilities in coping with today's and tomorrow's evolving operational needs [1]. Many of these challenges can only be adequately addressed through the provision of a more flexible and end-user-community-driven capability, one which supports a more rapid and agile reconfiguration of picturing systems, as shown in Figure 5.

Figure 5: UDOP Concept, contrasting a Common Operating Picture with a User-Defined Operating Picture (windows can be moved to afford a user various displays of multi-INT data).

A UDOP (a) improves individual awareness and sensemaking; (b) supports more sophisticated interactive tasks such as analysis, exploration, and pattern trending; (c) aids users by feeding inputs into modeling and decision aids; (d) affords decision making in collaborative team contexts; (e) provides a common frame of reference; (f) reduces cognitive load via exploitation of visual techniques; and (g) enables effective workflow experiences [1]. A UDOP is a configuration of services providing data access mechanisms for a variety of data and information sources, which can be used to present visual representations to a user in historical, current, or anticipatory perspectives. One UDOP system is JVIEW, which provides flexibility in visualizing single-INT or multi-INT fused data.
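As an illustration of this service-oriented view, the following Java sketch shows how user-selectable data services and render layers might compose into a tailorable picture. The interfaces and names are assumptions for exposition, not JVIEW's actual API.

    import java.time.Instant;
    import java.util.List;

    // Hypothetical sketch of the UDOP idea of composable, user-chosen services.
    interface DataService<T> {
        String name();                              // e.g., "IMINT tracks", "weather cells"
        List<T> query(Instant from, Instant to);    // historical, current, or projected window
    }

    interface LayerRenderer<T> {
        void render(List<T> items);                 // draw one layer of the picture
    }

    final class UdopView<T> {
        private final DataService<T> source;
        private final LayerRenderer<T> renderer;

        UdopView(DataService<T> source, LayerRenderer<T> renderer) {
            this.source = source;
            this.renderer = renderer;
        }

        // The user tailors the picture by choosing which services to subscribe
        // to and over what time window, in contrast with a fixed COP.
        void refresh(Instant from, Instant to) {
            renderer.render(source.query(from, to));
        }
    }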

3. JVIEW

Building on the ACES developments, JVIEW was created to enable a UDOP. JVIEW is written entirely in Java, and its 3D components utilize the OpenGL API to gain hardware graphics acceleration [12]. The objective was to develop, implement, and integrate visualization technologies that support the National Aeronautics and Space Administration (NASA) and Federal Aviation Administration (FAA) goals for visualization of the National Airspace System (NAS). JVIEW relies on concrete Object-Oriented Design (OOD) and programming techniques to provide a robust and venue-non-specific visualization environment.

There are three types of modules created within JVIEW. The first type consists of venue-specific modules called facilitators, which address the specific task of placing objects in a scene and manipulating their behavior. The second type consists of plug-ins, which are venue non-specific and add general functionality to the system. The last type consists of data source loaders called oddments. JVIEW is completely data-source independent and is, by design, a general visualization solution, since it does not concentrate solely on space, air, ground, or littoral environments; one further example is cyber. When a new data type becomes available and visualization is necessary, JVIEW allows the programmer to quickly implement the additions. It also allows analysts to operate in an environment with which they are familiar, instead of having to learn a new interface. Thus, the software and visualization are developed to enable a robust, flexible, and tailorable UDOP display for users to interact with the data.

In direct support of NASA and FAA goals for visualization of the National Airspace System, a new application using JVIEW technology is the ACES Viewer [12]. The ACES Viewer is an information visualization tool designed to provide visual representations of the output of ACES. The application provides mechanisms to load the various types of data used and output by the ACES simulation, process the data, and display it. The ACES Viewer is not limited to visualization of ACES data; it is a set of services from which the user can select differently configured layered services.
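The three module types can be pictured as a small set of interfaces. The following Java sketch is illustrative only; the names and signatures are assumptions for exposition rather than JVIEW's published classes.

    // Illustrative sketch of the three JVIEW module types described above.
    interface Facilitator {           // venue-specific: places objects in a scene
        void placeInScene(Object sceneElement);
        void updateBehavior(double elapsedSeconds);
    }

    interface Plugin {                // venue non-specific: adds general functionality
        String capability();          // e.g., "screen capture", "measurement tool"
        void install();
    }

    interface Oddment<T> {            // data source loader: keeps JVIEW source-independent
        boolean canLoad(String uri);  // e.g., recognizes a file extension or scheme
        Iterable<T> load(String uri);
    }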


Figure 6 shows a visualization comprised of three transforms (Force Grid, Filter, and Time Filter), a renderer (Slice Volume Renderer), and a data access component (JDBCTable) [12]. In this case, there is a linear flow of data from the aircraft state message table to the renderer. This means that the Filter Transform cannot process data until the Time Filter does, and similarly, the Force Grid Transform cannot perform its processing until the Filter Transform does. These data dependencies effectively force the entire data flow to be processed in a single thread. Figure 6 thus represents a limited approach to concurrency in the ACES Viewer.

Figure 6: Visualization with Linear Data Flows.

Figure 7: Visualization with Parallel Data Flows.

As an improvement, Figure 7 shows multiple pathways through the processing network to the renderers. Each pathway can be processed concurrently, since there are no data dependencies between separate pathways. In this example, we can use multiple threads for concurrent processing (equal in number to the paths from the data access components to the renderers). After analyzing visualizations produced by researchers and the IV4D extension to the ACES Viewer, the application can use anywhere from one to three threads for concurrent processing of visualization data.

By default, JVIEW Tables now issue change events asynchronously using a pool of threads, rather than issuing the events synchronously and sequentially (a minimal sketch of this pattern appears below). This removes the need for individual Table/Transform implementations to handle concurrent processing in most situations, while still taking advantage of multiple processors in the host system.

To take better advantage of multi-processor systems, we have begun exploring opportunities for concurrency within each of the visualization components. The ForceEffectGrid Transform was one of the first targeted routines because it is highly Central Processing Unit (CPU) intensive, and the algorithm it uses is easily parallelized. A significant increase in runtime performance of the ForceEffectGrid on multi-processor systems was achieved: performance scales nearly linearly with the number of available CPUs, and tests on a 16-processor machine produced pleasing results.
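The asynchronous change-event pattern can be sketched as follows. This class is a minimal stand-in, assuming a listener list and a fixed thread pool; it is not JVIEW's actual Table implementation.

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Minimal sketch of asynchronous change-event dispatch (illustrative only).
    final class EventTable {
        private final List<Runnable> listeners = new CopyOnWriteArrayList<>();
        private final ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        void addChangeListener(Runnable listener) {
            listeners.add(listener);
        }

        // Instead of invoking each listener synchronously (serializing the whole
        // data flow on one thread), each independent pathway is dispatched to the
        // pool, so transforms without mutual data dependencies run concurrently.
        void fireChanged() {
            for (Runnable l : listeners) {
                pool.submit(l);
            }
        }

        void shutdown() {
            pool.shutdown();
        }
    }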

JVIEW employs many advances in software computing that enable development and visualization of data. Inherently, each one of the options listed below is a service that the user can subscribe to for a presentation of data [12]:

• Geodesic Bounding Box with Source Projection
• Stereoscopic Rendering
• GLSL Shader Program Abstractions
• OpenGL Lighting
• Geographic Coordinate Systems
• Rendering Appearance Settings
• Animation Framework
• Vertex Buffer Objects
• Texture API
• Picking/Selection
• Bounding Volumes
• Camera System
• Data set reduction
• CSV parser
• Schema editor
• World data configurator

3.1 Keyhole Markup Language

The Keyhole Markup Language (KML) is an XML file format used by Google® to represent geometry, points, and other geo-referenced information. A custom parser was written for JVIEW [12] because the Google-provided parser was slow (due to extensive use of Java's reflection utilities) and did not support the newest version of the KML specification. The KML rendering components were also upgraded to integrate with the new parser and to incorporate additional functionality. Features developed included the KML parser, a graphical user interface, callouts, and the Placemark Point Scene Element.
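A lightweight, reflection-free pass over KML can be written against the standard Java StAX pull parser. The sketch below illustrates the general approach (extracting Placemark names and coordinates from an inline sample document); it makes no claim to reproduce JVIEW's parser.

    import java.io.StringReader;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    // Sketch of a minimal KML pass using the StAX pull parser (illustrative only).
    public final class KmlSketch {
        public static void main(String[] args) throws Exception {
            String kml = "<kml><Document><Placemark><name>Rome, NY</name>"
                       + "<Point><coordinates>-75.45,43.21,0</coordinates></Point>"
                       + "</Placemark></Document></kml>";
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(kml));
            String element = "";
            while (r.hasNext()) {
                int event = r.next();
                if (event == XMLStreamConstants.START_ELEMENT) {
                    element = r.getLocalName();          // remember the enclosing tag
                } else if (event == XMLStreamConstants.CHARACTERS && !r.isWhiteSpace()) {
                    if (element.equals("name")) {
                        System.out.println("Placemark: " + r.getText());
                    } else if (element.equals("coordinates")) {
                        System.out.println("  lon,lat,alt = " + r.getText().trim());
                    }
                }
            }
        }
    }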


An example of an aerospace visualization is shown in Figure 8. Figure 8 shows a space example with the ability to place text callouts, represent data in Earth coordinates, and scale to any dimension, which complements methods of awareness-based space resource management [61]. Key needs for future space operations include the ability to visualize and control multi-INT sensors [62]. JVIEW is an available open-source code framework that affords future developments. The features included in JVIEW Version 1.6 are:

• DemoBrowser: A collection of sample applications used to demonstrate how to use JVIEW for building applications.
• ModelThumbnailViewer: An application for viewing 3D models.
• DTED Level 0: Terrain elevation data.
• NASA Blue Marble: Imagery.
• Geoname and political boundary database.
• TerraFirma: An application for viewing global imagery and terrain, similar to NASA World Wind and Google® Earth.

Figure 8: JVIEW Scene with KML Placemark Point Scene Elements [12].

Having described the benefits of JVIEW for information fusion visualization, next we show some examples for the National Airspace System.

3.2 National Airspace System Example

ACES is interested in flight physics, airspace configurations, airport layouts, weather and environment models, and scheduling for enhanced SAW of air operations. The captured data can be presented in many ways, and these are just current examples of the DFIG Level 5 fusion that supports interactive visualization [63]. One way to develop a UDOP application is to use dockable panels for each major component. Dockable panels provide a single-window user interface (UI) for the default application that can be rearranged to suit user needs, as shown in Figure 9. Selecting the parallel data flow, the user could configure the picture such that flights over a region are presented in the air traffic at that location.

As the UDOP concept provides multiple windows, panels, or displays of data, a combined presentation could afford multiple adaptable windows. One example is to show in geospace the air routes in the regional sectors, the flight takeoffs and landings at an airport, and heat or density maps, as shown in Figure 10. Note that this example provides scalability: the top portions allow for local presentation of discrete information, while the bottom is an aggregated global fusion presentation of continuous data (a sketch of the underlying binning appears after the figure captions). Since the various displays have not been tested with real subject matter experts, the options in the UDOP allow the user to view data from the services in many ways. A subject of future studies for information fusion and DDDAS is leveraging JVIEW in applications testing for Level 5 fusion issues with real operators.

Figure 9: JVIEW Docking Frames User Interface [12].

Figure 10: JVIEW Maps: (top left) geospace, (top right) vehicle labels, (bottom) density heat maps [12].
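The density heat maps in Figure 10 rest on a simple aggregation idea: bin discrete positions into a coarse geographic grid and color cells by count. A minimal Java sketch follows; the positions and grid resolution are made up for illustration.

    // Illustrative sketch of density-grid aggregation behind a heat map.
    public final class DensityGridSketch {
        public static void main(String[] args) {
            double[][] positions = {          // stand-in track positions {lat, lon}
                {43.2, -75.4}, {43.3, -75.5}, {43.2, -75.4}, {40.6, -73.8}
            };
            double cellDeg = 0.5;             // grid resolution in degrees
            java.util.Map<String, Integer> counts = new java.util.HashMap<>();
            for (double[] p : positions) {
                long row = (long) Math.floor(p[0] / cellDeg);   // latitude bin
                long col = (long) Math.floor(p[1] / cellDeg);   // longitude bin
                counts.merge(row + "," + col, 1, Integer::sum); // increment cell count
            }
            // Each cell count would drive the color scale of the rendered map.
            counts.forEach((cell, n) ->
                System.out.println("cell " + cell + " -> density " + n));
        }
    }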


To increase safety and determine the value of multi-INT information fusion concepts, weather modeling is important for safe flight operations. One example is the detection and visualization of atmospheric clouds [64]. Figure 11 illustrates abstracted weather challenges. In this case, only general areas of interest are plotted; when combined with the other data, they can identify flights that could enter weather patterns posing a potential threat, hazard, or difficulty. As another example, we highlight a combined UDOP presentation for SAW. Figure 12 shows an airspace picture that focuses on airspace overlays on a geospace representation (e.g., a Digital Terrain Elevation Data map), the flight regions, and the regional airport information. Using the variation of services, data could be added, removed, or adapted to help an air traffic controller or a remotely piloted vehicle operator route aircraft safely and efficiently.

Figure 11: Weather Plots.

Figure 12: Combined Airspace Situation Awareness.
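One way to flag flights against such abstracted weather regions is a simple geometric gate. The following Java sketch, which approximates a weather cell as a circle around a storm center and uses the haversine distance, is an assumption-laden illustration rather than the system's actual method; all coordinates and radii are invented.

    // Illustrative sketch: flag waypoints inside an abstracted weather region.
    public final class WeatherConflictSketch {
        static double distKm(double lat1, double lon1, double lat2, double lon2) {
            double r = 6371.0;                                 // mean Earth radius, km
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * r * Math.asin(Math.sqrt(a));            // haversine distance
        }

        public static void main(String[] args) {
            double stormLat = 43.0, stormLon = -75.0, stormRadiusKm = 80.0;
            double[][] waypoints = {{42.5, -75.2}, {43.1, -74.9}, {44.0, -74.0}};
            for (double[] wp : waypoints) {
                if (distKm(wp[0], wp[1], stormLat, stormLon) < stormRadiusKm) {
                    System.out.printf("Waypoint (%.1f, %.1f) enters weather region%n",
                                      wp[0], wp[1]);
                }
            }
        }
    }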

4. MULTI-INT FUSION VISUALIZATION WITH JVIEW

Multi-INT fusion can benefit from a UDOP [1]. Information fusion has typically proceeded with machine-only developments, but displays and metrics are needed for determining and measuring the effectiveness of information fusion system designs [65, 66, 67]. To enhance user acceptance of information fusion systems, the UDOP allows user access to the available data by whichever method they desire. Figure 13 shows the UDOP functions of the information, services, and application layers. When a user selects the INT data (such as IMINT), the services layer processes the information for easy viewing. The data information layer is easily accessible to the user, as shown in Figure 14, from which the interaction with the data can be manipulated through the hexagons.

Figure 13: UDOP Processes (application layer with snapshot viewer, template viewer, and authoring tools; services layer with repository, visualization adaptors, symbology handling, collaboration, and analysis support; and information layer).

Figure 14: JVIEW Data ingestion.

Using the JVIEW system, we ingested multi-INT data for visualization, including multiple-target results from group tracking [68, 69, 70, 71]. We utilized imagery analysis (such as video exploitation for tracking) as well as text extraction of relevant information. Together these are plotted in Figure 15, which is projected onto world coordinates. Figure 16 shows significant events that can serve a multi-INT operator through an effective display. The first gain is that the multimodal data is registered in space, which requires numerous Level 0 processes. The second feature is that the correlation of data is synchronized in time: the multi-INT fusion processes determine the relevant text messages that go with the track data (a minimal sketch of this gating step follows). For Level 2 situation assessment, the visualization affords association, which allows the user to interact with the data and nominate or correct good or poor associations from the machine. Level 5 fusion, "user refinement", is crucial, as the data is presented to the user and the visualization allows the data to be refined before verifying results for product generation. Finally, the multi-INT fusion can support indications and warnings of impending threats [72].
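The time and space gating behind this text-to-track correlation can be sketched as below. The records, thresholds, and data are hypothetical, chosen only to show the association step that the user then refines; they do not reflect the actual system parameters.

    import java.time.Duration;
    import java.time.Instant;

    // Illustrative sketch of time/space gating for text-to-track association.
    public final class MultiIntGateSketch {
        record Track(String id, Instant time, double lat, double lon) {}
        record Report(String text, Instant time, double lat, double lon) {}

        public static void main(String[] args) {
            Track track = new Track("T-17",
                    Instant.parse("2014-05-01T12:00:00Z"), 43.21, -75.45);
            Report report = new Report("white SUV observed heading north",
                    Instant.parse("2014-05-01T12:02:30Z"), 43.22, -75.44);

            boolean timeGate = Duration.between(track.time(), report.time()).abs()
                    .compareTo(Duration.ofMinutes(5)) <= 0;          // synchronized in time
            double degDist = Math.hypot(track.lat() - report.lat(),
                                        track.lon() - report.lon()); // coarse spatial gate
            boolean spaceGate = degDist < 0.05;                      // roughly 5 km here

            if (timeGate && spaceGate) {
                // Candidate association; under Level 5 user refinement the analyst
                // confirms or rejects it before product generation.
                System.out.println(report.text() + "  ->  track " + track.id());
            }
        }
    }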


Figure 15: JVIEW Geo-visualization.

Figure 16: Multi-INT fusion with JVIEW.

5. CONCLUSIONS

In this paper, we demonstrated the use of JVIEW as a visualization tool for big data analysis and Level 5 information fusion. Using the emerging concept of DDDAS, the scenarios and applications drive the modeling, measurements, and computational software for a UDOP analysis. We demonstrated various UDOP visualizations for two systems: (1) the National Airspace System, using the Airspace Concept Evaluation System (ACES), and (2) video and text fusion for simultaneous tracking and identification. These concepts require further analysis with real operators to determine an optimal visualization display and how users can be motivated to use multi-INT fusion products in their workflow. Future efforts include integration of platform displays in the aerospace picture, projection of full motion video (FMV) onto the ground visualization, and incorporation of symbology and courses of action for data requests from sensors and applications in difficult environments to support situation and threat assessment [73, 74].

Acknowledgements

This paper was a compilation of discussions with Chad F. Salisbury, Jason Moore, and Aaron McVay (AFRL/RISB), which are greatly appreciated. Parts of this work were sponsored by the Air Force Office of Scientific Research. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government.

REFERENCES

[1] Blasch, E. P., Bossé, E., Lambert, D. A., [High-Level Information Fusion Management and Systems Design], Artech House, Norwood, MA, (2012).
[2] Blasch, E., "Level 5 (User Refinement) issues supporting Information Fusion Management," Int. Conf. on Information Fusion, (2006).
[3] Blasch, E., "Enhanced Air Operations Using JView for an Air-Ground Fused Situation Awareness UDOP," AIAA/IEEE Digital Avionics Systems Conference, (2013).
[4] Bruna, O., Holub, J., Pačes, P., Levora, T., "Small Aircraft Emergency Landing Decision Support System – Pilots' Performance Assessment," IMEKO World Congress, (2012).
[5] Solano, M. A., Ekwaro-Osire, S., Tanik, M. M., "High-Level fusion for intelligence applications using Recombinant Cognition Synthesis," Information Fusion, (2012).
[6] Bowman, E. K., "Persistent ISR: the social network analysis connection," Proc. SPIE, Vol. 8389, (2012).
[7] Buchler, N., Maruisch, L. R., Sokoloff, S., "The Warfighter Associate: Decision-support software agent for the management of intelligence, surveillance, and reconnaissance (ISR) assets," Proc. SPIE, Vol. 9079, (2014).
[8] Blasch, E., Wang, Z., Shen, D., et al., "Surveillance of ground vehicles for airport security," Proc. SPIE, Vol. 9089, (2014).


[9] Salisbury, C. F., Farr, S. D., Moore, J. A., "Web-based simulation visualization using Java3D," ACM Winter Simulation Conference, (1999).
[10] DiLego, F. A. Jr., Hitchings, J., Salisbury, C. F., Simmons, H. X., Sterling, J., Cai, J., "Joint Airspace Management and Deconfliction (JASMAD)," AFRL-RI-RS-TR-2009-13, (2009).
[11] Jedrysik, P. A., Moore, J. A., Salisbury, C. F., Holmes, B., "Advanced Visualization and Interactive Displays (AVID)," AFRL-RI-RS-TR-2009, (2009).
[12] McVay, A., Krisher, D., Fisher, P., "JView Visualization for Virtual Airspace Modeling and Simulation," AFRL-RI-RS-TR-2009-79, April (2009).
[13] Krisher, D., McVay, A., Fisher, P., "JVIEW Visualization for Next Generation Air Transportation System," AFRL-RI-RS-TR-2011-003, Jan. (2011).
[14] Blasch, E. P., Lambert, D. A., Valin, P., Kokar, M. M., Llinas, J., Das, S., Chong, C-Y., Shahbazian, E., "High Level Information Fusion (HLIF) Survey of Models, Issues, and Grand Challenges," IEEE Aerospace and Electronic Systems Magazine, Vol. 27, No. 9, Sept. (2012).
[15] Mendoza-Schrock, O., Patrick, J. A., et al., "Video Image Registration Evaluation for a Layered Sensing Environment," Proc. IEEE Nat. Aerospace Electronics Conf. (NAECON), (2009).
[16] Blasch, E., [Derivation of a Belief Filter for Simultaneous High Range Resolution Radar Tracking and Identification], Ph.D. Thesis, Wright State University, (1999).
[17] Visentini, I., Snidaro, L., "Integration of contextual information for tracking refinement," Int'l Conf. on Information Fusion, (2011).
[18] Besada, J., Soto, A., de Miguel, G., Garcia, J., Voet, E., "ATC trajectory reconstruction for automated evaluation of sensor and tracker performance," IEEE Aerospace and Electronic Systems Magazine, Vol. 28, No. 2, Feb. (2013).
[19] Blasch, E., Kadar, I., Hintz, K., Biermann, J., Chong, C., Das, S., "Resource Management Coordination with Level 2/3 Fusion Issues and Challenges," IEEE Aerospace and Electronic Systems Magazine, Vol. 23, No. 3, pp. 32-46, Mar. (2008).
[20] Blasch, E., Kadar, I., Salerno, J., Kokar, M. M., Das, S., Powell, G. M., Corkill, D. D., Ruspini, E. H., "Issues and Challenges in Situation Assessment (Level 2 Fusion)," J. of Advances in Information Fusion, Vol. 1, No. 2, pp. 122-139, Dec. (2006).
[21] Blasch, E. P., Hanselman, P., "Information Fusion for Information Superiority," IEEE Nat'l Aerospace and Electronics Conf., (2000).
[22] Blasch, E., "Introduction to Level 5 Fusion: the Role of the User," Chapter 19 in Handbook of Multisensor Data Fusion, 2nd Ed., Eds. M. E. Liggins, D. Hall, and J. Llinas, CRC Press, (2008).
[23] Blasch, E., Steinberg, A., Das, S., Llinas, J., Chong, C.-Y., Kessler, O., Waltz, E., White, F., "Revisiting the JDL model for Information Exploitation," Int'l Conf. on Information Fusion, (2013).
[24] Baumgart, L. A., Bass, E. J., Philips, B., Kloesel, K., "Emergency management decision making during severe weather," Weather and Forecasting, Vol. 23, No. 6, pp. 1268-1279, (2008).
[25] Hunter, G., Boisvert, B., Smith, J., Marien, T., "NAS-wide traffic flow management concept using required time of arrival, separation assurance and weather routing," IEEE/AIAA Digital Avionics Systems Conf., (2012).
[26] Kadar, I., Kumar, S., Mehra, R., Lossau, K., Natarajan, P., Das, S., Blasch, E., "Real World Issues and Challenges in Big Data Processing with Applications to Information Fusion," Proc. SPIE, Vol. 8745, (2013).
[27] Lafond, D., DuCharme, M. B., Gagnon, J.-F., Tremblay, S., "Support Requirements for Cognitive Readiness in Complex Operations," J. of Cognitive Engineering and Decision Making, (2013).
[28] Blasch, E., "Learning Attributes For Situational Awareness in the Landing of an Autonomous Aircraft," IEEE/AIAA Digital Avionics Systems Conference, (1997).
[29] Molina, J. M., García, J., Besada, J. A., Casar, J. R., "Design of an A-SMGCS prototype at Barajas airport: available information and architecture," Int'l Conf. on Information Fusion, (2005).
[30] Salerno, J., Blasch, E., Hinman, M., Boulware, D., "Evaluating algorithmic techniques in supporting situation awareness," Proc. SPIE, Vol. 5813, (2005).
[31] Chen, G., Shen, D., et al., "Game Theoretic Approach to Threat Prediction and Situation Awareness," J. of Advances in Information Fusion, Vol. 2, No. 1, pp. 1-14, June (2007).
[32] Blasch, E., Schmitz, J., "Uncertainty Issues in Data Fusion," Nat. Symp. on Sensor and Data Fusion, (2002).
[33] Mitsch, S., Mueller, A., Retschitzegger, W., Salfinger, A., Schwinger, W., "A Survey of Clustering Techniques for Situation Awareness," Int'l Workshop on Management of Spatial Temporal Data, (2013).
[34] Solano, M. A., Jernigan, G., "Enterprise data architecture principles for High-Level Multi-Int fusion: A pragmatic guide for implementing a heterogeneous data exploitation framework," Int'l Conf. on Information Fusion, (2012).
[35] Blasch, E., Kessler, O., Morrison, J., Tangney, J. F., White, F. E., "Information Fusion Management and Enterprise Processing," IEEE National Aerospace and Electronics Conf. (NAECON), (2012).
[36] Blasch, E., Lambert, D. A., Valin, P., et al., "High Level Information Fusion (HLIF) Survey of Models, Issues, and Grand Challenges," IEEE Aerospace and Electronic Systems Magazine, Vol. 27, No. 9, Sept. (2012).
[37] Riveiro, M., Falkman, G., Ziemke, T., "Improving maritime anomaly detection and situation awareness through interactive visualization," Int'l Conf. on Information Fusion, (2008).
[38] Kolbeinsson, A., [Visualising uncertainty in aircraft cockpits: Is icon degradation an appropriate visualisation form], M.S. Thesis, University of Skövde, (2013).
[39] Helldin, T., [Transparency for Future Semi-Automated Systems: Effects of transparency on operator performance, workload, and trust], Doctoral Dissertation, Örebro University, (2014).


[40] Darema, F., "Dynamic Data Driven Applications Systems: A New Paradigm for Application Simulations and Measurements," Computational Science – ICCS, Lecture Notes in Computer Science, Vol. 3038, pp. 662-669, (2004).
[41] Darema, F., "Grid Computing and Beyond: The Context of Dynamic Data Driven Applications Systems," Proc. of the IEEE, Vol. 93, No. 3, March (2005).
[42] Blasch, E., Seetharaman, G., Reinhardt, K., "Dynamic Data Driven Applications System concept for Information Fusion," International Conference on Computational Science, Procedia Computer Science, Vol. 18, pp. 1999-2007, (2013).
[43] http://www.aviationsystemsdivision.arc.nasa.gov/research/modeling/aces.shtml, (2009).
[44] George, S., Wieland, F., "Build 8 of the Airspace Concept Evaluation System (ACES)," Integrated Communications, Navigation and Surveillance Conf., (2011).
[45] Levora, T., Bruna, O., Pačes, P., "Surface Recognition for Emergency Landing Purposes," Int'l Astronautical Congress, (2012).
[46] Vargo, E. P., Bass, E. J., Cogill, R., "Belief Propagation for Large-Variable-Domain Optimization on Factor Graphs: An Application to Decentralized Weather-Radar Coordination," IEEE Trans. on Systems, Man, and Cybernetics–A, Vol. 43, No. 2, pp. 460-466, (2013).
[47] Kuhn, K., "Analysis of Thunderstorm Effects on Aggregated Aircraft Trajectories," AIAA J. of Aerospace Computing, Information, and Communication, Vol. 5, April (2008).
[48] Julier, S. J., Uhlmann, J. K., Walters, J., Mittu, R., Palaniappan, K., "The challenge of scalable and distributed fusion of disparate sources of information," Proc. SPIE, Vol. 6242, (2006).
[49] Blasch, E., Seetharaman, G., Suddarth, S., Palaniappan, K., Chen, G., Ling, H., Basharat, A., "Summary of Methods in Wide-Area Motion Imagery (WAMI)," Proc. SPIE, Vol. 9089, (2014).
[50] Levchuck, G., "Detecting coordinated activities from persistent surveillance," SPIE Newsroom, 5 June (2013).
[51] Gao, J., Ling, H., Blasch, E., Pham, K., Wang, Z., Chen, G., "Context-aware tracking with wide-area motion imagery," SPIE Newsroom, (2013).
[52] Blasch, E., Nagy, J., Aved, A., Pottenger, W. M., Schneider, M., Hammoud, R., Jones, E. K., Basharat, A., Hoogs, A., Chen, G., Shen, D., Ling, H., "Context aided Video-to-Text Information Fusion," Int'l Conf. on Information Fusion, (2014).
[53] Hammoud, R. I., Sahin, C. S., Blasch, E. P., Rhodes, B. J., "Multi-Source Multi-Modal Activity Recognition in Aerial Video Surveillance," IEEE International Computer Vision and Pattern Recognition Conference, (2014).
[54] Wu, Y., Wang, J., Cheng, J., Lu, H., Blasch, E., Bai, L., Ling, H., "Real-Time Probabilistic Covariance Tracking with Efficient Model Update," IEEE Trans. on Image Processing, Vol. 21, No. 5, pp. 2824-2837, (2012).
[55] Mei, X., Ling, H., Wu, Y., Blasch, E., Bai, L., "Efficient Minimum Error Bounded Particle Resampling L1 Tracker with Occlusion Detection," IEEE Trans. on Image Processing (T-IP), (2013).
[56] Costa, P. C. G., Laskey, K. B., Blasch, E., Jousselme, A.-L., "Towards Unbiased Evaluation of Uncertainty Reasoning: The URREF Ontology," Int. Conf. on Information Fusion, (2012).
[57] Blasch, E., Laskey, K. B., Jousselme, A.-L., Dragos, V., Costa, P. C. G., Dezert, J., "URREF Reliability versus Credibility in Information Fusion (STANAG 2511)," Int'l Conf. on Information Fusion, (2013).
[58] Blasch, E., Costa, P. C. G., Laskey, K. B., Ling, H., Chen, G., "The URREF Ontology for Semantic Wide Area Motion Imagery Exploitation," Semantic Technologies for Intelligence, Defense, and Security (STIDS), pp. 88-95, October (2012).
[59] Liu, B., Blasch, E., Chen, Y., Aved, A. J., Hadiks, A., Shen, D., Chen, G., "Information Fusion in a Cloud Computing Era: A Systems-Level Perspective," accepted for IEEE Aerospace and Electronic Systems Magazine, (2014).
[60] Blasch, E. P., "Assembling a distributed fused Information-based Human-Computer Cognitive Decision Making Tool," IEEE Aerospace and Electronic Systems Magazine, Vol. 15, No. 5, pp. 11-17, May (2000).
[61] Chen, G., Chen, H., et al., "Awareness Based Game theoretic Space Resource Management," Proc. SPIE, Vol. 7330, (2009).
[62] Oliva, R., Blasch, E., Ogan, R., "Applying Aerospace Technologies to Current Issues Using Systems Engineering, 3rd AESS Chapter Summit," IEEE Aerospace and Electronic Systems Magazine, Vol. 28, No. 2, Feb. (2013).
[63] Blasch, E., Plano, S., "DFIG Level 5 (User Refinement) issues supporting Situational Assessment Reasoning," Int. Conf. on Information Fusion, (2005).
[64] Shen, D., et al., "A Holistic Image Segmentation Framework for Cloud Detection and Extraction," Proc. SPIE, Vol. 8739, (2013).
[65] Blasch, E., Pribilski, M., Daughtery, B., Roscoe, B., Gunsett, J., "Fusion Metrics for Dynamic Situation Analysis," Proc. SPIE, Vol. 5429, (2004).
[66] Blasch, E. P., Valin, P., Bossé, E., "Measures of Effectiveness for High-Level Fusion," Int'l Conf. on Information Fusion, (2010).
[67] Blasch, E., et al., "Information Fusion Measures of Effectiveness (MOE) for Decision Support," Proc. SPIE, Vol. 8050, (2011).
[68] Connare, T., Blasch, E., Schmitz, J., et al., "Group IMM tracking utilizing Track and Identification Fusion," Proc. of the Workshop on Estimation, Tracking, and Fusion: A Tribute to Yaakov Bar-Shalom, Monterey, CA, pp. 205-220, May (2001).
[69] Blasch, E., Plano, S., "JDL Level 5 Fusion model 'user refinement' issues and applications in group tracking," Proc. SPIE, Vol. 4729, (2002).
[70] Blasch, E., Kahler, B., "Multiresolution EO/IR Tracking and Identification," Int. Conf. on Information Fusion, (2005).
[71] Yang, C., et al., "Fusion of Tracks with Road Constraints," J. of Advances in Information Fusion, Vol. 3, No. 1, pp. 14-32, (2008).
[72] Blasch, E., "Situation, Impact, and User Refinement," Proc. SPIE, Vol. 5096, (2003).
[73] Blasch, E., "Sensor, User, Mission (SUM) Resource Management and their interaction with Level 2/3 fusion," Int. Conf. on Information Fusion, (2006).
[74] Blasch, E., Kadar, I., Hintz, K., Biermann, J., Chong, C., Das, S., "Resource Management Coordination with Level 2/3 Fusion Issues and Challenges," IEEE Aerospace and Electronic Systems Magazine, Vol. 23, No. 3, pp. 32-46, Mar. (2008).
