Visualization of Time-Dependent Data using Feature Tracking and Event Detection

Freek Reinders (1), Frits H. Post (1), and Hans J.W. Spoelder (2)

1. Dept. of Computer Science, Delft University of Technology, Zuidplantsoen 4, 2600 AJ Delft, The Netherlands; e-mail: (k.f.j.reinders, f.h.post)@cs.tudelft.nl
2. Div. of Physics and Astronomy, Vrije Universiteit, De Boelelaan 1081, 1081 HV Amsterdam, The Netherlands; e-mail: [email protected]

Abstract

This paper presents an innovative method to analyze and visualize time-dependent evolutions of features. The analysis and visualization of time-dependent data is complicated because of the immense amount of data involved. However, if the scientist's main interest is the evolution of certain features, it suffices to show the evolution of these features. The task of the visualization method is to extract the features from all frames, to determine the correspondences between features in successive frames, to detect significant events or stages in the evolution of the features, and finally to visualize the results. The method described here performs all these steps, and is applied to a number of applications.

Keywords: Visualization, Time-Dependent Data, Feature Tracking, Event Detection.

1 Introduction

Numerical simulations are increasingly focusing on the investigation of time-dependent phenomena. In general this results in huge amounts of data that are hard to process, interpret, and visualize. Off-line generation of images (using a standard visualization technique) and creation of a playback animation can give an overview of the evolution of the data [4], but this approach has a number of deficiencies. First, the extraction of features is left to human perception. Often the scientist is only interested in the evolution of apparent coherent structures (features), and it suffices to show the evolution of these features. Second, the evolving phenomena are not described quantitatively. Once a moving feature is detected, one would like to describe its evolution: how fast is it moving, how long does it exist, does it interact with other features, or are there other special events? Last, interactive exploration of the data is not possible. The user may want to explore the evolution of one particular feature, or view the data from a different angle. In order to overcome these flaws we aim at an automatic procedure to track features and to detect particular stages or events in the evolution of these features, as was suggested by [15]. This is achieved by the following four steps:

1. Feature Extraction. Extract the features in the data for each time-step (frame) and describe the characteristics of the features by calculating a number of attributes.

2. Feature Tracking. Track features by solving the frame-to-frame correspondence problem between features, using the attributes calculated in the first step.

3. Event Detection. Detect certain events like unusual changes, particular stages in the evolution of a feature, or specific interactions between features.

4. Visualization. Visualize the evolution of features interactively in a player, show the relations between features in successive frames, and highlight particular events.

Thus, the time series as a whole can be described. For each feature a description can be made of its lifespan, describing its origin, motion, growth, interaction with other features, etc. This allows us to select one feature and visualize its evolution, or to show the occurrence of a particular event. The tracking process results in entirely new ways of visualizing evolving phenomena.

This paper describes a visualization method that performs the four steps mentioned above. In [11] we presented intermediate results: the tracking of continuations. Here we present an overview of that technique plus a number of additions, such as the explicit detection of events and a new, enhanced visualization based on multiple linked views. A number of the events described here are new, and the multiple linked visualization of time-dependent phenomena is also new.

The paper is organized as follows. Each of the four steps is described in a section: 2, 3, 4, and 5 respectively. Then, section 6 shows a number of applications, and finally we present the conclusions and discussion, and suggestions for future research.
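As a rough illustration of how the four steps chain together, consider the minimal driver loop below. This is only a sketch: the stage functions (extract_features, track_features, detect_events, visualize) are hypothetical placeholders for the components described in sections 2 through 5, not an actual API of the system.

```python
# Hypothetical driver for the four-step method; the stage functions
# are placeholders for the components described in sections 2-5.

def process_time_series(frames, tolerances, weights):
    # 1. Feature Extraction: one set of attribute sets per frame.
    feature_data = [extract_features(frame) for frame in frames]

    # 2. Feature Tracking: solve the frame-to-frame correspondence
    #    problem using the attributes calculated in step 1.
    paths = track_features(feature_data, tolerances, weights)

    # 3. Event Detection: births/deaths, entries/exits, splits/merges.
    events = detect_events(paths, feature_data)

    # 4. Visualization: event graph viewer plus 3D feature viewer.
    visualize(paths, events)
```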

2 Feature Extraction

The first step is the extraction of features in each frame. Feature extraction is a set of techniques in scientific visualization aiming at algorithmic, automated extraction of relevant features from data sets. Since an 'interesting feature' is different for each application, many application-specific feature extraction techniques exist. Examples are the extraction of critical points [6], vortices [3], and shock waves [8]. More general approaches include region growing [1], deformable surfaces [14], and selective visualization [20].

The tracking algorithm described here works on features that are 3D amorphous 'objects' of a size (much) smaller than the data set domain, but larger than a grid cell. The two essential attributes of such features are position and size, but other attributes may be used as well. In principle, these can be obtained with any feature extraction method, but in this work we have used a method based on selective visualization [20]. This method is summarized by the pipeline model shown in Figure 1, and consists of the following stages: selection, clustering, attribute calculation, and iconic mapping. 'Selection' identifies all grid nodes where the data satisfies a certain selection criterion. It can also be replaced by any other segmentation technique that results in selected nodes. 'Clustering' clusters connected nodes into coherent regions of interest. 'Attribute calculation' determines a number of quantitative attributes for each cluster. This is an important stage in the feature extraction pipeline, since it determines what characteristics of the feature are stored. Any information about the feature that might be of interest may be added as an attribute set. Finally, 'iconic mapping' maps the calculated attributes to a parametric iconic geometry which can be displayed. The parameters of the icons are directly related to the attribute values of the feature. This process of feature extraction is controlled by the scientist in the sense that his knowledge of the data and his conceptual model of an interesting feature are translated into the selection expression, the connectivity criteria, the calculation method, and the mapping function.
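The selection and clustering stages can be illustrated with a simple threshold-plus-connected-components pass. The sketch below is an assumption-laden illustration: it uses SciPy's connected-component labeling as a stand-in for the pipeline's connectivity criteria and a plain threshold as the selection expression; it is not the system's actual implementation.

```python
import numpy as np
from scipy import ndimage

def select_and_cluster(field, threshold, min_nodes=10):
    """Selection: mark all grid nodes satisfying the selection criterion;
    clustering: group connected selected nodes into regions of interest."""
    selected = field >= threshold                 # selection expression
    labels, n_clusters = ndimage.label(selected)  # connectivity criterion
    clusters = []
    for k in range(1, n_clusters + 1):
        nodes = np.argwhere(labels == k)          # (N, 3) node indices
        if len(nodes) >= min_nodes:               # ignore very small clusters
            clusters.append(nodes)
    return clusters
```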

[Figure 1 shows the feature extraction pipeline: 'Data Generation' produces Raw Data; 'Selection' (driven by a selection expression) yields Selected Nodes; 'Clustering' (driven by connectivity criteria) yields Regions of Interest; 'Attribute Calculation' (driven by a calculation method) yields Attribute Sets; 'Iconic Mapping' (driven by a mapping function) yields Icons for display. All stage parameters are set by the scientist's knowledge and conceptual model.]

Figure 1: The feature extraction pipeline.

2.1 Attribute Calculation

Attribute calculation is a crucial step in the feature extraction pipeline. The attributes are calculated from a cluster of selected grid points indicating the feature. There are many methods to calculate attributes, which also depend on the feature extraction technique used. Each method results in an attribute set describing certain characteristics of a feature. Below we describe a number of commonly used attribute calculation methods.

Volume integrals are generic techniques for the calculation of attributes [20]. They allow the calculation of many different aggregate attributes, such as center point position, volume, mass, and average data values. Basically, the volume integral can be approximated by a sum over the selected nodes of a cluster, with each node contributing to the result. Since the volume integral uses a multitude of nodes, the derived attributes are stable with respect to noise and small changes.

A good way to describe the shape of a 3D amorphous object is to calculate the best fitting ellipsoid. Ellipsoidal fitting provides a first order approximation of the geometric shape of an object [5, 19]. The number of degrees of freedom is nine: three for the center point position, three for the axis lengths, and three for the orientations of the ellipsoid axes. The ellipsoid gives a good indication of position, size, and orientation, and therefore provides a good description of the global geometry of the object. In [19] the ellipsoid attributes were used to describe the temporal evolution of features.

If features have a more complicated shape, such as a strongly curved tube, they may be better described by a higher order shape descriptor like the skeleton. In [10] we described a method to determine the skeleton of an object and to reduce the skeleton points to a small set of skeleton attributes.
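As an illustration of these attribute calculation methods, the sketch below approximates the volume integrals by sums over the cluster nodes and derives ellipsoid axes from an eigen-decomposition of the node covariance. This is one common way to realize such a fit, not necessarily the exact procedure of [5, 19]; the uniform cell volume is an assumption.

```python
import numpy as np

def cluster_attributes(nodes, cell_volume=1.0):
    """Aggregate attributes for one cluster; nodes is an (N, 3) array
    of selected grid node positions."""
    center = nodes.mean(axis=0)            # volume integral of position
    volume = len(nodes) * cell_volume      # sum over the selected nodes
    # First-order shape descriptor: the eigenvectors of the covariance
    # give the ellipsoid orientation, the eigenvalues its axis lengths.
    eigvals, eigvecs = np.linalg.eigh(np.cov(nodes.T))
    axes = 2.0 * np.sqrt(np.maximum(eigvals, 0.0))  # scaling is a convention
    return {"center": center, "volume": volume,
            "axes": axes, "orientation": eigvecs}
```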

2.2 Feature Data

The collection of all attribute sets from all features in all frames is called feature data. It is the result of the feature extraction step. Feature data describes for each frame a number of features, each quantitatively defined by a set of attributes, that can be visualized by an iconic representation. The transformation from grid data to feature data has a number of advantages:

Higher level of abstraction. The feature data lifts the original grid data to a higher level of abstraction: data describing the features, in contrast to data containing interesting features.

Data reduction. It signifies a data reduction in the order of 1000. This provides an excellent opportunity for distributed processing: the computationally intensive first part, numerical simulation and feature extraction, can be executed on a remote high performance computer, while the results, the feature data, can easily be transferred to the visualization workstation by a low-bandwidth link.

Quantitative description. The attributes provide a quantitative description of the features. Each attribute can be used to compare features and can be plotted as a function of time.

Iconic representation. The features can be displayed by parametric icons whose parameters are directly related to the attributes of that feature. A fast display is guaranteed since the icons are relatively simple geometric objects.

We developed a framework for the storage and manipulation of feature data. With this framework all types of features can be created, depending on the type of application. The user is not restricted to one particular set of attributes, and can add new types easily. The possibility of manipulating the features as individual entities has important consequences: the features can be quantitatively compared, and, for tracking purposes, the grid data is not needed.

However, the feature data is a drastically reduced model of the original grid data. Therefore, it is necessary to verify the accuracy of the calculated attributes before they can be used for tracking. In [12] we showed that the determination of position and size is very accurate and robust even for noisy data. Thus, the feature data is reliable and can be used for time tracking.
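A feature-data store of this kind can be modeled as a list of frames, each holding the attribute sets of its features. The layout below is only a sketch of such a framework; the field names are illustrative and do not reflect the actual file format.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Feature:
    """One feature, quantitatively defined by its attribute sets;
    further attribute set types (e.g. skeletons) can be added."""
    center: np.ndarray                    # position (ellipsoid fit)
    volume: float                         # aggregate size attribute
    axes: np.ndarray | None = None        # ellipsoid axis lengths
    orientation: np.ndarray | None = None

@dataclass
class FeatureData:
    frames: list[list[Feature]] = field(default_factory=list)  # per time-step
```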

3 Feature Tracking

After the feature extraction step has been performed, a correspondence problem remains. The features in one frame are not related to the features in the next frame. It is not yet known which features are the same object at different instances of time. Thus, there is no description of motion and evolution of objects.

3.1 Related Work

This correspondence problem is an issue in several scientific disciplines such as image processing, computer vision, and scientific visualization. Although each discipline has its own view of the problem, many approaches are analogous to each other. Roughly, the methods can be classified in two categories: 1) pixel-based methods, and 2) feature-based methods.

In pixel-based methods, the displacement is determined on a pixel-to-pixel basis. Well known methods are based on cross-correlation [13] or optical flow [2]. However, these approaches are computationally too expensive for 3D field data. For this type of data the feature-based methods are much better suited.

In feature-based methods, first a segmentation takes place, followed by matching of the obtained regions of interest (or features). The matching of features can be achieved by two mechanisms:

Region correspondence. The segmented regions are matched. Examples are matching based on overlap [17], and the minimization of an affine transformation matrix [7]. These methods still need the (segmented) grid data, and therefore are memory consuming and computationally expensive.

Attribute correspondence. A number of derived attributes are matched. For each feature a number of attributes, like position and size, are determined, and these are then matched. Examples are the tracking of markers [16] using path coherence and smoothness of motion, and the tracking of feature evolution [15] where certain events are detected.

Obviously, for our tracking purposes we use the mechanism of attribute correspondence. Below, we discuss the two given examples in more detail.

Sethi et al [16] solved the correspondence problem by establishing trajectories in the property space (i.e. attribute space). The total property coherence over all trajectories is maximized using two algorithms, based on greedy exchange and simulated annealing. The property coherence exhibited by three successive mappings of a feature is given by

F(a, b, c) = w_1 \left(1 - \frac{\overline{ab} \cdot \overline{bc}}{\|\overline{ab}\| \, \|\overline{bc}\|}\right) + w_2 \left(1 - 2\,\frac{\sqrt{\|\overline{ab}\| \, \|\overline{bc}\|}}{\|\overline{ab}\| + \|\overline{bc}\|}\right)    (1)

where a, b, and c represent the three feature mappings in a k-dimensional property space. The first term on the right hand side gives a measure of the change in the direction of the movement (directional coherence), and the second term provides a measure of the speed of the movement (speed coherence). Both measures are combined through suitable weights w_1 and w_2. Also, the property coherence measure is provided with suitable scaling terms for each axis of the property space:

\overline{ab} \cdot \overline{bc} = \sum_{i=1}^{k} s_i (r_{bi} - r_{ai})(r_{ci} - r_{bi}), \quad \|\overline{ab}\| = \sqrt{\sum_{i=1}^{k} s_i (r_{bi} - r_{ai})^2}, \quad \|\overline{bc}\| = \sqrt{\sum_{i=1}^{k} s_i (r_{ci} - r_{bi})^2}    (2)

where s_i is the scaling factor for the i-th axis of the property space, with \sum s_i = 1.0, and r_{ai} is the i-th property value of feature a.

Samtaney et al [15] introduced the following evolutionary events: continuation, creation, dissipation, bifurcation, and amalgamation. First, all continuations are determined by testing each object in frame i against the closest (in position) object in frame i+1. Then, combinations of the remaining unmatched objects are tested for bifurcation and amalgamation. Finally, all objects that could not be matched in one direction are classified as either dissipated or created. Thus, features are linked on a two-frame basis, resulting in the paths and evolutions of individual objects. The criterion for correspondence between two features is that the difference between the attributes must fall within a certain tolerance. For continuation the criterion is:

|attr(O_B^{i+1}) - attr(O_A^i)| \leq T_{attr}    (3)

with O_B^{i+1} the object in frame i+1, O_A^i the object in frame i, and T_{attr} the tolerance value for the attribute. For bifurcation the correspondence criterion is:

|attr(S_{b \in N}^{i+1}) - attr(O_A^i)| \leq T_{attr}    (4)

with S_{b \in N}^{i+1} the sum of the combination of objects in neighborhood N. The criteria for bifurcation may be a little different from the continuation criteria, depending on the type of attribute: e.g. the continuation test for the position attribute uses the distance between the two objects, while the bifurcation test weights the positions with the volume or mass of the features.

The method of Sethi et al results in smooth trajectories corresponding to the preference of the human visual system for continuity of motion. However, it uses the attributes as individual parameters in a k-dimensional property space, where attributes are not related to each other and have no additional meaning. Also, the method cannot detect events. The method of Samtaney et al does treat the attributes as variables with a denotation, and does detect particular events. However, the method lacks a sense of continuity of motion. This is obviated by the requirement that the time between frames is small, i.e. the sampling frequency is large enough to capture object motion and evolution. We do not want to impose that requirement, because we want to be able to track sparsely sampled time series. Also, the events 'creation' and 'dissipation' are not explicitly detected; they are merely the result of the inability to find a positive match. We tried to combine the strong points of both methods, and created a method that tracks paths based on continuity of motion and that is able to detect particular events.
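For reference, the coherence measure of equations 1 and 2 transcribes almost directly into code. The sketch below is a plain reading of the formulas, with small guards added against zero-length displacement vectors (an assumption not spelled out in [16]).

```python
import numpy as np

def property_coherence(a, b, c, s, w1=0.5, w2=0.5, eps=1e-12):
    """Coherence of three successive feature mappings a, b, c in a
    k-dimensional property space with axis scalings s (sum(s) == 1)."""
    d1, d2 = b - a, c - b
    dot = np.sum(s * d1 * d2)                    # scaled inner product (eq. 2)
    n1 = np.sqrt(np.sum(s * d1 * d1))            # ||ab||
    n2 = np.sqrt(np.sum(s * d2 * d2))            # ||bc||
    directional = 1.0 - dot / max(n1 * n2, eps)  # direction coherence
    speed = 1.0 - 2.0 * np.sqrt(n1 * n2) / max(n1 + n2, eps)
    return w1 * directional + w2 * speed         # eq. 1
```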

3.2 Attribute Correspondence

The first step in feature tracking and event detection is to create a measure for the correspondence between two features (O_1 and O_2). This correspondence must be established based on the attribute sets in each feature. In order to make the method generic, each attribute set is linked to a number of correspondence criteria that may be tested. The criteria depend on the information available in that attribute set; an attribute set containing an ellipsoid fit will therefore invoke other correspondence criteria than a skeleton attribute set. For example, the attribute set of an ellipsoid fitting contains information about position, size, and orientation, so the correspondence criteria associated with the ellipsoid are related to these three characteristics (see Table 1). The criteria do not have to be directly related to the attributes, but may be criteria on aggregate attributes. For instance the volume of the ellipsoid is an aggregate attribute: it is derived from the three axis lengths (V = \frac{4}{3}\pi r_1 r_2 r_3). Thus, for each type of attribute set, criteria are created reflecting the information in the attributes.

Position:     \|\vec{P}_2 - \vec{P}_1\| \leq T_{pos}    (5)

Volume:       \frac{|V_2 - V_1|}{\max(V_1, V_2)} \leq T_{vol}    (6)

Orientation:  \left|1 - \frac{\vec{R}_2}{\|\vec{R}_2\|} \cdot \frac{\vec{R}_1}{\|\vec{R}_1\|}\right| \leq T_{angle}    (7)

Table 1: Correspondence criteria associated with the ellipsoid fitting attribute set; \vec{P} = position, V = volume, and \vec{R} = the vector of the main axis.

Basically, the criteria have the following format:

func(O_1, O_2) \leq T_{func}    (8)

where func(O_1, O_2) is some correspondence function between the attribute sets in object 1 and those in object 2, and T_{func} is the function tolerance specifying the allowed deviation. This is similar to Samtaney et al; however, our approach is more powerful: they always use the same attributes, while in our case the user decides which types of attribute sets are included in the feature data, depending on the type of application. The correspondence criteria related to the attributes pop up automatically. Several types of attribute sets are implemented, and new types may be added easily.

Each correspondence criterion is translated into a correspondence function, as shown by equation 9. It has the following properties: C_func(O_1, O_2) = 1 when the two objects are exactly the same, C_func(O_1, O_2) = 0 when the deviation is on the limit of the tolerance, and C_func(O_1, O_2) < 0 when the deviation is larger than the tolerance:

C_{func}(O_1, O_2) = 1 - \frac{func(O_1, O_2)}{T_{func}} = \begin{cases} 1 & \text{exact match} \\ 0 & \text{limit of tolerance} \\ < 0 & \text{tolerance exceeded} \end{cases}    (9)

After testing each correspondence function, the results are combined into one correspondence factor Corr(O_1, O_2):

Corr(O_1, O_2) = \frac{\sum_{i=1}^{N_{func}} C_i(O_1, O_2) \, W_i}{\sum_{i=1}^{N_{func}} W_i}    (10)

where W_i is the weight assigned to correspondence criterion C_i. The user assigns weights to each function (W_func = 0 means no evaluation at all) depending on the relevance of the criterion. When two features are tested for correspondence, all correspondence functions with a positive weight are evaluated and the correspondence factor is returned. The correspondence factor has characteristics similar to equation 9, i.e. -\infty \leq Corr(O_1, O_2) \leq 1.0.

Thus, two features can be corresponded by assigning suitable weights and tolerances for each possible correspondence criterion. The resulting correspondence factor is a measure for the correspondence between these two features. A positive match between two features is found when the correspondence factor is positive.
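Equations 9 and 10 amount to a weighted combination of per-criterion tests. A minimal sketch, assuming each criterion is supplied as a (function, tolerance, weight) triple where the function returns the non-negative deviation between two attribute sets:

```python
def correspondence_factor(o1, o2, criteria):
    """criteria: list of (func, tolerance, weight) triples.
    Returns Corr(o1, o2) in (-inf, 1]; positive means a match."""
    total, weight_sum = 0.0, 0.0
    for func, tol, w in criteria:
        if w == 0.0:
            continue                      # W = 0 means: do not evaluate
        c = 1.0 - func(o1, o2) / tol      # eq. 9: 1 = exact, 0 = at limit
        total += w * c
        weight_sum += w
    return total / weight_sum if weight_sum > 0.0 else 0.0  # eq. 10
```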

3.3 Tracking Continuations

3.3.1 Prediction-Verification

In [11] we introduced a way of finding consistent paths by tracking the feature attributes using a prediction-verification method. The tracking algorithm is based on the basic assumption that features evolve predictably. Using linear behavioral rules, we predict the attributes of each object in the next frame. This prediction is used to find corresponding features in the next frame: all candidates (unmatched features) in that frame are tested for correspondence with the prediction. An example is given in figure 2. The top of the figure shows a number of matched features forming the path of an object (red objects), the prediction at the end of the path (transparent object), and a number of candidate features (blue objects). Clearly, one candidate feature corresponds very well to the prediction and should be added to the path.

Figure 2: A visualization of a path (red objects), the prediction (transparent object), and a number of candidate features (blue objects).

For finding continuations, a prediction can be made by a linear extrapolation using the last two features in the path of an object:

P_{i+1} = O_i + \frac{t_{i+1} - t_i}{t_i - t_{i-1}} \, (O_i - O_{i-1})    (11)

where O_i is the object in frame i, t_i is the time of frame i, and P_{i+1} is the prediction for the next frame. This equation becomes trivial when the frames are equidistant in time. A match is found when the correspondence factor between the prediction and a candidate is positive:

Corr(O_{i+1}, P_{i+1}) \geq 0    (12)

It is also possible to use higher order prediction schemes, but the current applications do not demand this, and the gains do not counterbalance the higher computational costs. Notice that this prediction scheme also works in the negative time-direction. Hence, we can search for continuing paths in two directions: in the forward and in the backward time-direction. This may result in new solutions, because forward and backward predictions are different.

3.3.2 Initialization

Yet, a prediction can only be made if a path exists, i.e. an initialization of a new path is required. The initialization is achieved by assuming a correspondence between two features in two successive frames. This assumption leads to a prediction that can be compared to all candidates in the third frame. If there is a candidate in the third frame that corresponds to the prediction, a new path is created and the path is continued into subsequent frames. Since a match in the third frame may be found coincidentally, an additional test on the resulting path is performed to ensure genuineness: the path length must be larger than a minimal path length (normally we take 4 or 5 frames).

Furthermore, two options are possible for the candidates: 1) all features in the next frame are candidates, or 2) only unmatched features in the next frame are candidates. The first option allows multiple solutions for one feature, but also requires an exhaustive search. The second choice limits the search, because the number of candidates decreases rapidly as more paths are found; but it also removes a feature as a possibility once it has been added to a path, i.e. the tracking solution depends on the order in which the features are tested.

The search for continuing paths may lead to multiple paths sharing the same feature: e.g. multiple candidates satisfy the prediction, or a candidate was already added to another path. This happens especially if the tolerances are relaxed. In case of multiple correspondences, the path with the best confidence index gets the advantage. The confidence index of a complete path is calculated as follows:

Conf(path) = 1 - e^{-C_{all}/\tau}  \quad \text{with} \quad  C_{all} = \sum_{i=1}^{edges} Corr_i    (13)

where \tau is a growth factor (which can be taken equal to the minimal path length), and the edges are the connections in the path between features in successive frames (or the edges in the event graph, see section 5). The confidence index increases as the length of the path increases, which is convenient since, intuitively, our confidence in a path increases for longer paths. The confidence index is used when a choice has to be made between two paths sharing the same feature.

The worst case complexity of the initialization is O(nm^2), with n the number of frames and m the number of features per frame, which is usually much less than 100. It should be noted that the number of unmatched features from which a new path can be started decreases rapidly once a number of paths have been found, so in practice the complexity will be much lower.

Experience shows that good tracking results are obtained when tracking continuations is done in multiple (forward-and-backward) passes with increasing tolerances. First start with strict tolerances, finding the obvious paths, and then relax the tolerances a little for each successive pass, finding the more indistinct paths. When a pass with less strict tolerances is performed, paths may be extended, connected, or initialized.
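The confidence index of equation 13 is likewise a one-liner. A direct transcription, with tau taken equal to the minimal path length as suggested above:

```python
import math

def path_confidence(edge_correspondences, tau=4):
    """eq. 13: confidence grows towards 1 as the path gets longer;
    edge_correspondences holds Corr for each edge of the path."""
    c_all = sum(edge_correspondences)
    return 1.0 - math.exp(-c_all / tau)
```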

4 Event Detection

The prediction/verification scheme can also be used to detect certain interesting evolving phenomena: events. Each event that can be detected has its own set of rules for prediction/verification. This section describes the utilization of the prediction/verification scheme for event detection. Currently, our method recognizes the following events (compare the terminology to Silver et al. [15, 17, 18]: Continuation, Creation = Birth, Dissipation = Death, Bifurcation = Split, and Amalgamation = Merge):

• Continuation. One-to-one correspondence found between two features in subsequent frames. The feature continues without interaction with others.

• Birth/Death. A feature at the start or at the end of a path decreases in size and dissipates. The related event is called birth or death, depending on the time-direction.

• Entry/Exit. A feature at the start or the end of a path moves through an open boundary of the system (see Section 4.2) and disappears. The related event is called entry or exit, depending on the time-direction.

• Split/Merge. Two or more features merge and continue as one, or one feature splits and continues as two or more. The related event is called merge or split, depending on the time-direction.

• Unresolved. If none of the above events can give an explanation, the feature event is unresolved.

Note that most events come in pairs, because they are each other's reverse in time. This means that a split can be found by testing the criteria for a merge in the backward time-direction. We use this property in the process of event detection.

The different types of events depend on the physical phenomena underlying the dynamics of the feature. For instance, some features may not be able to merge, but collide when they meet, or the boundary of the system may be closed so that entry and exit events are impossible.

4.1 Continuations

Continuations may be considered as non-events; they are the result of the feature path tracking process described in section 3.3. However, this step is necessary before the actual event detection can take place, and it may be repeated after the event detection results in new solutions. For instance, when two paths merge into an unmatched feature, this leads to a new ending of a path which may be continued into the next frame.

4.2 Terminal Events

After continuing paths have been found, the end points of the paths can be explained by testing for terminal events. In Samtaney et al [15], end points of paths are always classified as dissipation or creation. In our case, the end points of paths are unresolved unless a terminal event is detected. Based on the notions described above, the tests for terminal events are as follows.

4.2.1 Birth/Death

For a death, two requirements must be satisfied:

1. The growth of the end point of a path must be negative. This is a test on the difference in size between the two last features in the path.

2. The size of the prediction must be small. If the size is negative, the death event was expected and the test returns a factor of 1.0. Otherwise the volume of the prediction is tested relative to the volume of the end point:

C_{death} = 1 - \frac{V_{pred}/V_{end}}{T_{vol}}    (14)

where T_{vol} indicates the minimal percentage of decrease in volume, i.e. T_{vol} = 0.5 says that V_{pred} should be at least 50% smaller than V_{end}. The tolerance can be taken equal to the tolerance T_{vol} of the volume test for continuations.

A birth is found as the reverse detection of a death.
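The death test can be sketched as follows; this is a hedged reading of the two requirements and equation 14, with volume standing in for the generic size attribute:

```python
def death_correspondence(v_before_end, v_end, v_pred, t_vol):
    """Death test at the end of a path (eq. 14). Returns a factor in
    (-inf, 1], or None when requirement 1 (negative growth) fails."""
    if v_end >= v_before_end:
        return None                  # requirement 1: the feature must shrink
    if v_pred <= 0.0:
        return 1.0                   # negative predicted size: death expected
    return 1.0 - (v_pred / v_end) / t_vol   # requirement 2, eq. 14
```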

4.2.2 Entry/Exit

Features may enter or exit the system through an 'open boundary', leading to an end of a path. An open boundary is a system boundary through which features may move. Some domains have open boundaries; for instance, a flow through a pipe has an inlet and an outlet. The inlet and the outlet are open boundaries, but the pipe in between is a solid wall and therefore a closed boundary. For an exit, two requirements must be satisfied:

1. The object must move towards an open boundary. This is a test on the velocity and position of the last feature in the path compared to the boundary position.

2. The position of the prediction must be near the open boundary. If the position is beyond the boundary, the exit event is expected and the test returns a factor of 1.0. Otherwise the position of the prediction is tested against the position of the boundary:

C_{exit} = 1 - \frac{\|\vec{P}_{pred} - \vec{P}_{bnds}\|}{T_{pos}}    (15)

Again, the same tolerance T_{pos} is taken as in the position test for continuations. This tolerance can be translated to a zone of thickness T_{pos} along the open boundaries where exits are possible.

An entry is detected as the reverse of an exit.

Sometimes the criteria for both terminal events are satisfied simultaneously for one end of a path. In this case the test with the highest correspondence factor is taken as the right one. If both tests are equal, the preference of the user is taken as the terminal event. However, it is also possible to test each terminal event separately.

4.3 Feature Interactions

The evolution of features may be influenced by other features if features interact. In some applications the question may be how features interact with each other, or whether a particular interaction occurs. Most of the time the scientist probably has some idea of what will happen when two features meet: whether they merge, bounce, attract, or repel. The way features behave in the presence of one another depends very much on the underlying physical model. Somehow, this model has to be translated into event detection criteria. For instance, when features are expected to bounce, a model can be made to calculate the predictions after the bounce has occurred. These predictions can then be tested against candidates in the next frame using normal attribute correspondence. Thus, a bounce can be detected.

4.3.1 Split/Merge

When two diffuse objects (like clouds) meet, they will probably merge into one. The result is an object with mass (or volume) equal to the combined masses (or volumes), and a position equal to the weighted positions (weighted by the mass or volume). This model for feature interaction is the underlying model for a split/merge event. Other behavior could also be modeled, leading to a different type of interaction event.

In order to detect a split/merge event, we calculate a prediction for a merge based on the split/merge model mentioned above. Mass and volume attributes are added, and position is weighted by mass (or volume). For each possible type of attribute set, similar rules for a merge are implemented. Table 2 shows the merge functions related to the ellipsoid attribute set. If two or more paths merge, the predictions of all paths are merged into one prediction, which is tested against the candidates in the next frame using normal attribute correspondence. Split events are detected using the same algorithm as for a merge, only in the reverse time direction.

Position:     \vec{P}_{merge} = \frac{\sum_i \vec{P}_i V_i}{\sum_i V_i}    (16)

Axes:         \vec{R}_{merge} = \sqrt[3]{\frac{\sum_i V_i}{4/3\,\pi}} \; \vec{1}    (17)

Orientation:  \vec{A}_{merge} = \vec{0}    (18)

Table 2: The merge functions related to the ellipsoid attribute set. These functions are used to calculate a merged prediction.

Figure 3: Schematic representation of the three possible merge situations 1), 2), and 3) (dotted lines are possible correspondences).
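The merge functions of Table 2 combine several ellipsoid attribute sets into one merged prediction. A sketch of equations 16-18, assuming each feature is a dict with 'center' and 'volume' entries; the merged orientation is deliberately left undetermined (a zero vector), as in the table:

```python
import numpy as np

def merge_prediction(features):
    """Volume-weighted position (eq. 16), axes of the volume-equivalent
    sphere (eq. 17), and no orientation (eq. 18)."""
    volumes = np.array([f["volume"] for f in features])
    positions = np.array([f["center"] for f in features])
    v_total = volumes.sum()
    p_merge = (positions * volumes[:, None]).sum(axis=0) / v_total
    radius = (v_total / (4.0 / 3.0 * np.pi)) ** (1.0 / 3.0)
    return {"center": p_merge, "volume": v_total,
            "axes": np.full(3, radius), "orientation": np.zeros(3)}
```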

There are three possible situations in which a merge may have occurred (see also figure 3):

1. Two or more paths ending in frame i merge into an unmatched feature in frame i+1.

2. Two or more paths ending in frame i merge into the beginning of a new path in frame i+1.

3. One path ending in frame i merges into a continuing path.

For all paths ending in frame i, the three situations are tested in this order. First, all combinations of ending paths are tested for merging with unmatched features, then with starting paths, and finally all remaining endings are tested for merging with continuing paths. We only accept the combination of endings with the best (positive) correspondence, and a merge with a continuing path is only accepted if the (local) correspondence improves.

In order to limit the number of merge tests, we first find the path endings with a prediction in the neighborhood of the candidate. This is done by testing only the positions for normal continuation, however with larger tolerances T_n = N_n \times T_{pos}, with N_n \geq 1.0 the neighborhood factor. This reduces the number of path endings (and the number of combinations) that are tested for a merge with a candidate.

4.4 Unresolved

Unresolved are all features whose evolution is not explained completely, i.e. in both the forward and backward time-direction. For instance, an unexplained ending of a path is unresolved although a correspondence is found in one direction. An unmatched feature has no correspondence in either direction and is doubly unresolved.

Unresolved feature events result from the inability to find a correspondence. This may be caused by a variety of reasons:

The tolerances may be too strict, and should be increased. However, this may also give rise to false correspondences that should not have been detected. The tolerances cannot be raised unlimitedly without invoking false positives.

The object may have been missed during the feature extraction phase, so the object does not exist in this frame. This happens when features are weak, i.e. either the data lies close to the selection threshold, or the number of selected nodes in the cluster lies close to the minimal number of nodes. This may cause the object to flash on and off. We call this effect 'popping', and it results in features that cannot be corresponded.

The time-sampling may have been too coarse; more time-steps are needed in between the frames in order to describe the feature evolutions. The objects change too much between successive frames, i.e. their behavior becomes unpredictable. Also, the objects should exist at least for the minimal path length before participating in an event.

The particular event cannot be detected yet. The event detection method should be extended with a new event detection algorithm.

5 Visualization

The last step in visualizing time-dependent data is the visualization itself. The tracking results (correspondences and events) should be visualized in a clear and meaningful way. We use a combination of two viewers, which can be shown simultaneously: the Feature Viewer and the Graph Viewer. The feature viewer visualizes the features in 3D-space using iconic visualization techniques; the icons give an impression of the attribute values of a feature. The graph viewer visualizes the Event Graph, an abstract 2D representation of the correspondences. It is similar to the DAG used in [15], however with more utilities. We believe that the combination of the two viewers provides an excellent way to visualize time-dependent data. It helps the user to understand relations between corresponding features and to explore the data.

5.1 Graph Viewer

The graph viewer shows the event graph: an abstract 2D representation of the time-dependent feature data that shows the relationships between corresponding features. In the event graph the features are represented by nodes, the correspondences between features by connecting lines (edges) between the nodes, and each particular event is shown by a distinct icon, as shown in Figure 4. The frames are plotted horizontally and the features vertically: (x, y) = (FrameNr, FeatureNr). The space between nodes is fixed, which results in a clear distinction between the individual features.

Figure 4: Icons used for the different events.

Figure 5 shows two different views of the same event graph: the graph drawn normally with the feature number on the y-axis (features are sorted by size), and the graph drawn with minimal edge crossings, which separates the paths more clearly. It is also possible to relate the vertical axis proportionally to a data value of one of the attributes of the feature (for instance size), which shows the attribute as a function of time.

Figure 5: Two visualizations of the same event graph: a) event graph drawn normally, b) event graph drawn with minimal edge crossings.

Colors are used to distinguish the different paths in the graph. Unmatched features are shown in grey, while all feature nodes in the same path are shown with the same color. The nodes representing the features are drawn with an icon that is related to the type of event detected for this feature (Figure 4). All icons are clear, intuitive, and easy to distinguish. By visualizing the continuations as small squares, the focus is automatically directed towards the special events.

5.2 Feature Viewer

The feature viewer visualizes the features in 3D-space using iconic visualization techniques [20]. The parameters of the icons are directly related to attribute values of a feature. Thus, the user can view the evolution of objects by animating the icons. A player was created to browse through the frames and to allow exploration of the time-dependent evolution of features in 3D. Figure 6 shows the player displaying one frame.

Figure 6: The frame player for browsing through the frames using the feature viewer.

5.3 Two Linked Views

The linked combination of the graph viewer and the feature viewer is a powerful tool to explore time-dependent data. The same colors for features are used in both viewers, which creates a direct link between nodes in the graph viewer and features in the feature viewer. Also, when a node is selected in the graph, the corresponding feature icon is shown in the feature viewer, and vice versa. Figure 2 shows one step during the tracking process. The direct correlation between the two viewers provides a strong, interactive tool for visualizing and exploring time-dependent data. The two viewers provide user-interaction in three ways:

• They provide an overall visualization of time-dependent data, which facilitates the recognition of certain patterns in the evolution of the features.

• They provide means for selective visualization of paths, nodes, and event history. Queries on the event types are possible, such as 'show all merge events' (see the sketch after this list). Or the user can view additional information by selecting one particular path, frame, node, or edge.

• They allow guidance of the tracking process. The tracking process can be performed automatically, semi-automatically, or stepwise. In the stepwise mode, the user can examine correspondences and interactively steer the tracking by changing parameters.
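Such queries reduce to simple filters over the event graph's nodes. The sketch below shows one possible, purely illustrative representation of graph nodes and a query helper; the actual viewers of course do considerably more (layout, icons, linked selection).

```python
from dataclasses import dataclass

@dataclass
class GraphNode:
    frame: int          # x-position: frame number
    feature: int        # y-position: feature number
    event: str          # 'continuation', 'birth', 'death', 'entry',
                        # 'exit', 'split', 'merge', or 'unresolved'
    path_id: int | None = None   # None for unmatched (grey) features

def query_events(nodes, event_type):
    """Selective visualization, e.g. query_events(nodes, 'merge')."""
    return [n for n in nodes if n.event == event_type]
```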

6 Applications

6.1 Synthetic Data

The tracking method can be tested using synthetic data generated as described in [12]. The synthetic data is generated using predefined feature data, in which correspondences and events are known in advance. Motion paths and other attributes are specified for each feature as a function of time. The predefined feature data is used to generate grid data (possibly with noise added). Then the whole process of feature extraction, feature tracking, and event detection is executed. The result can be compared to the correspondences that were known in advance.

Figures 2, 5, and 6 are from the same synthetic data set, which includes all possible events. Tracking is performed automatically with increasing tolerances. The final event graph is shown in Figure 5. It is 100% solved and in full accordance with the correspondences known in advance. The unresolved end points of paths at the first and the last frames of the dataset are not counted as unresolved: in a sense these are the temporal equivalents of the spatial entry/exit events, as they leave the time-interval. A solving percentage P_{solved} for the graph can be calculated by:

P_{solved} = 100 \times \frac{2 N_{ftrs} - N_{unres}}{2 N_{ftrs}}    (19)

where N_{ftrs} is the total number of features over all frames, and N_{unres} is the number of unresolved events. The number of features is multiplied by two, because each feature has to be corresponded in two time directions.
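Equation 19 transcribes directly into a small helper:

```python
def solving_percentage(n_features, n_unresolved):
    """eq. 19: each feature must be corresponded in two time
    directions, hence the factor of two."""
    return 100.0 * (2 * n_features - n_unresolved) / (2 * n_features)
```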

6.2 Turbulent Vortex Structures

The second application is a CFD simulation with turbulent vortex structures. The feature data and event graph were obtained from Silver et al [17] of Rutgers University. The data acquisition and feature extraction were performed in the US, and the feature data was transferred to the Netherlands, where it was used for feature tracking. This illustrates one of the benefits of the huge data reduction achieved by feature extraction: the feature data can easily be transferred via a low-bandwidth link.

The feature data consists of 100 frames with, for each vortex structure, its position, volume, and mass attributes. The 100 frames contain a total of 5134 features, which is a relatively large average of over 50 features per frame. This is a good test case for our tracking method because of the complexity of the data: a high average and strongly varying number of features per frame. We tracked the data automatically with increasing tolerances. The result is an event graph that is solved for 96.0%. Figure 7 shows the features in one frame, visualized as spherical icons in the feature viewer. An animation of all frames can be found on the web [9].

Figure 7: Turbulent vortex structures.

Also, we obtained the event graph found by Silver et al, using correspondence criteria based on overlap in octrees and feature attributes, as described in [17], and we compared our tracking results with theirs. Table 3 shows the count per event type for both results. The number of remaining unmatched features is 161 in our case against 256 in theirs. Note that our numbers of exit and death events should be added together for a fair comparison with the number of exit events from Silver. Our tracking method found more continuations but fewer events.

Type of event    # Silver    # Our method
Continuation     4431        4610
Birth            205         58
Entry            n/a         118
Death            256         126
Exit             n/a         83
Split            142         40
Merge            75          25
Unresolved       490         406

Table 3: A comparison of the two tracking methods by counting the events.

The two graphs can be compared by counting the coinciding and non-coinciding edges. There are 8994 coinciding edges, 490 edges in our graph but not in theirs, and 452 edges in theirs but not in ours. So, the two graphs are 100 × 8994/(452 + 490 + 8994) = 90.5% in agreement with each other.

The differences can be explained by two reasons. First, their algorithm is based on overlap, which means that fast moving, small features are not matched. Also, it is possible to get paths with just two nodes (one edge), while in our case a path has at least three nodes (two edges). Second, the attributes mass, position, and volume may not be good attributes for this type of feature. The features are so-called 'worms' that have a strongly-curved shape. Based on these attributes, the neighborhood criterion used by the split/merge detection is inaccurate: a feature that splits may result in two features with a large distance between their positions. The octree representation provides a much more accurate measure for neighborhood, and therefore Silver et al can detect more split/merge events. An attribute set with skeleton information [10] provides a much better description of shape and may provide a better measure for neighborhood.

6.3 Flow Past a Tapered Cylinder

The third application is a flow past a tapered cylinder. The data was obtained from the NASA Ames Research Center (data available via http://science.nas.nasa.gov/Software/DataSets/). It consists of 400 frames of 2.6 Mb each (Plot3D data, grid size 64x64x32, total data size > 1 Gb). Figure 8 shows a standard visualization of one time-step with streamlines, a slice colored by the enthalpy, and the features shown by ellipsoids. The creation of such a visualization takes several minutes, making it unfeasible to interactively browse through the frames.

Figure 8: Standard visualization of the flow past a tapered cylinder.

Features were extracted from the enthalpy (enthalpy ≥ -0.6997), and for each feature an ellipsoidal fit was calculated. The 4121 features are stored in the feature data file (with a size of 476 Kb). The features are highly interacting regions in the wake of the cylinder.

After automatic tracking with increasing tolerances we found 3531 continuations, 82 births, 40 deaths, 27 splits, 23 merges, 886 unresolved events, and still 239 unmatched features remaining. The event graph is solved for 89.2%. Figure 9 shows one frame in the feature viewer.

Figure 9: One frame in the flow past a tapered cylinder.

The example shows the benefits of the data reduction obtained by the feature extraction step: from > 1 Gb to 476 Kb. It is impossible to explore all the data by standard visualization, but after feature extraction the user can easily browse through the frames. Thus, we can explore the time-dependent data in search of temporal phenomena. An animation can be found on the web [9].

7 Conclusions

In this paper we described an innovative method for the visualization of time-dependent data using feature tracking and event detection. The method uses only basic attributes describing the characteristics of interesting features in the data. The feature data is used to track features by a prediction/verification scheme: the feature in the next frame is predicted using uncomplicated prediction schemes, and this prediction is corresponded to the candidates in that frame by testing the attribute values with a number of correspondence criteria. Each correspondence criterion is associated with a tolerance and a weight, which can be changed by the user. The tracking process is highly flexible and interactive, i.e. the user can easily change tracking parameters and investigate their influence on the tracking results. The interaction is feasible because we obtained a huge data reduction by the transformation from the original grid data to the feature data. It is thus possible to perform the tracking process interactively and in multiple passes, in forward and backward time-direction. Also, the system can easily be extended by including other prediction schemes and correspondence criteria.

Feature tracking results in a description of the path of each object. An object path describes the life cycle of a feature: its origin, motion, growth, and extinction. Using these path descriptions, we can detect certain evolutionary phenomena, the events, in the evolution of a feature. An event can be defined as any development in the evolution of a feature that is of interest. Events can be unusual changes in the attributes, particular stages in the life cycle of a feature, specific interactions with other features, or periodically recurring patterns in the evolution. At the moment we can detect the following events: continuation, birth/death, entry/exit, and split/merge. New algorithms for other events may be added in the future.

Also, we introduced a new way of visualizing and exploring time-dependent data using the event graph viewer in a linked combination with a 3D feature viewer. The graph viewer shows the event graph in an abstract 2D way, and the feature viewer provides a browser showing icons representing the features in 3D-space. Together they provide an excellent way for visualization, means for user guidance of the tracking process, and exploration of the time-dependent phenomena.

The applications showed that the results are most satisfactory. Almost 90% of the turbulent vortices example is in agreement with the results obtained by Silver et al. The question arises: which solution is correct? The answer is that there is no objective way to determine the correctness of a correspondence, as temporal coherence is the only basis for correspondence. Both methods have their advantages. Silver et al use octrees, which consume memory but provide better means to detect split/merge events in the case of highly curved features. Our method uses only feature data, which allows tracking with multiple passes and more flexibility.

8 Future Research

Future research will focus on the detection of new types of events. Especially new interaction events are interesting (bounce between two features, transition from one feature to another). Furthermore, we are working on skeleton descriptions of the shape of a feature; it would be worthwhile to develop tracking algorithms for skeletons. Finally, effort is put into the investigation of temporal refinement: zooming in on one particular event in space and in time.

Acknowledgments

The authors wish to thank prof. Deborah Silver and dr. Xin Wang of Rutgers University for the use of their turbulent vortex data, and dr. Jarke J. van Wijk of TU Eindhoven for the stimulating discussions and useful input. This work is supported by the Netherlands Computer Science Research Foundation (SION), with financial support of the Netherlands Organization for Scientific Research (NWO).

References

[1] R. Adams and L. Bischof. Seeded Region Growing. IEEE Trans. on PAMI, 16(6):641–646, June 1994.

[2] G. Adiv. Determining 3D Motion and Structure from Optical Flows Generated by Several Moving Objects. IEEE Trans. on PAMI, 7:384–401, 1985.

[3] D.C. Banks and B.A. Singer. A Predictor-Corrector Technique for Visualizing Unsteady Flow. IEEE Trans. on Visualization and Computer Graphics, 1(2):151–163, June 1995.

[4] J. Becker and M. Rumpf. Visualization of Time-Dependent Velocity Fields by Texture Transport. In D. Bartz, editor, Visualization in Scientific Computing '98, pages 91–101. Springer Verlag, 1998.

[5] R.B. Haber and D.A. McNabb. Visualization Idioms: A Conceptual Model for Scientific Visualization Systems. In G.M. Nielson, B.D. Shriver, and L. Rosenblum, editors, Visualization in Scientific Computing, pages 75–93. IEEE Computer Society Press, 1990.

[6] J.L. Helman and L. Hesselink. Visualization of Vector Field Topology in Fluid Flows. IEEE Computer Graphics and Applications, 11(3):36–46, 1991.

[7] D.S. Kalivas and A.A. Sawchuk. A Region Matching Motion Estimation Algorithm. CVGIP: Image Understanding, 54(2):275–288, Sep 1991.

[8] H.G. Pagendarm and B. Seitz. An Algorithm for Detection and Visualization of Discontinuities in Scientific Data Fields Applied to Flow Data with Shock Waves. In P. Palamidese, editor, Scientific Visualization: Advanced Software Techniques, pages 161–177. Ellis Horwood Limited, 1993.

[9] F. Reinders. http://www.cg.its.tudelft.nl/~freek/Tracking. Web page with feature tracking examples.

[10] F. Reinders, M.E.D. Jacobson, and F.H. Post. Skeleton Graph Generation for Feature Shape Description. In W. de Leeuw and R. van Liere, editors, Data Visualization 2000, pages 73–82. Springer Verlag, 2000.

[11] F. Reinders, F.H. Post, and H.J.W. Spoelder. Attribute-Based Feature Tracking. In E. Gröller, H. Löffelmann, and W. Ribarsky, editors, Data Visualization '99, pages 63–72. Springer Verlag, 1999.

[12] F. Reinders, H.J.W. Spoelder, and F.H. Post. Experiments on the Accuracy of Feature Extraction. In D. Bartz, editor, Visualization in Scientific Computing '98, pages 49–58. Springer Verlag, April 1998.

[13] W.B. Rossow, A.D. Del Genio, and T. Eichler. Cloud-Tracked Winds from Pioneer Venus OCPP Images. J. of Atmos. Sci., 47(17):2053–2082, Sep 1990.

[14] I.A. Sadarjoen and F.H. Post. Deformable Surface Techniques for Field Visualization. In D. Fellner and L. Szirmay-Kalos, editors, Proc. Eurographics '97, volume 16(3) of Computer Graphics Forum, pages C109–C116. Blackwell, Sep 1997.

[15] R. Samtaney, D. Silver, N. Zabusky, and J. Cao. Visualizing Features and Tracking Their Evolution. IEEE Computer, 27(7):20–27, July 1994.

[16] I.K. Sethi, N.V. Patel, and J.H. Yoo. A General Approach for Token Correspondence. Pattern Recognition, 27(12):1775–1786, Dec 1994.

[17] D. Silver and X. Wang. Volume Tracking. In R. Yagel and G.M. Nielson, editors, IEEE Proc. Visualization '96, pages 157–164. Computer Society Press, 1996.

[18] D. Silver and X. Wang. Tracking Scalar Features in Unstructured Data Sets. In D. Ebert, H. Hagen, and H. Rushmeier, editors, IEEE Proc. Visualization '98, pages 79–86. Computer Society Press, 1998.

[19] D. Silver, N.J. Zabusky, V. Fernandez, M. Gao, and R. Samtaney. Ellipsoidal Quantification of Evolving Phenomena. In N.M. Patrikalakis, editor, Scientific Visualization of Natural Phenomena, pages 573–588. Springer Verlag, 1991.

[20] T. van Walsum, F.H. Post, D. Silver, and F.J. Post. Feature Extraction and Iconic Visualization. IEEE Trans. on Visualization and Computer Graphics, 2(2):111–119, 1996.
