Transfer Function Optimization Based on a Combined Model of Visibility and Saliency

Shengzhou Luo, John Dingliana

Graphics Vision and Visualisation Group, Trinity College Dublin, Ireland

Abstract

In this paper we present an automated approach for optimizing the conspicuity of features in 3D volume visualization. By iteratively adjusting the opacity transfer function, we are able to generate visualizations that satisfy a user-specified target distribution defining the relative conspicuity of particular features in the data set. Our approach exploits a metric, called Visibility-Weighted Saliency (VWS), that takes into account both the issues of view-dependent occlusion and visual saliency in defining the visibility of features in volume data. A parallel line search strategy is presented to improve the performance of the optimization mechanism. We demonstrate that the approach is able to achieve promising results in optimizing visualizations of both static and time-varying volume data.

Keywords: Visualization, volume rendering, transfer function, visibility, saliency

1. Introduction

Volume visualization is an effective means of discovering or analyzing 3D features in volumetric data sets, which are ubiquitous in various fields of Science, Medicine and Engineering. This type of data is represented as a regular 3D grid of volume elements (or voxels), which inherently allows interior and exterior structures to be viewed simultaneously. The degree to which certain structural features are emphasized over others is controlled by modulating a transfer function, which maps specific voxel data ranges to appearance attributes such as color and opacity.

In the specification of transfer functions, users typically have a general idea of how visible each feature should be for a given task and then adjust the opacity values in the transfer function accordingly. However, the relationship between the opacity of voxel ranges and the visibility of features in the final image is not linear. In fact, visibility in the final image depends on voxel opacities as well as on view-dependent occlusion by other objects. Furthermore, the distinctiveness of a feature is also affected by how much its appearance contrasts with others in the neighborhood. Due to the interaction of these different factors, transfer function specification generally necessitates a trial-and-error process, with the user having only indirect control through a set of complex parameters with unintuitive effects on the final rendering.

Therefore, it is desirable to have an automated method to assist the user in the design of transfer functions that can objectively match target feature visibility levels specified by the user. In this paper, we propose an optimization approach that supports this requirement by automatically refining a user-defined transfer function towards a simply-defined visibility distribution provided by the user, based on a model that takes into account issues of saliency as well as occlusion and transparency.

Furthermore, we present a parallel line search strategy to improve the performance of the transfer function optimization, making it more suitable for interactive visualization, and we demonstrate the applicability of the approach to both static and time-varying volume data sets.

Summary of our main contributions:

• an alternative technique for automated feature optimization in volume visualization, incorporating measures of visibility and saliency
• a user-based evaluation of the efficacy of combining visibility and saliency in emphasizing features
• a parallel iterative technique to improve optimization performance
• application of the technique to optimizing time-varying volume data

2. Related Work

Transfer function specification is a non-trivial and unintuitive task in volume visualization and is often achieved using subjective manual input. For generalizable solutions, it is desirable to have objective feedback regarding the clarity of features in volume visualization. Correa and Ma [1] introduced visibility histograms to guide transfer function design for both manual and automatic adjustment. Wang et al. [2] extended these to feature visibility histograms, in order to measure the influence of each feature upon the resulting images. Ruiz et al. [3] proposed an information-theoretic framework which obtains opacity transfer functions by minimizing the Kullback-Leibler divergence between the observed visibility distribution and a target distribution provided by the user. Later, Bramon et al. [4] extended this approach to visualize multi-modal volume data.

Cai et al. [5] described a method to derive opacity transfer functions by minimizing the Jensen-Shannon divergence between the observed visibility distribution and a user-defined target distribution; the target distribution can be defined using Gaussian function weighting. Qin et al. [6] proposed using a Gaussian mixture model to build a visibility distribution function and optimize the opacity transfer functions by minimizing the distance between the desired and actual voxel visibility distributions.

Although many approaches in volume visualization attribute the visibility of a feature to its transparency and level of occlusion by other features, it is important to note that the ability of a viewer to perceive a feature can be strongly affected by how it stands out against others in its neighborhood. Thus an understanding of salient regions in images [7] can be useful for improving visualizations. Inspired by mechanisms of the human visual system, various computational models of visual saliency have been proposed to predict gaze allocation in an image [8] [9]. Lee et al. [10] presented mesh saliency, which is defined in a scale-dependent manner using a center-surround operator on Gaussian-weighted mean curvatures. In the scope of volume visualization, Kim and Varshney [11] introduced the use of center-surround operators to compute saliency fields of volume data sets. Based on perceptual principles, Chan et al. [12] introduced several image quality measures to enhance the perceived quality of semi-transparent features. Jänicke and Chen [13] described a quality metric for analyzing the saliency of visualization images and demonstrated its usefulness with examples from information visualization, volume visualization and flow visualization. Shen et al. [14] proposed the use of saliency to assist volume exploration; they described a method for inferring interaction position in volume visualization, in order to help users pick focused features conveniently. Shen et al. [15] described spatiotemporal volume saliency, which extended the saliency field [11] to time-varying volume data. Recently, Luo and Dingliana [16] proposed a metric that combines both aspects of visibility and saliency, and this is exploited in the approach presented in this paper.

3. Background: Visibility-Weighted Saliency

In volume visualization, "visibility" is a term commonly used to encapsulate the opacity of a feature combined with the degree to which it is occluded by other features closer to the viewer. However, another significant element of visibility, often discussed in other fields such as computer vision, is the degree to which a feature stands out from its neighborhood; in other words, its saliency. For disambiguation, we will refer, in the rest of this paper, to these properties (opacity, occlusion and saliency) collectively as the conspicuity of a feature, and we argue that it is this property that needs to be enhanced in order to support a vast range of volume visualization tasks.

In previous work [16], a metric called Visibility-Weighted Saliency (VWS) was proposed, which simultaneously indicates the perceptual saliency and visibility, hence the conspicuity, of features in volume visualization. For completeness, we briefly present the basis of VWS before discussing how we extend upon the original work to automatically optimize transfer functions in volume visualization. The metric is defined based on two component fields in 3D, which we define below.

Visibility Field. The visibility field indicates the viewpoint-dependent occlusion of voxels and is affected by both the opacity of the voxel and the opacity of those voxels in front of the current voxel in the view direction. The visibility of voxel $i$ is calculated using front-to-back compositing, described by Emsenhuber [17] as:

$$v_i = A_i - A_{i-1} = (1 - A_{i-1})\,a_i$$

where $a_i$ is the opacity of voxel $i$ and $A_i$ is the accumulated opacity at voxel $i$. The visibility field $V$ is then simply the visibility of all the voxels in the volume:

$$V = \{\, v_i \mid i \in V \,\}$$

Saliency Field. The saliency of a voxel indicates how it stands out within its local neighborhood, and is essentially modeled as a difference of Gaussians in 3D with respect to appearance attributes such as brightness and saturation. We use a center-surround operator similar to the one used by Shen et al. [15] to compute the saliency field. Let the neighborhood $N(i, \sigma)$ of a voxel $i$ be the set of voxels within a distance $\sigma$, i.e. $N(i, \sigma) = \{\, j \mid \|j - i\| < \sigma \,\}$. Let $G(O, i, \sigma)$ denote the Gaussian-weighted average

$$G(O, i, \sigma) = \sum_{j \in N(i,\sigma)} O_j \, g(i, j, \sigma)$$

where

$$g(i, j, \sigma) = \frac{\exp[-\|j - i\|^2 / (2\sigma)^2]}{\sum_{k \in N(i,\sigma)} \exp[-\|k - i\|^2 / (2\sigma)^2]}$$

and $O$ is a field of appearance attributes of every voxel in the volume, with $O_j$ the appearance attribute of voxel $j$. The saliency field is then defined as the absolute difference of Gaussian-weighted averages

$$L(O, i, \sigma) = |\, w_1 G(O, i, \sigma) - w_2 G(O, i, 2\sigma) \,|$$

where $w_1$ and $w_2$ are the weights of the Gaussian-weighted averages at a fine scale and a coarse scale respectively.

VWS of Features. The VWS metric concerns itself with the conspicuity of specific features within the data. In the context of this paper, a feature is defined as a range in the histogram of voxel values (see Figure 1), and can be represented by a subset of the transfer function graph, typically as a ramp, box, tent-like or trapezoidal shape [18]. However, we believe that the approach presented is extendable to more generalized definitions of features, as long as there exists some means of objectively identifying contiguous subsets of the volume data, e.g. by segmentation or annotation. The VWS of a feature $U$ in the volume $V$ ($U \subseteq V$) with respect to the appearance attribute $O$ is

$$W(O, U, \sigma) = \frac{\sum_{j \in U} v_j L(O, j, \sigma)}{\sum_{j \in V} v_j L(O, j, \sigma)} \qquad (1)$$

where $L(O, j, \sigma)$ is the absolute difference of Gaussian-weighted averages in a local neighborhood within a distance $\sigma$ of voxel $j$ and $v_j$ is the visibility of $j$.
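To make these definitions concrete, the following is a minimal sketch of how the two fields and the per-feature VWS score of Eq. (1) could be computed for a scalar volume, assuming per-voxel opacities and an appearance attribute (e.g. brightness) are available as 3D arrays. The function names are our own, the view direction is assumed to be axis-aligned, and scipy's gaussian_filter stands in for the finite-neighborhood Gaussian-weighted average $G$ (its kernel width would need tuning to match the $(2\sigma)^2$ normalization above); this is a sketch, not the authors' implementation.

```python
# Hedged sketch of the visibility field, saliency field and VWS (Eq. 1).
import numpy as np
from scipy.ndimage import gaussian_filter

def visibility_field(opacity, axis=2):
    """Front-to-back compositing: v_i = (1 - A_{i-1}) * a_i along the view axis."""
    transmittance = np.cumprod(1.0 - opacity, axis=axis)   # prod of (1 - a_j), j <= i
    t_prev = np.roll(transmittance, 1, axis=axis)          # 1 - A_{i-1}
    front = [slice(None)] * opacity.ndim
    front[axis] = 0
    t_prev[tuple(front)] = 1.0                             # first slice is unoccluded
    return t_prev * opacity

def saliency_field(attr, sigma, w1=0.5, w2=0.5):
    """Center-surround difference of Gaussian-weighted averages, L(O, i, sigma)."""
    return np.abs(w1 * gaussian_filter(attr, sigma) - w2 * gaussian_filter(attr, 2 * sigma))

def feature_vws(feature_mask, visibility, saliency):
    """W(O, U, sigma): visibility-weighted saliency of one feature, Eq. (1)."""
    weighted = visibility * saliency
    return weighted[feature_mask].sum() / weighted.sum()
```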

Figure 1: (a) nucleon data set; (b) transfer function with 3 features, with peak control points set to opacities in the ratio {0.1, 0.3, 0.6}; (c) the VWS graph indicates that feature 2 is most conspicuous despite its lower opacity.

4. VWS-based Optimization of Transfer Functions

In this section, we describe our main contribution, namely a transfer function optimization approach which exploits the visibility-weighted saliency metric, described above, to automatically adjust the relative conspicuity of features based on a user's specification of their relative importance. Although VWS provides the means to score the relative conspicuity of features in any given visualization, there is still no linear relationship between the visualization parameters and the desired distribution of visibility in the output; thus an iterative approach must be applied to achieve a target visibility distribution. In our approach, only the opacity of features is changed in the transfer function domain, whilst the classification of features (e.g. intensity ranges on 1D transfer functions) represented by the color map remains invariant. This is based on the assumption that there exists some pre-defined classification of feature ranges by color, and we only adjust the saliency and occlusion while preserving this classification of the data set.

4.1. Objective Function

Our transfer function optimizer adjusts the transfer function to match the visibility-weighted saliency with the user-defined target conspicuity distribution of features. An objective function $F$ is defined as the mean square of differences between the visibility-weighted saliency and the target importance of each feature:

$$F = \frac{\sum_{i=1}^{n} (W_i - t_i)^2}{n} \qquad (2)$$

where $W_i = u_1 W(O_b, i, \sigma) + u_2 W(O_s, i, \sigma)$ is the visibility-weighted saliency of feature $i$, $t_i$ is the user-defined importance of feature $i$, $O_b$ and $O_s$ are the brightness and saturation of feature $i$ respectively, and $n$ is the number of features. The user-defined conspicuity values are normalized so that they add up to 1; in other words, $t_i \in [0, 1]$ and $\sum_{i=1}^{n} t_i = 1$.

In the VWS scheme, as in other saliency models such as [8] and [11], saliency can be defined with respect to appearance attributes such as brightness, saturation, hue and orientation. Multiple saliency fields computed from different attributes can be combined in order to represent a more holistic model of visual saliency. In our implementation, $W_i$ is a weighted sum of visibility-weighted saliency values computed using the brightness ($O_b$) and saturation ($O_s$) of voxels in the CIE L*C*h color space [19], and $u_1$ and $u_2$ are the weights of the two appearance attributes respectively. In the absence of other specific assumptions, we find that applying equal weightings to the different attributes provides dependable results.

However, the visibility-weighted saliency $W_i$ is not a variable that can be directly modified. Instead, $W_i$ is a complicated function of the color and opacity of the voxels in feature $i$ and is also influenced by the rendering viewpoint. The saliency field is a view-independent field based on the color of every voxel in the volume data set, while the visibility field is a view-dependent field computed from the opacity contribution of every voxel to the final image when rendered from a certain viewpoint. In particular, the computation of visibility fields is non-trivial. In order to compute a visibility field, a slice-based rendering is performed with a series of slices parallel to the viewing plane. Visibility values are computed by subtracting the accumulated opacity of the previous slice from that of the current slice. After collecting the visibility values of all voxels, the visibility field can be constructed. In general, the evaluation of the objective function is computationally expensive; nevertheless, for iterative optimization, the visibility field and visibility-weighted saliency need to be recomputed at each step after the feature opacity values are updated.
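As a concrete illustration, a minimal sketch of this objective follows. Here `render_and_vws` is a hypothetical callback, not part of the paper, standing in for the expensive step that re-renders the volume with the candidate feature opacities and returns the per-feature VWS scores for the brightness and saturation attributes.

```python
import numpy as np

def objective(peak_opacities, targets, render_and_vws, u1=0.5, u2=0.5):
    """Eq. (2): mean squared difference between combined VWS scores and targets."""
    w_brightness, w_saturation = render_and_vws(peak_opacities)  # hypothetical callback
    w = u1 * np.asarray(w_brightness) + u2 * np.asarray(w_saturation)
    return float(np.mean((w - np.asarray(targets)) ** 2))
```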

4.2. Optimization Algorithm

In order to reduce the number of iterations required in the optimization, we employ line search [20], an iterative approach that adapts the step size in gradient descent in order to achieve a reduction in the objective function while still making sufficiently fast progress. More specifically, we employ an inexact line search strategy, which has some advantages over exact methods, which may suffer deteriorations in convergence [21]. As exact line search is not in the scope of this paper, for convenience we will use the term "line search" to refer to inexact line search. The procedure is as follows:

1. Set the initial iteration count $m = 0$ and set $M$ to the maximum iteration count.
2. Check whether $f(x_k + \gamma_{m+1} g_k) < f(x_k + \gamma_m g_k)$, where $\gamma_m = 2^m$, $g_k$ is the descent direction and $x_k$ is the current point at the $k$-th iteration.
3. If so and $m < M - 1$, then set $m = m + 1$ and repeat step 2; otherwise terminate the line search, and $\gamma_m$ is the chosen step size.

This strategy does not find the exact minimum along the line direction; instead it yields reasonable results and descends much faster than gradient descent. Note that the visibility-weighted saliency of a feature often increases as its feature opacity increases; however, it also depends on the rendering viewpoint and the spatial distribution of the voxels of every feature in the volume data set. Figure 2 (d) and (e) display how the opacities of the features are adjusted over iterations with gradient descent and line search respectively. Compared to standard gradient descent, line search takes many fewer iterations to converge to a desired target.
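A minimal sketch of this inexact line search is given below, assuming the objective `f` accepts a numpy-style array of peak opacities, `x` is the current point and `g` is a descent direction (e.g. the negative finite-difference gradient). The step sizes $\gamma_m = 2^m$ are doubled while the objective keeps decreasing; the names and the cap on candidates are our own illustrative choices.

```python
def line_search(f, x, g, max_steps=8):
    """Inexact line search: double gamma while f(x + gamma * g) keeps decreasing."""
    m = 0
    best = f(x + (2 ** m) * g)
    while m < max_steps - 1:
        nxt = f(x + (2 ** (m + 1)) * g)
        if nxt >= best:          # no further improvement: stop doubling
            break
        m, best = m + 1, nxt
    return 2 ** m                # chosen step size gamma_m
```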

Figure 2: (a) After optimization towards relative visibility of 0.1, 0.3 and 0.6 respectively for each feature, the green feature is particularly emphasized; (b) the optimized transfer function; (c) VWS; (d) & (e) the opacity of the peak control points over iterations, with gradient descent and line search respectively.

4.2.1. Parallel Line Search

The classical gradient descent is a sequential algorithm: in its iterative procedure, each iteration takes the result of the previous iteration as input. However, the line search at each iteration can be computed in parallel to accelerate the optimization. Thus we further propose a parallel line search strategy, which evaluates the objective function at different candidate points in parallel along the line search direction. With this parallel approach, the computing power of modern multi-core processors can be better exploited to accelerate the transfer function optimization. Specifically, multiple threads are launched to perform the line search, with each thread computing the visibility-weighted saliency and the objective function at a candidate point. Subsequently, the results at all the candidate points are aggregated and the candidate point with the minimum objective function value is chosen as the next step. The procedure for the parallel line search is as follows:

1. Generate a list of $M$ step sizes $S = \{\gamma_0, \gamma_1, ..., \gamma_{M-1}\}$ where $\gamma_m = 2^m$.
2. Evaluate $f(x_k + \gamma_m g_k)$ in parallel for each $\gamma_m$ in $S$.
3. Find the index $m$ of the minimum $f(x_k + \gamma_m g_k)$; $\gamma_m$ is the chosen step size.

The mechanism of the parallel line search is slightly different from the sequential version. In the latter, if the current candidate point does not meet the condition, the search is terminated and the next candidate point is not evaluated. In contrast, the parallel line search always evaluates all the candidate points and picks the one with the least value of the objective function. However, the two methods have the same behavior if the objective function is convex.

The parallel line search strategy introduces some extra overhead for starting and terminating threads. In practice, the number of threads should not exceed the number of cores of the processor; otherwise multiple threads have to share the same core, which would impact performance. In fact, parallel line search is beneficial only when the evaluation of the objective function is more expensive than the overhead of parallel processing. In our case, the objective function is particularly expensive as it requires computing the visibility field, which in turn requires a pass of slice-based volume rendering. The performance of the approach is discussed in Section 5.
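A sketch of the parallel variant follows: all candidate step sizes are evaluated concurrently and the candidate with the smallest objective value is chosen. The pool choice is illustrative only; the paper's prototype used multiple threads, and in CPython a CPU-bound objective would need a process pool (or an objective that releases the GIL) to achieve true parallelism.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_line_search(f, x, g, num_candidates=8, workers=4):
    """Evaluate f at all candidate points in parallel; keep the best step size."""
    steps = [2 ** m for m in range(num_candidates)]
    candidates = [x + s * g for s in steps]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        values = list(pool.map(f, candidates))
    return steps[values.index(min(values))]  # step with minimum objective value
```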

4.3. Application to Time-Varying Data

We were particularly interested in applying our optimization to the case of time-varying volume data, where multiple time-steps are each represented as individual volume data sets. In such cases, it is particularly difficult and time-consuming for end users to derive optimal transfer functions through a manual process, as the impact of any small change to the transfer function would need to be examined across many different frames of animation. Our automatic optimization approach provides two alternative solutions to this problem. On one hand, the optimization could be applied to a single representative time-step, such as the first frame or median frame, to provide a single global transfer function, which is then applied to the full sequence of volumes. More interestingly, our automated technique allows us to adaptively optimize the transfer function for each individual time-step in the sequence. As the distribution of data, i.e. the histogram, changes from frame to frame, the optimal transfer function to visualize features in a given frame also evolves; thus such an adaptive process yields better results across the full sequence of frames. Results of applying these two methods are presented in the next section.


5. Evaluation

5.1. Efficacy of Visibility-Weighted Saliency

An approach similar to VWS is the Feature Visibility (FV) measure of Wang et al. [2], which analyses voxel visibility and can be used to adjust transfer functions to emphasize chosen features. VWS in theory provides an improvement over this approach by adding a measure of visual saliency. To illustrate the impact of this, Figure 3 shows an image of the tooth data set and its feature visibility and visibility-weighted saliency. In Figure 4, the saturation of the red feature is reduced. VWS is able to pick up the saturation change; thus the red feature has lower VWS and the yellow feature has higher VWS in Figure 4 (c). On the other hand, FV, being purely based on visibility, is indifferent to the saturation change; thus the feature visibility in Figure 4 (b) is the same as that in Figure 3 (b). From this we can conclude that VWS provides a more accurate sense of conspicuity in cases where features stand out mainly based on their color contrast.

Furthermore, we were interested in how VWS compares to a pure saliency metric.

One such measure is the widely used saliency model of Itti et al. [8]. The output from this model is a 2D view-dependent map indicating the visual saliency of pixels in the full rendered image, and thus cannot directly be used to estimate the visual saliency of 3D voxels. However, using an inverse distance weighting [22] between pixels of the final image and individual 2D feature images (rendered from the same view but each isolating only an individual feature), we can estimate the visual saliency of each feature as the weighted total saliency of each 2D feature saliency map. We refer to this as 2D Feature Saliency (2DFS) in the rest of the paper.

Figure 3: (a) The tooth data set; (b) feature visibility [2]; (c) visibility-weighted saliency.

Figure 4: (a) Saturation of the red feature is reduced; (b) feature visibility is the same as in Fig. 3 (b); (c) VWS identifies the change in emphasis of the red and yellow features.

We conducted a brief user study to gather subjective opinion scores regarding the distinctiveness of features in volume visualizations, and then analyzed the correlations between the subjective opinion scores and the scores of VWS, 2DFS and, for completeness, the feature visibility (FV) measure discussed above. Since VWS had not previously been formally evaluated with user studies, we felt that such a step was first required to establish the efficacy of the metric. The premise is that if we prove that VWS ratings sufficiently represent user opinion, we can consequently infer that our VWS-optimized transfer functions align reasonably well with user expectation of a given target distribution.

30 participants (20 male and 10 female) took part in the experiment to gauge how the visual saliency of objects in volume visualization is perceived by human users. They were shown volume visualizations, rendered using Voreen [23], of 6 well-known volume data sets (Engine Block, Foot, MRbrain, Nucleon, Tooth and Vismale Head) taken from publicly available repositories (see footnotes 1–3). The opacity transfer functions for visualization were manually edited for varying degrees of emphasis on specific features (see the example in Figure 5). Participants were asked to score 54 images (9 different transfer functions for each of the data sets) on a scale of 1 to 5 by keyboard input, based on how "clear and distinct" a particular feature appeared.

Figure 5: Sample images of the stimuli shown in the user study and corresponding VWS values.

We applied Spearman's rank correlation [24] to evaluate the strength of monotonic association between the users' observed Mean Opinion Scores (MOS) on the conspicuity of features and the scores calculated from the three computational metrics, VWS, FV and 2DFS. Our hypothesis is that if a correlation exists, this should imply that the metric is useful in predicting user perception of the visualization. As shown in Table 1, we found strong positive correlations between MOS and VWS (0.67508) and between MOS and FV (0.678626) respectively, whilst there is a moderate positive correlation between MOS and 2DFS (0.550472). The results indicate that VWS is equivalent to FV and noticeably better than 2DFS in terms of correlation to user opinion scores of feature distinctiveness. Note that in this experiment we only modified the opacity transfer functions, as color function optimization is outside the scope of the paper; thus the results do not reflect that VWS may behave differently from FV due to color contrast factors.

Table 1: Spearman's rank correlation of 54 opinion scores against the corresponding VWS, FV and 2DFS respectively.

                 Spearman's ρ    P-value
  MOS vs VWS     0.67508         2.16005 × 10⁻⁸
  MOS vs FV      0.678626        1.70738 × 10⁻⁸
  MOS vs 2DFS    0.550472        1.61418 × 10⁻⁵
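The analysis in Table 1 can be reproduced with a standard Spearman rank correlation, as sketched below; the arrays are placeholders standing in for the 54 mean opinion scores and the corresponding metric scores from the study (the paper reports ρ = 0.67508 for MOS vs VWS).

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
mos = rng.uniform(1, 5, size=54)            # placeholder mean opinion scores
metric = mos + rng.normal(0, 0.5, size=54)  # placeholder metric scores
rho, p = spearmanr(mos, metric)             # rank correlation and p-value
print(f"Spearman's rho = {rho:.3f}, p = {p:.2e}")
```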

5.2. Optimization of Static Transfer Functions

In this section, we present results to demonstrate the effectiveness of our automatic optimization approach on the nucleon (voxel dimensions: 41 × 41 × 41), tooth (140 × 120 × 161) and CT-knee (379 × 229 × 305) data sets¹, one time-step of a simulated supernova (432 × 432 × 432)⁴ and a simulated turbulent vortex flow (128 × 128 × 128, 100 time-steps)⁵.

1. Volume Library courtesy of Stefan Roettger: http://www9.informatik.uni-erlangen.de/External/vollib/
2. Voreen data sets: http://www.uni-muenster.de/Voreen/
3. Stanford volume data archive: https://graphics.stanford.edu/data/voldata/
4. VisFiles: http://vis.cs.ucdavis.edu/VisFiles/pages/supernova.php
5. Time-varying data repository: http://www.cs.ucdavis.edu/~ma/ITR/

Results were obtained on a computer equipped with an Intel Xeon E3-1246 v3 processor, 16 GB of RAM and an NVIDIA Quadro K4200 graphics card. The transfer functions were optimized in a prototype program written in Wolfram Mathematica 11 and then automatically loaded into the Voreen volume rendering engine [23] for visualization. We examine the transfer functions and volume rendered images before and after transfer function optimization, as well as the evolution of the opacity values of features during the optimization.

For a number of sample data sets, we generated an "ideally optimized" image using a large number of iterations and compared this, using the SSIM metric [25], to images at different progressive stages of optimization. We noted, across all data sets, that for values of the objective function below 0.001, the SSIM scores settled consistently at over 0.99, which we took as an indicator that further iterations lead to an almost imperceptible change in the rendered image. In practice, this threshold can be chosen as demanded by the application. We also noted that using 0.05 as the step size makes the objective function converge steadily for the sample data sets while still maintaining a desirable convergence speed. Hence, 0.05 is used as the step size, and the objective function falling below 0.001 is regarded as convergence.

We denote a target VWS distribution as $\{v_1, v_2, ..., v_n\}$, where each $v_i$ is a normalized relative VWS value for a feature and $\sum_{i=1}^{n} v_i = 1$. In all following examples $n = 3$. A small number of discrete features is sufficient in practice for most manually-specified transfer functions; however, there are no specific constraints on the value of $n$ in our framework. The specific target distribution is indicated by the user but, in practice, if this is not explicitly specified, we could assume the peak opacities of the input transfer function to be an indicator of the target distribution intended by the user, similar to the approach taken by Correa and Ma [26]. In this case, the VWS values of the features after optimization will be distributed proportionally to the input peak opacities of the features.

Optimization Results: Figure 1 shows the volume rendered image of the nucleon data set, and the transfer function and visibility-weighted saliency, before transfer function optimization.
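The SSIM-based convergence check described above could be implemented with an off-the-shelf SSIM, as sketched below; scikit-image's structural_similarity stands in for the metric of Wang et al. [25], and `image` and `reference` are assumed to be grayscale renderings normalized to [0, 1].

```python
from skimage.metrics import structural_similarity

def nearly_converged(image, reference, threshold=0.99):
    """True when further optimization changes the rendering imperceptibly."""
    return structural_similarity(image, reference, data_range=1.0) >= threshold
```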

In Figure 1 (b), the opacities of the peak control points of the 3 features are set to the ratio {0.1, 0.3, 0.6}, and the red histogram in the background is the intensity histogram of the volume data set on a logarithmic scale. Figure 2 shows the results of Figure 1 after optimization towards a target conspicuity distribution of {0.1, 0.3, 0.6}. In Figure 2 (a), the three features from outside to inside appear at different transparency levels, from weak to strong. This reveals a clear perspective of the three structures. Figure 2 (b) and (c) are the optimized transfer function and visibility-weighted saliency histogram respectively. Figure 2 (d) and (e) are the evolution curves of the opacity values of the peak control points of each feature over iterations, for gradient descent and line search respectively. The evolution curves of the objective function are shown in Figure 10 (c), which shows that the line searches converge much faster than gradient descent.

Figure 6 displays the volume rendered images and transfer functions of the tooth data set before and after optimization towards a target conspicuity distribution of {0.1, 0.3, 0.6}. In the pre-optimized transfer function, Figure 6 (a), the opacities of the features were manually set to a relative distribution of 0.1, 0.3 and 0.6 respectively. After VWS-optimization, Figure 6 (b), the resulting transfer function indicates that, according to VWS, in order to achieve this perceptual distribution the opacity of the yellow feature should be much higher, to account for occlusions by surrounding tissue.

Figure 6: Tooth: (a) feature opacities manually set to {0.1, 0.3, 0.6}; (b) after optimization towards the target {0.1, 0.3, 0.6}, the yellow feature is subtly more emphasized.

Figure 7 shows the volume rendered images and transfer functions of the CT-knee data set before (a) and after optimization (b) towards a target of {0.1, 0.3, 0.6}. As with the tooth, the initial input transfer function was set with opacities in the target proportion. The optimization suggests that feature 2 needs to be more transparent and feature 3 more opaque to achieve this conspicuity distribution.

Figure 7: CT-knee: (a) opacities manually set to {0.1, 0.3, 0.6}; (b) after VWS-optimization to {0.1, 0.3, 0.6}, the internal green and purple features are clearer.

Figure 11 shows the volume rendered images, transfer function and VWS graph of the first time-step of the vortex data set before and after optimization. In this case the objective was to have equal conspicuity of the three features (i.e. a {1/3, 1/3, 1/3} distribution). The pre-optimized images clearly indicate this is not the case, as the purple feature dominates the rendered image. For a user to have equal awareness of the behavior of all features, the more subtle feature in the green range needs to have a relatively higher opacity.

Figure 8 shows a single frame (time-step 1324) of the supernova data set, and the visualization and corresponding transfer functions before and after optimization, demonstrating a result with a larger data set and a more general color transfer function, in this case a straightforward rainbow color map. The selected features are equally sized ranges on the intensity axis and the target was to achieve equal conspicuity across the three features. Note that in the optimized image in Figure 8 (b), interior features become relatively more visible.

In Figure 9, we further demonstrate the approach applied to transfer functions with different numbers of distinct features: in Figure 9 (c) and (d) the target is set to equal conspicuity for all features, and in Figure 9 (e) and (f) the target is set to linearly increasing relative conspicuity from left to right, in other words with higher-intensity features increasingly more conspicuous (for instance, a target distribution of {1/21, 2/21, 3/21, 4/21, 5/21, 6/21} for six features in the supernova). We can see that, compared to the unoptimized example in the first row, the interior details are more emphasized in both cases, and especially so for the bottom row. In theory, there is no limit to the number of features that can be specified in our approach, nor is the approach limited to any specific color transfer function. In practice, most real-world tasks would use a few features, as extremely large numbers of distinct features would quickly become indistinguishable regardless of conspicuity. Furthermore, it is beneficial for the color distribution to correlate with the feature intensity ranges.

Figure 8: Supernova (time-step 1324): (a) opacities manually set to {1/3, 1/3, 1/3}; (b) after VWS-optimization to {1/3, 1/3, 1/3}, the internal features are more visible.

Figure 9: (a) & (b) Supernova (time-step 1324) and CT-Knee before optimization; (c) & (d) Supernova and CT-Knee after optimization to an equal VWS target respectively; (e) & (f) Supernova and CT-Knee after optimization to a linearly increasing VWS target respectively. The internal features are more visible after optimizing towards linearly increasing targets than towards equally weighted targets.

Performance Analysis: Figure 10 (a) shows the convergence of the objective function during optimization of the nucleon data set. Here, we can see that both line search methods require far fewer steps to converge than gradient descent and provide a significant improvement in performance. Note that because the line search and the parallel line search made the same choices of adaptive step sizes during the iterations, the two curves overlap completely. Figure 10 (b) displays the paths of the gradient descent and the line search in a visualization of the parameter space. Each axis represents the opacity of the peak control point of a feature, and the color represents the value of the objective function. Note that gradient descent progresses with fixed step sizes, whilst line search progresses more aggressively. We observed that the other data sets we tested exhibited similar behavior. Figures 10 (c) and (d) show a similar optimization from a different start condition (i.e. initial transfer function); we note that the approach converges to almost the same end result.

Table 2 compares the performance of the different optimization techniques for several static data sets and for a single frame of the Turbulent Vortex and Supernova time-varying data sets. We can see that the two line search methods are considerably quicker than gradient descent and both take the same number of steps to converge. Furthermore, the parallel line search method (run on 4 CPU threads) only took about half the time of the sequential line search method.

At present, the computing times are clearly above what would be required for an interactive application, but it should be noted that these are based on a basic implementation in Wolfram Mathematica and serve merely to compare the relative efficiency of the different optimization strategies. Performance would be significantly improved in a compiled programming language such as C++ or by using the GPU for the computation. For instance, one of the major bottlenecks is the expensive visibility field computation, which is performed in each iteration. In this regard, previous authors have shown that it is possible to compute visibility in real-time on a GPU [26] [17]. We did not have full implementations of these published techniques to hand for testing, but we plan to implement similar optimizations in future work.

5.3. Optimization for Time-Varying Volume Data

Next, we applied our optimization framework to all the time steps of the vortex data set in order to test its effectiveness in the context of time-varying data sets. We dynamically optimized the transfer function to a globally defined user-specified target (equal weights, i.e. {1/3, 1/3, 1/3}, were set as the target in this test) for each time step of the simulation data set.

Table 2: Performance of gradient descent, line search and parallel line search (PLS), showing steps and time (seconds) taken to converge (F < 0.001).

                 Gradient descent     Line search       PLS
                 steps    time        steps   time      steps   time
  nucleon        17       1.07        2       0.59      2       0.38
  tooth          21       7.56        2       3.25      2       1.57
  CT-Knee        17       33.84       2       17.81     2       9.26
  vortex         33       14.00       13      15.22     13      8.40
  supernova      27       158.30      3       78.78     3       40.16


Figure 11: Visualization, transfer function and VWS of the vortex data set: (a) with opacities set to equal, feature 1 dominates (VWS ≈ 0.84, 0.13, 0.03); (b) after optimization to a VWS target of {1/3, 1/3, 1/3} (achieved VWS ≈ 0.33, 0.33, 0.34), details in the internal green and red features are more recognizable.


Figure 12 compares how the VWS values evolve over the frames of the vortex simulation visualized with different transfer function assumptions. No-VWS denotes a manual transfer function, where equal opacities are manually assigned to each feature (this is considered a reasonable first guess), as in Figure 11 (a). Static-VWS denotes a case where the transfer function was VWS-optimized for the first time step, as in Figure 11 (b), and this transfer function was then applied to all time-steps of the simulation. Dynamic-VWS is the case where the transfer function is optimized for each time step individually.

With the naive transfer function, the VWS curves in Figure 12 (a) are far apart from each other and the purple feature has the highest VWS throughout all the time steps. This reflects the fact that although each feature has been assigned an equivalent opacity, their individual levels of perceptibility are considerably different, due to interior and background elements being occluded by the more substantial voxels in feature 1. With the optimized transfer functions, the VWS curves in both Figure 12 (b) and (c) are more converged. Moreover, the VWS curves in Figure 12 (c) are the most stably converged, because the dynamic optimization accounts for changes in the distribution (histogram) of intensities over the course of the simulation.

Since VWS is used as the target for optimization, the VWS plots are predictably favorable in terms of showing that the optimization achieves its target distribution. In order to provide an independent measure of the effectiveness of the resulting visualizations, we plotted the 2DFS scores (as described in Sec. 5.1) of the optimized transfer functions, shown in Figure 12 (d)-(f). As before, the graphs show a clear distinction between the no-VWS case and the two optimized cases. Figure 13 shows time-steps 30 and 80 rendered with the naive transfer function, the statically optimized transfer function and the dynamically optimized transfer function respectively. As can be observed in the accompanying video, the optimized visualizations exhibit a reasonable degree of temporal coherence, which is a significant benefit in time-varying visualization.

Furthermore, in the context of time-varying data, it may be possible to exploit coherency by taking results from previous frames into account to improve optimization performance. We found that optimizing the transfer function of each frame using the optimized transfer function of the previous frame as a starting point led to better performance. For the vortex, on average, the result converged to the target within 3 steps with the parallel line search strategy, compared to 13 steps when starting from a naive assumption (as reported in Table 2), resulting in one-third of the original optimization time. For the supernova, both the iterations and the processing time were reduced to half of what it takes when optimizing each frame independently.
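A minimal sketch of this warm-start strategy follows, assuming a hypothetical `optimize` function that runs the parallel-line-search optimization on one volume until convergence ($F < 0.001$) and returns the optimized peak opacities:

```python
def optimize_sequence(frames, initial_opacities, targets, optimize):
    """Optimize each time-step, warm-starting from the previous frame's result."""
    opacities = initial_opacities
    per_frame = []
    for volume in frames:
        opacities = optimize(volume, opacities, targets)  # warm start
        per_frame.append(opacities)
    return per_frame
```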

Figure 12: VWS of the vortex data set over simulation frames with transfer functions that have been (a) manually chosen (no-VWS), (b) statically optimized (static-VWS), and (c) dynamically optimized for each time step (dynamic-VWS); (d)-(f) 2DFS of the vortex data set for each case.

Figure 13: Time-steps 30 (top row) and 80 (bottom row) rendered with no-VWS, static-VWS and dynamic-VWS respectively, from left to right.

Figure 10: (a) Convergence of the objective function during transfer function optimization of the nucleon data set; note that the curves for the two line search methods are overlaid due to their identical step choices. (b) The steps of gradient descent and line search shown in a visualization of the parameter space. (c) & (d) Corresponding graphs for an optimization from a different starting condition.


6. Conclusions


We proposed a novel transfer function optimization approach that exploits the visibility-weighted saliency metric. By automatically refining the transfer function to match user-defined target conspicuity distribution values for features of interest, the process of designing effective transfer functions is simplified. Results indicate that this approach can be effective for both static and time-varying volume data sets. In addition, a parallel line search strategy is proposed, which exploits the computing power of multi-core processors to improve the performance of the transfer function optimization approach.

Currently, the performance of the system is not interactive: over 40 seconds for the supernova data set at 432³ resolution, and generally below real-time rates. We acknowledge that much higher performance would be required to enable use cases such as allowing the user to interactively modify the target VWS distribution or, in fact, having the transfer function updated automatically for a chosen viewpoint in real-time. However, it should be noted that the results presented are based on an initial prototype CPU implementation running in Wolfram Mathematica; our primary goal in this paper was to prove the viability of the approach in successfully achieving automated optimization of the transfer function. We believe that performance optimization, for instance by GPU or lower-level multi-core parallelization, should speed this up to interactive rates, in particular for the visibility field calculation, which other authors have achieved in real-time on the GPU [26] [17]. We did not have full implementations of these published techniques to hand for testing, but we plan to implement similar optimizations in future work.

With regard to time-varying data sets, it may be possible to improve performance by exploiting temporal coherency. We discussed some initial results that show promise by simply using the previous frame's result as the starting point for optimization of the subsequent frame. However, further studies across different data sets and conditions would be useful. Furthermore, it is possible that a better or more coherent result might be obtained if we could take into account data from more than a single frame, for instance by combining the data of all or several frames of the series and optimizing in a single pass. A similar approach is taken by Jankun-Kelly and Ma [27], who aggregate the time series into a summary volume and generate a transfer function based on this, and it would be interesting to see if similar approaches could be viable with VWS-based optimization.

The main difference from the Feature Visibility (FV) approach is the added support for saliency, which is affected primarily by local contrast with respect to attributes such as hue, saturation and brightness. At present, although the metric is sensitive to these attributes, our optimization approach is opacity-based only; thus we are unable to fully demonstrate distinctly different behavior from a pure visibility-based approach. Indeed, the user study seems to suggest that when color changes are not accounted for, FV's performance is equivalent to VWS. However, we believe that with an intelligent means of also varying hues, the advantages of coupling visibility with saliency may be further proven in the future. At present, we merely propose that VWS provides an interesting alternative to FV, with similar performance.


The efficacy of VWS had heretofore not been validated with a user study. We conducted a small experiment primarily to justify its use in our optimization framework, but the results are also an important step in establishing the combined consideration of visibility and saliency in future work. We were not able to conduct a direct study of the actual results of the optimization, largely because this would require a much more complicated user task, such as rating whether a certain result fits a particular target distribution. For generality, this would require testing across varied transfer functions and numbers of features, and could be very challenging for participants, who would be asked to make a subjective decision based on their perception of a highly complex configuration space. It may, however, be possible to test the efficacy of VWS optimization on a more specific scope of problems, such as the visualization of anatomy, where more experienced users can rate the quality of the optimized visualization as a whole. We hope to conduct such studies in the near future. Assuming that better interactivity is achieved, we would also like to integrate a convenient graphical user interface for users to intuitively provide the necessary target distribution, in terms of a feature definition, number of features and relative conspicuity weighting.

7. Acknowledgments

This research has been conducted with the financial support of Science Foundation Ireland (SFI) under Grant Number 13/IA/1895. The Supernova data set is made available by Dr. John Blondin at North Carolina State University through the US Department of Energy's SciDAC Institute for Ultrascale Visualization. Other data sets were obtained from the Volume Library courtesy of Stefan Roettger, the Stanford Volume Data archive and the Time-varying data repository at UC Davis. The nucleon data set was obtained from the free distribution of the Voreen engine. We would like to thank the respective owners for making these data sets available.

References

[1] Correa, C.D., Ma, K.L. Visibility-driven transfer functions. In: IEEE Pacific Visualization Symposium; 2009, p. 177–184.
[2] Wang, Y., Zhang, J., Chen, W., Zhang, H., Chi, X. Efficient opacity specification based on feature visibilities in direct volume rendering. Computer Graphics Forum 2011;30(7):2117–2126.
[3] Ruiz, M., Bardera, A., Boada, I., Viola, I., Feixas, M., Sbert, M. Automatic transfer functions based on informational divergence. IEEE Transactions on Visualization and Computer Graphics 2011;17(12):1932–1941.
[4] Bramon, R., Ruiz, M., Bardera, A., Boada, I., Feixas, M., Sbert, M. Information theory-based automatic multimodal transfer function design. IEEE Journal of Biomedical and Health Informatics 2013;17(4):870–880.
[5] Cai, L., Tay, W.L., Nguyen, B.P., Chui, C.K., Ong, S.H. Automatic transfer function design for medical visualization using visibility distributions and projective color mapping. Computerized Medical Imaging and Graphics 2013;37(7):450–458.


[6] Qin, H., Ye, B., He, R. The voxel visibility model: An efficient framework for transfer function design. Computerized Medical Imaging and Graphics 2015;40:138–146.
[7] Zhao, Q., Koch, C. Learning saliency-based visual attention: A review. Signal Processing 2013;93(6):1401–1407.
[8] Itti, L., Koch, C., Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 1998;20(11):1254–1259.
[9] Harel, J., Koch, C., Perona, P. Graph-based visual saliency. In: Proceedings of Neural Information Processing Systems (NIPS); 2006, p. 545–552.
[10] Lee, C.H., Varshney, A., Jacobs, D.W. Mesh saliency. ACM Transactions on Graphics 2005;24(3):659–666.
[11] Kim, Y., Varshney, A. Saliency-guided enhancement for volume visualization. IEEE Transactions on Visualization and Computer Graphics 2006;12(5):925–932.
[12] Chan, M.Y., Wu, Y., Mak, W.H., Chen, W., Qu, H. Perception-based transparency optimization for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 2009;15(6):1283–1290.
[13] Jänicke, H., Chen, M. A salience-based quality metric for visualization. Computer Graphics Forum 2010;29(3):1183–1192.
[14] Shen, E., Li, S., Cai, X., Zeng, L., Wang, W. SAVE: saliency-assisted volume exploration. Journal of Visualization 2014;18(2):1–11.
[15] Shen, E., Wang, Y., Li, S. Spatiotemporal volume saliency. Journal of Visualization 2015;19(1):1–12.
[16] Luo, S., Dingliana, J. Visibility-weighted saliency for volume visualization. In: Computer Graphics and Visual Computing (CGVC); London, UK; 2015.
[17] Emsenhuber, G. Visibility Histograms in Direct Volume Rendering. Master's Thesis; Institute of Computer Graphics and Algorithms, Vienna University of Technology; 2008.
[18] König, A., Gröller, M.E. Mastering transfer function specification by using VolumePro technology. Technical Report TR-186-2-00-07; Institute of Computer Graphics and Algorithms, Vienna University of Technology; 2000.
[19] Baldevbhai, P.J., Anand, R. Color image segmentation for medical images using L*a*b* color space. IOSR Journal of Electronics and Communication Engineering 2012;1(2):24–45.
[20] Vrahatis, M.N., Androulakis, G.S., Lambrinos, J.N., Magoulas, G.D. A class of gradient unconstrained minimization algorithms with adaptive stepsize. Journal of Computational and Applied Mathematics 2000;114(2):367–386.
[21] Zhou, B., Gao, L., Dai, Y.H. Gradient methods with adaptive step-sizes. Computational Optimization and Applications 2006;35(1):69–86.
[22] Shepard, D. A two-dimensional interpolation function for irregularly-spaced data. In: Proceedings of the 1968 23rd ACM National Conference; 1968, p. 517–524.
[23] Meyer-Spradow, J., Ropinski, T., Mensmann, J., Hinrichs, K. Voreen: A rapid-prototyping environment for ray-casting-based volume visualizations. IEEE Computer Graphics and Applications 2009;29(6):6–13.
[24] Cunningham, D., Wallraven, C. Experimental Design: From User Studies to Psychophysics. 1st ed.; A. K. Peters, Ltd.; 2011.
[25] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 2004;13(4):600–612.
[26] Correa, C.D., Ma, K.L. Visibility histograms and visibility-driven transfer functions. IEEE Transactions on Visualization and Computer Graphics 2011;17(2):192–204.
[27] Jankun-Kelly, T.J., Ma, K.L. A study of transfer function generation for time-varying volume data. In: Proceedings of the 2001 Eurographics Conference on Volume Graphics (VG'01); Aire-la-Ville, Switzerland: Eurographics Association; 2001, p. 51–66.
