IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 21, NO. 3, MARCH 2015, pp. 339–349

Comparing Color and Leader Line Highlighting Strategies in Coordinated View Geovisualizations

Amy L. Griffin and Anthony C. Robinson

Abstract—In most coordinated view geovisualization tools, a transient visual effect is used to highlight observations across views when brushed with a mouse or other input device. Most current geovisualization and information visualization systems use colored outlines or fills to highlight observations, but there remains a wide range of alternative visual strategies that can also be implemented and compared to color highlighting to evaluate user performance. This paper describes the results of an experiment designed to compare user performance with two highlighting methods: color and leader lines. Our study methodology uses eye-tracking to capture participant eye fixations while they answer questions that require attention to highlighted observations in multiple views. Our results show that participants extract information as efficiently from coordinated view displays that use leader line highlighting to link information as they do from those that use a specific color to highlight items. We also found no significant differences when changing the color of the highlighting effect from red to black. We conclude that leader lines show significant potential for use as an alternative highlighting method in coordinated multiple view visualizations, allowing color to be reserved for representing thematic attributes of data.

Index Terms—Evaluation/methodology, graphical user interfaces, interaction styles, information visualization

1 INTRODUCTION

Visual interfaces designed to support interactive data exploration, analysis, and sensemaking commonly make use of coordinated multiple view (CMV) displays to present information. One hallmark of CMV visualizations is that visual techniques are used to highlight observations across the views. Typically, this transient visual effect is triggered by a mouseover or another user interaction, with the goal of making specific observations more visually salient so that users can quickly identify and evaluate those observations. Currently, most systems use a highlighting strategy that applies a dedicated, bright and saturated color to surround or cover observations in each view (Fig. 1). While a wide range of visual strategies aside from color are potentially usable as highlighting methods, there has so far been little attention in prior research toward identifying and comparing those approaches. Information visualization tools that use color-based highlighting include Prefuse [1], SpotFire [2], Jigsaw [3], and the InfoVis Toolkit [4]. Geographic visualizations have also adopted this approach, as seen in early work by Dykes [5], Andrienko and Andrienko [6], and in the GeoViz Toolkit [7].

A CMV visualization presents additional demands on the user's perceptual and cognitive systems when compared with reading a single visualization. Not only does the user need to engage those processes necessary to read each display individually, but also those needed to link displays with each other. These additional demands may be substantial for some tasks a user may undertake with coordinated views (e.g., integrating information across displays or making inferences that extrapolate beyond information present in the displays). Hence, for CMV displays, it is especially critical that visualization designers do not make choices that impose additional perceptual and cognitive loads on the user. As shown in Fig. 1, some contemporary systems blend together large numbers of views and apply colors to a wide range of thematic elements, making it difficult to rely on the use of color alone to clearly highlight elements across views.

One potential solution to this problem is to use an alternative visual strategy that does not rely on color to highlight observations across views. Recently, Robinson proposed a taxonomy of highlighting strategies that move beyond the simple use of color to leverage other types of visual variables for use in CMV geographic visualization tools [8]. While these and other highlighting methods are possible, there remains a need to identify whether or not such methods can support visualization usability and utility to the same degree as commonly used color highlighting. To begin addressing this question, in this paper we present the results of an experiment that makes use of eye-tracking to compare the performance of two highlighting strategies: colored outlines and leader lines (Fig. 2). Our results and experimental methodology provide an empirical foundation for comparing and evaluating the range of strategies for highlighting in CMV visualizations.

A. L. Griffin is with the School of Physical, Environmental, and Mathematical Sciences, UNSW, Canberra BC ACT 2610, Australia. E-mail: [email protected]. A. C. Robinson is with the GeoVISTA Center, Department of Geography, Pennsylvania State University, University Park, PA 16802, USA. E-mail: [email protected].

Manuscript received 22 Nov. 2013; revised 2 Oct. 2014; accepted 16 Oct. 2014. Date of publication 23 Nov. 2014; date of current version 30 Jan. 2015. Recommended for acceptance by G. Andrienko. Digital Object Identifier no. 10.1109/TVCG.2014.2371858



Fig. 1. Example of a complex visualization generated for the 2008 VAST Contest Evacuation Mini-Challenge. Use of color highlighting alone may not be enough to draw attention to linked observations in systems like this one.

In this evaluation we focused specifically on approaches for linking one observation across multiple views. We fully recognize, however, that highlighting frequently leads to selections that could include one-to-many and many-to-many relationships. Our intention here is to explore the simplest case first. The following sections describe relevant prior work, our experimental methodology, results from eye-tracking experiments that compare color and leader line highlighting, the conclusions we derive from those results, and ideas for future research to further characterize and compare highlighting strategies for CMV visualizations.

2 COORDINATED MULTIPLE VIEW VISUALIZATION

Coordinated multiple view visualization uses two or more views to present different data or different representations of the same data [9], [10], [11]. Such systems use interactive methods coupled with visual interfaces to support data exploration and analysis. CMV visualizations are intended to help users explore data to develop hypotheses, and then to immediately derive answers for those hypotheses [11]. CMV tools have also been the focus of significant attention in Geographic Visualization research, where approaches to support spatio-temporal analysis pose challenges for integrating maps with other view types to evaluate complex and dynamic processes [12], [13].

Early theoretical work by Baldonado et al. defined eight guidelines for the design and implementation of coordinated multiple view visualizations [9], with a focus on balancing the need for multiple views against the tendency for such systems to become complicated or to feature redundant information. Two of these design principles, Self-Evidence and Attention Management, are particularly relevant to our research. Self-Evidence refers to the need for multiple views to use perceptual cues to make relationships across views obvious to the user. Attention Management refers to the need for perceptual methods to focus user attention on parts of the display at the appropriate time. Visual highlighting strategies are one way that visualization designers can work to satisfy both of these design priorities.

Other work has focused on defining useful coordination behaviors for CMV environments.

Fig. 2. Examples of two highlighting strategies: static color highlighting, which is commonly used in visualization systems (top), and leader lines, which can be used to directly connect observations across multiple views (bottom).

North and Shneiderman [10] developed a taxonomy of coordination types that describes three primary view combinations: selecting items can lead to selecting other items, navigating views can lead to navigating other views, and selecting items can lead to navigating views. The displayed information may be the same for both views, or it may differ. User interaction triggers these coordination events, and visual strategies are typically used to highlight these changes. In this way, highlighting is also a key enabling feature of the visual information-seeking mantra of Overview First, Zoom and Filter, Details on Demand [14]. Take the example of the result of a database query (a coordination event trigger) reflected in observations highlighted on both a map and a scatterplot (a visual strategy applied): this is a case of a selection leading to the selection of other items. These basic coordination types have subsequently been augmented and extended by others [15], [16], [17].

One of the few examples of research on CMV visualizations that focuses specifically on visual strategies that can be employed for highlighting is work by Ware and Bobrow [18], [19] on the use of motion as an alternative to static highlighting. Their research compared multiple types of movement to static color-based highlighting to measure user performance on tasks that use node-link graphs. Their results show that these two methods are equally effective, and that when used together they are also quite effective. These conclusions prompt us to consider whether other types of static highlighting methods would perform in the same manner. Complementary research by Roberts and Wright [20] outlines methods for extending interactive brushing beyond visualization views and into their interfaces, a variation they call ubiquitous brushing.


One of the few examples of exploratory work to develop new approaches is found in recent work by Steinberger et al., who developed a method they call context-preserving visual links, which draws lines to connect observations in a way that minimizes occlusion [21].
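To make the "selecting items leads to selecting other items" pattern concrete, the sketch below shows one minimal way such coordination is often wired up: views register with a shared broker, and a brush event in any view is re-broadcast so every other view can apply its own highlighting effect. The class and view names are illustrative only and do not correspond to any of the systems cited above.

```python
from typing import Callable, List, Optional

class SelectionBroker:
    """Minimal illustration of linked brushing: one selection event, many views."""

    def __init__(self) -> None:
        self._listeners: List[Callable[[Optional[str], str], None]] = []

    def register(self, listener: Callable[[Optional[str], str], None]) -> None:
        # Each view supplies a callback that applies its own highlighting.
        self._listeners.append(listener)

    def brush(self, observation_id: Optional[str], source_view: str) -> None:
        # Re-broadcast the brushed observation to every registered view.
        for listener in self._listeners:
            listener(observation_id, source_view)

# Hypothetical views: a map and a scatterplot that each know how to highlight.
broker = SelectionBroker()
broker.register(lambda oid, src: print(f"map: highlight {oid} (from {src})"))
broker.register(lambda oid, src: print(f"scatterplot: highlight {oid} (from {src})"))

broker.brush("county-17", source_view="map")  # mouseover on the map
broker.brush(None, source_view="map")         # mouseout clears the highlight
```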

3 READING COORDINATED VIEWS

In earlier work [8], we proposed a wide range of possible candidates for visual highlighting in geovisualization, including color, depth of field [22], leader lines, transparency, contouring, color desaturation, and graphical style reduction. Additional possible highlighting methods include the twelve visual variables compiled by MacEachren [23]. Some of these (orientation and location, for example) may be difficult to imagine as useful highlighting approaches, but they nonetheless remain possible to implement. Highlighting strategies that direct attention in ways that make it easy for the reader to understand the coordination of views should lessen the cognitive burden of reading multiple views. Wolfe and Horowitz have produced the most comprehensive summary of evidence to date on the ability of visual strategies to capture visual attention [24], yet some of the potential highlighting strategies that we proposed are not included in this summary: leader lines, contouring, and style reduction.

Leader lines were implemented as a highlighting strategy as early as 2007 in Jigsaw to link text reports and other related entities [3]. Recently, several other groups have described line-based implementations of highlighting in CMV displays, although they may have given this strategy a different name (see, for example, [25], [26], [27], [28]). Leader lines have the relative advantages of being semantically easy for viewers to understand and being relatively easy to implement within visualization systems. However, there is, as yet, little direct evaluation of how well they capture visual attention and support the kinds of tasks for which CMV displays are commonly used. Thus, we chose to compare leader lines with the most commonly used current strategy for highlighting: color.

To begin comparing highlighting approaches we have chosen to start by exploring static visual effects, with the understanding that it also remains important to investigate dynamic visual effects in future work. In the following sections, we describe the basic experimental task and review the relevant perceptual and cognitive processes engaged when undertaking the task by performing a task analysis, including proposing a sequence of eye fixations we would expect our participants to employ when performing the task. Finally, we discuss the null hypotheses we designed our experiment to evaluate.

3.1 Experimental Task
In this work we focus on comparing two specific approaches: color and leader lines. We chose to test color because it is the most common method currently used to link observations in coordinated multiple view displays. The leader line approach (Fig. 2) draws lines out from the selected observation to its counterparts in other views. We take the leader line name from this technique's long history of use to connect observations in cartography [29]. Other approaches for drawing lines to link observations have used methods to automatically avoid occlusion of information within views [21]. We focus on direct links here because they provide the shortest path between observations and are therefore likely to yield the quickest response times from users.

One common (and relatively simple) task that highlighting supports in CMVs relates to understanding the characteristics of unusual or unexpected data points. For example, the user might notice an unusually high or low value in one view and then probe other views for further information on that observation. Our experimental task presents the user with an observation in the map and asks the user to find the linked (highlighted) observation in one of two types of graphs (a scatterplot or a parallel coordinate plot) and retrieve the observation's value in the linked view for a variable specified in each trial. This contrasts with the approach taken by Steinberger et al. [21], who asked participants to count the number of highlighted features and varied the number of connections that appeared in each trial. We chose our simple task because it requires users to make use of the connection once it has been found, which is more likely to be a relevant analysis task than counting the number of connected items. Furthermore, using eye fixation data we can also measure how long it takes users to find the connected features. We believe that if alternative methods of highlighting are ineffective for cognitively simple tasks in limited view combinations, it is unlikely that they will be effective for more complex displays.

3.2 Task Analysis
A number of researchers have presented theories of graph comprehension that could potentially explain what users do when they read a graph [30], [31], [32], [33], [34], [35]. The various theories are all based around three fundamental stages of information processing that are required for comprehension of the graph: processing of visual stimuli in the graph using early visual processes; delineating the conceptual relationships depicted in the graph; and matching the processed visual stimuli with the conceptual relationships to comprehend the graphed information. This last stage involves the application of what Pinker [34] termed the graph schema, a mental representation stored in long-term memory that is used to interpret the graph. Ratwani and Trafton [36] recently presented evidence in support of the view that graph schemas have invariant structures and are not influenced by the specific perceptual features they contain. In other words, it is the graphical framework (e.g., whether the graph uses a Cartesian or polar coordinate system) that determines which schema is activated rather than the actual graphical marks that represent observations in the graph (e.g., the specific form of the bars of a bar graph or the specific pie segments of a pie chart).

In our experiment, reading the scatterplot should activate a Cartesian graph schema. While a parallel coordinate plot (PCP) is not strictly a Cartesian graph, as each of its axes can be scaled individually, Peebles et al. have provided empirical evidence that novice users often invoke a Cartesian graph schema when reading a PCP, even if it is not the most appropriate schema [37]. Thus, any differences we observe should be due to perceptual processing of the marks on the page, and this is where issues such as symbol discriminability, salience, and organization (i.e., Gestalt laws) are most important.


We would therefore expect that highlighting methods that use symbols that are more discriminable and more salient within the representations in the display should then lead to more efficient reading of coordinated displays.

To complete our experimental task, the participant first reads the question to find the name of the variable for which s/he is to find the value in the graph and stores this information in short-term memory (Fig. 3). S/he then searches for the highlighted observation in the map and reads the label of the highlighted region, also storing it in short-term memory. Then the participant would search for the highlighted observation in the graph. After finding the observation, the participant uses the axis of the graph to estimate the observation's value and stores it in short-term memory. Finally, the participant verbalizes the name of the mapped region and the estimated value from the graph. The most efficient potential sequence of eye fixations would require three fixations: one on the highlighted map region, one on the highlighted observation in the graph, and one at the position of the graph axis needed to estimate the observation's value.

Fig. 3. The task workflow for our highlighting experiment.

3.3 Null Hypotheses
We set out to determine whether or not there are differences in the ability of color or leader line highlighting to support a reader's ability to extract information from coordinated multiple view displays. We tested two types of coordinated display combinations to explore whether this ability may depend on a specific combination of data representation methods.

Hypothesis 1: There is no difference between the ability of color and leader lines to support the efficient search for linked values in a coordinated display.

Hypothesis 2: There is no difference in the number of eye fixations used for either the color or leader line conditions to search for linked values in a coordinated display.

4 METHODOLOGY

Our approach combines traditional behavioral measures (task efficiency and accuracy) with qualitative and quantitative measurements of participants' eye movements while completing the experimental task.

Table 1. Groups of Experimental Stimuli

Group   Variable combinations
1       Map/SP & color highlighting
2       Map/SP & leader line highlighting
3       Map/PCP & color highlighting
4       Map/PCP & leader line highlighting

4.1 Experiment Design
We tested two independent variables: highlighting method (color, leader line) and representation type (map and scatterplot, map and parallel coordinate plot); and four dependent variables: task efficiency (seconds), task accuracy (mean estimation error), time to first fixation (seconds), and number of fixations before finding the highlighted observations (n). Participants viewed both highlighting methods within each representation type (within-subjects design).

4.1.1 Participants
Thirty-two students and staff at UNSW Canberra participated in the experiment (sixteen males and sixteen females). Participants worked or studied within a broad range of disciplines, including the humanities, social sciences, natural sciences, and business and management. Participants ranged in age from 18 to over 55, with most participants aged between 21 and 25. All participants had normal or corrected-to-normal vision and were wearing their glasses or contacts if needed.

4.1.2 Test Materials
Our experiment makes use of 32 static images that were generated to simulate map + scatterplot (SP) and map + parallel coordinate plot (PCP) coordinated view geovisualizations; there were four groups of eight images each (Table 1). Groups one and two consist of pairs of images (one from each group) whose only difference was the highlighting method used. Pairs of images in groups three and four were likewise identical except for the highlighting method used. We presented each representation type (SP or PCP) as a block, counterbalanced among the participant population so that half saw the SP condition first. Within each block, we used a balanced Latin square [38] rotation of the 16 stimuli to avoid order effects in the experiment.
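For readers unfamiliar with this counterbalancing scheme, the sketch below shows one standard way to construct a balanced Latin square for an even number of conditions; it illustrates the general technique cited from [38] rather than the exact rotation used in the study.

```python
def balanced_latin_square(n):
    """Return an n x n balanced Latin square of condition indices (n even).

    Each row is a presentation order; every condition appears once per row
    and once per column, and each ordered adjacency of two conditions occurs
    equally often across the full set of rows, which counters order effects.
    """
    if n % 2:
        raise ValueError("this construction assumes an even number of conditions")
    # Classic first row: 0, 1, n-1, 2, n-2, 3, n-3, ...
    first = [0]
    for k in range(1, n):
        first.append((k + 1) // 2 if k % 2 else n - k // 2)
    # Every subsequent row shifts the first row by one condition (mod n).
    return [[(c + r) % n for c in first] for r in range(n)]

orders = balanced_latin_square(16)  # 16 stimuli per block, as in the study
print(orders[0])                    # [0, 1, 15, 2, 14, 3, 13, ...]
```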


Fig. 4. Examples of experiment stimuli. Participants saw one image at a time at full screen on a 19-inch 1280 × 1024 display.

Each of the stimuli (Fig. 4) included a map based on fifty US counties from Georgia. We anticipated that our Australian participants would not be familiar with the region, but we also rotated the counties in 45-degree increments for each stimulus to avoid learning effects. These fifty counties are of roughly equal size, while retaining unique shape characteristics. To create the PCPs, for each map observation we randomly generated data points ranging from 0 to 100 for each of the five PCP variables. We generated data to produce the stimuli because we did not want participants to bring any semantic knowledge about patterns in the data to the task. In this experiment our aim is to understand the perceptual processes involved in reading coordinated multiple view visualizations rather than how the user constructs knowledge about a phenomenon, a task that comes later in the process of reading visualizations. The highlighted observations were randomly chosen for each of the 16 trial pairs.

Map colors were randomly assigned to each polygon in the map and its corresponding line in the PCP or point in the scatterplot using five quantile classes and a sequential blue color scheme from ColorBrewer [39]. The use of blue for the background data and red for the highlighting color reflects what is understood about chromostereopsis, where red tends to be perceived as higher in relative depth compared to blue, which appears lower to most users [40]. This difference in relative perceived depth promotes visual contrast between the color highlighting and the highlighted observation, and moves the highlighting up the visual hierarchy. Color highlighting was implemented as an outline of the observation rather than as a full fill to avoid obscuring the observation's attribute value [8]. PCP leader line highlighting was implemented as a set of lines in which each line connected the mapped observation to the point at which the PCP polyline crossed an axis. While there are several ways in which leader line highlighting could be implemented (e.g., one alternative is depicted in Fig. 2), we felt this implementation best supported the experimental task users were asked to complete.

The images were created at a resolution of 1280 × 1024 and were displayed on a 19-inch monitor with participants seated at a distance of 60 cm from the screen.
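As an illustration of the kind of stimulus construction described above (quantile-classed blue fills plus a red outline and a red leader line connecting the two views), the following matplotlib sketch draws a simplified map-and-scatterplot pair; the synthetic data and layout are ours and only approximate the study's stimuli.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import ConnectionPatch

rng = np.random.default_rng(42)
n = 50                                     # one observation per map region
values = rng.uniform(0, 100, size=(n, 2))  # randomly generated attribute values

# Five quantile classes, colored with a sequential blue scheme
# (matplotlib's "Blues", which is derived from the ColorBrewer palettes).
edges = np.quantile(values[:, 0], [0.2, 0.4, 0.6, 0.8])
classes = np.digitize(values[:, 0], edges)
colors = plt.cm.Blues(0.3 + 0.15 * classes)

fig, (ax_map, ax_sp) = plt.subplots(1, 2, figsize=(10, 4))

# Stand-in "map": one square marker per region, colored by its class.
centroids = rng.uniform(0, 10, size=(n, 2))
ax_map.scatter(centroids[:, 0], centroids[:, 1], c=colors, s=200, marker="s")
ax_map.set_title("Map (stand-in)")

ax_sp.scatter(values[:, 0], values[:, 1], c=colors)
ax_sp.set_xlabel("Variable A")
ax_sp.set_ylabel("Variable B")

# Highlight one observation: red outlines in both views plus a leader line.
i = 7
ax_map.scatter(*centroids[i], s=260, marker="s", facecolors="none",
               edgecolors="red", linewidths=2)
ax_sp.scatter(*values[i], s=120, facecolors="none",
              edgecolors="red", linewidths=2)
leader = ConnectionPatch(xyA=centroids[i], coordsA="data", axesA=ax_map,
                         xyB=values[i], coordsB="data", axesB=ax_sp,
                         color="red", linewidth=1.5)
fig.add_artist(leader)
plt.show()
```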

4.1.3 Experiment Apparatus
Participant eye fixations were captured by a Tobii X120 eye-tracker operating at a sample rate of 60 Hz. A full motion screen capture was automatically recorded, along with webcam video of participants, to support result analysis. All participants completed the experiment in the same quiet, windowless laboratory space at UNSW Canberra.


Fig. 6. Task efficiency results for both highlighting conditions.

Fig. 5. Examples of AOIs for both view combinations. AOI 1 represents the highlighted region in the map. AOI 2 captures the linked observation that participants were asked to find in the task. AOI 3 delineates a region on the axis that needs to be read to complete the task.

4.1.4 Task and Procedure
Our experimental task asked participants to name the highlighted region in the map and estimate its value for a specified variable in the coordinated statistical graph. Participants verbalized their answers for each task question. Their answers were recorded via a webcam integrated with our eye-tracker.

Participants began by filling out a short demographic questionnaire that included questions about the participant's age, gender, academic background, and vision correction. The eye-tracker was then calibrated and the experiment began. At the beginning of each block, participants completed four practice trials, two for each highlighting type. If the participant answered a practice trial incorrectly, the test proctor discussed the trial with the participant. These practice trials ensured that each participant understood how to interpret the stimuli correctly before the experimental trials began. After completing the first block of trials, participants had a chance to rest their eyes for as long as they wanted before beginning the second block of trials. Participants were instructed to read each task's instructions (presented on a separate screen before each stimulus), press the spacebar to view the stimulus, and press it once again after reporting an answer.

4.1.5 Coding Eye Fixations
We first applied the Tobii fixation filter, using a velocity threshold of 35 pixels and a distance threshold of 35 pixels, to extract fixations from the raw eye-movement data. To then analyze the pattern of eye fixations, we defined areas of interest (AOIs) in the display that corresponded to the areas to which participants needed to attend in order to successfully complete the task.
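The sketch below illustrates the general idea behind a dispersion-style fixation filter (consecutive gaze samples that stay within a spatial threshold of a running centroid are merged into one fixation); it is not Tobii's proprietary implementation, and the minimum-duration value is our own assumption.

```python
import numpy as np

def group_fixations(samples, dist_threshold_px=35.0, min_duration_s=0.06):
    """Group raw gaze samples into fixations with a simple dispersion rule.

    samples: ndarray of shape (n, 3) with columns (t_seconds, x_px, y_px).
    Returns a list of (start_time, duration, centroid_x, centroid_y) tuples.
    """
    fixations = []
    current = [samples[0]]

    def flush(group):
        duration = group[-1][0] - group[0][0]
        if duration >= min_duration_s:         # assumed minimum fixation length
            cx, cy = np.mean([p[1:] for p in group], axis=0)
            fixations.append((group[0][0], duration, cx, cy))

    for s in samples[1:]:
        centroid = np.mean([p[1:] for p in current], axis=0)
        if np.linalg.norm(s[1:] - centroid) <= dist_threshold_px:
            current.append(s)                  # still within the spatial threshold
        else:
            flush(current)                     # fixation ended; start a new group
            current = [s]
    flush(current)
    return fixations
```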

The first AOI was a circular region that roughly covered the highlighted region in the map and was centered on the region's label (Fig. 5). The second and third AOIs were circular regions that were somewhat larger than the scatterplot points, in order to account for the precision of the eye-tracker. To determine the size of AOI 2 and AOI 3, we referred to Holmqvist et al., who recommend that the minimum AOI size should be calculated from a 1 to 1.5 degree visual angle subtended on the screen [41]. Using this guidance, at 60 cm from the display, each AOI is a circle of 1 centimeter in diameter, corresponding to about 1 percent of the screen area. AOI 2 was centered on the highlighted point in the scatterplot or on the point where the parallel coordinate plot crossed the axis of interest. AOI 3 was centered on the portion of the relevant axis that was required to estimate the highlighted observation's value.
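The AOI sizing follows directly from the geometry of visual angle; a small worked example is given below (the physical screen width used for the pixel conversion is an assumption about a typical 19-inch 5:4 panel).

```python
import math

def aoi_diameter_cm(visual_angle_deg=1.0, viewing_distance_cm=60.0):
    """On-screen diameter that subtends the given visual angle at the given distance."""
    return 2 * viewing_distance_cm * math.tan(math.radians(visual_angle_deg) / 2)

diameter_cm = aoi_diameter_cm(1.0, 60.0)   # ~1.05 cm, i.e. roughly 1 cm
# Converting to pixels needs the physical screen width; ~37.6 cm is assumed
# for a 19-inch 5:4 panel showing 1280 x 1024 pixels.
px_per_cm = 1280 / 37.6
print(round(diameter_cm, 2), round(diameter_cm * px_per_cm, 1))  # 1.05 cm, ~35.7 px
```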

5 RESULTS

We statistically analyzed both the time it took participants to complete the task and the eye-tracking fixations using two-way, repeated measures ANOVA, with highlighting method and representation type as within-subjects independent variables and gender and age as between-subjects independent variables. Our dependent variables were mean response time, mean estimation error, mean time to first fixation and mean number of fixations before the participant looked in the AOI. To correct for the number of dependent variables, we used the Bonferroni correction and adopted an alpha level of 0.01.
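The within-subjects part of such an analysis can be sketched with statsmodels' AnovaRM as below; the file and column names are hypothetical, and the between-subjects factors (gender, age) would require a mixed-model approach that AnovaRM does not provide.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format trial data: one row per participant x trial, with the
# two within-subjects factors and one dependent measure.
df = pd.read_csv("highlighting_trials.csv")

anova = AnovaRM(
    data=df,
    depvar="response_time",
    subject="participant",
    within=["highlighting", "representation"],
    aggregate_func="mean",   # collapse repeated trials per cell before fitting
).fit()
print(anova)

# With four dependent measures, a Bonferroni-style criterion is 0.05 / 4 = 0.0125;
# the paper adopts the slightly stricter alpha of 0.01.
alpha = 0.01
```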

5.1 Task Efficiency and Accuracy
For task efficiency, the within-subjects simple main effect of highlighting (color vs. leader line) was not significant, F(1, 31) = 2.32, p = .137, meaning that there was no difference in the amount of time it took to answer between the color and leader line conditions (Fig. 6). The within-subjects simple main effect of representation type was significant, F(1, 31) = 43.46, p < .001, with the PCP condition taking longer than the SP condition (5.91 seconds versus 5.06 seconds). The interaction highlighting × representation type, however, was not significant, F(1, 31) = .647, p = .427.


Fig. 7. Task accuracy results for both highlighting conditions.

Fig. 8. Scanpath showing erroneous fixation on the top of the C axis (following a longer leader line), rather than at the correct intercept toward the bottom of the C axis.

We also examined the accuracy of participants' estimates of the values of the highlighted observations, as the ability of a highlighting method to support efficient search is not useful if it also hinders a participant's ability to complete the task accurately. The within-subjects simple main effect of highlighting was small but significant, F(1, 31) = 7.44, p = .01 (Fig. 7). Participants had slightly smaller mean estimation errors in the color condition than they did in the leader line condition (0.69 versus 0.88 units). The within-subjects simple main effect of representation type was not significant, F(1, 31) = .052, p = .82. The interaction highlighting × representation type was also not significant, F(1, 31) = 3.12, p = .09.

One potential critique of the particular leader line implementation that we tested is that it used red lines and thereby introduced an additional variable to the leader line condition: color. In particular, because a saturated red is a high-salience color, it may have artificially increased the salience of the leader line condition and improved participant performance in that condition. To investigate whether there is evidence to support this claim, we undertook a small follow-up study in which we re-colored the leader lines to black (a lower salience option, and one that removed color as a redundant visual variable), making no other changes to the stimuli. We tested a further eight participants (who had not participated in our original experiment) using the same experimental procedures.

While in the main experiment the simple main effect of representation type on efficiency was significant, a mixed repeated measures ANOVA comparing the main experiment participants' performance on the leader line condition with that of the follow-up experiment participants produced a within-subjects simple main effect of representation that was just short of significance, F(1, 38) = 6.24, p = .016, meaning that times were not significantly different between the SP and PCP conditions (5.08 versus 5.89 seconds). The between-subjects main effect of color (red versus black) was also nonsignificant, F(1, 38) = .89, p = .35, as was the interaction representation × color, F(1, 38) = 3.28, p = .35. We therefore conclude that there are no significant differences between red and black leader lines in terms of participant efficiency in completing the experimental task.

When we examined the accuracy of individual participants' estimates of the values of highlighted observations for red and black leader lines in the PCP condition (main and follow-up experiments, respectively), it was clear that there were two participants in the follow-up experiment (black lines) who had very different task accuracies than the other participants. We examined the data for individual trials and found several instances (6/64 trials; 9 percent) in which these participants clearly misread the PCP, leading to very large estimation errors for these trials and to substantial increases in the magnitudes of errors. In these instances, while the participants were reading the correct axis, they were not reading the leader line that stopped at that axis, focusing instead on the line crossing the highest value on that axis. This can be seen by viewing the participant's scanpath for that trial (Fig. 8). While this error also happened occasionally in the main experiment, it was at a much lower frequency (2/256 trials, 0.7 percent). This may have been an artifact of the overall design of the stimulus, as the black axes in the PCP were of a similar line width to the leader lines used in the PCP. Because we wanted to modify only the leader line color in the follow-up study, we did not alter the axis line width. This thicker line width is likely to have been less of a problem when the leader lines were red, as there was more visual contrast between the leader lines and the axes.

While the design of the black leader lines could be improved, these misreadings do not prevent us entirely from drawing conclusions about the potential accuracy of participants' ability to effectively read black leader lines. If we exclude the trials with verifiable misreadings, we find that there are no significant differences between the accuracy of participants' performance using red or black leader lines. A mixed repeated measures ANOVA on task accuracies for the leader line condition that excluded the misread trials produced a within-subjects simple main effect of representation that was not significant, F(1, 38) = 1.58, p = .22, meaning that accuracies were not significantly different between the SP and PCP conditions (1.01 versus .85 units). The between-subjects main effect of color (red versus black) was also nonsignificant, F(1, 38) = .446, p = .51 (0.88 versus .98 units), as was the interaction representation × color, F(1, 38) = 0.43, p = .84. We therefore conclude that while the specific PCP implementation we used did make it more likely that a participant would misread the graph, if the PCP were designed to maximize graphical contrast between the leader lines and axes, it is likely that there would be no difference between red and black leader lines in terms of participant accuracy on the experimental task.


Fig. 10. Mean time to first fixation results for AOI 2.

Fig. 9. Scanpaths for red and black variations of the PCP test showing similar overall patterns.

In addition to producing similar results in terms of raw performance, we evaluated the scanpath data to search for commonalities in how participants surveyed each type of stimulus. Analysis of the scanpaths for fixations on both color types shows that the general pattern remains consistent, with clear foci around the target AOIs (Fig. 9).

5.2 Time to First Fixation
Successfully completing the experimental task required participants to find the highlighted observation in the coordinated statistical graph (Fig. 3). We broke down the experimental task into two sub-tasks for analysis: finding the two highlighted observations (map and coordinated statistical graph), which requires fixating on both AOI 1 and AOI 2; and reading the axis to find the value of the highlighted observation, which requires finding AOI 3. Time to first fixation for the first sub-task was calculated from the raw fixation data as the time between the beginning of the trial and the discovery of both AOI 1 and AOI 2, minus the duration of any fixations on the AOIs themselves. That is, if a participant fixated on AOI 1 twice before fixating on AOI 2, the durations of the two fixations into AOI 1 were subtracted from the total time. Removing the time spent fixating on the AOIs themselves isolates the time required for visual search from the time spent examining each target.

The within-subjects simple main effect of representation type was significant, F(1, 31) = 38.28, p < .001. Participants found AOI 2 significantly more quickly in the SP condition than in the PCP condition (1.21 versus 1.60 seconds; Fig. 10). The within-subjects simple main effect of highlighting was not significant for AOI 2, F(1, 31) = 0.93, p = .342. The interaction highlighting × representation type was also non-significant for AOI 2, F(1, 31) = 5.45, p = .026.

Once the participant has found the highlighted observation, completion of the task also requires interpolating along the appropriate axis to estimate the value associated with the observation. We defined this area as AOI 3 and measured the time to first fixation within this area as the length of time between the end of the first fixation into AOI 2 and the beginning of the first fixation into AOI 3, minus the duration of any additional fixations into AOI 2. The within-subjects simple main effect of representation type was significant for the time to first fixation for AOI 3, F(1, 31) = 11.52, p = .002 (0.48 versus 0.80 seconds for the SP and PCP conditions, respectively). The within-subjects simple main effect of highlighting was not significant for the time to first fixation for AOI 3, F(1, 31) = 0.97, p = .76, with participants requiring similar amounts of time to find the appropriate place on the axis to estimate the highlighted observation's value in the color (0.65 seconds) and leader line (0.62 seconds) conditions (Fig. 11). The interaction highlighting × representation type was non-significant, F(1, 31) = 2.61, p = .078.
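A minimal sketch of how the search-time measure described above can be computed from a trial's fixation sequence is shown below (the data structure and field names are ours, not the authors').

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Fixation:
    start: float        # seconds since trial onset
    duration: float     # seconds
    aoi: Optional[str]  # e.g. "AOI1", "AOI2", "AOI3", or None if outside all AOIs

def search_time(fixations: Iterable[Fixation],
                targets=("AOI1", "AOI2")) -> Optional[float]:
    """Time until both target AOIs have been found, excluding time spent
    fixating inside the targets themselves."""
    found = set()
    time_inside_targets = 0.0
    for fix in fixations:
        if fix.aoi in targets:
            found.add(fix.aoi)
            if found == set(targets):
                # Search ends at the start of the fixation that completes the pair.
                return fix.start - time_inside_targets
            # Earlier fixations on an already-found target do not count as search.
            time_inside_targets += fix.duration
        # Fixations elsewhere are visual search and remain in the total.
    return None  # both targets were never found in this trial
```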

5.3 Fixations Before
The measure 'fixations before' is an additional measure of viewing efficiency, one that might reasonably be expected to positively correlate with time to first fixation. To explore this behavior, we counted the number of times each participant fixated on the display from the start of the trial until s/he found both AOI 1 and AOI 2, and the number of fixations after their first fixation into AOI 2 but before their first fixation into AOI 3. Both measures of 'fixations before' excluded fixations in the AOIs themselves (as in the time to first fixation calculations).

Fig. 11. Mean time to first fixation results for AOI 3.


Fig. 12. Mean number of fixations before first fixation in AOI 2.

The within-subjects simple main effect of representation type was significant, F(1, 31) = 43.14, p < .01, with participants using more fixations to find AOI 2 in the PCP than in the SP representation (5.97 versus 4.52 fixations, respectively). The within-subjects simple main effect of highlighting was not significant for AOI 2, F(1, 31) = 1.76, p = .19, with participants using 5.43 fixations in the color condition before finding AOI 2 versus 5.06 fixations in the leader line condition. The interaction effect highlighting × representation type was significant, F(1, 31) = 9.48, p = .004 (Fig. 12). While the leader line condition required fewer fixations than the color condition before finding AOI 2 in the SP representation (4.01 versus 5.03, respectively), participants used slightly more fixations in the leader line condition than in the color condition for the PCP representation (6.10 versus 5.84 fixations).

Neither the within-subjects main effect of representation nor that of highlighting was significant for AOI 3 (representation: F(1, 31) = 2.36, p = .13; highlighting: F(1, 31) = .15, p = .70; Fig. 13). The interaction effect highlighting × representation type was also not significant, F(1, 31) = 1.51, p = .23.

5.4 Spatial Patterns in Scanpaths
While statistical measures of eye-movement patterns are useful for quantitatively describing differences (or the lack thereof) in how readers view visualizations, they do not provide a complete picture of how readers view coordinated displays.

Fig. 13. Number of fixations before the first fixation in AOI 3.

Fig. 14. Fixation patterns for the leader line condition (top) showed less tendency to search for AOI 2, compared to the color condition (bottom).

When we examined the scanpaths employed by participants, we found that when they viewed displays that used leader line highlighting, their scanpaths were more focused and directed towards task-relevant parts of the display (Fig. 14) than those employed when color highlighting was used. This can be seen in the tight grouping of scanpaths along the leader line that leads to the information of interest in the parallel coordinate plot.

5.5 Gender and Age Effects
We further examined our data to determine whether participant gender or age helped to explain our results. With respect to participant age, we found no significant main effects. We also did not find any significant main effects with respect to gender that met our p-value threshold of .01.

6 DISCUSSION

Our results show that leader lines are as effective as color for supporting the efficient search for information across multiple representations that are coordinated using visual highlighting. We found no significant difference in the amount of time needed to complete the experimental task between the two highlighting methods (Fig. 6). While we did find a significant difference between the highlighting methods for mean estimation error, this difference was very small in magnitude (0.19 units on a scale ranging from 0 to 50 units). We also found no significant difference between the two highlighting methods with regard to the time it took participants to find the highlighted observation (Fig. 10) or the axis location needed to estimate the observation's value (Fig. 11). Participants had a faster mean time to first fixation for AOI 3 in the leader line condition for the parallel coordinate plot representation, but not for the scatterplot condition, though this interaction effect was slightly short of significance.


This may be related to the level of visual clutter inherent in the two different representations. Parallel coordinate plots are more visually complex than scatterplots, and it is likely to be more difficult to find the relevant axis among all of the lines of the parallel coordinate plot than in the scatterplot. This finding may imply that substituting leader lines for color will provide the greatest advantage in displays that are information dense and that incorporate a large number of different representation types, like those that are commonly built in information visualization systems such as Improvise [42] or Jigsaw [3].

However, there is also a need to balance the ability to link displays with the ability to read displays individually. Our current implementation of leader lines did lead to serious misreadings of the graph for some individuals. An alternative implementation of leader lines that employs curved lines approaching the relevant point on the axis from the top or the bottom of the axis may reduce the potential for this misreading, while providing the further benefit of using a line with a noticeably different character (curved versus straight) than the lines within the PCPs themselves.

Additional support for the finding that leader lines are less helpful in simple visual displays such as scatterplots can be found in our mean estimation errors. Participants had slightly larger (but statistically significant) estimation errors in the leader line condition when red lines were used (Fig. 7); when black lines were used the difference was not significant. This may be due to the design of the leader line for this representation type; because of its visual salience, the leader line may function as a third axis when participants are viewing the scatterplots and may interfere with their ability to estimate values. Future research could compare the current design to one that uses a curved leader line (one that does not intersect the y-axis, such as those designed in [21]) to see if that design improves estimation accuracy for this representation type.

7 CONCLUSION

The results of our study reveal that leader line highlighting is in most cases no worse than color highlighting for simple tasks that require the use of coordinated multiple view visualizations. Our work also demonstrates that leader line highlighting is not significantly more difficult to use than color. We have also found that performance with both methods does not vary when one considers age and gender, and follow-up testing revealed that the use of a less salient color (black) has minimal impact on performance if the representation is designed so that the leader lines occupy the highest level in the display's visual hierarchy.

While our results provide empirical evidence that leader lines may be a useful alternative to color highlighting, there remains a wide range of alternative highlighting methods (depth of field, transparency, color saturation, etc.) that should also be evaluated. We also know it will be important to move beyond static stimuli to compare methods in interactive settings, where it will be significantly harder to measure performance, but where the experimental conditions will more closely match real-world visual analysis scenarios.


Recent work by Fabrikant et al. [43] and Çöltekin et al. [44] has outlined methodologies we wish to explore further, which use sequence analysis methods to categorize and evaluate gaze patterns that result from the use of interactive visualizations. A further option for moving beyond the measures we used is to adapt approaches that use trajectory analysis and temporal clustering techniques [45]. We would also like to explore the use of alternative approaches for drawing lines across views that minimize occlusion, such as those shown by Steinberger et al. [21].

In addition to analyzing sequences of fixations captured in interactive coordinated multiple view visualization tasks, we believe that a fruitful direction for future work is to focus on evaluating the utility of different highlighting methods in more complex visualizations. We have only initiated progress here, with a focus on the simplest highlighting tasks done with the smallest number of coordinated views. Specifically, we would like to identify whether or not there is a threshold of visual clutter in displays at which information visualization designers should consider switching highlighting methods from color alone to include other approaches like leader lines, blurring, or other visual effects. As with so many other aspects of visualization interface and representation design, there may be multiple good options depending on the context and task focus for a given visualization.

ACKNOWLEDGMENTS

The authors would like to thank Sara Fabrikant and Scott Bell for their advice and encouragement.

REFERENCES

[1] J. Heer, S. K. Card, and J. A. Landay, "Prefuse: A toolkit for interactive information visualization," presented at the ACM SIGCHI Conf. Human Factors Comput. Syst., Portland, OR, USA, 2005.
[2] C. Ahlberg, "Spotfire DecisionSite," 8.2.1 ed. Somerville, MA, USA, 2006.
[3] J. Stasko, C. Gorg, L. Zhicheng, and K. Singhal, "Jigsaw: Supporting investigative analysis through interactive visualization," in Proc. IEEE Symp. Visual Anal. Sci. Technol., Sacramento, CA, USA, 2007, pp. 131–138.
[4] J. D. Fekete, "The InfoVis Toolkit," in Proc. IEEE Symp. Inform. Vis., Austin, TX, USA, 2004, pp. 167–174.
[5] J. Dykes, "Exploring spatial data representation with dynamic graphics," Comput. Geosciences, vol. 23, pp. 345–370, 1997.
[6] G. Andrienko and N. Andrienko, "Interactive maps for visual data exploration," Int. J. Geographical Inform. Sci., vol. 13, pp. 355–374, 1999.
[7] F. Hardisty and A. C. Robinson, "The GeoViz Toolkit: Using component-oriented coordination methods to aid geovisualization application construction," Int. J. Geographical Inform. Sci., vol. 25, pp. 191–210, 2011.
[8] A. C. Robinson, "Highlighting in geovisualization," Cartography Geographic Inform. Sci., vol. 38, pp. 373–383, 2011.
[9] M. W. Baldonado, A. Woodruff, and A. Kuchinsky, "Guidelines for using multiple views in information visualization," in Proc. 5th Int. Working Conf. Adv. Visual Interfaces, Palermo, Italy, 2000, pp. 110–119.
[10] C. North and B. Shneiderman, "A taxonomy of multiple window coordination," Univ. Maryland, Dept. Comput. Sci., College Park, MD, USA, 1997.
[11] J. C. Roberts, "State of the art: Coordinated & multiple views in exploratory visualization," in Proc. Fifth Int. Conf. Coordinated Multiple Views Exploratory Vis., Zurich, Switzerland, 2007, pp. 61–71.
[12] G. Andrienko and N. Andrienko, "Coordinated multiple views: A critical view," presented at the Coordinated Multiple Views Exploratory Visualization, Zurich, Switzerland, 2007.


[13] G. Andrienko, N. Andrienko, U. Demsar, D. Dransch, J. Dykes, S. I. Fabrikant, M. Jern, M. J. Kraak, H. Schumann, and C. Tominski, "Space, time and visual analytics," Int. J. Geographical Inform. Sci., vol. 24, pp. 1577–1600, 2010.
[14] B. Shneiderman, "The eyes have it: A task by data type taxonomy of information visualizations," in Proc. IEEE Visual Languages, Boulder, CO, USA, 1996, pp. 336–343.
[15] N. Boukhelifa, J. C. Roberts, and P. J. Rodgers, "A coordination model for exploratory multiview visualization," in Proc. First Int. Conf. Coordinated Multiple Views Exploratory Vis., London, England, 2003, pp. 76–85.
[16] C. North and B. Shneiderman, "Snap-together visualization: A user interface for coordinating visualizations via relational schema," in Proc. ACM Working Conf. Adv. Visual Interfaces, Palermo, Italy, 2000, pp. 128–135.
[17] T. Pattinson and M. Phillips, "View coordination architecture for information visualization," in Proc. Asia-Pacific Symp. Inform. Vis., Sydney, Australia, 2001, pp. 165–169.
[18] C. Ware and R. Bobrow, "Motion to support rapid interactive queries on node-link diagrams," ACM Trans. Appl. Perception, vol. 1, pp. 1–15, 2004.
[19] C. Ware and R. Bobrow, "Supporting visual queries on medium sized node-link diagrams," Inform. Vis., vol. 4, pp. 49–58, 2005.
[20] J. C. Roberts and M. A. E. Wright, "Towards ubiquitous brushing for information visualization," in Proc. IEEE Int. Conf. Inform. Vis., London, England, 2006, pp. 151–156.
[21] M. Steinberger, M. Waldner, M. Streit, A. Lex, and D. Schmalstieg, "Context-preserving visual links," IEEE Trans. Vis. Comput. Graphics, vol. 17, no. 12, pp. 2249–2258, Dec. 2011.
[22] R. Kosara, S. Miksch, and H. Hauser, "Focus+context taken literally," IEEE Comput. Graphics Appl., Special Issue Inform. Vis., vol. 22, no. 1, pp. 22–29, Jan./Feb. 2002.
[23] A. M. MacEachren, How Maps Work: Representation, Visualization and Design. New York, NY, USA: Guilford Press, 1995.
[24] J. M. Wolfe and T. S. Horowitz, "What attributes guide the deployment of visual attention and how do they do it?," Nature Rev. Neuroscience, vol. 5, pp. 495–501, 2004.
[25] T. Geymayer, M. Steinberger, A. Lex, M. Streit, and D. Schmalstieg, "Show me the invisible: Visualizing hidden content," presented at the SIGCHI Conf. Human Factors Comput. Syst., Toronto, Canada, 2014.
[26] J.-F. Im, M. J. McGuffin, and R. Leung, "GPLOM: The generalized plot matrix for visualizing multidimensional multivariate data," IEEE Trans. Vis. Comput. Graphics, vol. 19, no. 12, pp. 2606–2614, Dec. 2013.
[27] A. Lex, C. Partl, D. Kalkofen, M. Streit, S. Gratzl, A. M. Wassermann, D. Schmalstieg, and H. Pfister, "Entourage: Visualizing relationships between biological pathways using contextual subsets," IEEE Trans. Vis. Comput. Graphics, vol. 19, no. 12, pp. 2536–2545, Dec. 2013.
[28] C. Viau and M. J. McGuffin, "ConnectedCharts: Explicit visualization of relationships between data graphics," Comput. Graphics Forum, vol. 31, pp. 1285–1294, 2012.
[29] T. Slocum, R. B. McMaster, F. C. Kessler, and H. H. Howard, Thematic Cartography and Geographic Visualization, 2nd ed. Upper Saddle River, NJ, USA: Pearson Education, 2005.
[30] S. M. Kosslyn, "Understanding charts and graphs," Appl. Cognitive Psychology, vol. 3, pp. 185–226, 1989.
[31] S. Lewandowsky and J. T. Behrens, "Statistical graphs and maps," in Handbook of Applied Cognition, F. T. Durso, R. S. Nickerson, S. T. Schvaneveldt, S. T. Dumais, D. S. Lindsay, and M. T. H. Chi, Eds. Chichester, United Kingdom: Wiley, 1999.
[32] G. L. Lohse, "A cognitive model for understanding graphical perception," Human-Computer Interaction, vol. 8, pp. 353–388, 1993.
[33] D. Peebles and P. C.-H. Cheng, "Modeling the effect of task and graphical representation on response latency in a graph reading task," Human Factors, vol. 45, pp. 28–46, 2003.
[34] S. Pinker, "A theory of graph comprehension," in Artificial Intelligence and the Future of Testing, R. Freedle, Ed. Hillsdale, NJ, USA: Lawrence Erlbaum Associates, 1990, pp. 73–126.
[35] D. Simkin and R. Hastie, "An information-processing analysis of graph perception," J. Amer. Statistical Assoc., vol. 82, pp. 454–465, 1987.
[36] R. M. Ratwani and J. G. Trafton, "Shedding light on the graph schema: Perceptual features versus invariant structure," Psychonomic Bulletin Rev., vol. 15, pp. 757–762, 2008.


[37] D. Peebles, D. Ramduny-Ellis, G. Ellis, and J. V. H. Bonner, "The influence of graph schemas on the interpretation of unfamiliar diagrams," presented at the 27th British Comput. Soc. Human Comput. Interaction Conf.: The Internet of Things, Brunel Univ., London, United Kingdom, 2013.
[38] J. Denes and A. D. Keedwell, Latin Squares and Their Applications. New York, NY, USA: Academic Press, 1974.
[39] M. Harrower and C. A. Brewer, "ColorBrewer.org: An online tool for selecting color schemes for maps," Cartographic J., vol. 40, pp. 27–37, 2003.
[40] J. Faubert, "Seeing depth in colour: More than just what meets the eyes," Vision Res., vol. 34, pp. 1165–1186, 1994.
[41] K. Holmqvist, M. Nyström, R. Andersson, R. Dewhurst, H. Jarodzka, and J. van de Weijer, Eye Tracking: A Comprehensive Guide to Methods and Measures. Oxford, United Kingdom: Oxford University Press, 2011.
[42] C. Weaver, "Building highly-coordinated visualizations in Improvise," in Proc. IEEE Symp. Inform. Vis., Austin, TX, USA, 2004, pp. 159–166.
[43] S. I. Fabrikant, S. Rebich-Hespanha, N. Andrienko, G. Andrienko, and D. R. Montello, "Novel method to measure inference affordance in static small-multiple map displays representing dynamic processes," Cartographic J., vol. 45, pp. 201–215, 2008.
[44] A. Çöltekin, B. Heil, S. Garlandini, and S. I. Fabrikant, "Evaluating the effectiveness of interactive map interface designs: A case study integrating usability metrics with eye-movement analysis," Cartography Geographic Inform. Sci., vol. 36, pp. 5–17, 2009.
[45] G. Andrienko, N. Andrienko, M. Burch, and D. Weiskopf, "Visual analytics methodology for eye movement studies," IEEE Trans. Vis. Comput. Graphics, vol. 18, no. 12, pp. 2889–2898, Dec. 2012.

Amy L. Griffin received BA degrees in geography, chemistry, and art in 1997, an MS degree in geography in 2000, and a PhD degree in geography in 2004. She is currently a Senior Lecturer in the School of Physical, Environmental, and Mathematical Sciences at UNSW Canberra, Canberra, Australia. She serves as co-Chair of the Commission on Cognitive Visualization for the International Cartographic Association. Her research focuses on understanding the cognitive and perceptual processes employed in reading geographic visualizations.

Anthony C. Robinson received a BS degree in geography in 2002, an MS degree in geography in 2005, and a PhD degree in geography in 2008. He is currently an Assistant Professor and Assistant Director for the GeoVISTA Center in the Department of Geography, Penn State University, University Park, PA. He also directs Penn State's Online Geospatial Education programs. His research focuses on human-centered interface design and evaluation for geographic visualization tools.
