
DOI: 10.1111/cgf.13079

Computer Graphics Forum, Volume 00 (2017), number 0, pp. 1–25

Visualization of Eye Tracking Data: A Taxonomy and Survey

T. Blascheck, K. Kurzhals, M. Raschke, M. Burch, D. Weiskopf and T. Ertl

University of Stuttgart, Germany {tanja.blascheck, kuno.kurzhals, michael.raschke, michael.burch, daniel.weiskopf, thomas.ertl}@vis.uni-stuttgart.de

Abstract

This survey provides an introduction to eye tracking visualization with an overview of existing techniques. Eye tracking is important for evaluating user behaviour. Analysing eye tracking data is typically done quantitatively, applying statistical methods. However, in recent years, researchers have been increasingly using qualitative and exploratory analysis methods based on visualization techniques. For this state-of-the-art report, we investigated about 110 research papers presenting visualization techniques for eye tracking data. We classified these visualization techniques and identified two main categories: point-based methods and methods based on areas of interest. Additionally, we conducted an expert review asking leading eye tracking experts how they apply visualization techniques in their analysis of eye tracking data. Based on the experts' feedback, we identified challenges that have to be tackled in the future so that visualizations will become even more widely applied in eye tracking research.

Keywords: eye tracking, visualization, survey

ACM CCS: [Human-centred computing]: Visualization—Visualization design and evaluation methods

1. Introduction


Eye tracking has become a widely used method to analyse user behaviour in marketing, neuroscience, human–computer interaction [Duc02], visualization research [TCSC13], reading, scene perception and visual search [Ray98]. Apart from measuring completion times and accuracy rates during the performance of visual tasks in classical controlled user experiments, eye-tracking-based evaluations provide additional information on how visual attention is distributed and changes for a presented stimulus. Eye tracking devices record gaze points of a participant as raw data. Afterward, these gaze points can be aggregated into fixations and saccades, the two major categories of eye movement. In dynamic stimuli, smooth pursuit is an additional data type used for analysis. Additionally, areas of interest (AOIs) can be defined to measure the distribution of attention between specific regions on a stimulus. Due to the wide field of applications for eye tracking and various kinds of research questions, different approaches have been developed to analyse eye tracking data, such as statistical evaluation, either descriptive or inferential [HNA*11], string-editing algorithms [PS00, DDJ*10], visualization-related methods and visual analytics techniques [AABW12].

Regardless of whether statistical techniques or visual methods are used for eye tracking data analysis, a large amount of data generated during eye tracking experiments has to be handled. For example, a user experiment with 30 participants, three types of tasks and 30 stimuli for each task leads to 2700 scanpaths in total. Each scanpath typically consists of a large number of fixations and each fixation aggregates gaze points. The number of these gaze points depends on the recording rate of the eye tracking device. In this example, if we assume a recording rate of 60 Hz, an average fixation duration of 250 ms, and that each stimulus is viewed for 2 min, about 20 000 000 gaze points and 1 300 000 fixations have to be stored, formatted and analysed to support or reject one or more hypotheses. Besides analysing eye tracking data with respect to quantitative metrics such as fixation count, fixation duration, fixation distribution, saccadic amplitude and pupil size, semantic information about which areas on a stimulus were focused on provides additional information to understand viewing strategies of participants.
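The magnitude of these numbers follows directly from the stated parameters; the following back-of-the-envelope computation (plain Python, using only the values from the example above) reproduces them.

```python
# Rough data volume for the example experiment described above.
participants = 30
tasks = 3
stimuli_per_task = 30
recording_rate_hz = 60          # gaze points recorded per second
viewing_time_s = 120            # each stimulus is viewed for 2 min
avg_fixation_duration_s = 0.25  # average fixation duration of 250 ms

scanpaths = participants * tasks * stimuli_per_task                # 2700 scanpaths
gaze_points = scanpaths * viewing_time_s * recording_rate_hz       # 19,440,000 (about 20 million)
fixations = scanpaths * viewing_time_s / avg_fixation_duration_s   # 1,296,000 (about 1.3 million)

print(f"{scanpaths} scanpaths, {gaze_points:,} gaze points, {fixations:,.0f} fixations")
```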

While statistical analysis provides quantitative results, visualization techniques allow researchers to consider other aspects of recorded eye tracking data in an exploratory and qualitative way. Visualization techniques help understand spatiotemporal aspects of eye tracking data and complex relationships within the data.


Figure 1: Histogram of all publications of this survey listed in Table 1, introducing eye tracking data visualization techniques. The number of published journal articles, conference papers and books has strongly increased during the last decade.

Such an analysis aims at finding hypotheses that can be investigated with statistical methods later on. Due to an increasing complexity of tasks and stimuli in eye tracking experiments, we believe that visualization will play an increasingly important role in future eye tracking analyses. Hence, the contribution of this survey has several facets: First, based on a taxonomy of eye tracking terms, a classification system for stimuli, gaze data and visualization techniques is formulated. Second, we assign papers from the literature to these classes. Third, based on these results and an expert review we conducted, we identify challenges for the future development of eye tracking visualization techniques.

Evaluation has become an important part of visualization research, and an increasing number of evaluations use eye tracking. One way of analysing the resulting vast amount of eye tracking data is to use visualization techniques. The number of published journal articles, conference papers and books about eye tracking visualization techniques has strongly increased during the last decade. Figure 1 shows a histogram of all relevant publications with visualization techniques for eye tracking data that are included in this survey. Especially after 1995, the number of publications on eye tracking visualization techniques has overall increased. However, a comprehensive survey that structures and discusses these visualization techniques has been missing so far. Eye tracking in general has a longer history than the visualization techniques covered in this survey. We refer the interested reader to other papers to learn more about the history of eye tracking and eye tracking equipment [Yar67, YS75, JK03, PB06, HNA*11].

This paper is an extended version of our EuroVis 2014 state-of-the-art report [BKR*14], adding an extended and updated body of surveyed literature. We also revised our taxonomy to include gaze data as another category of eye tracking data. The main part of this paper is a presentation of the different visualization techniques with respect to our classification system. Furthermore, we conducted an expert review asking about eye tracking visualization techniques and challenges for future work.

2. Systematic Literature Research

Since eye tracking has a wide range of applications, the development of eye tracking visualization techniques has become an interdisciplinary challenge that requires a comprehensive literature research across several disciplines. To satisfy this requirement, we reviewed the main visualization and visual analytics conferences (IEEE VIS, EuroVis, EuroVA), the ACM Digital Library and the IEEE Xplore Digital Library, as well as the top conference on eye tracking research (Eye Tracking Research & Applications), and the main journals of eye tracking research (Journal of Eye Movement Research), usability research (Journal of Human Factors), psychology [Behavior Research Methods (formerly Behavior Research Methods & Instrumentation and Behavior Research Methods, Instruments, & Computers)] and cognitive science (Cognitive Science Journal).

We searched the conference proceedings, journals and libraries using a keyword search. Our main keywords were 'eye tracking' and 'visualization'. Here, the main challenge was that many papers did not explicitly develop visualizations for eye tracking data. Rather, papers used or created visualizations for the eye tracking data in the absence of methods explaining certain phenomena. We also looked at eye tracking studies in general in the mentioned data sources.

Based on the papers found, we systematically reviewed each paper, assigning it to different categories. Our main categories were stimulus, participant, visualization, data, eye tracking vis, application, type and chart. These categories were further subdivided, for example, by classifying a stimulus as 2D or 3D; see Sections 3.2–3.4 for detailed descriptions of the three different categories used. If possible, each paper was assigned a tag of each category. In order to classify novel techniques, we used the category type to distinguish papers introducing a novel technique or extension of an existing technique from papers that only use an existing method or classify eye tracking data and visualizations in general.

This literature research resulted in about 110 papers that deal with eye tracking data visualization. The papers tagged with the type introduction are shown in Table 1. As a primary category to differentiate the papers, we chose the data type a visualization technique represents. This resulted in three main classes: point-based, AOI-based and visualization techniques using both. Furthermore, we used the categories visualization and stimulus to further differentiate the techniques. Our additional subcategories were based on data type, visualization type, number of participants visualized, dimensionality of the visualization, context and interactivity. The stimulus category was subdivided using information about stimulus type, content and dimensionality of the stimulus. A detailed description of this classification and each subcategory is given in the next section. For this systematic and detailed literature research, we used the literature system SurVis [BKW16] to find, analyse and group papers.

3. Taxonomy

Before we present our taxonomy classifying visualization techniques, we first define the basic terminology related to eye tracking data (Section 3.1). The taxonomy is subdivided into three categories: categories related to stimuli of eye tracking experiments (Section 3.2), categories related to the gaze data (Section 3.3) and categories related to visualization techniques (Section 3.4). Other taxonomy papers target areas different from ours, for example, taxonomies restricted to the stimulus alone [SNDC09], the dimensionality of visualizations [Spa08], the eye tracking data [RTSB04] or the tasks for analysing eye tracking data [KBB*17]. Our taxonomy includes more categories to obtain a fine-grained categorization that focuses on visualization techniques for eye tracking data and not on analysis tasks.

Table 1: List of all references that introduced a new visualization technique, improved an existing one or adapted an existing one. The references are classified into gaze data, visualization-related and stimulus-related categories described in Sections 3.2–3.4. (Gaze data: point-based, AOI-based or both; temporal, spatial or spatiotemporal; single or multiple participants; 2D or 3D. Visualization: static or animated; 2D or 3D; in-context or not in-context; interactive or non-interactive. Stimulus: static or dynamic; passive or active content; 2D or 3D.)


Figure 2: Gaze points are spatially and temporally aggregated into fixations. Saccades connect individual fixations. Successive fixations within an AOI form a dwell. An AOI is a region of specific interest on a stimulus. A saccade from one AOI to another is called a transition. The complete sequence of fixations and saccades is called a scanpath.

3.1. Terminology

Eye tracking devices record gaze points indicating where a participant is looking on a stimulus. The recording rates depend on the characteristics of the devices. State-of-the-art devices allow rates between 60 and 500 Hz or more. The recording rate specifies how many gaze points are recorded per second. In most cases, the raw gaze data are further processed. Different data types, which are shown in Figure 2, are distinguished and detailed in the following. For each data type, different metrics can be used during an analysis. The most important metrics for each data type are explained briefly. A comprehensive collection of eye tracking metrics can be found in the book by Holmqvist et al. [HNA*11].

Fixation. A fixation is an aggregation of gaze points. Gaze points are aggregated based on a specified area and time span. This time span is usually between 200 and 300 ms [HNA*11]. Different algorithms exist for calculating fixations [SG00]. Common metrics for fixations are the fixation count (number of fixations), the fixation duration (in milliseconds) and the fixation position, given as x-, y- and, if required, z-coordinates.
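To illustrate how such an aggregation might be computed, the following sketch implements a simplified dispersion-based fixation filter in the spirit of the algorithms surveyed by Salvucci and Goldberg [SG00]. The input format (time-sorted gaze samples as (timestamp in ms, x, y) tuples) and the threshold values are assumptions for illustration, not a prescription of any particular tool.

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration=100.0):
    """Simplified dispersion-based (I-DT style) fixation detection.

    samples: time-sorted list of gaze points (t_ms, x, y).
    max_dispersion: maximal (max - min) extent in x plus y, in pixels.
    min_duration: minimal fixation duration in milliseconds.
    Returns fixations as (t_start, t_end, centroid_x, centroid_y) tuples.
    """
    fixations = []
    start = 0
    while start < len(samples):
        end = start
        window = samples[start:end + 1]
        # Grow the window as long as the spatial dispersion stays small enough.
        while end + 1 < len(samples):
            candidate = samples[start:end + 2]
            xs = [x for _, x, _ in candidate]
            ys = [y for _, _, y in candidate]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            end += 1
            window = candidate
        duration = window[-1][0] - window[0][0]
        if duration >= min_duration:
            cx = sum(x for _, x, _ in window) / len(window)
            cy = sum(y for _, _, y in window) / len(window)
            fixations.append((window[0][0], window[-1][0], cx, cy))
            start = end + 1   # continue after the detected fixation
        else:
            start += 1        # window too short; advance by one sample
    return fixations
```

In practice, the fixation filters shipped with eye tracking software would typically be used; the sketch only shows the aggregation principle.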

Saccade. A saccade describes rapid eye movements from one fixation to another. Saccades typically last about 30–80 ms. During this time span, visual information is suppressed [HNA*11]. Typical metrics are saccadic amplitude (distance of a saccade), saccadic duration (in milliseconds) and saccadic velocity (in degrees per second).

Smooth pursuit. During the presentation of dynamic stimuli, smooth pursuits can occur. They happen unintentionally and only if participants follow a movement in the stimulus. The velocity of the eye during smooth pursuits is about 10–30 degrees per second [HNA*11].

Scanpath. A sequence of alternating fixations and saccades is called a scanpath. It can provide information about the search behaviour of a participant. An ideal scanpath would be a straight line to a specified target [CHC10]. Deviance from this ideal scanpath can be interpreted as poor search [GK99]. Scanpath metrics include the convex hull (i.e. which area a scanpath covers), scanpath length (in pixels) and scanpath duration (in milliseconds).

Stimulus. A stimulus can be any visual content presented to participants during an eye tracking experiment. We differentiate static and dynamic stimuli, with either active or passive content. Usually, 2D stimuli are presented to participants, but 3D stimuli have recently become a focal point of research as well [Pfe12].

Area of interest. An AOI or region of interest (ROI) is a part of a stimulus that is of special importance, typically marking an object partially or completely. For dynamic stimuli, AOIs also have to be dynamic. AOIs can either be specified beforehand or after an eye tracking experiment. Usually, AOIs are created based on semantic information of the stimulus. A transition is a saccadic movement from one AOI to another, and a dwell is a temporal aggregation of fixations within an AOI. Typical metrics for AOIs are the transition count (number of transitions between AOIs), the dwell time within an AOI (in milliseconds) and the AOI hit, which defines whether a fixation is within an AOI or not.
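The AOI-based metrics named above can be computed directly once fixations and AOIs are available. The sketch below assumes fixations given as (t_start, t_end, x, y) tuples and AOIs as named, axis-aligned rectangles; both the data layout and the helper names are hypothetical.

```python
def aoi_hit(fixation, aois):
    """Return the name of the AOI containing the fixation position, or None."""
    _, _, x, y = fixation
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def dwell_times_and_transitions(fixations, aois):
    """Total dwell time per AOI (ms) and transition counts between AOIs."""
    dwell = {name: 0.0 for name in aois}
    transitions = {}
    previous = None
    for fixation in fixations:
        t_start, t_end, _, _ = fixation
        hit = aoi_hit(fixation, aois)
        if hit is None:
            continue
        dwell[hit] += t_end - t_start
        if previous is not None and previous != hit:
            transitions[(previous, hit)] = transitions.get((previous, hit), 0) + 1
        previous = hit
    return dwell, transitions

# Hypothetical example: two rectangular AOIs and three fixations.
aois = {"A": (0, 0, 100, 100), "B": (200, 0, 300, 100)}
fixations = [(0, 250, 50, 50), (280, 500, 250, 40), (530, 900, 60, 70)]
print(dwell_times_and_transitions(fixations, aois))
```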


3.2. Stimulus-related categories

The first part of our taxonomy is based on a categorization of stimuli. The type of stimulus can have a great influence on the choice and design of an eye tracking visualization technique. Stimuli can be divided into static or dynamic, representing 2D or 3D content, with active or passive content. The viewing task would be another type for classification, but is not included in our taxonomy.

Static versus dynamic stimuli. Static stimuli can, for example, be text or pictures where the visible content does not change. Dynamic stimuli can be videos, interactive applications or real-world scenarios. Some visualizations are overlaid on top of the stimulus and can be used with static and dynamic stimuli alike. Other visualization techniques depend on the static nature of a stimulus and cannot be applied to dynamically changing content.

2D versus 3D stimuli. 2D stimuli are common and can, for example, be static or dynamic 2D visualizations, videos or Web pages. Stimuli representing 3D models or objects have become increasingly popular in recent years. Data of 3D stimuli are usually collected using head-mounted eye trackers in real- or virtual-world scenarios [PR14]. 3D stimuli can be represented as stereoscopic images on 3D screens.

Passive versus active stimulus content. The participant's mode of interaction is an important factor for data visualization and how a graphical user interface for a visualization technique is designed. Participants can watch presented stimuli passively without interfering actions. A stimulus can be either static or dynamic (i.e. a presentation of pictures or videos). A synchronization between recordings of different participants is possible with minor effort, allowing one to compare multiple participants and search for similarities or differences in their scanpaths. Participants can also actively influence a stimulus by their actions; in this case, a stimulus becomes dynamic. For example, eye tracking of interactive applications provides individual recordings that result from the active involvement of a participant in an experiment. Comparing these individually recorded data sets is non-trivial because the synchronization of the data is difficult [Duc07, BJK*16b].

Viewing task. Although the task has a significant influence on eye movements of participants [AHR98], we decided not to use the task as a categorization factor because many visualization techniques do not depend explicitly on a given task for a successful interpretation of the data.

3.3. Gaze data-related categories

The next set of categories is related to the gaze data collected during eye tracking experiments. First, we distinguish between point-based and AOI-based data. Furthermore, data can be classified as temporal, spatial or spatiotemporal. Data can also be categorized by whether they are 2D or 3D and by how many participants' data are represented.

Point-based versus AOI-based. Eye tracking data can be evaluated in a point-based or AOI-based fashion [AABW12]. Point-based evaluation focuses on overall eye movements and their spatial or temporal distribution. A semantic annotation of data is not required. Depending on the stimulus, a point-based evaluation may not be sufficient for specific analysis tasks (e.g. comparison of asynchronous eye tracking data).


Here, an annotation of identified AOIs on a stimulus can be used to apply AOI-based metrics. AOI-based metrics provide additional information about the transitions and relations between AOIs. We use this classification as the first level in our taxonomy to distinguish between different visualization techniques.

Temporal, spatial and spatiotemporal visualizations. We classify gaze data as either temporal, spatial or spatiotemporal. The temporal dimension focuses on time and is usually visualized with a timeline as one axis. The spatial dimension of the data focuses on x-, y- and possibly z-coordinates of gaze data. For AOI-based visualization techniques, the spatial domain contains AOIs and their relation to each other. The third data type is a combination of both, called spatiotemporal. Here, temporal as well as spatial aspects of the data are considered jointly.

2D versus 3D data. Depending on the dimensionality of the gaze data, different aspects are important. 2D data are the most common type of data collected, typically being an x- and y-coordinate of the eye movements. For 3D data, the third dimension, being depth collected as a z-coordinate, is important. A challenge with 3D data is the mapping of fixations onto the correct geometrical model.

Single participant versus multiple participants. Another factor for eye tracking analysis is the number of participants represented with a visualization technique. In contrast to techniques that show only a single participant, visualizing data of multiple participants at the same time can help identify common viewing strategies. However, these representations might suffer from visual clutter if too much data are displayed at the same time [RLMJ05]. Here, strategies such as averaging or bundling of lines might be used to achieve better results [HFG06, HEF*14].

3.4. Visualization-related categories

There are several taxonomies for visualizations in general. Some taxonomies focus on data dimension or type [Chi00, TM04], interaction techniques [Shn96, YKSJ07], visualization tasks [BM13] or specific visualization types [LPP*06, CM97]. However, these taxonomies are either too general or too restricted for eye tracking visualizations. First, we shortly introduce statistical graphics. Then, we distinguish between animated and static, 2D and 3D, in-context and not in-context, as well as interactive and non-interactive visualizations. We also discuss the recent approach of visual analytics as one means of further evaluating eye tracking data.

Statistical graphics. The most commonly used visualization techniques for eye tracking data are statistical diagrams such as bar charts, line charts, box plots or scatter plots. Statistical graphics are relevant for descriptive statistics, and therefore, important and widely used for eye tracking data. However, visualization techniques of statistical graphics are typically generic methods that were not specifically designed for eye tracking. Therefore, we discuss them separately from specific visualization techniques in Section 4.


Static versus animated visualizations. Static visualizations usually employ a time-to-space mapping of the data to the visualization. For dynamic stimuli, creating a static visualization often requires predefined AOIs because a spatial linking to dynamically changing content is hard to achieve. Animated visualizations use a time-to-time mapping by sequentially presenting certain points in time from the data. This allows in-context visualizations with an overlay of the visualization on top of the stimulus, keeping stimulus data and the visualization in the same domain. However, this requires complex layout algorithms that have to follow aesthetic drawing criteria [Pur97] for each static image in a sequence as well as for animation [BBD09] to preserve the viewer's mental map.

2D versus 3D visualizations. 2D visualizations represent data in a 2D space, for example, one spatial dimension and the temporal dimension, or both spatial dimensions at the same time. 3D visualizations represent three dimensions: both spatial dimensions and the temporal dimension, or all three spatial dimensions for a 3D stimulus. 3D visualizations can be helpful as in the case of space-time cubes (STCs) [LCK10, KW13]. In an STC, the two spatial dimensions of a 2D stimulus as well as the temporal dimension are visualized at the same time. However, the third dimension has to be handled with care because of perceptual issues related to 3D visualizations, such as occlusion, making length judgments or reading text [Mun08]. In contrast, visualizing 3D data in a 2D domain removes one dimension and leads to data loss; however, it can make an analysis easier. Example operations for reducing one dimension are given in the state-of-the-art report by Bach et al. [BDA*14].

In-context versus not in-context visualizations. In-context visualizations link stimulus and visualization with each other, such as overlays over a stimulus or AOIs with thumbnail images. Visualization methods that do not include the stimulus in a visualization are not in-context visualizations. This is often the case for AOI-based visualization techniques. Not in-context visualizations have the problem that the spatial layout of AOIs is lost. However, if the relation between AOIs is more important than the spatial layout, losing this information is an acceptable trade-off.

Interactive versus non-interactive visualizations. Non-interactive visualizations usually represent data with a fixed set of parameters. The analyst has limited options to influence these parameters, usually because they have been pre-defined to create a static image (e.g. an attention map). In contrast, interactive visualizations allow the analyst to explore the data beyond what is represented at the beginning. For example, the analyst can navigate through time, zoom and filter data and obtain information about the data for specific parameter settings that can be adapted individually.

Visual analytics. When visualization techniques are not able to handle large eye tracking data, the emerging discipline of visual analytics can be an option for exploratory data analysis [TC05]. Machine-based analysis techniques such as methods from data mining or knowledge discovery in databases are combined with interactive visualizations and the perceptual abilities of a human viewer. A number of visual analytics systems were developed for the analysis of eye tracking data. Andrienko et al. [AABW12] discuss which existing visual analytics approaches can be applied to eye tracking data and how; the authors focus on how techniques from geographic information science can be adapted for analysing spatiotemporal eye tracking data.

3.5. Classification structure

Based on the above categories, we coarsely divide the papers of eye tracking visualizations into two main subsets that differentiate whether a visualization technique is for point-based or AOI-based analysis. On the second level, we further segment the visualization techniques according to temporal, spatial and spatiotemporal aspects of the visualizations.

Table 1 provides an overview of all eye tracking papers that introduced a new visualization technique, improved an existing one or adapted an existing one for its application to eye tracking data. The table classifies the visualization techniques based on the two levels mentioned above. The upper part contains point-based visualization techniques described in Section 5 and the lower part of the table shows AOI-based visualization techniques described in Section 6. The middle part of the table contains papers with both point-based and AOI-based techniques. These papers are described in both sections, focusing on the respective aspect. The individual sections on point-based and AOI-based visualization techniques are further subdivided into temporal, spatial and spatiotemporal visualization techniques. The first column of the table shows this subdivision.

In the case of point-based visualization techniques (Section 5), the temporal approaches are timeline visualizations, spatial approaches are attention maps and spatiotemporal approaches are subdivided into scanpath and STC visualizations, as they represent two different concepts. Section 6 discusses AOI-based techniques: first temporal approaches in the form of timeline visualizations, then 3D visualizations, and last spatial approaches in the form of relational visualization techniques. Furthermore, the table displays the other categorizing factors described in the taxonomy: gaze-based categories, visualization categories and stimulus categories.

4. Statistical Graphics

Before we describe the eye tracking visualization techniques based on our classification, this section summarizes statistical graphics that are commonly used to visualize eye tracking data, but that were not especially developed for it. Figure 3 shows examples of a bar chart (Figure 3a) and a line chart (Figure 3b), both representing the same data set: two participants and their corresponding fixation counts within AOIs.

One of the first papers in eye tracking research uses line charts to study eye movement characteristics of children in a TV viewing scenario [GWG*64]. The authors present summarized time durations of different types of eye movements. Atkins and Zheng [AJTZ12] quantify the difference between doing and watching a manual task. To this end, they present results of recorded fixations in two line charts showing values for the x- and y-locations of fixations. Based on this visualization, the authors then discuss saccadic properties. Smith and Mital [SM13] use line charts to present values of mean fixation durations, mean saccadic amplitudes and other metrics over time. They also use line charts to separately show these metrics for dynamic and static stimuli.


Figure 3: Examples of statistical graphics depicting identical eye tracking data of two participants showing the fixation count per AOI: (a) bar chart and (b) line chart.

Bar charts are mainly employed to display the histogram of an eye tracking metric. Convertino et al. [CCY*03] plot the percentage of fixation durations for four different combinations of visualization techniques onto a bar chart. Thereby, the authors compare the usability of parallel coordinates [Ins85] with other types of visualizations. To evaluate different methods of image fusion, Dixon et al. [DLN*06] present eye location accuracy with bar charts.

Scatter plots are commonly used to plot data in a 2D Cartesian diagram. This type of diagram shows relations between two values of a sample. Just and Carpenter [JC76] plot the relation between response latency and angular disparity. Berg et al. [BBM*09] compare human vision and monkey vision. The authors present scatter plots of amplitude and velocity measurements of saccadic movements for both species. Additionally, they compare visual saliency between humans and monkeys and show results also using a scatter plot. Anderson et al. [ABL*13] represent the recurrence of fixations in a scatter plot. The authors plot the fixation numbers onto the x- and y-coordinates and mark if two fixations occur in the same spatial radius. Recurrence analysis indicates which parts of a scanpath occur repeatedly, highlighting which areas of a stimulus were looked at multiple times.

Box plots are a popular visualization technique to present statistical distributions. Hornof and Halverson [HH02] analyse absolute, horizontal and vertical deviation of fixations for participants from their experiment to monitor the deterioration of the calibration of the eye tracker. Dorr et al. [DMGB10] investigate how similar eye movement patterns of different subjects are when viewing dynamic natural scenes. To compare eye movements, they employ normalized scanpath saliency scores and show a box plot of the computed scores for different movies.

Star plots are another type of statistical graphics. Goldberg and Helfman [GH10c] use them to understand angular properties of scanpaths, and Nakayama and Hayashi [NH10] apply them to study angular properties of fixation positions.
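Such charts require no eye-tracking-specific tooling. A minimal matplotlib sketch of a grouped bar chart in the spirit of Figure 3(a), with made-up fixation counts, could look as follows.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical fixation counts per AOI for two participants (cf. Figure 3a).
aois = ["AOI 1", "AOI 2", "AOI 3", "AOI 4"]
participant_1 = [12, 7, 15, 4]
participant_2 = [9, 11, 6, 8]

x = np.arange(len(aois))
width = 0.35
fig, ax = plt.subplots()
ax.bar(x - width / 2, participant_1, width, label="Participant 1")
ax.bar(x + width / 2, participant_2, width, label="Participant 2")
ax.set_xticks(x)
ax.set_xticklabels(aois)
ax.set_ylabel("Fixation count")
ax.legend()
plt.show()
```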

5. Point-Based Visualization Techniques

This section comprises all visualization techniques that use spatial and temporal information of recorded data points (e.g. x- and y-coordinates of fixations along with temporal information) for visualization. Therefore, a definition of AOIs is not required. These visualization techniques can be used to analyse temporal changes in the position of data points, distribution of attention, scanpaths or spatiotemporal structures of eye tracking data. As eye tracking data are not continuous (due to saccades), the spatiotemporal characteristic of such data differs from that in other typical applications such as in geospatial data visualization.

5.1. Timeline visualizations


Timelines are an approach to visualize temporal data. A point-based timeline visualization represents time on one axis of a coordinate system and eye tracking data on the other axis. Such plots are usually represented in 2D space. The coordinates for x, y or z of fixations can either be displayed together [KFN14] or the fixation position can be split into its coordinates and each one can be represented individually, as shown in Figure 4. If the fixation position is split and the x-coordinate is shown, time is depicted on the y-axis. To show the y- or z-coordinate of a fixation's position, time is represented on the x-axis [GH10c]. This can be done for static or dynamic stimuli with active or passive content and for one or multiple participants [GDS10], representing fixation data independent of a stimulus. Such timeline visualizations reduce crossings and overlaps of saccade lines. Furthermore, this visualization technique allows a visual scanpath comparison to measure similarity of aggregated scanpaths [GDS10]. Additionally, a general overview of scanning tendencies can be seen such as downward and upward scanning, or horizontal shifts [GH10c]. However, a separation into coordinates makes it difficult to perceive the combined behaviour in the two or three spatial dimensions.
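As a concrete illustration of this kind of point-based timeline (cf. Figure 4), the following matplotlib sketch plots the vertical fixation position of each participant against time; the fixation sequences are made up, and the data layout is an assumption.

```python
import matplotlib.pyplot as plt

# Hypothetical fixation sequences: participant -> list of (time in s, vertical position in px).
scanpaths = {
    "P1": [(0.0, 540), (0.4, 300), (0.9, 320), (1.5, 700)],
    "P2": [(0.0, 100), (0.6, 450), (1.1, 480), (1.8, 200)],
}

for participant, fixations in scanpaths.items():
    times = [t for t, _ in fixations]
    positions = [y for _, y in fixations]
    plt.plot(times, positions, marker="o", label=participant)  # one colour per scanpath

plt.xlabel("Time (s)")
plt.ylabel("Vertical fixation position (px)")
plt.legend()
plt.show()
```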


Gaze stripes [KHH*16] combine point-based information with the stimulus context. The position of gaze points is employed to cut out thumbnails from the stimulus and stack them on separate timelines for each participant. This technique allows a comparison of multiple participants without defining AOIs and partially preserves the context of a static or dynamic stimulus (cf. Figure 5).


Figure 4: Temporal evolution of scanpaths of multiple participants. Time is represented on the x-axis, and vertical fixation position on the y-axis. Colour represents each individual scanpath. Figure based on Grindinger et al. [GDS10].

5.2. Attention maps

For spatial visualization techniques, marking fixation positions as an overlay on a stimulus is one of the simplest visualization techniques for recorded eye movements and was applied to dynamic stimuli as early as 1958 by Mackworth and Mackworth [MM58]. This combined visualization of fixation data from different participants and a stimulus is denoted as a bee swarm [Tob08]. It is usually presented as an animation to show temporal changes of fixations, either by representing each participant with an individual colour [Tob08] or by using an animated cursor following the eye tracking data of one participant [GSGM10]. An aggregation of fixations over time and/or participants is known as an attention map, fixation map or heat map and can be found in numerous publications as summarizing illustrations of gaze data. An attention map is typically a static and in-context representation. The main purpose of attention maps is to obtain an overview of eye movements and identify regions on a stimulus that attracted much attention; the latter is often used to determine AOIs. There are several papers describing how to create attention maps (e.g. [SM07, Bli10]). Bojko [Boj09] introduces different types of attention maps depending on the data used, for example, fixation count, absolute fixation duration, relative fixation duration or participant percentage attention maps. Each type has its benefits depending on the data needed for an evaluation. In her paper, guidelines for using attention maps are described to avoid misuse and misinterpretation. A review of attention maps is given by Holmqvist et al. [HNA*11], who further recommend that they should be used with care.

Classical attention maps are visualized as luminance maps [VH96], 3D landscapes [Lat88, Woo02, HRO11], 2D topographic maps with contour lines [GWP07, DMGB10] or with a colour coding [Boj09, DPMO12]. To emphasize attended regions, alternative visualization techniques use filtering approaches to reduce sharpness and colour saturation [DJB10] in unattended regions, show only fixations that were recurrent or deterministic [ABL*13], show only first fixations [NH08] or represent the reading speed of participants [BDEB12]. For dynamic stimuli (e.g. videos), not only the spatial, but also the temporal component of data is visualized by applying dynamically changing attention maps, for example, to identify attentional synchrony between multiple participants [MSHH11]. Attention maps for dynamic stimuli bear the problem that an aggregation of the data cannot be represented statically due to the changing stimulus. Figure 6(a) shows a traditional attention map of a video with a car, driving from the right to the left of the screen, employing the last frame of the sequence as stimulus context. Participants were watching the car while it was moving, leading to a distribution of attention along the trajectory of the car. In the attention map, it seems that the attention was on two persons in the background, leading to misinterpretations. This problem is overcome by motion-compensated attention maps [KW13], which use optical flow [BSL*11, FBK15] information between consecutive frames to adjust fixation data based on the moving object that was watched. The resulting attention map shows the highest values on the moving object that was actually watched (cf. Figure 6b).
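A basic static attention map can be obtained by accumulating fixation durations on a pixel grid and smoothing the result with a Gaussian kernel. The sketch below shows one common way to do this; the grid resolution and kernel width are free parameters, and the sketch is not the specific method of any of the cited papers.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def attention_map(fixations, width, height, sigma=40):
    """Accumulate fixation durations on a pixel grid and blur with a Gaussian.

    fixations: iterable of (x, y, duration_ms); width/height: stimulus size in px.
    """
    grid = np.zeros((height, width), dtype=float)
    for x, y, duration in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < width and 0 <= yi < height:
            grid[yi, xi] += duration
    return gaussian_filter(grid, sigma=sigma)

# Hypothetical fixations on an 800 x 600 stimulus.
fixations = [(400, 300, 250), (420, 310, 400), (100, 500, 180)]
heat = attention_map(fixations, width=800, height=600)
# 'heat' can then be colour-coded and overlaid on the stimulus, e.g. with matplotlib's imshow.
```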

Figure 5: Gaze stripes represent time on the x-axis and participants on the y-axis. For each participant, thumbnail series are displayed, representing the temporal changes of gaze point positions in the context of the stimulus. Figure based on Kurzhals et al. [KHH*16].


Figure 6: A traditional, static attention map for a video scene (a) can lead to misinterpretations when objects move. The motion-compensated attention map (b) shows that most attention actually was on the moving car [KW13]. Figure reprinted with kind permission from IEEE.

Figure 8: In a typical scanpath visualization, each fixation is indicated by a circle. Saccades between fixations are represented by connecting lines between these circles.

When looking at 3D stimuli, the third dimension has to be included into an attention map. One possibility is to use a 2D representation of a 3D stimulus and show the attention map on this 2D projection [JN00, SND10a, SYKS14] (cf. Figure 7a). However, this leads to data loss due to the removal of one dimension. Another possibility when multiple objects are shown in a 3D scene is to colour-code the complete object in the attention map, as shown in Figure 7(b) [SND10a]. A typical representation shows the attention map on the 3D model itself [DMC*02, Pfe12, CL13, PSF*13a, PSF*13b], as shown in Figure 7(c). Creating attention maps for 3D stimuli requires additional information about object positions in a stimulus or feature detection. Direct volume rendering techniques can be used for rendering 3D attention maps [BCR12].

5.3. Scanpath visualizations

A spatiotemporal visualization using scanpaths connects consecutive fixations through saccade lines on a stimulus. Noton and Stark [NS71a, NS71b] used the word 'scanpath' to describe a fixation path of one subject when viewing a specific pattern. Today, a scanpath describes any sequence of saccades and fixations visualized as a static overlay on a stimulus [HNA*11]. In a typical scanpath visualization, a circle represents each fixation, where the radius corresponds to the fixation duration, and connecting lines represent saccades between fixations [SPK86], see Figure 8. Changing the colour of fixation circles can represent additional information such as the speed of a participant's gaze [Lan00]. A simple scanpath only shows recorded saccades without drawing circles for each fixation [Yar67].

Many different approaches exist to show the position of fixations and their temporal information in a scanpath layout. However, only scanpath visualizations like the one shown in Figure 8 are widely used today.
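Drawing such a standard scanpath is straightforward once fixations are available. The following matplotlib sketch scales the marker area by fixation duration and connects consecutive fixations with lines; the fixation data and the scaling are made up for illustration.

```python
import matplotlib.pyplot as plt

# Hypothetical fixations in temporal order: (x, y, duration in ms).
fixations = [(120, 80, 250), (400, 90, 600), (380, 300, 300), (150, 320, 450)]

xs = [x for x, _, _ in fixations]
ys = [y for _, y, _ in fixations]
sizes = [d for _, _, d in fixations]  # marker area proportional to fixation duration

plt.plot(xs, ys, color="grey", zorder=1)            # saccade lines
plt.scatter(xs, ys, s=sizes, alpha=0.6, zorder=2)   # fixation circles
for i, (x, y, _) in enumerate(fixations, start=1):
    plt.annotate(str(i), (x, y))                    # fixation order
plt.gca().invert_yaxis()  # image coordinates: origin in the top-left corner
plt.show()
```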

Figure 7: Attention maps for 3D stimuli: (a) projected attention map, (b) object-based attention map and (c) surface-based attention map [SND10b]. Figure reprinted with kind permission from ACM.



Figure 9: Different types of scaled traces showing a scanpath without fixation durations: (a) the normal scaled trace, (b) the line type changing from light to dark, (c) multi-colours used for each saccade and (d) line thickness changing from thin to thick. Figure based on Goldberg and Helfman [GH10c].

It is clear that this kind of visualization quickly produces visual clutter [RLMJ05] if several scanpaths are shown in one visualization to study eye movement behaviour. On the one hand, if scanpath shapes are different, lines and circles are plotted all across the visualization and it is difficult to find patterns. On the other hand, if scanpath shapes are similar, lines and circles may overlap. Many approaches have been presented to overcome the problem of visual clutter. One approach averages or bundles scanpaths [HFG06, CAS13, HEF*14]. Here, a crucial question is to find an adequate averaging or bundling criterion. This question of scanpath similarity has not been fully answered yet [DDJ*10]. Another solution is to reduce the ink used in scanpath visualizations and to show a condensed version of a scanpath [GH10c] (cf. Figure 9). The advantage of this visualization technique is that less visual clutter is produced, since circles for representing fixations are missing. However, a drawback of this graphical representation is that it is still difficult to visually identify common scanpath patterns by comparing line directions. Another solution is to break down the spatial information of fixations into their two dimensions [CPS06]. For example, vertical and horizontal components of saccades are shown separately on four sides of an attention map visualization [BSRW14] (cf. Figure 10). One advantage of this visualization technique is that the horizontal and vertical characteristics can be studied independently from each other. However, it demands more mental effort to combine the two separately presented views into one mental image. Another possibility is to show only a part of the scanpath at a time, for example, the subsequent 5 s of a selected timestamp or time frame [WFE*12].

Another challenge of scanpath visualization is how to show 3D fixation information. Basically, there are two approaches. The first one, shown in Figure 11, is to visualize scanpaths in the 3D domain of the stimulus [DMC*02, SND10b, TKS*10, Pfe12, PSF*13a]. The other one is to warp the stimulus into a 2D representation and to draw scanpath lines on this 2D image [RTSB04]. Besides the question how to adequately show 3D data on a 2D computer screen, which might lead to data loss, visualizations of scanpaths from 3D data have the same limitations as their 2D counterparts.

Figure 10: A saccade plot represents horizontal and vertical positions of a saccade on the four sides of a stimulus. The position is represented by a dot and the colour coding is used to distinguish between different saccades. The distance of a dot from a stimulus represents the length of the saccade. Whether a dot is placed on the left, right, top or bottom depends on the position of the saccade in the stimulus. Figure based on Burch et al. [BSRW14].


Figure 11: (a) The 3D scanpath is represented with spheres. (b) Cones show a 3D scanpath with viewing direction [SND10b]. Figure reprinted with kind permission from ACM.

5.4. Space-time cube

As an alternative spatiotemporal visualization approach, STC visualizations are commonly used in various research fields, for example, in the geographic information sciences [Häg82, Kra03]. For an application to eye tracking data, the 2D spatial domain of a stimulus is extended by a third, temporal dimension. This representation provides an overview of the complete data set and can be applied to static [LCK10] and dynamic stimuli [DM98, KW13]. With an STC, scanpaths, fixations and cluster information are visualized statically, which allows a direct identification of interesting time spans that would otherwise require a sequential search in the data. Figure 12 shows an STC with gaze data and identified clusters, with data from multiple participants, recorded from a dynamic stimulus. A sliding video plane along the temporal dimension is applied to relate time spans with the dynamic content of the stimulus.
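An STC can be sketched with a 3D plot in which the two stimulus dimensions span the base plane and time extends along the third axis. The sketch below assumes per-participant fixation sequences (x, y, t) and draws one polyline per participant; it is a minimal illustration, not the system described in the cited papers.

```python
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3D projection on older matplotlib)

# Hypothetical fixation sequences: participant -> list of (x px, y px, time s).
scanpaths = {
    "P1": [(100, 200, 0.0), (300, 220, 0.5), (320, 400, 1.2)],
    "P2": [(500, 100, 0.0), (310, 230, 0.6), (330, 410, 1.3)],
}

ax = plt.figure().add_subplot(projection="3d")
for participant, fixations in scanpaths.items():
    xs, ys, ts = zip(*fixations)
    ax.plot(xs, ys, ts, marker="o", label=participant)  # time extends the spatial domain

ax.set_xlabel("x (px)")
ax.set_ylabel("y (px)")
ax.set_zlabel("Time (s)")
ax.legend()
plt.show()
```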


Figure 12: STC visualization of video eye tracking data. A temporal dimension (green) extends the spatial dimension of the video to a static overview of the data, displaying gaze points and gaze clusters. Figure based on Kurzhals and Weiskopf [KW13].

Figure 14: (a) AOI timelines with attention histograms and (b) scarf plots of 10 participants. For each participant, a timeline is shown with coloured time spans that correspond to the colours of visited AOIs. Black spaces indicate that no AOI was looked at. Figure based on Kurzhals et al. [KHW14].

Since this 3D visualization bears issues resulting from occlusion, distortion and inaccurate depth perception, 2D projections of the data can be applied to walls on the sides of an STC. The main advantage of the STC in comparison to alternative visualization techniques is the direct overview of data from multiple participants that allows an efficient identification of important AOIs. To this point, the STC has been applied to data from multiple participants only for synchronizable stimuli. The application of this visualization technique to asynchronous data (i.e. head-mounted eye tracking data) has not been investigated so far. It would bear additional issues with the spatiotemporal coherence between participants, and an AOI-based analysis of this kind of data provides more effective approaches.


6. AOI-Based Visualization Techniques

In contrast to point-based visualization techniques, AOI-based visualization techniques employ additional information of the recorded fixation data. AOIs annotate regions or objects of interest on a stimulus. The annotation of AOIs in a static stimulus is often performed by defining bounding shapes around an area. Automatic fixation clustering algorithms are also a common approach to identify AOIs [PS00, SD04]. With AOI information, various metrics can be applied to the data [PB06, JK03], depending on the analyst's research questions. A simple in-context approach is an overlay of AOI boundaries on a stimulus with values of the used metric in each AOI (e.g. dwell time in an AOI [RVM12]). The techniques in this section visualize mainly temporal aspects of the data, or relations between AOIs.

6.1. Timeline AOI visualizations

Similar to point-based data, timelines can be applied to show temporal aspects of AOI-based data. As in the case of point-based timeline visualizations, time is again shown as one axis of a coordinate system. The other axis can represent AOIs or participants. Typically, timelines are represented independent from a stimulus and are 2D static visualizations.

If the second dimension describes the number of participants, gazes in an AOI can be represented as colour-coded time spans [RD05, SND10b] (cf. Figure 13), also known as scarf plots [RD05, HNA*11, RHOL13, KHW14]. A scarf plot allows us to compare different participants as they are represented below each other. This technique is efficient for interpreting and comparing scanpaths if the number of AOIs is not too large. Scarf plots are limited in the number of AOIs represented at the same time; AOIs have to be additionally labelled or aggregated into groups to show a large number.

Figure 14(b) shows a scarf plot representing scanpaths of multiple participants. Scanpath similarity can be calculated and the scarf plot can be clustered, depicting different groups [KHW14].
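A scarf plot can be drawn as one row of colour-coded time spans per participant, for example with matplotlib's broken_barh; the AOI visit sequences and the colour assignment below are made up for illustration.

```python
import matplotlib.pyplot as plt

aoi_colours = {"A": "tab:red", "B": "tab:blue", "C": "tab:green"}

# Hypothetical AOI visits per participant: (start in s, duration in s, AOI).
visits = {
    "P1": [(0.0, 1.2, "A"), (1.4, 0.8, "B"), (2.4, 1.6, "C")],
    "P2": [(0.0, 0.6, "B"), (0.8, 1.8, "A"), (2.8, 1.2, "B")],
}

fig, ax = plt.subplots()
for row, (participant, spans) in enumerate(visits.items()):
    # Gaps between spans indicate that no AOI was looked at.
    ax.broken_barh([(start, duration) for start, duration, _ in spans],
                   (row - 0.4, 0.8),
                   facecolors=[aoi_colours[aoi] for _, _, aoi in spans])
ax.set_yticks(range(len(visits)))
ax.set_yticklabels(list(visits))
ax.set_xlabel("Time (s)")
plt.show()
```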

Figure 13: Models of a 3D stimulus are represented by colour-coded time spans on a timeline depending on when and how long they were fixated [SND10b]. Figure reprinted with kind permission from ACM.


Figure 15: A time plot (a) uses a horizontal timeline for each AOI, which is highlighted as rectangles on a stimulus (b). Each fixation is drawn as a circle at its point in time and depending on the AOI. The circle radius indicates the duration of a fixation. A line showing the path connects the fixations. Figure based on Räihä et al. [RAM*05].

Figure 16: A parallel scanpath visualization maps time onto the y-axis and AOIs of a stimulus onto the x-axis. For each participant, fixations within an AOI are displayed according to time as vertical lines, and horizontally sloping lines represent saccades. Figure based on Raschke et al. [RCE12].

Furthermore, it shows which AOIs have been looked at often or infrequently. Displaying AOIs with a thumbnail image on the x-axis can be used to visualize participant groups instead of individual participants [TTS10]. This can be used for static or dynamic stimuli. Creating a representative scarf plot for all participants of one group can condense the visualization even more.

In the case of AOIs represented on the second axis, either one participant, averaged information of multiple participants, or multiple participants separately can be shown. In general, all of these techniques represent AOIs on separate timelines horizontally or vertically parallel to each other. This approach is often referred to as AOI timeline [HNA*11]. A representation with coloured time spans is also common [CN00, Hol01, HNA*11, WFE*12] (cf. Figure 14a). Alternatively, representations similar to those in scanpath visualizations can be applied to display gaze durations in AOIs [ITS00, RAM*05, AMR05, KDX*12, RCE12] (cf. Figure 15). The visualization shown in Figure 16 can be used for visual scanpath comparison when applying statistical measures [RHB*14]. However, all of these techniques lead to visual clutter if multiple participants are shown on top of each other. By averaging eye tracking data, we can represent data of multiple participants in the same visualization without causing visual clutter. AOI rivers [BKW13], as shown in Figure 17, represent the average time spent in each AOI and transitions from one AOI to another. This enables us to see the distribution amongst participants.

Radial timeline representations display one or multiple AOIs in the centre with time represented on the perimeter. If one AOI is depicted, data from multiple stimuli (e.g. different videos containing this specific AOI) can be shown in an AOI cloud [KW15b] (cf. Figure 18). However, the number of stimuli represented is limited due to the colour coding. Representing multiple AOIs in the circle centre shows which AOI has been visited at what point in time (cf. Figure 19). This technique does not scale if many AOIs are represented at the same time [PLG06].

Figure 17: Based on ThemeRiver by Havre et al. [HHWN02], AOI rivers display the change of fixation frequencies for each AOI and transitions between AOIs. Time is represented on the x-axis, and each AOI is colour-coded. Figure based on Burch et al. [BKW13].

For investigating individual participants in reading tasks, each word can represent one AOI. Such a technique is helpful as it allows us to see backtracks (i.e. backward movements to re-read words) [BR05, SR08].

6.2. 3D visualizations

As described in the previous section, AOI-based visualization techniques can be used to discard the spatial component of the data for an analysis of temporal aspects. In this case, it is possible to apply these techniques to either 2D or 3D stimuli. However, a subset of visualization techniques for recordings from 3D stimuli (e.g. real-world recordings from a head-mounted eye tracker) explicitly includes the 3D spatial component of the stimulus for the representation.


Figure 20: 3D AOI on a virtual supermarket shelf. Copyright by Pfeiffer et al. [PRPL16]. Figure 18: AOIs that appear in multiple videos are represented as circles. The timelines of the videos are depicted as segments on the perimeter of each circle. This enables an analyst to assess when and how long a participant was looking at a specific AOI in each video. Figure based on Kurzhals and Weiskopf [KW15b].

Approaches using 3D AOIs exist for real-world scenarios [PSF*13a], virtual reality [PR14] or videos [DM98]. Automatic calculation of AOIs in such dynamic stimuli can be achieved by clustering fixation data of individual participants [KW13], by averaging fixation data of multiple participants [Duc02] or by marking AOIs on an image and automatically detecting them in a video [PSF*13a]. 3D AOIs can be overlaid on the virtual model [BFH10] or on the individual object itself [SND10a], as shown in Figure 20. Additional information about an AOI can then be added: for example, different AOIs can have different colours [BFH10], AOIs can be highlighted based on how much time was spent on them [SND10b], or an AOI can be highlighted each time a participant looks at it [PR14]. 3D AOIs are also used in a multi-party communication tool that rotates personas each time a participant looks at one [Ver99], and in gaze-contingent displays [DM98].
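The following is a minimal sketch of how 3D points of regard could be assigned to 3D AOIs and accumulated into dwell times, which could then drive the time-based highlighting described above. The axis-aligned AOI boxes, their names and the sample format are assumptions for illustration only, not the data model of any cited system.

```python
import numpy as np

# Hypothetical axis-aligned 3D AOIs: name -> (min corner, max corner), in metres.
aois = {
    "shelf_top":    (np.array([0.0, 0.0, 1.2]), np.array([1.0, 0.3, 1.6])),
    "shelf_bottom": (np.array([0.0, 0.0, 0.4]), np.array([1.0, 0.3, 0.8])),
}

def hit_aoi(point):
    """Return the name of the first AOI box containing the 3D gaze point, or None."""
    for name, (lo, hi) in aois.items():
        if np.all(point >= lo) and np.all(point <= hi):
            return name
    return None

def dwell_times(samples, dt_ms):
    """Accumulate dwell time per AOI from equally spaced 3D gaze samples."""
    dwell = {name: 0.0 for name in aois}
    for p in samples:
        name = hit_aoi(np.asarray(p, float))
        if name is not None:
            dwell[name] += dt_ms
    return dwell

samples = [[0.5, 0.1, 1.4], [0.5, 0.1, 1.5], [0.5, 0.1, 0.6]]
print(dwell_times(samples, dt_ms=1000 / 60))  # 60 Hz tracker -> ~16.7 ms per sample
```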



Figure 19: AOIs are placed in a circular layout in the middle of a search path (a). The size of each AOI circle corresponds to the number of visits to that AOI. The smaller circles inside an AOI represent the transition probability for this AOI from any other AOI. Each fixation is then placed next to its corresponding AOI, which leads to a circular looking search path (b) [LHB*08]. Figure reprinted with kind permission from Elsevier.


6.3. Relational AOI visualizations

Relational visualization techniques use AOIs and show the relationship between them (i.e. how often attention changed between two AOIs). Different metrics are represented in the AOI context, for example, the transition percentage or transition probability between two AOIs. This can be achieved in-context or not in-context. AOI metrics can be encoded into the AOI on the stimulus; for example, Cheng et al. [CSS*15] encode reading-related information on top of AOIs: grey shading for the reading speed based on the ratio of saccade length and dwell time.

A common visualization technique to evaluate transitions between AOIs is the transition matrix [GK99]. A transition matrix orders AOIs horizontally in rows and vertically in columns, and each cell contains the number of transitions between two AOIs (cf. Figure 21). This matrix representation can be used to evaluate the search behaviour of participants. For example, a densely filled matrix indicates poor search behaviour, since all AOIs were focused on several times.
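Assuming fixations have already been mapped to a sequence of AOI labels, a minimal sketch of building such a transition matrix (with optional row normalization to transition probabilities, as shown in Figure 21) could look as follows; the AOI sequence is made up for illustration.

```python
from collections import defaultdict

def transition_matrix(aoi_sequence, normalize=False):
    """Count transitions between consecutive AOI visits; optionally row-normalize
    the counts to transition probabilities."""
    aois = sorted(set(aoi_sequence))
    counts = {a: defaultdict(float) for a in aois}
    for src, dst in zip(aoi_sequence, aoi_sequence[1:]):
        counts[src][dst] += 1
    if normalize:
        for src in counts:
            total = sum(counts[src].values())
            if total:
                counts[src] = {dst: c / total for dst, c in counts[src].items()}
    return aois, counts

aois, m = transition_matrix(["A", "B", "A", "C", "B", "B", "A"], normalize=True)
for src in aois:
    # One row per source AOI; a densely filled matrix hints at unsystematic search.
    print(src, {dst: round(m[src].get(dst, 0.0), 2) for dst in aois})
```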



Figure 21: A classical transition matrix orders AOIs horizontally in rows and vertically in columns. Each cell represents the number of transitions between two AOIs. The colour coding corresponds to the transition count.

Finding visual search patterns is improved when cells in the transition matrix are coloured to show the transition count [LPS*12, KSK13, BKR*16]. Furthermore, transition matrices can be used to compare multiple participants. The classical transition matrix only represents one participant. If all participants are shown in the same matrix, a visual comparison of search strategies is possible. This is achieved by concatenating the matrices of all participants and assigning a value to each matching sequence between them, as shown in Figure 22 [GH10b]. A similar visualization technique using matrices places AOIs horizontally as rows and, for example, the task or query of a study vertically as columns. Each cell then contains different metrics such as fixation duration or count [ETK*08, SLHR09]. A matrix representation mixes statistical measures with a visual representation. This has the advantage that it allows us to compare 2D data sets visually. However, if the number of AOIs is high, the matrix becomes large.

Well-established visualization techniques such as a directed graph or tree visualization are used to analyse the relation between AOIs. A directed graph shows transitions between AOIs (cf. Figures 23 and 24). Each node of the graph depicts one AOI and the links depict transitions. Node size can be varied or colour coding may be applied to represent different metrics such as fixation count or fixation duration. The thickness of a link depicts the number of transitions between two AOIs, as shown in the example in Figure 23. Usually, this type of diagram represents only data of one participant; averaged values of multiple participants can also be used. Most visualization techniques show the graph independently of the stimulus [IHN98, FJM05, TAK*05, SSF*11, BRE13, BKR*16]. Losing the topological information makes the graph harder to interpret. However, the transition information allows us to see in which order AOIs have been looked at or how often the participant returned to look at an AOI. One example of such a visualization technique is provided in Figure 24. Some examples also show the graph in-context [HHBL03, HC13, OGF14] (cf. Figure 23a). This enables an analyst to correlate the AOI transitions directly to the stimulus content.
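The data behind such a directed-graph representation can be sketched in a few lines. This is an illustrative example using the networkx library (not the approach of any single cited paper); the node and edge attributes stand in for the fixation counts and transition counts that the visualizations encode as node size and link thickness.

```python
import networkx as nx

def transition_graph(aoi_sequence):
    """Directed AOI transition graph: node attribute 'fixations' counts visits,
    edge attribute 'transitions' counts moves between two different AOIs."""
    g = nx.DiGraph()
    for aoi in aoi_sequence:
        if not g.has_node(aoi):
            g.add_node(aoi, fixations=0)
        g.nodes[aoi]["fixations"] += 1
    for src, dst in zip(aoi_sequence, aoi_sequence[1:]):
        if src == dst:
            continue  # staying within the same AOI is not a transition
        if not g.has_edge(src, dst):
            g.add_edge(src, dst, transitions=0)
        g.edges[src, dst]["transitions"] += 1
    return g

g = transition_graph(["A", "B", "A", "C", "B", "A"])
for src, dst, data in g.edges(data=True):
    # In a drawn graph, 'transitions' would map to link thickness and
    # 'fixations' to node size or colour.
    print(f"{src} -> {dst}: {data['transitions']} transition(s)")
```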

Figure 22: The AOI sequences of multiple participants are concatenated and shown in one matrix horizontally and vertically. A red line marks each matching sequence. Green lines show where the AOI sequence of one participant ends [GH10a]. Figure reprinted with kind permission from ACM.

Like the transition matrix, the graph representation does not scale to a large number of AOIs. Although the transition matrix scales better, AOI graphs are more useful for following a specific path if the nodes are properly ordered in the graph [vLKS*11].

A tree visualization can be used to compare AOI sequences of participants. A node in the tree represents an AOI. Each branch in the tree represents a different AOI sequence, as shown in Figure 25. Many branches indicate many different search patterns [WHRK06, TTS10, RHOL13]. An AOI transition tree containing more information than the word tree (i.e. frequencies of transitions and transition patterns) can be created [KW15a]. The AOI transition tree represents AOIs with thumbnails of video stimuli (cf. Figure 26). This visual comparison of scan patterns allows one to find participant groups. However, it can only be used for short sequences, because for long sequences, the difference between participants would be too large and each participant would become a single participant group.
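A minimal sketch of the counting behind a word tree: for every AOI sequence, all branches starting at a selected AOI are tallied, and in a rendered word tree these counts would control the font size. The AOI names echo the example in Figure 25; the input format is an assumption for illustration.

```python
from collections import defaultdict

def word_tree(sequences, start_aoi):
    """Count all AOI subsequences (branches) that begin at a selected AOI."""
    branches = defaultdict(int)
    for seq in sequences:
        for i, aoi in enumerate(seq):
            if aoi != start_aoi:
                continue
            for j in range(i + 1, len(seq) + 1):
                branches[tuple(seq[i:j])] += 1
    return branches

sequences = [["AOI-0", "AOI-10", "AOI-11"],
             ["AOI-0", "AOI-10", "AOI-11"],
             ["AOI-0", "AOI-5"]]
for prefix, count in sorted(word_tree(sequences, "AOI-0").items()):
    # Frequent branches (high counts) correspond to common scan patterns.
    print(" -> ".join(prefix), count)
```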



Figure 24: The circular heat map transition diagram places AOIs on a circle, where the AOI circle segment is colour-coded based on the fixation count inside the AOI. The size of the circle segment indicates the total dwell time within an AOI. Arrows represent transitions between AOIs, with the thickness displaying the number of transitions. Figure based on Blascheck et al. [BRE13].

Figure 23: Graph representations for transition frequencies between AOIs. AOIs are represented in-context (a) as circles placed on the stimulus. The radius of the circle represents the proportional dwell time. Arrows depict transitions and the thickness shows the number of transitions between two AOIs [HHBL03]. Figure reprinted with kind permission from Elsevier. AOIs are depicted not in-context (b) in a triangular layout with the most important AOI in the middle [TAK*05]. Figure reprinted with kind permission from IEEE.

7. Expert Review

To identify challenges in eye tracking research, we conducted an online study asking the top 100 eye tracking researchers listed in Google Scholar about their experience with eye tracking visualizations. We used researchers listed on Google Scholar, as we believe that these are the researchers with the most experience available. We received 34 completed questionnaires. Our questionnaire contained 11 questions subdivided into three blocks: six questions about the research background and applied eye tracking equipment, three questions about the application of visualization for eye tracking analysis and two questions about open challenges in eye tracking research. Most questions were multiple choice; only one question had a free-text field. Some questions had an additional text field where experts could add an answer not listed among the given options. The results of this questionnaire are depicted in Figure 27 and discussed in more detail in the following sections, based on these three question categories.

7.1. Background of experts

To evaluate the background of the participating experts, we used the research fields described by Duchowski [Duc02].

Figure 25: A word tree represents the sequences of AOIs starting at a selected AOI. The font size corresponds to the number of sequences. The sequence AOI-0, AOI-10, AOI-11 is a common sequence amongst participants.

Most researchers work in psychology (15) and computer science (9), and most of them are professors or senior researchers (23). Two researchers added that they are a research team leader and a virtual reality specialist, respectively. Most of the experts (26) have been working in eye tracking research for over 6 years. The different problem types researchers investigate were chosen from Duchowski [Duc02]; scene perception (17), usability (15) and visual search (15) are the problem types investigated most frequently. Typical hardware equipment the researchers use is from the companies Tobii (23) or SMI (15), which mainly provide attention maps and scanpaths as visualizations for the eye tracking data. Some added that they use custom-built eye trackers (2) or trackers they built themselves (2). For analysing eye tracking data, the experts stated which software they use: the eye tracking software packages Tobii Studio (17) and SMI BeGaze (9) and statistical programs such as SPSS (14), Matlab (13) and R (12) are used most often. Again, some experts stated that they use custom-developed systems (3).


7.2. Eye tracking and visualization

The visualization-related questions consisted of two questions where the experts had to rate different statements (cf. Tables 2 and 3). All experts agree that statistical analysis is important. An exploratory approach was rated as being important by 28 of 34 experts, which is a good sign for the acceptance of eye tracking visualizations. Almost all experts (28) agree that visualization is important, and 31 of our experts also use visualization to present their data.

When asking researchers what type of visualizations they use, most of them answered that they use scanpaths (27) or attention maps (24). The other answers can be seen in Figure 28. Some experts added additional visualizations:

- Post-hoc visualizations based on single-subject research methods.
- Gaze replays.
- Custom clustering software for dynamic images.
- Movies and animations.
- Cross-recurrence maps.
- Time series.
- Custom interactive stimuli visualizations.

These results let us conclude that scanpaths and attention maps are still the most used visualizations. However, from the individual comments, we can see that some of the more advanced visualization techniques mentioned in Sections 5 and 6 are being used as well. Overall, we believe that researchers are moving in the direction of using more eye tracking visualization techniques for analysis, but more user-friendly and more widely available implementations of these techniques would further improve their usage.

Figure 26: An AOI transition tree displays AOIs on the y-axis and their transition sequence on the x-axis. Data from multiple participants are aggregated. This technique is used for dynamic stimuli by taking video shots and the AOIs defined per shot and displaying them as an icicle plot. Figure based on Kurzhals and Weiskopf [KW15a].

7.3. Challenges in eye tracking research

The last set of questions was concerned with challenges in eye tracking visualization research. First, the experts were asked to rate different statements, indicating whether they agree or disagree with these challenges. Second, the experts could state further challenges they encounter during their research. Table 4 shows the statements and the experts' answers. If we add up the answers for 'strongly agree' and 'agree', we find that the most important challenge is to analyse more eye tracking data from dynamic stimuli (30). Second, the experts think that approaches for automatically detecting patterns are needed (29).


Last, we asked the experts: ‘Do you consider other future challenges as important for the field? Please name them and explain briefly’. The experts stated many different challenges they see in general in eye tracking research. As this survey is focused on visualization and eye tracking, we sum up the answers related to this topic.



Table 2: Experts had to rate two statements about statistical and exploratory analysis of eye tracking data.

Statement (Strongly agree / Agree / Disagree / Strongly disagree / No answer):
I consider statistical analysis as important for analysing eye tracking data: 29 / 4 / – / – / 1
I consider an exploratory approach as important when analysing eye tracking data: 13 / 15 / 4 / – / 2

Table 3: Experts had to rate two statements about eye tracking and visualization.

Question (Very important / Important / Not important / No answer):
How important are visualizations during your analysis? 18 / 10 / 3 / 3
How important are visualizations for presenting your study results? 23 / 8 / 1 / 2


[Figure 27 consists of bar charts summarizing the questionnaire responses: main research field, research position, years of experience with eye tracking, problem types investigated, hardware used to collect eye tracking data, and software used for analysing it.]

Figure 27: Overview of the results from the online questionnaire.

One expert mentioned that 'a major problem with eye tracking over the past 15 years is that some visualizations have been so influential that some researchers think that they are enough. Heat maps were a real problem to science in the 2004–2008 period. Actually, this only reflects the fact that eye tracking has become so common—so many eye trackers have been sold—and the scientific training in eye movement-based research has not been available to all new researchers. Manufacturers who are eager to sell tend to promise that visualizations will solve the researchers problems, when in most cases, they do not'. This is also closely related to a statement by a second expert, who claims that 'Researchers should be trained in how to use visualizations'. We agree that researchers have to be trained before using visualizations. They should also be guided in choosing an appropriate visualization for their task and data. This also requires that visualizations be developed that can be used intuitively and are user-friendly. With this survey, we want to provide a step in this direction. We have summarized which visualization techniques are available and what benefits and drawbacks they have. This may be a first indicator for researchers to decide which visualization to use to answer a research question.

A different challenge is the 'combination of quantitative results with qualitative visualizations'. So far, most visualization techniques only display the eye tracking data. However, more visualization frameworks are being developed in which quantitative data are integrated and can be analysed in combination [KHW14, BJK*16b, RSW*16].

Additionally, this also requires 'being able to standardize the visualization options [ . . . ] if we are to ensure comparable results. For example, more work on 3rd-party tools that can display eye tracking data from a range of trackers would be helpful (e.g. Ogama [http://www.ogama.net/, last accessed on November 3, 2016])'. We see this statement as an appeal to make tools more publicly available. A visualization-specific challenge is to find '[ . . . ] a way to aggregate and analyse scan paths over a large sample [ . . . ]'. With this survey, we have shown other ways to analyse scanpath data using one of the many other visualization techniques presented in this paper.

8. Discussion of Future Research Directions

Numerous visualization techniques have been developed to visualize different aspects of eye tracking data. Table 1 shows an overview of existing approaches: visualizations for point-based data, AOI-based data and visualizations for both data types. From this overview, we can recognize that some aspects of eye tracking analysis have been investigated to a lesser degree than others.



Table 4: Experts had to rate statements about the challenges of eye tracking analysis.

Statement (Strongly agree / Agree / Disagree / Strongly disagree / No answer):
Approaches for automatically detecting patterns are needed: 19 / 10 / 1 / – / 4
Eye tracking data from dynamic stimuli (e.g. videos) should be analysed more: 17 / 13 / – / – / 4
Techniques for analysing a high number of AOIs (>10 AOIs) are necessary: 16 / 9 / 2 / – / 7
Visualizations should scale for a high number of participants: 14 / 13 / 4 / – / 3
More visualization techniques for eye tracking data are necessary, for example, interactive visualizations or spatiotemporal visualizations: 14 / 12 / 5 / – / 3
Eye tracking data from interactive stimuli (e.g. Web pages or interactive visualizations) should be analysed more: 14 / 10 / 3 / – / 7
Smooth pursuit from dynamic stimuli should be handled separately by visualization techniques: 9 / 14 / – / – / 11
Eye tracking should be combined with other sensors such as EEG or skin resistance: 8 / 16 / 5 / – / 5



Figure 28: Results for application of visualizations in eye tracking research.

We found that the number of visualization techniques for the category ‘dynamic stimuli’ increased within the last 5 years. Our experts also think that this is an important topic for future work, especially how to do a ‘gaze analysis in temporally moving scenes using fully automatic techniques’. This is also in line with other research agendas about visualizing eye tracking data [RTSB04, SNDC09]. Application areas for dynamic stimuli are, for example, video data or data coming from a head-mounted eye tracker. Only a few approaches deal with the question of how 3D eye tracking data can be visualized. This requires a ‘greater emphasis on mobile eye-trackers’ as well as ‘being able to aggregate data from eye tracking glasses’, as our experts mentioned. This question is important in case of user experiments that record stereoscopic fixation data. Also, separate handling of smooth pursuit from dynamic stimuli has mostly been neglected. Including smooth pursuit information in scanpath representations, for example, could provide valuable information about attentional synchrony of multiple viewers.

Table 1 shows that the number of spatiotemporal visualization techniques for AOI-based analysis is small. Usually, if time is visualized in AOI-based methods, a timeline is used along with one other data dimension. Therefore, animation is not needed. This might also explain the lack of in-context visualization techniques for AOI-based analysis. In the case of point-based eye tracking visualizations, fixation data are often overlaid on a stimulus or presented as a 3D visualization in the case of dynamic stimuli. For not in-context AOI-based visualization techniques, the presentation of eye tracking data is more abstract. This abstraction is intended to help analysts concentrate on specific questions. However, the mental map of the eye tracking data on the stimulus is lost.

Furthermore, we think that multimodal data visualization techniques will become more important due to the interdisciplinary character of eye tracking research. Some of our experts named this as a challenge as well. In this context, one expert especially mentioned an 'easy integration of eye tracking with other recording equipment with full online synchronization'. For example, eye tracking data could be combined with other sensor information coming from EEG devices or skin-resistance measurements. One approach is already included in the commercial software BeGaze by SMI [SMI14]. Blascheck et al. [BJK*16b, BJK*16a] developed a system for analysing eye tracking data in combination with interaction and think-aloud data. However, there are still many open questions for new visualization techniques in this direction.

In general, for AOI-based methods, one limitation is the number of AOIs that can be represented at once. Most of the visualization techniques do not scale well if there are many AOIs. Progress has been made to overcome this problem by using hierarchical AOIs [BJK*16a]. Our experts also named some challenges with AOIs in general. For example, two experts mentioned the need for small and precise AOIs (e.g. one word in regular text size). We see this as one area for future research.

Another challenge in eye tracking concerns stimuli with active content, where participants can influence a stimulus during a study.


For example, in studies where the usability of Web pages is investigated, participants may click on links and navigate to different sites. The question is how participants can be compared. Often, a representative stimulus is created and fixations of participants are manually mapped to it. However, this might lead to erroneous data due to ambiguity or inaccuracies an annotator introduces. A tool called Gaze Tracker [Lan00] records complete Web page information and scroll behaviour automatically. Yet, this approach can only be used for Web pages or graphical user interfaces. We believe that further visualization techniques for this type of stimulus will have to be developed in the future.

Our literature review showed that there is a large number of visualization techniques for analysing eye tracking data. However, it is not always apparent which visualization technique works best for which type of analysis. This question cannot be answered completely; choosing an appropriate visualization technique depends on different factors. Therefore, we have classified the presented techniques based on our taxonomy. Yet, we have left out analysis tasks. Kurzhals et al. [KBB*17] present a task taxonomy for eye tracking visualizations that is a first step in the direction of suggesting visualizations based on tasks. For example, a common analysis task is to compare scanpaths of participants. Comparing scanpaths of multiple participants can help find regularities or similar patterns between different participants. Many of the presented visualization techniques can be used for scanpath comparison [WHRK06, Coc09, GH10b, TTS10, RHOL13, RHB*14, EYH15]. This is just one example of how the analysis task influences the type of visualization technique for an evaluation.

Finding patterns in eye tracking data is a general goal of eye tracking analysis, as also mentioned by Stellmach et al. [SNDC09]. Often, one visualization technique alone is not sufficient to find those patterns and analyse the eye tracking data; rather, multiple visualization techniques have to be used in combination. Interaction of an analyst with a visualization technique can further improve analysis results. The column for the category 'interactive visualization' shows that there are not many visualizations for interactive analysis of eye tracking data. Most visualizations follow the paradigm of static visualization and provide little support for interaction. For this reason, we advocate allowing more interaction with visualization techniques for eye tracking data [RSW*16]. Ramloll et al. [RTSB04] have already called for more interaction, for example, by using brushing or focus-and-context techniques. Interactions allow an analyst to investigate data by filtering or by selecting participants. In the end, specific hypotheses can be tested by collecting new data and testing for statistical significance. Therefore, we propose combining those approaches: using visualizations, statistics and interaction together. This is to some extent done by visual analytics and could therefore also be applied to eye tracking analysis, as already proposed by Andrienko et al. [AABW12] and to some extent implemented by Blascheck et al. [BJK*16a, BJK*16b].
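Returning to the scanpath comparison task mentioned above, one common building block for such comparisons is a string-edit (Levenshtein) distance over the AOI sequences of two participants, in the spirit of the string-editing approaches cited in the introduction of this survey. The following minimal sketch (with made-up AOI sequences) computes it; it is an illustration, not the method of any single cited paper.

```python
def edit_distance(seq_a, seq_b):
    """Levenshtein distance between two AOI sequences (insert/delete/substitute cost 1)."""
    prev = list(range(len(seq_b) + 1))
    for i, a in enumerate(seq_a, start=1):
        curr = [i]
        for j, b in enumerate(seq_b, start=1):
            cost = 0 if a == b else 1
            curr.append(min(prev[j] + 1,          # delete a
                            curr[j - 1] + 1,      # insert b
                            prev[j - 1] + cost))  # substitute a by b
        prev = curr
    return prev[-1]

# Two made-up AOI scanpaths; a small distance suggests similar viewing strategies.
print(edit_distance(["A", "B", "A", "C"], ["A", "B", "C"]))  # -> 1
```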
Two other areas of research our experts named seem interesting in the context of eye tracking and visualization. First, one expert suggested using visualizations 'not just as a means for the researcher to understand the data, but as a specific independent variable in which participants are shown the eye movement patterns of either themselves, or from other observers, and how they can (or cannot) understand and exploit this information'.


This comment is especially interesting when analysing newly developed visualization techniques to find out whether analysts understand the new techniques. Additionally, one expert suggested analysing saccadic behaviour in more detail. This is an interesting new direction, as only a few visualization techniques have considered saccades as their main data type so far.

9. Conclusion

In this state-of-the-art report, we have reviewed existing papers describing eye tracking visualizations. To classify the different approaches, we have created a taxonomy that divides the papers into point-based and AOI-based visualization techniques as well as into visualizations that use both types of data. Furthermore, we have classified the papers on a second level according to the data represented: temporal, spatial or spatiotemporal. An overview of all visualization techniques was given, as well as a detailed description of each. In an expert review, leading eye tracking researchers reported on the challenges and their use of eye tracking visualizations. Based on the results of this expert review, we presented future directions for research. Our survey aimed at helping researchers get an overview of eye tracking visualization. Based on our categorization, we think that researchers now have a first indicator of how to choose an appropriate visualization technique for their research questions.

Acknowledgements

This work was funded by the German Research Foundation (DFG) as part of SFB/Transregio 161.

References

[AABW12] ANDRIENKO G., ANDRIENKO N., BURCH M., WEISKOPF D.: Visual analytics methodology for eye movement studies. IEEE Transactions on Visualization and Computer Graphics 18, 12 (2012), 2889–2898. [ABL*13] ANDERSON N., BISCHOF W., LAIDLAW K., RISKO E., KINGSTONE A.: Recurrence quantification analysis of eye movements. Behavior Research Methods 45, 3 (2013), 842–856. [AHR98] AALTONEN A., HYRSKYKARI A., RÄIHÄ K.-J.: 101 spots, or how do users read menus? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (1998), ACM, pp. 132–139. [AJTZ12] ATKINS S., JIANG X., TIEN G., ZHENG B.: Saccadic delays on targets while watching videos. In Proceedings of the Symposium on Eye Tracking Research & Applications (2012), ACM, pp. 405–408. [AMR05] AULA A., MAJARANTA P., RÄIHÄ K.-J.: Eye-tracking reveals the personal styles for search result evaluation. In Proceedings of the Human-Computer Interaction-INTERACT (2005), Springer, pp. 1058–1061. [BBD09] BECK F., BURCH M., DIEHL S.: Towards an aesthetic dimensions framework for dynamic graph visualisations. In


Proceedings of the International Conference on Information Visualization (2009), IEEE Computer Society Press, pp. 592–597. [BBM*09] BERG D. J., BOEHNKE S. E., MARINO R. A., MUNOZ D. P., ITTI L.: Free viewing of dynamic stimuli by humans and monkeys. Journal of Vision 9, 5 (2009), 1–15. [BCR12] BERISTAIN A., CONGOTE J., RUIZ O.: Volume visual attention maps (VVAM) in ray-casting rendering. Studies in Health Technology and Informatics 173 (2012), 53–57.

[BM13] BREHMER M., MUNZNER T.: A multi-level typology of abstract visualization tasks. IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013), 2376–2385. [Boj09] BOJKO A.: Informative or misleading? Heatmaps deconstructed. In Proceedings of the Human-Computer InteractionINTERACT (2009), J. Jacko (Ed.), Springer, pp. 30–39. [BR05] BEYMER D., RUSSELL D. M.: WebGazeAnalyzer: A system for capturing and analyzing web reading behavior using eye gaze. In CHI Extended Abstracts on Human Factors in Computing Systems (2005), ACM, pp. 1913–1916.

[BDA*14] BACH B., DRAGICEVIC P., ARCHAMBAULT D., HURTER C., CARPENDALE S.: A review of temporal data visualizations based on space-time cube operations. In Proceedings of the EuroVis STARs (2014), R. Borgo, R. Maciejewski and I. Viola (Eds.), The Eurographics Association, pp. 1–19.

[BRE13] BLASCHECK T., RASCHKE M., ERTL T.: Circular heat map transition diagram. In Proceedings of the Conference on Eye Tracking South Africa (2013), ACM, pp. 58–61.

[BDEB12] BIEDERT R., DENGEL A., ELSHAMY M., BUSCHER G.: Towards robust gaze-based objective quality measures for text. In Proceedings of the Symposium on Eye Tracking Research & Applications (2012), ACM, pp. 201–204.

[BSL*11] BAKER S., SCHARSTEIN D., LEWIS J. P., ROTH S., BLACK M. J., SZELISKI R.: A database and evaluation methodology for optical flow. International Journal of Computer Vision 92, 1 (2011), 1–31.

[BFH10] BALDAUF M., FRÖHLICH P., HUTTER S.: KIBITZER: A wearable system for eye-gaze-based mobile urban exploration. In Proceedings of the Augmented Human International Conference (2010), ACM, pp. 1–5.

[BSRW14] BURCH M., SCHMAUDER H., RASCHKE M., WEISKOPF D.: Saccade plots. In Proceedings of the Symposium on Eye Tracking Research & Applications (2014), ACM, pp. 307– 310.

[BJK*16a] BLASCHECK T., JOHN M., KOCH S., BRUDER L., ERTL T.: Triangulating user behavior using eye movement, interaction, and think aloud data. In Proceedings of the Symposium on Eye Tracking Research & Applications (2016), ACM, pp. 175–182.

[CAS13] CHEN M., ALVES N., SOL R.: Combining spatial and temporal information of eye movement in goal-oriented tasks. In Proceedings of the International Conference on Human Factors in Computing and Informatics (2013), Springer, pp. 827– 830.

[BJK*16b] BLASCHECK T., JOHN M., KURZHALS K., KOCH S., ERTL T.: VA2 : A visual analytics approach for evaluating visual analytics applications. IEEE Transactions on Visualization and Computer Graphics 22, 1 (2016), 61–70. [BKR*14] BLASCHECK T., KURZHALS K., RASCHKE M., BURCH M., WEISKOPF D., ERTL T.: State-of-the-art of visualization for eye tracking data. In Proceedings of EuroVis - STARs, R. Borgo, R. Maciejewski and I. Viola (Eds.), (2014), The Eurographics Association, pp. 63–82.

[CCY*03] CONVERTINO G., CHEN J., YOST B., RYU Y.-S., NORTH C.: Exploring context switching and cognition in dual-view coordinated visualizations. In Proceedings of the International Conference on Coordinated and Multiple Views in Exploratory Visualization (2003), pp. 55–62. [CHC10] CONVERSY S., HURTER C., CHATTY S.: A descriptive model of visual scanning. In Proceedings of the BELIV Workshop: Beyond Time and Errors - Novel Evaluation Methods for Visualization (2010), ACM, pp. 35–42.

[BKR*16] BLASCHECK T., KURZHALS K., RASCHKE M., STROHMAIER S., WEISKOPF D., ERTL T.: AOI hierarchies for visual exploration of fixation sequences. In Proceedings of the Symposium on Eye Tracking Research & Applications (2016), ACM, pp. 111– 118.

[Chi00] CHI E. H.: A taxonomy of visualization techniques using the data state reference model. In Proceedings of the IEEE Symposium on Information Visualization (2000), IEEE Computer Society Press, pp. 69–75.

[BKW13] BURCH M., KULL A., WEISKOPF D.: AOI rivers for visualizing dynamic eye gaze frequencies. Computer Graphics Forum 32, 3 (2013), 281–290.

[CL13] CHEN M., LIM V.: Tracking eyes in service prototyping. In Proceedings of the Human-Computer Interaction-INTERACT (2013), Springer, pp. 264–271.

[BKW16] BECK F., KOCH S., WEISKOPF D.: Visual analysis and dissemination of scientific literature collections with SurVis. IEEE Transactions on Visualization and Computer Graphics 22 (2016), 180–189.

[CM97] CARD S., MACKINLAY J.: The structure of the information visualization design space. In Proceedings of the IEEE Symposium on Information Visualization (1997), IEEE Computer Society Press, pp. 92–99.

[Bli10] BLIGNAUT P.: Visual span and other parameters for the generation of heatmaps. In Proceedings of the Symposium on Eye Tracking Research & Applications (2010), ACM, pp. 125–128.

[CN00] CROWE E. C., NARAYANAN N. H.: Comparing interfaces based on what users watch and do. In Proceedings of the Symposium on Eye Tracking Research & Applications (2000), ACM, pp. 29–36.


[Coc09] COCO M. I.: The statistical challenge of scan-path analysis. In Proceedings of the Conference on Human System Interactions (2009), IEEE Computer Society Press, pp. 372–375. [CPS06] CROFT J., PITTMAN D., SCIALFA C.: Gaze behavior of spotters during an air-to-ground search. In Proceedings of the Symposium on Eye Tracking Research & Applications (2006), ACM, pp. 163–179. [CSS*15] CHENG S., SUN Z., SUN L., YEE K., DEY A. K.: Gaze-based annotations for reading comprehension. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2015), ACM, pp. 1569–1572. [DDJ*10] DUCHOWSKI A., DRIVER J., JOLAOSO S., TAN W., RAMEY B. N., ROBBINS A.: Scan path comparison revisited. In Proceedings of the Symposium on Eye Tracking Research & Applications (2010), ACM, pp. 219–226. [DJB10] DORR M., JARODZKA H., BARTH E.: Space-variant spatiotemporal filtering of video for gaze visualization and perceptual learning. In Proceedings of the Symposium on Eye Tracking Research & Applications (2010), ACM, pp. 307–314.


[ETK*08] EGUSA Y., TERAI H., KANDO N., TAKAKU M., SAITO H., MIWA M.: Visualization of user eye movement for search result pages. In Proceedings of the International Workshop on Evaluating Information Access (2008), National Institute Informatics, pp. 42–46. [EYH15] ERASLAN S., YESILADA Y., HARPER S.: Eye tracking scanpath analysis techniques on web pages: A survey, evaluation and comparison. Journal of Eye Movement Research 9, 1 (2015), 1–19. [FBK15] FORTUN D., BOUTHEMY P., KERVRANN C.: Optical flow modeling and computation: A survey. Computer Vision and Image Understanding 134 (2015), 1–21. [FJM05] FITTS P., JONES R., MILTON J.: Eye movement of aircraft pilots during instrument-landing approaches. In Ergonomics: Psychological Mechanisms and Models in Ergonomics, N. Moray (Ed.), (London and New York, 2005), Taylor & Francis, pp. 56– 66. [GDS10] GRINDINGER T., DUCHOWSKI A., SAWYER M.: Group-wise similarity and classification of aggregate scanpaths. In Proceedings of the Symposium on Eye Tracking Research & Applications (2010), ACM, pp. 101–104.

[DLN*06] DIXON T. D., LI J., NOYES J., TROSCIANKO T., NIKOLOV S., LEWIS J., CANGA E., BULL D., CANAGARAJAH C.: Scan path analysis of fused multi-sensor images with luminance change: A pilot study. In Proceedings of the International Conference on Information Fusion (2006), IEEE Computer Society Press, pp. 1–8.

[GH10a] GOLDBERG J. H., HELFMAN J. I.: Comparing information graphics: A critical look at eye tracking. In Proceedings of the BELIV Workshop: Beyond Time and Errors - Novel Evaluation Methods for Visualization (2010), ACM, pp. 71–78.

[DM98] DUCHOWSKI A., MCCORMICK B. H.: Gaze-contingent video resolution degradation. In Proceedings of the Society of Photographic Instrumentation Engineers (1998), SPIE, pp. 318–329.

[GH10b] GOLDBERG J. H., HELFMAN J. I.: Scanpath clustering and aggregation. In Proceedings of the Symposium on Eye Tracking Research & Applications (2010), ACM, pp. 227–234.

[DMC*02] DUCHOWSKI A., MEDLIN E., COURINA N., GRAMOPADHYE A., MELLOY B., NAIR S.: 3D eye movement analysis for VR visual inspection training. In Proceedings of the Symposium on Eye Tracking Research & Applications (2002), ACM, pp. 103–155.

[GH10c] GOLDBERG J. H., HELFMAN J. I.: Visual scanpath representation. In Proceedings of the Symposium on Eye Tracking Research & Applications (2010), ACM, pp. 203–210.

[DMGB10] DORR M., MARTINETZ T., GEGENFURTNER K. R., BARTH E.: Variability of eye movement when viewing dynamic natural scenes. Journal of Vision 10, 10 (2010), 1–17. [DPMO12] DUCHOWSKI A., PRICE M. M., MEYER M., ORERO P.: Aggregate gaze visualization with real-time heatmaps. In Proceedings of the Symposium on Eye Tracking Research & Applications (2012), ACM, pp. 13–20. [Duc02] DUCHOWSKI A.: A breadth-first survey of eye-tracking applications. Behavior Research Methods, Instruments, & Computers 34, 4 (2002), 455–470. [Duc07] DUCHOWSKI A.: Eye Tracking Methodology: Theory and Practice (2nd edition). Springer, London, 2007. [EDP*12] ESSIG K., DORNBUSCH D., PRINZHORN D., RITTER H., MAYCOCK J., SCHACK T.: Automatic analysis of 3D gaze coordinates on scene objects using data from eye-tracking and motion-capture systems. In Proceedings of the Symposium on Eye Tracking Research & Applications (2012), ACM, pp. 37–44.

[GK99] GOLDBERG J. H., KOTVAL X. P.: Computer interface evaluation using eye movements: Methods and constructs. International Journal of Industrial Ergonomics 24, 6 (1999), 631– 645. [GSGM10] GOGGINS S., SCHMIDT M., GUAJARDO J., MOORE J.: Assessing multiple perspectives in three dimensional virtual worlds: Eye tracking and all views qualitative analysis (AVQA). In Proceedings of the Hawaii International Conference on System Sciences (2010), IEEE Computer Society Press, pp. 1–10. [GWG*64] GUBA E., WOLF W., GROOT S. d., KNEMEYER M., ATTA R. V., LIGHT L.: Eye movement and TV viewing in children. AV Communication Review 12, 4 (1964), 386–401. [GWP07] GOLDSTEIN R., WOODS R., PELI E.: Where people look when watching movies: Do all viewers look at the same place? Computers in Biology and Medicine 37, 7 (2007), 957–964. [H¨ag82] H¨AGERSTRAND T.: Diorama, path and project. Tijdschrift voor Economische en Sociale Geografie 73, 6 (1982), 323– 339.


[HC13] HOOGE I., CAMPS G.: Scan path entropy and arrow plots: Capturing scanning behavior of multiple observers. Frontiers in Psychology 4, 996 (2013), 1–10.


[HEF*14] HURTER C., ERSOY O., FABRIKANT S., KLEIN T., TELEA A.: Bundled visualization of dynamic graph and trail data. IEEE Transactions on Visualization and Computer Graphics 20, 8 (2014), 1141–1157.

[JN00] JONES M. G., NIKOLOV S.: Volume visualization via region enhancement around an observer’s fixation point. In Proceedings of the International Conference on Advances in Medical Signal and Information Processing (2000), IEEE Computer Society Press, pp. 305–312.

[HFG06] HEMBROOKE H., FEUSNER M., GAY G.: Averaging scan patterns and what they can tell us. In Proceedings of the Symposium on Eye Tracking Research & Applications (2006), ACM, pp. 41– 41. [HH02] HORNOF A. J., HALVERSON T.: Cleaning up systematic error in eye-tracking data by using required fixation locations. Behavior Research Methods, Instruments, & Computers 34, 4 (2002), 592– 604. [HHBL03] HOLMQVIST K., HOLSANOVA J., BARTHELSON M., LUNDQVIST D.: Reading or scanning? A study of newspaper and net paper reading. In The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research, J. Hy¨on¨a, R. Radach and H. Deubel (Eds.), (Amsterdam, 2003), Elsevier Science BV, pp. 657–670. [HHWN02] HAVRE S., HETZLER E., WHITNEY P., NOWELL L.: ThemeRiver: Visualizing thematic changes in large document collections. IEEE Transactions on Visualization and Computer Graphics 8, 1 (2002), 9–20. [HNA*11] HOLMQVIST K., NYSTRO¨ M M., ANDERSSON R., DEWHURST R., JARODZKA H., VAN DE WEIJER J.: Eye Tracking: A Comprehensive Guide to Methods and Measures (1st edition). Oxford University Press, New York, 2011.

[KBB*17] KURZHALS K., BURCH M., BLASCHECK T., ANDRIENKO G., ANDRIENKO N., WEISKOPF D.: A task-based view on the visual analysis of eye tracking data. In Proceedings of the Eye Tracking and Visualization: Foundations, Techniques, and Applications (ETVIS 2015) (2017), M. Burch, L. Chuang, B. Fisher, A. Schmidt and D. Weiskopf (Eds.), Springer, pp. 3–22. [KDX*12] KIM S.-H., DONG Z., XIAN H., UPATISING B., YI J. S.: Does an eye tracker tell the truth about visualizations? Findings while investigating visualizations for decision making. IEEE Transactions on Visualization and Computer Graphics 18, 12 (2012), 2421–2430. [KFN14] KRASSANAKIS V., FILIPPAKOPOULOU V., NAKOS B.: EyeMMV toolbox: An eye movement post-analysis tool based on a two-step spatial dispersion threshold for fixation identification. Journal of Eye Movement Research 7, 1 (2014), 1–10. [KHH*16] KURZHALS K., HLAWATSCH M., HEIMERL F., BURCH M., ERTL T., WEISKOPF D.: Gaze Stripes: Image-based visualization of eye tracking data. IEEE Transactions on Visualization and Computer Graphics 22, 1 (2016), 1005–1014.

[Hol01] HOLSANOVA J.: Picture Viewing and Picture Description: Two Windows on the Mind. PhD thesis, Lund University, 2001.

[KHW14] KURZHALS K., HEIMERL F., WEISKOPF D.: ISeeCube: Visual analysis of gaze data for video. In Proceedings of the Symposium on Eye Tracking Research & Applications (2014), ACM, pp. 43– 50.

[HRO11] HUTCHERSON D., RANDALL R., OUZTS A.: Shelf real estate: Package spacing in a retail environment, 2011. http:// andrewd.ces.clemson.edu/courses/cpsc412/fall11/teams/reports/ group9.pdf, Last accessed on October 1, 2016.

[Kra03] KRAAK M.-J.: The space-time cube revisited from a geovisualization perspective. In Proceedings of the International Cartographic Conference (2003), The International Cartographic Association (ICA), pp. 1988–1995.

[IHN98] ITOH K., HANSEN J. P., NIELSEN F.: Cognitive modelling of a ship navigator based on protocol and eye-movement analysis. Le Travail Humain 61, 2 (1998), 99–127.

[KSK13] KREJTZ I., SZARKOWSKA A., KREJTZ K.: The effects of shot changes on eye movement in subtitling. Journal of Eye Movement Research 6, 3 (2013), 1–12.

[Ins85] INSELBERG A.: The plane with parallel coordinates. The Visual Computer 1, 2 (1985), 69–91.

[KW13] KURZHALS K., WEISKOPF D.: Space-time visual analytics of eye-tracking data for dynamic stimuli. IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013), 2129–2138.

[ITS00] ITOH K., TANAKA H., SEKI M.: Eye-movement analysis of track monitoring patterns of night train operators: Effects of geographic knowledge and fatigue. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (2000), SAGE Publications, pp. 360–363.

[KW15a] KURZHALS K., WEISKOPF D.: AOI transition trees. In Proceedings of the Graphics Interface Conference (2015), ACM, pp. 41–48.

[JC76] JUST M. A., CARPENTER P. A.: Eye fixations and cognitive processes. Cognitive Psychology 8, 4 (1976), 441–480.

[KW15b] KURZHALS K., WEISKOPF D.: Eye tracking for personal visual analytics. Computer Graphics and Applications 35, 4 (2015), 64–72.

[JK03] JACOB R., KARN K.: Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. In The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research, J. Hyönä, R. Radach and H. Deubel (Eds.), (Amsterdam, 2003), Elsevier Science BV, pp. 573–605.

[Lan00] LANKFORD C.: Gazetracker: Software designed to facilitate eye movement analysis. In Proceedings of the Symposium on


Eye Tracking Research & Applications (2000), ACM, pp. 51– 55. [Lat88] LATIMER C. R.: Eye-movement data: Cumulative fixation time and cluster analysis. Behavior Research Methods, Instruments, & Computers 20, 5 (1988), 437–470. [LCK10] LI X., C¨OLTEKIN A., KRAAK M.-J.: Visual exploration of eye movement data using the space-time-cube. In Geographic Information Science, S. Fabrikan, T. Reichenbacher, M. Kreveld and S. Christoph (Eds.), (Berlin Heidelberg, 2010), Springer, pp. 295–309. [LHB*08] LORIGO L., HARIDASAN M., BRYNJARSDO´ TTIR H., XIA L., JOACHIMS T., GAY G., GRANKA L., PELLACINI F., PAN B.: Eye tracking and online search: Lessons learned and challenges ahead. Journal of the American Society for Information Science and Technology 59, 7 (2008), 1041–1052. [LPP*06] LEE B., PLAISANT C., PARR C. S., FEKETE J.-D., HENRY N.: Task taxonomy for graph visualization. In Proceedings of the AVI Workshop on BEyond Time and Errors: Novel Evaluation Methods for Information Visualization (2006), ACM, pp. 1– 5. [LPS*12] LI R., PELZ J., SHI P., OVESDOTTER Alm C., HAAKE A.: Learning eye movement patterns for characterization of perceptual expertise. In Proceedings of the Symposium on Eye Tracking Research & Applications (2012), ACM, pp. 393–396. [MM58] MACKWORTH J. F., MACKWORTH N. H.: Eye fixations recorded on changing visual scenes by the television eye-marker. Journal of the Optical Society of America 48, 7 (1958), 439– 444. [MSHH11] MITAL P., SMITH T., HILL R., HENDERSON J.: Cluster analysis of gaze during dynamic scene viewing is predicted by motion. Cognitive Computation 3, 1 (2011), 5–24. [Mun08] MUNZNER T.: Process and pitfalls in writing information visualization research papers. In Information Visualization: Human-Centered Issues and Perspectives, A. Kerren, J. Stasko, J.-D. Fekete and C. North (Eds.), (Boca Raton, London, New York, 2008), Springer, pp. 134–153. [NH08] NYSTRO¨ M M., HOLMQVIST K.: Semantic override of low-level features in image viewing – both initially and overall. Journal of Eye Movement Research 2, 2 (2008), 1–11. [NH10] NAKAYAMA M., HAYASHI Y.: Estimation of viewer’s response for contextual understanding of tasks using features of eyemovements. In Proceedings of the Symposium on Eye Tracking Research & Applications (2010), ACM, pp. 53–56. [NS71a] NOTON D., STARK L.: Scanpaths in eye movements during pattern perception. Science 171, 3968 (1971), 308–311. [NS71b] NOTON D., STARK L.: Scanpaths in saccadic eye movements while viewing and recognizing patterns. Vision Research 11, 9 (1971), 929–942.


[OGF14] OPACH T., GOLEBIOWSKA I., FABRIKANT S.: How do people view multi-component animated maps? The Cartographic Journal 51, 4 (2014), 330–342. [PB06] POOLE A., BALL L.: Eye tracking in human-computer interaction and usability research: current status and future prospects. In The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research, J. Hy¨on¨a, R. Radach and H. Deubel (Eds.), (Amsterdam, 2006), Elsevier Science BV, pp. 211–219. [Pfe12] PFEIFFER T.: Measuring and visualizing attention in space with 3D attention volumes. In Proceedings of the Symposium on Eye Tracking Research & Applications (2012), ACM, pp. 29–36. [PH10] PAPENMEIER F., HUFF M.: DynAOI: A tool for matching eye-movement data with dynamic areas of interest in animations and movies. Behavior Research Methods 42, 1 (2010), 179– 187. [PLG06] PELLACINI F., LORIGO L., GAY G.: Visualizing Paths in Context. Tech. Rep. TR2006-580, Department of Computer Science, Dartmouth College, 2006. [PR14] PFEIFFER T., RENNER P.: EyeSee3D: A low-cost approach for analyzing mobile 3D eye tracking data using computer vision and augmented reality technology. In Proceedings of the Symposium on Eye Tracking Research & Applications (2014), ACM, pp. 369–376. [PRPL16] PFEIFFER T., RENNER P., PFEIFFER-LESSMANN N.: EyeSee3D 2.0: Model-based real-time analysis of mobile eye-tracking in static and dynamic three-dimensional scenes. In Proceedings of the Symposium on Eye Tracking Research & Applications (2016), ACM, pp. 189–196. [PS00] PRIVITERA C., STARK L.: Algorithms for defining visual regions-of-interest: Comparison with eye fixations. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 9 (2000), 970–982. [PSF*13a] PALETTA L., SANTNER K., FRITZ G., HOFMANN A., LODRON G., THALLINGER G., MAYER H.: A computer vision system for attention mapping in SLAM based 3D models. In ArXiv e-prints (2013), pp. 1–8. [PSF*13b] PALETTA L., SANTNER K., FRITZ G., MAYER H., SCHRAMMEL J.: 3D attention: Measurement of visual saliency using eye tracking glasses. In Proceedings of CHI Extended Abstracts on Human Factors in Computing Systems (2013), ACM, pp. 199– 204. [Pur97] PURCHASE H. C.: Which aesthetic has the greatest effect on human understanding? In Proceedings of Graph Drawing (1997), Springer, pp. 248–261. [RAM*05] R¨AIHA¨ K.-J., AULA A., MAJARANTA P., RANTALA H., KOIVUNEN K.: Static visualization of temporal eye-tracking data. In Proceedings of Human-Computer Interaction-INTERACT (2005), M. F. Costabile and F. Patern`o (Eds.), vol. 3585, Springer, pp. 946–949.


[Ray98] RAYNER K.: Eye movements in reading and information processing: 20 years of research. Psychological Bulletin 124, 3 (1998), 372–422. [RCE12] RASCHKE M., CHEN X., ERTL T.: Parallel scan-path visualization. In Proceedings of the Symposium on Eye Tracking Research & Applications (2012), ACM, pp. 165–168.

[SLHR09] SIIRTOLA H., LAIVO T., HEIMONEN T., RÄIHÄ K.-J.: Visual perception of parallel coordinate visualizations. In Proceedings of the International Conference on Information Visualization (2009), IEEE Computer Society Press, pp. 3–9. [SM07] SPAKOV O., MINIOTAS D.: Visualization of eye gaze data using heat maps. Electronics and Electrical Engineering 2, 74 (2007), 55–58.

[RD05] RICHARDSON D. C., DALE R.: Looking to understand: The coupling between speakers’ and listeners’ eye movements and its relationship to discourse comprehension. Cognitive Science 29, 6 (2005), 1045–1060.

[SM13] SMITH T. J., MITAL P. K.: Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes. Journal of Vision 13, 8 (2013), 1–24.

[RHB*14] RASCHKE M., HERR D., BLASCHECK T., BURCH M., SCHRAUF M., WILLMANN S., ERTL T.: A visual approach for scan path comparison. In Proceedings of the Symposium on Eye Tracking Research & Applications (2014), ACM, pp. 135–142.

[SMI14] SMI: SMI BeGaze eye tracking analysis software, 2014. http://www.smivision.com/en/gaze-and-eye-trackingsystems/products/begaze-analysis-software.html, Last accessed on October 1, 2016.

[RHOL13] RISTOVSKI G., HUNTER M., OLK B., LINSEN L.: EyeC: Coordinated views for interactive visual data exploration of eyetracking data. In Proceedings of the International Conference on Information Visualization (2013), IEEE Computer Society Press, p. 239.

[SND10a] STELLMACH S., NACKE L., DACHSELT R.: 3D attentional maps: Aggregated gaze visualizations in three-dimensional virtual environments. In Proceedings of the International Conference on Advanced Visual Interfaces (2010), ACM, pp. 345– 348.

[RLMJ05] ROSENHOLTZ R., LI Y., MANSFIELD J., JIN Z.: Feature congestion: A measure of display clutter. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2005), ACM, pp. 761–770.

[SND10b] STELLMACH S., NACKE L., DACHSELT R.: Advanced gaze visualizations for three-dimensional virtual environments. In Proceedings of the Symposium on Eye Tracking Research & Applications (2010), ACM, pp. 109–112.

[RSW*16] RASCHKE M., SCHMITZ B., WÖRNER M., ERTL T., DIEDERICHS F.: Application design for an eye tracking analysis based on visual analytics. In Proceedings of the Symposium on Eye Tracking Research & Applications (2016), ACM, pp. 311–312.

[SNDC09] STELLMACH S., NACKE L., DACHSELT R., CRAIG L.: Trends and techniques in visual gaze analysis. In ArXiv e-prints (2009), pp. 89–93.

[RTSB04] RAMLOLL R., TREPAGNIER C., SEBRECHTS M., BEEDASY J.: Gaze data visualization tools: Opportunities and challenges. In Proceedings of the International Conference on Information Visualization (2004), IEEE Computer Society Press, pp. 173–180. [RVM12] RODRIGUES R., VELOSO A., MEALHA O.: A television news graphical layout analysis method using eye tracking. In Proceedings of the International Conference on Information Visualization (2012), IEEE Computer Society Press, pp. 357–362. [SD04] SANTELLA A., DECARLO D.: Robust clustering of eye movement recordings for quantification of visual interest. In Proceedings of the Symposium on Eye Tracking Research & Applications (2004), ACM, pp. 27–34. [SG00] SALVUCCI D. D., GOLDBERG J. H.: Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the Symposium on Eye Tracking Research & Applications (2000), ACM, pp. 71–78. [Shn96] SHNEIDERMAN B.: The eyes have it: A task by data type taxonomy for information visualizations. In Proceedings of the IEEE Symposium on Visual Languages (1996), IEEE Computer Society Press, pp. 336–343.

[Spa08] SPAKOV O.: iComponent - Device-independent Platform for Analyzing Eye Movement Data and Developing Eye-Based Applications. PhD thesis, University of Tampere, 2008. [SPK86] SCINTO L., PILLALAMARRI R., KARSH R.: Cognitive strategies for visual search. Acta Psychologica 62, 3 (1986), 263–292. [SR08] SPAKOV O., R¨AIHA¨ K.-J.: KiEV: A tool for visualization of reading and writing processes in translation of text. In Proceedings of the Symposium on Eye Tracking Research & Applications (2008), ACM, pp. 107–110. [SSF*11] SCHULZ C. M., SCHNEIDER E., FRITZ L., VOCKEROTH J., HAPFELMEIER A., BRANDT T., KOCHS E. F., SCHNEIDER G.: Visual attention of anaesthetists during simulated critical incidents. British Journal of Anaesthesia 106, 6 (2011), 807–813. [SYKS14] SONG H., YUN J., KIM B., SEO J.: GazeVis: Interactive 3D gaze visualization for contiguous cross-sectional medical images. IEEE Transactions on Visualization and Computer Graphics 20, 5 (2014), 726–739. [TAK*05] TORY M., ATKINS S., KIRKPATRICK A., NICOLAOU M., YANG G.-Z.: Eyegaze analysis of displays with combined 2D and 3D views. In Proceedings of the IEEE Symposium on Information Visualization (2005), IEEE Computer Society Press, pp. 519– 526.


[TC05] THOMAS J., COOK K.: Illuminating the Path. IEEE Computer Society Press, Los Alamitos, 2005.



[TCSC13] TOKER D., CONATI C., STEICHEN B., CARENINI G.: Individual user characteristics and information visualization: Connecting the dots through eye tracking. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2013), ACM, pp. 295–304.

[vLKS*11] VON LANDESBERGER T., KUIJPER A., SCHRECK T., KOHLHAMMER J., VAN WIJK J., FEKETE J.-D., FELLNER D.: Visual analysis of large graphs: State-of-the-art and future research challenges. Computer Graphics Forum 30, 6 (2011), 1719–1749.

[TKS*10] TAKEMURA K., KOHASHI Y., SUENAGA T., TAKAMATSU J., OGASAWARA T.: Estimating 3D point-of-regard and visualizing gaze trajectories under natural head movement. In Proceedings of the Symposium on Eye Tracking Research & Applications (2010), ACM, pp. 157–160.

[WFE*12] WEIBEL N., FOUSE A., EMMENEGGER C., KIMMICH S., HUTCHINS E.: Let’s look at the cockpit: Exploring mobile eyetracking for observational research on the flight deck. In Proceedings of the Symposium on Eye Tracking Research & Applications (2012), ACM, pp. 107–114.

[TM04] TORY M., MÖLLER T.: Rethinking visualization: A high-level taxonomy. In IEEE Symposium on Information Visualization (2004), IEEE Computer Society Press, pp. 151–158.

[WHRK06] WEST J., HAAKE A., ROZANSKI E. P., KARN K.: eyePatterns: Software for identifying patterns and similarities across fixation sequences. In Proceedings of the Symposium on Eye Tracking Research & Applications (2006), ACM, pp. 149–154.

[Tob08] Tobii Technology AB: Tobii Studio user's manual version 3.4.5, 2008. http://www.tobiipro.com/siteassets/tobii-pro/usermanuals/tobii-pro-studio-user-manual.pdf/?v=3.4.5, Last accessed on October 1, 2016. [TTS10] TSANG H. Y., TORY M., SWINDELLS C.: eSeeTrack - Visualizing sequential fixation patterns. IEEE Transactions on Visualization and Computer Graphics 16, 6 (2010), 953–962. [Ver99] VERTEGAAL R.: The GAZE groupware system: Mediating joint attention in multiparty communication and collaboration. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (1999), ACM, pp. 294–301. [VH96] VELICHKOVSKY B. M., HANSEN J. P.: New technological windows into mind: There is more in eyes and brains for human-computer interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (1996), ACM, pp. 496–503.

[Woo02] WOODING D. S.: Fixation maps: Quantifying eyemovement traces. In Proceedings of the Symposium on Eye Tracking Research & Applications (2002), ACM, pp. 31–36. [Yar67] YARBUS A. L.: Eye Movements and Vision. Plenum Press, 1967. [YKSJ07] YI J. S., KANG Y.-a., STASKO J., JACKO J.: Toward a deeper understanding of the role of interaction in information visualization. IEEE Transactions on Visualization and Computer Graphics 13, 6 (2007), 1224–1231. [YS75] YOUNG L. R., SHEENA D.: Survey of eye movement recording methods. Behavior Research Methods & Instrumentation 7, 5 (1975), 397–429.
