Semantic Depth of Field

Robert Kosara∗  Silvia Miksch†
Vienna University of Technology, Austria (http://www.ifs.tuwien.ac.at/)

Helwig Hauser‡
VRVis Research Center, Austria (http://www.vrvis.at/)

Abstract

Pointing the user to the parts of a visual display that are currently the most relevant is an important task in information visualization. The same problem exists in photography, where a number of solutions have been known for a long time. One of these methods is depth of field (DOF), which depicts objects in or out of focus depending on their distance from the camera. We propose the generalization of this idea to a concept we call semantic depth of field (SDOF), where the sharpness of objects is controlled by their current relevance, rather than their distance. This enables the user to quickly switch between different sets of criteria without having to get used to a different visualization layout or geometric distortion. We present several visual metaphors based on SDOF and show examples of their use. Because DOF is widely used in photography and cinematography, SDOF should be easy to understand for most users.

CR Categories: I.3.6 [Methodology and Techniques]: Interaction techniques

Keywords: Information Visualization, Focus and Context, Non-Realistic Rendering
1 Introduction
Blur is often regarded as an imperfection of images, and in some respects it is. But blur can also enhance an image. One example is the depth of field (DOF) effect used in photography [1] and cinematography [14]: scene elements that are out of focus are blurred according to their distance from the plane of focus. As a consequence, the viewer's attention is automatically guided to the most important parts of the image. With portraits (Fig. 1, left), for example, the person of interest is immediately perceived as the most relevant part of the image, because the person is depicted sharply (in focus) whereas all the context (in the background as well as in the foreground) is blurred.

In this paper, we present a model, which we call Semantic Depth of Field (SDOF), for using blur as a focus and context (F+C) technique in computer graphics, especially in visualization. In general, F+C techniques are methods for enhancing certain scene elements of interest while still using the rest as (non-distracting) context. Many of them are distortion-oriented [19], i.e., the parts of interest are enlarged while other parts are scaled down to fit into the remaining image space. Prominent examples are fisheye views [7, 26], hyperbolic trees [17, 16], stretchable rubber sheets [27], etc. Other approaches, for instance magic lenses [28] or toolglasses [3], use separate visualization techniques for relevant objects and for the context.

∗[email protected]  †[email protected]  ‡[email protected]
Figure 1: A portrait as an example of DOF in photography (left) – the blurred objects surrounding the face are hardly noticeable. An example of preattentive processing: the object that differs from all others is immediately perceived (top right); text is another example (bottom right).
In contrast to existing F+C techniques, the SDOF model uses blur to de-emphasize context objects. Unlike DOF in photography, where the blurring of objects depends solely on their spatial depth within the scene, SDOF blurs objects depending on the semantics of the scene.

From the theory of human perception [8] we know that human focusing works as follows: first, in order to look at a particular object of interest, the eye (and usually also the head) is brought into a position and orientation such that the object of interest is projected onto the fovea centralis, the area of most acute vision. Then the lens of the eye focuses so that the image of the object of interest is depicted sharply on the retina. From photography, we know that this procedure can be reversed: by depicting parts of an image sharply (and the rest of the image blurred), the viewer automatically focuses on that particular part while ignoring all the rest, at least at first glance (Fig. 1, left).

One important property of this aspect of human vision, i.e., automatically discarding blurred parts of perceived images, is that it takes place in a very short time, within a fraction of a second. Processes like these – called preattentive processes [29] – are performed in a highly parallel way; no serial search is needed. In Fig. 1 (top right), for example, the viewer immediately perceives the one object that is different from all others; there is no need to examine all objects one after the other. A small number of visual features are preattentively processed, for example closure, line orientation, and hue. This allows not only
for very quick perception, but also for fast estimation of feature quantities and for rapid perception of areas where features cluster. In this paper, we follow other recent examples [10, 12, 30] of exploiting perceptual psychology for computer graphics and visualization by using preattentive processing as a strong argument in favor of our SDOF approach.
2 Related Work
There have been surprisingly few attempts to use DOF or blur in visualization at all; the ones relevant to this work are briefly summarized here.

In a system for the display of time-dependent cardio-vascular data [32], a stereoscopic 3D display is included that is controlled by the viewer's eyes. As with a microscope, only one thin slice through the data appears sharp; all others are blurred and therefore almost invisible. Eye-tracking equipment determines what the user is looking at, and that point is brought into focus. This makes it possible to concentrate on one detail without the surrounding structures confusing the viewer. Later work [33] describes "non-linear depth cues": structures that are currently of interest (such as single organs) are displayed in focus and other objects out of focus, based not on their distance from the camera but on their importance. This amounts to a semantic use of depth of field.

The Macroscope [20] is a system for displaying several zoom levels of information in the same display space. For this purpose, the images of all levels are drawn over each other, with the more detailed ones drawn "in front", i.e., over the less magnified layers. The layers' transparency can be changed so that the background (context) is more or less visible. In order to make the background less distracting, blur is used for the front-most images, which show the coarsest data.

The existing approach most interesting for this work is a display of geographical information [4]. In this system, up to 26 layers of information can be displayed at the same time. Each layer has an interest level associated with it that the user can change. The interest level is realized as a combination of blur and transparency, making less interesting layers more blurred and more transparent at the same time. This work does not seem to have been followed up on recently.

Also interesting in comparison to this work is another F+C technique, a system for visualizing geographical data [22] that uses color saturation to show different types of data for the same geographical area. Different cities, hospitals, pharmacies, etc. can be viewed by "lighting them up" with brighter and more saturated colors than other parts of the image. Here, too, preattentive processing is exploited for fast perception.

All the described approaches use blur only in a very limited way. None of them presents a thorough model, links the work to perceptual psychology, or shows the broad applicability of SDOF.
3 Depth of Field and Camera Models
In computer graphics, the pinhole camera is the simplest and therefore most common model in use (Fig. 2, top left). A pinhole camera consists of a light-tight box that contains a piece of film and has an infinitesimally small hole in the side opposite the film. Through the pinhole, each point on the film is hit by exactly one light ray, and therefore the image is perfectly sharp everywhere. In photography (and photo-realistic rendering), cameras usually contain lenses or lens systems. A lens has the advantage of collecting more light than a hole (Fig. 2, top right), allowing for shorter exposure times. An inherent property of lens-based cameras is the DOF effect – scene elements are depicted sharply only if they are located near the plane of focus.
Figure 2: Camera models. The pinhole camera model (top left) and the thin lens model (top right). The focal length f is the distance from the lens at which light rays parallel to the lens axis are focused (top right). A smaller effective lens diameter D leads to smaller circles of confusion for out-of-focus points (bottom).
If a lens has focal length f, and an object is at distance u from the lens, then a sharp image of that object is projected at distance v from the lens (thin lens equation [18]):

$$ \frac{1}{u} + \frac{1}{v} = \frac{1}{f} \qquad (1) $$

While focusing on a scene object at distance z̃_focus from the image plane, f is fixed, and the lens is moved such that its distance v from the film as well as its distance u = z̃_focus − v from the object¹ satisfy Eq. 1 (Fig. 2, top right). Points of out-of-focus objects are not depicted as points, but as discs (the so-called circles of confusion, CoC [1, 18]). The size of a CoC depends on the distance of the point from the plane of focus and on the aperture setting (also called f-stop). The aperture effectively makes the lens diameter D larger or smaller; a larger aperture number leads to smaller CoCs (see Eq. 2 and Fig. 2, bottom).

In practice, infinitesimally small points are not needed for a perfectly sharp image. Instead, depending on the magnification of the negative or slide as well as the viewing distance, there is a maximum acceptable CoC diameter below which the image is perceived as sharp. The extent of the space in which the image appears sharp is called depth of field (DOF). Assuming the camera parameters v and f to be fixed during one shot, we can calculate the CoC diameter b for any depicted point, depending on its distance z from the lens and the aperture setting a:

$$ b = D\,\frac{f\,(z_{focus} - z)}{z_{focus}\,(z - f)}, \qquad D = \frac{f}{a}, \qquad z_{focus} = \frac{v f}{v - f} \qquad (2) $$
1 In optics, distances are measured from the lens, while in photography and computer graphics they are measured from the film plane. For the sake of simplicity, we will use distances measured from the lens in this paper, and mark distances measured from the image plane with a tilde ˜.
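To make Eq. 2 concrete, the following Python sketch computes the CoC diameter of a point at distance z from the lens. It is illustrative only; the function name, parameter names, and sample values are ours, not from the paper.

```python
# Circle-of-confusion diameter from the thin-lens model (Eq. 2).

def coc_diameter(z, v, f, a):
    """Blur disc diameter b for a point at distance z from the lens.

    v: lens-to-film distance, f: focal length, a: aperture (f-stop).
    All distances in the same unit; z is measured from the lens.
    """
    D = f / a                    # effective lens diameter (Eq. 2)
    z_focus = v * f / (v - f)    # object distance that is in perfect focus
    # Eq. 2 is signed (in front of vs. behind the focus plane);
    # the CoC size is its magnitude.
    return abs(D * f * (z_focus - z) / (z_focus * (z - f)))

# With v = 51.28 and f = 50, the focus plane lies at roughly z = 2000:
print(coc_diameter(z=2000.0, v=51.28, f=50.0, a=2.8))  # ~0: nearly in focus
print(coc_diameter(z=5000.0, v=51.28, f=50.0, a=2.8))  # clearly out of focus
```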
4 SDOF – Semantic Depth of Field
The basic idea of SDOF can be summarized as follows: given a certain 2D visualization of some data (for example, a map of a city), and given a certain user criterion that determines which parts of the visualization are currently more relevant than others, we can extend the given 2D scene into 3D by assigning z-values to the scene objects according to the user criterion. We choose the depth values such that, with respect to DOF, relevant objects lie in the vicinity of the plane of focus, whereas irrelevant objects lie either in front of or behind it. Consequently, rendering the 3D scene with a photo-realistic camera as described above results in relevant objects being depicted sharply and irrelevant objects being shown in a blurred style (Fig. 3).
Figure 3: The basic idea of SDOF: applying DOF individually to scene objects depending on their semantics.
When rendering images with SDOF, we can build on existing components such as camera models and transformations, as shown in Fig. 4. One input to SDOF is the spatial arrangement of scene objects as specified by the application – note that this may be trivial when inherent spatial locations exist, as is usual in photo-realistic rendering, or a complex task in itself, as is usual in information visualization. Another important building block of SDOF is the relevance mapping: scene objects are assigned importance values according to a user criterion. These building blocks are described in more detail in the following sections; a more formal summary of the steps can be found in Tab. 1.
Figure 4: Building blocks for SDOF: spatial arrangement of visualization, relevance criterion, and camera model (see also Tab. 1).
Figure 5: Two functions are used to map objects to blur diameters. This makes independent control of semantic and visualization parameters possible.
4.1 Spatial Arrangement
One input SDOF requires from the application in use is a spatial arrangement of the data items; both 2D and 3D arrangements can be combined with SDOF. We call the function that maps data points to locations place; it is responsible for producing a layout of the data that the user will understand.

Where data items inherently exhibit spatial locations anyway, this part becomes trivial. In many cases, however, especially in information visualization, the data to be depicted does not have any inherent spatial structure, and therefore there is, in principle, significant freedom in placing data items in visualization space. In database visualization, for example, usually no inherent spatial ordering of rows and columns is present; how to arrange the data items is instead an integral part of the visualization procedure. Usually, the spatial arrangement chosen by a visualization algorithm tries to reflect the distances between data items with regard to a certain similarity metric. Automatic layout algorithms are used to optimize, for example, the drawing of the nodes of a graph [6].

When data items do not have any inherent spatial structure and some synthetic layout therefore has to be chosen for the visualization, the user needs to form a so-called mental map of the visualization, i.e., learn the layout in order to be able to work with it. As a consequence, major changes to the data layout must be avoided as much as possible. Therefore, when the relevance of data items changes during a visualization session (which is the usual case), other techniques for enhancing objects of interest, such as SDOF, are required.
4.2 Relevance and Blurring
The model so far lacks a way of pointing out relevant objects without changing the layout of the visualization. We therefore introduce another mapping, called rel, which assigns relevance values to data items. The result of rel is a value in the range [0, 1], where 1 represents maximal relevance. Examples of relevance functions are selections (objects that fulfill certain criteria are assigned a relevance of 1, whereas all other data items are assigned 0), similarity to a reference object (possibly yielding a continuous measure of relevance), the age of a data value, etc.

In addition to rel, another function named blur is needed to translate relevance values into CoC diameters, i.e., positive blur levels. It would of course be possible to map data values directly to blur factors, but separating the mapping from data space from the visualization parameters gives the user more direct and independent control (see Fig. 5). For our purposes, blur factors are always measured in pixels: a value of 1 or below denotes a perfectly sharp depiction of an object, and any larger value makes the image more and more blurred. Usually, the user needs to decide which blur function best reflects his or her relevance metric. A few examples of such functions are shown in Fig. 6; the different types of functions are discussed in Sect. 5.
Figure 6: Examples of different blur functions mapping relevance values r to blur factors b: a) everything sharp, b) binary blur, c) discrete blur levels, d) continuous blur, e) continuous blur with step, f) exponential blur (for discussion, see Sect. 5).
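The separation of rel and blur described in Sect. 4.2 can be sketched in a few lines of Python. The selection predicate, the similarity measure, and all constants below are made-up examples, not part of the paper's model beyond what the text describes.

```python
# Separating relevance (rel) from its visual mapping (blur).

def rel_selected(item, predicate):
    """Binary relevance: 1 for items matching the user's criterion, else 0."""
    return 1.0 if predicate(item) else 0.0

def rel_similarity(item, reference, max_dist):
    """Continuous relevance in [0, 1] from distance to a reference object."""
    return max(0.0, 1.0 - abs(item - reference) / max_dist)

def blur_step(r, threshold=0.5, max_blur=8.0):
    """Map relevance to a blur diameter in pixels (<= 1 means sharp)."""
    return 1.0 if r >= threshold else max_blur

# Swapping rel or blur changes the picture independently of the layout:
for item in (3, 7, 8, 15):
    r = rel_similarity(item, reference=8, max_dist=10)
    print(item, r, blur_step(r))
```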
4.3 Viewing and Camera Models
Depending on whether the visualization space is two- or three-dimensional, different camera models can be used to finally achieve the SDOF effect. The camera provides two functions: camera, which projects data values from an intermediate space (where the information was laid out by the place function) to screen space; and dof, which calculates the blur level of each data item depending on its z coordinate and the z_focus value the camera is currently focused at. In the following, we describe two simple models: we demonstrate that a regular photo-realistic camera can be used in the 2D case, and for 3D we present a so-called adaptive camera.
4.3.1 2D SDOF and Photorealistic Camera
In the 2D case, objects get an additional coordinate besides their x and y values: the blur disc diameter b. The objects are then moved in the z direction so as to be properly blurred by the camera. If the camera is focused at z_focus, an object with intended blur b has to be moved to a distance z from the camera:

$$ b = \mathit{dof}_p(z, z_{focus}) = D\,\frac{f\,(z_{focus} - z)}{z_{focus}\,(z - f)} \qquad (3) $$

$$ z = \mathit{dof}_p^{-1}(b, z_{focus}) = \frac{D + b}{D/z_{focus} + b/f} \qquad (4) $$

(D is the effective lens diameter as defined in Sect. 3, and f is the focal length of the lens in use.)

The above equations apply to camera models such as distribution ray tracing [5], linear post-filtering [24], light field rendering [11], etc. Other cameras can of course be based on different (mathematically more complex) models [18], or be entirely different, like extended cameras [21, 25]. If the camera uses perspective projection, objects also have to be scaled to compensate for depth effects, which are not desirable in this case.
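As a small illustration of Eq. 4, a Python sketch of dof_p^{-1} follows; the symbols match the text, while the sample numbers are arbitrary.

```python
# Eq. 4: depth at which a standard DOF camera produces blur diameter b.

def dof_p_inverse(b, z_focus, D, f):
    """Distance z from the lens at which an object gets blur diameter b."""
    return (D + b) / (D / z_focus + b / f)

D, f, z_focus = 18.0, 50.0, 2000.0
# Sharp objects (b -> 0) stay exactly on the focus plane:
print(dof_p_inverse(0.0, z_focus, D, f))  # == z_focus
# A positive b moves the object in front of the focus plane:
print(dof_p_inverse(4.0, z_focus, D, f))  # well below z_focus
```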
4.3.2 3D SDOF and Adaptive Camera
In the 3D case it is, of course, not possible to directly map blur factors to depth values (and therefore to use a constant z_focus together with the standard camera described above) – the spatial arrangement of the data items already occupies the third dimension. However, a simple extension of the photo-realistic camera also handles the 3D case. The adaptive camera is a modification of the photo-realistic camera that can change its focus for every object point to be rendered. This is easily done with object-order rendering, but can also be achieved when rendering in image order. In contrast to the photo-realistic camera, the adaptive camera can render SDOF in both 2D and 3D scenes; the photo-realistic camera is, in fact, a special case of the adaptive camera (one that simply stays focused at the same distance for the whole image). The function dof_a is defined like dof_p in Eq. 3. Different from the 2D case, the inversion of dof_a must now be resolved for z_focus:

$$ b = \mathit{dof}_a(z, z_{focus}) = \mathit{dof}_p(z, z_{focus}) \qquad (5) $$

$$ z_{focus} = \mathit{dof}_a^{-1}(b, z) = \frac{D}{(D + b)/z - b/f} \qquad (6) $$
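Analogously, Eq. 6 can be sketched as follows; again the names follow the text's symbols, and the sample values are arbitrary.

```python
# Eq. 6 for the adaptive camera: the per-object focus distance that yields
# the desired blur b for an object at depth z.

def dof_a_inverse(b, z, D, f):
    """Focus distance z_focus so that an object at depth z gets blur b."""
    return D / ((D + b) / z - b / f)

D, f = 18.0, 50.0
# An object that should be sharp gets z_focus equal to its own depth:
print(dof_a_inverse(0.0, z=150.0, D=D, f=f))  # == 150.0
# A blurred object gets a focus plane away from its depth:
print(dof_a_inverse(6.0, z=150.0, D=D, f=f))  # == 450.0
```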
A special case of the adaptive camera is splatting [13, 31], a volume visualization technique that is also used in information visualization. By changing the size of the splat kernel depending on the b value of a data point, SDOF can be implemented easily. Another special case is pre-blurred billboards [23]: objects are rendered into memory, the images are then blurred and put onto polygons in the scene.
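A sketch of the pre-blurred billboard idea with SciPy: the sprite image is blurred by its CoC diameter before being used as a texture. The Gaussian filter and the diameter-to-sigma conversion are our own rough choices; the paper does not prescribe a particular filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preblur_billboard(rgba, b):
    """Blur an RGBA sprite (H x W x 4 float array) by blur diameter b pixels."""
    if b <= 1.0:               # <= 1 pixel counts as perfectly sharp
        return rgba
    sigma = b / 2.0            # crude mapping from disc diameter to sigma
    out = np.empty_like(rgba)
    for c in range(4):         # blur color AND alpha, so edges become
        out[..., c] = gaussian_filter(rgba[..., c], sigma)  # semi-transparent
    return out

# Because blurred edges are semi-transparent, billboards must be depth-sorted
# and composited back to front (as noted in Sect. 7).
```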
5 Parameterization
When using SDOF, the user can control two different functions: rel and blur. The former is application-specific and has to be provided by the developers of the visualization. For the blur function, there are no restrictions other than that it must be monotonically decreasing (so that a more relevant object is always depicted at least as sharply as a less relevant one). A few typical functions can nevertheless be distinguished and are briefly discussed here.

The simplest case is a constant function (Fig. 6a). If its value is 1, it leads to a completely sharp image without any SDOF effect. A higher value yields a uniformly blurred image, which is generally not desirable.

A step function (Fig. 6b) makes a binary classification of objects possible. Moving the r value at which the step occurs changes the threshold and thus shows a different set of objects in focus. Changing the height of the step (i.e., the b value for r below the step) shows irrelevant objects more or less out of focus; depending on the application, it can be desirable to still recognize currently irrelevant objects, or to make them almost disappear. Such a function could be used for a classification where the relevance of objects depends on their age: changing the threshold r value clearly distinguishes between objects younger and older than the age corresponding to the threshold. The chessboard visualization in Fig. 7, for example, also uses a step function.
2D SDOF:
  data[i] --place2D--> (x̂, ŷ)
  data[i] --rel--> r --blur--> b = blur(r)
  (x̂, ŷ, b) --view2D--> (x, y, z = dof_p^{-1}(b, z_focus)) --camera_p--> (x̄, ȳ)

3D SDOF:
  data[i] --place3D--> (x̂, ŷ, ẑ)
  data[i] --rel--> r --blur--> b = blur(r)
  (x̂, ŷ, ẑ, b) --view3D--> (x, y, z; z_focus = dof_a^{-1}(b, z)) --camera_a--> (x̄, ȳ)

Table 1: All steps necessary for visualizing data values data[i] with 2D (top) and 3D SDOF (bottom).
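Read as code, the 2D row of Tab. 1 chains these steps as follows; place2d, rel, and blur_fn stand for the application-supplied functions and are hypothetical names.

```python
# The 2D SDOF pipeline of Tab. 1 as one function chain.

def sdof_2d(data, place2d, rel, blur_fn, z_focus, D, f):
    """Yield (x, y, z) positions ready for a standard DOF camera."""
    for item in data:
        x, y = place2d(item)                 # spatial arrangement (Sect. 4.1)
        b = blur_fn(rel(item))               # relevance -> blur (Sect. 4.2)
        z = (D + b) / (D / z_focus + b / f)  # Eq. 4: dof_p^{-1}(b, z_focus)
        yield x, y, z                        # camera_p projects to screen
```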
If the function contains several steps (Fig. 6c), it provides a classification into several groups. If the user is able to change the width and height of each step, very fine-grained control of the resulting image is possible. Using this function for objects whose relevance depends on their age, each step could represent a whole day, making it possible to see how old objects are while more recent objects are shown more clearly.

A continuous linear function (Fig. 6d) is especially suited to applications where differences between features are of interest. Making the relevance of objects correspond to their distance in one of the features makes it easy to find similar or different objects. Another application is a layered 2D view, where the user can change the distance between layers, and thus their relevance, which is then directly mirrored in their blur (a metaphor that very closely resembles focusing with a camera). The user needs control over the gradient of the function and over the r value above which the function is constant. A linear function is used in the map viewer example in Fig. 8.

If it is important not only to see the difference between objects, but also to see precisely which objects are different (even if they are very similar), a step can be included in the otherwise continuous, linear function (Fig. 6e). This makes it possible to keep the gradient of the function small, but still see precisely which objects are equal (or within a certain range) and which are not.

A function that is somewhat similar to the continuous linear function, but stresses larger differences more, is the exponential blur (Fig. 6f). It is most useful where small differences are not very significant, but larger differences become more and more important. This function can of course be combined with a step, linear sections, etc.
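The blur-function shapes of Fig. 6 are easy to express as small closures; the following Python sketch uses illustrative constants throughout. Each closure maps relevance r in [0, 1] to a blur diameter and is monotonically non-increasing, as required above.

```python
import math

def step(threshold=0.5, height=8.0):                    # Fig. 6b
    return lambda r: 1.0 if r >= threshold else height

def multi_step(edges=(0.25, 0.5, 0.75),                 # Fig. 6c
               levels=(12.0, 8.0, 4.0, 1.0)):
    def f(r):
        for edge, level in zip(edges, levels):
            if r < edge:
                return level
        return levels[-1]
    return f

def linear(max_blur=10.0, r_sharp=1.0):                 # Fig. 6d
    return lambda r: max(1.0, max_blur * (1.0 - min(r, r_sharp) / r_sharp))

def exponential(max_blur=10.0, k=4.0):                  # Fig. 6f
    return lambda r: 1.0 + (max_blur - 1.0) * math.exp(-k * r)
```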
6 Applications
The following applications are some of the possible uses of the model described above. We distinguish three different classes purely for ease of implementation; the model contains all three of them, as well as any combination.
6.1 2D SDOF
When displaying information in 2D, it is possible to use blur to show a selection or any other distance function. This does not have much in common with DOF as used in photography (which only exists where there is a third dimension), but is useful nonetheless. An example of this application would be a window manager that blurs all screen areas that are currently not in use. A window showing the output of a program, or tracking a communication channel, could thus be blurred so that it does not interfere with other work
currently done by the user. If new messages arrive, the scrolling would still be visible. It would also be possible to bring such a window into focus, thus directly guiding the user's attention to it without popping up a window or otherwise interrupting the user's current task. Another example is a 2D chess board (Fig. 7, left column; see also the video on the accompanying CD-ROM) that shows which pieces threaten a specific piece, or how well a particular square on the board is guarded by other pieces of the same color.
6.2 Layered 2D SDOF
When several 2D layers of information are put on top of each other, the user can be given an intuitive way of choosing how much and which information to display crisply.

Independent Layers. A number of 2D depictions at the same level of detail are put one above the other, like floor plans in architecture drawn on tracing paper. The layers are translucent (the level of translucency can be changed), so that the other layers can be seen through the ones on top. Any subset of these layers can be rendered out of focus, so that the information on the in-focus layers becomes much more dominant. For a user interface based on layering inspired by transparent paper [2], the idea of wiggling the different layers was presented so that the layers can be discriminated. We believe that SDOF can show this effect much more effectively, and also provides many other means of interaction.

Stacked Layers. Using the same topology as independent layers, this mode is more closely related to the photographic metaphor. Only a subset of neighboring layers is in focus; all other layers are blurred according to their distance from the nearest in-focus layer (see the sketch below). An example of layered 2D SDOF is a map viewer that allows many layers of geographic information (streets, mountains, rivers, telephone lines, weather data, population data, etc.) to be displayed at the same time (Fig. 8). The user can select which information is shown and how sharply, thus focusing on certain information while still getting the context of the depiction.

Hierarchical Layers. If data from several levels (semantic levels or different magnification levels) is combined into one image, the different levels can be integrated more easily if there is a smooth focus change between layers with different detail levels while the image is zoomed. It is thus possible to immediately see the correspondences between objects on different layers, without having to switch back and forth between them. Magnification and blur can change simultaneously or independently of each other, depending on the user's needs. Unlike the Macroscope (see Sect. 2), the different magnification levels are not drawn at the same size over each other, but maintain their relative sizes.
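A minimal sketch of the stacked-layers blur assignment, assuming one fixed blur increment per layer of distance from the nearest in-focus layer; the constants are illustrative.

```python
# Stacked layers: blur each layer by its distance to the nearest
# in-focus layer.

def layer_blur(n_layers, in_focus, per_layer_blur=3.0):
    """Blur diameter per layer; in_focus is a set of sharp layer indices."""
    return [1.0 + per_layer_blur * min(abs(i - j) for j in in_focus)
            for i in range(n_layers)]

# Focusing on the 'rivers' layer (index 2) of a five-layer map:
print(layer_blur(5, in_focus={2}))  # [7.0, 4.0, 1.0, 4.0, 7.0]
```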
Figure 7: Using SDOF in chess: a) the chessboard as it is known – all pieces have a relevance r = 1; b) which chessmen cover the white knight on e3? – the white pieces covering the knight (and the knight itself) have r = 1, all others r = 0; c) which chessmen threaten the white knight on e3? – the black chessmen threatening the knight (and the knight itself) have r = 1, all others r = 0 (from Garry Kasparov vs. Deep Blue, "IBM Kasparov vs. Deep Blue: The Rematch", May 3, 1997, after the 23rd move).
Figure 8: Maps. Setting the layer containing rivers to r = 1, and adjacent layers to smaller values according to their distance makes the rivers stand out (left); the same can be done with roads (right).
6.3 3D SDOF
The above uses of SDOF are only special cases of 3D SDOF. In 3D, it is possible to shift the focus between any objects, similar to the 2D case. Together with other interactions (navigation, panning, zooming, rotation), the user can have the system point to the objects that meet certain criteria. An example of 3D SDOF is a 3D file system viewer that displays files and directories as objects in 3D space and allows searches and selections, the results of which are displayed by blurring all objects that do not match the criteria. When looking for objects of a certain age, it displays the difference from the searched age with continuous blur levels. Any hierarchical data could be displayed using nested, translucent boxes; these boxes are blurred when the user focuses on their contents, while the contained objects are blurred (and the boxes drawn crisply) when the higher hierarchy level is of interest. This also solves the problem of how to draw these boxes so that they can be distinguished: because their contents are shown (though not in detail), they remain recognizable. The chessboard example cited above in 2D can of course also be extended to 3D (Fig. 7, right column; see also the video on the accompanying CD-ROM).
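A possible relevance function for the age-based file search described above might look as follows; the linear fall-off and the one-week scale are our own assumptions, not from the paper.

```python
# Continuous relevance from file age, as in the file-system example above.

def rel_age(age_days, target_days, scale_days=7.0):
    """1.0 for files of exactly the searched age, falling off with distance."""
    return max(0.0, 1.0 - abs(age_days - target_days) / scale_days)

# Searching for files about 30 days old:
for age in (29, 30, 33, 60):
    print(age, rel_age(age, target_days=30))
```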
7 Implementation
We implemented a chess viewer that can display both 2D and 3D chess boards and show any piece and square at any blur radius. We used billboards for both the 2D and 3D displays: the pieces were first drawn into a buffer, then copied to main memory, blurred, and applied as textures to the billboards. In the 2D case, these textures can be cached and reused (every piece of each color only has to be blurred once per blur diameter). This is not true for the 3D case, where the appearance of a piece depends not only on its blur level but also on the camera position. For the 3D display, the billboards need to be depth-sorted, because the edges of the chess pieces become semi-transparent when blurred and therefore have to be rendered into the frame buffer in the correct order. For the map viewer, we used the accumulation buffer [9] to blur the different map layers, and then composited them using blending directly into the frame buffer. Both implementations are rather slow and in need of optimization.
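The 2D texture-caching scheme described above might be sketched as follows; render_piece and blur_image are assumed helper functions, not a real API.

```python
# Caching pre-blurred 2D piece textures: each (piece, color, blur diameter)
# combination needs to be blurred only once.

_texture_cache = {}

def piece_texture(piece, color, b):
    key = (piece, color, round(b, 1))       # quantize b to reuse textures
    if key not in _texture_cache:
        image = render_piece(piece, color)   # assumed helper: draw to buffer
        _texture_cache[key] = blur_image(image, b)  # assumed helper: blur
    return _texture_cache[key]

# In 3D this cache would be invalid: a billboard's appearance also depends
# on the camera position, so textures are regenerated per frame.
```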
8 Conclusions and Future Work
We have proposed a new method for providing focus and context in visualizations. It exploits an effect well known from photography and cinematography, namely depth of field. Because blur is an intrinsic feature of human vision, selective blur is a highly effective way of pointing users to pieces of information. We believe that SDOF, being a non-distorting focus and context technique, can be incorporated into any existing visualization. It can also be combined with other methods, such as color coding, to point out different dimensions of the information; combining SDOF with distortion-oriented F+C methods is another possibility. SDOF can serve as a "relevance cue" and therefore replace other effects such as fog [15] in visualization. We found that blurring text (Fig. 1) is quite effective, a result we did not anticipate.

How well SDOF is suited to multivariate data visualization, in combination with other features such as color, needs further investigation. We are also working on methods to render SDOF using low-end 3D graphics hardware, to make it usable on standard PCs; one method we believe can be applied is texture mapping, which is particularly fast on most hardware. One obvious problem of SDOF is that it is hard to combine with DOF as a depth cue, so we also want to investigate the implications of this. Finally, measuring blur in pixels has proven to be a bad choice, because it does not work well with printed images, projections, and high-resolution screens. We are therefore working on a measure that takes different target sizes, resolutions, and viewing distances into account, so that the impression of blur is always the same.
9 Acknowledgments
We would like to thank Markus Hadwiger, Lukas Mroz, and Robert F. Tobler for their critique and suggestions, which greatly improved this paper. This work is part of the Asgaard Project, which is supported by the Fonds zur Förderung der wissenschaftlichen Forschung (Austrian Science Fund), grant P12797-INF.
References
[1] A. Adams. The Camera. Little Brown & Company, 1991.
[2] M. Belge, I. Lokuge, and D. Rivers. Back to the future: A graphical layering system inspired by transparent paper. In INTERCHI'93 Conference Companion, pages 129–130, 1993.

[3] E. A. Bier, M. C. Stone, K. Pier, W. Buxton, and T. D. DeRose. Toolglass and magic lenses: The see-through interface. Computer Graphics (Proceedings SIGGRAPH'93), 27(Annual Conference Series):73–80, 1993.

[4] G. Colby and L. Scholl. Transparency and blur as selective cues for complex visual information. In SPIE Vol. 1460, Image Handling and Reproduction Systems Integration, pages 114–125, 1991.

[5] R. L. Cook, T. Porter, and L. Carpenter. Distributed ray tracing. Computer Graphics (Proceedings SIGGRAPH'84), 18(3):137–145, July 1984.

[6] G. di Battista, P. Eades, R. Tamassia, and I. Tollis. Algorithms for drawing graphs: An annotated bibliography. Computational Geometry: Theory and Applications, 4(5):235–282, 1994.

[7] G. W. Furnas. Generalized fisheye views. In M. M. Mantei and P. Orbeton, editors, Proceedings of the ACM Conference on Human Factors in Computing Systems, SIGCHI Bulletin, pages 16–23, New York, U.S.A., 1986. Association for Computing Machinery.

[8] E. B. Goldstein. Wahrnehmungspsychologie: Eine Einführung. Spektrum Akademischer Verlag, 1997.

[9] P. Haeberli and K. Akeley. The accumulation buffer: Hardware support for high-quality rendering. Computer Graphics (Proceedings SIGGRAPH'90), 24(4):309–318, Aug. 1990.

[10] C. G. Healey and J. T. Enns. Large datasets at a glance: Combining textures and colors in scientific visualization. IEEE Transactions on Visualization and Computer Graphics, 5(2):145–167, Apr. 1999.

[11] W. Heidrich, P. Slusallek, and H.-P. Seidel. An image-based model for realistic lens systems in interactive computer graphics. In W. A. Davis, M. Mantei, and R. V. Klassen, editors, Graphics Interface '97, pages 68–75. Canadian Information Processing Society, Canadian Human-Computer Communications Society, May 1997.

[12] I. Herman, G. Melançon, and M. S. Marshall. Graph visualization and navigation in information visualization: A survey. IEEE Transactions on Visualization and Computer Graphics, 6(1):24–43, Jan.–Mar. 2000.

[13] J. Huang, K. Mueller, N. Shareef, and R. Crawfis. FastSplats: Optimized splatting on rectilinear grids. In Proceedings Visualization 2000, Salt Lake City, UT, USA, Oct. 2000. IEEE.

[14] S. D. Katz. Film Directing Shot by Shot: Visualizing from Concept to Screen. Focal Press, 1991.

[15] T. A. Keahey. The generalized detail-in-context problem. In Proceedings IEEE Symposium on Information Visualization 1998, pages 44–51. IEEE, 1998.

[16] M. Kreuseler, N. López, and H. Schumann. A scalable framework for information visualization. In IEEE Symposium on Information Visualization, Salt Lake City, UT, USA, Oct. 2000. IEEE.
[17] J. Lamping, R. Rao, and P. Pirolli. A focus+context technique based on hyperbolic geometry for visualizing large hierarchies. In Proceedings CHI'95. ACM, 1995.

[18] H.-C. Lee. Review of image-blur models in a photographic system using principles of optics. Optical Engineering, 29(5):405–421, May 1990.

[19] Y. K. Leung and M. D. Apperley. A review and taxonomy of distortion-oriented presentation techniques. ACM Transactions on Computer-Human Interaction, 1(2):126–160, June 1994.

[20] H. Lieberman. A multi-scale, multi-layer, translucent virtual space. In IEEE International Conference on Information Visualization, London, Sept. 1997. IEEE.

[21] H. Löffelmann and E. Gröller. Ray tracing with extended cameras. Journal of Visualization and Computer Animation, 7(4):211–228, 1996.

[22] I. Lokuge and S. Ishizaki. GeoSpace: An interactive visualization system for exploring complex information spaces. In Proceedings CHI'95. ACM, 1995.

[23] T. McReynolds and D. Blythe. Advanced graphics programming techniques using OpenGL. SIGGRAPH 2000 Course 32, Course Notes, 2000.

[24] M. Potmesil and I. Chakravarty. A lens and aperture camera model for synthetic image generation. Computer Graphics (Proceedings SIGGRAPH'81), 15(3):297–305, Aug. 1981.

[25] P. Rademacher and G. Bishop. Multiple-center-of-projection images. Computer Graphics (Proceedings SIGGRAPH'98), 32(Annual Conference Series):199–206, 1998.

[26] M. Sarkar and M. H. Brown. Graphical fisheye views. Communications of the ACM, 37(12):73–83, Dec. 1994.

[27] M. Sarkar, S. S. Snibbe, O. J. Tversky, and S. P. Reiss. Stretching the rubber sheet: A metaphor for visualizing large layouts on small screens. In Proceedings of the ACM Symposium on User Interface Software and Technology, pages 81–91, 1993.

[28] M. C. Stone, K. Fishkin, and E. A. Bier. The movable filter as a user interface tool. In Proceedings of ACM CHI'94 Conference on Human Factors in Computing Systems, pages 306–312, 1994.

[29] A. Treisman. Preattentive processing in vision. Computer Vision, Graphics, and Image Processing, 31:156–177, 1985.

[30] C. Ware. Information Visualization: Perception for Design. Morgan Kaufmann Publishers, 2000.

[31] L. Westover. Footprint evaluation for volume rendering. Computer Graphics (Proceedings SIGGRAPH'90), 24(4):367–376, Aug. 1990.

[32] S. E. Wixson. Four-dimensional processing tools for cardiovascular data. IEEE Computer Graphics and Applications, 3(5):53–59, Aug. 1983.

[33] S. E. Wixson. The display of 3D MRI data with non-linear focal depth cues. In Computers in Cardiology, pages 379–380. IEEE, Sept. 1990.