Comparing Disparity Based Label Segregation in Augmented and Virtual Reality

Stephen D. Peterson, Dept. of Science and Technology, Linköping University
Magnus Axholt, Dept. of Science and Technology, Linköping University
Stephen R. Ellis, Human Systems Integration Division, NASA Ames Research Center

Abstract

Recent work has shown that overlapping labels in far-field AR environments can be successfully segregated by remapping them to predefined stereoscopic depth layers. User performance was found to be optimal when setting the interlayer disparity to 5-10 arcmin. The current paper investigates to what extent this label segregation technique, label layering, is affected by important perceptual defects in AR such as registration errors and mismatches in accommodation, visual resolution and contrast. A virtual environment matched to a corresponding AR condition but lacking these problems showed a reduction in average response time of 10%. However, the performance pattern across the label layering parameters was not significantly different between the AR and VR environments, demonstrating the robustness of this label segregation technique against such perceptual issues.
CR Categories: H.5.2 [Information Systems]: User Interfaces; I.3 [Computing Methodologies]: Computer Graphics

Keywords: Label placement, user interfaces, stereoscopic displays, visual clutter, mixed reality.
1 Introduction
The most common method of resolving overlapping labels in Augmented and Virtual Reality (AR and VR) environments is to relocate them on the view plane [Azuma and Furmanski 2003; Bell et al. 2001]. By relaxing the criterion of co-location between labels and the objects they describe, optimization algorithms can, at best, find new label placements for all scene objects without overlap. This approach, however, introduces 2D view plane motion, since optimal label placements are constantly re-evaluated for moving objects, which may disturb or distract the user. Furthermore, it introduces potential confusion as to which label belongs to which object, since labels may be constantly rearranged and leader lines connecting labels to objects may grow unduly long. We are therefore investigating a novel technique, label layering, which separates labels only in stereoscopic depth (see Peterson et al. [2008b; 2008a] for details), allowing labels and the objects they describe to remain registered in the view plane projection.

A previous study by Peterson et al. [2008a] investigated the optimal disparity difference between label layers, the interlayer disparity, and found that a separation of 5-10 arcmin significantly improved performance in a visual search task compared to the default case of displaying all labels in the same layer. Increasing the separation to 20 arcmin conversely degraded performance, likely due to the much larger overall accommodation demand required to scan all labels. That experiment was performed in an outdoor, far-field environment.

The aim of the present experiment is to compare VR and AR in order to determine whether problematic AR environmental parameters, e.g., accommodation/vergence mismatch, background noise, static/dynamic registration error, visual resolution, luminance and contrast, collectively degrade visual search in a display incorporating label layering. Research has shown that changing background texture and luminance values in outdoor AR degrades text legibility [Gabbard et al. 2005], indirectly affecting stereo fusion. Registration errors and mismatches in accommodation, visual resolution, luminance and contrast are all identified as perceptual issues in AR [Drascic and Milgram 1996] that affect system usability.

2 Experiment

Subjects were seated at a rooftop location, with an eye point approximately 10 m above ground level. The overlay graphics were presented to the subjects through a Kaiser ProView 50ST see-through head-mounted display (HMD). At ground level, at a distance of approximately 110±10 m from the subject's position (Figure 1), nine physical markers numbered 1-9 were placed on a grass lawn in a random pattern.
Figure 1: Nine physical markers with superimposed labels.

Although seated, the subjects were free to move their upper body and head. Head tracking data was provided by a 6-camera PhaseSpace Impulse optical tracking system, with six active LED markers placed on the back of the HMD mount. More details about the experimental setup can be found in [Peterson et al. 2008a].

Eight subjects voluntarily participated in the experiment. Before the experiment, each subject was required to pass an Orthorater stereo test and to sign a written consent form.

We employed a task that required integration of information from both the background markers and the overlaid labels. The task was to judge the relative horizontal position of two background markers, where one (“target”) marker was identified
through its overlaid 6-character label and the other (“reference”) was identified through its marker number (1-9) placed on the physical ground seen in the background. Both the target label and reference marker were given in the HMD before each of the 90 trials. An input device with two buttons, left and right, was used to make each horizontal position judgment.
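To make the trial structure concrete, the following is a minimal sketch of how one such trial could be generated and scored; the marker coordinates, label alphabet and function names are hypothetical and not taken from the experiment software.

```python
import random

# Hypothetical ground-plane x coordinates (metres, relative to the line of sight)
# for the nine numbered markers; the real layout was a random pattern on a lawn
# and is not reproduced here.
MARKER_X = {n: random.uniform(-20.0, 20.0) for n in range(1, 10)}

def make_trial():
    """Pick a target marker (identified by a 6-character label) and a distinct
    reference marker (identified by its number 1-9)."""
    target, reference = random.sample(range(1, 10), 2)
    label = "".join(random.choices("ABCDEFGHJKLMNPQRSTUVWXYZ", k=6))
    return {"target": target, "reference": reference, "label": label}

def is_correct(trial, button):
    """A left/right response is correct if it matches the target's horizontal
    position relative to the reference marker."""
    truth = "left" if MARKER_X[trial["target"]] < MARKER_X[trial["reference"]] else "right"
    return button == truth

trial = make_trial()
print(trial, is_correct(trial, "left"))
```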
2.1 Independent Variables
Display Environment (AR, VE)

The display environment determined whether the background markers were real physical objects (AR condition) or virtual elements (VE condition) rendered in the HMD. In the VE condition, the HMD's see-through optics were blocked with opaque plastic; the surrounding real environment was thus invisible and the physical markers were replaced by virtual ones.
Interlayer Disparity (0, 5, 10, 20 arcmin)

Each label was rendered with a certain stereoscopic disparity in the HMD, making the label appear at a distinct depth to the subjects. The depth segregation of the layers was given by the stereoscopic disparity difference between layers, the interlayer disparity. The closest depth layer was fixed at 2.2 m. In all conditions except 0 arcmin, the depth order of labels and background markers was correlated to satisfy the ordered disparity constraint [Peterson et al. 2008b].
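As an illustration of how these interlayer disparity values translate into simulated label depths, the sketch below assumes an interpupillary distance of 65 mm, one depth layer per label, and the small-angle relation disparity ≈ IPD/distance; none of these assumptions (nor the computation itself) comes from the paper, so the resulting distances are indicative only.

```python
import math

IPD_M = 0.065           # assumed interpupillary distance (m); not stated in the paper
NEAREST_LAYER_M = 2.2   # closest depth layer, as stated above
ARCMIN_PER_RAD = 60.0 * 180.0 / math.pi

def layer_depths(interlayer_disparity_arcmin, n_layers=9):
    """Apparent viewing distance of each successive (farther) label layer,
    using the small-angle relation: disparity [rad] ~ IPD / distance."""
    nearest_disp = IPD_M / NEAREST_LAYER_M * ARCMIN_PER_RAD    # ~102 arcmin at 2.2 m
    depths = []
    for k in range(n_layers):
        disp = nearest_disp - k * interlayer_disparity_arcmin  # farther layer -> smaller disparity
        # A non-positive disparity would place the layer at or beyond optical infinity.
        depths.append(math.inf if disp <= 0 else IPD_M * ARCMIN_PER_RAD / disp)
    return depths

for delta in (5, 10, 20):
    print(delta, [round(d, 1) for d in layer_depths(delta)])
```

Under these assumptions nine layers span roughly 2.2-3.6 m at 5 arcmin, whereas at 20 arcmin the layer stack reaches optical infinity within about six layers, consistent with the larger vergence and accommodation range noted in the Introduction.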
3 Results
Response time (RT) and error were both measured. If no response occurred by 20 s, the trial terminated and RT was set to 20 s. The times of correct responses were analyzed using analysis of variance (ANOVA).
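A sketch of how such an analysis could be run on a per-trial log, assuming a long-format table with columns subject, environment, disparity and rt (the file name and column names are hypothetical; the paper only states that correct-response times were analyzed with ANOVA):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical log of correct trials; columns: subject, environment (AR/VE),
# disparity (0/5/10/20 arcmin), rt (response time in seconds).
trials = pd.read_csv("correct_trials.csv")

# Collapse to one mean RT per subject x environment x disparity cell.
cells = trials.groupby(["subject", "environment", "disparity"], as_index=False)["rt"].mean()

# Two-way repeated-measures ANOVA with both factors within subjects.
result = AnovaRM(cells, depvar="rt", subject="subject",
                 within=["environment", "disparity"]).fit()
print(result)
```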
Figure 2: Mean response time (s, ±1 SE, N = 8) as a function of interlayer disparity (0, 5, 10 and 20 arcmin) for the AR and VE display environments. Both independent variables produced significant main effects on response time, but did not show any interaction effects.

The display environment was found to have a significant effect on response time (F(1, 7) = 11.7, p < 0.05); performance in the VE environment was on average 1.0 s (12.3%) faster. The level of interlayer disparity also significantly affected performance (F(3, 21) = 8.74, p < 0.01): the 5 and 10 arcmin conditions reduced average response time by over 2 s compared to the 0 and 20 arcmin conditions (see Figure 2). These independent variables did not show any significant interaction effects.
4 Discussion

A number of factors likely contribute to the negative effect of the AR display environment on performance. In the AR condition, the accommodative demand for switching between virtual and real imagery was 0.33 diopters. Furthermore, in this condition the static and dynamic registration errors could hamper label-object integration, while differences in visual resolution and contrast between the virtual and real imagery could require additional adaptation. Finally, in the AR condition the markers were located on a textured background, an outdoor scene, yielding visual noise that could affect the legibility of foreground labels. All of these factors are identified as technological limitations or hard problems in the AR community (see [Drascic and Milgram 1996]). Even though our results do not directly address these individual factors, they show that visual search time should be expected to increase when a VR system is converted for use in a far-field AR setting.
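As a rough consistency check on the 0.33 diopter figure, and assuming the HMD presents its virtual imagery at an optical distance of about 3 m (a value implied by, but not stated in, the text) while the physical markers lie at roughly 110 m, the accommodative step when switching gaze between a label and a marker is approximately

ΔA ≈ 1/d_virtual − 1/d_real ≈ 1/(3 m) − 1/(110 m) ≈ 0.33 D − 0.01 D ≈ 0.32 D,

i.e., about a third of a diopter, in line with the value quoted above.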
5 Conclusion

This experiment showed a general performance degradation in terms of response time, 1.0 second or 12.3%, when converting from an opaque HMD with rendered virtual background markers to an optical see-through HMD with physical far-field markers in a visual search task. It also showed that the proposed method of segregating labels in depth, label layering, is robust against these changes in display environment, and that it could thus be implemented in any type of mixed or virtual reality application.

Acknowledgements

Stephen Peterson and Magnus Axholt were supported by PhD scholarships from the Innovative Research Programme at the EUROCONTROL Experimental Centre, Brétigny-sur-Orge, France. These authors were also supported through the NASA Grant NNA 06 CB28A to the San José State University Research Foundation. The experiment was conducted at NASA Ames Research Center in Mountain View, CA (corresponding author: Stephen R. Ellis, [email protected]).

References

Azuma, R., and Furmanski, C. 2003. Evaluating label placement for augmented reality view management. In Proceedings of the IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR 2003), 55–75.

Bell, B., Feiner, S., and Höllerer, T. 2001. View management for virtual and augmented reality. In Proceedings of the Symposium on User Interface Software and Technology (UIST '01), 101–110.

Drascic, D., and Milgram, P. 1996. Perceptual issues in augmented reality. In Stereoscopic Displays and Virtual Reality Systems III, SPIE Vol. 2653, 123–134.

Gabbard, J. L., Swan, J. E., Hix, D., Schulman, R. S., Lucas, J., and Gupta, D. 2005. An empirical user-based study of text drawing styles and outdoor background textures for augmented reality. In Proceedings of the IEEE Virtual Reality Conference (VR '05), 11–18.

Peterson, S. D., Axholt, M., and Ellis, S. R. 2008a. Label segregation by remapping stereoscopic depth in far-field augmented reality. In Proceedings of the IEEE/ACM International Symposium on Mixed and Augmented Reality (ISMAR '08).

Peterson, S. D., Axholt, M., and Ellis, S. R. 2008b. Managing visual clutter: A generalized technique for label segregation using stereoscopic disparity. In Proceedings of the IEEE Virtual Reality Conference (VR '08), 169–176.