Virtual reality user interface design for first person games using head mounted display technology

Research Paper
Bachelor course on Media Technology at St. Pölten University of Applied Sciences

by:

Peter Alexander Kopciak mt121048

Supervising tutor: Dipl.-Ing. Dr. Peter Judmaier

St. Pölten, 03.07.2014


Declaration

- The attached research paper is my own, original work undertaken in partial fulfillment of my degree.
- I have made no use of sources, materials or assistance other than those which have been openly and fully acknowledged in the text. If any part of another person's work has been quoted, this either appears in inverted commas or (if beyond a few lines) is indented.
- Any direct quotation or source of ideas has been identified in the text by author, date, and page number(s) immediately after such an item, and full details are provided in a reference list at the end of the text.
- I understand that any breach of the fair practice regulations may result in a mark of zero for this research paper and that it could also involve other repercussions.

..................................................

................................................

Place, Date

Signature


Abstract

With new technology emerging and the rising popularity of games, virtual reality and the head-mounted display 'Oculus Rift', many developers are trying to develop games for stereoscopic 3D. Because the medium is new, many designers are still unsure how to cope with the transition to 3D and instead try to port existing 2D user interfaces. This barely works and does more harm than good, reducing immersion and creating visual discomfort. The author proposes three groups of metaphors, which classify UI elements depending on their Location in the virtual experience, the Correlation between the UI element and game objects, and lastly their Integration into the virtual world. Since there is no established best way of implementing user interfaces in 3D yet, it is essential to conduct user studies to find out how a player reacts to and interacts with a specific element and how it affects immersion.


Table of Contents

Declaration
Abstract
Table of Contents
1. Introduction
2. What is Virtual Reality?
   2.1 What is a Head Mounted Display?
3. User Interface Design
   3.1 VR UI Metaphors
      3.1.1 UI Location Type
      3.1.2 UI Correlation Type
      3.1.3 UI Integration Type
   3.2 Key Concepts and Examples
      3.2.1 Navigation
      3.2.2 References
      3.2.3 Information
4. Conclusion and Future
Bibliography


1. Introduction

With the rising popularity of gaming and wider consumer access to virtual reality, developers did not hesitate to jump on the bandwagon and start developing or porting their games to stereoscopic 3D, especially now, with the approaching release of the 'Oculus Rift' and other input devices for virtual world interaction. This transition to 3D gaming brings not only new possibilities but also new design challenges. One of the biggest is designing graphical user interfaces. The absence of the viewing constraints of computer displays, and a field of view similar to natural vision, leave more room for content, presentation and perhaps user interfaces. However, one of the biggest merits of a virtual reality is that a player can dive in and immerse themselves, living a life and having an experience similar to, but nevertheless entirely different from, real life. Moreover, as we all know, there is no graphical overlay display in the real world (yet). So, does it make any sense to create a UI for a virtual world, is it enough to port an existing 2D UI, or do we have to create a totally new one? Are there any common guidelines and ideas a designer can follow to make the best of this new medium?


2. What is Virtual Reality?

Sherman and Craig (2002) define virtual reality (VR) as "a medium composed of interactive computer simulations that sense the participant's position and actions and replace or augment the feedback to one or more senses, giving a feeling of being mentally immersed or present in the simulation (a virtual world)" (2002, p. 13), which can be simplified to four key elements: a virtual world, immersion, sensory feedback and interactivity. The virtual world can be any imaginary space, ranging from an exact copy of physical reality to something barely resembling physical concepts, with its own rules, manifested through a medium that can be experienced by the user. Immersion is the "sensation of being in an environment; [which] can be a purely mental state or can be accomplished through physical means: physical immersion is a defining goal of virtual reality; mental immersion is probably the goal of most media creators" (2002, p. 9). Sensory feedback refers to aiding the user's immersion by providing direct feedback to their physical senses and responding to their input. In most cases, it is the visual sense that receives feedback, either with a stationary or a moving screen, and the aural (hearing) sense, for example with 5.1 surround headphones (2002, p. 10). New technology will allow the user to feel, taste and smell their surroundings, further aiding immersion. The last ingredient is interactivity, which allows the user to interact with objects and places, move physically within the world and change their viewpoint, and which enables the virtual world to respond to the user's actions, thus making it authentic and believable (2002, pp. 10–11). Creating a successful virtual reality experience involves much more than 'just' designing the virtual objects and graphics. For example, the user interface has to be adapted, psychological and cognitive user studies have to be performed, and sound and textures have to be specially designed with 3D in mind (Yoon, Jang, & Cho, 2010, pp. 69–70).

Augmented Reality

A special kind of virtual reality is augmented reality (AR). Here, the user is physically located in the physical reality but receives additional computer-generated information that is superimposed (spatially placed between the user's eyes and the actual surroundings) over their field of view. So rather than experiencing the physical reality alone, the user finds themselves in another reality, which includes the physical plane along with the virtual objects (Sherman & Craig, 2002, pp. 18, 22). This enables the user to see and experience information that is otherwise imperceptible to human senses.


2.1 What is a Head Mounted Display?

There are many ways to experience virtual reality; one such device is a helmet or a pair of glasses, namely a head-mounted display (HMD). Instead of showing the view in a 2D manner as on a computer display, the image is rendered twice, once for each eye, with a slight offset, and then either shown on one or two screens inside the HMD or projected onto the user's eyes. An example of the former is the 'Oculus Rift' (Sherman & Craig, 2002, p. 14). To determine what picture is shown, many HMDs contain a tracking sensor that tells the computer where the head is located. Depending on the extent of tracking, one can classify it into two groups: either 3-DOF (three degrees of freedom), which tracks rotation (movement of the head around its center) or translation (movement of the head along the three axes of 3D space), or 6-DOF, which includes both rotation and translation. If one has to choose between rotation and translation, rotation should be favored (Sherman & Craig, 2002, p. 89). This kind of VR interface is primarily used to show the virtual world to the user from a first person point of view, so they can participate in the world from their own viewpoint (Sherman & Craig, 2002, p. 309). An HMD can be occlusive, meaning it blocks out external visual stimuli.
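As a minimal sketch of the per-eye offset described above, the following plain Python turns one head pose into two camera positions separated by an interpupillary distance. The Vec3 type, the fixed IPD value and the idea of rendering once per returned position are illustrative assumptions; a real HMD SDK supplies these transforms itself.

```python
# Sketch: deriving two per-eye camera positions from one head pose.
# All names (Vec3, IPD) are illustrative, not an actual HMD API.
from dataclasses import dataclass

IPD = 0.064  # assumed interpupillary distance in metres


@dataclass
class Vec3:
    x: float
    y: float
    z: float


def eye_positions(head_pos: Vec3, right_dir: Vec3) -> tuple[Vec3, Vec3]:
    """Offset the head position half an IPD to the left and to the right."""
    half = IPD / 2.0
    left = Vec3(head_pos.x - right_dir.x * half,
                head_pos.y - right_dir.y * half,
                head_pos.z - right_dir.z * half)
    right = Vec3(head_pos.x + right_dir.x * half,
                 head_pos.y + right_dir.y * half,
                 head_pos.z + right_dir.z * half)
    return left, right

# The scene would then be rendered once from each returned position,
# producing the slightly offset image pair an HMD shows per eye.
```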


3. User Interface Design

After explaining what VR is, we will now examine the different ways of implementing user interfaces and why they differ from the classic 2D approach. Putting effort into designing a proper user interface (UI) for games is as important as programming the actual game mechanics or deciding on a definite art style. This is especially true for games in VR, because the UI is the intersection point between the user and the presented content and thereby as much a part of the content as the virtual world itself (Sherman & Craig, 2002, p. 72). Sherman and Craig state that "many theorists consider the ultimate goal of VR to be a medium without a noticeable interface, an interfaceless medium. That is, a VR experience would be designed so well that the boundary between user and virtual world would be seemingly nonexistent" (2002, p. 285), which will probably become possible at some point in the future, as soon as VR provides direct sensory feedback to all primary senses a user can perceive, resulting in full physical and mental immersion.

The classic approach

There is a need for new UI design guidelines and metaphors because, on the one hand, VR is a new medium and still the subject of much research, and on the other hand, many current games do not take full advantage of depth and take the easy road of converting an existing 2D interface for VR or stereoscopic 3D (S-3D) gaming (Mahoney, Oikonomou, & Wilson, 2011, p. 153). Typically, first person games are set in a 3D world using a first person point of view to allow participation from an egocentric viewpoint, combined with a 2D UI metaphor, namely a full heads-up display (HUD), to show various information such as health, status, inventory or menus. By using classic UI metaphors, users may feel more familiar with the new medium at first, but in the end this leads to more confusion and could lead to suboptimal interfaces becoming entrenched in the language of the medium (Sherman & Craig, 2002, p. 284). This means that most of the time the graphical user interface (GUI) is rendered in 2D at screen depth, increasing the necessary depth jumps during gameplay (Schild, 2012, p. 1) and, therefore, diminishing the ability to immerse oneself in the VR, even reducing the experience to a tedious and clumsy one. Mahoney et al. note that "Game designers therefore need to weave effective use of stereoscopic gameplay into the game which will in turn increase the feeling of immersion" (2011, p. 153) and that interfaces need to be designed for S-3D instead of being ported.

There are many ways to start implementing an S-3D UI and, as written before, because VR is still in its infancy, it is hard to define the best or the right way to do so. It is important to conduct usability and user experience studies, both for general knowledge of the medium and for specific knowledge concerning UI design, to find the best possible solution for increasing immersion (Sherman & Craig, 2002, p. 284). Many of the following design metaphors are derived from augmented reality, existing user interface metaphors or physical reality concepts of already existing objects, with the final goal of reducing clutter and information overload and striving towards a more context-sensitive approach. Until then, however, it is necessary to create user interfaces based on familiar concepts. Completely new UI metaphors carry the risk of overwhelming users with too many new concepts, leaving them feeling lost; if they can interact with something familiar, they start with a grasp of what to do and what is possible (Sherman & Craig, 2002, p. 284). The following suggestions and taxonomy apply primarily, but not exclusively, to S-3D games using the HMD paradigm.


3.1 VR UI Metaphors

GUI elements can be classified into three different metaphor groups, with one particular element potentially belonging to one or more metaphors or metaphor groups. In this context, a GUI element or control can be anything in the virtual world that helps the player manipulate other game objects, behaviors or their own avatar, or that provides information.

The first, the UI Location Type, determines where the GUI element is positioned in depth, ranging from direct world placement to display/HUD placement. The second metaphor group is the UI Correlation Type, which refers to the way the element relates or refers to world objects or a world position. The last one is the UI Integration Type, which tells us how the GUI control is presented in the VR.
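To make the taxonomy easier to refer to, here is a minimal sketch of the three metaphor groups as a small data model. The enum member names follow the metaphors described in the following subsections; the classes themselves are hypothetical and not taken from any cited framework.

```python
# Sketch: the three metaphor groups as a data model for tagging UI elements.
# Enum members mirror the taxonomy in this section; the classes are hypothetical.
from dataclasses import dataclass
from enum import Enum, auto


class LocationType(Enum):
    IN_THE_WORLD = auto()
    IN_THE_HAND = auto()
    IN_FRONT_OF_VIEW = auto()   # HUD
    ON_THE_PANEL = auto()


class CorrelationType(Enum):
    STATIC_NORMAL = auto()
    STATIC_TRACKED = auto()
    DYNAMIC_TRACKED = auto()
    DYNAMIC_WITH_VISION = auto()


class IntegrationType(Enum):
    EXTRA_DIEGETIC = auto()
    INTRA_DIEGETIC = auto()
    SPATIAL_REFERENCE = auto()


@dataclass
class UIElement:
    name: str
    location: LocationType
    correlation: CorrelationType
    integration: IntegrationType


# Example: a health bar rendered as a classic HUD element.
health_bar = UIElement("health_bar",
                       LocationType.IN_FRONT_OF_VIEW,
                       CorrelationType.STATIC_NORMAL,
                       IntegrationType.EXTRA_DIEGETIC)
```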


3.1.1 UI Location Type

The simplest differentiation in GUI implementation for VR is the location where the element is displayed, or rather at which depth in the scene it is rendered. According to Sherman and Craig (2002) there are four types of locations a GUI element can be placed in: 'in the world', 'in the hand', 'in front of the view' and 'on the panel'. There are also a fifth and a sixth location type: the 'on display' type, which equals the 'in front of the view' type because this paper focuses on HMDs, and the 'through an aperture' type, which refers merely to the selection method of encapsulating an object with the user's finger. The location of the GUI element is important for many reasons. For one, the placement affects how the user perceives the relation of the GUI controls to the virtual world and whether they depend on each other. It also conveys how and when the user will be able to interact with it. Some controls may be located in a particular region of the virtual world, and others may always be on hand or at least summonable for use whenever and wherever desired (Sherman & Craig, 2002, p. 301), while some may appear or disappear depending on context and available game information.

In the world

The 'in the world', or in-world, location type treats an element as an ordinary object placed in the virtual world (Sherman & Craig, 2002, p. 301). It may represent a physical reality control like a button or a lever, or it can be something exclusively new to virtual reality, like a virtual landmark or a name tag hovering adjacent to a virtual object. It does not have to have a fixed position, nor does it have to behave like its physical counterpart, as long as it is a tangible 3D object in the virtual world. As an example, it may be an object that resembles a cork board, which can lie on the floor and, if the user wishes, can be summoned to their position. Thus, it is a summonable in-world control (Sherman & Craig, 2002, p. 302).

In the hand

The 'in the hand', or in-hand, location type refers to elements that are linked to a user's hand, typically the non-dominant hand, leaving the other hand free to manipulate the element. This location type is used in conjunction with a bimanual interface (an input method where the user wears trackers or gloves and can use both hands (Sherman & Craig, 2002, p. 301); with the latter, it could also provide some haptic feedback through built-in motors and similar technology) or with optical tracking of the user's location and hand position, including the position, rotation and translation of movements and joints (Sherman & Craig, 2002, p. 303). This interface metaphor is one of the most natural ones because it builds upon existing knowledge of physical reality objects and may provide affordances regarding the object's usage. It is also very unobtrusive, easily accessible and aids the user's immersion. They know where they can find it (in the hand) and can access it by looking down at their hands or by raising them into view (Sherman & Craig, 2002, p. 303). An example would be navigation through a labyrinth: the user can tilt their head down and look at the map without reducing immersion in the game. They may feel even more immersed, because while they examine the map the virtual world keeps acting (it is not paused) and they can still look around or move.

In front of the view (HUD)

This metaphor refers to information displayed at screen depth (thus positioned at a depth of zero) and aligned at a vertical and/or horizontal point. To support immersion, one can change the alpha (transparency) value and show the element depending on context (Schild, Bölicke, LaViola Jr., & Masuch, 2013, pp. 170–171). It is inspired by physical reality HUDs, which reflect data onto a screen, helmet, eyeglasses or a windshield located between the user and the world, and it is also one of the most common metaphors found in AR. It can be implemented virtually by attaching the position of the display to the user's head motion, thus always showing it in front of the user's eyes and view (Sherman & Craig, 2002, p. 303). This kind of information display is a more abstract form of the in-hand metaphor because it is available independently of one's position and not bound to any virtual object. It can be used to display critical game-relevant information such as health, a map, indicators or a menu. Depending on the game setting, this kind of UI can make or break immersion. It is advisable to use this metaphor sparingly and to show or hide specific information depending on context and need; this can be done by either toning it down or hiding it completely (Sherman & Craig, 2002, p. 307). Although one may position the HUD anywhere in the view, bottom positioning is encouraged, because natural vision is used to close-range objects appearing in the lower half of the field of view, while more distant objects are positioned in the upper half of the view, near the horizon (Schild, 2012, p. 10). It should also be noted that freely floating GUIs and HUDs seem especially abstract and unnatural in stereo. This can be avoided by anchoring the element to a screen edge or providing it with a semi-transparent background (Schild et al., 2013, pp. 172, 176–177).
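As a rough illustration of the head-attached placement and context-dependent fading described above, the following sketch re-anchors a HUD element along the current view direction each frame and computes its alpha value. All names, the one-metre distance and the two-second fade are assumptions made for illustration, not an engine API.

```python
# Sketch: keeping a HUD element "in front of the view" and fading it by context.
# Vec3 and the parameters are illustrative; an engine would provide its own types.
from dataclasses import dataclass


@dataclass
class Vec3:
    x: float
    y: float
    z: float


def hud_position(head_pos: Vec3, forward: Vec3, distance: float = 1.0) -> Vec3:
    """Re-anchor the element a fixed distance along the current view direction
    every frame, so it always stays in front of the user's eyes."""
    return Vec3(head_pos.x + forward.x * distance,
                head_pos.y + forward.y * distance,
                head_pos.z + forward.z * distance)


def hud_alpha(in_combat: bool, seconds_since_change: float) -> float:
    """Show the element fully while relevant, then fade it out over two seconds."""
    if in_combat:
        return 1.0
    return max(0.0, 1.0 - seconds_since_change / 2.0)
```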

On the panel

The 'on the panel', or on-panel, metaphor treats GUI elements as though they were a GUI panel on a 2D computer screen. It can be combined with any of the preceding metaphors, such as an in-world menu panel floating beside an object (Sherman & Craig, 2002, p. 305). The main reasons for implementing this UI metaphor are easier conversion of 2D menu interfaces to a 3D world and providing the user with a 2D cursor for selecting options, given the difficulty of manipulating virtual controls with 6-DOF physical input devices (Sherman & Craig, 2002, p. 306).


3.1.2 UI Correlation Type

The second metaphor group is the UI Correlation Type, which classifies GUI elements depending on their attachment, relation and reference to world objects and positions. This taxonomy is derived from context-aware augmented reality interfaces and was defined by Oliveira and Araujo (2012) and used in conjunction with their VISAR framework, an interface editor for developing and implementing AR UIs. The reason we cover it here is that many games simulate a 3D world and can thus be perfect simulators for testing AR UI applications (Schmalstieg, 2005). Also, many AR UI concepts have their roots in game HUDs and vice versa. According to Oliveira and Araujo (2012) there are four types: 'Static Normal', 'Static Tracked', 'Dynamic Tracked' and 'Dynamic with Vision'.

Static Normal

This metaphor represents GUI elements that have a fixed position in relation to the user and are frequently located in front of the view (as a HUD) (Oliveira & Araujo, 2012, p. 326). Such an element is placed at screen depth and does not change its position. It is manipulable to an extent, as it can be hidden and shown depending on context and information need. Text is often rendered as a Static Normal element to provide the best readability and to avoid ghosting, an effect in which the same image is shown twice with a slight offset in position (Schild, 2012, p. 9). Besides text, classic 2D status-displaying HUD elements, like a user's health bar, are also built upon the Static Normal metaphor.


Static Tracked

The Static Tracked metaphor resembles the Static Normal metaphor, except that the element can also change its position or orientation. It often relates and refers to a world object but is still positioned at screen depth. Elements that use this metaphor need to know the specific position of the tracked object and of the user. An example would be an arrow that points in a particular direction or an indicator at the side of the screen (Oliveira & Araujo, 2012, p. 326).
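A screen-edge indicator of this kind could, in a simplified form, be computed as below. Coordinates are assumed to be normalized screen coordinates in [-1, 1] after projection; the function name and the clamping rule are illustrative, not taken from the cited work.

```python
# Sketch: a Static Tracked arrow that points from screen centre toward a
# tracked object, clamped to the screen edge when the object is off-screen.
# Coordinates are normalized to [-1, 1]; all names here are illustrative.
import math


def edge_indicator(target_x: float, target_y: float) -> tuple[float, float, float]:
    """Return (x, y, angle) for the arrow.  If the target is on screen the
    arrow sits on it; otherwise it is clamped to the nearest screen edge."""
    angle = math.atan2(target_y, target_x)
    if -1.0 <= target_x <= 1.0 and -1.0 <= target_y <= 1.0:
        return target_x, target_y, angle
    # Scale the direction so its longer component touches the screen edge.
    scale = 1.0 / max(abs(target_x), abs(target_y))
    return target_x * scale, target_y * scale, angle


print(edge_indicator(2.0, 0.5))   # off-screen to the right -> (1.0, 0.25, ...)
```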

Dynamic Tracked

A Dynamic Tracked GUI element is rendered as a virtual world object in the game and has to know where it is positioned, where the user is located and to which world position it refers (Oliveira & Araujo, 2012, p. 326). It can be rendered at any UI location except the HUD location. An example would be a glowing path that shows the user how to get to a specific location. Although the user's position changes while moving, the destination does not move on its own unless the destination point itself is changed.

Dynamic with Vision

Again, this metaphor resembles the Dynamic Tracked metaphor, but this time the element's position in the virtual world is determined by the position of a tracked game object, either by being linked to it in code or through pattern and image recognition. In AR this means the element is rendered on top of, or instead of, a physical marker (Oliveira & Araujo, 2012, p. 326). If we carry this metaphor over to games, the physical marker becomes a virtual world object, and the GUI element can be superimposed on it or rendered adjacent to, on top of or in front of it. An example would be a virtual name tag that floats near the linked world object and depicts its name or status.
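Such a name tag could be positioned with a few lines of vector math, as in the following hedged sketch: the tag is lifted slightly above the linked object and yawed towards the viewer. The helper type and the 0.3 m offset are made-up values, not part of the cited taxonomy.

```python
# Sketch: a "Dynamic with Vision" style name tag that floats above the world
# object it is linked to and turns to face the viewer each frame.
# The vector maths is written out by hand; an engine would supply helpers.
import math
from dataclasses import dataclass


@dataclass
class Vec3:
    x: float
    y: float
    z: float


def name_tag_pose(object_pos: Vec3, viewer_pos: Vec3,
                  height_offset: float = 0.3) -> tuple[Vec3, float]:
    """Place the tag just above the object and yaw it toward the viewer."""
    pos = Vec3(object_pos.x, object_pos.y + height_offset, object_pos.z)
    # Yaw angle so the tag's front faces the viewer (billboarding around Y).
    yaw = math.atan2(viewer_pos.x - pos.x, viewer_pos.z - pos.z)
    return pos, yaw
```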

3.1.3 UI Integration Type

The last metaphor group is the UI Integration Type, which groups GUI elements and game information into three distinct classes proposed by Schild (2012) and colleagues (2013): 'extra-diegetic', 'intra-diegetic' and 'spatial references'. The term 'diegetic' has its root in the word 'diegesis', which refers to the narrative context of the virtual world and how it relates to its contents, story and the avatar (the virtual object representing the user) (Schild et al., 2013, p. 170), implying that the virtual world exists and behaves consistently according to its own rules, which helps us imagine ourselves as part of this imagined reality (Sherman & Craig, 2002, p. 10).

Extra-diegetic

Extra-diegetic or explicit information consists of elements that are not part of the narrative of the world and exist purely to deliver information to the player; game characters and avatars do not know about them. They show information that cannot be expressed within the narrative of the game and thus must be shown in a functional and abstract manner. These elements are superimposed upon the game view, either in-world or on the HUD, and are separated from the game content (Schild et al., 2013, p. 170). The amount of extra-diegetic information can be kept to a minimum by showing it only when it changes (Schild et al., 2013, p. 170) or is needed in the current context, or, with progressing technology, by substituting it with better physical and mental immersion, advanced input methods and sensory feedback systems. Examples of extra-diegetic information are status icons depicting ailments of the character, or menu items. Depending on the setting and story of the game itself, information that seems to be extra-diegetic can be intra-diegetic, for example when playing as a cyborg, or one may even refrain from showing it at all if this aids the immersive experience (Schild et al., 2013, p. 170).

Intra-diegetic

Intra-diegetic or implicit information, on the other hand, describes elements that are part of the world and narrative and can convey information through their design and function. They have to be placed in-world as virtual objects (Schild et al., 2013, p. 170). This kind of information display is generally harder to achieve but is crucial for immersion in the virtual world. It is therefore advisable to use the intra-diegetic metaphor instead of the extra-diegetic metaphor where applicable, to reduce display clutter, diminish the chance of visual information overload and reduce the number of depth jumps a user has to make (Schild, 2012, pp. 4, 10), thus making the game more fun and enjoyable. As an example, the user could observe the damage on their weapon instead of being shown a bar depicting its durability, or read the amount of ammunition from the rounds visible in the weapon instead of a visual indicator with a number. Special cases exist where "intra-diegetic items can reflect narrative content on an abstract, but diegetic meta-level which could not be visualized as an explicit object inside the game world [such as] making the border of the screen flash bloody red when the main character is hit […]" (Schild et al., 2013, p. 170).

Spatial Reference

Spatial references are elements that are technically part of the HUD (although they do not have to be placed at screen depth) but reference a virtual object, either by appearing in the proximity of the object at a similar depth or by being connected to it, be it by pointing at it or through a connecting line (Schild et al., 2013, p. 170). Placement at the object's depth is preferred because it improves the perceived quality of the UI and integrates better with the diegetic narrative, improving immersion (Schild et al., 2013, p. 177). In 2D games, the referencing characteristic is achieved by placing the reference adjacent to the game object. This is not enough for an S-3D game, because in that case the spatial reference is lost and the user is left with a confusing, floating element. It also increases the necessary depth jumps, creating even more visual discomfort (Schild, 2012, p. 2). Consequently, Schild suggests placing references to distant objects above them or, depending on distance and size, in front of them.


3.2 Key Concepts and Examples

After examining the different metaphor groups for designing an S-3D UI, we will now briefly look at three of the most important elements of a virtual reality game experience and give some examples of 'Navigation', 'References' and 'Information'.


3.2.1 Navigation

Path following

Path following is the technique of showing the player where to go using visual cues, like a colored line rendered on the ground or virtual posts and landmarks indicating progression along a dedicated path. This can be supplemented by an arrow that points to the next post or in the right direction. A real world example can be found in many hospitals, where colored lines on the floor indicate specific paths to relevant destinations (Sherman & Craig, 2002, p. 336).

Map

Using a map, as in the real world, a player can navigate through an environment. For better immersion, it should be a tangible virtual map. Still, it can be enhanced by showing the current position or specific locations. The frame of reference can be either exocentric (the top border of the map is north) or egocentric (the top border of the map is the viewing direction) (Sherman & Craig, 2002, p. 336).

Bread crumbs

If it is more important to know where the player came from, the designer can implement bread crumbs that leave a trail of markers or persistent footsteps (Sherman & Craig, 2002, p. 338). A sketch of such a trail follows at the end of this subsection.

Out-of-body

If it fits the narrative, the player can simply leave their body and change from the first person view to a third person view, flying above the field like a bird. This shift can break immersion if not done properly. One way to maintain (visual) context is to back or zoom away from the egocentric point of view to an exocentric view, thereby making a continuous transition (Sherman & Craig, 2002, p. 34).
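The bread-crumb technique mentioned above can be kept very simple, as in the following sketch: a marker is dropped whenever the player has moved a minimum distance from the last one. The class name and the 2 m spacing are illustrative assumptions.

```python
# Sketch: a breadcrumb trail that drops a marker whenever the player has moved
# a minimum distance from the last one.  Everything here is illustrative.
import math


class BreadcrumbTrail:
    def __init__(self, spacing: float = 2.0):
        self.spacing = spacing          # minimum distance between crumbs, metres
        self.crumbs: list[tuple[float, float, float]] = []

    def update(self, player_pos: tuple[float, float, float]) -> None:
        """Call once per frame with the player's current world position."""
        if not self.crumbs:
            self.crumbs.append(player_pos)
            return
        if math.dist(player_pos, self.crumbs[-1]) >= self.spacing:
            self.crumbs.append(player_pos)


trail = BreadcrumbTrail()
for pos in [(0, 0, 0), (1, 0, 0), (2.5, 0, 0), (5, 0, 0)]:
    trail.update(pos)
print(trail.crumbs)   # [(0, 0, 0), (2.5, 0, 0), (5, 0, 0)]
```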


3.2.2 References

References can be anything that provides the user with feedback about what they see or can do, or where to move next.

Annotations

A simple floating text box is an excellent way to show some information about a specific game object, such as the name of a character. It can also be a field that shows the player where to move next, or what they have to do to achieve a specific outcome. If there is no aural feedback, designers can also use chat bubbles that float adjacent to a game character and depict what the character is saying (Sherman & Craig, 2002, p. 155). If there are too many annotations, one can either reduce their opacity or size until the player moves closer, or simply prioritize specific elements over others (Schmalstieg, 2005, p. 2).

Pointers

Pointers are similar to annotations in that they point or refer to a particular location or object in the game world. In the game 'Portal 2' the player can place two distinct portals (or gates) in the world. So as not to forget where they are placed, a floating icon positioned in front of each portal indicates its location. Even when the portals are not in line of sight, the indicator is still visible and rendered on top of everything else at screen depth. Additionally, the further away the player moves, the more the opacity of the indicator is reduced (Schild, 2012, p. 8).
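The distance-based fading of such an indicator could look roughly like the following sketch; the near and far thresholds and the minimum opacity are invented values, not the ones actually used in 'Portal 2'.

```python
# Sketch: reducing an indicator's opacity with distance, as described for the
# portal markers above.  The near/far thresholds are made-up values.
def indicator_alpha(distance: float, near: float = 5.0, far: float = 50.0) -> float:
    """Fully opaque up to `near`, fading linearly to a faint minimum at `far`."""
    if distance <= near:
        return 1.0
    if distance >= far:
        return 0.2
    t = (distance - near) / (far - near)
    return 1.0 - t * 0.8


print(indicator_alpha(5.0), indicator_alpha(27.5), indicator_alpha(60.0))
# 1.0 0.6 0.2
```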

Selection

One of the most used techniques in games is selection through highlighting, using increased brightness, an outline or a circle rendered beneath the object. If a circle is used, it can also indicate the moving direction with a small dot or arrow; if the object is airborne, the circle can be rendered on the ground and connected to the (possibly not visible) airborne object by a vertical line (Schmalstieg, 2005, p. 2).

Targeting

Again, there are many different ways of achieving this, depending on game and setting. One can, similar to 'Dead Space 2', stay within the narrative: during attack mode the player raises the weapon and looks through a scope or targeting display, or uses a laser pointer that travels in a straight line from the tip of the weapon to the targeted object, which helps the player during depth transitions. Alternatively, if a classic crosshair is used, it can change its depth position depending on the object that is targeted or the distance to it. In 'Just Cause 2' and 'Crysis 2' the crosshair is rendered at screen depth if the targeted object is background or 'non-shootable', and slightly in front of the object if it is something that can be shot (meaning it makes sense to shoot at it) and is not too far away; otherwise the crosshair is rendered at maximum shooting distance (Schild, 2012, p. 5).
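The crosshair depth logic just described can be expressed compactly, as in the following sketch based on a ray cast along the view axis. The RayHit structure, the maximum shooting distance and the 0.1 m offset are illustrative assumptions rather than the actual implementation of either game.

```python
# Sketch: choosing the crosshair's render depth from a ray cast down the view
# axis, mirroring the behaviour described above.  `RayHit` and the distances
# are illustrative, not an existing engine API.
from dataclasses import dataclass
from typing import Optional

MAX_SHOOT_DIST = 100.0   # assumed maximum shooting distance
SCREEN_DEPTH = 0.0       # convention here: 0 means "render at screen depth"


@dataclass
class RayHit:
    distance: float
    shootable: bool       # does it make sense to shoot at this object?


def crosshair_depth(hit: Optional[RayHit]) -> float:
    if hit is None or not hit.shootable:
        return SCREEN_DEPTH                      # background / non-shootable
    if hit.distance > MAX_SHOOT_DIST:
        return MAX_SHOOT_DIST                    # clamp to shooting range
    return max(hit.distance - 0.1, 0.1)          # slightly in front of the target


print(crosshair_depth(RayHit(12.0, True)))       # 11.9
print(crosshair_depth(RayHit(12.0, False)))      # 0.0
```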


3.2.3 Information

There is a wide variety of ways to display information, depending on the game, setting and information need.

In first person shooters, one may indicate current health by the screen slowly turning red (suggesting blood) or greyscale (suggesting blood loss), and show recent damage by making the screen flash bloody red or rendering blood splatters at screen depth (Schild, 2012, p. 5; Schild et al., 2013, p. 170). In the game 'Dead Space 2' the player has no dedicated life or energy bar. Instead, the character carries a backpack that shows the current energy intra-diegetically, like a battery. The character also carries a weapon and can see how much ammunition is left, which also changes the color of the crosshair (Schild, 2012, p. 5).
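A possible mapping from health and recent damage to a full-screen red tint is sketched below; the 60 % maximum tint and the 0.4 s flash duration are invented for illustration.

```python
# Sketch: mapping health to a full-screen red tint plus a short damage flash.
# The curve and timings are invented for illustration.
def screen_tint_alpha(health: float, max_health: float,
                      seconds_since_hit: float) -> float:
    """Stronger red tint the lower the health, plus a brief flash after a hit."""
    low_health = max(0.0, 1.0 - health / max_health) * 0.6   # up to 60 % tint
    flash = max(0.0, 0.4 - seconds_since_hit)                # 0.4 s flash
    return min(1.0, low_health + flash)


print(round(screen_tint_alpha(25.0, 100.0, 10.0), 2))   # 0.45 -> low-health tint
print(round(screen_tint_alpha(100.0, 100.0, 0.1), 2))   # 0.3  -> just the hit flash
```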

In a racing game, instead of showing information extra-diegetically or on a HUD, designers can use the dashboard of the car and show information on the tachometer, speedometer or gear display, and even go a step further and use the virtual windshield as a HUD, thus making it possible to show otherwise extra-diegetic information on a narrative level (Mahoney et al., 2011, p. 154).


4. Conclusion and Future

Designing user interfaces for stereoscopic 3D games is a difficult, albeit very rewarding, task that is important to do thoroughly and in a consistent manner. Just taking an existing 2D user interface or HUD and using it in S-3D is tempting, but most of the time it leads to a more than suboptimal user interface and can even prevent players from fully enjoying the game because of visual discomfort, an incorrect rendering offset or too much visual clutter. What may work on a traditional 2D computer display could completely destroy the gaming experience in S-3D, and ideas that were never imagined or that performed poorly in 2D could be the next big thing in 3D UI design. It is almost impossible to use the same interface for distinct games, because depending on context, setting and information need, different elements have different effects on immersion and game enjoyability. The proposed UI metaphors and metaphor groups are merely suggestions for how one can group different kinds of elements, and while the examples may be applicable to other games, different games may require different implementations. Further user studies and usability tests are recommended and advisable given the relatively young age of the virtual reality medium. With progressing technology, the need for specific UI metaphors may disappear and more complex and abstract ideas may emerge, as today's UIs mostly show information that cannot be conveyed in a physical manner, such as pain or velocity. At this point, it is not possible to tell whether we are heading towards an interfaceless virtual experience.


Bibliography

Mahoney, N., Oikonomou, A., & Wilson, D. (2011). Stereoscopic 3D in video games: A review of current design practices and challenges. In 2011 16th International Conference on Computer Games (CGAMES) (pp. 148–155). doi:10.1109/CGAMES.2011.6000331

Oliveira, A., & Araujo, R. B. (2012). Creation and visualization of context aware augmented reality interfaces (p. 324). ACM Press. doi:10.1145/2254556.2254618

Schild, J., & Masuch, M. (2012). Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming? doi:10.1117/12.911967

Schild, J., Bölicke, L., LaViola Jr., J. J., & Masuch, M. (2013). Creating and analyzing stereoscopic 3D graphical user interfaces in digital games (p. 169). ACM Press. doi:10.1145/2470654.2470678

Schmalstieg, D. (2005). Augmented reality techniques in games (pp. 176–177). IEEE. doi:10.1109/ISMAR.2005.17

Sherman, W. R., & Craig, A. B. (2002). Understanding Virtual Reality: Interface, Application, and Design. Elsevier.

Yoon, J.-W., Jang, S.-H., & Cho, S.-B. (2010). Enhanced user immersive experience with a virtual reality based FPS game interface (pp. 69–74). IEEE. doi:10.1109/ITW.2010.5593369
