Using CSP to Specify Interaction in Virtual Environments

Shamus Smith and David Duke
Department of Computer Science, University of York, York YO10 5DD, UK
{shamus, duke}@cs.york.ac.uk

Abstract. Virtual environments are made up of many components that are typically specified at different levels of rigour. To date, emphasis on developing the technology underlying these components has meant that little attention has been paid to the description of interaction techniques. This paper presents initial research from the INQUISITIVE project on abstracting models of interaction in virtual environments away from technological constraints, with the aim of providing more rigorous descriptions of interaction. The use of statechart-based modelling techniques has been investigated. However, state-based techniques do not capture the distinctive and problematic elements of virtual environments. This paper describes how CSP (Communicating Sequential Processes) was used as a bridge between initial state-based models and a richer approach founded on hybrid systems.

1. Introduction

The application of new technologies has been a predominant factor in the development of virtual environments (VEs). This has led to many innovative systems and novel interaction techniques. Unfortunately, it has also shaped the development of interaction in these environments around particular physical devices such as headsets and data gloves. As a result, in the design of virtual environment interaction, user requirements have taken second place to technological considerations. Although these technologies have matured, as virtual environments diffuse into a wider range of software products, questions about user requirements have highlighted many important issues. One of these issues is the specification of interaction within virtual environments.

Typically, descriptions of VE interaction are informal and, commonly, incomplete. Where they are given, descriptions are informal linguistic accounts, sometimes augmented by ad-hoc high-level diagrams. Although informal descriptions may be appropriate for general accounts of environments, systematic implementation and analysis of systems requires more detailed and better defined specifications [1].

However, at what level do virtual environments need to be described through the design process to final system implementation? This is a pivotal question when viewed from the perspective of the users of these environments. The designers, implementors and evaluators of virtual environments share common needs but require different levels of rigour when it comes to environment descriptions. In the context of research tools, designers and users are tightly bound; typically, the designer is also the main user of the system. However, as virtual reality technologies reach a larger audience, new ways of capturing user requirements explicitly for designers will be needed so that the most appropriate technology can be used to meet the requirements of the target users.

The INQUISITIVE (INcreasing Quality of User Interfaces for Strategic Interactive Tasks In Virtual Environments) project is a three year research effort between groups at the University of York and the Rutherford Appleton Laboratory (RAL). The aim of the project is to develop methods and principles that can be used to improve the design of interfaces for virtual environments, in particular by looking at how end-user requirements on the interface can be mapped, through a development process, onto facilities supported by an interaction toolkit.

The work presented in this paper represents some initial research from the INQUISITIVE project on the modelling of interaction in virtual environments. This is a first step to defining abstract semiformal descriptions of interaction that are not defined in terms of technological constraints.

This paper is organised as follows: initially, we will discuss the modelling of interaction in virtual environments. Next, interaction modes will be considered.

Three novel interaction techniques from the VE literature will be introduced and statechart models of these techniques will be described. The use of Communicating Sequential Processes (CSP) to overcome the problems identified with the statechart approach will be presented, followed by our conclusions.

2. Why Model Interaction?

There are many factors that make interaction difficult in a virtual world. Mine [2] notes that many virtual worlds lack tactile feedback (something we take for granted in the real world). Although tactile interaction is becoming more widespread in high-end systems, the devices that are typically available only offer a constrained form of tactile feedback suitable for specific tasks (for example, see [3]). Mine also observes that current alphanumeric input techniques (what we use for precise interaction in the computer world) are ineffective for virtual worlds. He suggests that we must learn how to interact with information and controls distributed about the user rather than focused on a terminal in front of them. If natural forms of interaction can be identified, and possibly extended in the VE, then more usable VE interfaces can be constructed [2].

Modelling interaction is important from the perspective of both the users and designers of VEs. The users require interaction techniques that allow them to complete interaction tasks in a particular application and designers wish to build systems that make the required interaction possible. Ideally, an interaction technique is a device and domain independent method of interaction between a user and a VE. When examining interaction techniques, three general areas of interest are their specification, modelling and evaluation.

2.1. Specification

A starting point when looking at characterising an interaction technique is to define what the technique provides and what the technique requires. Techniques provide elements both to the user (e.g. visual cues and feedback) and to the system (e.g. state change information).


Technique requirements can be system, technology or user based. System requirements may involve a particular interface metaphor or logical devices, while technological requirements are usually more physically based, e.g. frame refresh rate and physical device input/output. User requirements may involve a certain skill level, both in technique knowledge and physical dexterity.

Two related questions are: which requirements are met by the system, the technology and the user, and when are these requirements needed? User based requirements may require advanced training and system requirements may need technological advances before they become feasible. Also, can the techniques be separated from low-level input? Moving techniques away from being based on low-level inputs could provide more independence in interaction technique representation, but there may be a cost in the loss of affordances for a particular technique. Although understanding the consequences of interaction in terms of cognitive resources is beyond the scope of the current paper, it is a topic that the syndetic modelling framework has the potential to address [4].

2.2. Modelling

Techniques for modelling interaction are still an active area of research (see the annual Eurographics DSVIS (Design, Specification and Verification of Interactive Systems) conference series since 1994), even without considering the apparent complexity of VEs. Numerous techniques for defining models have been suggested. Many of these are specialised to classes of interfaces or classes of problem, or are scoped by a set of assumptions about the design process (e.g. formal notations). This makes a direct comparison between techniques difficult; nevertheless, a number of dimensions can be identified.

• Ontology: state, process, other.
• Form of representation: diagram, text, hybrid.
• Level of rigour: informal, formal.

There is also the question of what should be in the model. Can the interaction technique stand alone, or are models of the user and the system also required? If the latter is true, can the user and the system be modelled within the same framework, and if not, how can they be extended to provide an overview of the environment?

2.3. Evaluation

Typical evaluation in the area of VEs has focused on ad-hoc user studies [5, 6]. These are usually carried out with a small number of users, typically not drawn from a general population user base. For well-defined applications with a specific user group, this may be adequate, but it is unclear how these results can be generalised to help during the design phase. Some method of evaluation grounded on a theoretical understanding of virtual environments would be more useful and could be used to contrast different models. However, what do we require to be evaluated?

There are several areas of evaluation that seem relevant. Usability (e.g. ease of use), ergonomics (e.g. motion sickness, arm fatigue and eyestrain), human factors (e.g. user cognitive load), resource usage (e.g. user and system requirements), robustness and ease of implementation are all valuable areas of study.

In this paper, we are interested in the modelling component and how an appropriate modelling technique can be chosen or developed. However, before interaction techniques can be defined, consideration of their context of use, or mode, is required.

3. Interaction Modes

Modes are implemented, in one way or another, in almost any modern machine or piece of equipment [7]. Andre and Degani [8] comment that modes represent the different behaviours, or functions, of a given system. Many interaction techniques in VEs are mode oriented. For example, VEs are sensitive to whether the user is navigating, selecting objects or manipulating objects. System behaviour defines what the user perceives of the environment and how their interaction is progressing. Therefore, knowing what mode the system is in can be very important to the user in VEs. Many problems in human interaction can be traced to user confusion about the current mode. Degani, Shafto and Kirlik [9] observe that human interaction with complex systems is a non-trivial problem and it appears that mode confusion is a significant contributor to mishaps in many domains.

System mode is also relevant when trying to define the usability of a system. The user should be able to tell what mode they are currently in [8]. Mine, Brooks and Sequin [10] note that constantly switching between object interaction and movement control breaks the natural rhythm of the operation and adds significant cognitive overhead. Although there are a variety of different interaction techniques, many can be grouped loosely by the interaction mode to which they apply. Some examples are:

• Navigation: walking [2], flying [2], worlds in miniature (WIM) [10, 11], head-butt zoom [10], two handed flying [10].

• Selection: laser pointing [2], cone-casting, head crusher select [12], go-go interaction [13], look-at menus [10].

• Manipulation: scaled world grab [10], orbital mode [14], hand-held widgets [10, 15], over-the-shoulder deletion [10].

• Environment commands: pull-down menus [10], voice recognition [16], gesture recognition [17, 18].

These groups are not totally rigid as some techniques combine elements of two or more modes. For example, the scaled world grab [10] has components of object selection followed by object manipulation. Also, scaled world grab can be used as a technique for navigation where, instead of bringing objects to the user, the world is moved in relation to the selected object.

The following subsections comprise a brief overview of the three interaction techniques that will be used as examples later in this paper.


3.1. Head crusher select

The head crusher select (HCS) [12] is a technique for use in an HMD (Head Mounted Display) immersive virtual environment and involves operating on 3D objects at a distance by interacting with 2D projections of the objects on the image plane. In this case, the thumb and forefinger are shaped around the desired object as if the environment were 2D. A projection is taken from the user’s hand and head positions to determine what object is to be selected (Figure 1). (This technique was developed as one of four techniques, with the sticky finger, lifting palm and framing hands [12], to demonstrate the use of 2D projections on a 3D image plane.)

Figure 1: A third person view of the head crusher technique. The inset shows the first person view (from [12]).

The following two techniques were developed by Mine, Brooks and Sequin [10] to exploit user proprioception in virtual environment interaction.

3.2. Two handed flying

Flying is a navigation technique that has been popular in many virtual environment implementations. However, there are several disadvantages to this technique including arm fatigue and user disorientation caused by misunderstanding the relationship between hand orientation and flying direction [2]. Two handed flying (THF) [10] is a specialised type of flying which exploits proprioception, the person’s sense of the physical position and orientation of their body and limbs. The vector between the user’s two hands defines direction of flight and the distance between the user’s hands specifies the flight speed (Figure 2). Moving the hands into a deadzone, a minimum hand separation distance, stops flight.

Figure 2: Two handed flying (from [10]).
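Read literally, this description amounts to a simple mapping from tracked hand positions to a flight vector. A minimal formalisation, under our own reading of [10], is sketched below; the hand positions p_l and p_r, the scaling constant k and the deadzone threshold are our notation rather than the original paper's, and the choice of direction (from left hand to right hand) is an assumption:

    d = (p_r - p_l) / |p_r - p_l|
    s = k * |p_r - p_l|    if |p_r - p_l| > deadzone
    s = 0                  otherwise (hands within the deadzone, flight stops)

where d is the direction of flight and s the flight speed.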

3.3. Head-butt zoom

A more complex technique is the head-butt zoom (HBZ) [10]. It uses both hand and head motion to control the interaction. Initially, the user forms a view rectangle using their hands to mark the positions of the top-right and bottom-left corners of a rectangle (Figure 3). Once this is completed, head motion is used to switch between environment views. If the user leans forward through the plane of the rectangle, the view is zoomed in and if the user leans backward, the view is zoomed out. This technique can also be used for navigation. If the user steps through the plane of the view rectangle, they are moved to the zoomed location. If they then step back, they are returned to their original position. However, in this paper, we use a simplified version of the HBZ and do not consider its use for navigation.

Figure 3: Setting up head-butt zoom (from [10]).

4. Modelling Techniques for Interaction

Many components of virtual environments are tightly coupled together and their explicit separation may be ill-advised, if not impossible. Therefore, modelling particular interactions is a non-trivial task. One question we may ask is: what do we want to achieve with the modelling? Modelling interaction is one component of a framework being developed in the current project for the design and evaluation of virtual environment systems. Currently, we are interested in investigating ways of describing interaction concisely.

Initially, the use of a statechart [19] based technique seemed appropriate as statecharts provide many powerful modelling mechanisms that are desirable in the modelling of virtual environments. Two techniques investigated were Ofan [8] and Requirements State Machine Language (RSML) [20].

One interesting issue in interaction is the possibility of mode conflicts. An example of this in a virtual environment could be two interaction techniques being used in parallel, where one is activated using hand positions and the other using hand gestures. Using a modelling technique that allows such conflicts to be identified seemed a good place to begin, and initially led to the investigation of Ofan, as it is a mode-oriented technique.

The Ofan framework attempts to separate out the components of the whole environment. Unfortunately, in virtual environments there is not always a clean distinction between components such as the environment, the display and the control. In particular, the coupling between the control and the display of the environment is very strong within virtual environments. Also, specifying the control behaviour of every object in an environment would be complex and time consuming. Degani and Shafto [21] observed, in the specification of a WWW browser with a mouse and keyboard as input devices, that their specification became extremely complicated due to the need to specify the input device control behaviour of all the browser components. Virtual environments typically have input devices more complex than a mouse and keyboard and would therefore be at least as difficult to specify.

However, a problem with diagram-based modelling techniques is that, depending on what level of detail is needed, a complete definition of even a simple interaction technique can involve several pages of documentation. Spreading models over many documents can make them hard to comprehend and error-prone. As with graphical notations in general, it is difficult to display the whole diagram at one level and some type of decomposition or abstraction is usually required. This can make models difficult to initially develop, validate after definition and compare with other models.

One solution to this problem is to move away from graphical notations and to look at a more formal text-based notation. Also, statecharts focus on the state of an environment. We are, however, primarily interested in the processes of getting from state to state, something that is hidden in statechart-based notations. Therefore, in the next section we investigate the use of Communicating Sequential Processes (CSP) [22] as a text-based, process-driven notation.

5. Communicating Sequential Processes (CSP)

Communicating Sequential Processes [22] is a modelling technique that takes the basic concept of a process as a mathematical abstraction of the interactions between a system and its environment. This concept is extended to include the use of concurrency, nondeterminism and interprocess communication.

Here we consider interactions in virtual environments as a set of concurrent processes. User processes, e.g. head movement and hand movement, provide certain outputs that are processed by tracking devices (system processes) which work in parallel with an interaction definition to provide the interaction model.

As the purpose of this section is to illustrate the potential use of CSP, rather than provide definitive accounts of a particular technique, our account is informal. Only a small number of the operators available in CSP for modelling processes have been used; these are introduced below.


In CSP, a process models the observations that can be made of a system as it evolves by engaging in events. For each process P, the finite or countable set of events that P can engage in is called the alphabet of P, written α P. Process expressions are formed from the names of events (usually written lowercase), names of processes (normally written uppercase), and simpler process expressions (combined using various operators).

STOP  The simplest process in CSP is the process that refuses to engage in a given set of events A, written STOP_A (the annotation with the alphabet becomes important when one begins to combine processes).

Prefix Given a process P and an action a in the alphabet of P, the "prefix" constructor is used to define the process that is first observed to do a, and subsequently behave like P. This is written a --> P.

Parallel  Two processes P and Q can be combined in parallel to produce a new process, P || Q. When restricted to the alphabet of P, each behaviour of P || Q must be a behaviour of P, and similarly for restriction to the alphabet of Q. For example, the process

a --> b --> STOP_{a,b}  ||  c --> b --> STOP_{b,c}

can generate the behaviours ⟨a, c, b⟩ and ⟨c, a, b⟩, but cannot generate ⟨a, b, c⟩, since b is in the alphabet of both the left and right process and the right process requires that the b event must follow a c event.
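For readers who wish to experiment with such definitions mechanically, the same example can be written in the machine-readable dialect of CSP (CSPm) accepted by tools such as FDR. This rendering is ours and differs cosmetically from the notation above: alphabets are given as a synchronisation set rather than as annotations on STOP.

    -- Machine-readable CSP (CSPm) version of the parallel example.
    channel a, b, c

    P = a -> b -> STOP
    Q = c -> b -> STOP

    -- P and Q run in parallel, synchronising on the shared event b.
    SYS = P [| {b} |] Q

    -- SYS can perform the traces <a,c,b> and <c,a,b> but not <a,b,c>,
    -- because Q only allows b after c has occurred.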

Choice  The choice operator □ represents an external choice between the behaviours offered by two processes. Here "external" means that the environment in which the processes are operating will select a process that enables progress to be made. A separate operator, ⊓, internal choice, represents non-deterministic choice on the part of a process itself.


Labelling  It is sometimes useful to work with multiple distinct copies of a process. These can be defined by using the labelling operator; if P is a process with alphabet {a, b, c, ...} and s is some label, the process s:P is like P except that its alphabet becomes { s.a, s.b, s.c, ... }.
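Machine-readable CSP has no labelling operator as such; a common idiom, sketched below with an invented Tracker process (all names and channels are ours), achieves a similar effect by parameterising the process with an identifier that is carried on every event.

    -- Labelled copies of a process in CSPm via parameterisation.
    datatype Id = lefthand | righthand

    channel report : Id

    -- One generic process; each copy tags its events with its own identifier.
    Tracker(s) = report.s -> Tracker(s)

    -- Two distinct copies with disjoint alphabets {report.lefthand} and
    -- {report.righthand}, so they can be interleaved freely.
    Trackers = Tracker(lefthand) ||| Tracker(righthand)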

Events in CSP represent observations, for example that a button has been pressed or a signal has been sent. In some cases, it is useful to view events as the transmission or reception of data along some channel. The notation c!m is defined to be the event in which the message m is sent on the channel c, while c?v represents the arrival of a value v on the channel c.

5.1. CSP Examples

Within virtual environments there are several components which are shared by most interaction techniques, e.g. the position of the logical hand and the logical head. These can be defined as concurrent processes that provide frame-based output describing the user. We are thinking of a "frame" as representing the state of the user and other objects within the environment, including for example the position, orientation and other status information of each such object. The idea is that input devices (both physical and logical) can be modelled as processes that construct and/or modify frames that are passed from the interface towards the application. For example, a hand tracker process might simply report the configuration of the hand as a series of frames sent along a particular channel:

Hand = hand.posn!frame --> Hand
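The same idea can be sketched in machine-readable CSP. The Frame type below is deliberately reduced to a handful of symbolic values so that the model stays finite and checkable; the names (Frame, posn, f0, ...) are illustrative only and not part of any real toolkit.

    -- A finite-state sketch of the hand tracker process.
    datatype Frame = f0 | f1 | f2

    channel posn : Frame

    -- The tracker repeatedly reports some frame; internal choice (|~|)
    -- reflects that the reported position is outside the model's control.
    Hand = |~| f : Frame @ posn!f -> Hand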

We hope to eventually describe interaction by using a number of basic messages that can be sent along channels; the composition of these messages over time represents the evolution of those parts of the system state that are of interest. Possible messages might include: GetFrame (get the current view frame), SetFrame (set the current view to a frame), SaveFrame (save the current view frame), ModifyObject(select,object) (select an object), ModifyObject(deselect,object) (deselect an object), ModifyObject(move,object) (move an object) and ModifyObject(delete,object) (delete an object). These messages (the alphabet) can be seen as the interface of an object or interactor. CSP is thus allowing us to abstract from the state of that interactor.

The following sections contain sketches of how some of the interaction techniques discussed in this paper could be represented as CSP processes.

5.1.1. General CSP

Hand = posn!Frame --> Hand
Head = posn!Frame --> Head

5.1.2. Head-butt zoom CSP

Rect = posn!Frame --> Rect

HBZ = head.posn?h --> MkRect(h)                                           (1)

MkRect(h) = left.posn?lh --> right.posn?rh --> SetRect(lh,rh) -->
            rect.posn?r --> PerformHBZ(r,h)

(1) Once the HBZ has been activated, get the initial head position (h), construct the initial view rectangle (based on the left and right hand positions), set the rectangle in the current view (SetRect) and perform the HBZ based on the current positions of the rectangle and the user's head.

PerformHBZ(oldr, oldh) =
    head.posn?h --> MkRect(h),          h ≠ oldr                          (2)
    head.posn?h --> ZoomIn(h, oldr),    oldh < h & h = oldr               (3)
    head.posn?h --> ZoomOut(h, oldr),   oldh > h & h = oldr               (4)

(2) Head and rectangle are not intersecting, so the user must be resizing the rectangle. This is done in the MkRect process. (3) The user's head is moving forward (the old head position is less than the current head position) through the rectangle (the current head position is the same as the rectangle's position), so zoom in the view. (4) The user's head is moving backward (the old head position is greater than the current head position) from the rectangle (the head and rectangle are intersecting), so zoom out the view.

ZoomIn(h,r) =
    GetFrame(v) --> SetFrame(v, zoomfactor) --> PerformHBZ(r,h),   h ≈ v                 (5)
    GetFrame(v) --> PerformHBZ(r,h),                               h ≉ v                 (6)

(5) If the head position (h) is approximately the same as the current view (v), then the view is not zoomed in. Therefore zoom the view by a globally defined zoom factor. (6) Already zoomed in, as the current head position does not match the current view, so do nothing.

ZoomOut(h,r) =
    GetFrame(v) --> SetFrame(v, zoomfactor) --> PerformHBZ(r,h),   h ≈ (v - zoomfactor)  (7)
    GetFrame(v) --> PerformHBZ(r,h),                               h ≉ (v - zoomfactor)  (8)

(7) Zooming out (set the view frame back by the constant zoom factor), so reset the view. (8) Already zoomed out while the head is moving backward, so do nothing. Here, we have already zoomed out and are waiting for the head to stop intersecting with the rectangle.

HBZIT = HBZ || left : Hand || right : Hand || head : Head || rect : Rect                 (9)

(9) The interaction technique has several components operating in parallel: the hands, head and rectangle positions, and the HBZ interaction process itself.

5.1.3. Two handed flying CSP

THF = rel?(l,r) --> SetFrame(l,r) --> THF,   f(l,r) > deadzone           (10)
      rel?(l,r) --> THF,                     f(l,r) <= deadzone          (11)

(10) The relative hand separation f(l,r) exceeds the deadzone, so update the view frame (the user is flying). (11) The hands are within the deadzone, so flight stops and the frame is left unchanged.

RelHand(l,r) = left.posn?x --> rel!(x,r) --> RelHand(x,r)
               right.posn?x --> rel!(l,x) --> RelHand(l,x)               (12)

(12) Output the relative hand positions. RelHand takes the current positions of the left and right hands and outputs them as a tuple for use in THF.

THFIT = THF || RelMovement


5.1.4. Head crusher select CSP

Hand = posn!Frame --> Hand
       gesture!Frame --> Hand                                            (13)

(13) This is a refinement of the Hand process, as the HCS technique is based on a hand position and a hand gesture. Hand in this example allows both the position and the gesture of the hand to be output for each frame.

HCS = right.gesture?g --> right.posn?r --> head.posn?h -->
          ModifyObject(select,r,h) --> HCS,   g = HCSpose                (14)
      right.gesture?g --> HCS,                g ≠ HCSpose                (15)

(14) If the current gesture is the HCS pose, then select the object based on the positions of the right hand and the head. (15) Otherwise do nothing.

HCSIT = HCS || head : Head || right : Hand

5.2. CSP Observations

When defining a particular interaction technique within CSP, it is straightforward to check that all the possible actions have been enumerated. For example, in Section 5.1.2, all the combinations of interactions between the view rectangle and the user’s head are defined in PerformHBZ. This is helpful when evaluating an interaction technique, as it can identify missing or erroneous elements in a technique’s specification. As users are especially reliant on system feedback within virtual environments, identifying such deficiencies can allow more robust interaction techniques to be defined.

Another advantage of CSP modelling is the provision of the trace mechanism. CSP traces can be used to record sequences of users’ actions. Building this on top of interaction techniques and user models can allow process conflicts to be identified. In the examples above, head-butt zoom uses the hand positions in a sequential manner (to define the view rectangle) while the two handed flying technique requires parallel input from the hands (to calculate the relative hand positions). If these interaction techniques were combined then this clash of interaction modes could be problematic.


Model checkers and simulation tools also exist for automatic validation of CSP models. Such tools make it possible to check properties of a specification automatically and allow a more systematic and formal analysis, while demanding less effort from the designer than manual analysis would. This increases the usefulness of the notation.
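As an illustration, once the interaction techniques are written in machine-readable CSP, a tool such as FDR can be asked to check properties like the following. The process HBZIT is the composition defined above, while the abstract specification SPEC is hypothetical and would have to be written separately.

    -- The composed head-butt zoom technique never deadlocks.
    assert HBZIT :[deadlock free [F]]

    -- Every trace of the technique is allowed by an abstract specification
    -- of the intended interaction (trace refinement).
    assert SPEC [T= HBZIT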

Smith [23] observes that although CSP processes are specified without reference to an explicit state, to simplify specifications, they often have parameters that simulate the state information. Unfortunately, the standard CSP parameter passing mechanism can make the notation clumsy. One solution to this problem is to use an object-oriented state-based specification language (for example, Object-Z [24]) to model complex data structures within CSP [23]. This is, however, outside the scope of this report.

6. Beyond CSP

CSP provides a succinct way to describe process-based models. Its provision for concurrency and process communication provides a powerful description technique with the added advantage of being based on mathematical theory.

However, how faithful are the CSP models to the interaction that they describe? In particular, are interactions in virtual environments based on sequences of discrete actions? Looking at the CSP descriptions in Section 5, there are several components of the interaction which are not clearly represented in the models. Firstly, there is the continuous nature of the inputs to the interaction. Typical input devices in virtual environments (for example 3D mice and data gloves) provide a continuous stream of position and orientation information. It has been noted elsewhere [25, 26] that virtual environments can be considered to comprise a mix of continuous and discrete processes. The ability to model both discrete and continuous components is an important part of VE description.

Secondly, time is not easily represented in the CSP models. For interactions based on hand pose, for example the head crusher select in Section 5.1.4, the length of time that the pose is held is particularly relevant to the interaction. This is especially important in view of mode considerations. The hand pose is the trigger for the HCS and signals the user’s desire to select an object in a VE. This changes the usage mode of the hand to a selection device.

Thirdly, the current description of continuous processes is awkward. For example, in one mode of the head-butt zoom technique, the user is continuously resizing the view rectangle. This process continues until either the technique is terminated or the user leans forward and intersects their head with the plane of the view rectangle. In the current description (Section 5.1.2), the resizing of the rectangle, the zoom-in and the zoom-out processes are all grouped under one parent process and chosen by use of an external choice operator. The order in which these processes are activated is not explicitly described. For example, the description does not make explicit that it is not possible to resize the view rectangle when the view is zoomed in (as the rectangle is then outside the user’s field of view).

These observations have led the current research to investigate the use of hybrid system modelling techniques to provide a platform for describing the continuous and discrete nature of interaction within virtual environments [1, 26]. As a bridge from the CSP described in this paper, we have investigated the application of hybrid CSP [27], which provides mechanisms for converting continuous variables to discrete variables over a communication channel, for modelling time-out events and for using boolean expressions over continuous variables. For example, boolean expressions can be used to guard continuous processes in the head-butt zoom interaction technique:

PerformHBZ = (PerformRect --> B1); (PerformZoomIn --> B2); PerformZoomOut --> PerformHBZ

B1 = leaning forward and intersection into the view rectangle
B2 = leaning backward and intersection from the view rectangle

This allows a more accurate specification of the interaction techniques to be produced.
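The same ordering can also be approximated in standard machine-readable CSP by reducing the boolean conditions B1 and B2 to atomic events signalled by the tracking processes. The sketch below (all channel names are illustrative, and ours rather than the paper's) makes explicit that resizing is only possible before the head crosses the rectangle, and that zooming out only follows zooming in.

    -- Discrete approximation of the guarded head-butt zoom ordering.
    channel resize, zoomin, zoomout, b1, b2

    PerformRect    = resize -> PerformRect [] b1 -> SKIP
    PerformZoomIn  = zoomin -> b2 -> SKIP
    PerformZoomOut = zoomout -> SKIP

    -- Sequential composition (;) fixes the order of the three phases.
    PerformHBZ = PerformRect ; PerformZoomIn ; PerformZoomOut ; PerformHBZ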

7. Conclusions

Due to the continuous and highly visual nature of virtual environments, defining salient and useful aspects of interaction is non-trivial. This is especially true as many aspects of VEs are still without concrete definitions and thus are difficult to measure. Research into this unknown territory requires careful consideration. We feel that, as the final objective is unclear, beginning research at an abstract modelling level is an appropriate starting point.

The use of statechart-based modelling techniques has been investigated. Unfortunately, state-based techniques do not show the features of the interaction that make different techniques distinctive. A process-driven approach based on CSP has been used to try to overcome this limitation.

Unfortunately, CSP also has limitations when applied to highly dynamic virtual environments. In particular, the sense of continuity of action and feedback within particular phases of interaction is not well captured by the discrete nature of the CSP model. It is possible that some mix of state- and process-based techniques will be required to enable effective modelling of the interaction techniques; techniques for dealing with hybrid systems may also provide useful insight [27].

However, when describing interaction, consideration of the cognitive processes of the user as they interact with the system is also required. If interaction is to be successful, the user cognitive model will also have to be factored into the overall model. The construction of a user’s cognitive framework and the way users utilise resources within a virtual environment are some of the next areas of research for the INQUISITIVE project.

8. Acknowledgements

The authors wish to thank Patrick Olivier for his helpful comments on the final draft of this report. This work was supported in part by the UK EPSRC INQUISITIVE project, Grant GR/L53199.


References

1. S. Smith, D. Duke and M. Massink. The Hybrid World of Virtual Environments. Accepted for publication at Eurographics’99, Milan, Italy. 1999.
2. M. R. Mine. Virtual Environment Interaction Techniques. Technical Report TR95-018. University of North Carolina, 1995.
3. J. K. Salisbury and M. A. Srinivasan, editors. Proceedings of the Second PHANToM Users Group Workshop, Cambridge, MA. 1997.
4. D. J. Duke, P. J. Barnard, D. A. Duce and J. May. Syndetic Modelling. In Human-Computer Interaction, 13: 337-393. 1999.
5. N. Higgett and S. Bhullar. An Investigation into the Application of a Virtual Environment for Fire Evacuation Mission Rehearsal Training. In Eurographics 16th Annual Conference, Leeds, pp. 87-96. 1998.
6. R. Pausch, D. Proffitt and G. Williams. Quantifying Immersion in Virtual Reality. In SIGGRAPH ’97, Los Angeles, pp. 13-18. 1997.
7. A. Monk. Mode errors: a user-centered analysis and some preventative measures using keying-contingent sound. In International Journal of Man-Machine Studies, 24: 313-327. 1986.
8. A. Andre and A. Degani. Do You Know What Mode You’re In? An Analysis Of Mode Error In Everyday Things. In M. Mouloua and J. Koonce, editors, Human-Automation Interaction, pp. 19-28. Lawrence Erlbaum Associates: New Jersey, 1997.
9. A. Degani, M. Shafto and A. Kirlik. Modes in Automated Cockpits: Problems, Data Analysis and a Modelling Framework. In 36th Israel Annual Conference on Aerospace Sciences. Haifa, 1996.
10. M. R. Mine, F. P. Brooks Jr. and C. H. Sequin. Moving Objects in Space: Exploiting Proprioception In Virtual-Environment Interaction. In SIGGRAPH ’97, pp. 19-26. ACM Press, 1997.
11. R. Pausch, T. Burnette, D. Brockway and M. E. Weiblen. Navigation and Locomotion in Virtual Worlds via Flight into Hand-Held Miniatures. In ACM SIGGRAPH ’95, Los Angeles, pp. 399-400. ACM Press, 1995.
12. J. S. Pierce, A. Forsberg, M. J. Conway, S. Hong and R. Zeleznik. Image Plane Interaction Techniques In 3D Immersive Environments. In 1997 Symposium on Interactive 3D Graphics, pp. 39-44. 1997.
13. I. Poupyrev, M. Billinghurst, S. Weghorst and T. Ichikawa. The Go-Go Interaction Technique: Non-Linear Mapping for Direct Manipulation in VR. In User Interface Software and Technology ’96, Seattle, pp. 79-80. ACM Press, 1996.
14. J. Chung. Intuitive Navigation in the Targeting of Radiation Therapy Treatment Beams. PhD thesis, University of North Carolina, 1994.
15. D. B. Conner, S. S. Snibbe and K. P. Herndon. Three-Dimensional Widgets. In D. Zeltzer, editor, Symposium on Interactive 3D Graphics: Special Issue of Computer Graphics 1992, pp. 183-188. ACM Press, 1992.
16. S. Stansfield, D. Shawver and A. Sobel. MediSim: A Prototype VR System for Training Medical First Responders. In IEEE 1998 Virtual Reality Annual International Symposium, Atlanta, Georgia, pp. 198-205. IEEE Computer Society, 1998.
17. E. Gobbetti, J.-F. Balaguer and D. Thalmann. VB2: An Architecture for Interaction in Synthetic Worlds. In ACM Symposium on User Interface Software and Technology, Atlanta, pp. 167-178. ACM Press, 1993.
18. K. Vaananen and K. Bohm. Gesture Driven Interaction as a Human Factor in Virtual Environments - An Approach with Neural Networks. In Virtual Reality Systems, pp. 93-106. Academic Press, 1992.
19. D. Harel. On Visual Formalisms. In Communications of the ACM, 31(5), pp. 514-530. ACM, 1988.
20. N. G. Leveson, M. P. E. Heimdahl, H. Hildreth and J. D. Reese. Requirements Specification for Process-Control Systems. In IEEE Transactions on Software Engineering, 20(9), pp. 684-707. 1994.
21. A. Degani and M. G. Shafto. A Framework for Modelling Human-Machine Interaction. In Proceedings of Computer Human Interaction ’97, pp. 1-3. 1997.
22. C. A. R. Hoare. Communicating Sequential Processes. Prentice/Hall International, Englewood Cliffs, New Jersey, 1985.
23. G. Smith. A Semantic Integration of Object-Z and CSP for the Specification of Concurrent Systems. In J. Fitzgerald, C. B. Jones and P. Lucas, editors, Formal Methods Europe (FME’97), pp. 62-81. Springer-Verlag, 1997.
24. R. Duke, G. Rose and G. Smith. Object-Z: A specification language advocated for the description of standards. In Computer Standards and Interfaces, 17, pp. 511-533. 1995.
25. R. J. K. Jacob. A Visual Language for Non-WIMP User Interfaces. In IEEE Symposium on Visual Languages, pp. 231-238. IEEE Computer Society Press, 1996.
26. S. Smith and D. Duke. Virtual Environments as Hybrid Systems. In Proceedings of Eurographics UK 17th Annual Conference (EG-UK’99), Cambridge, UK, pp. 113-128. Eurographics UK Chapter, 1999.
27. Z. Chaochen, W. Ji and A. P. Ravn. A Formal Description of Hybrid Systems. In R. Alur, T. A. Henzinger and E. D. Sontag, editors, Hybrid Systems III: Verification and Control, pp. 511-530. Springer, 1996.