Modelling complex user experiences in distributed interaction environments Fabio Pittarello and Daniela Fogli

Abstract— This work focuses on the so-called mixed reality domain, where interaction is increasingly becoming a complex matter: the user navigates the different locations of a 3D virtual or real world, manipulates objects and accesses content with different degrees of heterogeneity and synchronization. In order to control such complexity we specify the 3D environment by a multilevel control finite state machine using the statecharts notation, and we model the interaction with the environment and the multimedia content communicated to the user through the concepts of experience process and experience pattern. A set of distributed software agents logs and processes data related to the user experience, controlling interaction and data presentation at run-time.

Index Terms— 3D environment, software agents, user experience, experience pattern, interaction pattern.

I. INTRODUCTION

Interaction in 3D environments is increasingly becoming a complex matter where users navigate the scene in order to accomplish different goals. This is true for all the segments of the so-called mixed reality [13] domain, characterized by different mixtures of real and virtual 3D elements that the user navigates with her/his real body or virtual counterpart (i.e., avatar). Given the complexity of such environments, the term experience, usually referring to human activity in real life, can be successfully applied. In fact, a variety of features, such as subjective involvement in the scene and progressive evolution of the user behavior on the basis of the activity previously performed, also characterizes human behavior in mixed reality environments. Such environments typically contain interactive objects distributed across the scene; the user experience itself is the result of an interaction distributed across the different locations of the environment. Satisfactorily monitoring and controlling such activity with a set of sensors and actuators distributed over the environment is a complex task that may be simplified by an accurate model of the user experience. Using such a model may increase overall user satisfaction and prevent interaction errors.

D. Fogli is with the Dipartimento di Elettronica per l'Automazione at the University of Brescia, Brescia, Italy (e-mail: [email protected]). F. Pittarello is with the Dipartimento di Informatica at the University of Ca' Foscari, Venice, Italy ([email protected]).

Our approach capitalizes on the Interaction Locus (IL) model [15][16] and on the concepts of interaction process and interaction pattern, previously introduced by the authors; it goes a step further by discussing the concepts of experience process and experience pattern, which extend the typologies of experiences that can be modeled. An interaction process [4] is a detailed log of user activity and system reactions. Interaction patterns are recurrent sub-sequences of an interaction process; recognizing them is useful both for controlling and for proactively adapting user interaction. While interaction patterns make it possible to cope with complex situations, the approach falls short where an explicit control of content fruition is required. An increasing number of experiences in 3D worlds (e.g., educational experiences) devote an important part to content fruition. In such experiences the explicit knowledge of the portion of content browsed by the user (e.g., text, images, sounds and also parts of the 3D environment itself) becomes an important input for the system and can affect the evolution of the experience itself. The need for a more sophisticated means of modeling the overall user activity, including a detailed view of the multimedia content browsed, led us to define the concepts of experience process and experience pattern. These concepts were introduced in [17]; the present work provides a formal specification of the concepts and a detailed presentation of the multilevel control finite state machine (CFSM) specifying 3D environments. A description of the relations between such a machine and the logging processes that permit to control and simplify the user experience is also presented. Our approach takes advantage of a set of coordinated agents distributed in the environment for monitoring and controlling the user experience.
Such an approach is also useful for optimizing system performance and is suited to mixed reality paradigms characterized by distributed intelligence. The paper is organized as follows: Section II describes related work. In Section III we summarize previous results on the definition of structured 3D environments monitored by a set of coordinated agents; the concepts of interaction process and pattern are presented. In Section IV we introduce the concepts of experience process and pattern. Section V describes three case studies; a more formal description of the environment and of the user experience for one of these cases is given in Section VI. Some final remarks conclude the paper.

II. RELATED WORK

A key component of our proposal is the introduction of a set of agents [11], which act in the virtual/real world keeping

track of the initial background of the human involved in the experience, of the relevant interactions and of the kinds of multimedia content enjoyed by the users during their navigation within the 3D environment. Agents have been considered in the context of intelligent workspaces for organizing the use of different devices; application examples include systems that enhance real environments for managing meetings and retrieving information through a multi-modal interface [7]. Structuring 3D environments is an important step towards controlling them. Some approaches [15] establish at design time the relations between the morphology of space and the activity performed; other approaches try to derive semi-automatically the partitions of space (i.e., activity zones [10]) from the observation of user actions. In any case, defining zones for different classes of activity can be useful for defining tasks and areas of influence of a set of agents distributed in the environment. Navigational modalities have been explored in the hypermedia realm [5], leading to the definition of significant paradigms such as indexes and guided tours for moving through content. Modeling navigation inside 3D scenes is more complicated, due to the additional degrees of freedom potentially offered by such environments. A wide range of paradigms has been defined, ranging from free navigation to different types of constrained navigation (i.e., guided tours or other paradigms where the orientation and/or the translation of the user viewpoint are partially or totally controlled by the system). While free navigation paradigms are useful for exploring the environment in detail, recent research shows that constrained navigation can be helpful for supporting users in completing their tasks and for preventing them from getting lost [6]. Not only structuring 3D environments, but also formally specifying the interaction of the users with them, is crucial to obtain better control.
Several research groups have proposed systemic models of Human-Computer Interaction (HCI) processes. Barnard et al. describe the user-computer interaction process as a syndetic system, i.e. a system binding together subsystems of a different nature: the human as a cognitive subsystem on the one hand, and the computer as a computing artefact on the other [1]. Our work is based on the model proposed in the past by the Pictorial Computing Laboratory (PCL) [2] and successively extended in [3]. It regards HCI as a process in which the user and the computer communicate by materializing and interpreting a sequence of messages at successive instants of time. The activities performed by the computing sub-system are formalized by taking into account this symmetry and the fact that two interpretations of each message exist, one by the human and one by the computer.

III. BACKGROUND

This section briefly introduces the models and concepts on which this work is based. In particular, it capitalizes on the Interaction Locus (IL) model [15][16]. The IL approach aims at giving a structure to 3D environments belonging to all the segments of the mixed reality domain. The approach, originally meant to support user navigation, has been progressively extended to support authoring, user profiles, access to information with different devices and user interaction. Basically, the 3D environment is divided into a set of locations characterized

by a recognizable morphology; a set of allowed/forbidden user interactions and multimedia content is associated with each IL. The IL has gradually changed its nature to include not only information necessary for navigation (e.g., labels and sounds identifying locations), but also complex content useful for augmenting the user's knowledge. ILs are organized in hierarchies and the properties of a given level can be inherited by the inner one. The observation of the user interaction within a set of ILs is at the basis of the approach proposed here, aimed at controlling and adapting both the interaction and the fruition of multimedia content. In previous work we defined the concepts of interaction process and interaction pattern [4]. The formalization of the concept of interaction pattern was obtained by adopting the PCL model of HCI [2]. Some details on the specification of interaction within ILs are introduced here for the sake of explanation. In mixed reality environments humans and computer systems communicate by exchanging multimodal messages; computer messages may be the images which appear on the screen of a computer or a palmtop, the sound from speakers, or the mixed scene composed of virtual and real elements with which the user may interact. A human interprets the image on the screen, the sound from audio speakers or the force from a haptic device by recognizing characteristic structures. More precisely:

Def. 1: A characteristic structure (cs) is a set of system-generated events which may be perceived by the user as a functional or perceptual unit.

The css are materialized on the output devices to become perceptible by the users; they are the output events of the virtual entities [3]. Virtual entities are used by the designer to specify the system structure and behavior:

Def. 2: A virtual entity (ve) is a virtual dynamic open system that exists only as the result of the execution of a program P by a computer.
Actually, P is a system of programs, some of which (Input programs) acquire the input events generated by the user actions, some compute the ve reactions to these events (Application programs), and some output the results of this computation (Output programs). More precisely:

Def. 3: A program is specified as P = <In, Ap, Out>, where In denotes the input programs, Ap denotes the application programs and Out denotes the output programs.

During an interaction, the user operates on some input device to manifest his/her requirements or commands to the ve. The ve captures the input events generated by user actions and reacts to them generating output events toward users (i.e., the css). More precisely, a ve is defined on input and output alphabets.

Def. 4: The input alphabet A of a ve is a finite set of user activities.

Def. 5: The output alphabet O of a ve is the set of possible css generated by the ve as output events.

The user activity is defined as follows:

Def. 6: A user activity is specified as a = <op, cs>, where op (operation) denotes the sequence of events perceived by the machine as a consequence of the user action on some input device at a given step of the interaction, and cs is a characteristic structure of a ve.

Given a ve, its current state is called characteristic pattern.

Def. 7: A characteristic pattern (cp) is specified as cp = <cs, d, int, mat>, where the characteristic structure (cs) is a set of user-perceivable events managed by the In and Out programs, d is a suitable description of the state of the programs Ap, int (interpretation) is a function mapping the current cs onto d, and mat (materialization) is a function mapping d onto cs.

A ve may thus be specified as a dynamic open system.

Def. 8: A virtual entity is a 5-tuple ve = <S, O, f, η, s0> on the input alphabet A, where:
1. S is the set of admissible ve states, i.e. the set of its cps;
2. O is the output alphabet of the ve;
3. f: A × S → S is the next-state function;
4. η: A × S → O is the output function;
5. s0 = cp0 is the initial state of the ve.

This kind of specification can be described in a diagrammatic way through the use of a control finite state machine (CFSM). As a consequence, the following definition of interaction process can be provided.

Def. 9: An interaction process is a sequence of triples <s_t, a_t, s_t+1>, where s_t is the state of the CFSM, a_t is the activity performed by the user at a certain time t, and s_t+1 is the new state of the ve after the user activity is captured and managed.

From this definition, that of interaction pattern can be derived.

Def. 10: An interaction pattern is a recurring sub-sequence of an interaction process.

A software architecture based on agents [4] has been proposed to log the interaction process, recognize the recurring interaction patterns and proactively adapt interaction with ILs, trying to anticipate the user needs. In particular, two kinds of agents have been defined, one associated with each IL, and the other associated with the user. The first agent, called genius loci, takes care of the place by giving the visitors the opportunity to get the most benefit from its exploration.
The second agent, called numen, follows the user during navigation, accumulating and managing knowledge about him/her. The numen knows the user profile, accumulates the exploration history across several places, and is able to interact with the genii of the different places in order to give them information about how to help the user in his/her visit. The two agents mediate the interaction between the user and the environment, accumulating, maintaining and exchanging knowledge about the user and the interaction place.
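To make Defs. 8-10 concrete, the following sketch (our own illustration, not the authors' implementation) encodes a ve as a finite state machine whose step function applies the next-state function f and the output function η, appending the resulting triple to an interaction process log. The toy transition table and all names are invented:

```python
# Sketch of Def. 8 (ve as a CFSM) and Def. 9 (interaction process).
# f: A x S -> S is the next-state function, eta: A x S -> O the output
# function; the log collects triples (s_t, a_t, s_t+1).

class VirtualEntity:
    def __init__(self, initial, f, eta):
        self.state = initial          # s0 = cp0, the initial state
        self.f = f                    # next-state function
        self.eta = eta                # output function (produces a cs)
        self.log = []                 # interaction process: list of triples

    def step(self, activity):
        s_t = self.state
        s_next = self.f(activity, s_t)
        cs = self.eta(activity, s_t)  # characteristic structure shown to user
        self.state = s_next
        self.log.append((s_t, activity, s_next))
        return cs

# A toy "painting" IL: selecting Text shows the description, Back returns.
transitions = {("Sel Text", "overview"): "showingText",
               ("Sel Back", "showingText"): "overview"}
ve = VirtualEntity(
    initial="overview",
    f=lambda a, s: transitions.get((a, s), s),
    eta=lambda a, s: f"cs for {transitions.get((a, s), s)}")

ve.step("Sel Text")
ve.step("Sel Back")
print(ve.log)
# [('overview', 'Sel Text', 'showingText'), ('showingText', 'Sel Back', 'overview')]
```

A recurring sub-sequence of `ve.log` would then be an interaction pattern in the sense of Def. 10.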

IV. INTRODUCING THE EXPERIENCE PATTERN

The need for a more detailed monitoring of the user activity inside the 3D environment arises from the introduction of complex experiences where content fruition is a substantial part of the experience itself and may affect the rest of the interactive session. In such experiences the distributed and cooperating agents need a means for sharing the knowledge of the content browsed, in order to control the experience. The concepts of experience process and experience pattern satisfy these requirements. In fact, while the interaction process represented the path of the user among the different interaction opportunities and recorded simply the interaction steps (e.g., the user clicks button1 inside IL1), the experience process represents the path of the user among the different interaction and content fruition opportunities and records both the interaction steps and the content browsed (i.e., at least a summarization of it). Recurring sequences of an experience process can then be extracted by the agents, giving indications about users' habits, preferences and information needs. We call such sequences experience patterns. An additional task for agents, introduced in this work, is the progressive building of a map of the user's knowledge. The result of such activity can then be used for enabling appropriate control or for proactive behaviour; in both cases the target of the agents' action can be content presentation or user interaction. On the basis of these premises, we may give the following formal definitions of experience process and experience pattern:

Def. 11: An experience process is a sequence of 4-tuples <s_t, a_t, s_t+1, content_t+1>, where s_t is a state of the 3D environment, a_t is the input activity performed by the user, s_t+1 is the state reached by the 3D environment as a reaction to a_t, and content_t+1 is a (machine-readable) description of the multimedia content enjoyed by the user as a consequence of his/her activity a_t.

Def. 12: An experience pattern is a recurrent sub-sequence of the experience process.

Therefore, experience patterns are still sequences of 4-tuples that can be extracted from the experience process, for example by exploiting literature algorithms such as those described in [12]. Interaction patterns can alternatively be described as recurrent sub-sequences of the experience process projected on the state and activity dimensions, thus becoming sequences of triples <s_t, a_t, s_t+1> coherently with Definitions 9 and 10.

V. THREE CASE STUDIES

In order to clarify the concepts described above, the following sections present three case studies where such concepts can be successfully applied; a more formal description related to the second case study, evidencing the relations with the multilevel CFSM specifying the 3D environment, follows in Section VI. The three case studies are classified according to the complexity of the navigation models available for the different kinds of experience proposed. Figure 1 illustrates the meaning of the symbols used in the figures associated with the examples.

[Figure 1 legend: labeled boundaries of nested ILs; navigation step along a path; navigation step along an alternative path; relation between an IL and its associated information; hierarchy of multimedia info associated to an IL, where browsed content is represented as white circles. The original diagram is not reproduced here.]

Figure 1. Explanatory list of the symbols.
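The extraction of experience patterns in the sense of Def. 12 can be sketched as follows (our own illustration, not from the paper): a simple fixed-length n-gram count over the experience process; a production system could instead use algorithms from the data compression literature [12]. The toy states, activities and content strings are invented:

```python
from collections import Counter

def experience_patterns(process, length=2, min_count=2):
    """Return the sub-sequences of `length` 4-tuples seen >= min_count times."""
    grams = Counter(tuple(process[i:i + length])
                    for i in range(len(process) - length + 1))
    return [list(g) for g, c in grams.items() if c >= min_count]

# Toy experience process: 4-tuples <s_t, a_t, s_t+1, content_t+1>,
# with states and content reduced to short strings.
enter = ("room 1.1", "enter IL", "obj 1.1.1", "overview")
read = ("obj 1.1.1", "sel text", "obj 1.1.1", "description")
leave = ("obj 1.1.1", "exit IL", "room 1.1", None)

process = [enter, read, leave, enter, read]
patterns = experience_patterns(process)
print(patterns)   # [[enter, read]] — the user repeatedly enters and reads
```

Projecting each 4-tuple on its first three components would yield the corresponding interaction patterns of Def. 10.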

Guided tour. In this navigation modality the user follows the steps of a predefined path, which reflects the conceptual order conceived by the author of the experience for browsing the 3D world and accessing content; it is meant for all categories of users, with particular reference to novices that need assistance for moving across a complex environment. Figure 2 shows the layout of the environment (i.e., a cultural exhibition area organized in rooms containing works of art) and the set of hierarchically ordered ILs with associated multimedia information. In general, multimedia information has a hierarchical structure, as shown in Figure 1; hypertextual paths among the different informational nodes may be more complex, but they are not shown for the sake of simplicity. The user experience is the result of his/her navigation in the 3D world and of his/her content browsing choices inside the multimedia information hierarchies. The navigation steps done by the user inside the environment are ordered progressively from s1 to s10. In the example, at first the user enters exhibition area 1 (a first-level IL) and browses information chunks a, b and c. Then s/he enters room 1.1 (a second-level nested IL) and browses a part of the associated content (e.g., a, b, c2, d1 and d2); the path then leads him/her to object 1.1.1 of the room (i.e., a third-level nested IL), giving him/her the opportunity to browse its content. The visit of the environment proceeds to the end according to the design of the authors. Note that the browsed content is only a portion of the available multimedia information. In this situation the set of agents logs the user activity into the experience process and uses it to adapt the content presented to the user, basing its action either on a classification of information or on text indexing.
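As a concrete illustration (our own toy encoding, not the paper's format), a fragment of the experience process logged during the guided tour above might look as follows, with states reduced to (area, room, object) arrays and invented content strings:

```python
# Toy fragment of the guided-tour experience process of Figure 2, written
# as 4-tuples <s_t, a_t, s_t+1, content_t+1>. Node names follow the text;
# the content descriptions are invented stand-ins.

log = [
    ((None, None, None), "enter area 1", ("area 1", None, None), "chunk a"),
    (("area 1", None, None), "enter room 1.1",
     ("area 1", "room 1.1", None), "chunk a"),
    (("area 1", "room 1.1", None), "browse c2",
     ("area 1", "room 1.1", None), "chunk c2"),
    (("area 1", "room 1.1", None), "approach obj 1.1.1",
     ("area 1", "room 1.1", "obj 1.1.1"), "object overview"),
]

# The browsed portion (the white circles of Figure 2) is the projection of
# the log on the content dimension:
browsed = [content for _, _, _, content in log if content]
print(browsed)
# ['chunk a', 'chunk a', 'chunk c2', 'object overview']
```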

[Figure 2 shows the exhibition area with rooms 1.1-1.3, their objects, the user path s1-s10 and, for each IL, the hierarchy of associated information nodes with the browsed ones highlighted; the original diagram is not reproduced here.]

Figure 2. The user path and content fruition in a guided tour.

In the first case a preliminary organization of information nodes into classes is required. The coordinated agents recognize the recurring experience patterns and proactively present to the user information from the classes s/he has demonstrated to be most interested in. For example, if the user has demonstrated constant interest in a specific class of information (e.g., the artist's biographical sketch), browsed in node d1 of object 1.1.1 and in node f of object 1.1.2, when s/he approaches object 1.2.1 s/he will be presented with information node d belonging to the same class. In the latter case content adaptation is based on the indexing of the text contained in multimedia nodes, performed by the genii loci as part of their monitoring work. Indexed information is saved into the experience process and used by the numen for building a progressive map of the user's knowledge. Such a map is then shared with the genii of the following ILs, which compare it with the indexed information related to the locations they control; in the end, they may proactively select and present information on the basis of content matching. In both cases, according to well-known usability guidelines [14], proactive behaviour should be coupled with the option to go back to a default entry point (e.g., the root of the multimedia information hierarchy for the current IL) that the user may select in case s/he gets disoriented or lost.

[Figure 3 shows the same environment with the suggested path (black line), the user's deviations (dashed line) and the navigation steps s1-s12; the original diagram is not reproduced here.]

Figure 3. The user path and content fruition in a free wandering situation.
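The class-based adaptation strategy described for the guided tour can be sketched in a few lines (a minimal illustration with invented node identifiers and class names, not the authors' implementation): the agents count the classes of the nodes the user actually browsed and promote, among the candidate nodes of the next IL, those of the preferred class:

```python
from collections import Counter

# Pre-assigned classification of information nodes (hypothetical ids).
node_class = {"1.1.1/d1": "biography", "1.1.2/f": "biography",
              "1.1.1/a": "technique", "1.2.1/d": "biography",
              "1.2.1/a": "technique"}

def preferred_class(browsed):
    """Most frequently browsed class, or None if nothing classified yet."""
    counts = Counter(node_class[n] for n in browsed if n in node_class)
    return counts.most_common(1)[0][0] if counts else None

def promote(candidates, browsed):
    """Order the candidate nodes of the next IL, preferred class first."""
    pref = preferred_class(browsed)
    return sorted(candidates, key=lambda n: node_class.get(n) != pref)

# The user browsed two biography nodes (d1 of obj 1.1.1, f of obj 1.1.2),
# so the biography node d of obj 1.2.1 is proposed first.
print(promote(["1.2.1/a", "1.2.1/d"], ["1.1.1/d1", "1.1.2/f"]))
# ['1.2.1/d', '1.2.1/a']
```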

Free wandering. This navigational paradigm is meant for expert users who decide their own path through the 3D world; this modality enhances the agents' activity of selecting different content in relation to previous user movements. Figure 3 shows an example of a 3D environment where the user is not constrained to a specific navigation path. In the example the user decides on some occasions to deviate (dashed line) from the suggested path (black line). The navigational steps done by the user are shown in the figure, ordered progressively from s1 to s12. In this experience, the genii loci log the navigation done and the content browsed into the experience process and communicate such information to the numen; the numen progressively builds a map of the knowledge acquired by the user. Such knowledge is shared with the genius of the following IL, which, after comparing the user knowledge with the requirements for that IL, may suggest that the user come back to acquire missing information. Alternatively, the genius loci may proactively shorten the path along the associated information hierarchy if the user knowledge already incorporates notions that are also redundantly available for that IL.
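The numen/genius-loci cooperation just described can be sketched as follows (all class and topic names are invented for illustration; this is not the paper's architecture, which is described in [17]): the numen accumulates a map of topics covered by the browsed content, and each genius loci compares it against its IL's prerequisites, either suggesting to go back or shortening the default information path:

```python
# Sketch of the free-wandering cooperation between numen and genius loci.

class Numen:
    def __init__(self):
        self.knowledge = set()       # topics indexed from browsed content

    def record(self, topics):
        self.knowledge |= set(topics)

class GeniusLoci:
    def __init__(self, required, default_path):
        self.required = set(required)     # prerequisite topics for this IL
        self.default_path = default_path  # ordered (node, topics) pairs

    def advise(self, numen):
        missing = self.required - numen.knowledge
        if missing:
            # the user lacks prerequisites: suggest coming back later
            return ("go back", sorted(missing))
        # shorten the path: skip nodes whose topics are already known
        path = [node for node, topics in self.default_path
                if not set(topics) <= numen.knowledge]
        return ("proceed", path)

numen = Numen()
numen.record(["baroque", "brushwork"])          # indexed from earlier ILs
genius = GeniusLoci(required=["baroque"],
                    default_path=[("a", ["baroque"]), ("b", ["patronage"])])
print(genius.advise(numen))   # ('proceed', ['b']) — node a is skipped
```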


[Figure 4 shows the e-learning environment with its rooms, objects and the user path of the conditional access example; the original diagram is not reproduced here.]

Figure 4. The user path and content fruition in a conditional access situation.

Conditional access. In this case access to the different zones is conditioned on the satisfaction of certain requirements related to user activity in other ILs of the environment. This is a typical situation in e-learning environments where the user has to prove to be able to perform certain operations or to have acquired knowledge about specific arguments. Figure 4 shows an example of a 3D learning environment where the user moves through the different zones of the environment, augmenting his/her knowledge and solving questions in order to proceed. For example, the user entering room 1.1 is invited to go to room 1.2 to answer a set of questions and gain full access to room 1.1 content. Navigation through the nodes of the associated multimedia information is regulated by the genii on the basis of previous behaviour (e.g., the default path through the information nodes for a user entering room 1.1 is a-b-c1; such path changes to a-b-c2-d1/d2 when the user comes back after having answered correctly the questions contained in room 1.2). Generally speaking, each genius loci holds the control of a specific zone and the knowledge of the requirements for allowing access to users. Again the numen progressively builds, on the basis of the experience process monitored by the genii of the visited ILs, a map of the interactions done by the user and of the knowledge acquired by him/her. The result of its activity is shared with the current genius loci, allowing it to act appropriately, allowing/denying access and presenting appropriate information. This scheme enables the creation of multi-authored e-learning experiences, where each author holds the knowledge related to a certain zone of the learning path and establishes the requirements for accessing it.

VI. FORMAL DESCRIPTION OF THE USER EXPERIENCE

Following the definitions given in Sections III and IV, we now give a more formal description of the environment and of the user experience described for the free wandering case. We characterize each IL as a virtual entity. Each ve, representing an IL of a given level, may be composed of other ves representing nested ILs of the lower level; such a hierarchical structure can be easily specified using statecharts [8], which permit to model systems at different levels of abstraction. This specification allows us to provide a formal definition of the environment and of the experience processes. In the case at hand, the ILs in the 3D environment belong to three possible levels: 1) cultural exhibition area; 2) room; 3) interaction object. Each kind of IL provides some interaction possibilities allowing the user to change the state of the environment and to enjoy the multimedia content associated with the IL. The statechart describing a 3D environment with three cultural exhibition areas can be represented at a high level of abstraction as in Figure 5, where the specification does not give any details about the composition of the rooms. We adopt here the notation proposed in [9]. Note the history symbol H, which is exploited to remember the last visited state belonging to the lower level; after the first visit, that state is taken as the initial one.
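The role of the history symbol H can be sketched in a few lines (an illustrative assumption-laden sketch; the statechart semantics follow Harel [8], and all names are invented): re-entering an exhibition area resumes from the last visited room rather than from the default one:

```python
# Minimal sketch of a state with history (the H connector of Figure 5).

class AreaState:
    def __init__(self, name, default_room):
        self.name = name
        self.default_room = default_room
        self.history = None          # H: last visited sub-state, if any
        self.current = None

    def enter(self):
        # History semantics: resume from the remembered room after the
        # first visit, otherwise use the default entry room.
        self.current = self.history or self.default_room
        return self.current

    def select_room(self, room):
        self.current = room
        self.history = room          # update H on every transition

area1 = AreaState("Cultural exhibition area 1", "Room 1.1")
print(area1.enter())          # 'Room 1.1'  (first visit: default entry)
area1.select_room("Room 1.3")
# ... the user leaves for area 2, then comes back:
print(area1.enter())          # 'Room 1.3'  (H restores the last room)
```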

Figure 5. The statechart specifying the 3D scene at a high level of abstraction.

Actually, each lower-level state of an exhibition area represents a set of states of a virtual entity corresponding to an IL of type "room". Such an IL can be described, at a lower level, by the interaction objects behaving in the room. Figure 6 illustrates the lower-level statechart specifying a room containing two independent interaction objects, represented as two state diagrams that run concurrently.

Figure 6. The statechart specifying the concurrent components of a room.

Each interaction object is in turn an IL and thus can be specified as a virtual entity described by a further lower-level state diagram. An example is shown in Figure 7, which illustrates the statechart specifying an interactive object (e.g., a painting) with which it is possible to interact to see its description or the author's biography. Moreover, the description may include hyperlinks associated with some keywords: interacting with them makes the object move to a new state, providing the user with new multimedia information.
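The object of Figure 7 can be sketched as a small transition table that both changes the object state and emits the content description recorded in the experience process of Def. 11 (our own illustration; the state names follow Figure 7, while the content strings and the flat table are invented). The full environment state is the [area, room, object, object-state] array described below:

```python
# Sketch of the Figure 7 statechart with experience-process logging.
# Each transition yields (next object state, indexed content description).

transitions = {
    ("showingObjectOverview", "Sel Text"): ("showingText", "description of the work"),
    ("showingObjectOverview", "Sel Author"): ("showingAuthor", "author biography"),
    ("showingText", "Sel Hyperlink"): ("showingText-in-hyperlink", "keyword detail"),
    ("showingText", "Sel Back"): ("showingObjectOverview", None),
    ("showingText-in-hyperlink", "Sel Back"): ("showingText", None),
    ("showingAuthor", "Sel Back"): ("showingObjectOverview", None),
}

experience_process = []    # 4-tuples <s_t, a_t, s_t+1, content_t+1>
state = ["area 1", "room 1.1", "object 1.1.1", "showingObjectOverview"]

def act(activity):
    s_t = list(state)
    next_obj_state, content = transitions[(state[3], activity)]
    state[3] = next_obj_state
    experience_process.append((s_t, activity, list(state), content))

act("Sel Author")
print(experience_process[-1])
# (['area 1', 'room 1.1', 'object 1.1.1', 'showingObjectOverview'],
#  'Sel Author',
#  ['area 1', 'room 1.1', 'object 1.1.1', 'showingAuthor'],
#  'author biography')
```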

[Figure 7's statechart comprises the states showingObjectOverview, showingText, showingAuthor and showingText-in-hyperlink, connected by Sel Text, Sel Author, Sel Hyperlink and Sel Back transitions, with a history connector H; the original diagram is not reproduced here.]

Figure 7. The statechart specifying an interactive object.

As explained in the previous section, at each level of the 3D environment the user may enjoy different kinds of multimedia content. Multimedia content is obtained as a consequence of the user performing an activity, i.e. an operation with respect to a characteristic structure. As previously mentioned, each user activity is captured by the ve to which the cs belongs and, besides determining a possible state change, it also fires the execution of a computational function providing the associated multimedia content. Therefore, in terms of the statechart, the multimedia content is the result of the output function. Each state s of the statechart is described by an array [a, b, c, ..., z] of variables, each one associated with a different nested level of the statechart. For example, in the example described above, s can assume the value [cultural exhibition area 1, room 1.1, object 1.1.1, showingAuthor]; in this case the associated content is an indexed version of the multimedia content describing the author's biography, that is, a synthetic description of the multimedia content provided to the user. As already mentioned, multimedia content can be associated with any kind of IL. For example, when the user is in cultural exhibition area 1 and enters room 1.2, s/he might enjoy the multimedia content associated with the room, e.g. a particular piece of music related to the theme treated in the room. In this case, state s_t is [cultural exhibition area 1, room 1.1, null, null], a_t is the activity of entering room 1.2, s_t+1 is [cultural exhibition area 1, room 1.2, null, null], and content_t+1 is an indexed version of the music title. (The symbol null means that the variables indicating the interaction objects and their states do not have a meaningful value.) Concerning multimedia elements such as music, we require that each element associated with an IL have an alternative textual form expressing the same content (or at least a summarization of it); this requirement is coherent with the accessibility guidelines [18] for web hypermedia and enables an easy conversion and/or indexing of heterogeneous elements. The knowledge of the statecharts is distributed among the genii loci. Such decentralized knowledge is used cooperatively by the agents to log the experience process, extract the experience patterns and progressively build the map of the user knowledge. The final goal of this process is to control and proactively adapt both the navigation across distributed locations and the fruition of the multimedia content embedded in the interaction objects distributed in the environment.

VII. CONCLUSION

The approach presented in this paper improves on our previous work through the concepts of experience process and experience pattern, which permit to describe and control a wider range of experiences in 3D environments, including those where content fruition is an important part of the user activity and can determine its evolution. A component-based architecture based on the concepts discussed in this work is described in [17]. A pilot study on a scaled-down implementation of such architecture is currently being performed. The results of this study, focusing both on quantitative parameters (e.g. how long users take to perform a task with proactive features enabled) and qualitative factors (e.g. user satisfaction), encourage us to design an experimental system to verify all the features described in this work.

VIII. REFERENCES

[1] Barnard, P., May, J., Duke, D., Duce, D., Systems, Interactions, and Macrotheory. ACM Transactions on HCI, 7(2), 2000, 222-262.
[2] Bottoni, P., Costabile, M. F., Mussio, P., Specification and Dialog Control of Visual Interaction. ACM Transactions on Programming Languages and Systems, 21(6), 1999, 1077-1136.
[3] Costabile, M. F., Fogli, D., Fresta, G., Mussio, P., Piccinno, A., Software Environments for End-User Development and Tailoring. Psychnology, 2(1), 2004, 99-122.
[4] Fogli, D., Pittarello, F., Celentano, A., Mussio, P., Context-aware interaction in a mobile environment. Proc. Fifth International Symposium on Human Computer Interaction with Mobile Devices and Services (MobileHCI03), Udine, Italy, September 2003, 434-439.
[5] Garzotto, F., Paolini, P., Schwabe, D., HDM - A Model-Based Approach to Hypertext Application Design. ACM Transactions on Information Systems, 11(1), 1993, 1-26.
[6] Haik, E., Barker, T., Sapsford, J., Trainis, S., Investigation into Effective Navigation in Desktop Virtual Interfaces. Proc. 7th International Conference on 3D Web Technology, Tempe, Arizona, USA, 2002, 59-66.
[7] Hanssens, N., Kulkarni, A., Tuchida, R., Horton, T., Building Agent-Based Intelligent Workspaces. Proc. International Conference on Internet Computing, 2002, 675-681.
[8] Harel, D., Statecharts: A Visual Formalism for Complex Systems. Science of Computer Programming, July 1987, 231-274.
[9] Horrocks, I., Constructing the User Interface with Statecharts. Addison-Wesley, 1999.
[10] Koile, K., Tollmar, K., Demirdjian, D., Shrobe, H. E., Darrell, T., Activity Zones for Context-Aware Computing. Proc. Ubicomp 2003, 90-106.
[11] Jennings, N. R., An Agent-Based Approach for Building Complex Software Systems. Communications of the ACM, 44(4), 2001, 35-41.
[12] Lelewer, D. A., Hirschberg, D. S., Data Compression. ACM Computing Surveys, 19(3), 1987, 261-296.
[13] Milgram, P., Kishino, F., A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information Systems, E77-D(12), 1994, 1321-1329.
[14] Nielsen, J., Usability Engineering. Academic Press, 1993.
[15] Pittarello, F., Multi sensory 3D tours for cultural heritage: the Palazzo Grassi experience. Proc. ICHIM2001, Cultural heritage and technologies in the 3rd millennium, Milano, Italy, 2001, 73-90.
[16] Pittarello, F., Accessing Information Through Multimodal 3D Environments: Towards Universal Access. Universal Access in the Information Society Journal, 2(2), 2003, 189-204.
[17] Pittarello, F., Context-based Management of Multimedia Documents in 3D Navigational Environments. Proc. MIS 2005, Workshop on Multimedia Information Systems, Sorrento, Italy, September 2005 (in press).
[18] Web Accessibility Initiative, http://www.w3.org/WAI/
