Integrating Usability Engineering and Software Engineering in Mixed Reality System Development

Volker Paelke
Institute for Cartography and Geoinformatics, Leibniz Universität Hannover, Appelstr. 9a, 30167 Hannover, Germany, [email protected]

Karsten Nebe
C-LAB, Universität Paderborn, Fürstenallee 11, 33102 Paderborn, Germany, [email protected]


ABSTRACT
The integration of user centred design activities into software engineering processes is a challenge. This is especially true for next-generation user interfaces that employ interface paradigms like mixed reality. Guidance for designers and developers has to address the integration of software engineering and user centred design on all levels, from abstract standards to operational development. We analyze standards in software engineering and usability engineering and derive recommendations for integrated development processes. To support developers we propose the MVCE architecture as an extension of the common model-view-controller pattern to address the specific requirements of mixed reality interfaces through an additional environment component.

Categories and Subject Descriptors
H.5.2 [User Interfaces]: Theory and methods

Keywords
Usability Engineering, Software Engineering, Development Processes, Mixed Reality.

Figure 1: Operation principle of the SubVision MR system for visualizing underground infrastructures

1. Introduction and motivation
Mixed Reality (MR) integrates interactive computer graphics and other media into real-world environments (Milgram and Kishino, 1994). This contrasts with conventional user interfaces, where the complete display is generated from an internal model and is completely under the control of the software. In the past, limitations of the base technologies were the major problem, forcing researchers to focus on them (Azuma et al., 2001). Mixed reality user interfaces have high potential to simplify interaction, especially where interaction within a spatial context is required. To realize this potential it is necessary to consider usability engineering (UE). The development of productive MR applications that go beyond demonstrators also makes the use of software engineering processes necessary. Therefore, well-defined development processes from software engineering (SE) and the integration of user centred design activities become increasingly important to realize the potential of MR interfaces.

An MR system like SubVision (see figure 1) consists of a range of hardware and software components: The position and orientation of the user are determined by integrating information from several sensors. The environment and augmentation models maintained in a geographic information system (GIS) are retrieved and rendered as 3D graphics that are then fused into an augmented view of the environment, either using an optical-see-through device or a video-see-through approach. The defining feature of MR systems is the integration with a real environment. The application requires information about objects and spaces whose geometry and behavior are not under the control of the designer, but must be acquired from the real environment. Real objects can be subject to physical manipulation (e.g. in a maintenance task) or external forces. Therefore it must be possible to track state changes in the environment. Welch and Foxlin (2002) provide an overview of possible sensor technologies. In addition to real-time information about the environment, a spatial augmentation model must be managed by the application. For the development of MR software a number of libraries and toolkits are available (e.g. Schmalstieg et al., 2002). However, these focus mostly on technical aspects.
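To make this pipeline concrete, the following TypeScript sketch outlines a single frame of such a sensing, retrieval, rendering and fusion loop. The interfaces and names (PoseSensor, GisStore, Renderer, Compositor) are our own illustration and are not taken from SubVision or any specific toolkit.

```typescript
// Hypothetical interfaces sketching one frame of a SubVision-style MR pipeline.
// Names and signatures are illustrative only.

type Vec3 = [number, number, number];

interface Pose {
  position: Vec3;                                  // world coordinates of the user
  orientation: [number, number, number, number];   // viewing direction as a quaternion
}

interface PoseSensor {
  currentPose(): Pose;      // fused estimate from several sensors (GNSS, inertial, vision)
}

interface SceneModel { }    // 3D geometry of pipes, cables, terrain, ...

interface GisStore {
  query(center: Vec3, radiusMeters: number): SceneModel;    // environment + augmentation models
}

interface Renderer {
  render(scene: SceneModel, pose: Pose): Uint8ClampedArray;  // registered 3D overlay (RGBA)
}

interface Compositor {
  fuse(overlay: Uint8ClampedArray): void;   // optical- or video-see-through combination
}

// One frame of the loop: track, retrieve, render, fuse.
function mrFrame(sensor: PoseSensor, gis: GisStore, renderer: Renderer, out: Compositor): void {
  const pose = sensor.currentPose();
  const scene = gis.query(pose.position, 100);   // models within 100 m of the user
  const overlay = renderer.render(scene, pose);
  out.fuse(overlay);
}
```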


So far, no standardized way for the creation and sharing of MR content has been established, but the development of MR systems for practical applications requires scalable approaches to content management. A separation of the content and environment descriptions from the application implementation is therefore as desirable in MR systems as in other domains. In this paper we describe an approach that aims to provide guidance for developers of MR applications on all levels, from abstract standards to operational development.

2. Hierarchy of standards, models & processes
Software engineering and usability engineering address the problems faced by developers of software at different levels. At the most abstract level, standards define an overarching framework to organize development activities. Standards are rarely applied directly but provide a framework to ensure compatibility and consistency. At a more concrete level, the definitions of standards can be expressed as models that describe systematic approaches to specific development challenges. Models are adapted and tailored to the organizational conditions of the development project, e.g. existing processes and organizational constraints. At the operational level these models are mapped to development activities within a development process. This instantiation of a model, fitted to the organizational constraints, is called a software development process (for SE models) or a usability lifecycle (for UE models). In order to achieve alignment between SE and UE approaches, all three levels have to be considered to ensure that integration points and suggestions for collaboration meet the objectives of both domains and that the intentions behind the standards, models and operational processes are maintained.

3. Integration points for standards and models

To address the requirements of MR development we have analyzed the integration of user centred design activities into software engineering processes. We started with SE and UE activities at the level of standards and analyzed the processes, activities, tasks and artefacts in established standards from the two domains, the SE standard ISO/IEC 12207 (2002) and the UE standard ISO 13407 (1999).

ISO/IEC 12207 defines the process of software development as a set of 11 activities: Requirements Elicitation, System Requirements Analysis, Software Requirements Analysis, System Architecture Design, Software Design, Software Construction, Software Integration, Software Testing, System Integration, System Testing and Software Installation. It also defines specific development tasks and provides details on the generated output. ISO 13407 defines four activities of human-centred design that should take place during system development. These activities are labelled "context of use", "user requirements", "produce design solutions" and "evaluation of use". ISO 13407 also describes the results to be generated by the activities.

We have examined each activity in the two standards and identified similarities in terms of their characteristics, objectives and procedures. Based on these similarities we consolidated activities into groups. These "common activities" are part of both disciplines and are potential integration points. An example is "Requirement Analysis". From an SE point of view (as in ISO/IEC 12207) the corresponding activity is called "Requirement Elicitation". From the UE standpoint, specifically ISO 13407, the corresponding activities are "context of use" and "user requirements". The result is five "common activities": Requirement Analysis, Software Specification, Software Design and Implementation, Software Validation and Evaluation, which represent the process of development from both perspectives. However, similar activities do not ensure that they are performed in similar ways in SE and UE practice. We therefore used these five "common activities" as a framework to structure a detailed analysis at the next level in the hierarchy, the level of process models.

To identify which UE aspects of the "common activities" are already implemented in SE models, we performed a gap analysis with selected SE models. This analysis considered four commonly used SE models: the Linear Sequential Model (Royce, 1987), Evolutionary Development (McCracken and Jackson, 1982), the Spiral Model (Boehm, 1988) and the V-Model (V-Model, 2009). The overall goal of this study was to identify integration points at the level of process models. The coverage of the ISO 13407 activities in the different SE models (Linear Sequential Model 13%, Evolutionary Development 39%, Spiral Model 52% and V-Model 78%) provides a rough indication of their respective suitability to develop usable software, but obviously not all UE activities are of equal importance, and inclusion alone does not allow conclusions to be drawn about the achieved quality. While the V-Model has better coverage of UE activities than the other models and thus might be regarded as basically suitable to produce usable products, the other three SE models must be significantly extended to address all UE requirements. With the results of the gap analysis we then proceeded to structured interviews with six experts in the field of UE. The experts were volunteers from the ISO committee TC 159 Ergonomics/SC4 as well as industry experts.

3.1 Recommendations

The interviews resulted in 63 recommendations. To validate these, a questionnaire with all recommendations was distributed to 12 usability experts (members of working groups on the integration of SE and UE as well as industry experts), who were asked to rate correctness and importance. Based on these results a ranking of recommendations was produced for the different usability activities. These rankings can be used as checklists to evaluate whether a development process addresses the requirements of usability engineering, and they indicate the need for extensions if shortcomings are identified. Exemplary recommendations include (with relevance values between 0 = irrelevant and 10 = absolutely critical):

Context of Use:
• The results of the analysis (documents) define concrete properties that have to be achieved and can be measured as quality criteria. (Relevance 6.1)

User Requirements:
• It is clearly documented which activities can be performed with the system from the user's perspective. The required interactions with the system are specified. (Relevance 8.5)

Produce Design Solutions:
• The actual design solution is produced under the joint influence of customer and user requirements. (Relevance 8.1)

Evaluation of Use:




• The resulting system is validated against the user requirements. (Relevance 7.7)



• The evaluation is conducted with real, representative users. (Relevance 6.9)
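To illustrate how such a ranked checklist can be applied in practice, the sketch below computes a simple coverage score for a development process, in the spirit of the coverage percentages reported for the gap analysis above. The data structure and the relevance weighting are our own assumptions and are not part of the validated recommendations.

```typescript
// Hypothetical data structure: a recommendation with its rated relevance (0..10).
interface Recommendation {
  activity: string;    // e.g. "Context of Use"
  text: string;        // the recommendation itself
  relevance: number;   // 0 = irrelevant .. 10 = absolutely critical
}

// Unweighted coverage, analogous to the percentages reported for the gap analysis
// (e.g. Linear Sequential Model 13%, V-Model 78%).
function coverage(checklist: Recommendation[], satisfied: Set<string>): number {
  if (checklist.length === 0) return 0;
  const hits = checklist.filter(r => satisfied.has(r.text)).length;
  return (100 * hits) / checklist.length;
}

// Relevance-weighted variant: gaps in highly rated recommendations weigh more.
function weightedCoverage(checklist: Recommendation[], satisfied: Set<string>): number {
  const total = checklist.reduce((sum, r) => sum + r.relevance, 0);
  if (total === 0) return 0;
  const hits = checklist
    .filter(r => satisfied.has(r.text))
    .reduce((sum, r) => sum + r.relevance, 0);
  return (100 * hits) / total;
}
```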

To provide concrete support for developers at the operational process level (where an MR system is implemented), we further analyzed the specific requirements of MR applications, as described in the following section.

4. Process level MR development support
Experience with the development of MR systems for a variety of domains (e.g. illustration, entertainment, museums and navigation) shows that changing base technologies make it hard to keep MR systems in a working state and prevent reuse across applications (Paelke, 2008). A systematic approach is required that simplifies the reuse of MR content and MR techniques across different hardware platforms and applications. A development approach must therefore support good software engineering practices at the level of operational processes and implementation activities, in addition to the consideration of good software and usability engineering practices at the standard and process model levels. The problem of maintaining MR applications in the face of constantly changing base technologies can be traced to a tight integration between system components. While libraries and toolkits are available to support MR application developers, the resulting system structures are often tied closely to the base technologies, requiring major effort if the same content or technique is to be applied in another MR system. As discussed in the motivation section, an MR system requires a spatial model of both the virtual elements presented in the interface and the real-world surroundings. Currently, there is no standardized way to create, exchange and maintain these models.

4.1 The MVCE model
In the development of graphical user interfaces (GUIs), architectural models have been used to achieve a separation of concerns in order to facilitate the reuse of individual components. The best known architectural model for GUIs is the Model-View-Controller model, which was introduced by Reenskaug (1979). The MVC model was highly successful in promoting the reuse and exchange of presentation elements and interaction techniques independently from the underlying application and is still the de-facto standard (with several variations) in current GUI toolkits.

The model-view-controller pattern (MVC) structures user interfaces into three components. The key benefit of this decomposition is that the visual and interaction aspects of a user interface can be isolated from the underlying application. The model (M) represents the application data and encapsulates the functionality of the application. The view (V) encapsulates the visual elements of the user interface, e.g. button widgets, text fields or visualizations. The controller (C) handles the interaction details, e.g. mouse events or text input, and communicates necessary actions to the model. The MVC pattern enables modular designs in which changes in one component are not coupled to other aspects. It also makes it possible to provide multiple views and controllers for the same application/model. This is a desirable property, especially for MR interfaces that rely on specialized hardware that may not be available in all situations.

While MR applications can be (and have been) implemented according to the MVC model, the impact of the real environment introduces additional problems that are not well handled by the unmodified MVC model. The key distinction between conventional GUIs and MR interfaces is the impact of the real environment. In standard GUIs the complete visual presentation that the user sees is provided by the view component, which determines the presentation and depends only on data from the model. In contrast, MR interfaces combine the real environment with additional information provided by the application. This necessitates the use of both an environment model of the real world that links the MR application to the real world (in a simple case by relating content elements to spatial locations in the environment) and an augmentation model (that describes the elements that are to be integrated into the real environment). While the augmentation model has a close correspondence to the model in MVC and can be manipulated either by the user through interaction techniques or by the business logic of the software, the same is not true for the environment model. In general, changes in the environment cannot be controlled through interaction techniques or the software. In some cases changes in the environment are controlled by the user through physical interaction with elements in the environment, e.g. in systems that employ tangible interaction or in maintenance applications. However, in most cases the real-world environment is not under the control of the user and must be integrated "as is" into the MR application. Because an MR application requires information about objects and spaces in the real environment, sensors must acquire this information. While sensor information could be handled as controller events in the MVC model, this can lead to complex and obscure models.

We have therefore introduced an additional environment (E) component, which captures the "real world" model of the application (see Figure 3). A perfect real-world model would contain all information about the real environment at the time of query. In practice both the amount of information required by the application and the amount accessible through sensors is limited.

Both the model (M) and the view (V) can query the environment (E). This makes it possible to capture spatial associations (e.g. the common MR scenario in which augmentation information is fixed to a location or object) as well as control relations (e.g. objects in the environment that are influenced by the application).
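A minimal sketch of this decomposition is given below; the interface names and query signatures are our own illustration and are not prescribed by MVCE itself.

```typescript
// Illustrative MVCE decomposition; names and signatures are a sketch, not a
// normative API. The environment (E) is read-only from the application's perspective.

type Vec3 = [number, number, number];

interface Augmentation { id: string; anchor: string; geometry: unknown; }

interface Model {
  augmentations(): Augmentation[];    // augmentation model (application data)
  apply(action: string): void;        // business logic invoked by the controller
}

interface Environment {
  poseOf(anchor: string): Vec3 | undefined;                        // spatial association of real objects
  depthAlong(origin: Vec3, direction: Vec3): number | undefined;   // e.g. occlusion query
}

interface View {
  // Renders the augmentation model registered to the real environment.
  render(model: Model, environment: Environment): void;
}

interface Controller {
  // Maps user input (touch, pointing, speech, ...) to model actions.
  handleInput(input: string, model: Model): void;
}
```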

Figure 3: The extended MVCE model

A key benefit of using the MVCE structure is that the components can be refined independently. The common scenarios of exchanging sensors (for acquiring information about the real environment) or display devices (for achieving the combination of real and virtual information) can now be addressed by modifying only the environment component or the view component, respectively. The following section describes an example of an MR system in which the MVCE model was applied.


5. Example: MR public participation
As an example of an MR application that was successfully restructured using the MVCE model, we present an MR application for public participation in city planning. The application was initially developed for the GeoScope MR input/output device (Paelke and Brenner, 2007). By refactoring the system according to the MVCE model it became possible to extend the system with additional functionality, which we exploited in the city planning application to add correct spatial occlusion. The MVCE system structure also aids in the adaptation of the application to other devices and the reuse of visualization and interaction techniques.


The main goal of the city planning application is to provide an intuitive public participation interface for the review and discussion of city planning scenarios. For this purpose the GeoScope can be installed at future building sites. The augmentation information maintained in a GIS database consists of the 3D CAD models of different proposals. Users can compare the visualizations of planned scenarios within the real view of the location and use the system to submit comments. The GeoScope was developed to make MR visualizations accessible to large groups of non-expert users. It can be installed on a tripod at arbitrary locations. Its main components are a high-resolution display with touch-screen facing the user and a camera (AVT Guppy with IEEE 1394 interface) that is mounted on the back of the display, facing into the environment. By augmenting the video stream with computer-generated graphics a video-see-through MR setup is realized. Similar to a telescope, the GeoScope can be turned in two degrees of freedom (pitch and yaw). The rotation angles are captured by mechanical sensors that provide precise and latency-free data.
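Because the GeoScope pose reduces to a fixed tripod position plus two rotation angles, the camera pose needed for registration can be derived directly from the sensed pitch and yaw. The sketch below shows one conventional way to do this; the axis conventions are our own assumption and not taken from the GeoScope implementation.

```typescript
type Vec3 = [number, number, number];

// Sketch: viewing direction of the tripod-mounted camera from the mechanically
// sensed angles. Assumed convention: yaw about the vertical axis, pitch about
// the horizontal axis, angles in radians, forward = north at yaw = pitch = 0.
function viewDirection(yaw: number, pitch: number): Vec3 {
  const cp = Math.cos(pitch);
  return [
    Math.sin(yaw) * cp,   // east
    Math.cos(yaw) * cp,   // north
    Math.sin(pitch),      // up
  ];
}

// The full camera pose combines the static tripod position (e.g. surveyed, or
// derived from the laser-scan registration discussed later in this section)
// with this direction; only the direction changes at runtime.
interface GeoScopePose {
  position: Vec3;     // fixed while the device is installed
  direction: Vec3;    // viewDirection(yaw, pitch), updated every frame
}
```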

One limitation of the original city planning application was the use of a static environment model, which made the correct handling of occlusion between real objects and virtual objects impossible. Especially for applications involving architecture, the spatially correct blending of virtual information with the real objects and buildings in the environment is paramount.

Figure 4 illustrates a simple example in which the real-world environment observable from the user's point of view consists of two objects: a tree in the foreground and a building in the background. The augmentation information consists of a 3D model of a new building that is spatially located between the two real objects. To render a correct display the system must show the video of the real environment in sectors A and C and the virtual model in sector B. To achieve this the environment model must contain current information about the distance of real-world objects from the point of view.

Figure 4: The MR occlusion problem

Figure 5 illustrates how the city planning application was restructured according to the MVCE model. The MVCE structure makes it easy to switch between different environment models. In the city planning application we were able to replace the static model with data acquired by a laser scanner at the time of use. For "on-the-fly" acquisition of the environment model we employ a Riegl LMS Z360I laser scanner, which has a measurement rate of 8,000 points per second, a range of 200 m and a single-point accuracy of 12 mm (Riegl Website, 2009).
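The per-pixel decision that a correct display requires can be sketched as follows: where the scanned environment is closer to the viewpoint than the virtual geometry, the camera pixel is kept (sectors A and C in Figure 4), otherwise the virtual building is drawn (sector B). Buffer layout and names are illustrative assumptions on our part.

```typescript
// Illustrative occlusion-aware compositing for a video-see-through setup.
// envDepth: per-pixel distance to the nearest real object (from the laser scan),
// virtDepth/virtColor: depth and colour of the rendered virtual model,
// video: the current camera frame. All colour buffers are RGBA.
function composite(
  video: Uint8ClampedArray,
  virtColor: Uint8ClampedArray,
  envDepth: Float32Array,
  virtDepth: Float32Array,
  out: Uint8ClampedArray
): void {
  for (let p = 0; p < envDepth.length; p++) {
    // Keep the video pixel if the real geometry is in front of the virtual
    // geometry (or if no virtual geometry covers this pixel at all).
    const realWins = virtDepth[p] === Infinity || envDepth[p] < virtDepth[p];
    const src = realWins ? video : virtColor;
    for (let c = 0; c < 4; c++) {
      out[4 * p + c] = src[4 * p + c];
    }
  }
}
```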

The flexibility of the MVCE structure also enables the integration of additional functionality. As an example, we can also use the laser data for the precise positioning of the GeoScope, if suitable spatial references (e.g. a cadastral map) are available.

Figure 5: Operation of the GeoScope with correct occlusion as a MVCE system


6. Conclusions and outlook

The development of MR applications is subject to significant constraints with regard to available base technologies and existing design expertise. Developing usable applications that realize the potential of MR interfaces requires the consideration of usability engineering activities throughout the development process. At the same time, the implementation of MR systems for real applications requires the use of software engineering practices and processes. Our analysis has identified a number of potential integration points where user centred design activities can be integrated into software engineering processes. A ranked list of recommendations can be used to evaluate and extend existing software development processes with essential usability engineering activities. Central benefits of the resulting integrated processes compared to ad-hoc solutions include a more manageable process, an improved focus on usability and an increased likelihood of practical use, since the UE activities are embedded into established workflows.

We have identified the close coupling with a real-world environment that is beyond the control of both the designer and the user, and the reliance on frequently changing base technologies, as specific problems in the development of MR applications. To address these problems according to the classical software engineering principle of "separation of concerns", we have extended the MVC model with an additional environment component. We have successfully employed the extended model in the redesign of an MR city planning application and, since then, also in the development of an MR user interface for the control of an unmanned flying sensor platform.

The MVCE model appears to be a useful architecture and helps to provide support for development and reuse. While we do not claim that MVCE is a general solution to MR architecture problems, we think that the close relation to MVC indicates that existing MR applications could be refactored according to the model in order to simplify exchange and reuse. In the future we aim to gather more experience with the MVCE model. While the approach reported here addresses common problems in the development of MR applications, especially from the perspective of reuse, there is still much room for improvement. An area that is left largely unaddressed is the reuse of MR augmentation content between applications. A way to describe such models in a standardized and interchangeable format is lacking. Advances towards such standardized descriptions would enable the reuse of content between applications and are a prerequisite for the creation of viable MR content creation tools. In the future we plan to examine this aspect of content reuse in MR applications in combination with the system development aspects of software reuse discussed in this paper.

7. Acknowledgments
We would like to thank our colleagues Christian Geiger (FH Düsseldorf) and Jörg Stöcklein (Universität Paderborn) for their contributions in deriving and experimenting with the MVCE architecture, Claus Brenner (Leibniz Universität Hannover) for his work on the GeoScope, and Markus Düchting (C-LAB, Universität Paderborn) and Dirk Zimmermann (Deutsche Telekom) for their contributions to the process analysis.

8. References
[1] Azuma, R.; Baillot, Y.; Behringer, R.; Feiner, S.; Julier, S. and MacIntyre, B. (2001): Recent Advances in Augmented Reality. In: IEEE Computer Graphics and Applications, Vol. 21, No. 6, November/December 2001.
[2] Boehm, B. (1988): A Spiral Model of Software Development and Enhancement. In: IEEE Computer, Vol. 21, No. 5, 1988.
[3] ISO/IEC 12207 (2002): Information technology - Software life cycle processes. Amendment 1, 2002-05-01. ISO copyright office, Switzerland, 2002.
[4] ISO 13407 (1999): DIN EN ISO 13407. Human-centred design processes for interactive systems. CEN - European Committee for Standardization, Brussels, Belgium, 1999.
[5] McCracken, D.D. and Jackson, M.A. (1982): Life-Cycle Concept Considered Harmful. ACM Software Engineering Notes, No. 4, 1982.
[6] Milgram, P. and Kishino, F. (1994): A Taxonomy of Mixed Reality Visual Displays. In: IEICE Transactions on Information Systems, Vol. E77-D (12), December 1994.
[7] Nebe, K. (2009): Integration von Usability Engineering und Software Engineering. Dissertation (in German), University of Paderborn, Paderborn, Germany, 2009 (in print).
[8] Paelke, V. (2008): Spatial Content Models and UIDLs for Mixed Reality Systems. In: ACM CHI 2008 Extended Abstracts, Workshop User Interface Description Languages for Next Generation User Interfaces, Florence, Italy, 2008.
[9] Paelke, V. and Brenner, C. (2007): Development of a Mixed Reality Device for Interactive On-Site Geo-visualization. In: Proc. Simulation und Visualisierung 2007, Magdeburg, Germany, March 2007.
[10] Reenskaug, T. (1979): Thing-Model-View-Editor - An Example from a Planning System. Xerox PARC technical note, May 1979.
[11] Reitmayr, G. and Schmalstieg, D. (2005): OpenTracker: A flexible software design for three-dimensional interaction. In: Virtual Reality, Vol. 9, No. 1, Springer, December 2005.
[12] Riegl Website (2009): http://www.riegl.co.at/terrestrial_scanners/lmsz360i_/360i_all.htm; accessed 30 January 2009.
[13] Royce, W. W. (1987): Managing the Development of Large Software Systems: Concepts and Techniques. In: Proc. 9th Int. Conference on Software Engineering, IEEE, Monterey, California, 1987.
[14] Schmalstieg, D.; Fuhrmann, A.; Hesina, G.; Szalavari, Z.; Encarnacao, L.M.; Gervautz, M. and Purgathofer, W. (2002): The Studierstube augmented reality project. In: Presence: Teleoperators and Virtual Environments, Vol. 11, No. 1, February 2002.
[15] V-Model (2009): Website V-Model XT. http://www.cio.bund.de/cln_093/DE/IT-Methoden/VModell_XT/v-modell_xt_node.html; accessed 30 January 2009.
[16] Welch, G. and Foxlin, E. (2002): Motion Tracking: No Silver Bullet, but a Respectable Arsenal. In: IEEE Computer Graphics and Applications, Vol. 22, No. 6, 2002.
