From Pixels to Bytes: Evolutionary Scenario Based Design with Video

Han Xu
Fakultät für Informatik
Technische Universität München
Munich, Germany
han.xu@tum.de

Oliver Creighton, Naoufel Boulila
Siemens Corporate Technology
Munich, Germany
{oliver.creighton, naoufel.boulila}@siemens.com

Bernd Bruegge
Fakultät für Informatik
Technische Universität München
Munich, Germany
[email protected]

ABSTRACT

Change and user involvement are two major challenges in agile software projects. As both usually arise spontaneously, reaction to change, validation and communication are expected to happen continuously throughout the project lifecycle. We propose Evolutionary Scenario Based Design, which employs video to fulfill this goal, and present a new idea that supports video production using SecondLife-like virtual world technology.

Categories and Subject Descriptors
D.2.1 [Software Engineering]: Requirements/Specifications

General Terms
Design, Human Factors

Keywords
Requirements Engineering; Scenario Based Design; Video Prototyping; Virtual World

1. INTRODUCTION
Software development projects are subject to constant change, and the cost of change late in the process is extremely high [1]. Miscommunication between developers and end users is another common contributing factor to unsuccessful software projects. Software Cinema and Continuous Demonstration are techniques introduced for the early stages of software development to address these problems. Software Cinema extends video prototyping techniques and provides a toolkit called Xrave to support video-based requirements engineering [2, 3]. Continuous Demonstration calls for end-user involvement from the early stages of the software lifecycle, which requires continuous system demonstrations, and presents an approach for automatically generating UI interaction videos that act as placeholders for not-yet-existing components [4]. Both techniques rely on video as the essential artifact in the process: Software Cinema uses video to address communication issues in requirements engineering, and Continuous Demonstration employs automatically generated demonstration videos for UI interaction validation.

However, changes and new requirements can arise at any point in the project lifecycle, which leads to situations where unimplemented requirements (technical or non-technical) coexist with implemented parts of the system. To cope with spontaneous change and user involvement, we argue that video, which has proven an effective means of communication and validation in Software Cinema and Continuous Demonstration, should be given due consideration throughout the project lifecycle. We therefore propose Evolutionary Scenario Based Design, which combines the philosophies of Software Cinema and Continuous Demonstration and advocates using scenario videos to represent the unimplemented parts of the system, both for requirements elicitation and for system demonstration, throughout the project lifecycle. To satisfy the resulting ubiquitous need for scenario videos, we present a new idea that supports easy scenario video production and modification using virtual world technology.

2. BACKGROUND
2.1 Software Cinema
Communication among developers and customers is one of the major challenges in Requirements Engineering: they often have different technical backgrounds, use different sets of terminology and look at things from different points of view. A scenario is a concrete story about use [5] and is regarded as one of the best ways to foster mutual understanding among stakeholders during requirements development [6]. To eliminate certain classes of misunderstandings that easily arise with textual scenario descriptions, Creighton proposed Software Cinema as a technique for video-based requirements engineering [2, 3, 6]; it follows the idea of scenario-based design [5] and uses video both as the medium for describing scenarios and as a communication mechanism among stakeholders. Compared to textual scenario descriptions, video-based scenarios have the advantage of being lucid and intuitive. In the Software Cinema technique, video is an indispensable artifact in each of its three phases: Preproduction, End-user session and Postproduction [2]. By building on video, Software Cinema effectively addresses the user involvement and communication issues in requirements engineering.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SIGSOFT'12/FSE-20, November 11-16 2012, Cary, North Carolina, USA Copyright 2012 ACM 978-1-4503-1614-9/12/11...$15.00.

2.2 Continuous Demonstration
Although change is inevitable in software projects, we can respond to it positively. Agile software development emphasizes fast iteration and continuous customer involvement in responding to change: small, quick iterations facilitate fast feedback and early problem identification, while continuous customer involvement ensures that each iteration gets closer to the real goal. In the same vein, Stangl proposed Continuous Demonstration to bring system demonstrations as early into the development lifecycle as straight after the initial requirements elicitation [4]. However, the runnable code on which part of a system demonstration relies may not yet be available at that time. As a stand-in for the missing parts of the system, video can show intuitively how the system works, which contributes to effective discussion among stakeholders. To support this idea, Stangl proposed a tool that automatically generates demonstration videos of UI interactions from UI sketches and the relevant use cases/scenarios [7]. With the aid of video, Continuous Demonstration effectively addresses the change and validation issues in software projects, especially for UI interaction.

3. EVOLUTIONARY SCENARIO BASED DESIGN WITH VIDEO
As agile development tends toward vertical rather than horizontal slicing, changes and user involvement can happen at any point in the project lifecycle. As a result, unimplemented parts of the system often coexist with implemented parts. Based on this observation, we propose a combination of the philosophies of Software Cinema and Continuous Demonstration to cope with the challenges of frequent change and user involvement. The unified approach is called Evolutionary Scenario Based Design; its idea is that validation should take place whenever new changes occur, to make sure the project is approaching the real goal. For implemented parts we use system demonstrations, and for unimplemented parts we advocate using scenario videos as placeholders. This ensures that the total user experience is always validated as a whole with the stakeholders after any change. As depicted in Figure 1, a software project starts with only scenario videos describing new requirements (a stream of pixels) and iteratively evolves into an implementation (a stream of bytes). It is from this From Pixels to Bytes metaphor that the name Evolutionary Scenario Based Design derives.

Figure 1. From Pixels to Bytes: Evolutionary Scenario Based Design

Although using video to represent scenarios is not new, how to create and update scenario videos easily and quickly is a central problem in Evolutionary Scenario Based Design. To meet the ubiquitous need for scenario videos in our method, we present a new idea that supports easy scenario video production and modification using virtual world technology. Instead of real-world video, which requires real-world settings, actors and cameras, we produce scenario videos on a SecondLife-like virtual world platform. With virtual settings, virtual actors and virtual cameras, this is an easy, flexible and economical way to make cartoon-style scenario videos and can thus better support Evolutionary Scenario Based Design.

3.1 Problem Statement and Rationale
Although video is an efficient way to describe scenarios compared to textual descriptions and can be used to demonstrate the non-existing parts of a system, the video production work itself raises an economic viability issue [8, 9]. While video production is affordable for multi-year, large projects, it can be hard for medium-sized projects to adopt, because one-off video production may be too expensive for them. Moreover, the video team is often formed casually from members of the development team and uses amateur actors from the same organization; as these actors do not work for the video team full-time, their presence cannot be guaranteed. Sometimes the scenario is also expected to take place in a restricted or dangerous area (e.g. an airport), which the video team cannot access. These factors make the situation even more difficult.

While the production cost and the presence of actors are inherent characteristics of real-world video production and can hardly be changed given the current state of the art, the situation could be much alleviated if the effort spent on one video were not a one-off investment for a specific project but could easily be reused for other projects.

In work by Creighton [2], the flexibility of video is analyzed and classified into seven levels, called Plasticity Classes of Video (see Table 1). According to this classification, ordinary video is very limited in terms of plasticity (only Class-0 by definition). The higher the Plasticity Class of a video, the more flexibility it gains, and thus the easier it is to reuse the video among similar scenarios. The next section discusses how virtual world technology helps generate scenario videos of higher plasticity.

Table 1. Plasticity Classes of Video

Class     Description
Class-6   Switch Point of View
Class-5   Modify Object Interaction
Class-4   Modify Object Position
Class-3   Replace Objects
Class-2   Change Basic Object Properties
Class-1   Annotate Objects
Class-0   Set Time-based Markers

3.2 Using Virtual Worlds for Scenario Videos
For animation production, people usually turn to programs like 3DMAX and Unity3D. While such general-purpose animation packages allow users to create virtually any animation on top of the provided building blocks and functionalities, the user must take care of all other necessary details; even a simple animation of a flying airplane may cost considerable effort. Popular virtual world platforms like SecondLife and OpenSim, on the other hand, let users create characters and objects in the virtual world much more easily. A created region (including the objects in it) can be shared with other users, who can then log in and control the characters via input devices such as the keyboard. The user can move a character freely in the virtual world without having to define an animation path beforehand as in standard animation software. Although virtual worlds may not be suited to producing every kind of animation, they are particularly suitable for animations with much interaction and rich environmental context, and scenario videos match exactly these characteristics.

There are different ways to generate scenario videos from the virtual world. Below we examine the possible ways and analyze their complexity and plasticity according to the Plasticity Classes in Table 1.

1. Immersive mode, in which the user controls a character (avatar) in the virtual world as in a role-playing game (RPG). In this mode the end user is given the opportunity to experience the context of the visionary scenario, and can even interact with the visionary system if its logic has been implemented in the virtual world environment beforehand. What is actually delivered in this mode is not a video but an immersive experience supported by the virtual world environment; all the necessary 3D objects, characters and the logic supporting events, actions and gestures must be created and stored in the virtual world in advance. Just as a vivid video is better than a long paragraph of text, a participatory experience should be better than a video. Given the flexibility the virtual world provides, the plasticity of the immersive experience obtained this way can reach Class-5 (Modify Object Interaction). As virtual world environments normally run in a client/server architecture and allow multiple clients to log in and view the "world" in freeview mode, Class-6 (Switch Point of View) can be achieved with a little more effort as well.

2. Movie viewing mode, in which characters act like robots and events happen automatically in a predefined sequence based on a specific scenario. In this mode the viewer does not control any character; he or she just sits at the computer and watches events unfold, like watching a movie [10]. What is delivered in this mode is a "movie" played inside the virtual world. To trigger and orchestrate events and actions based on the scenario, a controller program must be written in advance. This feature is called Non-Player Character (NPC) control and is supported by major virtual world platforms such as SecondLife and OpenSim. The plasticity of the movie viewing experience obtained this way can also reach Class-6 (Switch Point of View).

3. Marionette mode, in which members of the video team control the characters and events in the virtual world based on a specific scenario (like controlling marionettes in a theater), and the video clips for each scene are recorded with screen-recording software. The clips are then assembled into the full-length scenario video with video editing software. This mode provides no immersive experience; the virtual world is used purely as a tool for virtual production. As the actions and events in the virtual world are controlled by human users, most of the scripts needed to automatically trigger and orchestrate events, actions and gestures can be eliminated. Although a scenario video produced in this mode cannot itself be modified on the fly, the overhead of modifying it through renewed virtual filming is negligible compared to real-world filming, so it too can be considered to reach Class-6 (Switch Point of View).

3.3 Benefits
Given the digital and computable nature of the elements (3D objects and scripts) in the virtual world, we receive the following benefits compared to real-world video production (with a little extra effort, they also hold for the marionette mode):

1. Flexibility. As the properties of 3D objects and scripts can be modified on the fly in the virtual world, variation and adaptation become relatively easy as scenarios change.

2. Expressiveness. The virtual world provides a cheap way to realize any special effect you need. For example, specific parts can be highlighted or exaggerated, and it is possible to show abstract ideas such as the underlying mechanism of a system.

3. Reusability. The virtual world facilitates reuse of an existing story framework (the objects and scripts). If needed, minor changes can be made to the properties of actors and environmental settings, for example a different gender or look for a character, different weather (snow, rain) or a different venue (school, factory, company).

4. Virtual production. Unlike real-world filming, which requires filming on location, the virtual world supports virtual video production using only virtual actors, virtual environmental settings and virtual cameras. There is thus no need to worry about an actor's actual presence, or about access to hazardous, restricted (e.g. airports and military zones) or non-existing areas (e.g. buildings yet to be built).

5. Distributed work. As a virtual world usually runs in a client/server architecture, it facilitates distributed, collaborative and asynchronous teamwork; the virtual world can be built, used and reused by teams in different places.

6. Object identifiability. Finally, the characters and objects in the virtual world can be easily and accurately identified and annotated. This feature is particularly helpful for video analysis and model generation.

Additional benefits can be obtained if the virtual world platform is open source (e.g. OpenSim): it is free of charge, has fewer non-technical limitations than a commercial platform, and allows feature customization when necessary.

3.4 Challenges and Solutions
Despite these benefits, producing a cartoon-like scenario video from a virtual world can still be troublesome for people new to virtual world environments, although veteran virtual world users will find it easy and comfortable. Other factors may also keep users from feeling comfortable with this approach. Below we review these challenges and the solutions available today. We intentionally distinguish between low-end and high-end solutions in terms of cost effectiveness: a low-end solution mostly solves the known challenge at reasonable cost and effort, which is enough for the needs of small or medium projects; a high-end solution shows that mature and scalable answers to the challenge exist, at much greater expense, and can cater to special needs.

1. Creating 3D object models can be time-consuming. Low-end solution(s): learn and use professional 3D modeling software (e.g. 3DMax, Maya3D), or make full use of online 3D libraries (e.g. Google SketchUp Warehouse). High-end solution(s): use photo-to-3D systems (e.g. Insight3d), or consider various 3D reconstruction systems.

2. Creating 3D character gesture/motion animation is hard. Low-end solution(s): use gesture design software (e.g. DAZ3D). High-end solution(s): consider motion capture systems.

3. Virtual worlds provide no object-to-object messaging mechanism. Low-end solution(s): use the private/public chat message functionality as a workaround. High-end solution(s): modify the source code (for example in OpenSim) and implement a proprietary messaging mechanism.

4. Camera control in virtual worlds is poor. Low-end solution(s): practice keyboard control, or consider using a joystick to control the camera. High-end solution(s): write ad-hoc scripts to realize a camera auto-follow feature.

As technology evolves, today's high-end solutions will become tomorrow's low-end ones, and the virtual world community is still growing, so the challenges mentioned here can be expected to be solved ever better in the future.
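To make the low-end workaround for object-to-object messaging concrete, the following Python sketch models the pattern: objects that cannot reference each other directly coordinate by "speaking" and "listening" on a shared chat channel, the way LSL scripts do with llSay/llListen. The code is an illustrative, platform-agnostic simulation; class and object names (ChatChannelBus, Door, the sensor) are hypothetical and stand in for scripted props in the virtual world.

```python
# Hedged sketch of the "chat channel as message bus" workaround:
# props coordinate via a shared channel instead of direct references.

from collections import defaultdict
from typing import Callable

class ChatChannelBus:
    """Routes 'spoken' messages to every object listening on a channel."""
    def __init__(self) -> None:
        self._listeners = defaultdict(list)  # channel -> [callback]

    def listen(self, channel: int, callback: Callable[[str, str], None]) -> None:
        self._listeners[channel].append(callback)

    def say(self, channel: int, sender: str, message: str) -> None:
        # Every listener on the channel overhears the message.
        for callback in self._listeners[channel]:
            callback(sender, message)

class Door:
    """A scenario prop that reacts to commands overheard on a channel."""
    def __init__(self, bus: ChatChannelBus, channel: int) -> None:
        self.open = False
        bus.listen(channel, self._on_message)

    def _on_message(self, sender: str, message: str) -> None:
        if message == "open":
            self.open = True

CONTROL_CHANNEL = -42  # negative channels are conventionally script-only

bus = ChatChannelBus()
door = Door(bus, CONTROL_CHANNEL)
# A proximity-sensor prop announces the avatar's arrival:
bus.say(CONTROL_CHANNEL, "sensor", "open")
print(door.open)  # prints True: the door reacted without a direct reference
```

The design choice mirrors the in-world situation: because the channel decouples sender and receiver, new props can join a scenario without modifying existing scripts, which is what makes the workaround attractive for evolving scenario videos.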

4. CASE STUDY – DOLLI5 PROJECT
The best way to examine whether a new technique is truly effective is to put it into practice. To this end, we took one of our ongoing projects and applied the virtual world supported scenario video production technique. The project, called DOLLI5, is a follow-up to previous projects carried out in cooperation with Munich Airport. In the previous projects, real-world filming was used and several difficulties were reported, for example:

1. Any minor change to the scenario meant that the film team had to rework the relevant scenes, and existing scenario videos could hardly be reused even for another, very similar scenario.

2. When a scenario was supposed to happen under a specific condition (e.g. snowy weather or an airplane landing), the team had to wait until that condition actually occurred.

3. The geographical distance made it inconvenient to transport everything, including the filming equipment and the actors and actresses, to the airport; to make things worse, the airport had a high level of security and the film team was not allowed to enter some restricted areas.

Figure 2. Scenario Video Produced Using OpenSim

Our new solution was based on an open-source virtual world platform called OpenSim [11], technically an open-source counterpart of SecondLife (a commercial platform). Among the three modes (see Section 3.2), we adopted the marionette mode because it requires the least script writing. On the OpenSim platform, the virtual world was built up by creating 3D objects and writing Linden-compatible scripts. As our requirements on video quality were not that high, we did not use 3D models from commercial libraries; instead we used models available from free online 3D libraries such as Google SketchUp Warehouse. Figure 2 shows two screenshots from the VIP scenario of the DOLLI5 project. The new solution proved helpful in solving the above-mentioned problems.

In this experimental trial, we learned the following lessons, which could be valuable for future projects:

1. A storyboard is important: a well-designed storyboard from an experienced "director" enables fast production of the movie.

2. It is better for each virtual actor (character) or virtual camera (freeview character) to be controlled by a different person.

3. The video must be story-driven, not a diary of events; for example, a ten-minute walk should be compressed into three seconds.

4. Involving the developers and end-users/customers in the video production helps them understand the scenario better.

5. Make full use of film language, and employ text or audio to express ideas when necessary.

5. FUTURE WORK AND CONCLUSION
In this paper we have presented Evolutionary Scenario Based Design, which addresses the constant change and user involvement challenges in agile projects by using scenario videos for continuous validation and effective communication. To support the method, we proposed a new idea: producing scenario videos on a virtual world platform. Our experiment with the marionette mode showed that creating scenario videos this way is cost-efficient and offers more flexibility and convenience. In future work, we will explore further practical issues in using video for Evolutionary Scenario Based Design, among them different methods for scenario video creation and their applicable areas, detailed guidance on the design and evaluation of video-based scenarios, and the discovery and definition of requirements for systems developed using the video-based scenario approach.

6. REFERENCES
[1] B. Bruegge and A. H. Dutoit. Object-Oriented Software Engineering: Using UML, Patterns and Java (3rd ed.). Pearson, 2010.
[2] O. Creighton. Software Cinema: Employing Digital Video in Requirements Engineering. PhD thesis, TU München, 2006.
[3] O. Creighton, M. Ott and B. Bruegge. Software Cinema: Video-based Requirements Engineering. In RE 2006.
[4] H. Stangl and O. Creighton. Continuous Demonstration. In MERE 2011.
[5] J. M. Carroll. Making Use: Scenario-Based Design of Human-Computer Interactions. MIT Press, Cambridge, 2000.
[6] B. Bruegge, O. Creighton, M. Reiss and H. Stangl. Applying a Video-based Requirements Engineering Technique to an Airport Scenario. In MERE 2008.
[7] H. Stangl. SCRIPT: A Framework for Scenario-Driven Prototyping. PhD thesis, TU München, 2012.
[8] G. Broll, H. Hussmann, E. Rukzio and R. Wimmer. Using Video Clips to Support Requirements Elicitation in Focus Groups: An Experience Report. In MERE 2007.
[9] O. Brill, K. Schneider and E. Knauss. Videos vs. Use Cases: Can Videos Capture More Requirements under Time Pressure? In REFSQ 2010.
[10] N. Boulila, O. Creighton, G. Markov, S. Russell and R. Blechner. Presenting a Day in the Life of Video-based Requirements Engineering. In Onward! 2011.
[11] The OpenSimulator Project. http://opensimulator.org/.
