Proceedings of the 37th Hawaii International Conference on System Sciences - 2004
Designing Mobile Information Services: User Requirements Elicitation with GSS, Design and Application of a Repeatable Process

Mariëlle den Hengst, Delft University of Technology, The Netherlands, [email protected]
Elisabeth van de Kar, Delft University of Technology, The Netherlands, [email protected]
Jaco Appelman, Delft University of Technology, The Netherlands, [email protected]
Abstract
The main challenge in the first phase of designing mobile services is eliciting user requirements. We propose a repeatable process for eliciting user requirements based on the literature on requirements engineering and group support systems. We applied the repeatable process in three sessions to elicit user requirements for a mobile information service on a UMTS testbed. The sessions resulted in ideas for services that are highly valued by potential users and criteria for when they will or will not use the service.
1. Introduction
The primary measure of success of a designed system, be it a product, a software system, or a service system, is the degree to which it meets the purpose for which it was intended. Requirements engineering is the process of discovering that purpose by identifying stakeholders and their needs. Prior research showed the importance of requirements engineering [31], at least for software systems: more than half of the 8000 software projects studied were compromised from the outset by shortcomings in the requirements definition process. Improvements in requirements engineering could therefore substantially reduce the risks. Requirements engineering consists of several steps [25]:
• Eliciting requirements: understand the stakeholders and 'capture' their requirements.
• Modeling and analyzing requirements: construct abstract descriptions that are amenable to interpretation.
• Communicating requirements: transfer knowledge between stakeholders.
• Agreeing on requirements: negotiate with stakeholders about requirements.
• Evolving requirements: manage changes in requirements as the environment and the stakeholders' requirements change.
Requirements engineering is a highly collaborative process that involves many stakeholders: the customer who pays for it, the user who interacts with it, the domain expert, the developers who build it, sales, and marketing, to name but a few. Much research has been carried out on the use of Group Support Systems (GSS) to support and improve the process of requirements engineering [2], [14]. Two things stand out from this literature: the narrow scope of the proposed approaches and their lack of reproducibility. Collaboration between stakeholders during requirements engineering is most apparent during the communication of and agreement on requirements. Most GSS-based approaches focus on these steps only and leave out the other steps of eliciting, modeling and analyzing, and evolving requirements. Furthermore, in order to have a reproducible, repeatable GSS-based approach, three aspects should be addressed: the tool, the configuration, and the script [5]. These concepts are relatively new, and most research on the use of GSS for requirements engineering does not yet use them, explicitly or implicitly. So far, GSS use has been subject to multiple interpretations by various users [11].
In this paper, we present a repeatable GSS process for eliciting requirements, the first step in requirements engineering. Since eliciting requirements has not been the subject of much GSS-related research, we show in section 2 why we think this step can benefit from the use of GSS. Section 2 also provides background information on the newly emerging principles of collaboration engineering. In section 3, we combine the theories on requirements elicitation and collaboration engineering to come up with a repeatable process for user requirements elicitation. We evaluated this repeatable process in a case study. We used the process
to identify user requirements for a service on a UMTS testbed on the university campus. (UMTS, the Universal Mobile Telecommunications System, is a third-generation mobile network; see www.umts-forum.org. In 2003 several European mobile operators initiated pilots, but there are still uncertainties about the availability of UMTS services for the mass market; see also [17].) The case study is presented in section 4. We conclude with a discussion of future research directions in section 5.
2. Background

2.1 Requirements elicitation
For eliciting requirements, many different techniques have been used. These techniques can be summarized into several categories [25]:
• Traditional techniques include questionnaires, surveys, and interviews with individual stakeholders.
• Group elicitation techniques aim to foster stakeholder agreement and buy-in, while exploiting team dynamics to elicit a richer understanding of needs. Techniques in this category are brainstorming, focus groups, and RAD/JAD (Rapid Application Development / Joint Application Development) workshops.
• Prototyping can be used when there is a great deal of uncertainty or when early feedback from stakeholders is required [7].
• Model-driven techniques provide a specific model of the type of information to be gathered and use this model to drive the elicitation process.
• Cognitive techniques include a series of techniques originally developed for knowledge acquisition, such as thinking aloud and card sorting [30].
• Contextual techniques are an alternative to both traditional and cognitive techniques. These include techniques such as participant observation and conversation analysis [10].
Each technique has its advantages and disadvantages, and there is no single best technique. The combination of several techniques can be very fruitful: combining prototyping with cognitive techniques, for example, allows stakeholders to think aloud while actually experiencing the new system. When taking a closer look at the categories above, we can distinguish between group techniques and individual techniques. Prototyping, model-driven techniques, and cognitive and contextual techniques can all be used in a group setting
or in an individual setting. Although eliciting requirements does not require a group setting, it can benefit from one [19]: it is more time-efficient, it is more flexible (depending on the group, the steps can be adapted along the way), the output is easy to understand, richer information is gained, and information that would otherwise have stayed hidden emerges. Some of the disadvantages are a long preparation time, a group setting that is more difficult to manage, participants who dominate or, on the other hand, stay silent, the considerable time and effort needed to recruit participants, the time-consuming analysis of the results afterwards, and the fact that the participants are not really representative of a larger group [19]. Some of these disadvantages can be resolved by using Group Support Systems; subsection 2.2 pays attention to this.
The combination of techniques to be used and the steps to be taken to elicit requirements depend greatly on the situation at hand. The stakeholders involved are an important indicator for this. Requirements elicitation involves 'capturing' the requirements of different stakeholders, such as customers, users, and developers. Users play a central role in the elicitation process [25]. User requirements define what should be developed; the requirements of the other stakeholders mainly define the constraints on what has to be developed. In this paper, we focus on eliciting user requirements and leave the other stakeholders aside.
Users are not a homogeneous group; different user classes can usually be identified [28]. The selection of participants for the requirements elicitation process is therefore an important topic. The participants should be chosen carefully through purposive sampling (as opposed to the random sampling used for surveys), by selecting participants belonging to specific user groups [23]. The participants need to be reasonably knowledgeable about the topic and should be interested in talking about it. Ideally, the group should not include too many different types of people [21], although a certain amount of diversity may be useful to encourage contrasting opinions [6]. Participants need to be comfortable talking to each other and should share a similar background to encourage a common understanding of more detailed insights. Preferably, requirements elicitation groups consist of participants who are not too familiar with each other: the more diverse the views that are represented, the more reliable or robust the results become. Some familiarity between participants may help to 'break the ice', but over-familiarity may adversely affect the synergy of the group [23]. Furthermore, the participants do not take the role of designers: they do not have to find solutions, but provide suggestions for the designers.
It is often the case that users find it difficult to articulate their requirements and to be creative. It is
very unlikely that users can tell what they will want in the future. They can tell what they want now, and the requirements elicitation techniques should guide the users from what they want now towards future usage scenarios [15], [16]. The users should talk through their daily experiences to end up with usage scenarios. Another way to support the users in thinking about the future is to demonstrate a possible future [21], [27]. The demonstration allows the users to understand the anticipated future more clearly. A disadvantage of demonstration is that the users might develop unrealistic expectations, for example because of a lack of detail. Besides techniques to support users in articulating their requirements, techniques to increase the creativity of users can be used as well. A number of techniques could be taken into account, each with benefits and drawbacks [6], [20]: visual evaluation, product handling, mini-user trials, creating 3D forms, users drawing their ultimate product, and the nominal group technique, among others.
This means that for eliciting requirements we can benefit from a group setting, in which we have to choose carefully the users who will participate in the session and need techniques to support creativity. Collaboration engineering is a way to design a requirements elicitation process that meets these criteria.
2.2 Collaboration engineering
Group Support Systems (GSS) are designed to improve the efficiency and effectiveness of meetings by offering a variety of tools to assist the group in structuring activities, generating ideas, and improving group communication [24]. Previous studies on GSS have reported labor cost reductions averaging 50% and reductions of project calendar days averaging 90% [12], [26]. The success of GSS meetings is often attributed to specific GSS characteristics [33]: anonymity, parallel input, and group memory. These characteristics also resolve many of the disadvantages of using groups mentioned in the previous subsection.
• Anonymity: by being able to enter ideas and votes anonymously, silent or shy participants are encouraged to participate and other group members cannot dominate. Ideas are therefore judged on their merit, not on the personality or position of the person who submitted them.
• Parallel input: by generating ideas and communicating them in parallel, participants get equal time, which prevents production blocking, so that participants can spend more time on generating new ideas. Working in parallel also allows groups to generate more ideas; it is as if all people in the meeting are talking at the same time.
• Group memory: during an electronic meeting, all ideas and votes are stored electronically. Hence, little time is needed to produce meeting minutes, and previous meeting results are readily available in follow-up meetings. Moreover, the meeting record is untainted in nature and also describes the evolution of a group's position over time.
Although these factors are often reported as success factors, we can find some conflicting results when comparing the performance of GSSs in the literature [3]. To counter this problem, Briggs et al. [3] propose another unit of analysis, labeled thinkLets, as an approach to produce far more predictable and repeatable results. The GSS is at too high a level of abstraction, whereas thinkLets describe in detail how a certain activity can be realized. An added advantage is that thinkLets can inform the design of sessions [5]. ThinkLets can thus be used for collaboration engineering. Briggs et al. [3] have identified seven basic activities in a group process: divergence, convergence, organization, elaboration, abstraction, evaluation, and building consensus. These basic activities will be used in the next section to design a repeatable process for user requirements elicitation, and thinkLets will be attached to these basic activities in order to create a successful repeatable process. ThinkLets must be defined at least in terms of the tool used, the configuration of this tool, and the facilitation script. The tool component describes the specific version of the specific hardware and software used. The configuration specifies how the hardware and software are configured. And the script describes the sequence of events and instructions given to the group.
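To make this three-part definition concrete, the sketch below (illustrative Python; the class and field names are our own and are not prescribed by the thinkLet literature) captures a thinkLet as a simple record of tool, configuration, and script, using the FreeBrainstorm description from the appendix as example values.

    # Illustrative sketch: a thinkLet captured as tool + configuration + script,
    # following the three components named in the text. The example values
    # paraphrase the FreeBrainstorm description in the appendix; the class and
    # field names are our own invention.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ThinkLet:
        name: str
        tool: str           # specific hardware/software version used
        configuration: str  # how that tool is configured for this activity
        script: str         # sequence of events and instructions given to the group

    free_brainstorm = ThinkLet(
        name="FreeBrainstorm",
        tool="GroupSystems 3.4, Electronic Brainstorming",
        configuration="one page per participant plus one extra, and an "
                      "additional page for every 10 participants",
        script="each participant types one idea per page, submits it, and "
               "receives a randomly chosen page to agree with, argue against, "
               "or add a new idea to",
    )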
3. Repeatable Process for User Requirements Elicitation
We should pay attention to three elements when designing a repeatable process for user requirements elicitation: the participants to invite, the steps to take, and the techniques to use. When we translate these elements to the literature on GSS and collaboration engineering, we need to pay attention to participants, basic meeting activities, and thinkLets. Each of these three elements is presented below; together they form the repeatable process for user requirements elicitation.
Selecting and inviting the participants is the first step in analyzing user requirements. The guidelines for selecting participants mentioned in section 2 apply here. Group sessions that are not facilitated by
GSS ideally have between five and twelve participants. GSS, however, have been found to effectively support large groups (more than 8 members) [9]. The number of participants is limited only by the facilities used; facilities are available that can easily host 50 participants.
• Participants need to be reasonably knowledgeable about the topic.
• Participants should be interested in talking about it.
• The group should not include too many different types of people; the participants should share a similar background.
• Participants should be comfortable talking to each other, but over-familiarity might have a negative effect on the results.
From this we conclude that we need to run at least one session, and probably more. We have to run more sessions when we want to include more people than the GSS facilities allow. A second reason for running more than one session is when we can identify different types of users who do not share a similar background; for each type of user, we should plan a separate session.
The steps to be carried out, which are drawn from the literature on focus groups [6], are: (0) warm-up, (1) problem analysis based on current daily experiences, (2) solution generation based on those experiences, (3) demonstration of future scenarios, and (4) redefinition of solutions based on this demonstration. We focus on each of these steps by designing a sequence of thinkLets. An overview is presented in Figure 1; the individual thinkLets are described in more detail in the appendix.

(0) warm-up: FreeBrainstorm (divergence), warm up on a practice question
(1) problem analysis: FreeBrainstorm (divergence), identify problems in the current situation; FastFocus (convergence), formulate the most important problems; BroomWagon (convergence), select the x most important problems
(2) solution generation: LeafHopper (divergence), identify solutions for the problems
(3) demonstration of future scenarios: presentation of possible future usage scenarios
(4) redefinition of solutions: OneUp (convergence), converge on high quality solutions and identify criteria for quality
Figure 1: Repeatable Process for User Requirements Elicitation

The warm-up step is needed to get people acquainted with the GSS facilities, with each other, and with the goal of the session. After a short introduction of the goal of the session, the participants are asked to introduce themselves. After this, the participants are asked to practice with the system by using it on a 'dummy' question. We prefer a question that is somehow related to the topic of the session. Once the participants are comfortable with the session, the facilities, and each other, the core session is started. Participants are asked to describe problems concerning the topic of the session, based on their daily experiences. To arrive at the key problems of the participants, three thinkLets are used: FreeBrainstorm (divergence), FastFocus (convergence), and BroomWagon (convergence). In the FreeBrainstorm thinkLet, the participants brainstorm ideas in response to a single question. The participants work on separate pages that circulate among them. They contribute ideas to the pages or reactions to previous
ideas on the page. This often results in a long list of ideas, but the list will contain redundant ideas, irrelevant ideas, ideas at different levels of abstraction, and ambiguous ideas. The FastFocus thinkLet is used to clean up the list. The participants browse through the brainstorming contributions, and each participant in turn proposes a key issue aloud. The participants discuss the meaning and the wording of the proposed item. When the participants cannot find any more key issues, the activity ends with a clean, non-redundant list of key issues. After the FastFocus, the list of key issues is usually still too long to focus on. The BroomWagon thinkLet is then used to zoom in further on the key issues. The participants can check a limited number of issues they want to focus on. The group result is that some issues receive no votes and a number of issues end up as the real key issues. If this number is still too large, the BroomWagon thinkLet is repeated with a cleaner list (the issues with no votes are removed) until the number of top issues is no more than the maximum you want to handle.
The next step in the repeatable process is the generation of solutions. The LeafHopper (divergence) thinkLet is used for this. In the LeafHopper thinkLet, participants brainstorm on several discussion topics. Each participant hops among the topics to contribute ideas as dictated by interest and expertise. In this case, the topics are the key problems identified in the previous step and the ideas are the possible solutions to these key problems. These solutions can be anything and do not yet have to relate to the solutions the project team is thinking of.
After this step, the focus of the session shifts to the solutions the project team has in mind. The possibilities of these solutions are demonstrated through a presentation on future usage scenarios. During the redefinition of the solutions, the participants again define solutions for the key issues, but this time in the direction of the solutions of the project team. The goal of this step is to come up with solutions and a list of criteria explaining why these are good solutions. These criteria can then be translated into user requirements. The thinkLet used for this is called OneUp. In this thinkLet, the participants identify ideas of increasingly high quality, while explaining why each idea is better than any of the previous ones. The explanation yields valuable information about the user requirements.
The time schedule for the repeatable process is 3.5 hours in total. Prior experience has shown that three hours is a suitable length for these kinds of sessions [6]. The warm-up takes about 30 minutes; the problem analysis is scheduled for 75 minutes. The generation of solutions takes no more than 15 minutes. The demonstration is scheduled for 20 minutes, and the redefinition of solutions takes 55 minutes. With a break in the middle and a closing at the end, the session totals 3.5 hours. This schedule is based on a group size of 15 participants. With more or fewer participants, the time schedule should be adapted.
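To make the step sequence and schedule concrete, the sketch below (illustrative Python) encodes the steps of Figure 1 with their scheduled durations and checks the overall session length. The 15 minutes assigned to the break and closing are not stated in the text; they are inferred from the difference between the 3.5-hour total and the 195 minutes of scheduled activities.

    # Illustrative sketch of the repeatable process of Figure 1.
    # Step names, thinkLets, and step durations come from the text; the
    # break/closing duration is inferred from the stated 3.5-hour total.

    PROCESS = [
        ("(0) warm-up", ["FreeBrainstorm"], 30),
        ("(1) problem analysis", ["FreeBrainstorm", "FastFocus", "BroomWagon"], 75),
        ("(2) solution generation", ["LeafHopper"], 15),
        ("(3) demonstration of future scenarios", ["presentation"], 20),
        ("(4) redefinition of solutions", ["OneUp"], 55),
    ]

    BREAK_AND_CLOSING_MINUTES = 15  # inferred: 3.5 hours total minus the 195 minutes above

    total_minutes = sum(minutes for _, _, minutes in PROCESS) + BREAK_AND_CLOSING_MINUTES
    print(f"Planned session length: {total_minutes} minutes ({total_minutes / 60:.1f} hours)")
    # 30 + 75 + 15 + 20 + 55 = 195 minutes; adding 15 minutes for the break and
    # closing gives 210 minutes, i.e. the 3.5 hours reported above.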
4. Case study
The repeatable process described in the previous section has been applied to identify user requirements for mobile information services for visitors of Delft University of Technology (TUD). The combination of mobile telecommunication and the Internet forms the basis for mobile information services. The introduction of advanced mobile technology, such as UMTS, enables the development and, hopefully, the use of mobile information services. We define a mobile information service as an activity or series of activities of an intangible nature that occur when the user is mobile, intermediated by a mobile telecommunications network that supports the interaction between customers and the systems of a service provider, whose aim is to provide solutions to customer problems and needs, add value, and thus create customer satisfaction (based on [13] and [18]). By adding the term mobile, we want to underline that the technologies and services are offered while the user is 'on the move'. Many mobile services have already been proposed, and that is not where the real challenge is to be found. Obvious services are Internet access, location and friend finders, and context-aware information services. The main challenge in the first phase of designing mobile services is getting to know what the potential user wants. The experience with the repeatable process to identify user requirements is described below. Before we do so, however, we provide some information on the complexity of designing mobile information services.
4.1 Complexity in designing mobile information services
Designing services differs from designing products because they have different characteristics. The services marketing literature abounds with definitions of traditional services and provides a set of generally accepted characteristics of services. The SHIP acronym captures four characteristics of services, meaning that services are:
• Simultaneously produced and consumed: the user and producer are assumed to be present during a transaction; they co-produce the service.
• Heterogeneous: every service produced through interaction is unique to a certain degree, and services are extremely perishable.
• Intangible: services are intangible, but at the same time they are coupled to products. This is especially true for mobile services because they imply intermediation by ICT applications; for instance, you need a mobile phone to receive an SMS.
• Perishable: their value is gone with the act of consumption. That is why it is hard to quantify the value of a service: it is hard to put a price on an evoked feeling or an experience delivered.
All these characteristics are applicable to a mobile information service, except for the simultaneity of production and consumption. With information services such as mobile services, users use services that were designed earlier by the producer. We try to mitigate this problem through the inclusion of users in the design phase of mobile services: user requirements elicitation.
The heterogeneous quality of mobile services and the difficulty of measuring that quality are relevant for designing mobile information services. Anckar et al. [1] argue that, on a general level, the technology acceptance model of Davis [8] is widely accepted, but that it is questionable whether the model is applicable to users' choice of commercial channels. They set out a questionnaire from a perceived value-based view on adoption decisions, with the objective of identifying the key benefits and barriers that drive or inhibit user adoption of the mobile Internet and mobile commerce. They found that adoption/rejection decisions were determined to a greater extent by perceived benefits than by perceived barriers. We focus on perceived benefits in our design because benefits drive satisfaction. Satisfaction is deemed an appropriate measure to estimate the performance of services, since it is hard for service users to describe their experience with numerical values [34]. In this way we hope to have overcome the perishability of mobile services and the complexity involved in measuring their performance.
The bundling of the service with a handset with browser functionality is also relevant for designing mobile information services. The user interface via the browser and the keyboard of the mobile phone are tangible and very relevant in the service design process. This also shows that the design of mobile information services is complex because these services are not produced by one firm but in a complex value network of actors [22]. Different stakeholders need to cooperate to design a mobile information service, and all these actors may have different perceptions regarding the user requirements of the services. The increased bandwidth that UMTS promises drives content providers to consider the mobile channel as a potential channel to users, and handset manufacturers are busy developing handsets for UMTS. However, none of these players knows what the user wants, not even the user him/herself. There is a need to discover latent user needs, the requirements users expect from the services offered. Since there are so many different stakeholders, perceptions need to be aligned: every stakeholder envisions a different future, and they all need to adapt these visions to the demands of the user. The process of user requirements elicitation should prove helpful in this regard.
This leads us to the conclusion that the design process of mobile information services is complex due to the fast-developing technology, the service characteristics (heterogeneous quality, difficulty of measurement, and bundling with the handset), the large number of stakeholders involved, and unknown user requirements. To identify the user requirements we applied the repeatable process proposed in the previous section.
4.2 User experiences with the repeatable process
In this subsection we describe our experiences with the repeatable process to identify user requirements for mobile information services for visitors of Delft University of Technology (TUD). The first step to be carried out is the selection of participants. We define a visitor of the TUD as someone who is not at the university regularly and frequently; regular students and employees are therefore out of scope. Several groups of visitors of Delft University of Technology can be distinguished. Foreign students visit the TUD to do their master's. Family and friends of students visit the TUD to attend the final presentation for the master's degree. High school students visit the university to get more information on the studies offered by the university. Students from other universities visit the TUD to follow courses. Researchers and practitioners from all over the world visit the TUD to attend conferences. Business partners visit the university to work on projects. Post-academic students visit the TUD for short periods to follow interesting courses. Suppliers bring goods to the TUD. From this list we defined three different groups: foreign academics, Dutch academics, and Dutch practitioners. We left out foreign practitioners because they are few in number compared to the other groups. We also left out relatives and suppliers, since they are not related to the primary activities of Delft University of Technology.
Table 1: Contexts for which we decided to find a solution (service) after the BroomWagon

Session 1, foreign academics:
• Local logistics by public transport (when, where, how)
• How can I get in touch with my fellow countrymen?
• Where can I find information about educational subjects?
• Where do I get emergency help and medical care, and do I need it?
• What's the weather forecast?
• Where can I find information about cultural/recreation/sports events?

Session 2, Dutch academics:
• Do I have to walk or is there a bus or a taxi, and how expensive is a taxi (personal up-to-date route planner)?
• Restaurant facilities
• Information on courses (subject, time, place, teacher)
• What facilities can I use?
• Can I find a person with the same interests and meet this person?
• What events are going on where?

Session 3, Dutch practitioners:
• Details on the appointment (location, time, attendees)
• Information on other people around the campus
• What eating and drinking facilities are there and what is their actuality?
• What educational, scientific, and social events are taking place?
The session was executed three times. The first session was carried out with foreign academics: foreign professors as well as foreign master's and PhD students. The third session was carried out with Dutch practitioners from different firms who have connections with the TUD. The second session was to be carried out with Dutch academics who visit the university irregularly and infrequently; since we were not able to invite enough Dutch academics from outside Delft, we asked Dutch students from different universities to stand in for them. Each time we invited the participants to come to the GSS facilities of Delft University of Technology. We used laptops running GroupSystems. The first and third sessions were guided by the same facilitator; the second session was carried out by a 'guest' facilitator. The participants did not have to prepare anything for the session.
As a warm-up question we asked the participants what irritates them about mobile phones (step 0). Next, we asked them to think of questions or problems they encounter when they visit Delft (step 1). Later on we focused on possible solutions to these questions and problems (step 2). We demonstrated UMTS from the network technology and the user perspective, and showed video clips with possible UMTS services (step 3). Finally, we asked the participants to redefine the solutions (step 4).
The repeatable process was carried out fairly successfully in the three sessions. We did get insight into the kinds of services people are interested in and into the criteria the services should meet. However, it is difficult to extract the criteria users have for adopting services. One of the reasons for this could be that real examples of services were not yet available. Based on the results of the sessions we are able to start designing mobile information services. Once the first prototypes of these become available, we will try to focus on more specific user requirements once again.
We had to make one change to the process, concerning the last activity: the redefinition of solutions. The OneUp thinkLet did not seem to work out during the first session. The participants had difficulties in defining better solutions, and even more in describing why they were better. One observation is that the participants were too tired at the end to do an activity with quite a high cognitive load. Another observation is that participants are still too unfamiliar with the subject 'mobile information services' to really come up with solutions of increasing quality. The OneUp thinkLet was therefore replaced by a design activity for the second and third sessions. For each of the key issues identified in the problem analysis a subteam was formed. Each subteam described a so-called 'use case': the context in which the problem occurs, the way they want to navigate through the menu, and the way they want the results to be presented on their mobile device. After some time the subteams presented their designs to each other, and comments on the designs were given electronically using the LeafHopper thinkLet. This approach seemed to work better in the second session than the OneUp thinkLet did in the first session. It was, however, still difficult to define user requirements based on this. For the third session we therefore asked the participants to also define criteria that should be met in order for them to use the service just designed.
The results of the three sessions show that the different visitor groups have very similar questions and demands, although some differences can be distinguished (see Table 1). In all sessions, location or
route-related information was important, as well as questions of a more social nature: who are the other people, what are their interests, how can I contact other people, where are they? The session with the foreign academics showed extra attention to the weather and to information on medical and non-medical emergencies. The sessions with Dutch participants (sessions 2 and 3) showed extra attention to social events: what can we do, and where can we eat? Finally, the session with the Dutch practitioners showed more attention to information about the appointments for which they visit the university: where is it, at what time, who is attending, how do I let people know that I am late, etcetera.
The satisfaction of the participants is an important measure for the success of the session [29], [32]. We used four 7-point Likert questions, relating to each of the constructs described in Vreede et al. [32]. The results are presented in Table 2.

Table 2: Satisfaction of participants (mean and standard deviation of 7-point Likert scores, with group size N)
Foreign academics (N=10): Interest accommodation 6.2 (0.79), Product value 5.8 (0.63), Product satisfaction 5.3 (1.42), Process satisfaction 5.8 (1.48)
Dutch academics (N=14): Interest accommodation 5.3 (0.61), Product value 5.4 (0.84), Product satisfaction 4.2 (1.12), Process satisfaction 5.0 (1.36)
Dutch practitioners (N=8): Interest accommodation 6.1 (0.35), Product value 6.0 (0.76), Product satisfaction 5.6 (0.74), Process satisfaction 5.9 (0.99)

The overall conclusion is that the participants were satisfied with both the product and the process. The outcomes and the repeatable process also satisfied the problem owner. Although it is difficult to extract user requirements, key issues to be addressed with a mobile information service were identified and criteria for using such a service were defined. The ultimate satisfaction of the problem owner (who is in fact the future service provider), however, will only become clear once the product is designed and used.
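For completeness: each cell of Table 2 is simply the mean and standard deviation of the participants' answers to one 7-point Likert question. The sketch below (illustrative Python) shows the computation for one construct; the response values are invented (the raw data are not reported here) and merely chosen so that they reproduce the first cell of Table 2, and we assume the reported figure is the sample standard deviation.

    # Illustrative sketch: computing one Table 2 cell from raw 7-point Likert
    # responses. The responses below are invented, chosen only so that the
    # result matches the first cell (6.2, 0.79); we assume the sample StDev.

    from statistics import mean, stdev

    responses = [6, 7, 6, 5, 7, 6, 6, 7, 5, 7]  # hypothetical answers of 10 participants
    print(f"Mean = {mean(responses):.1f}, StDev = {stdev(responses):.2f}, N = {len(responses)}")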
5. Conclusions and discussion
In this paper, we presented a design for a repeatable process for user requirements elicitation based on GSS. We applied the repeatable process to elicit user requirements for mobile information services. These sessions resulted in ideas for mobile information services that are highly valued by potential users. The participants' designs, with their sketches of what the information on the mobile phone screen should look like, indicate criteria for when they will or will not use the service. With this GSS experiment we gained insight into the participants' thinking about which services will be useful when visiting a university campus and which will not. We cannot say anything about other concepts related to user requirements, such as for
example ease of use [8]. We argue that it is only possible to test these other concepts by letting users play with services, possibly in a test or pilot situation. In the next phase of this research we will build demos and ask the participants of the repeatable group process to test them. After that we will give them prototypes to test, and in the end we will have a group of people to test the final service.
The application of the repeatable process furthermore provided us with information on the process itself. First of all, we had to make some adjustments to the last step of the repeatable process. With this change, we believe the repeatable process for eliciting user requirements is a successful process. To really support this claim, we should, of course, run more sessions on different topics.
Acknowledgements
We would like to thank Joris Knigge, Joost Kalwij, Wouter Blom, and Thomas Reiners for supporting the sessions.
References
[1] Anckar, B., C. Carlsson, and P. Walden, 'Factors Affecting Consumer Adoption Decisions and Intents in Mobile Commerce: Empirical Insights', in: Proceedings of the 16th Bled Electronic Commerce Conference, Bled, Slovenia, June 9-11, 2003
[2] Boehm, B., P. Grünbacher, and R.O. Briggs, 'Developing Groupware for Requirements Negotiation: Lessons Learned', in: IEEE Software, vol. 18, no. 3, 2001, p46-55
[3] Briggs, R.O., G.J. de Vreede, J.F. Nunamaker, and D. Tobey, 'ThinkLets: Achieving Predictable, Repeatable Patterns of Group Interaction with Group Support Systems (GSS)', in: Proceedings of the 34th Hawaii International Conference on System Sciences, IEEE Computer Society, Los Alamitos, 2001
[4] Briggs, R.O. and G.J. de Vreede, ThinkLets: Building Blocks for Concerted Collaboration, GroupSystems.com, 2003 (in press)
[5] Briggs, R.O., G.J. de Vreede, and J.F. Nunamaker jr., 'Collaboration Engineering with ThinkLets to Pursue Sustained Success with Group Support Systems', in: Journal of Management Information Systems, vol. 19, no. 4, 2003, p31-64
[6] Bruseberg, A. and D. McDonagh, 'Organising and Conducting a Focus Group: The Logistics', in: Focus Groups: Supporting Effective Product Development, J. Langford and D. McDonagh (eds.), Taylor & Francis, London, 2003, p21-45
[7] Davis, A., 'Operational Prototyping: A New Development Approach', in: IEEE Software, vol. 9, no. 5, 1992, p70-78
[8] Davis, F.D., 'Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology', in: MIS Quarterly, vol. 13, no. 3, 1989, p319-340
[9] Dennis, A.R., A. Heminger, J. Nunamaker, and D. Vogel, 'Bringing Automated Support to Large Groups: The Burr-Brown Experience', in: Information & Management, vol. 18, no. 3, 1990, p111-121
[10] Goguen, J. and C. Linde, 'Techniques for Requirements Elicitation', in: 1st IEEE International Symposium on Requirements Engineering, San Diego, USA, 1993, p152-164
[11] Gopal, A. and P. Prasad, 'Understanding GDSS in Symbolic Context: Shifting the Focus from Technology to Interaction', in: MIS Quarterly, vol. 24, no. 3, 2000, p509-546
[12] Grohowski, R., C. McGoff, D. Vogel, B. Martz, and J.F. Nunamaker jr., 'Implementing Electronic Meeting Systems at IBM: Lessons Learned and Success Factors', in: MIS Quarterly, vol. 14, no. 4, 1990, p327-345
[13] Grönroos, C., Service Management and Marketing, Lexington Books, Lexington, MA, 1990
[14] Grünbacher, P. and R.O. Briggs, 'Surfacing Tacit Knowledge in Requirements Negotiation: Experiences Using EasyWinWin', in: Proceedings of the 34th Hawaii International Conference on System Sciences, IEEE Computer Society, 2001
[15] Ireland, C. and B. Johnson, 'Exploring the Future in the Present', in: Design Management Journal, no. 6, 1995, p57-64
[16] Johnson, P., Human-Computer Interaction: Psychology, Task Analysis and Software Engineering, McGraw-Hill, 1992
[17] Kar, E.A.M. van de and P. van der Duin, 'Dealing with Uncertainties in Building Scenarios for the Development of Mobile Services', in: Proceedings of the 37th Hawaii International Conference on System Sciences, 2004
[18] Kasper, J.D.P., P.J.C. van Helsdingen, and W. de Vries jr., Services Marketing Management: An International Perspective, John Wiley and Sons Ltd, England, 1999
[19] Langford, J. and D. McDonagh, 'Introduction to Focus Groups', in: Focus Groups: Supporting Effective Product Development, J. Langford and D. McDonagh (eds.), Taylor & Francis, London, 2003a, p1-18
[20] Langford, J. and D. McDonagh, 'Focus Group Tools', in: Focus Groups: Supporting Effective Product Development, J. Langford and D. McDonagh (eds.), Taylor & Francis, London, 2003b, p173-224
[21] Maguire, M., 'The Use of Focus Groups for User Requirement Analysis', in: Focus Groups: Supporting Effective Product Development, J. Langford and D. McDonagh (eds.), Taylor & Francis, London, 2003, p73-96
[22] Maitland, C.F., E.A.M. van de Kar, and U. Wehn de Montalvo, 'Network Formation for Provision of Mobile Information and Entertainment Services', in: Proceedings of the 16th Bled Electronic Commerce Conference, Bled, Slovenia, June 9-11, 2003
[23] Morgan, D.L., Planning Focus Groups, Sage Publications, Thousand Oaks, 1998
[24] Nunamaker jr., J.F., A.R. Dennis, J.S. Valacich, D.R. Vogel, and J.F. George, 'Electronic Meeting Systems to Support Group Work', in: Communications of the ACM, vol. 34, no. 7, 1991
[25] Nuseibeh, B. and S. Easterbrook, 'Requirements Engineering: A Roadmap', Imperial College, London, United Kingdom, www.doc.ic.ac.uk/~ban/pubs/sotar.re.pdf, 2000
[26] Post, B.Q., 'Building the Business Case for Group Support Technology', in: Proceedings of the Hawaii International Conference on System Sciences, IEEE Computer Society Press, 1992
[27] Schneider, G. and J. Winters, Applying Use Cases: A Practical Guide, Addison-Wesley, 1998
[28] Sharp, H., A. Finkelstein, and G. Galal, 'Stakeholder Identification in the Requirements Engineering Process', in: Workshop on Requirements Engineering Processes, Italy, 1999, p387-391
[29] Shaw, G., 'User Satisfaction in GSS Research: A Meta-Analysis of Experimental Results', in: Proceedings of the 31st Hawaii International Conference on System Sciences, IEEE Computer Society, 1998
[30] Shaw, M. and B. Gaines, 'Requirements Acquisition', in: Software Engineering Journal, vol. 11, no. 3, 1996, p149-165
[31] Standish Group, CHAOS Report: Application Project and Failure, 1995
[32] Vreede, G.J. de, R.O. Briggs, R. van Duin, and B. Enserink, 'Athletics in Electronic Brainstorming: Asynchronous Electronic Brainstorming in Very Large Groups', in: Proceedings of the 33rd Hawaii International Conference on System Sciences, IEEE Computer Society, 2000
[33] Vreede, G.J. de and P. Muller, 'Why Some GSS Meetings Just Don't Work: Exploring Success Factors of Electronic Meetings', in: R. Galliers, S. Carlsson, C. Loebbecke, C. Murphy, H.R. Hansen, and R. O'Callaghan (eds.), Proceedings of the 7th European Conference on Information Systems (ECIS), 1997, p1266-1285
[34] Zeithaml, V.A. and M. Bitner, Services Marketing, McGraw-Hill, New York, 1996
Appendix: ThinkLets
For a complete description of the thinkLets we refer to [4]. The description below is copied from [4].

FreeBrainstorm
Tool: GroupSystems 3.4
Configuration: Electronic Brainstorming; one page for each participating team member plus one extra, and an additional page for every 10 participants.
Script: Say this: 'Each of you has a different electronic page. You may each type one idea. Then you must click the submit button to send the page back to the group. The system will randomly bring you back a different page. That page may have somebody else's ideas on it. You may respond in three ways: (1) you may agree with an idea by adding detail to it, (2) you
may argue against an idea, or (3) you may be inspired to contribute a completely new idea. You may type exactly one idea on the new page. Then you must send that page back to the group. The system will bring you a new page.'

LeafHopper
Tool: GroupSystems 3.4
Configuration: A list of topics for discussion in Topic Commenter or one of the other list building tools (or an outline of topics in Group Outliner).
Script: Explain the topics to the group and verify their understanding. Explain the kinds of ideas that the group must contribute. Say this: 'Start working on the topics in which you have the most interest or the most expertise. Then, if you have time, move to each of the other topics to read and comment on the contributions of others. You may not have time to work on every topic, so work first on the topics that are most important to you.'

FastFocus
Tool: GroupSystems 3.4
Configuration: Participants view their comments in Electronic Brainstorming. The facilitator displays an empty public list, for example in Vote or Categorizer.
Script: Explain clearly the kind of items that belong on the public list. If you want problem statements, give examples of problem statements. If you want solutions, give examples of solutions. Say this: 'Each of you is on a different electronic page. Each of you has a different part of our brainstorming conversation on the screen in front of you. Please read the screen in front of you, and tell me the single most important issue represented in the discussion on your screen that should be included in this public list.' Call on each person in turn. Elicit one concept. Reframe the concept in as few words as possible. Check with the person to assure that your reframing captures the issue appropriately. When you have called on everybody in the group, say this: 'Now press the F9 key (or click the submit button) to swap pages. Each of you should now see a different page. Read the new page and raise your hand if there is an important issue on the new page that has not yet been posted to the public list.' Call on people who raise their hands. Discuss, condense, and add their issues to the public list. Say this: 'Now press the F9 key to swap pages again. Every page has now been seen by at least three pairs of eyes. Is there any issue on the screen in front of you that has not yet been posted to the public list?' Continue the cycle of page swapping and elicitation until nobody can find any important issues to add to the public list.

BroomWagon
Tool: GroupSystems 3.4
Configuration: Participants view the list of items in Vote. The facilitator selects the Multiple Selection voting method and allows group members to select between 20 and 33 percent of the total number of ideas. For example, if the main list consists of 47 ideas, the group may select up to 15 ideas.
Script: Say this: 'We have a long list of brainstorming items here that we will sift before we begin working on refining them in more detail. Read through the items on the list and check the ones that you think merit more attention. I have given you X checkmarks, so you can only check X items. Once you run out of checkmarks you will have to uncheck an item before you can check another one.' Let the group vote and display the results on the public screen. Focus everyone on the results, saying: 'Let's look at the results. There are a number of items that got few or no votes. Let's remove these from the list as they appear to be less interesting than the other ones. Let's vote again now. I will give you Y checkmarks. Please check the items that you feel merit more attention.' Repeat this process until you end up with the maximum number of issues that you want to handle from that moment onward. Normally, you achieve this in about 2-3 iterations, depending on the length of the original list.

OneUp
Tool: GroupSystems 3.4
Configuration: Leave the participants in the tool where they brainstormed their comments. Open a public list in any list building tool, for example Categorizer or Vote, and prepare to add key items. Open a private list in Categorizer and prepare to add criteria for evaluating items.
Script: Say this: 'Please look at the brainstorming comments in front of you on the screen. In a moment I will call on each of you in turn. The first person I call on will tell me the most important item represented in the discussion on his or her screen. I will post it on this list. From then on, when I call on you, you may suggest another item for the list. However, the ideas you offer must be better in some way than the ideas that are already on the list. You must offer both the idea and the argument about why it is better than the previous ideas.' As people offer items for the public list, discuss them, reframe them for clarity and brevity, and post them on the public list. As people offer arguments about why an idea is better than the existing ideas, abstract a criterion for judging idea quality, and post that on your private list. Later you can refine and condense those criteria and use them in moderated discussions or in a MultiCriteria thinkLet.
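Two small pieces of arithmetic recur in the BroomWagon description above: the checkmark budget of 20 to 33 percent of the list length (for example, 47 ideas gives up to 15 checkmarks) and the iterative removal of items that receive no votes. The sketch below (illustrative Python; the example vote counts are invented, and in a real session the group would vote again after each reduction) makes this explicit.

    # Illustrative sketch of the BroomWagon arithmetic described above.
    # The checkmark budget is 20-33% of the current list length; items with no
    # votes are dropped each round until the list is short enough.

    import math

    def checkmarks(list_length: int, fraction: float = 0.33) -> int:
        """Number of items each participant may check (e.g. 47 ideas -> 15)."""
        return math.floor(list_length * fraction)

    def broomwagon(votes: dict[str, int], max_issues: int) -> dict[str, int]:
        """Repeatedly drop zero-vote items until at most max_issues remain."""
        while len(votes) > max_issues:
            survivors = {item: n for item, n in votes.items() if n > 0}
            if len(survivors) == len(votes):  # nothing dropped: stop to avoid looping forever
                break
            votes = survivors
            # in a real session the group would now vote again on the cleaned list
        return votes

    print(checkmarks(47))  # 15, as in the configuration example above
    print(broomwagon({"a": 5, "b": 0, "c": 2, "d": 0, "e": 1}, max_issues=3))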