A Decision Support System for Global Team Management: Expert Evaluation

Sarah Beecham, Noel Carroll, John Noll
Lero, The Irish Software Engineering Research Centre
Email: {sarah.beecham, noel.carroll, john.noll}@lero.ie

Abstract—Context: The literature is rich in examples of both successful and failed global software development projects. However, practitioners do not have the time to wade through the many recommendations to work out which ones apply to them. To this end, we developed a prototype Decision Support System (DSS) for Global Teaming (GT), with the goal of making research results available to practitioners. Aims: We want the system we build to be based on the real needs of practitioners: the end users of our system. Therefore, the aim of this study is to assess the usefulness and usability of our proof of concept in order to create a tool that is actually used by practitioners. Method: Twelve experts in GSD evaluated our system. Each participant individually tested the system and completed a short usability questionnaire. Results: Feedback on the prototype DSS was positive. All experts supported the concept, although many suggested areas that could be improved. Both expert practitioners and researchers participated, providing different perspectives on what we need to do to improve the system. Conclusion: Involving both practitioners (users) and researchers in the evaluation elicited a range of useful feedback, providing insights that might not have emerged had we focused on one or the other group. However, even when we implement the recommended changes, we still need to persuade practitioners to adopt the new tool.

Keywords—Global Software Development, Distributed Software, Decision Support System, Software Process, Global Teaming Model, Expert Evaluation


I. INTRODUCTION

The research literature is rich in practical examples of the benefits of Global Software Development (GSD). For example, many organizations turn to GSD to extend their business competencies, reduce software development costs, and gain greater access to human resources and new business opportunities [1–5]. However, ensuring that the potential benefits are realized requires that research results be accessible to practitioners, who may lack the time or resources to survey the literature for solutions that apply to their specific context. We are developing the Global Teaming Decision Support System (GT DSS) [6] in an attempt to close this gap by presenting current knowledge in a more accessible, tailored, and supportive way that allows practitioners to realize the value of GSD. But is such a tool really useful? And if so, what is the best way to implement it? To answer these questions, we conducted evaluation trials of early prototypes of the GT DSS.

This paper reports on the results of the most recent trial, conducted at the International Conference on Global Software Engineering (ICGSE) held in Helsinki in August 2011. Twelve volunteers, comprising both GSD researchers and practitioners engaged in GSD, agreed to try the system and then complete a questionnaire about their impressions. The results of the evaluation suggest that the concept is useful and has the potential to achieve the goal of making research accessible to practitioners. However, the participants had many suggestions for improving the way users interact with the system, and for making the recommendations more relevant to project managers and other team members who have specific GSD issues to address.

The remainder of this paper is organized as follows: in the next section we provide some background on issues related to GSD, and how they could be addressed by a DSS. In Section III we briefly describe our GT DSS prototype as a possible way to bridge the gap between research and practice. Section IV reports on the evaluation, including the methodology and a summary of how the group of experts viewed the currency and usability of our system. Section V concludes the paper with a summary of the results and future directions for DSS development in the GSD field.

II. BACKGROUND



A. Global Software Development Motivation and Issues

Many factors motivate the adoption of GSD [7], including:
• The need to capitalize on globally dispersed resources, wherever they are located;
• The business advantages of proximity to the market, including knowledge of customers and local conditions, as well as the goodwill engendered by local investment;
• The quick formation of virtual corporations and virtual teams to exploit market opportunities;
• Pressure to improve time-to-market by using time zone differences in "follow-the-sun" development;
• The need for flexibility to capitalize on merger and acquisition opportunities wherever they present themselves.

However, GSD is not without risk. Šmite and Borzovs [8] identify numerous global threats that need to be considered when developing recommendations for GSD, including:

1) Geographic distribution: distance that separates the participating teams;
2) Socio-cultural differences: diversity in social, ethnic, and cultural backgrounds;
3) Time zone differences: temporal distance, and the degree of working-hours overlap;
4) Language differences: linguistic diversity;
5) Multisourcing: multiple team involvement in a single life cycle;
6) Organizational differences: diversity in process maturity, and inconsistency in work practices, goals, and expectations;
7) Cross-border transactions: political and legislative diversity.

These threats place major importance on implementing a software development model that fits the context of a specific project. The Global Teaming Model (GTM) is such a model, comprising a set of goals and recommended practices created to address the shortcomings of the CMMI® regarding GSD [9]. It is oriented toward practitioners, and thus served as the basis for the decision support system discussed in Section III.

B. Decision Support Systems

A Decision Support System (DSS) is defined as "a system under the control of one or more decision makers that assists in the activity of decision making by providing an organized set of tools intended to impose structure on portions of the decision-making situation and to improve the ultimate effectiveness of the decision outcome" [10]. A DSS typically consists of three components: an information store (knowledge base), an inference engine, and a user interface [11]. These are combined to improve human decision making by reducing complexity. There are many potential advantages of a DSS for GSD; for example:

1) Book-keeping: some conditions affect more than one practice or recommendation, and some recommendations are predicated on several conditions. Keeping track of these relationships manually is time consuming and error prone.
2) "What-if" analysis: a DSS can determine not only what should be done, but also what might be done under different circumstances.
3) Sequencing of implementation, whereby a recommendation follows on from a previous recommendation.

III. THE GLOBAL TEAMING DECISION SUPPORT SYSTEM

To explore the potential of a Decision Support System for GSD, we implemented a prototype DSS based on the Global Teaming Model [6]. The goals of the GT DSS are to integrate existing knowledge and experience, ensure consistency and compatibility among recommendations, and avoid conflicting strategies. The prototype system achieves this by:

1. Providing tailored advice to software project managers and developers engaged in global community-based software development, about which factors need to be considered for their particular project situation, in all project phases, from start-up through development to deployment.


2. Assessing the capability of an organization or specific team to develop software globally. This includes: highlighting weaknesses and risks of a specific project approach; identifying issues that need to be addressed; and helping to resolve differences among project stakeholders (executives, project managers, developers, and other team members).

3. Capturing organizational experience from current and previous projects.

4. Training individuals and teams on how to approach global community-based software development, including which areas need managing and how to manage them.

These goals will be achieved with an implementation that is easy to use and extend. The DSS makes existing knowledge accessible by tailoring recommendations embodied in current guidelines to fit a given context. Specifically, the GT DSS builds on the existing GTM to provide a centralized decision support system. The GT DSS comprises the following components (Figure 1): a knowledge base containing classifications of project characteristics, and rules that determine appropriate recommendations based on facts provided by the user.

Fig. 1. Decision Support System Architecture.

Figure 1 shows how managers and developers interact with the Decision Support System. A session begins with the DSS presenting a set of classification questions related to GSD projects. The manager or developer specifies values for these options that capture the characteristics of the specific project in question. The Inference Engine applies rules in the Knowledge Base to these classification values to infer a set of recommendations for the project. The DSS presents the final set of recommendations to the user, who can then ask for the rationale behind them, or for references to the source of the recommendation (an empirical study or previous experience).
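To make this flow concrete, the sketch below shows one way the rule-application step could be realized. It is a minimal illustration assuming a simple attribute-matching rule format; the example rules, field names, and the infer function are our own illustrative assumptions, not the actual GT DSS implementation. The final lines illustrate the "what-if" analysis advantage noted in Section II-B: the same rules re-run under a changed project context.

```python
# Illustrative sketch only: a simple rule-application step in the style of
# Figure 1. The rule format, example rules, and names are assumptions,
# not the actual GT DSS implementation.

from dataclasses import dataclass


@dataclass
class Rule:
    conditions: dict       # classification answers that must hold
    recommendation: str
    rationale: str         # source: empirical study or prior experience

    def applies(self, facts: dict) -> bool:
        # A rule fires only when every condition matches the user's answers.
        return all(facts.get(k) == v for k, v in self.conditions.items())


# Knowledge base: hypothetical rules keyed on project characteristics.
KNOWLEDGE_BASE = [
    Rule({"timezone_overlap": "low"},
         "Establish asynchronous communication protocols.",
         "Low working-hours overlap limits synchronous meetings [8]."),
    Rule({"timezone_overlap": "low", "sites": "many"},
         "Appoint a coordination liaison at each site.",
         "Multisourcing amplifies temporal distance effects [8]."),
]


def infer(facts: dict) -> list:
    """Inference engine: collect every recommendation whose rule fires."""
    return [(r.recommendation, r.rationale)
            for r in KNOWLEDGE_BASE if r.applies(facts)]


# "What-if" analysis: re-run the same rules under a changed context.
baseline = infer({"timezone_overlap": "low", "sites": "many"})   # 2 matches
what_if = infer({"timezone_overlap": "high", "sites": "many"})   # 0 matches
```

Keeping rationale alongside each rule is what lets the user ask for the source of a recommendation, as described above.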

Fig. 2. DSS User Interface.

Figure 2 shows an example of the DSS user interface in action. The left pane shows the interview questions, several of which have already been answered. The current question is highlighted, and its text is displayed in the upper right pane. The lower right pane lists a summary of the recommendations made so far; clicking on any of these displays detailed recommendations and explanations.

IV. EVALUATION

The objectives of this study are to expose any design problems early in the development of the DSS and to assess whether the community actually needs such a tool. Software evaluations should be conducted early in the software development life cycle [12] to improve the system design and reduce the cost of correcting errors later in development [13]. Our method of evaluation is similar to Heuristic Evaluation (HE), in which a small number of experts, chosen intentionally to represent varying disciplines, evaluate the interface separately [14]. According to Holzinger [12], the advantages of using HE include: "the application of recognized and accepted principles; intuitiveness; usability early in the development process; effective identification of major and minor problems; rapidity; and usability throughout the development process." Since our evaluation group includes potential end users, we are also able to account for users' needs and to identify domain-specific problems. However, we accept that our method may not evaluate the complete design, and that evaluators may focus too much on one part of the system [12]. There is no consensus on the optimal number of experts required to evaluate a system; however, an in-depth study indicated the ideal number to be between eight and twelve [15]. Our study involved twelve participants.

Fig. 3. Breakdown of participants by role.

A. Method

During the 2011 ICGSE conference we asked participants to evaluate our prototype DSS. As shown in Figure 3, twelve experts agreed to participate, comprising researchers (25%), practitioners (42%), and individuals who were both researchers and practitioners (33%). Table I shows that the participants had an average of nearly eleven years' experience in software engineering, and four and a half years' experience in Global Software Development. The organizations that the participants worked for (now or in the past) spanned an average of nearly 20 countries. The main offices of these companies were widely distributed, including locations in France, the USA, India, Australia, Pakistan, Switzerland, Finland, and Germany. Therefore, although the sample was small, the experts represented a good cross section of GSD expertise.

TABLE I. EVALUATION SAMPLE DEMOGRAPHICS

                                                   Mean      Range
Experience (in industry or research)               10.83     1 to 30 years
Experience in GSD                                  4.5       0 to 10 years
Number of countries in org (8 participants)        19.625    2 to 60
Head/main office locations: France, USA, India, Australia, Pakistan, Switzerland, Finland, Germany

Each participant was asked to explore and navigate the system unaided, interacting with it to work through a series of GSD-related contextual questions, answers, and recommendations. The time taken to complete this task varied from participant to participant, but averaged around 20 minutes. After using the system, each participant completed a short questionnaire that included some demographic questions and system-related usability questions. These questions were qualitative and open, designed to elicit useful feedback to direct future development. The six usability questions were:

1. What do you think about using a Decision Support System (DSS) to create a list of recommendations to support project managers new to GSD? (i.e., is it the right kind of tool?)
2. The current system covers just a subset of practices; given that this is a prototype, did you manage to get any useful feedback from using the system?
3. What extra features and practices would you like to see included in this system?
4. Is the granularity of the model/tool at the correct level of abstraction?
5. Can you think of where this system might be useful (other than with project managers involved in GSD)?
6. Any other comments?

B. Results

Answers to all six questions are grouped by the role of the expert: Researcher, Researcher/Practitioner, and Practitioner.

Question 1: What do you think about using a DSS to create a list of recommendations to support project managers new to GSD? (i.e., is it the right kind of tool?)

This is a concept question. All 12 replied that they felt this appears to be the right kind of tool to support project managers new to GSD; enthusiasm ranged from "it depends", through "useful", to "great idea". Researchers were slightly more reserved in their responses, stating "it depends, but should be useful". The group representing researchers/practitioners, i.e., those who have been in industry and have moved to research, were the most critical, always following up their enthusiasm with a

conditional statement: "It depends on the organization"; "I think personal skills and experience matter most". Practitioners were more enthusiastic, stating "this kind of expert system would be very needed"; "I think proper training is essential"; "DSS should be the next step"; and "Great idea".

Question 2: The current system covers just a subset of practices; given that this is a prototype, did you manage to get any useful feedback from using the system?

This question was asked to check whether this version of the prototype was sophisticated enough for the participants to evaluate. Researchers: some queried the way the question was phrased, and wondered whether it was possibly leading, in that any recommendations are always useful. This was not the intention of the question, which was simply to check whether they felt they could give an assessment. They felt the presentation of data was complex and too generic, that some combinations were strange, and that the granularity of the questions could be problematic. Researcher/practitioners: they also wanted to see more context-related recommendations, but felt they could get useful information from the system at this stage of development. Practitioners: they confirmed that the recommendations seemed sensible, but also asked for more content and finer granularity, and for more structure and a better UI. All three groups demonstrated through their answers that the system was sufficiently developed to allow them to give useful feedback and directions for improvement.

Question 3: What extra features and practices would you like to see included in this system?

We asked this question to elicit suggestions for new and improved functionality. Researchers provided pointers on how to improve the underlying information and technical aspects: "need action points rather than textual descriptions and links to examples and plans"; "need improved consistency checks (contradictions, exclusions)". Researcher/practitioners' feedback was almost entirely about the context and ordering of the questions asked by the DSS, which requires more tailoring and a detailed hierarchy. Practitioners were more concerned with HCI, the UI, the flow of information, structure, and hierarchy: "more attention to HCI"; "UI can be improved"; "show difference before and after answering questions"; "highlight changes".

Question 4: Is the granularity of the model/tool at the correct level of abstraction?

Question 4 was included to check the granularity of the questions and subsequent answers, and to see whether the hierarchy is viewed consistently by users. Researchers were guarded in how they answered this question, in particular questioning the way it was asked; the general sense was that it is difficult to say, and that even at this level of development, the granularity appeared to some as uneven and overlapping.

Researcher/practitioners gave conditional answers, with suggestions for improvement; examples and explanations were qualified with phrases like "from my experience as a developer". Practitioners generally approved (three agreeing it was OK); the two who said "not really" explained exactly why they did not agree: "some questions' scope is big", and, more explicitly, "It is not uniform, e.g. FTs and 1-1 communication are at different levels."

Question 5: Can you think of where this system might be useful (other than with project managers involved in GSD)?

We wanted to know whether the experts thought this system could be applied to users other than project managers, who are envisaged as the key users of this system. Researchers emphasized education, suggesting students, those who want to self-organize within a team, and the broader picture of portfolio management. Researcher/practitioners had varied responses, from the pragmatic suggestion to target "just project managers, especially to start with", to suggesting that any type of project would be relevant, as well as classroom settings and different levels of organizational abstraction. Practitioners provided practical advice, stating that all team members should have access to this kind of tool, adding in particular lead-level staff who handle the build.

Question 6: Any other comments?

We added this question to give the experts an opportunity to add anything they felt strongly about that was not covered in the previous five questions. Most participants (8 out of 12) did have something to say. Researchers raised concerns over the idea, particularly about the success of applying the system in practice, but were generally positive, stating "nice idea". Researcher/practitioners: those who offered additional comments suggested improvements to the structure: "GUI needs to improve. Maybe a tree structure of items?" Practitioners gave practical advice on where they have found help in collecting recommendations, suggesting we check existing commercial products, and also suggesting that we could extend the idea to include Agile methods.

C. Discussion

In general the responses were positive. Practitioners appeared more concerned with the user interface, and gave suggestions about the structure. It seems they were imagining that they might very well use this kind of tool in practice, and were looking for whatever immediately struck them as barriers to tool adoption. When criticisms were made, they were generally accompanied by a concrete example. Researchers looked at the tool more as something very much in development, raised concerns about how useful it would be in practice, and even raised doubts about its adoption. They did see it as a learning tool, something that could be used in education. A lot of constructive ideas came from this group; however, many of them were conceptual, making them difficult to act on without further clarification.

TABLE II. ANALYSIS OF RESULTS PER GROUP

Participant category   Questions answered   Category size   Useful comments   Mean
researchers            18                   3               9                 3
res & prac             24                   4               11                2.75
practitioners          30                   5               14                2.8
total                  72                   12              34

The participants who were both researchers and practitioners provided a third perspective: they came up with new ideas for development, and in some ways were the most pragmatic in their responses, in terms of what is possible and how to scope the development.

While we were opportunistic in our sampling of the experts, it may be that had we insisted on using only practitioners (who are, after all, the projected end users), we would have obtained more positive overall feedback, but we would have missed out on some very valuable insights offered by researchers. While some of the researchers' comments may appear harsh, they offer useful insights that we might otherwise have missed. On close examination of the experts' responses we found that each group contributed roughly equally in providing useful comments - comments that direct our future work, which is the objective of the evaluation. Table II shows that out of a potential 72 answers, we received a high number of comments (34, almost 50%) that we can use directly in our future development. Each group provided a similar amount of useful feedback, with an average of roughly three useful comments per participant.

D. Limitations

We are aware there may be some bias in how participants answered their questions since, although anonymity was assured, they handed their responses directly to the authors of the study after their evaluation. Also, there might be some issues with construct validity in the way we asked the questions. Finally, given that all the participants were attendees at ICGSE, there is the possibility of selection bias, especially among the practitioners: as attendees at a research conference, they may be more forward-looking than the general population of projected end users.

E. Directions for Future Work

After synthesizing the responses of all the experts, we found that four key areas need to be addressed in our future development:

1. The interview process needs to be more structured, with a hierarchical organization, and questions selected based on the answers already given (a sketch of this follows the list).
2. The recommendations need to be based on the specific GSD project context as identified during the interview process.
3. There are different levels of specificity among the recommendations; overall, the recommendations should be more specific.
4. One participant observed that "managers don't read"; this suggests that we need to structure the recommendations as a checklist or set of bullet points, with supporting documentation and links to video or real examples.
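As a purely hypothetical sketch of item 1, the fragment below gates each interview question on the answers already given; the question identifiers, texts, and gating conditions are invented for illustration and do not reflect the actual GT DSS question set.

```python
# Hypothetical sketch of a hierarchical interview (future work item 1):
# each question is gated on answers already given. Identifiers and
# conditions are illustrative, not the GT DSS question set.

QUESTIONS = {
    "q_sites":    {"text": "How many development sites?",
                   "requires": {}},
    "q_overlap":  {"text": "Working-hours overlap between sites?",
                   "requires": {"q_sites": "multiple"}},
    "q_language": {"text": "Do the sites share a working language?",
                   "requires": {"q_sites": "multiple"}},
}


def next_questions(answers: dict) -> list:
    """Return unanswered questions whose gating conditions are satisfied."""
    return [qid for qid, q in QUESTIONS.items()
            if qid not in answers
            and all(answers.get(k) == v for k, v in q["requires"].items())]


print(next_questions({}))                       # ['q_sites']
print(next_questions({"q_sites": "multiple"}))  # ['q_overlap', 'q_language']
```

Structuring the interview this way would keep the question flow relevant to the answers already given, addressing the ordering and tailoring concerns raised under Questions 2 and 3.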

These results indicate that the knowledge base needs the most refinement: the inference engine can already handle tailored lines of questioning, provided the knowledge base includes rules to implement them.

V. TO CONCLUDE

In this study we focussed on evaluating a prototype Decision Support System to support practitioners involved in Global Software Development (GSD). We elicited the opinions of a group of experts in GSD in order to guide our future development, and we have focussed on those comments that suggested changes to the current Decision Support System we have built [6]. There were, however, many very positive comments confirming that the development is needed and important. We found the responses from our group of experts insightful, and we would recommend that anyone developing tools for practitioners undertake early evaluations, involving not only the end users, but also researchers who have in-depth knowledge of how best to educate and communicate recommendations in a way that is clear and unbiased. The results of this study indicate that the community needs this tool, which encourages us to enter the next phase of development: to produce a tool that not only is needed in theory, but also meets the real practical requirements of the end users - the practitioners. Without conducting such an evaluation at the early stages of development, we might have ended up producing a tool that is needed in theory, but that is never taken up and adopted in practice.

ACKNOWLEDGMENTS

This work was supported, in part, by Science Foundation Ireland grant 10/CE/I1855 to Lero - the Irish Software Engineering Research Centre (www.lero.ie). We also thank the twelve experts who took the time to evaluate our system at ICGSE 2011.

REFERENCES

[1] D. Karolak, Global Software Development: Managing Virtual Teams and Environments. IEEE Computer Society Press, 1998.
[2] E. Carmel, Global Software Teams: Collaborating Across Borders and Time Zones. Upper Saddle River, NJ: Prentice Hall, 1999.
[3] E. Carmel and P. Tjia, Offshoring Information Technology: Sourcing and Outsourcing to a Global Workforce. Cambridge: Cambridge University Press, 2005.

[4] R. Sangwan, M. Bass, N. Mullick, D. J. Paulish, and J. Kazmeier, Global Software Development Handbook. Boca Raton: Auerbach Publications, 2007.
[5] M. Jiménez, M. Piattini, and A. Vizcaíno, "Challenges and improvements in distributed software development: A systematic review," Advances in Software Engineering, vol. 2009, Article ID 710971, 14 pages, 2009.
[6] S. Beecham, J. Noll, I. Richardson, and D. Dhungana, "A decision support system for global software development," in Methods and Tools for Project/Architecture/Risk Management in Globally Distributed Software Development Projects (PARIS) Workshop, co-located with the IEEE International Conference on Global Software Engineering (ICGSE '11), Helsinki, Finland, 2011.
[7] J. D. Herbsleb and D. Moitra, "Global software development," IEEE Software, vol. 18, no. 2, pp. 16–20, 2001.
[8] D. Šmite and J. Borzovs, "A framework for overcoming supplier related threats in global projects," in Proceedings of the International Conference on European Software Process Improvement (EuroSPI), Finland: Springer Verlag (LNCS), October 2006, pp. 49–60.
[9] I. Richardson, V. Casey, F. McCaffery, J. Burton, and S. Beecham, "A process framework for global software engineering teams," Information and Software Technology, 2012.
[10] G. Marakas, Decision Support Systems in the 21st Century. Upper Saddle River, NJ: Prentice Hall, 2003.
[11] C. Holsapple and A. Whinston, Decision Support Systems: A Knowledge-Based Approach. Minneapolis/St. Paul, MN: West Publishing, 1996.
[12] A. Holzinger, "Usability engineering methods for software developers," Commun. ACM, vol. 48, no. 1, pp. 71–74, 2005.
[13] N. M. M. Zainuddin, H. B. Zaman, and A. Ahmad, "Heuristic evaluation on augmented reality courseware for the deaf," in 2011 International Conference on User Science and Engineering (i-USEr), Nov. 2011, pp. 183–188.
[14] J. Nielsen, "Enhancing the explanatory power of usability heuristics," in Proceedings of CHI '94, Boston, MA: ACM, 1994.
[15] W. Hwang and G. Salvendy, "Number of people required for usability evaluation: the 10±2 rule," Commun. ACM, vol. 53, no. 5, pp. 130–133, 2010.