2016 International Workshop on Continuous Software Evolution and Delivery

From Requirements to Continuous Re-prioritization of Hypotheses

Helena Holmström Olsson

Jan Bosch

Department of Computer Science, Malmö University, Sweden. [email protected]

Department of Computer Science and Engineering, Chalmers University of Technology, Sweden. [email protected]

Abstract—Typically, customer feedback collected in the pre-study, and during the early stages of software development, determines what new features to develop. However, once the decision to develop a new feature is taken, companies stop validating if this feature adds value to its intended customers. Instead, focus is shifted towards developing and implementing the feature. As a result, re-prioritization of feature content is rare, and companies find it difficult to continuously assess and validate feature value. In this paper, we explore the data collection practices in five software development companies. We introduce a model that allows continuous re-prioritization of features. Our model advocates a development approach in which requirements are viewed as hypotheses that need to be continuously validated, and where customer feedback is used to continuously re-prioritize feature content. We identify how the model helps companies transition from early specification of requirements towards continuous re-prioritization of hypotheses.

CCS Concepts • Software creation and management➝Software development process management➝Software development methods➝Agile software development.

Keywords
Customer feedback; continuous re-prioritization; continuous validation; hypotheses.

I. INTRODUCTION

In most software development companies, the pre-study is the phase in which decisions are taken on whether to develop a new feature or not. In this phase, the expected value of a feature is estimated, and if the outcome is positive the feature is developed. However, in this early phase the estimated value of a feature is typically based on very limited data [1]. Although companies use multiple techniques to collect customer feedback [2, 3, 4], they lack a systematic approach for doing this. As recognized in previous research [1, 5, 6, 7, 8, 9], the problem is not the lack of data. On the contrary, most companies collect huge amounts of data from customers and from products in the field [10, 11]. Rather, the challenge is for developers and product managers to get access to relevant data that helps in decision-making and prioritization processes. Often, and as a major problem in the companies we studied, feedback that is collected in the early stages of development is seldom validated with data collected in later stages. This means that companies cannot continuously validate whether a feature is actually adding value to customers, resulting in a situation in which the outcome of the pre-study is difficult to question and re-prioritization of features is scarce. As a consequence, the risk is that software companies invest in developing features that were considered value-adding in the pre-study phase, but without the ability to continuously validate whether the predicted value is indeed realized in later stages of development, as well as after the release of the feature.

In this paper, we explore the data collection practices, and the challenges associated with these, in five software development companies. In these companies, huge amounts of customer data are collected throughout the development process. However, this data has limited impact on business decisions and does not allow for a dynamic development process characterized by continuous re-prioritization of features. Based on our previous research, we introduce a model that allows for a systematic approach to data collection, and in which multiple customer feedback techniques are used to run frequent validation cycles with customers. Our model advocates a development approach in which requirements are viewed as hypotheses that, instead of being frozen early in the development process, are continuously validated as part of a hypothesis backlog. We present results that show how the use of the model increases the number of feedback techniques that are used, as well as the frequency of validation cycles also after development has started.

The contribution of this paper is twofold. First, we identify challenges associated with data collection in large software-intensive companies. Second, we introduce a model to help the companies adopt a systematic approach to customer data collection, and we identify new ways-of-working that emerge as a result of the model and the development approach it advocates. In this way, our research helps software development companies transition from early specification of requirements towards continuous re-prioritization of hypotheses.

The paper is organized as follows. In section II, we summarize related work and present the model we developed in our previous research. In section III, we describe the case study design. We present our case study findings in section IV. In section V, we discuss our findings, and in section VI we conclude the paper.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. CSED'16, May 14-15 2016, Austin, TX, USA Copyright is held by the owner/author(s). Publication rights licensed to ACM. ACM 978-1-4503-4157-8/16/05…$15.00

DOI: http://dx.doi.org/10.1145/2896941.2896955



II. RELATED WORK

A. Customer feedback techniques
As recognized in previous research [2, 3, 4], companies apply a wide range of techniques to collect customer feedback. In various ways, these techniques allow customers to engage in problem definition, requirements engineering, and system evaluation and validation. Before development starts, as well as in the early stages of development, use cases, scenarios, prototyping, customer interviews, customer observations and surveys are common [2]. Likewise, alpha- and beta-testing techniques are used effectively before and during development in order to continuously validate feature content and value with customers. In addition, and in later stages of development, post-deployment data is collected [8, 12]. With products being increasingly connected, companies can monitor them, collect data on how they perform, predict when they break, know where they are located, and learn about how they are used or not used. However, and as reported in previous research [5, 6, 11], companies struggle with how to effectively include the feedback they collect into their development processes.
In earlier research [1], we coined the term the 'open loop' problem, denoting the situation in which there is no accurate way for product management to validate whether the features that were prioritized in the pre-study phase are also the features that are appreciated by customers after release, and that generate the expected revenue to the company. As a result, the confirmation of the correctness of the decisions takes place only after the product has been deployed to its customers, and there is the risk that companies invest significant R&D efforts in the development of features that have no proven customer value.

B. Continuous customer experimentation
In recent research, increasing attention has been paid to continuous experimentation with customers [1, 7, 8, 9]. Inspired by the 'Build-Measure-Learn' loop [13], a number of approaches emphasizing rapid customer evaluation of small product increments are emerging. While the concept is not new [14], it has become a reality also in large software-intensive companies that nowadays apply experiment techniques to learn about product use and customer behaviors [12, 15]. The concept of an experiment system has been defined as an experiment-centric approach to product development with the purpose of accelerating innovation through systematic and continuous collection of customer feedback [8]. As such, experiment-based approaches to software development seek to link customer data to business decisions by having R&D efforts prioritized based on real-time customer data. In online offerings, the notion of A/B testing is used as a mechanism to determine what version of a feature adds most value to customers. In line with this, Bosch [8] outlines the process for an experiment as starting with the formulation of a hypothesis and the identification of quantitative metrics to measure whether the hypothesis is fulfilled or not. Recently, the process for feature experiments was further elaborated upon, and in developing the HYPEX model, Olsson and Bosch [1] provide process support for initiating, running and evaluating feature experiments. Similarly, Fagerholm et al. [7] describe the basic building blocks of continuous experimentation and emphasize that customer experiment results need to be closely linked to feature prioritization and road mapping in order to effectively support a more flexible business strategy.
The experiment techniques presented above help companies collect customer data primarily during development, as well as post-deployment. In this way, companies can shorten feedback loops to customers and increase their ability to continuously improve the product based on customer feedback. However, and as recognized in this study, although companies collect large amounts of data throughout the development cycle, they lack a systematic approach in which they utilize different customer feedback techniques to accomplish a dynamic development process that allows continuous re-prioritization of features.
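To make the link between a hypothesis and a quantitative metric concrete, the sketch below evaluates a simple A/B feature experiment with a two-proportion z-test. It is an illustrative example only; the conversion numbers, variant names and significance threshold are assumptions on our part and do not come from the studies cited above.

```python
import math

def ab_test(conversions_a, users_a, conversions_b, users_b, alpha=0.05):
    """Two-proportion z-test: does feature variant B convert better than variant A?"""
    p_a = conversions_a / users_a
    p_b = conversions_b / users_b
    # Pooled conversion rate under the null hypothesis (no difference between variants).
    p_pool = (conversions_a + conversions_b) / (users_a + users_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    z = (p_b - p_a) / se
    # One-sided p-value computed from the standard normal CDF.
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return {"rate_a": round(p_a, 4), "rate_b": round(p_b, 4),
            "z": round(z, 2), "p_value": round(p_value, 4),
            "b_wins": p_value < alpha}

# Hypothetical usage numbers for two implementations of the same feature.
print(ab_test(conversions_a=120, users_a=2400, conversions_b=168, users_b=2350))
```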

C. The QCD model
In our previous research [16], we developed the 'Qualitative/quantitative Customer-driven Development' (QCD) model (Figure 1).

[Figure 1 depicts the QCD model: new hypotheses, derived from business strategies, innovation initiatives, qualitative customer feedback, quantitative customer feedback and results from QCD cycles, enter a hypotheses backlog of concepts and ideas in the product R&D organisation. A hypothesis and a customer feedback technique (CFT) are selected; qualitative CFTs include surveys, interviews, participant observations, prototypes and mock-ups, while quantitative CFTs include feature usage, product data, support data and call center data. The CFT is applied to selected customers or to deployed products in the field, whose data is stored in a product data database, and the resulting CFT data drives the QCD validation cycle, in which decisions are taken on whether to do more customer feedback collection, feed new hypotheses back into the backlog, or abandon the hypothesis.]

Figure 1. The QCD model.

The model advocates a structured approach to customer data collection, and for continuous re-prioritization of feature content. Our model emphasizes an approach in which requirements, instead of being specified and frozen early in the development cycle, are viewed as hypotheses that are continuously validated by using qualitative and quantitative customer feedback techniques (CFTs). In this way, different customer feedback techniques are selected to continuously validate the value of a new feature, and to help re-prioritize the feature backlog also after development has started.

As pictured in Figure 1, the QCD development approach is driven by hypotheses. The hypotheses are derived from business strategies, innovation initiatives, qualitative and quantitative customer feedback, and results from on-going customer validation cycles. Once a hypothesis has been selected for validation, the company picks a customer feedback technique (CFT) for this purpose. The CFT can be of a qualitative nature, e.g. customer interviews, customer surveys, observations, prototypes and/or mock-ups. Alternatively, the CFT can be of a quantitative nature, e.g. support data, call center data, feature experiments such as A/B testing, or product data revealing feature usage. Once the CFT has been chosen, the validation cycle starts. The validation cycle typically involves a limited number of selected customers, or it can be deployed directly in existing products in the field. If the CFT that was chosen is qualitative, the validation cycle consists of direct interactions with customers, resulting in smaller qualitative data sets. If the CFT is quantitative, the validation cycle consists of having the feature deployed in products to collect data revealing feature performance and operation. As the result of the validation cycle, the CFT data is used to decide whether to re-prioritize the hypothesis and put it back into the backlog, run another validation cycle using a different CFT, or abandon the hypothesis due to lack of customer value.
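Read as pseudocode, the QCD loop in Figure 1 can be sketched as follows. The data structures, score scale and decision thresholds are illustrative assumptions on our part; the model itself does not prescribe a concrete implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Hypothesis:
    description: str                              # e.g. "Customers use the export feature weekly"
    priority: float                               # used to order the hypotheses backlog
    evidence: List[Tuple[str, float]] = field(default_factory=list)

def qcd_cycle(backlog: List[Hypothesis],
              select_cft: Callable[[Hypothesis], Tuple[str, Callable[[Hypothesis], float]]],
              support: float = 0.7, reject: float = 0.3):
    """One pass of the QCD loop: pick the top hypothesis, validate it with a
    customer feedback technique (CFT), then continue, re-prioritize or abandon."""
    backlog.sort(key=lambda h: h.priority, reverse=True)
    hypothesis = backlog.pop(0)
    cft_name, run_cft = select_cft(hypothesis)    # qualitative or quantitative CFT
    score = run_cft(hypothesis)                   # evidence from customers or deployed products
    hypothesis.evidence.append((cft_name, score))
    if score >= support:
        return hypothesis                         # enough support: continue development
    if score <= reject:
        return None                               # abandon due to lack of customer value
    hypothesis.priority = score                   # otherwise re-prioritize the hypothesis ...
    backlog.append(hypothesis)                    # ... and put it back into the backlog
    return None

# Illustrative usage: a quantitative CFT stub that would normally read product data.
backlog = [Hypothesis("Customers use the new export feature weekly", priority=0.9)]
result = qcd_cycle(backlog, lambda h: ("feature usage", lambda _: 0.5))
print(result, backlog)   # hypothesis re-prioritized and returned to the backlog
```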

III. CASE STUDY DESIGN

The research reported in this paper focuses on the customer feedback practices, and the challenges associated with these, in five software development companies. We conducted case study research [17, 18] based on interviews and workshops in the case companies.

A. Case companies
This study builds on a multiple case study conducted in five software-intensive companies. All companies develop software systems that serve businesses such as operators, distributors and retailers before they reach the end customers. The companies are presented in Table 1 below.

Case company / Domain
A: A provider of telecommunication systems and equipment, communications networks and multimedia solutions for mobile and fixed network operators.
B: A software company specializing in navigational information, operations management and optimization solutions.
C: A network video company offering products such as network cameras, video encoders, video management software and camera applications for video surveillance.
D: An equipment manufacturer developing, manufacturing and selling a variety of products within the embedded systems domain.
E: A pump manufacturer producing circulator pumps for heating and air conditioning, as well as centrifugal pumps for water supply.
Table 1. The five case companies involved in the study.

B. Data Collection and Analysis
The research reported in this paper covers empirical work conducted between July 2013 and June 2015, and is based on a mix of group interviews and workshops with company representatives in all five case companies. In our group interviews, we used semi-structured interview templates with pre-defined themes that were applied across the companies. The themes focused on current data collection practices in the pre-development phase, the development phase and the post-deployment phase. To target the objective of our study, we were interested in understanding the current techniques that the companies use to collect customer feedback, and the potential challenges they experience in relation to these. We explored qualitative as well as quantitative feedback techniques, and as part of the interviews we asked the companies (1) what techniques they use in the different development stages, (2) how their current data collection practices inform decision-making processes, (3) what challenges they experience, and (4) to what extent their current data collection practices allow for continuous re-prioritization of the feature backlog also after development has started. In all companies, we met with groups consisting of multiple roles, such as product managers, project managers, product owners, chief architects and software developers. Each group interview lasted for two hours and all discussions were held in English. The interviews were documented by both researchers, and after each interview we merged and compared our notes.
In addition to the group interviews, we organized workshops at each company site, as well as joint workshops to which all companies were invited. At these workshops, we had pre-defined themes that were discussed among all company representatives, and on some occasions we had company representatives give presentations to share their experiences related to customer feedback techniques and data collection practices. The company presentations were shared with the researchers and constitute an additional data source. Finally, and as part of the workshop sessions, we interacted with the companies in relation to customer experimentation techniques, such as feature experiments, and how to initiate, run, and evaluate these. In Table 2, we summarize our research activities.

Research phase: July - Dec. 2013
Research activities: Group interviews; Company workshops
Description and purpose: In the first phase, we had one group interview in each of the five companies. We met with 5-8 people in each company. In addition, we arranged workshops at each company site. In this phase, the focus was to identify the customer feedback techniques that are used and what data is collected.

Research phase: Jan. - July 2014
Research activities: Joint workshops; Feature experiment workshops
Description and purpose: In the second phase, we had three joint workshops to which all companies were invited. In addition, we arranged workshops in each company focusing on how to initiate, run and evaluate feature experiments. Based on this work, we developed the HYPEX model [1].

Research phase: July - Dec. 2014
Research activities: Joint workshops; E-mail questionnaire; Validation workshop
Description and purpose: In the third phase, we had two joint workshops to which all companies were invited. The focus was to evaluate on-going experiments, to expand the number of experiments and to identify additional experiments. Based on this work, we developed the QCD model [16].

Research phase: Jan. - June 2015
Research activities: Joint workshops
Description and purpose: In this phase, we introduced the QCD model in the companies to help them adopt a systematic approach to customer data collection, and we started identifying new ways-of-working that emerge as a result of the model and the development approach it advocates.

Table 2. Research activities in the project phases.

C. Validity of results
Qualitative research rarely has the benefit of previously planned comparisons, sampling strategies, or statistical manipulations that control for possible threats. Instead, researchers must try to rule out validity threats after the research has begun by using evidence collected during the research itself to make alternative hypotheses or interpretations implausible. To strengthen the validity of empirical research, triangulation is an important concept [19, 20]. For the purpose of this study, we used data triangulation, i.e. more than one data source, and observer triangulation, i.e. more than one observer in the study. In addition, theory triangulation and methodological triangulation were applied in that we build on a number of previous studies and frameworks presented in these, and we use a combination of data collection methods, e.g. interviews, workshops and demonstration sessions, in order to avoid having one perspective and/or data source influence us too heavily.

IV. FINDINGS

This section describes the data collection practices, and the challenges associated with these, in the five case companies. We identify data collection practices in the pre-development phase, during development and in the post-deployment phase. We summarize our case study findings in Table 3, as problems identified based on a generalization of the experiences in the five companies.

In total, our collaboration with the companies involved twelve group interviews at the different companies with 5-8 people participating in each group, eleven workshops with 4-8 people from different companies attending each workshop, and seven joint company workshops at which representatives from all five companies participated. Also, the company representatives continuously presented how they experienced the research collaboration and the project results.
Based on our experience of working with a large number of companies in the software-intensive domain, we see that there are different types of features. In our research collaboration with the case companies, we distinguish between three different types of features. The research presented in this paper is concerned with 'flow' features as defined below:
• Wow features: This type of feature is concerned with creating a "wow" effect with the customer, i.e. the person making the buying decision. Typically, wow features play little part in the daily operation of the system. Instead, they are added to the system to drive sales.
• Flow features: Flow features are concerned with the functionality that is used continuously and on a daily basis. If well built, flow features contribute significantly to customer satisfaction and efficiency, and they help accomplish the business goals set for the system. Although 'wow' features drive short-term sales, it is the 'flow' features that decide the fate of the system in the long run.
• Check box features: Check box features are features that need to be present in the system because competitors have them and, therefore, customers expect them. In B2B markets, certain features are required in order to be able to participate in call for proposal (CFP) processes. If such a 'check box' feature is absent, the company will not even be invited to negotiations.

A. Data collection practices: Pre-development
All case companies collect large amounts of customer feedback as part of their product development process. In early development stages, product owners work closely with a selected number of customers to collect feedback. In company A, there are customer-specific teams that serve the needs of a particular customer [21]. Typically, techniques such as alpha- and beta-testing, customer interviews, surveys, participant observations, expert reviews, and prototyping are used to obtain qualitative customer feedback on product concepts and ideas. The intention is to have customers try early versions of a product and provide feedback on interfaces, design choices and product functionality. In company D, extensive evaluation is done pre-development by using test labs in which test-drivers try early prototypes of vehicles. Similarly, the other companies deploy a number of techniques to capture customer perceptions of new product concepts. In all case companies, the pre-development phase generates primarily qualitative customer feedback, i.e. smaller amounts of data that reveal individual customer perceptions of new functionality.
However, while all companies have well-established techniques for collecting qualitative customer feedback in early stages of development, they experience difficulties when interacting with customers to ask what they want. Typically, customers are not aware of the many technological opportunities that exist. Moreover, providing input on existing features and ways-of-working might imply identifying one's own weaknesses or mistakes. As a result, qualitative customer feedback techniques typically capture "ideal" customer situations and behaviors rather than the "actual" state and "real" use of a system. Moreover, the companies experience a lack of validation of qualitative feedback, i.e. once the decision to develop a new feature has been taken, they stop continuously validating whether this feature adds value to customers.


B. Data collection practices: During development
To capture real use of their systems, four of the five case companies run feature experiments as part of their development cycles. A feature experiment, as described in our previous research, is a process in which companies define the expected behavior of a new feature, develop a small slice of this feature, instrument the code so that feature usage can be measured, collect data on actual usage, and perform a gap analysis if there is a gap between expected and actual behavior [1]. Based on the outcome of this analysis, hypotheses can be developed on why the gap exists, i.e. whether an alternative implementation of the feature is needed, whether the feature slice was too small and needs to be extended to provide accurate metrics, or whether the feature should be abandoned altogether.
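A minimal sketch of such a gap analysis is shown below; the metric names, expected values and tolerance are hypothetical and only illustrate the comparison between expected and actual feature behavior.

```python
def gap_analysis(expected, actual, tolerance=0.10):
    """Compare expected and actual feature behavior, metric by metric.
    Returns the relative gap for every metric that misses its expectation."""
    gaps = {}
    for metric, target in expected.items():
        observed = actual.get(metric, 0.0)
        gap = (target - observed) / target if target else 0.0
        if gap > tolerance:
            gaps[metric] = round(gap, 2)
    return gaps

# Hypothetical expected behavior defined before building the feature slice,
# and actual usage collected from the instrumented slice.
expected = {"daily_active_users": 500, "avg_sessions_per_user": 3.0}
actual = {"daily_active_users": 410, "avg_sessions_per_user": 2.9}

gaps = gap_analysis(expected, actual)
if gaps:
    # A gap triggers new hypotheses: alternative implementation, a larger slice,
    # or abandoning the feature altogether.
    print("Gap between expected and actual behavior:", gaps)
else:
    print("Feature slice meets its expected behavior")
```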


In four of the five companies, feature experimentation is a newly adopted practice. However, already now there are lessons learned in these companies with regard to feature prioritization and content. By defining the expected behaviors of a feature and then running continuous experiments in which these are validated, the companies experience an improved understanding of why certain assumptions exist. The interviewees report on a situation in which they better understand what features are used or not, and what functionality adds value to customers. In particular, product managers experience that feedback loops are shortened and that they get more accurate data to use when prioritizing feature content.
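As an illustration of what instrumenting a feature slice to produce such data could look like, the sketch below counts feature invocations with a simple decorator; the feature name and the in-memory counter are assumptions on our part, and a real product would forward such counts to a product data database.

```python
import functools
from collections import Counter

feature_usage = Counter()   # in practice, flushed periodically to a product data database

def instrumented(feature_name):
    """Wrap a feature entry point so that every invocation is counted."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            feature_usage[feature_name] += 1
            return func(*args, **kwargs)
        return wrapper
    return decorator

@instrumented("camera_config_wizard")
def configure_camera(resolution="1080p"):
    return f"configured at {resolution}"

configure_camera()
configure_camera(resolution="4k")
print(feature_usage)   # Counter({'camera_config_wizard': 2})
```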

C. Data collection practices: Post-deployment
After commercial release to customers, the case companies collect large amounts of data revealing product operation and performance. This data is collected post-deployment and allows for quantitative analysis of customer behaviors in terms of e.g. features used or not used, information on system restarts, outage, faults, re-booting, upgrade success etc. Dimensioning data, such as CPU load and licenses sold, serves as important input for system configuration and capacity, as well as for producing sales statistics and market assessments in all case companies. For example, in the automotive domain, performance data such as speed, fuel efficiency, energy consumption, acceleration, and road conditions is continuously collected from the vehicle. However, while the mechanisms to collect data are there, the companies experience significant problems in analyzing the data and having it inform business decisions and development investments. Also, while all companies have access to large amounts of data, they experience vast amounts of "useless" data due to uncertainty about what questions to ask when collecting it. Although the companies have access to large data sets, these are only used for troubleshooting and support, and for answering customer queries when problems occur. What is not common is to have this data inform the development organization.
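As an illustration of how such post-deployment data could be condensed into something that informs the development organization, the sketch below aggregates hypothetical log events into per-feature usage, fault and restart counts; the event format is an assumption on our part, not the logging scheme of any case company.

```python
from collections import defaultdict

def summarize_operational_data(events):
    """Aggregate raw post-deployment events into per-feature usage, fault and restart counts.
    Each event is a dict like {"feature": "route_planner", "type": "usage" | "fault" | "restart"}."""
    summary = defaultdict(lambda: {"usage": 0, "fault": 0, "restart": 0})
    for event in events:
        feature = event.get("feature", "system")
        kind = event.get("type", "usage")
        if kind in summary[feature]:
            summary[feature][kind] += 1
    return dict(summary)

# Hypothetical events as they might arrive from deployed products in the field.
events = [
    {"feature": "route_planner", "type": "usage"},
    {"feature": "route_planner", "type": "usage"},
    {"feature": "report_export", "type": "fault"},
    {"feature": "system", "type": "restart"},
]
print(summarize_operational_data(events))
```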

Problem identified: The 'open loop' problem
Description: The situation in which product management experiences difficulties in getting accurate customer data. This leads to a situation in which decisions are taken based on opinions rather than customer data, resulting in R&D investments that are not aligned with customer needs.

Problem identified: Large amount of unused features
Description: Due to limited mechanisms to validate feature usage, our case companies are convinced that a large number of the features they develop are never used, and that investments are put on functionality that is not proven valuable to customers.

Problem identified: Wrong implementation of features
Description: There are different ways in which features can be implemented. However, there is no efficient way in which the companies can continuously validate and re-prioritize these alternatives with customers to decide which alternative is the best one.

Problem identified: Requirements are seen as "truths"
Description: A common view in all case companies is that requirements are regarded as "truths". All companies experience difficulties in validating and re-prioritizing requirements also after development has started.

Problem identified: Lack of feature optimization
Description: In the companies we study, the majority of the development effort is allocated to new feature development. As a result, time is spent on adding functionality instead of re-implementing features that don't work well.

Problem identified: Lack of validation of feedback
Description: Qualitative customer feedback is not validated in later stages, causing a situation in which vast amounts of development take place although it has never been proven valuable to customers.

Table 3. Summary of the problems identified in the case companies.

D. Continuous validation of feature value: The QCD model
As experienced in all case companies, the collection of customer feedback is challenging. Typically, customer feedback is limited to certain phases of development, and in most companies it is difficult to collect feedback also after development of a new feature has started. To help the case companies address the problems identified in the section above, we introduced the QCD model. Our model advocates a development approach in which requirements are treated as hypotheses that need to be continuously validated with customers. Below, and based on our empirical study in the case companies, we show how four of the five companies work with customer data collection in the pre-development phase, during development and post-deployment. While the companies had established practices in parts of these phases already before the QCD model was introduced, they have significantly changed their ways-of-working as a result of the QCD model and the development approach it advocates. In particular, the companies have added new customer feedback techniques in the development phase to help them improve and confirm qualitative data collected pre-development. In Table 4, we outline the customer feedback techniques that the companies use; the techniques marked with an asterisk (*) are those that were introduced as a result of the QCD model and the continuous validation approach it emphasizes. We have not yet been able to introduce the model in the fifth company. However, already now the other four companies show interesting results in terms of new techniques they have adopted and new ways-of-working as a result of these. In Table 4, the companies are referred to as A, B, C and D, corresponding to the case description earlier in this paper, and the customer feedback techniques they use are presented according to the different development phases.

Case company A
Pre-development: Customer unit workshops; Customer-specific teams; Product seminars; Surveys; Product owner
During development: Feature value experiment*
Post-deployment: Operational data; Sales statistics; Support data; Feature value metrics*

Case company B
Pre-development: Customer conferences; Customer interviews; Surveys; Product owner
During development: Webinars*; Optimization feature experiment*
Post-deployment: Operational data; Sales statistics; Support data; Usage pattern metrics*

Case company C
Pre-development: Surveys; Product seminars
During development: Configuration data experiment*
Post-deployment: Operational data; Sales statistics; Support data; Configuration data*

Case company D
Pre-development: Proof of concept; Prototyping; User test labs; Test driving
During development: Feature logging*
Post-deployment: Diagnostic data collection; Sales statistics; Support data; Feature logging metrics*

Table 4. Customer feedback techniques used in the case companies. Techniques marked with an asterisk (*) were introduced as a result of the QCD model.

V. DISCUSSION

As can be seen in Table 4, all companies have adopted new ways-of-working in the development phase and in the post-deployment phase. In the companies, different types of feature experiments have been initiated in the development phase, in which smaller increments of features are continuously validated with customers and where these feature increments are instrumented so that relevant metrics can be collected. In this way, the companies can track feature usage and better understand how customers use (or do not use) new features. To make this happen, all companies have introduced new metrics into their products, allowing for additional post-deployment data collection and analysis.
Company A runs feature experiments in which selected features are instrumented with metrics in order to monitor feature usage. The experiments are conducted during development, and from having developed bigger chunks of functionality the company has started to develop increments of features that can be validated more frequently and at a lower cost. From being well versed in pre-development and post-deployment data collection, company A is developing new techniques that will help it validate feature content also during development of a feature, and that allow for post-deployment data collection and analysis.
In company B, and as a result of a feature experiment that was initiated as part of this research, training for customers, i.e. 'webinars', has been introduced. This was due to the experiment revealing limited use of a certain feature, and it was concluded that additional customer training was needed to encourage new usage patterns. Also, company B used the quantitative data collected in the feature experiment to design customer interviews and customer surveys as a qualitative follow-up study to help them better understand the quantitative product data, i.e. data revealing usage patterns, that was collected during the experiment. In this way, the company is combining qualitative and quantitative customer feedback techniques to continuously validate whether the features they develop are also what their customers want.
Company C runs experiments in which the system has been instrumented so that configuration data is collected during the installation of the product and then sent to the company. On the basis of this data, the company can continuously learn about different customer configurations, customer segments, hardware platforms etc. While product data has been collected post-deployment for a number of years, company C has never collected configuration data as part of on-going development cycles. In this way, the company is introducing an additional quantitative customer feedback technique to complement already established techniques.
Finally, in company D, post-deployment data has been collected for a long time. However, while this data reveals system operation and performance, it does not allow for monitoring of individual features. As a result, the company is not able to track whether a specific feature is used or not, or if there are specific problems related to individual features. As part of our study, company D has developed new functionality that logs individual features. Currently, the functionality is prototyped in test vehicles, with the intention to be deployed also in vehicles for commercial use. In this way, the company has identified metrics that will allow for continuous validation of feature usage and content also after development has started. Moreover, this has resulted in new prototypes being developed and new ways-of-working where quantitative data is collected during and post development to inform qualitative tests with customers conducted in the early stages of development. Below, we summarize our findings and present how the QCD model addresses the problems as identified in the case companies (Table 5).

Problem identified: The 'open loop' problem
QCD model: Requirements are treated as hypotheses that are continuously validated with customers. In this way, the model helps companies close the 'open loop' and have customer feedback inform the development process.

Problem identified: Large amount of unused features
QCD model: Features are validated with customers before being fully developed. The model helps companies reduce effort put on unused features. Also, hypotheses can target existing features to help reveal use/non-use.

Problem identified: Wrong implementation of features
QCD model: The model suggests iterative cycles in which implementation alternatives are continuously evaluated to confirm which implementation alternative is the most appreciated one.

Problem identified: Requirements are seen as "truths"
QCD model: Requirements are treated as hypotheses that are continuously validated. Only after iterative validation cycles are decisions made whether to continue development, put the hypothesis back into the backlog, or abandon it.

Problem identified: Lack of feature optimization
QCD model: Through continuous data collection revealing feature usage, the model helps companies identify what features and what behaviors can be optimized.

Problem identified: Misrepresentation of customers
QCD model: A wide range of CFTs are used, allowing companies to learn from a larger set of customer data.

Problem identified: Lack of validation of feedback
QCD model: Qualitative and quantitative CFTs are combined, with qualitative feedback used as input for quantitative validation cycles and vice versa.

Problem identified: Large amounts of (useless) data
QCD model: Frequent validation cycles and different CFTs are used to help companies refine their hypotheses and ask the right questions.

Table 5. Problems as identified in the case companies, and how the QCD model addresses these.

With regard to the problems that were identified in the case companies and described in section IV, Table 6 presents which problems have been addressed in each company by adopting the QCD model and the development approach it advocates. In the left column, we list the problems as identified in section IV, and for each company we show whether the problem has been addressed or not by using the QCD model. In the table, 'Yes' indicates that the problem has been addressed, although not that it is solved and no longer an issue. 'No' indicates that the problem has not yet been addressed by the new ways-of-working.

Problem identified / A / B / C / D
The 'open loop' problem: Yes / Yes / Yes / Yes
Large amount of unused features: Yes / Yes / No / No
Wrong implementation of features: Yes / Yes / No / No
Requirements are seen as "truths": Yes / Yes / Yes / Yes
Lack of feature optimization: Yes / Yes / No / No
Lack of validation of feedback: Yes / Yes / Yes / Yes

Table 6. Problems addressed by the QCD development approach.

VI. CONCLUSION

This paper presents a multiple case study in which we explore the data collection practices, and the challenges associated with these, in five software development companies. Based on our previous research, we introduce the ‘Qualitative/quantitative Customer-driven Development’ (QCD) model in the case companies. The model allows for a systematic approach to customer data collection in which qualitative and quantitative customer feedback techniques are used to run frequent validation cycles with customers. By recognizing the synergies between qualitative and quantitative customer feedback techniques, and by advocating continuous validation of feature content, the model helps companies move from early specification of requirements towards continuous re-prioritization of hypotheses.

ACKNOWLEDGMENT
We would like to thank all the participants in the companies for their valuable input in interviews and workshops.

REFERENCES
[1] Olsson, H.H., and Bosch, J. From Opinions to Data-Driven Software R&D: A Multi-Case Study on How to Close the 'Open Loop' Problem. In Proceedings of the Euromicro Conference on Software Engineering and Advanced Applications (SEAA), August 27-29, Verona, Italy, (2014).
[2] Hofmann, H.F., and Lehner, F. Requirements engineering as a success factor in software projects. IEEE Software, 18, pp. 58-66, (2001).
[3] Kabbedijk, J., Brinkkemper, S., Jansen, S., and van der Veldt, B. Customer Involvement in Requirements Management: Lessons from Mass Market Software Development. In Proceedings of the Requirements Engineering Conference, (2009).
[4] Yiyi, Y., and Rongqiu, C. Customer Participation: Co-Creating Knowledge with Customers. In Proceedings of Wireless Communications, Networking and Mobile Computing, (2008).
[5] Olsson, H.H., and Bosch, J. Towards Data-Driven Product Development: A Multiple Case Study on Post-Deployment Data Usage in Software-Intensive Embedded Systems. In Proceedings of the Lean Enterprise Software and Systems Conference (LESS), December 1-4, Galway, Ireland, (2013).
[6] Olsson, H.H., and Bosch, J. Post-Deployment Data Collection in Software-Intensive Embedded Products. In Proceedings of the 4th International Conference on Software Business, June 11-14, Potsdam, Germany, (2013).
[7] Fagerholm, F., Sanchez, G., Mäenpää, H., and Münch, J. Building blocks for continuous experimentation. In Proceedings of the RCoSE '14 Workshop, June 3, Hyderabad, India, (2014).
[8] Bosch, J. Building Products as Innovation Experiment Systems. In Proceedings of the 3rd International Conference on Software Business, June 18-20, Cambridge, Massachusetts, (2012).
[9] Kohavi, R., Crook, T., and Longbotham, R. Online Experimentation at Microsoft. In Proceedings of the Third Workshop on Data Mining Case Studies and Practice Prize, (2009).
[10] Chen, H., Chiang, R., and Storey, V.C. Business intelligence and analytics: From big data to big impact. MIS Quarterly, 36(4), pp. 1165-1188, (2012).
[11] Backlund, E., Bolle, M., Tichy, M., Olsson, H.H., and Bosch, J. Automated User Interaction Analysis for Workflow-Based Web Portals. In Proceedings of the 5th International Conference on Software Business, June 16-18, Paphos, Cyprus, (2014).
[12] Bosch, J., and Eklund, U. Eternal Embedded Software: Towards Innovation Experiment Systems. In Proceedings of the International Symposium on Leveraging Applications, October 15-18, Crete, (2012).
[13] Ries, E. The Lean Startup: How Constant Innovation Creates Radically Successful Businesses. London: Penguin Group, (2011).
[14] Blank, S. The Four Steps to the Epiphany: Successful Strategies for Products that Win, 3rd edition. Cafepress.com, (2005).
[15] Olsson, H.H., Alahyari, H., and Bosch, J. Climbing the "Stairway to Heaven": A multiple-case study exploring barriers in the transition from agile development towards continuous deployment of software. In Proceedings of the 38th Euromicro Conference on Software Engineering and Advanced Applications, September 5-7, Cesme, Izmir, Turkey, (2012).
[16] Olsson, H.H., and Bosch, J. Towards Continuous Customer Validation: A Conceptual Model for Combining Qualitative Customer Feedback with Quantitative Customer Observation. In Proceedings of the 6th International Conference on Software Business, June 10-12, Braga, Portugal, pp. 154-166. Springer, (2015).
[17] Yin, R.K. Case Study Research: Design and Methods, 3rd edition. London: Sage, (2003).
[18] Runeson, P., and Höst, M. Guidelines for conducting and reporting case study research in software engineering. Empirical Software Engineering, 14, pp. 131-164, (2009).
[19] Maxwell, J.A. Qualitative Research Design: An Interactive Approach, 2nd edition. Thousand Oaks, CA: SAGE Publications, (2005).
[20] Stake, R.E. The Art of Case Study Research. SAGE Publications, (1995).
[21] Olsson, H.H., Sandberg, A., Bosch, J., and Alahyari, H. Scale and responsiveness in large-scale software development. IEEE Software, 31(5), pp. 87-93, (2014).
