Measuring Domain Engineering Effects on Software Change Cost

Harvey Siy
Software Engineering Technology Transfer
Lucent Technologies Bell Laboratories
[email protected]

Audris Mockus
Software Production Research Department
Lucent Technologies Bell Laboratories
[email protected]

Abstract

Domain Engineering (DE) is an increasingly popular process for efficiently producing software. DE uses detailed knowledge of a particular application domain to rigorously define a family of software products within that domain. We describe a methodology for precise quantitative measurement of the impact of DE on software change effort. The methodology employs measures of small software changes to determine the effect of DE. We illustrate this approach in a detailed case study of DE in a telecommunications product, where the change effort was dramatically reduced. The methodology can precisely measure cost savings in change effort, and it is simple and inexpensive since it relies on information automatically collected by version control systems.


1. Introduction

Software engineering productivity is notorious for being difficult to improve [4]. Domain Engineering (DE) is a promising new approach to improving productivity by simplifying coding tasks that are performed over and over again [16, 5]. DE practitioners believe that it can improve productivity by a factor of between two and ten, although there has been no quantitative empirical support for such claims. Quantifying the impact of a technology on software development is particularly important in making a case for transferring new technology to the mainstream development process. Rogers [14] cites observability of impact as a key factor in successful technology transfer. Observability usually implies that the impact of the new technology can be measured in some way. However, most of the time the usefulness of a new technology is demonstrated through best subjective judgment. This may not be persuasive enough to convince other managers and developers to try the new technology.

In this paper we describe a simple-to-use methodology to measure the effects of DE on software change effort. The methodology is based on modeling measures of small software changes, or Modification Requests (MRs). By modeling such small changes we can account for primary factors affecting effort, such as individual developer productivity and the purpose of a change. The measures of software changes come from existing change management systems and do not require additional effort to collect. We apply this methodology to an actual DE project at Lucent Technologies and show that using DE increased coding productivity around four times. Sections 2 and 3 describe DE in general and the specifics of our particular case study. Sections 4 and 5 describe a general methodology to estimate the effects of DE and the analysis we performed on one DE project. Finally, we conclude with a related work section and a summary.

2. Domain Engineering

Traditional software engineering deals with the design and development of individual software products. In practice, an organization often develops a set of similar products, called a product line. Traditional methods of design and development do not provide formalisms for taking advantage of these similarities. As a result, developers practice informal means of reusing designs, code, and other artifacts, massaging the reused artifact to fit new requirements. This can lead to software that is fragile and hard to maintain, because the reused components were not meant for reuse. There are many approaches to implementing systematic reuse, among them Domain Engineering. Domain Engineering approaches the problem by defining and facilitating the development of software product lines (or software families) rather than individual software products. This is accomplished by considering all of the products together as one set, analyzing their characteristics, and building an environment to support their production.

Figure 1. The FAST process. FAST is an iterative process of conducting Domain Engineering and Application Engineering: Domain Engineering (a. domain analysis, b. domain modeling and design, c. domain implementation and integration) creates an Application Engineering Environment, which Application Engineering uses to create applications, with feedback flowing back to Domain Engineering.

In doing so, development of individual products (henceforth called Application Engineering) can be done rapidly at the cost of some up-front investment in analyzing the domain and creating the environment.

At Lucent Technologies, Domain Engineering researchers have created a process around Domain Engineering called FAST (Family-oriented Abstraction, Specification and Translation) [6]. FAST is an iterative process of conducting Domain Engineering and Application Engineering, as shown in Figure 1. In FAST, Domain Engineering consists of the following steps:

1. Domain Analysis. This step is also known as Commonality Analysis. The goal of this step is to identify the commonalities among members of the product line as well as the possible ways in which they may vary. Usually, several domain experts assist in this activity.

2. Domain Modeling and Design. The application environment is designed and built. This usually involves the creation of a high-level domain-specific language as well as a graphical user interface.

3. Domain Implementation and Integration. A modified development process is put in place, and necessary adjustments to the product construction tools (makefiles, change management systems, etc.) and to the overall development process are made.

Application Engineering is the process of producing members of the product line using the application environment created during Domain Engineering. Feedback is then sent to the Domain Engineering team, which makes necessary adjustments to the domain analysis and the environment.

3. The AIM Project

We now describe the domain under study. Lucent Technologies' 5ESS™ switch is used to connect local and long-distance calls involving voice, data, and video communications. To maintain subscriber and office information at a particular site, a database is maintained within the switch itself. Users of this database are telecommunications service providers, such as AT&T, which need to keep track of information such as the specific features phone subscribers have signed up for. Access to the database is provided through a set of screen forms. This study focuses on a domain engineering effort conducted to reengineer the process for developing these screen forms. Whenever a new service provider purchases a 5ESS switch, a set of customized screen forms is created, because each provider typically purchases a different set of 5ESS features. Whenever a service provider purchases new features, its screen forms have to be updated. Occasionally, a service provider may request that a new database view be added, resulting in a new screen form. Each of these tasks requires significant development effort. In the old process, screen forms were customized at compile time. This often meant inserting #ifdef-like compiler directives into existing screen specification files. Forms have had as many as 30 variants. The resulting specification file is hard to maintain and modify because of the high density of compiler directives. In addition, several auxiliary files specifying entities such as screen items need to be updated. The Asset Implementation Manager (AIM) project is an effort to automate much of this tedious and error-prone process. The FAST process was used to factor out the customer-specific code and create a new environment that uses a data-driven approach to customization. In the new process, screen customization is done at run time, using a feature tag table showing which features are turned on for a particular service provider. A GUI system was implemented in place of hand-programming the screen specification files. In place of screen specification files, a small specification file using a completely different language is stored for each individual screen item, such as a data entry field. This new system also automatically updates any relevant auxiliary files.
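To make the contrast between the old compile-time customization and the new data-driven approach concrete, the following is a minimal, hypothetical sketch (in Python, rather than the languages used in the 5ESS source): screen items are described as data, and a feature tag table determines at run time which items a particular service provider sees. All item names and feature tags are invented for illustration; the actual AIM environment uses its own specification files and GUI.

```python
# Hypothetical illustration of data-driven screen customization.
# Item names and feature tags are invented; the real AIM system uses its own
# per-item specification files and a GUI, not Python data structures.
SCREEN_ITEMS = [
    {"name": "subscriber_id",   "required_feature": None},   # always shown
    {"name": "call_forwarding", "required_feature": "CF"},
    {"name": "caller_id",       "required_feature": "CID"},
    {"name": "voice_mail",      "required_feature": "VM"},
]

def build_screen(feature_tags):
    """Return the screen items visible for a provider, given its feature tag table."""
    return [item["name"]
            for item in SCREEN_ITEMS
            if item["required_feature"] is None
            or feature_tags.get(item["required_feature"], False)]

# A provider that purchased call forwarding and caller ID, but not voice mail.
provider_tags = {"CF": True, "CID": True, "VM": False}
print(build_screen(provider_tags))  # ['subscriber_id', 'call_forwarding', 'caller_id']
```

The point of the data-driven design is that adding or removing a variant becomes a data change rather than another edit to #ifdef-laden source files.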

4. Methodology

We undertook to evaluate the impact of the AIM project on software change effort. Collecting effort data has traditionally been very problematic. Often, effort data can be found in financial databases keeping track of project budget allocations. These data do not always accurately reflect the effort spent, for several reasons. Budget allocations are

based on estimates of the amount of work to be done. Some projects exceed their budget while others go under budget. Rather than going back and adjusting budget allocations, the tendency of management is to charge work on projects that have exceeded their budgets to those that have not. We chose not to use financial data, but rather to infer the key effort drivers based on the number and type of source code changes each developer makes. This has the advantage of being finer-grained than project-level effort data. Analyzing change-level effort can reveal trends that would be washed out at the project level due to aggregation. In addition, our approach requires minimal additional data to be collected from developers. We use existing information from the change management system, such as the change size, the developer who made the change, the time of the change, and the purpose of the change, to infer the effort needed to make a change.

4.1. Change Data

The 5ESS source code is divided into subsystems, with each subsystem further subdivided into a set of modules. Each module contains a number of source code files. The change history of the files is maintained using the Extended Change Management System (ECMS) [9], for initiating and tracking changes, and the Source Code Control System (SCCS) [13], for managing different versions of the files. Each logically distinct change request is recorded as a Modification Request (MR) by the ECMS. Each MR is owned by a developer, who makes changes to the necessary files to implement the MR. The lines in each file that were added, deleted, and changed are recorded as one or more "deltas" in SCCS. While it is possible to implement all MR changes restricted to one file with a single delta, in practice developers often perform multiple deltas on a single file, especially for larger changes. For each delta, the time of the change, the login of the developer who made the change, the number of lines added and deleted, the associated MR, and several other pieces of information are recorded in the ECMS database. This delta information is then aggregated for each MR. A more detailed description of how to construct change measures is provided in [10]. We inferred the MR's purpose from the textual description that the developer wrote while working on the MR [12]. In addition to the three primary reasons for changes (repairing faults, adding new functionality, and improving the structure of the code; see, for example, [15]), we used a class for changes that implement code inspection suggestions, since this class was easy to separate from the others and had distinct size and interval properties. We also obtained a complete list of identifiers of MRs that were done using AIM technology. We took advantage of the way AIM was implemented. In the 5ESS source, a special directory path was created to store all the new screen

specification files created by AIM. We refer to that path as the AIM path. The source code for the previously used screen specification files also had a specific set of directory paths; we refer to those as the pre-AIM paths. Based on these sets of paths we classified all MRs into three classes (a small sketch of this classification follows the list of MR measures below):

1. AIM MRs, which touch at least one file in the AIM path;

2. pre-AIM MRs, which do not touch files in the AIM path but touch at least one file in a pre-AIM path;

3. other MRs, which touch no files in the AIM or pre-AIM paths.

Thus, for each MR, we were able to obtain the following information:

- who made the change (developer login)
- size of the change (number of lines added and deleted)
- number of deltas
- duration (dates of the first and last deltas)
- purpose of the change
- number of files touched
- whether it was an AIM, pre-AIM, or other MR
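As a rough illustration of the path-based classification, the sketch below assumes each MR record carries the list of files it touched and uses placeholder path prefixes rather than the actual 5ESS directory names:

```python
# Classify MRs as AIM, pre-AIM, or other based on the files they touch.
# The path prefixes below are placeholders, not the real AIM and pre-AIM
# directory paths in the 5ESS source tree.
AIM_PREFIX = "subsys/aim/"
PRE_AIM_PREFIXES = ("subsys/screens/", "subsys/screen_aux/")

def classify_mr(files):
    if any(f.startswith(AIM_PREFIX) for f in files):
        return "AIM"
    if any(f.startswith(PRE_AIM_PREFIXES) for f in files):
        return "pre-AIM"
    return "other"

print(classify_mr(["subsys/aim/form123.spec"]))   # AIM
print(classify_mr(["subsys/screens/form123.c"]))  # pre-AIM
print(classify_mr(["subsys/billing/rate.c"]))     # other
```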

In the next sections we describe modeling techniques used to estimate change effort drivers.

4.2. Modeling Change Effort

We explain here the algorithm for estimating change (MR) effort, described in more detail in [8]. Let us consider one developer. From the change data, we obtained the start and end month of each of the developer's MRs. This lets us partially fill in a table such as Table 1 (a typical MR takes a few days to complete, but to simplify the illustration we show only three MRs over five months). We assume monthly developer effort to be one technical headcount-month, so we have 1's as the column sums in the bottom row. The only exceptions are months with no MR activity, which have zero effort. We then proceed to iteratively fit values into the blank cells, initially dividing the column sums evenly over all blank cells. The row sums are then calculated from these initial cell values. A regression model of effort is fitted on the row sums. The cell values in each row are rescaled to sum to the fitted values of the regression model. Then the cell values in each column are rescaled to sum to the column sums. A new set of row sums is calculated from these adjusted cell values, and the process of model fitting and cell value adjustment is repeated until convergence. The code to perform the analysis is published in [11].

         Jan   Feb   Mar   Apr   May   ...   Total
MR1       0     ?     ?     0     0    ...     ?
MR2       ?     ?     0     0     0    ...     ?
MR3       0     0     ?     ?     ?    ...     ?
...
Total     1     1     1     1     1    ...    12

Table 1. An example table of the effort-per-MR-per-month breakdown for one developer. The ?'s represent blank cells, whose values are initially unknown.
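The following is a stripped-down sketch of the iterative scaling idea behind this algorithm, using the activity pattern of Table 1 for one developer. The regression step of the published method [8, 11] is replaced here by a simple size-proportional prediction with invented MR sizes, so this illustrates the mechanics rather than reproducing the published code:

```python
import numpy as np

# One developer, three MRs, five months (the activity pattern of Table 1;
# every month has some activity, so each column target is 1 headcount-month).
# active[i, j] is True if MR i had delta activity in month j.
active = np.array([
    [False, True,  True,  False, False],   # MR1: active Feb-Mar
    [True,  True,  False, False, False],   # MR2: active Jan-Feb
    [False, False, True,  True,  True ],   # MR3: active Mar-May
])
col_target = active.any(axis=0).astype(float)   # monthly effort to distribute
mr_size = np.array([10.0, 40.0, 25.0])          # invented MR sizes, stand-in predictor

# Start by spreading each month's effort evenly over that month's active MRs.
cells = np.where(active, col_target / np.maximum(active.sum(axis=0), 1), 0.0)

for _ in range(200):
    row_sums = cells.sum(axis=1)
    # Stand-in for the regression step: predict each MR's effort proportional
    # to its size, preserving total effort.  The real method fits a regression
    # of effort on MR measures at every iteration [8, 11].
    fitted = mr_size / mr_size.sum() * row_sums.sum()
    cells = cells * (fitted / row_sums)[:, None]               # rescale rows
    cells = cells * (col_target / cells.sum(axis=0))[None, :]  # rescale columns

print(np.round(cells, 2))              # estimated effort per MR per month
print(np.round(cells.sum(axis=1), 2))  # estimated effort per MR
```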

5. Analysis Framework

We outline here a general framework for analyzing domain engineering projects and describe how we applied it to the AIM project. The analysis framework consists of five main steps.

1. Obtain and inspect measures of changes. The changes made on domain-engineered code must be identifiable.

In real-life situations developers work on several projects over the course of a year, and it is important to identify which of the changes they make are affected by DE. There are several ways to identify these changes. In our example the domain-engineered features were implemented in a new set of code modules. In other examples we are aware of (but do not analyze in this paper), a new domain-specific language is introduced; in such a case the DE changes may be identified by looking at the language used in the changed file. In yet another DE example, a new library was created to facilitate code reuse; to identify DE changes in that case we need to look at the function calls used in the modified code to determine whether those calls involve the API of the new library. Finally, we need to identify changes that were done to perform pre-DE work. In our example the entire subsystem contained pre-DE code. Since the older type of development co-existed in parallel with the post-DE development, we could not use time as the factor determining whether a change was post-DE. In practice it may often happen that initially only some projects use the new methodology until it has been proven to work. The inspection of change measures is discussed in Section 5.1.

2. Select a subset of variables that might predict the effort to make a change.

3. Select developers who have implemented a substantial number of MRs in both the DE and the non-DE approach. Since the developer factor has the largest effect on change effort, a balanced group of developers reduces the variances of the other estimates. More importantly, it makes the results insensitive to potential correlation between the use of DE and the overall productivity of individual programmers.

4. Fit and validate a set of candidate models.

5. Compare the functionality implemented by an average change before and after DE, and integrate the cost savings over individual changes.

5.1. Obtaining and Inspecting Change Measures

The basic measures of software changes include: the identity, or login, of the person performing the change; the files, modules, and actual lines of code involved in the change; when the change was made; the size of the change, measured in the number of lines added or changed and the number of deltas; the complexity of the change, measured in the number of files touched; and the purpose of the change, including whether the change was done to fix a bug or to add new functionality. These measures may be obtained from many change management systems using the software described in [10].
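As an illustration of how such measures can be derived, the sketch below aggregates delta-level records into MR-level change measures. The column names are assumptions made for the example, not the actual ECMS schema:

```python
import pandas as pd

# Toy delta-level records of the kind stored by a change management system;
# column names and values are invented for illustration.
deltas = pd.DataFrame({
    "mr":        ["MR1", "MR1", "MR2", "MR2", "MR2"],
    "developer": ["alice", "alice", "bob", "bob", "bob"],
    "file":      ["a.c", "b.c", "a.c", "a.c", "c.c"],
    "date":      pd.to_datetime(["1997-01-02", "1997-01-05",
                                 "1997-02-01", "1997-02-03", "1997-02-10"]),
    "added":     [10, 4, 50, 8, 2],
    "deleted":   [2, 0, 5, 1, 0],
})

# Aggregate the deltas into MR-level change measures.
mr_measures = deltas.groupby("mr").agg(
    developer=("developer", "first"),
    n_delta=("file", "size"),
    n_files=("file", "nunique"),
    lines_added=("added", "sum"),
    lines_deleted=("deleted", "sum"),
    start=("date", "min"),
    end=("date", "max"),
)
mr_measures["interval_days"] = (mr_measures["end"] - mr_measures["start"]).dt.days
print(mr_measures)
```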

Before fitting the effort models, we first inspect the change measures to compare the interval, complexity, and size of DE and pre-DE changes. The part of the product that the AIM project was targeting has a change history starting from 1986. We did not consider very old MRs (before 1993) in our analysis, to avoid other possible changes to the process or tools that might have happened more than six years ago. The following table gives average measures for AIM and pre-AIM MRs. Most measures are log-transformed to make the use of the t-test more appropriate. Although the differences appear small, they are all significant at 0.05 (using a two-sample t-test) due to the very large sample sizes (19450 pre-AIM MRs and 1677 AIM MRs).

measure      units    pre-AIM    AIM
interval     days     log(1.7)   log(1.5)
complexity   #files   log(1.5)   log(2)
size         #delta   log(2)     log(2.2)
size         #lines   log(13)    log(10)

As shown in the table, AIM MRs do not take any longer to complete than pre-AIM MRs. The change complexity, as measured by the number of files touched by the MR, appears to have gone up, but instead of modifying specification files for entire screens, developers are now modifying smaller specification files for individual screen attributes. In addition, all changes are done through the GUI, which handles the updating of the individual files. The table also shows that more deltas are needed to implement an MR. The increased number of deltas might be a result of the MRs touching more files. The numbers of lines are an artifact of the language used, and since the language in the new system is different, they are not directly comparable here.
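A small sketch of the comparison just described, using synthetic lognormally distributed intervals in place of the real samples (the means are chosen to match the table above, the spread is arbitrary); the point is that, with samples of this size, even modest differences on the log scale are statistically significant:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic MR intervals (days) standing in for the real pre-AIM and AIM samples.
pre_aim = rng.lognormal(mean=np.log(1.7), sigma=1.0, size=19450)
aim = rng.lognormal(mean=np.log(1.5), sigma=1.0, size=1677)

# Compare on the log scale, as in the paper; with samples this large even a
# small difference in log-means is significant at the 0.05 level.
t, p = stats.ttest_ind(np.log(pre_aim), np.log(aim))
print(f"t = {t:.2f}, p = {p:.4f}")
```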

5.2. Variable Selection

First among the variables that we recommend always including in the model is a developer effect. Other studies have found substantial variation in developer productivity [7]. Even when we have not found significant differences, and even though we do not believe that estimated developer coefficients constitute a reliable method of rating developers, we have left the developer coefficients in the model. The interpretation of estimated developer effects is problematic: not only could differences appear because of differing developer abilities, but a seemingly less productive developer could be the expert on a particularly difficult area of the code, or could have more extensive duties outside of writing code. Naturally, the size and complexity of a change have a strong effect on the effort required to implement it. We have chosen the number of lines added, the number of files touched, and the number of deltas that were part of the MR as measures of the size and complexity of an MR. We found that the purpose of the change (as estimated using the techniques of [12]) also has a strong effect on the effort required to make a change. In most of our studies, changes that fix bugs are more difficult than comparably sized additions of new code. The difficulty of changes classified as "perfective" varies across different parts of the code, while implementing suggestions from code inspections is easy.

5.3. Eliminating Collinearity with Predictors that Might Affect Effort

Since developer identity is the largest source of variability in software development (see, for example, [2, 7]), we first select a subset of developers that had a substantial number of MRs both in AIM and elsewhere, so that the results would not be biased by developer effects. We chose developers that had completed between 150 and 1000 MRs on the considered product in their careers, and only developers that had completed at least 15 AIM MRs. The resulting subset contained ten developers; the breakdown of their MRs is given in Table 2. Finally, we made sure that the rest of the predictors are not collinear amongst themselves and are not correlated with the AIM factor.
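A minimal sketch of this selection step, assuming an MR table with a developer login and the AIM / pre-AIM / other classification (column names and toy data invented for illustration):

```python
import pandas as pd

# One row per MR, with developer login and MR class; toy data for illustration.
mrs = pd.DataFrame({
    "developer": ["dev1", "dev1", "dev2", "dev2", "dev2", "dev3"],
    "mr_class":  ["AIM", "other", "AIM", "AIM", "pre-AIM", "other"],
})

per_dev = mrs.groupby("developer").agg(
    total_mrs=("mr_class", "size"),
    aim_mrs=("mr_class", lambda c: (c == "AIM").sum()),
)

# Thresholds from Section 5.3: 150-1000 career MRs and at least 15 AIM MRs
# (the toy data above is, of course, far too small to pass them).
selected = per_dev[per_dev.total_mrs.between(150, 1000) & (per_dev.aim_mrs >= 15)]
print(selected.index.tolist())
```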

5.4. Models and Interpretation

In the fourth step we are ready to fit the set of candidate models and interpret the results. The full model that included the measures we expect to affect change effort was:

E(effort) = #delta^β1 · #files^β2 · #lines^β3 · BugFix · Perfective · Inspection · AIM · PreAIM · ∏_i Developer_i

In the model formula we use BugFix as shorthand for γ_BugFix^I(bug fix), where I(bug fix) is 1 if the MR is a bug fix and 0 otherwise. The same abbreviation is used for Perfective, Inspection, AIM, and PreAIM. The basis for comparison was set by letting New = other = 1. The estimated coefficients, with p-values and 95% confidence intervals calculated using the jackknife (for details see [11]), were:

coefficient    estimate   p-val   95% CI
β1 (#delta)      0.69     0.000   [0.43, 0.95]
β2 (#files)     -0.04     0.75    [-0.3, 0.22]
β3 (#lines)     -0.10     0.09    [-0.21, 0.01]
BugFix           1.9      0.003   [1.3, 2.8]
Perfective       0.7      0.57    [0.2, 2.4]
Inspection       0.6      0.13    [0.34, 1.2]
AIM              0.25     0.000   [0.16, 0.37]
PreAIM           1.03     0.85    [0.7, 1.5]

The following MR measures were used as predictors: #delta, the number of deltas; #lines, the number of lines added; #files, the number of files touched; indicators of whether the change is a bug fix, a perfective change, or an inspection change; an indicator of whether it was an AIM or pre-AIM MR; and an indicator for each developer. Only the coefficients for the number of deltas, the bug fix indicator, and the AIM indicator were significant at the 0.05 level. The coefficients reflect how many times a particular factor affects the change effort in comparison with the base factors New and other, which are set to 1 for reference purposes. Inspecting the table, we see that the only measure of size that was important in predicting change effort was #delta (the p-value of β1 is 0.000). Although we refer to it as a size measure, it measures both size and complexity. For example, it is impossible to change two files with a single delta, so the number of deltas has to be no less than the number of files. It also measures the size of the change: a large number of new lines is rarely incorporated without preliminary testing, which leads to additional deltas. From the type measures we see that bug fixes are almost twice as hard as new code (γ_BugFix = 1.9). Other types of MRs (perfective and inspection) are not significantly less difficult than new code changes

Type of MR    Dev1  Dev2  Dev3  Dev4  Dev5  Dev6  Dev7  Dev8  Dev9  Dev10
other          109   118    92   121   312   174   152   408    93   197
pre-AIM         82    56    32    46   209    70    55   351    48    14
AIM             16    27    30    30    21    20    23    81    12    91

Table 2. Breakdown of MRs for the ten selected developers.

(p-values of Perfective and Inspection are 0.57 and 0.13, respectively). This result is consistent with past work [8]. Finally, the model indicates that pre-AIM MRs are indistinguishable from other MRs (the p-value of PreAIM is 0.85). This result was expected, since there was no reason why pre-AIM MRs should differ from other MRs. More importantly, AIM MRs are significantly easier than other MRs (γ_AIM = 0.25, with a p-value of 0.000). Consequently, AIM reduced the effort per change by a factor of four.

Next we report the results of a reduced model containing only the predictors found significant in the full model. The simple model was:

E(effort) = #delta^β1 · BugFix · AIM · ∏_i Developer_i

and the estimated coefficients were:

coefficient    estimate   p-val   95% CI
β1 (#delta)      0.51     0.000   [0.34, 0.67]
BugFix           2.1      0.002   [1.3, 3.2]
AIM              0.27     0.000   [0.17, 0.43]

These results are almost identical to the results from the full model, indicating the robustness of the particular set of predictors.
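For readers who want to reproduce this kind of analysis, the sketch below fits a multiplicative model of the same form by ordinary least squares on the log scale, using synthetic data in place of the real change measures and imputed effort. The paper's estimates were obtained with the partial-data method and jackknife intervals of [11], so this illustrates only the model structure, not the exact estimation procedure:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

# Synthetic MR-level data; the effort values would normally come from the
# imputation algorithm of Section 4.2 rather than a simulation.
mrs = pd.DataFrame({
    "developer": rng.choice([f"dev{i}" for i in range(10)], size=n),
    "n_delta":   rng.integers(1, 20, size=n),
    "bugfix":    rng.integers(0, 2, size=n),
    "aim":       rng.integers(0, 2, size=n),
})
mrs["effort"] = (mrs.n_delta ** 0.5
                 * np.where(mrs.bugfix == 1, 2.0, 1.0)
                 * np.where(mrs.aim == 1, 0.25, 1.0)
                 * rng.lognormal(0.0, 0.3, size=n))

# A multiplicative model is linear on the log scale; exponentiating the dummy
# coefficients recovers effort multipliers like those in the tables above.
fit = smf.ols("np.log(effort) ~ np.log(n_delta) + bugfix + aim + C(developer)",
              data=mrs).fit()
print(np.exp(fit.params[["bugfix", "aim"]]))  # multipliers (~2 for bug fixes, ~0.25 for AIM)
print(fit.params["np.log(n_delta)"])          # exponent on #delta (~0.5 in this simulation)
```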

5.5. Calculating Total Cost Savings

The models above provide us with the amount of effort spent at the level of individual changes. To estimate the effectiveness of DE we must integrate those effort savings over all changes and convert the effort savings to cost savings. We also need an assessment of the cost involved in creating a new language, training developers, and other overhead associated with the implementation of the AIM project. To obtain the cost savings, the total effort spent on DE MRs is first estimated. Then we use the cost saving coefficient from the fitted model to predict the hypothetical total effort for the same set of features as if the domain engineering had not taken place.¹ The effort savings are then the difference between the latter and the former.

¹ This represents only hypothetical cost savings, since in reality fewer features could have been implemented without the DE. Fewer features would lead to reduced sales revenue.

Finally, the effort savings are converted to cost and compared with the additional expenses incurred by DE. The calculations that follow are intended to provide approximate bounds for the cost savings, based on the assumption that the same functionality would have been implemented without AIM. Although we know that AIM MRs are four times easier, we need to ascertain whether they implement functionality comparable to that of pre-AIM MRs. In the particular product, new functionality was implemented as software features. Features were the best definition of functionality available in the considered product. We assume that, on average, all features implement a similar amount of functionality. We have no reason to believe that the definition of a feature changed over the considered period; consequently, even a substantial variation of functionality among features should not bias the results. We determined the software feature for each adaptive MR (an MR implementing new functionality) using an in-house database. We had 1677 AIM MRs involved in the implementation of 156 distinct software features and 21127 pre-AIM MRs involved in the implementation of 1195 software features, giving approximately 11 and 17 MRs per feature, respectively. Based on this analysis, AIM MRs appear to implement about 60 percent more functionality per MR than pre-AIM MRs. Consequently, the functionality in the 1677 AIM MRs would approximately equal the functionality implemented by 2650 pre-AIM MRs. The effort spent on the 1677 AIM MRs would approximately equal the effort spent on 420 hypothetical pre-AIM MRs, using the estimated 75% savings in change cost obtained from the models above. This leaves total cost savings of 2230 pre-AIM MRs. To convert the effort savings from pre-AIM MRs to technical headcount years (THCY), we obtained the average productivity of all developers in terms of MRs per THCY. To obtain this measure we took a sample of relevant developers, i.e., ones that performed AIM MRs. We then obtained the total number of MRs each developer started in the period between January 1993 and June 1998. No AIM MRs were started in this period. To obtain the number of MRs per THCY, the total number of MRs obtained for each developer was divided by the interval (expressed in years) that the developer worked on the product. This interval was approximated by the interval between the first and the last delta each developer made in the period between January 1993 and June 1998. The average of the resulting ratios was 36.5 pre-AIM MRs per THCY.

Using the MRs-per-THCY ratio, the effort savings of 2230 pre-AIM MRs are equal to approximately 61 technical headcount-years. Hence the total savings in change effort would be between $6,000,000 and $9,000,000 in 1999 US dollars, assuming technical headcount costs of between $100K and $150K per year in the Information Technology industry. To obtain the expenses associated with the AIM domain engineering effort, we used internal company memoranda summarizing the expenses and benefits of the AIM project. The total expenses related to the AIM project were estimated to be 21 THCY. This shows that the first nine months of applying AIM saved around three times (61/21) more effort than was spent on implementing AIM itself. We also compared our results with internal memoranda containing effort savings predictions for AIM and several other DE projects performed on the 5ESS™ software. Our results were in line with the approximate predictions given in those internal documents, which anticipated a reduction to between one third and one fourth of pre-DE levels.
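The back-of-the-envelope arithmetic above can be retraced as follows, using the paper's rounded intermediate figures (about 17 pre-AIM MRs per feature and a 75% per-change saving); the headcount cost range is the $100K to $150K assumption stated above:

```python
# Re-tracing the savings arithmetic of Section 5.5.
aim_mrs, aim_features = 1677, 156
pre_aim_mrs, pre_aim_features = 21127, 1195

print(aim_mrs / aim_features, pre_aim_mrs / pre_aim_features)  # ~11 and ~17 MRs per feature

# Same functionality expressed as pre-AIM MRs, using ~17 pre-AIM MRs per feature.
equivalent_pre_aim_mrs = aim_features * 17          # ~2650
# Effort actually spent, since each AIM MR costs ~0.25 of a pre-AIM MR.
effort_spent = aim_mrs * 0.25                       # ~420

saved_mrs = equivalent_pre_aim_mrs - effort_spent   # ~2230 pre-AIM MRs
saved_thcy = saved_mrs / 36.5                       # ~61 technical headcount-years
low, high = saved_thcy * 100_000, saved_thcy * 150_000
print(round(saved_mrs), round(saved_thcy), f"${low:,.0f} - ${high:,.0f} (1999 USD)")
```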

6. Related Work

This work is an application of the change effort estimation technique originally developed by Graves and Mockus [8]. The technique has been refined further in their more recent work [11]. The method was applied to evaluate the impact of a version editing tool in [1]. In this paper we focus on the more general problem of domain engineering impact, where the software changes before and after the intervention often involve completely different languages and different programming environments. This technique is very different in approach and purpose from traditional cost estimation techniques (such as COCOMO and Delphi [3]), which use algorithmic or experiential models to estimate project effort for the purposes of estimating budget and staffing requirements. Our approach is to estimate effort after the actual development work has been done, using data primarily from change management systems. We are able to estimate the actual effort spent on a project, at least for those phases of development that leave records in the change management system. This is useful for calibrating traditional cost models for future project estimation. In addition, our approach is well suited for quantifying the impact of introducing technology and process changes into existing development processes.

7. Summary

We present a methodology for estimating the cost savings from Domain Engineering, exemplified by a case study of one project. We find that the change effort is reduced three to

four times in the considered example. This is in line with the generally accepted view that DE techniques improve software productivity by a factor of two to ten. The methodology is based on measures of software changes and is easily applicable to other software projects. We have described all steps of the methodology in detail so that anyone interested can try it. The key steps are:

1. Obtain measures of software changes and identify changes that were done before and after the application of the particular domain engineering effort.

2. Identify other variables that might predict effort.

3. Select a subset of developers with substantial DE and non-DE experience.

4. Model the effort involved in performing changes.

5. Obtain cost savings by integrating effort over all changes.

We expect that this simple methodology will lead to more widespread quantitative assessment of Domain Engineering and other software productivity improvement techniques. We believe that software practitioners will be spared substantial effort on trials and usage of ineffective technology once they have the ability to screen new technologies based on quantitative evaluations of their use on other projects. Tool developers and other proponents of new (and existing) technology should be responsible for performing such quantitative evaluations. This will ultimately benefit software practitioners, who will be able to select appropriate productivity improvement techniques based on quantitative information.

Acknowledgements

We would like to thank Mark Ardis and Todd Graves for their valuable comments on earlier drafts of this paper. We also thank Nelson Arnold, Xinchu Huang, and Doug Stoneman for their patience in explaining the AIM project.

References

[1] D. Atkins, T. Ball, T. Graves, and A. Mockus. Using version control data to evaluate the effectiveness of software tools. In 1999 International Conference on Software Engineering, Los Angeles, CA, May 1999. ACM Press.
[2] V. Basili and R. Reiter. An investigation of human factors in software development. IEEE Computer, 12(12):21–38, December 1979.
[3] B. Boehm. Software Engineering Economics. Prentice-Hall, 1981.
[4] F. P. Brooks, Jr. No silver bullet: Essence and accidents of software engineering. IEEE Computer, pages 10–19, April 1987.
[5] J. Coplien, D. Hoffman, and D. Weiss. Commonality and variability in software engineering. IEEE Software, 15(6):37–45, November 1998.
[6] D. Cuka and D. Weiss. Engineering domains: executable commands as an example. In Proc. 5th Intl. Conf. on Software Reuse, pages 26–34, Victoria, Canada, June 2–6 1998.
[7] B. Curtis. Substantiating programmer variability. Proceedings of the IEEE, 69(7):846, July 1981.
[8] T. L. Graves and A. Mockus. Inferring change effort from configuration management databases. In Metrics 98: Fifth International Symposium on Software Metrics, pages 267–273, Bethesda, Maryland, November 1998.
[9] A. K. Midha. Software configuration management for the 21st century. Bell Labs Technical Journal, 2(1), Winter 1997.
[10] A. Mockus, S. G. Eick, T. L. Graves, and A. F. Karr. On measurement and analysis of software changes. Technical report, Bell Laboratories, 1999.
[11] A. Mockus and T. L. Graves. Identifying productivity drivers by modeling work units using partial data. Technical report, Bell Laboratories, 1999.
[12] A. Mockus and L. G. Votta. Identifying reasons for software changes using historic databases. Technical report, Bell Laboratories, 1997.
[13] M. Rochkind. The source code control system. IEEE Trans. on Software Engineering, 1(4):364–370, 1975.
[14] E. M. Rogers. Diffusion of Innovation. Free Press, New York, 1995.
[15] E. B. Swanson. The dimensions of maintenance. In Proc. 2nd Conf. on Software Engineering, pages 492–497, San Francisco, 1976.
[16] D. Weiss and R. Lai. Software Product Line Engineering: A Family-Based Software Development Process. Addison-Wesley, 1999.
