FOCUS: Technical Debt
Estimating the Principal of an Application's Technical Debt
Bill Curtis, Jay Sappidi, and Alexandra Szynkarski, CAST Software
// A formula with adjustable parameters can help in estimating the principal of technical debt from structural quality data. //
Steve McConnell described technical debt as including both intentional and unintentional violations of good architectural and coding practice1—an expansion of Ward Cunningham's original focus on intentional decisions to release suboptimal code to achieve objectives such as faster delivery.2 By choosing debt as a metaphor, Cunningham engaged a set of financial concepts that can help executives think about software quality in business terms. Although the concept of technical debt incorporates entities such as principal, interest, liabilities, and opportunity costs, this article explores only the estimation of its principal.
The Technical Debt Metaphor
In embracing McConnell's approach as the most comprehensive for communicating the costs and risks of poor structural quality, we use the following definitions for constructs estimated in this article:
• Should-fix violations are violations of good architectural or coding practice (hereafter referred to
simply as "violations") known to have an unacceptable probability of contributing to severe operational problems (outages, security breaches, data corruption, and so on) or of contributing to high costs of ownership, such as excessive effort to implement changes.
• Principal is the cost of remediating should-fix violations in production code (hereafter referred to as "TD-principal").
• Interest is the continuing costs attributable to should-fix violations in production code that haven't been remediated, such as greater maintenance hours and inefficient resource usage.
• Technical debt is the future costs attributable to known violations in production code that should be fixed—a cost that includes both principal and interest.
For the technical debt metaphor to be useful, its constructs must be measurable or at least estimable from measurable elements of software. Fortunately, we can estimate the violations underlying TD-principal via techniques such as static analysis of the software's nonfunctional, structural characteristics.3 Violations of structural quality are often difficult to detect through standard testing but are frequent causes of severe operational problems.4,5
Facing limited application budgets, IT organizations will never fix all violations in an application. Technical debt estimates ought to include only should-fix violations in production code. Nonetheless, the amount of should-fix problems sometimes exceeds the budget available for remediation. Consequently, IT management must estimate the amount of technical debt in its applications and then adjust the
parameters in its estimates to determine how much of that debt can be reduced within the available budget. This evaluation helps prioritize the problems to remediate and provides information about the amount and types of risk remaining in an application.

A Method for Estimating TD-Principal
There's no exact measure of an application's TD-principal because its calculation should be based only on the should-fix violations, some of which might be undetected. However, modern software analysis and measurement technology lets us estimate TD-principal from counts of detectable violations. Thus, we can estimate TD-principal as a function of three variables—the number of should-fix violations in an application, the hours to fix each violation, and the cost of labor. We can measure or estimate each of these variables to develop a formula for computing TD-principal. The time to fix a violation, for example, could be available from historical effort data. The cost to fix violations can be set to the average burdened rate for the developers assigned to the activity. Using the three variables, the equation for estimating TD-principal is

TD-principal = (Σ high-severity violations × percentage to be fixed × average hours needed to fix × US$ per hour)
             + (Σ medium-severity violations × percentage to be fixed × average hours needed to fix × US$ per hour)
             + (Σ low-severity violations × percentage to be fixed × average hours needed to fix × US$ per hour).

Estimates from this equation provide managers with ballpark figures for assessing future maintenance costs, as well as for making investment decisions regarding structural quality improvements to reduce future costs and risks. Although we present several choices for parameter values here, IT organizations should calibrate these parameters to their own experiences to obtain the most relevant estimates.

We used three different settings for these parameters (see Table 1) to explore their effects on TD-principal estimates. Although burdened hourly rates can vary by experience and location, we found that a rate of US$70 to $80 per hour reflects the average costs for many IT organizations, especially where they have a mix of on- and offshore operations. Consequently, we used the same hourly rate in all three estimates.

Table 1. Parameter values for three estimates of TD-principal.

Variable                        | Estimate 1 | Estimate 2 | Estimate 3
Violations that must be fixed   |            |            |
  High-severity violations      | 50%        | 100%       | 100%
  Medium-severity violations    | 25%        | 50%        | –
  Low-severity violations       | 10%        | –          | –
Hours to fix                    |            |            |
  High-severity violations      | 1 hour     | 2.5 hours  | 10%: 1 hour; 20%: 2 hours; 40%: 4 hours; 15%: 6 hours; 10%: 8 hours; 5%: 16 hours
  Medium-severity violations    | 1 hour     | 1 hour     | –
  Low-severity violations       | 1 hour     | –          | –
US$ per hour, all violations    | 75         | 75         | 75
[Figure 1. CAST Software's Application Intelligence Platform. This technology analyzes all the source code of an application and reintegrates the metadata across languages to detect violations of more than 1,200 rules of good architectural and coding practice. These violations are aggregated into measures of several quality characteristics, which are provided to both management and the developers. The figure shows the pipeline from language parsers (Oracle PL/SQL, Sybase T-SQL, SQL Server T-SQL, IBM SQL/PSM, C, C++, C#, Pro C, Cobol, CICS, Visual Basic, VB.Net, ASP.Net, Java/J2EE, JSP, XML, HTML, JavaScript, VBScript, PHP, PowerBuilder, Oracle Forms, PeopleSoft, SAP ABAP Netweaver, Tibco, Business Objects, and a universal analyzer for other languages) through application analysis and evaluation of 1,200+ coding and architectural rules to detected violations grouped by quality characteristic: performance (expensive operation in loop, static vs. pooled connections, complex query on big table, large indices on big table); robustness (empty CATCH block, uncontrolled data access, poor memory management, opened resource not closed); security (SQL injection, cross-site scripting, buffer overflow, uncontrolled format string); transferability (unstructured code, misuse of inheritance, lack of comments, violated naming convention); and changeability (highly coupled component, duplicated code, index modified in loop, high cyclomatic complexity).]
In each estimate, we varied the percentage of violations at each severity level that would be prioritized for remediation, addressing fewer severity levels in estimates 2 and 3. The calculation in estimate 1 assumes that all violations would be fixed within one hour. Based on industry data, this is an extremely conservative number, which we chose to provide a lower bound for TD-principal. In estimate 2, we varied the hours needed for fixing within each severity category, assuming that proportionately more high-severity violations would be fixed and that they would require more hours to fix. Because there are few published distributions of hours to fix, we modeled the time to fix in estimate 3 as a distribution that might be more realistic based on data observed in several IT organizations.
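To make the estimating equation and the Table 1 settings concrete, here is a minimal sketch in Python. The violation counts are hypothetical, and the expected-hours figure for estimate 3 is simply the weighted average of the hours distribution above; as the article recommends, every parameter should be calibrated to local data.

```python
# Minimal sketch of the TD-principal estimating equation, parameterized per Table 1.
# The violation counts below are hypothetical; calibrate all parameters locally.

HOURLY_RATE = 75.0  # US$ per hour, used in all three estimates

# Hypothetical should-fix violation counts for one application
violations = {"high": 4_000, "medium": 12_000, "low": 25_000}

# Estimate 3 models hours-to-fix for high-severity violations as a distribution;
# its expected value (4.5 hours) is used as the average hours needed to fix.
est3_hours_distribution = [(0.10, 1), (0.20, 2), (0.40, 4), (0.15, 6), (0.10, 8), (0.05, 16)]
est3_expected_hours = sum(p * h for p, h in est3_hours_distribution)

# (percentage to be fixed, average hours needed to fix) per severity, per estimate
ESTIMATES = {
    "estimate 1": {"high": (0.50, 1.0), "medium": (0.25, 1.0), "low": (0.10, 1.0)},
    "estimate 2": {"high": (1.00, 2.5), "medium": (0.50, 1.0), "low": (0.00, 0.0)},
    "estimate 3": {"high": (1.00, est3_expected_hours), "medium": (0.00, 0.0), "low": (0.00, 0.0)},
}

def td_principal(counts, params, rate=HOURLY_RATE):
    """Sum over severities: count x percentage-to-fix x hours-to-fix x $/hour."""
    return sum(counts[sev] * pct * hours * rate for sev, (pct, hours) in params.items())

for name, params in ESTIMATES.items():
    print(f"{name}: ${td_principal(violations, params):,.0f}")
```

With these invented counts, the spread across the three results illustrates the parameter sensitivity discussed later in the article.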
The Sample and Data
The data reported here are drawn from the Appmarq benchmarking repository maintained by CAST Software.6 For this exploration, we drew a sample from the repository of 700 applications submitted by 158 organizations for the analysis and measurement of their structural quality characteristics. The applications in this sample cumulatively contained 357 MLOC. We didn't accept applications into the sample if they consisted of fewer than 10 KLOC.
The organizations that submitted these applications were primarily from the US, Europe, and India. Because there's no rigorous characterization of the global population of business applications, it's impossible to assess the generalizability of results drawn from this sample. Nevertheless, these results emerge from what we believe to be the largest sample of applications to be statically analyzed for structural quality characteristics across different languages and technology platforms. The industries from which we received these applications included financial services, insurance, telecommunications, manufacturing, energy, IT consulting, retail, and government. Because of criteria
used by most companies for submitting applications for analysis, we believe this sample is biased toward business-critical applications.
We analyzed these applications using CAST's Application Intelligence Platform (AIP),7 which analyzes an entire application using more than 1,200 rules to detect violations of good architectural and coding practice. We drew these rules from the software engineering literature, repositories such as the Common Weakness Enumeration (CWE; cwe.mitre.org), online discussion groups of structural quality problems, and customer experience as reported from defect logs and application architects. As an example, security-related violations would include SQL injection, cross-site scripting, buffer overflows, and other violations from the CWE.
The AIP begins by parsing an application's entire source code at build time to develop a representation of the elements from which the application is built—its data flows, class hierarchies, transaction paths, calling relationships, and so on. The AIP includes parsers for the 28 languages listed in Figure 1, and a universal analyzer provides a partial parse of languages that don't have dedicated parsers. The metadata produced from this parsing is reintegrated across languages and platforms to provide a full view of the application.
The AIP uses several techniques to detect violations of its architectural and coding rules (examples of which are under the Detected violations column in Figure 1). Integrating the metadata lets us consider the full application context in detecting violations and reducing false positives. For instance, to determine whether a table being called by a loop with an expensive operation violates performance efficiency rules, we need to know the context of whether the table is large or small.
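The AIP itself works on an integrated, cross-language metadata model, so the following is only a toy, single-file sketch of the general shape of a structural rule check. It uses Python's ast module to flag direct calls inside loop bodies, a crude stand-in for a rule such as "avoid instantiations inside loops"; it illustrates the idea and is not CAST's implementation.

```python
# Toy structural rule check: flag direct calls (a crude proxy for instantiations)
# that appear inside for/while loop bodies. A real analyzer works on integrated,
# cross-language metadata and uses application context (e.g., table size).
import ast

SOURCE = """
items = []
for i in range(1000):
    buf = bytearray(4096)   # allocation inside the loop body
    items.append(buf)
"""

def calls_in_loop_bodies(tree):
    """Return (lineno, name) pairs for direct calls nested inside loop bodies."""
    findings = []
    for loop in ast.walk(tree):
        if isinstance(loop, (ast.For, ast.While)):
            for stmt in loop.body:
                for node in ast.walk(stmt):
                    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                        findings.append((node.lineno, node.func.id))
    return findings

for lineno, name in calls_in_loop_bodies(ast.parse(SOURCE)):
    print(f"line {lineno}: call to {name}() inside a loop")
```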
AIP scoring begins by detecting the number of opportunities for a rule to be triggered and the percentage of times that rule was violated. Each violation is weighted by its severity and aggregated to the application level. (Users can adjust severity weights for each violation to match local priorities, but the results we present here are based on the original weights.) The AIP provides a series of management reports and a portal to guide developers to locations in the source code where specific violations can be remediated. The management report aggregates violation scores into measures for the five quality characteristics shown in Figure 1:
• robustness, the stability or resilience of an application and its ability to avoid outages or recover quickly from them;
• performance efficiency, the application's responsiveness and its efficient use of resources;
• security, an application's ability to prevent unauthorized intrusions and protect confidential information;
• transferability, the ease with which a new team can understand the application and quickly become productive in working with it; and
• changeability, an application's ability to be easily modified and avoid the injection of new defects.
We selected these characteristics after reviewing ISO/IEC 9126.8 However, because the quality characteristics in 9126 and its successor, ISO/IEC 25010,9 aren't defined to a level that can be computed from the source code, some quality characteristic names used here differ based on the content analyzed and the meaningfulness of the names to management. The AIP evaluates anywhere from 176 to 506 rules for each quality characteristic, and some rules are evaluated for more than one characteristic.

Calculating TD-Principal
We calculated three estimates of TD-principal for each of the 700 applications using equations with the three sets of parameter values listed in Table 1. The first row of Table 2 shows descriptive statistics for the distribution of these estimates across the full sample for each of the three estimating equations. The differences among means for these three estimates are large, ranging from $3.61 per LOC in estimate 1 to $15.62 per LOC in estimate 3. This large difference in mean values reveals how sensitive the resulting estimates of TD-principal are to the values selected for the parameters. Although almost any result can be obtained by manipulating parameters, the critical lesson is the importance of adjusting parameters to fit local priorities for fixing violations and matching historical data regarding the times and costs for doing so. Only then can such estimates provide useful information about TD-principal.

Table 2 also presents descriptive statistics for the three estimating equations for seven of the languages used in the 700 applications. Because modern applications are frequently developed in several languages, we split each application into subsystems written in different languages.
Table 2. Estimated US dollars per LOC of TD-principal by language.*

Language             | Mean (Est. 1/2/3)     | Median (Est. 1/2/3)   | Minimum (Est. 1/2/3) | 25th & 75th quartiles (Est. 1/2/3)   | Maximum (Est. 1/2/3)
All apps (n = 700)** | 3.61 / 10.26 / 15.62  | 2.79 / 7.94 / 11.77   | 0.06 / 0.01 / 0.21   | 1.13–5.25 / 3.49–14.45 / 5.91–18.28  | 38.08 / 132.74 / 278.00
.NET (n = 63)        | 3.09 / 12.29 / 28.34  | 2.37 / 10.20 / 22.32  | 0.96 / 0.49 / 1.18   | 0.84–4.98 / 3.36–19.06 / 8.02–43.01  | 16.52 / 73.00 / 175.63
SAP ABAP (n = 72)    | 0.43 / 1.90 / 4.29    | 0.41 / 1.73 / 3.79    | 0.05 / 0.20 / 0.41   | 0.27–0.57 / 1.20–2.50 / 2.47–5.85    | 1.42 / 6.89 / 16.31
C (n = 44)           | 2.62 / 7.65 / 17.12   | 2.18 / 6.46 / 14.62   | 0.02 / 0.01 / 0.33   | 0.83–3.18 / 2.93–9.73 / 4.36–21.69   | 12.82 / 31.89 / 75.64
C++ (n = 30)         | 4.33 / 12.95 / 26.77  | 2.41 / 7.83 / 14.52   | 0.02 / 0.01 / 0.06   | 1.55–4.41 / 4.51–10.53 / 8.80–22.25  | 38.08 / 132.91 / 278.00
Java EE (n = 474)    | 5.42 / 14.68 / 19.82  | 5.13 / 13.66 / 16.18  | 0.07 / 0.23 / 0.50   | 2.40–7.48 / 8.19–18.52 / 11.94–21.33 | 49.72 / 253.03 / 608.68
Oracle Forms (n = 45)| 4.57 / 21.16 / 49.52  | 1.12 / 3.87 / 7.58    | 0.49 / 1.13 / 1.19   | 0.99–5.92 / 3.24–27.88 / 5.82–66.70  | 30.23 / 151.93 / 366.65
Visual Basic (n = 16)| 2.93 / 9.83 / 18.91   | 2.58 / 8.37 / 15.29   | 0.68 / 2.77 / 4.01   | 1.16–3.20 / 3.45–11.21 / 6.10–20.69  | 12.14 / 45.01 / 93.59

*Maximums, minimums, and quartiles for individual language distributions can be greater than or less than the figures for the total sample because the numerator and denominator change when applications are divided into language-specific subsystems.
**Because some applications contain multiple languages, the sum of the samples for the languages is greater than 700.
We produced estimates using each of the three estimating equations for each subsystem within the seven languages in Table 2. The differences in TD-principal estimates among different languages are large. For instance, the mean for estimate 2 ranges from a low of $1.90 per LOC for Advanced Business Application Programming (ABAP) to a high of $21.16 per LOC for Oracle Forms (F = 10.63; p < .0001). Structural differences in the languages could partly explain this large spread. However, it might also be affected by the different uses to which these languages are applied, ranging from customizing an existing vendor package with ABAP to developing an entire application with Java EE or C++.
Table 3. Percentage of technical debt associated with each quality characteristic.

Language             | Robustness (Est. 1/2/3) | Performance (Est. 1/2/3) | Security (Est. 1/2/3) | Changeability (Est. 1/2/3) | Transferability (Est. 1/2/3)
All apps (n = 700)   | 18% / 19% / 18%         | 5% / 1% / 1%             | 7% / 3% / 3%          | 30% / 37% / 39%            | 40% / 40% / 39%
.NET (n = 63)        | 17% / 16% / 15%         | 8% / 6% / 7%             | 9% / 13% / 13%        | 36% / 38% / 39%            | 30% / 27% / 26%
ABAP (n = 72)        | 41% / 41% / 43%         | 2% / 2% / 0%             | 0% / 0% / 0%          | 9% / 13% / 10%             | 48% / 44% / 47%
C (n = 44)           | 13% / 10% / 9%          | 3% / 2% / 2%             | 4% / 3% / 3%          | 35% / 40% / 41%            | 45% / 45% / 45%
C++ (n = 30)         | 7% / 7% / 5%            | 2% / 1% / 2%             | 7% / 5% / 0%          | 44% / 45% / 46%            | 40% / 42% / 47%
Java EE (n = 474)    | 12% / 21% / 30%         | 3% / 5% / 8%             | 5% / 10% / 16%        | 16% / 22% / 28%            | 63% / 42% / 18%
Oracle Forms (n = 45)| 32% / 34% / 35%         | 7% / 6% / 8%             | 1% / 1% / 0%          | 13% / 18% / 44%            | 47% / 41% / 13%
Visual Basic (n = 16)| 23% / 23% / 22%         | 3% / 3% / 3%             | 6% / 9% / 10%         | 34% / 35% / 31%            | 34% / 30% / 34%
We also can't rule out sample-specific factors affecting these differences because other research, contrary to our results, found technical debt to be the highest for ABAP.10
The variance in estimates within a single language category is quite large. For instance, among Java EE applications, estimates of TD-principal using the parameters in estimate 2 ranged from $0.23 per LOC to $253.03 per LOC. The distributions are all positively skewed, with most languages having large interquartile ranges. Consequently, to be used effectively in management decisions regarding cost of ownership or investments in structural quality, IT organizations should measure and analyze
TD-principal for each application individually—or, at most, for applications being developed under similar conditions for similar uses—rather than using an average estimate across all applications within a language category. Identifying the factors behind these variances in operational environments can reveal opportunities for large reductions in software costs.
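The effect of skewed distributions can be illustrated in a few lines of Python: when the mean sits well above the median, a language-wide average overstates TD-principal for most applications. The per-LOC values below are invented for illustration; they are not Appmarq data.

```python
# Sketch: summarize the distribution of TD-principal per LOC within one language
# category. The values below are invented for illustration, not Appmarq data.
import statistics

td_per_loc = [0.8, 1.4, 2.1, 2.6, 3.0, 3.9, 5.2, 7.7, 12.5, 41.0]  # US$ per LOC, one app each

mean = statistics.mean(td_per_loc)
median = statistics.median(td_per_loc)
q1, _, q3 = statistics.quantiles(td_per_loc, n=4)  # 25th and 75th percentiles

print(f"mean ${mean:.2f}, median ${median:.2f}, interquartile range ${q1:.2f}-${q3:.2f}")
# A mean well above the median, as here, signals the positive skew discussed above:
# a language-wide average would overstate TD-principal for most applications.
```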
TD-Principal by Quality Characteristic
The violations of structural quality from which TD-principal is estimated can represent different types of threats to the business or costs to IT. To use
TD-principal estimates strategically, management must establish its structural quality objectives and allocate remediation resources accordingly.11 Our data allow us to estimate the TD-principal associated with the violations that constitute each of the five quality characteristics we defined in the section "The Sample and Data." The first row in Table 3 shows the percentage of total violations constituting TD-principal for each of the five quality characteristics using violation parameters for each of the three estimates. For the full Appmarq sample, 70 percent of the TD-principal estimated in this sample was contained in
the IT cost–related quality characteristics of changeability and transferability, whereas only 30 percent was contained in the business risk factors of robustness, performance efficiency, and security. We can't determine from the data whether IT organizations are spending more time eliminating TD-principal related to business risk or, alternatively, whether TD-principal is created most often in violations associated with IT cost–related factors.
The remaining rows in Table 3 indicate that the relative percentages of violations from each of the quality characteristics constituting TD-principal were generally consistent across language categories and estimating parameters. However, there were two notable variations. First, robustness violations accounted for relatively higher percentages of ABAP's and Oracle Forms' TD-principal; security and changeability violations accounted for relatively lower percentages. These differences could relate to how these languages are used in customizing their associated packaged software. Second, as the estimating parameters for violations shifted toward high-severity violations in estimates 2 and 3, the percentage of violations shifted from transferability toward other quality characteristics for Java EE and Oracle Forms. This shift likely resulted from the relatively high proportion of comment-related violations for these languages that were eliminated from TD-principal calculations as the estimating parameters shifted toward higher-severity violations. (We discuss this more in the next section.)
The quality characteristic results suggest that the analysis and measurement of TD-principal can be used in conjunction with structural quality priorities to guide management decisions about how to allocate resources for reducing business risk and IT cost. For many IT managers and executives, trying to make decisions about retiring TD-principal at a global level is overwhelming, and they struggle to visualize what the expected payoff will be. However, when managers can analyze TD-principal by its constituent quality characteristics, they can set specific reduction targets based on strategic quality priorities with a better understanding of the cost or risk reduction benefits.

Analysis of Violations
We can gain deeper insight into structural quality by investigating the types of violations that create TD-principal. Table 4 presents the five most frequent violations included in TD-principal and the frequency of their detection across applications for each of the seven languages we studied. These results are difficult to compare at the application level because many violations are defined specific to each language. Nevertheless, several themes emerge from these results.
First, if we divide the frequencies of the individual violations by the number of applications in which they were detected, we find that the number of occurrences per application is large for all of the violations. For instance, the average number of violations in Java EE for the high-severity violation of using fields from other classes was 1,173 per application.
Second, a consistent problem across all the languages except Java EE is complexity represented as violations of either a module-calling structure with high fan-out to external components or a control flow with high internal complexity. These violations are often traces of the tradeoffs that led Cunningham to make his original observation regarding technical debt.2 Understanding the tradeoffs between these violations and either operational performance or maintenance costs is critical to helping managers set strategic structural quality objectives.
Third, the large percentage of TD-principal accounted for by the cost-related quality characteristics of changeability and transferability in Table 3 is reflected in Table 4 by the frequencies of complexity-related violations, as well as violations of practices that aid comprehension. These cost-related violations are especially prevalent in Java, accounting in part for its higher TD-principal cost compared to other languages. Because these tend to be categorized as medium-severity violations, managers must weigh the cost tradeoff between remediating these violations and the accruing interest from increased effort to understand the code.
Even when measured with a conservative formula, the amount of technical debt in most business applications is formidable. For instance, even when applying the conservative parameters in estimate 1, the average application is estimated to have $361,000 of TD-principal for each 100 KLOC.
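As a back-of-the-envelope check, that figure is simply the Table 2 sample mean scaled to 100 KLOC; the corresponding estimate 2 and estimate 3 figures below are derived the same way and are not quoted in the article.

```python
# Back-of-the-envelope scaling of the Table 2 mean TD-principal densities to a
# 100 KLOC application. Only the estimate 1 figure is quoted in the article text.
MEAN_TD_PER_LOC = {"estimate 1": 3.61, "estimate 2": 10.26, "estimate 3": 15.62}
KLOC = 100

for name, per_loc in MEAN_TD_PER_LOC.items():
    print(f"{name}: ${per_loc * KLOC * 1000:,.0f} per {KLOC} KLOC")
# estimate 1 yields $361,000 per 100 KLOC, matching the figure above.
```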
Table 4. The top five violations contributing to TD-principal by language.

Language              | Violation                                                         | Frequency
.NET (n = 63)         | Avoid uncommented methods                                         | 203,651
                      | Avoid declaring public class fields                               | 152,972
                      | Avoid artifacts with high fan-out                                 | 84,580
                      | Avoid classes with a high lack of cohesion                        | 56,486
                      | Avoid instantiations inside loops                                 | 16,309
ABAP (n = 72)         | Avoid artifacts with a complex Select clause                      | 61,376
                      | Avoid artifacts with high internal complexity                     | 61,184
                      | Avoid artifacts with high fan-out                                 | 43,428
                      | Avoid artifacts with high depth of code                           | 20,061
                      | Avoid artifacts with high fan-in                                  | 16,490
C (n = 44)            | Avoid undocumented functions                                      | 56,027
                      | Avoid artifacts with high internal complexity                     | 32,943
                      | Avoid functions with SQL statement including subqueries           | 30,153
                      | Never use strcpy() function—use strncpy()                         | 29,332
                      | Never use sprintf() function or vsprintf() function               | 21,608
C++ (n = 30)          | Avoid undocumented functions, methods, constructors, destructors  | 267,861
                      | Avoid data members that are not private                           | 182,076
                      | Avoid unreferenced methods                                        | 73,888
                      | Avoid using global variables                                      | 47,834
                      | Avoid artifacts with high internal complexity                     | 18,065
Java EE (n = 474)     | Avoid methods missing JavaDoc comments                            | 4,028,727
                      | Avoid methods missing appropriate JavaDoc @param tags             | 3,227,014
                      | Avoid methods missing appropriate JavaDoc @return tags            | 3,018,182
                      | Avoid private fields missing JavaDoc comments                     | 1,737,620
                      | Avoid using fields (nonstatic final) from other classes           | 556,046
Oracle Forms (n = 45) | Avoid objects without Comment property                            | 1,717,616
                      | Avoid artifacts with high fan-out                                 | 68,213
                      | Avoid artifacts with high internal complexity                     | 32,411
                      | Avoid artifacts with high fan-in                                  | 20,616
                      | Use based data blocks naming convention—represented table         | 9,866
Visual Basic (n = 16) | Avoid undocumented functions and methods                          | 45,680
                      | Avoid using global variables                                      | 32,258
                      | Avoid unreferenced functions and methods                          | 23,675
                      | Avoid direct usage of database tables                             | 12,885
                      | Avoid artifacts with high internal complexity                     | 7,143
When the more realistic parameters of estimate 3 are applied, executives will likely dismiss the estimated size of TD-principal as excessive. However, when large estimates result from accurate parameters for an organization's hours to fix and cost per hour, then the percentage of violations to be fixed can be varied to determine how many violations can be remediated within existing budgets and which violations to prioritize.
We urge caution in interpreting the
estimates we present in this article as industry benchmarks. This exploration of estimates for TD-principal demonstrates that these estimates are extremely sensitive both to the assumptions made in parameterizing the equation and to the different types of languages to which they are applied. These estimates could also shift with changes in the mix of application characteristics in each language category
as the number of applications grows in the Appmarq repository. Nevertheless, these results provide a good starting point for exploring TD-principal, and one that can be adjusted based on different assumptions about the parameters used. When developed with professional discipline, estimates of TD-principal can be a powerful tool to aid management in understanding and controlling IT costs and risks.
About the Authors
Bill Curtis is senior vice president and chief scientist of CAST Software and heads CAST Research Labs. His research interests include software productivity and quality, empirical software engineering, organizational maturity models, and a formal proof that the Veer-T triple option can execute in linear time. Curtis received a PhD from Texas Christian University. He's an IEEE Fellow. Contact him at [email protected].
Jay Sappidi is the senior director of product marketing at CAST Software and a senior director in CAST Research Labs. His research interests include predictive analytics for software risk management through software analysis, measuring the impact of technical quality on developer productivity, and comparative analysis of application technical quality across technologies. Sappidi received an MBA from the MIT Sloan School of Management. Contact him at [email protected].
Alexandra Szynkarski is a research associate in CAST Software's Research Labs. Her interests include structural quality benchmarks and measuring software performance trends across the global application development community. Szynkarski received an MS in international business administration from the Institut Administration des Entreprises de Nice. Contact her at [email protected].
The next step in our exploration of TD-principal is to provide individual ratings for the effort to fix each of the 1,200+ violations. These effort ratings will be further adjusted by the number of components involved in fixing the violation and the complexity of each component. This refinement will make the calculation of TD-principal more granular and could provide better indicators of the components most in need of refactoring.
References
1. S. McConnell, "Technical Debt," blog, 1 Nov. 2007; http://blogs.construx.com/blogs/stevemcc/archive/2007/11/01/technical-debt-2.aspx.
2. W. Cunningham, "The WyCash Portfolio Management System," ACM SIGPLAN OOPS Messenger, vol. 4, no. 2, 1993, pp. 29–30.
3. B. Curtis, J. Sappidi, and A. Szynkarski, "Estimating the Size, Cost, and Types of Technical Debt," Proc. 3rd Int'l Workshop Managing Technical Debt, IEEE CS, 2012, pp. 49–53.
4. D. Spinellis, Code Quality: The Open Source Perspective, Addison-Wesley, 2006.
5. M.T. Nygard, Release It!, Pragmatic Bookshelf, 2007.
6. J. Sappidi, B. Curtis, and A. Szynkarski, CAST Report on Application Software Health, tech. report, CAST Software, 2011.
7. CAST Application Intelligence Platform, tech. report, CAST Software, 2008; www.castsoftware.com/resources/document/zbrochures/cast-ai-platform.
8. ISO/IEC 9126, Software Engineering—Product Quality, Int'l Org. for Standardization, 2001.
9. ISO/IEC 25010, Systems and Software Engineering—Systems and Software Quality Requirements and Evaluation (SQuaRE)—System and Software Quality Models, Int'l Org. for Standardization, 2011.
10. J. de Groot et al., "What Is the Value of Your Software?," Proc. 3rd Int'l Workshop Managing Technical Debt, IEEE CS, 2012, pp. 37–44.
11. C. Sterling, Managing Software Debt: Building for Inevitable Change, Addison-Wesley, 2011.