FEATURE: SOFTWARE QUALITY
A Survey on Open Source Software Trustworthiness
Vieri del Bianco, OpenSoftEngineering
Luigi Lavazza, Sandro Morasca, and Davide Taibi, Università degli Studi dell'Insubria
When it comes to choosing open source software, a recent survey reveals that users typically value functionality and reliability.
OVER THE PAST few years, users in many sectors have increasingly adopted open source software (OSS) products. No longer the product of lone coders, industrial-strength OSS is often developed by organized communities and sometimes even by major software companies applying the same rigorous processes and high-quality standards they use when developing their commercial products. However, many software companies and users alike are still somewhat reluctant to massively adopt OSS in their mainstream activities, primarily because they're unsure if and to what extent they can actually trust OSS. Trust must be built, and, as with any offering, OSS products would probably be more easily trusted when their claims are backed by reliable evidence.

To understand the factors that influence trust in OSS by users and developers, we conducted a survey of 151 OSS stakeholders with different roles and responsibilities about the factors that they deem most important when assessing whether OSS is trustworthy. In this article, we describe the results and provide an analysis of our survey, which includes a ranking of these factors and some insights into the motivations behind them. Functionality and reliability were consistently ranked as the most important factors, along with a few other technical and nontechnical ones. In addition to assisting prospective adopters of OSS components and products, the motivations and importance ranking of factors may be useful for OSS developers, to make their products and components more interesting and useful.

Setting the Stage
Assessing the trustworthiness of OSS products is not an easy task, as it can encompass many different aspects, some related to the intrinsic quality of OSS products and others to the development and adoption processes. The assessment of OSS product trustworthiness was a primary goal of the Quality Platform for Open Source Software (QualiPSo) project (IST project no. 034763; www.qualipso.org), one of the largest EU-funded projects in the 6th Framework Programme. The QualiPSo consortium includes several large-scale software companies, research centers, and universities. Even though the vast majority of the partners are from EU countries, the QualiPSo consortium also includes partners from fast-growing countries that also have a
SEPTEMBER/OCTOBER 2011 | IEEE SOFTWARE 67
fast-growing ICT and software market, such as Brazil and China.

One of the main motivations of the QualiPSo project is to go beyond the ideological positions of believers and non-believers, the hype and quick dismissal that have often characterized the OSS debate. These positions have certainly pointed out possible advantages and disadvantages of OSS, but both need to be empirically investigated and evaluated so that evidence-based answers can be provided for the questions and expectations about OSS trustworthiness.

A widely used definition of software trustworthiness is "the degree of confidence that exists that the software meets a set of requirements."1 Clearly, this definition is quite broad and subjective, but we need to accept that the notion of trustworthiness might be inherently subjective, because trustworthiness requirements depend on how, why, where, and by whom the software is used. Many people agree that product trustworthiness encompasses a software system's reliability, security, and safety,2,3 as well as its fault tolerance and stability. Still, users might legitimately expect several additional qualities in trustworthy software products. It's unrealistic to look for a single, general measure that can reliably quantify OSS trustworthiness. Instead, we should define trustworthiness models that can be tailored to the requirements the software must fulfill and take into consideration multiple user viewpoints. This approach shouldn't be surprising—no one-size-fits-all set of measures works for all products, environments, and goals.

Defining and using an OSS trustworthiness model requires identifying the trustworthiness goals of software organizations when they deal with OSS and the factors taken into account when deciding whether a given OSS application (or library or any other piece of software) is trustworthy enough. To collect information about these goals and factors, we took an empirical approach by surveying 151 OSS stakeholders who differ in several aspects:

• Organizational roles in their companies. Of our interviewees, 30.8 percent were upper managers, 20.5 percent project managers, 39.7 percent developers, and 9 percent played other roles.
• Type of organization. Of the respondents' organizations, 80.5 percent were profit-oriented, while 19.5 percent were either nonprofit or public administrations.
• Industrial sector of their organizations. Of our interviewees, 64 didn't indicate any domain, and 19 indicated involvement in all domains. The rest were distributed as follows: 32 in public administration, 29 in hardware/software development, 14 in R&D, 14 in telecommunications, 13 in finance, 8 in security, and 4 in avionics.
• Usage of OSS products. We wanted to address the most common uses of OSS—namely, to support software development, to include OSS in larger software products, to use OSS after proper customization or configuration, as part of some business process, to provide services, as a development platform, and as the
target usage platform. Every interviewee claimed multiple uses, with two-thirds of interviewees saying they used OSS for all these uses.
• Nationality. Classifying the nationality of respondents is not an easy task in a globalized world. Quite often, the nations of birth, residence, and work and the company's location differed. Moreover, the answers provided by the respondents often clearly appeared nation-independent. For instance, the people working in universities tended to provide homogeneous answers, regardless of their own nationality or the university's location. Therefore, we were able to accurately classify the nationality of about half the respondents (45 percent Italians, 4.5 percent other Europeans, 4.5 percent non-Europeans), while the others were classified as "global" (30 percent) and "academic" (16 percent).

We didn't determine our interviewee sample in advance. A preplanned sample would have allowed for a more controlled analysis, but it would also have limited the possibility of adding interviewees to the set in an unanticipated manner. We're fully aware that this might have somewhat influenced our results, but

• it wasn't possible to interview several additional people that could have made our sample more "balanced," because they weren't available or had little or no interest in answering our questionnaire;
• no reliable demographic information exists about the overall population of OSS stakeholders, so it would be impossible to know if a sample were "balanced";
• we dealt with motivated interviewees, which ensured a good level of response quality; and
• our survey has no research bias because we simply wanted to collect and analyze data from the field, not to provide evidence supporting or refuting a specific theory.

A detailed report and analysis of the information we collected appears in a QualiPSo project technical report.4
The Questionnaire and the Survey
We developed the questionnaire we used in the survey from available literature on OSS product trustworthiness and software quality evaluation.1,2,5,6 It can be used for different kinds of interviewees and organizations, and it serves two main purposes:

• assessing the current situation—that is, understanding the current trustworthiness problems, evaluation processes, and factors; and
• collecting "wishes"—that is, understanding whether our interviewees believed additional information that might not be commonly available about OSS would be important for adopting an OSS product.

We organized the questions according to the types of information we sought to collect:

• Personal information. Helps profile the interviewee, the company, and the organizational unit to which the interviewee belongs. (It was made clear to the interviewees that any piece of information they provided would only be disclosed in aggregated form so that a single respondent could never be identified.)
• Role of the organization in relation to OSS. Helps us understand the specific use of OSS in an organization.
• Selection process. Helps us understand the process followed when selecting a specific OSS product, even when the process is completely informal.
• Economic. Helps us understand the main economic drivers behind the choice of a specific OSS product over other OSS products or proprietary software.
• License. Helps us identify the most widely used licenses, the problems that can occur when using available licenses, and the characteristics of a "good" license.
• Development process. Helps us understand if development process aspects influence the product's trustworthiness.
• Product quality issues. Helps us understand the product quality attributes that OSS stakeholders consider when selecting OSS products.
• Customer requirements. Helps us understand the extent to which customer requirements influence the choice of an OSS product.

Table 1 summarizes the list of factors (along with some results of our study, which we explain in the following section). We collected information in both a structured fashion via closed-answer questions and a less-structured way by talking with interviewees via open-answer questions to prompt them to provide additional relevant information (such as wishes). One of the questionnaire's most important objectives was to investigate which factors are deemed to be important during OSS product assessment. Thus, we asked interviewees to rank a factor's value on a 0-to-10 scale, where 0 meant "totally irrelevant" and 10 meant "absolutely fundamental." We clarified that only the ranking of the factors has a real meaning. For instance, giving a value of 6 to interoperability and 4 to size indicates that interoperability is believed to be more important than size, but the individual values of 6 and 4 have no meaning in themselves.

We conducted the vast majority of the interviews in person and some by phone. Most interviews lasted longer than 90 minutes. We believe that this is the most effective way to elicit information. We were able to establish an effective communication channel with the interviewees, which allowed us to ensure a better degree of understanding of the closed- and open-answer questions and provided higher uniformity in the data collected. We conducted all the interviews individually, to let interviewees provide their own viewpoint without any sort of conscious, or even unconscious, interference due to the presence of other people, especially of those belonging to the same organization. While conducting the interviews, the feedback received from the first three or four interviewees allowed us to marginally revise and extend the questionnaire.
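Because the 0-to-10 grades are ordinal, only rank-based comparisons between factors are meaningful. The article does not publish its analysis scripts, so the following is only an illustrative sketch of the kind of test such data supports: a two-sided Mann-Whitney U test (normal approximation) comparing the ratings two factors received. The factor names and ratings in the example are invented for illustration.

```python
from statistics import NormalDist

def rank_with_ties(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U p-value via the normal approximation
    (no tie correction; fine for a sketch, not for small samples)."""
    n1, n2 = len(a), len(b)
    ranks = rank_with_ties(list(a) + list(b))
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u1 - mu) / sigma
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Invented sample ratings for two factors:
reliability_ratings = [8, 9, 10, 8, 9, 7, 10, 9]
size_ratings = [2, 3, 1, 4, 2, 3, 2, 1]
p = mann_whitney_u(reliability_ratings, size_ratings)  # well below 0.05 here
print(f"p = {p:.4f}")
```

A significant p-value between two factors, but not between factors within the same cluster, is the kind of evidence that lets factors be partitioned into ordered importance groups like those shown in Table 1.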
Qualitative and Quantitative Analysis of Factors
A statistical analysis of the responses lets us partition the factors into importance groups, which we show in Table 1's columns "entire dataset," "for profit," and "nonprofit." Let's first discuss the column "entire dataset," where we identified eight importance
groups, from 1 (least important) to 8 (most important). The ordered grouping indicates a statistically significant importance ranking between factors belonging to different groups, but no such ordering within each group. For instance, the factor usability belongs to group 5, so it's ranked as more important than modularity, which is in group 4, and just as important as short-term support, which is in group 5.

The number of groups depends on the portion of the population considered. For profit-oriented organizations (column "for profit"), the statistical analysis led to four groups, and for column "nonprofit" (which includes public administrations) to five groups. Out of 37 factors, 28 are considered equally important by profit-oriented organizations at the highest importance level—that is, these organizations appear quite demanding about many qualities of OSS.

The values in Table 2 are the percentages of difference from the average—values close to zero indicate medium importance, negative values indicate below-average importance, and positive values indicate greater-than-average importance. These values show the extent to which evaluations by interviewees in various roles differ. (Because the overall average grade was 6.76, being below the average doesn't imply low importance.) For instance, upper managers consider total cost of ownership (TCO) of medium importance, but developers consider its importance substantially below average. Here, we use the mean of the evaluation grades because the tests used to derive Table 1 didn't provide statistically significant results. The mean instead provides a purely qualitative indicator, as, from a technical point of view, it is not totally reliable for ranking ordinal data, such as those we collected in our survey.

When considering user location, very few noticeable differences emerge. Global users consider a few factors—best practices, programming language uniformity, portability, and self-containedness—to be more important than, say, Italian or academic users do. We also investigated whether the industrial sectors of the interviewees' organizations influenced the evaluations by focusing on the five industrial sectors that occur most frequently in the survey: public administration, security, telecommunications, computer-related (hardware and software), and R&D. Reliable evidence supported the existence of only 16 out of 370 possible dependencies, the majority of which were expected. For instance, interviewees from public administrations seem to be less interested in return on investment (ROI) and TCO than those from computer-related companies; interviewees from public administrations or R&D seem to be less interested in customer satisfaction than interviewees from computer-related companies.
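The "percent difference from the average" figures in Table 2 can be derived as sketched below. This is a hypothetical illustration, not the project's analysis code: the per-role grades are invented, and only the computation (each role's mean grade compared against the overall mean grade of 6.76) mirrors the article's description.

```python
def pct_diff_from_average(role_grades, overall_mean):
    """Percent difference of each role's mean grade from the overall mean,
    rounded to whole percent as in Table 2."""
    return {
        role: round(100 * (sum(g) / len(g) - overall_mean) / overall_mean)
        for role, g in role_grades.items()
    }

# Invented sample grades for one factor (total cost of ownership):
tco_grades = {
    "upper management": [7, 7, 6, 8],
    "project manager": [6, 7, 6, 6],
    "developer": [5, 5, 4, 6],
}
print(pct_diff_from_average(tco_grades, overall_mean=6.76))
```

Note that with an overall mean as high as 6.76, even a clearly negative percentage (a developer mean of 5.0 gives −26 percent here) still corresponds to a grade above the scale's midpoint, which is why the article warns that below-average does not mean unimportant.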
Selection Process
Most interviewees (74.3 percent) said that they don't use a formal OSS selection process but, when questioned further, admitted to using an informal one. None of the interviewees used the
existing OSS product evaluation methods available in the literature,5 even though a sizeable number knew about them.
Economic Factors
Both ROI and TCO were—as expected—among the most controversial factors. They're of great importance to profit-oriented organizations, but they're much less interesting to nonprofit organizations and public administrations. Similarly, upper management is more interested in ROI and TCO than are project managers, who are in turn more interested than developers. Interestingly, all roles considered both factors fairly relevant, with the exception of programmers, who regard TCO as less important than most other factors. All roles reported more interest in ROI than in TCO, possibly because of a not yet fully mature understanding of OSS costs.

OSS ethics was an important social and economic factor—in fact, ethics values were at least as important as economic profit for OSS supporters. Integration is crucial because integration cost and effort are often high, especially if there's a need to integrate proprietary software. Code control is believed to be a great advantage of OSS because it can help avoid vendor lock-in and unwanted economic dependencies. Ease of OSS acquisition was also mentioned because spending money to buy software might be complicated, whereas typically no money is spent at the time of OSS acquisition.
License
Some interviewees identified a large number of licenses used in their organizations, but the vast majority only named a few. Overall, respondents view the GNU General Public License (GPL) as the standard license. Most interviewees considered licenses and legal issues of medium importance when incorporating an external OSS product in their
TABLE 1. Factors believed to affect trustworthiness, with factors ranked according to interviewees' organizations.*

| Factor | Section | Entire dataset | For profit | Nonprofit |
|---|---|---|---|---|
| Reliability | Quality | 8 | 4 | 5 |
| Degree to which an OSS product satisfies/covers functional requirements | Quality | 8 | 4 | 5 |
| Satisfaction | Customer | 7 | 4 | 4 |
| Interoperability | Quality | 7 | 4 | 5 |
| Interoperability issues | Customer | 7 | 4 | 5 |
| Availability of technical documentation/user manual | Development | 7 | 4 | 5 |
| Maintainability | Quality | 6 | 4 | 3 |
| Standard compliance | Quality | 6 | 4 | 4 |
| Mid-/long-term existence of a user community | Development | 6 | 4 | 4 |
| Performance | Quality | 5 | 4 | 3 |
| Usability | Quality | 5 | 4 | 3 |
| Security | Quality | 5 | 4 | 4 |
| Existence of a sufficiently large community of users that can witness its quality | Development | 5 | 4 | 4 |
| Short-term support | Development | 5 | 4 | 3 |
| Availability of tools for developing, modifying, and customizing OSS products | Development | 4 | 4 | 3 |
| Environmental issues | Development | 4 | 4 | 3 |
| Portability | Quality | 4 | 4 | 3 |
| Reusability | Quality | 4 | 4 | 3 |
| Modularity | Quality | 4 | 4 | 3 |
| Standard architecture | Quality | 4 | 4 | 3 |
| Law conformance | License | 4 | 4 | 2 |
| Types of licenses used | License | 4 | 4 | 3 |
| Localization and human interface | Quality | 3 | 3 | 3 |
| Availability of best practices on specific OSS products | Development | 3 | 4 | 2 |
| Programming language uniformity | Development | 3 | 4 | 2 |
| Complexity | Quality | 2 | 2 | 2 |
| Patterns | Quality | 2 | 2 | 2 |
| Return on investment (ROI) | Economic | 2 | 4 | 1 |
| Self-containedness | Quality | 2 | 2 | 2 |
| Standard imposed | Customer | 2 | 2 | 2 |
| Total cost of ownership (TCO) | Economic | 2 | 4 | 1 |
| Availability of training, guidelines, and so on | Development | 2 | 2 | 2 |
| Existence of benchmarks/test suites that testify to OSS quality | Development | 2 | 4 | 2 |
| Mid-/long-term existence of a maintainer organization/sponsor | Development | 2 | 2 | 2 |
| OSS vendor's reputation | Development | 2 | 2 | 2 |
| Size | Quality | 1 | 1 | 1 |
| Distribution channel | Development | 1 | 4 | 1 |

* Importance groups derived from interviewees' ratings on a 0-to-10 scale, where 0 means totally irrelevant and 10 means absolutely fundamental.
own products: the factors type of licenses and law conformance are in group 4 in the entire set of responses. Sometimes, OSS products come with licenses that aren’t explicitly mentioned. The need for clarity in licenses and what they allow or forbid is a common request. The large number of existing licenses further complicates this issue: some licenses appear to be similar but turn out not to be fully compatible. This appears to be a relevant hindrance to OSS adoption.
Development Process
Some interviewees check OSS product quality by testing it thoroughly, even though the factor benchmarks/test suites had limited importance for all roles except project managers. This low ranking might be partially due to the fact that users don't expect benchmarks and test suites to be available for OSS products. In some cases, OSS couldn't be used because some components weren't certified, while the applicable regulations mandated software certification. The availability of documentation
was considered very important, with project managers being most concerned. Interviewees pay high attention to the user community's vitality in terms of how long it has existed and, to a lesser degree, the number of people involved; mid-/long-term existence of a user community, user community that witnesses quality, and short-term support were all considered fairly important. Interestingly, project managers were most interested in these factors because such people are the most concerned with the probability of project success. The environment and context play less significant roles, as confirmed by the factor environment being in group 4. Finally, availability of tools was considered fairly important by developers but not upper managers or project managers.

On average, interviewees were only mildly interested in the existence of a sponsor organization behind an OSS product: the factors vendor reputation and mid-/long-term existence of a maintainer organization/sponsor were considered fairly important only to project managers. The less interested interviewees are usually willing to carry out the required modifications to the chosen OSS themselves. Availability of best practices wasn't believed to be important, except by upper managers, who considered this issue of average importance. Again, this might partly be due to the fact that people expect OSS to be delivered "as is." Other factors considered unimportant (by all roles) were availability of training/guidelines and language uniformity—only programmers gave almost average importance to the latter.

The answers to the open-answer questions revealed some additional relevant facts for at least some of the interviewees. Having more information about the development process would be important. Some interviewees mentioned a heterogeneous set of factors and measures, including development approach visibility (the practices, methodologies, tools, and so on used in development), bug lists, quality review process (how quality is handled),
benchmarks, and certifications (the official certifications obtained by the product or parts of it). A project's future was considered very important, in terms of a detailed roadmap (planned milestones and releases), a detailed release history (past milestones and releases), the project's expected lifetime, and the project's active developers. Lists of users and data about the product's popularity were also believed to be worth collecting. Finally, some interviewees suggested that if a well-defined relationship with a sponsor exists, it should be expressed clearly and made publicly available.

Product Quality
As expected, functionality was almost unanimously ranked as the most important quality. Along with functional requirements, group 8 contained only one other factor, reliability. This is somehow intuitive and reasonable, as a product should first do what it's expected to do and, second, do it reliably. Maintainability, which landed in group 6, also ranked as important. Performance and usability were considered fairly important (group 5), but portability was considered only mildly relevant (group 4).

The rankings differ substantially when dealing with code- and design-quality attributes, which were, in general, considered of lesser importance. The use of a standard architecture, the production of reusable code, and good modularization were considered mildly important. (Modularity, reusability, and standard architecture landed in group 4.) The remaining code- and design-related qualities were considered rather unimportant: complexity and patterns were in group 2 and size in group 1. Only programmers gave some importance to code complexity and pattern usage, though their rankings were below average. Surprisingly, size, one of the most important drivers in the scientific literature4 for development ef-
TABLE 2. Factors believed to affect trustworthiness, with importance indicated according to user roles (percent difference from the average grade).

| Factor | Upper management | Project manager | Developer | All respondents |
|---|---|---|---|---|
| Total cost of ownership (TCO) | 2% | −6% | −26% | −15% |
| Return on investment (ROI) | 4% | −1% | −10% | −14% |
| Types of licenses used | 6% | 10% | 5% | 1% |
| Availability of tools for developing, modifying, and customizing OSS products | −8% | −6% | 8% | 1% |
| Availability of best practices on specific OSS products | −2% | −3% | −12% | −6% |
| Availability of technical documentation/user manual | 8% | 28% | 20% | 19% |
| Environmental issues | 7% | 12% | −4% | 4% |
| Availability of training, guidelines, and so on | −13% | −16% | −24% | −16% |
| Mid-/long-term existence of a user community | 16% | 24% | 7% | 13% |
| Mid-/long-term existence of a maintainer organization/sponsor | −7% | −2% | −22% | −12% |
| Short-term support | 6% | 25% | 4% | 9% |
| OSS vendor's reputation | −20% | −4% | −19% | −14% |
| Distribution channel | −85% | −85% | −41% | −49% |
| Programming language uniformity | −19% | −8% | −4% | −9% |
| Existence of a sufficiently large community of users that can witness its quality | 10% | 16% | 2% | 12% |
| Existence of benchmarks/test suites that testify to OSS quality | −14% | −8% | −20% | −14% |
| Degree to which an OSS product satisfies/covers functional requirements | 27% | 28% | 29% | 28% |
| Reliability | 21% | 23% | 27% | 25% |
| Performance | 4% | 12% | 9% | 10% |
| Usability | 14% | 18% | 9% | 13% |
| Maintainability | 12% | 23% | 21% | 17% |
| Portability | −6% | 1% | 3% | −1% |
| Reusability | −4% | 8% | 9% | 5% |
| Size | −51% | −48% | −38% | −39% |
| Complexity | −23% | −22% | −13% | −16% |
| Modularity | 1% | 8% | 10% | 8% |
| Standard architecture | 4% | −7% | 5% | 6% |
| Patterns | −20% | −18% | −11% | −14% |
| Security | 1% | 27% | 9% | 13% |
| Standard compliance | −2% | 14% | 19% | 12% |
| Self-containedness | −10% | −22% | −4% | −9% |
| Interoperability | 21% | 14% | 21% | 19% |
| Localization and human interface | −5% | 8% | −9% | −5% |
| Customer satisfaction | 24% | 33% | 10% | 16% |
| Interoperability issues | 11% | 32% | 19% | 18% |
| Law conformance | −9% | 0% | 5% | −1% |
| Standard imposed | −30% | −1% | −9% | −13% |
fort and time and the number of faults, was generally reported as unimportant by the interviewees, regardless of role or organization type.

Interoperability was believed to be very important (it landed in group 7): OSS products are supposed to interact with several other pieces of software. Another associated factor was standard compliance (group 6). Project managers and developers considered it quite important, but upper management considered it of average importance. Security was also believed to be important (group 5).

Although it's believed to be fairly important in the literature, self-containedness only landed in group 2. A possible explanation is that one of the principles in OSS is to reuse as much code as possible, even if it creates complexities in the build process and in the management of component dependencies. Project managers regarded self-containedness as less important than most other factors, whereas programmers considered it of average importance. Finally, localization and human interface were also believed to be of limited importance (group 3). Project managers appeared more concerned with this factor than other roles were.

The interviewees often stated that the lack of product and design documentation was a major issue: not only should documentation be available, but it should be of high quality and accuracy. OSS developers and communities should give more attention to aspects such as ease of use and installation, certifications, and accurate documentation on stability. Stability, in particular, requires special attention: OSS products are often released when they aren't yet stable.
Customer-Oriented Requirements
The factors related to customer-oriented requirements were believed to be important because they're usually mandated by customers or law. The factor customer satisfaction landed in
group 7, showing that it’s considered very important (not surprisingly, since it’s supposedly directly related to functional requirements and reliability factors). Customer satisfaction turned out to be the most important factor for project managers and the second most important for upper managers. Another factor considered of very high importance was customer’s interoperability issues, which landed in group 7 as well. (Again, this factor is somewhat related to standard compliance and directly related to interoperability.) Project managers were very concerned with this factor. The factor law conformance—that is, OSS compliance with legal regulations—was considered to be mildly important (group 4). This might be explained by the fact that OSS products aren’t always subject to legal regulations. The only customer-related requirements factor considered of lesser importance was standard imposed (group 2), with profit-oriented organizations much more concerned than nonprofit ones.
Interpreting the Data
Our analysis of the collected responses yielded interesting results—some of which give evidence to support consolidated beliefs, whereas others are somewhat unexpected. Among the indications that confirmed our expectations, the factors involving an application's user requirements and reliability were reported as very important, along with interoperability and standard compliance. The community was an important factor for understanding an OSS product's vitality, health, actual usability, and potential longevity. Some product qualities (maintainability, modularity, and standard architecture) were also believed to be important when assessing an OSS product's trustworthiness, with documentation of almost any kind considered to be very important.
However, unexpected indications emerged as well:

• Complexity and size. They're two of the most widely used and accepted attributes of software systems, but interviewees assigned a low importance to these factors; in particular, size was classified as the least important factor.
• Economic factors. Both ROI and TCO were far from being considered as relevant as is widely publicized. Traditionally, the main leverage for promoting OSS adoption has been economic convenience, that is, the fact that OSS is usually available at no immediate cost. The advantages from the economic viewpoint were widely proved as far as ROI is concerned, but much debated for TCO. According to our survey, economic factors were no longer among the top concerns for organizations adopting OSS.
• Licenses. Legal factors were considered to be quite important but not as important as you might think. There was a clear agreement on GPL-like licenses, with some (but not all) OSS users understanding very well the long-term advantages of such licenses against totally permissive licenses such as the free Berkeley Software Distribution (BSD) license. In practice, the need for licenses that let users take, use, and redistribute OSS code as they please was less important than expected.
• Maintainer organization. The presence and reliability of an organization that maintains and commercializes the OSS product was considered unimportant. Instead, the presence of a large community using and developing the product was considered sufficient evidence to trust the product. The distribution channel was considered completely negligible.
The results reported here match the findings of a survey on selecting and using commercial off-the-shelf and OSS components,7 an in-depth investigation about the reasons for adopting OSS,8 and a survey on external support for OSS adoption.9 They provide useful quantitative and qualitative indications of the factors and criteria that are used by practitioners and users when adopting OSS products and components. Our analysis will continue through an online survey, and we will periodically release new results to monitor possible changes due to the evolution in the OSS field. The Trustworthy Products Questionnaire is accessible online (www.opensoftengineering.it/oss_survey), and we invite all interested readers to fill out the questionnaire and contribute to this study.

ABOUT THE AUTHORS
VIERI DEL BIANCO is a researcher and a consultant with OpenSoftEngineering (www.opensoftengineering.com). During the research reported here, he was a lecturer in the Università degli Studi dell'Insubria at Varese's Department of Computer Science and Communication. His research interests include software processes, software requirements engineering, software metrics and cost models, free software, agile methodologies, distributed computing, and formal methods. del Bianco has a PhD in software engineering from Politecnico di Milano. Contact him at [email protected].
LUIGI LAVAZZA is an associate professor in the Università degli Studi dell'Insubria at Varese's Department of Computer Science and Communication. His research interests include empirical software engineering and measurement, especially concerning OSS quality evaluation; functional size measurement and process evaluation; model-based development, especially concerning real-time and embedded software; requirements engineering; and software-development environments and tools. Lavazza has a DrEng in electronics engineering from Politecnico di Milano. He is a member of the IEEE Computer Society. Contact him at [email protected].

SANDRO MORASCA is a professor of computer science at the Università degli Studi dell'Insubria in Como and Varese, Italy. His research interests include empirical software engineering, specification of concurrent and real-time software systems, software verification, and open source software. Morasca has served on the program committees of several international software engineering conferences and on the editorial board of Empirical Software Engineering: An International Journal, published by Springer-Verlag. He is a member of the IEEE Computer Society. Contact him at [email protected].
DAVIDE TAIBI is a postdoc researcher at the Università degli Studi dell'Insubria's Department of Computer Science and a consultant at OpenSoftEngineering. His research interests include empirical software engineering and measurement, especially concerning OSS quality evaluation; OSS business models; and OSS marketing. Taibi is involved in several OSS projects and serves as the sales director of the Italian Open Source Competence Center (www.flossitaly.it). Contact him at [email protected].

Acknowledgments
The research presented in this article has been partially funded by the IST project QualiPSo (www.qualipso.eu), financed by the EU in the 6th Framework Programme (IST-034763); the FIRB project ARTDECO, financed by the Italian Ministry of Education and University; and the projects "Elementi metodologici per la specifica, la misura e lo sviluppo di sistemi software basati su modelli" and "La qualità nello sviluppo software," funded by the Università degli Studi dell'Insubria.
References
1. E. Amoroso et al., "A Process-Oriented Methodology for Assessing and Improving Software Trustworthiness," Proc. 2nd ACM Conf. Computer and Communications Security, ACM Press, 1994, pp. 39–50.
2. L. Bernstein, "Trustworthy Software Systems," SIGSOFT Software Eng. Notes, vol. 30, no. 1, 2005, pp. 4–5.
3. M. Hertzum, "The Importance of Trust in Software Engineers' Assessment and Choice of Information Sources," Information and Organization, vol. 12, no. 1, 2002, pp. 1–18.
4. V. del Bianco et al., How European Software Industry Perceives OSS Trustworthiness and What Are the Specific Criteria to Establish Trust in OSS, project official deliverable D5.1.1, QualiPSo, Quality Platform for Open Source Software, IST-FP6-IP-034763, 2008; www.qualipso.org/sites/default/files/A5.D1.5.1%20v2.pdf.
5. N.E. Fenton and S.L. Pfleeger, Software Metrics: A Rigorous and Practical Approach, 2nd ed., PWS Publishing, 1998.
6. D. Taibi, L. Lavazza, and S. Morasca, "OpenBQR: A Framework for the Assessment of OSS," Open Source Development, Adoption and Innovation, vol. 234, 2007, pp. 173–186.
7. J. Li et al., "Development with Off-the-Shelf Components: 10 Facts," IEEE Software, vol. 26, no. 2, 2009, pp. 80–87.
8. K. Ven, J. Verelst, and H. Mannaert, "Should You Adopt Open Source Software?," IEEE Software, vol. 25, no. 3, 2008, pp. 54–59.
9. K. Ven and J. Verelst, "The Importance of External Support in the Adoption of Open Source Server Software," Proc. Int'l Conf. Open Source Systems (OSS 2009), Springer, 2009, pp. 116–128.
IEEE Software, September/October 2011