
feature software quality

Certifying Software Component Attributes Jørgen Bøegh, DELTA (Danish Electronics, Light & Acoustics)

System integrators depend on component suppliers to reliably describe their products. A flexible, property-value approach lets suppliers adequately describe simple and complex properties, and a certification scheme ensures the descriptions’ trustworthiness.

74

IEEE SOFTWARE

The success of component-based software engineering depends on system integrators’ ability to select the most suitable components for their intended application. This is only possible if component suppliers provide clear and reliable information about their components’ functional and nonfunctional properties. Furthermore, system integrators must be confident that the provided information is correct and complete. Using independent, accredited organizations to certify components would help guarantee consistent and correct property claims.

The European Clear and Reliable Information for Integration (CLARIFI) project has investigated this topic in order to propose methods that make component use more effective.1 As a result, we developed four proof-of-concept implementations for the broker system. These prototypes let us experiment with registering components in the broker system, certifying components’ properties, and searching for components based on desired properties. Experienced software developers and project managers helped verify and validate our proposed models.

In addition to promoting component adoption, we propose a certification scheme that moves certification to the lower level of properties and attributes. Because we can treat certificates at different levels of abstraction identically, the breakdown lets us distribute certification among certification bodies and possibly reuse certificates of simple properties when certifying complex properties. Reducing complexity and possibly splitting costs could make certification more attractive for both component suppliers and system integrators.

Published by the IEEE Computer Society

Component descriptions

Different methods exist for describing and classifying components. Without a proper description and classification scheme, it’s difficult to select the most appropriate component for a specific purpose. First, the component description must reflect the component’s functional abilities, including its environment needs (such as operating system) and interfaces to other software. The range of required information depends on the actual component and the intended application area. An indexing approach to component classification seems obvious, but it requires significant knowledge about all possible domains and, hence, would be extremely difficult to implement.2,3

In the CLARIFI project, we decided to apply a property-value-based approach to describe components. We express all relevant information about a component using its properties. Many properties are generic and apply to all components, but others are domain specific and apply only to certain components. Therefore, we can describe different components using intersecting sets of properties. Figure 1 shows three such domains. Some properties belong to only one domain, whereas other properties belong to several or all domains. For example, we can represent a software standard with a set of properties, and it might be relevant to one or more domains. (We describe this example in more detail later on.)

We can define properties so that domain-specific sets of properties can evolve. The set of properties we use to describe one component is therefore independent of how we describe other components, and we can always add new properties to a component description. This flexible approach makes it easy to describe a component according to the needs of different domains, avoiding redundancy. It’s often necessary to provide high- and low-level descriptions of a component. In the CLARIFI model, a property might consist of other properties—that is, hierarchical relations can exist between properties.

0740-7459/06/$20.00 © 2006 IEEE
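To make the property-value idea concrete, here is a minimal sketch of two component descriptions as sets of property-value pairs with intersecting property sets. All component names, property names, and values below are invented for illustration and are not part of the CLARIFI schema.

```python
# Sketch: property-value component descriptions with intersecting
# property sets (compare figure 1). All names and values are invented.

log_parser = {
    "name": "Hypothetical log parser",
    "operating-system": "Linux",            # generic property
    "interface": "C API",                   # generic property
    "telecom.protocol-standard": "SS7",     # domain-specific property
}

billing_engine = {
    "name": "Hypothetical billing engine",
    "operating-system": "Linux",
    "interface": "CORBA IDL",
    "finance.currency-rounding": "banker",  # another domain's property
}

# Properties shared by both components (the intersection of the sets):
shared = set(log_parser) & set(billing_engine)
print(sorted(shared))
```

Because each description is just a set of property-value pairs, new domain-specific properties can be added to one component without affecting how any other component is described.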

Properties, attributes, and claims

A property’s description must be sufficiently precise to avoid misunderstandings and misinterpretations. Therefore, our first aim is formalization. It’s not always possible to formalize properties, but whenever possible, it should be done. For example, we can formalize quality properties using software measures.4

In the International Organization for Standardization’s (ISO) terminology, measurable quality properties are called attributes. ISO/IEC 9126 indicates how to measure attributes and provides the measures in accompanying technical reports. CLARIFI followed the slightly more formal Software Quality in the Development Process (SQUID) approach, developed by the same-named European project and evaluated by the European Validating SQUID in Real Environments (VALSE) project. The SQUID approach is closely related to the ISO/IEC 14598-3:2000 standard approach,5,6 where we formalize quality measures by specifying a counting rule, unit, and scale type. The counting rule specifies the measurement procedure, the conditions under which we take measurements, and the calculations and logical operations that might be involved. These operations reflect the relations between attributes.

Our property-value approach makes it possible to homogeneously manage component properties at different levels. We break down complex properties into more manageable properties, covering the full range from simple properties, such as a component’s size, to complex attributes, like compliance with a specified standard. It usually requires considerable insight to appreciate the full implications of standard compliance, but breaking down standard-compliance requirements using an iterative process might simplify this considerably.

Using our approach, when a component supplier offers a component to the broker, the supplier must decide which attributes to use to describe the component and claim values for the describing attributes. These claims must be formalized with measures for each attribute. The claimed values represent the component supplier’s expression of the degree to which the component possesses the attributes. Therefore, the trustworthiness of claimed values depends on the component supplier’s trustworthiness.
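A measure formalized with a counting rule, unit, and scale type might be represented as in the following rough sketch. The class and field names are this sketch’s own shorthand, not the SQUID data model, and the example values are invented.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    """Illustrative formalization of a quality measure: counting rule,
    unit, and scale type, loosely following the approach in the text."""
    name: str
    counting_rule: str  # measurement procedure and conditions
    unit: str
    scale_type: str     # e.g., nominal, ordinal, interval, ratio, absolute

@dataclass
class Claim:
    """A supplier's claimed value for an attribute, made precise by a measure."""
    attribute: str
    measure: Measure
    value: float

# Invented example: a simple "size" attribute with an absolute-scale measure.
size = Measure(
    name="Component size",
    counting_rule="Count non-blank, non-comment source lines "
                  "of the delivered component",
    unit="lines of code",
    scale_type="absolute",
)
claim = Claim(attribute="Size", measure=size, value=12_000)
print(f"{claim.attribute}: {claim.value} {claim.measure.unit} "
      f"({claim.measure.scale_type} scale)")
```

Formalizing a claim this way is what later makes it checkable: a certifier can repeat the counting rule and compare the result against the claimed value.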

Figure 1. Schematic view of a component description as intersecting sets of properties.

May/June 2006



Certification and Accreditation Definitions

Formally, a certificate assures that an entity possesses defined properties. (See IEEE Software’s July/August 1999 special issue on software certification for additional information and viewpoints.) In certification, those properties are generally defined in standards or other normative documents. However, they could also include a set of specifications produced within an organization that have been made public and approved, or at least agreed to, by independent working parties.

An essential part of the component certification process is concerned with test and evaluation.1 Usually, testing laboratories perform tests and evaluations, while certification bodies2 handle certification. Although certification is usually performed by an independent third party, the certification body could be a department inside a supplier organization. The difference between the two cases is the degree of confidence a client has in them. Other parties involved are usually suppliers (first party) and purchasers (second party).

A certification body must have a certification system that describes the management, workflow, and conditions for performing the certification process. A certification system has its own rules, procedures, and management system for carrying out certification of conformity. In principle, anybody can issue a certificate, but to ensure the certificate’s credibility, an accreditation system must be established. Accreditation is a procedure by which an authoritative body gives formal recognition that an organization or person is competent to carry out specific tasks. In the context of this article, accreditation is relevant for testing laboratories and certification bodies. An accreditation system has its own rules of procedure and management for carrying out accreditation that are defined in standards. Accreditation of conformity-assessment bodies is normally awarded after successful assessment and is followed by appropriate surveillance. Examples of accreditation system standards are the ISO/IEC 17011 and ISO/IEC 17025 standards.3,4

References
1. L. Beus-Dukic and J. Bøegh, “COTS Software Quality Evaluation,” Proc. 2nd Int’l Conf. COTS-Based Software Systems (ICCBSS 03), LNCS 2580, Springer, 2003, pp. 72–80.
2. ISO/IEC Guide 2, Standardization and Related Activities—General Vocabulary, Int’l Organization for Standardization, 2004.
3. ISO/IEC 17011, Conformity Assessment—General Requirements for Accreditation Bodies Accrediting Conformity Assessment Bodies, Int’l Organization for Standardization, 2004.
4. ISO/IEC 17025, General Requirements for the Competence of Testing and Calibration Laboratories, Int’l Organization for Standardization, 1999.

Certification and accreditation

To make a component’s trustworthiness independent of the supplier, CLARIFI proposes a certification scheme for component attributes. This scheme takes advantage of the breakdown of attributes by attaching certificates to all relevant claimed attribute values. This makes the certification process more visible and manageable.


www.computer.org/software

Usually, certification is done at the standards level—for example, certifying conformance to functional-requirements standards, security standards, or the ISO/IEC 25051 standard.7 Our proposed approach extends certification to include single properties and complete standards.

An important aim is to make certification as objective as possible. Currently, most software evaluation and certification done in practice is qualitative and, to some extent, subjective. The CLARIFI scheme suggests making practical software certification more quantitative and, hopefully thereby, more objective. (Researchers have made other attempts in this direction. For example, Jeffrey Voas and Jeffery Payne8 suggest a testing-based approach to certification, but that work goes in the opposite direction by proposing a single test-quality rating metric for a component.) See the “Certification and Accreditation Definitions” sidebar for more details.

Simple attributes

The problem now is how to find attributes that properly describe a component’s quality and that we can certify. Quality attributes can of course be selected on an ad hoc basis. The important point is that suppliers describe component attributes with sufficient precision and that all concerned parties can access the description. A more systematic approach is to use an international standard for identifying attributes. The obvious choice is the ISO/IEC 9126 standard because it provides a software product quality model. The model is hierarchical and identifies six high-level quality characteristics and 27 subcharacteristics. These characteristics cover the quality aspects of software that are relevant for most applications. Some software products might have special quality requirements that this model doesn’t cover. In that case, the component supplier must enhance the quality model. The six quality characteristics from ISO/IEC 9126 are as follows:

■ Functionality. The software product’s ability to provide functions that meet stated and implied needs when the software is used under specified conditions.
■ Reliability. Its ability to maintain a specified level of performance when used under specified conditions.
■ Usability. Its ability to be understood, learned, and used by users and be attractive to them when used under specified conditions.
■ Efficiency. Its ability to provide appropriate performance, relative to the amount of resources used, under stated conditions.
■ Maintainability. Its ability to be modified (which might include correcting, improving, or adapting the software to changes in environment or changes to requirements and functional specifications).
■ Portability. Its ability to be transferred from one environment to another.

These six quality characteristics are a good starting point for identifying attributes describing relevant quality properties. Claims of a component’s quality are then stated as claims for values of measures for the selected attributes. In the following example, we apply the reliability characteristic and define an attribute:

■ Attribute. Mean time between failures (MTBF). How frequently does the software fail in operation, based on counting the failures occurring during a defined period of operation and computing the average interval between failures?
■ Measure. X = T/A, where T is the sum of the time intervals between consecutive failures (that is, terminations of the software’s ability to perform a required function) and A is the total number of actually detected failures (those occurring during a specified operation time with a normal operational profile). The measure’s scale type is a ratio.


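As a rough illustration, the MTBF measure X = T/A can be computed from a log of failure timestamps. The timestamps below are invented data, and a real counting rule would also fix the operational profile and observation conditions.

```python
# Sketch: computing the MTBF measure X = T/A from failure timestamps
# (hours since the start of a specified operation period).
# The failure times are invented for illustration.

failure_times = [120.0, 300.0, 390.0, 640.0]  # hours at which failures occurred

# T: sum of the time intervals between consecutive failures
T = sum(b - a for a, b in zip(failure_times, failure_times[1:]))
# A: total number of actually detected failures
A = len(failure_times)

X = T / A
print(f"T = {T} h, A = {A}, MTBF X = {X} h")
```

Note that T and A are themselves measurable quantities, which is what allows them to be treated, and certified, as attributes in their own right.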

To find more specific attributes, we look at the subcharacteristics from ISO/IEC 9126. For example, maturity—a software product’s ability to avoid failure as a result of faults in the software—is one of the subcharacteristics of reliability. We can use the maturity subcharacteristic to define a lower-level attribute:

■ Attribute. Estimated residual latent fault density. How many problems still exist that might emerge as future faults, based on the number of faults detected during a defined trial period and the number of remaining faults predicted using a reliability growth estimation model?
■ Measure. X = |P – A| / S, where P is the number of predicted latent faults in a software product from a reliability growth estimation model, A is the number of actually detected faults, and S is the product size (for example, lines of code). The measure’s scale type is absolute.

In this example, we express the measure as a function of three variables that we can formulate as measures of attributes. So, we could certify them independently, making it easier to certify the attribute in question. For real certification purposes, the measures in these examples would be more stringent in terms of counting rules and operational profiles.6 This lack of precision is a weakness of the set of ISO 9126 measures.

Complex attributes

Using a security example, we can see how to manage more complex attributes. In the ISO/IEC 9126 quality model, security appears as a subcharacteristic of functionality, defined as the software product’s ability to protect information and data so that unauthorized persons or systems can’t read or modify them and authorized persons or systems can access them. Components intended for security-critical applications usually require detailed information about the security attributes. In that case, we should consider a specialized security standard such as ISO/IEC 15408.9 This standard provides requirements for assuring software security at seven increasingly higher levels:

■ Evaluation Assurance Level 1 (EAL.1): Functionally Tested
■ EAL.2: Structurally Tested
■ EAL.3: Methodically Tested and Checked
■ EAL.4: Methodically Designed, Tested, and Reviewed
■ EAL.5: Semiformally Designed and Tested
■ EAL.6: Semiformally Verified Design and Tested
■ EAL.7: Formally Verified Design and Tested

A component supplier might claim conformance to ISO/IEC 15408 at one of these seven levels, which isn’t easy to evaluate and certify. Breaking down the properties makes it more transparent. (Actually, the standard compliance statement is more complex than this example indicates, but for illustrating the approach, I’ve taken a simplified view.) The security standard provides an extensive set of requirements for compliance at the different levels of assurance. Carefully processing the standard can reveal the necessary attributes. The first step is to identify relevant requirements and formulate these requirements as the software’s attributes. At the highest level, the standard identifies eight areas of concern for security, which we can use as subcharacteristics of security:

■ target of evaluation (TOE) configuration management (CM) coverage;
■ delivery procedures;
■ installation, generation, and start-up procedures;
■ informal functional specification;
■ security enforcing high-level design;
■ informal correspondence demonstration;
■ administrator guidance; and
■ user guidance.

We can use the subcharacteristic “TOE CM coverage” as an example of how to break down attributes. This choice illustrates another important issue when attempting to certify component properties: even a product standard can refer to properties related to the development process. In that case, the certifier/testing laboratory must have access to information related to the development process to certify properties related to the product.

We can formulate the first relevant subsubcharacteristic in the standard as the component supplier’s ability to demonstrate CM capabilities for the TOE (ACM_CAP). (I’ve added the standard requirement reference in parentheses following each characteristic.) Following the standard’s structure, the subsequent characteristic is the component supplier’s ability to demonstrate a CM system that ensures version numbers for the TOE (ACM_CAP.1). For this characteristic, we can identify two attributes and corresponding measures from the standard:

■ Attribute. Uniqueness of references (ACM_CAP.1.1C).
■ Measure. Is the reference for the TOE unique to each version of the TOE?
■ Attribute. Labeling references (ACM_CAP.1.2C).
■ Measure. Is the TOE labeled with its references?

Both measures have an ordinal scale type with the possible values yes or no. The interpretation is that if both measures have the value yes, the component fulfills the requirements for CM at level EAL.1. At the next level of detail, we find this subcharacteristic: the component supplier’s ability to demonstrate a CM system that ensures configuration items (ACM_CAP.2). Its attributes and measures are as follows:

■ Attribute. Configuration list (ACM_CAP.2.3C).
■ Measure. Does the CM documentation include a configuration list?
■ Attribute. Configuration item description (ACM_CAP.2.4C).
■ Measure. Does the configuration list describe the configuration items that comprise the TOE?
■ Attribute. Unique CM identification method (ACM_CAP.2.5C).
■ Measure. Does the CM documentation describe the method used to uniquely identify the configuration items?
■ Attribute. Unique CM item identification (ACM_CAP.2.6C).
■ Measure. Does the CM system uniquely identify all configuration items?

All these measures also have an ordinal scale type, and fulfilling the CM requirements at level EAL.2 demands yes values for all four measures in addition to yes values for the two measures at the previous level. We can continue this breakdown following the standard requirements until we cover the full set of requirements. The set of attributes for the ISO/IEC 15408 standard is large, but individual attributes become simple and easily understood aspects of the component.

A similar breakdown is possible for the safety standard IEC 61508.10 This standard defines compliance at four different safety integrity levels (SIL) expressed in terms of the probability of failure. Carefully identifying the software-related requirements in IEC 61508 gives us attributes with associated measures similar to what we saw in this security example.
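Because each requirement reduces to a yes/no (ordinal) measure, checking compliance at a given level amounts to a conjunction over a checklist. Here is a minimal sketch of that idea; the verdicts for the hypothetical component are invented, and a real evaluation would of course involve far more than a dictionary lookup.

```python
# Sketch: evaluating the CM requirements for EAL.1 and EAL.2 as yes/no
# ordinal measures. The verdicts below are invented for illustration.

EAL1_CM = ["ACM_CAP.1.1C", "ACM_CAP.1.2C"]
EAL2_CM = EAL1_CM + ["ACM_CAP.2.3C", "ACM_CAP.2.4C",
                     "ACM_CAP.2.5C", "ACM_CAP.2.6C"]

verdicts = {  # hypothetical evaluation results for one component
    "ACM_CAP.1.1C": "yes",
    "ACM_CAP.1.2C": "yes",
    "ACM_CAP.2.3C": "yes",
    "ACM_CAP.2.4C": "no",   # configuration list doesn't describe all items
    "ACM_CAP.2.5C": "yes",
    "ACM_CAP.2.6C": "yes",
}

def fulfills(requirements):
    """A level's CM requirements are met only if every measure is 'yes'."""
    return all(verdicts.get(r) == "yes" for r in requirements)

print("CM at EAL.1:", fulfills(EAL1_CM))  # True: both measures are yes
print("CM at EAL.2:", fulfills(EAL2_CM))  # False: one measure is no
```

The same conjunction structure carries over to the other subcharacteristics, which is what makes the full compliance claim decomposable into independently certifiable pieces.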


Attribute certification model

Figure 2 shows the general model of certification of component attributes using an entity-relationship diagram. The CLARIFI model assumes that an attribute consists of simpler attributes, possibly derived from standard requirements. Component suppliers claim values of attributes. Such claims are made precise by the component supplier using specified measures. A certification body can certify these claims by issuing certificates for the claimed values. To ensure the certificates’ trustworthiness, certification bodies (and testing laboratories) should be accredited by an accreditation body to certify against specified standards.

The composition of attributes makes it possible to decompose certification, thereby enhancing the process’ visibility. An advantage of this is that it distributes certification among different accredited certification bodies that can work in parallel.
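The relationships in this model can be sketched as follows. The class and field names are shorthand for this illustration, not the CLARIFI schema, and the certification body named in the example is fictitious.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Attribute:
    """An attribute may be composed of simpler attributes, possibly
    derived from individual standard requirements."""
    name: str
    standard_requirement: Optional[str] = None
    parts: List["Attribute"] = field(default_factory=list)

@dataclass
class Claim:
    """A supplier's claimed value for an attribute, made precise by a measure."""
    attribute: Attribute
    measure: str
    value: str

@dataclass
class Certificate:
    """Issued for a claimed value by a certification body, which is itself
    accredited to certify against the relevant standard."""
    claim: Claim
    issued_by: str

# The composite attribute breaks down into simpler, separately certifiable parts.
cm_coverage = Attribute(
    name="TOE CM coverage",
    parts=[Attribute("Uniqueness of references", "ACM_CAP.1.1C"),
           Attribute("Labeling references", "ACM_CAP.1.2C")],
)
claim = Claim(cm_coverage.parts[0], measure="yes/no (ordinal)", value="yes")
cert = Certificate(claim, issued_by="Accredited body A")  # fictitious body
print(cert.claim.attribute.name, "->", cert.claim.value)
```

Because certificates attach to claimed values of parts, certificates for simple attributes can later be reused when certifying the composite attribute, and different bodies can certify different parts in parallel.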

Prototype tools and experiences

We evaluated the CLARIFI component model using four proof-of-concept implementations of the broker system. The implementations were built around an Oracle database and applied a proprietary front end for generating the database model. The CLARIFI tool consists of the four parts in figure 3:



■ Broker administration. The broker registers component suppliers, system integrators, and certifiers and manages access rights and other administrative tasks.
■ Component classification. Component suppliers enter descriptions of their components in terms of attribute-value pairs. They augment the approach with thesauri, using synonymous attributes, and with domain contexts, which let particular predefined subsets of properties simplify integrator and supplier processes. The component integrator can use integrator contexts to embody specific preferences or constraints that are imposed on the selection processes. Supplier contexts represent domain-specific views of the global set of possible properties.



■ Component selection. The process starts with defining the integrator’s requirements. In this step, the broker helps integrators express their requirements. From these requirements, the system formulates the initial query. Initially, the focus is on functional requirements, and based on these, the broker identifies a set of candidate components and presents them to the integrator. This set of candidates can be large because many components might satisfy each functional requirement. The broker uses the integrator’s preferences and nonfunctional requirements to rank them. Visualization and multiple-criteria decision-making methods support the selection process. The broker uses graphical representation to decompose the required functionality to get matches to candidate components, describe those candidates so integrators can select sets of individual components, and present the progress of a solution build and selection between candidate solutions.
■ Component certification. With access to the components, certifiers can certify attribute-value pairs. A scheme based on cryptographic techniques ensures a unique correspondence between a certificate and a specific version of the component under certification.

Figure 2. The CLARIFI component certification model.

Figure 3. The CLARIFI model of information flow in the component supply chain, from component suppliers to system integrators. The supplier provides components and makes claims about their properties; the broker supports component suppliers and system integrators by offering a component database; the integrator selects components, based on their properties, for use in specific products; and the certification body certifies claimed properties of components.
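The selection flow described above (filter candidates on functional requirements, then rank them by the integrator's weighted nonfunctional preferences) can be sketched as follows. The component data, scores, and weights are invented, and the real broker applies much richer multiple-criteria decision-making methods than this weighted sum.

```python
# Sketch: filter candidates by functional requirements, then rank by
# weighted nonfunctional preferences. All data are invented.

components = [
    {"name": "A", "provides": {"parse", "validate"},
     "reliability": 0.9, "efficiency": 0.6},
    {"name": "B", "provides": {"parse"},
     "reliability": 0.7, "efficiency": 0.9},
    {"name": "C", "provides": {"parse", "validate"},
     "reliability": 0.8, "efficiency": 0.8},
]

required = {"parse", "validate"}                    # functional requirements
weights = {"reliability": 0.7, "efficiency": 0.3}   # integrator preferences

# Keep only components whose provided functionality covers the requirements.
candidates = [c for c in components if required <= c["provides"]]

# Rank remaining candidates by a weighted sum of nonfunctional scores.
ranked = sorted(candidates,
                key=lambda c: sum(w * c[attr] for attr, w in weights.items()),
                reverse=True)
print([c["name"] for c in ranked])
```

Here component B drops out in the functional filter, and the weighting then orders the survivors; changing the integrator context (the weights) can reorder them without re-querying.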

The prototypes let us experiment with software components of realistic size and complexity. Each of the four prototypes was thoroughly evaluated from theoretical and industrial points of view. Two large companies, British Telecom and the Italian software developer Engineering Ingegneria Informatica, tested the prototypes in practice. British Telecom’s evaluations focused on applying the approach to external component suppliers, whereas Engineering Ingegneria Informatica used CLARIFI to manage internally developed components. Additionally, the project obtained numerous comments and suggestions from other companies and academic researchers. On the basis of evaluation feedback on one prototype, we developed the next prototype. This iterative approach helped refine the underlying model and the system’s user interface.

TÜV Nord in Germany evaluated the certification model in collaboration with DELTA. Both organizations have considerable experience in commercial software product evaluation and certification,11 particularly in safety- and security-critical applications. Two experienced TÜV evaluators applied the tool set to conduct trial certifications on 17 software components, which were classified with emphasis on critical attributes. The final evaluation confirmed the certification approach’s applicability.

The most difficult part of applying the approach was, not surprisingly, defining the measures used for certification. In particular, the simple quality attributes based on ISO 9126 caused difficulties. Although ISO provides examples of measures for all the ISO 9126 characteristics and subcharacteristics, we couldn’t apply these measures directly because they are imprecise. The project team devoted considerable effort to redefining the measures and applying the formalized approach with counting rules and profiles. The complex attributes based on standards required a careful examination of the standards, but they eventually turned out to be easier to define.

Cost considerations

Certification provides advantages for both component suppliers and system integrators. With certification, component suppliers can offer customers an independently issued guarantee of claimed functional properties and quality attributes. For system integrators, certification offers a guarantee against unexpected surprises when building software from components.

So why isn’t certification used more? Part of the problem is cost. Who will pay for certificates? In addition, why should one system integrator invest in evaluating a component property and then make the result publicly available in the form of a certificate? Cost sharing is one of the main ideas behind component-based software engineering, but the evaluation leading to a certificate is expensive.

The CLARIFI approach lets us treat certificates at different levels of abstraction identically and, therefore, distribute certification among certification bodies. Our approach lets certificates of simple properties be reused for more complex properties. Reducing complexity and even splitting costs might make certification more cost-effective for both component suppliers and system integrators.

I don’t claim to have solved all problems with certification of software components; many issues remain open. For example, from the system perspective, what can we conclude about a software system built of certified components, and how can we certify such a system? There is ongoing research into which attributes of a software architecture are the most important, but with our current knowledge, we can’t convert general information about component and system-architecture attributes into reliable information about system attributes. Such a conversion could provide an efficient certification approach for software systems based on certified components.

If we can make progress on system certification by reusing component certificates, then we can extend the component-based software development paradigm to software certification. The hope is to increase the motivation for component certification.

About the Author

Jørgen Bøegh is a project manager at DELTA (Danish Electronics, Light & Acoustics). He heads the Danish delegation to ISO/IEC JTC1/SC7 and is editor of three international standards on software quality requirements and evaluation. His research interests include software quality and critical software applications. He received an MS in mathematics and computer science from the University of Aarhus, Denmark. Contact him at DELTA, Venlighedsvej 4, DK-2970 Hørsholm, Denmark; [email protected].

References
1. P. Brereton et al., “Software Components—Enabling a Mass Market,” Proc. Int’l Workshop Software Tech. and Eng. Practice (STEP 02), IEEE CS Press, 2003, p. 169.
2. R.L. Glass and I. Vessey, “Towards a Taxonomy of Software Application Domains: History,” J. Systems and Software, vol. 17, no. 2, 1992, pp. 189–199.
3. R.L. Glass and I. Vessey, “Contemporary Application-Domain Taxonomies,” IEEE Software, vol. 12, no. 4, 1995, pp. 63–76.
4. ISO/IEC 9126-1, Information Technology—Software Product Quality—Part 1: Quality Model, Int’l Organization for Standardization, 2001.
5. J. Bøegh et al., “A Method for Software Quality Planning, Control, and Evaluation,” IEEE Software, vol. 16, no. 2, 1999, pp. 69–77.
6. L. Chirinos, F. Losavio, and J. Bøegh, “Characterizing a Data Model for Software Measurement,” J. Systems and Software, vol. 74, no. 2, 2005, pp. 207–226.
7. ISO/IEC 25051, Software Engineering—Software Product Quality Requirements and Evaluation (SQuaRE)—Requirements for Quality of Commercial Off-the-Shelf (COTS) Software Product and Instructions for Testing, Int’l Organization for Standardization, 2006.
8. J. Voas and J. Payne, “Dependability Certification of Software Components,” J. Systems and Software, vol. 52, nos. 2–3, 2000, pp. 165–172.
9. ISO/IEC 15408, Information Technology—Security Techniques—Evaluation Criteria for IT Security, Int’l Organization for Standardization, 1999.
10. IEC 61508, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems, Int’l Electrotechnical Commission, 1998.
11. J. Bøegh, “Quality Evaluation of Software Products,” Software Quality Professional, vol. 1, no. 2, 1999, pp. 26–37.

For more information on this or any other computing topic, please visit our Digital Library at www.computer.org/publications/dlib.

