Default Values for Improved Product Line Management

Juha Savolainen, Jan Bosch, Juha Kuusela, and Tomi Männistö
Nokia Research Center, Intuit, Nokia Devices, Helsinki University of Technology
[email protected], [email protected], [email protected], [email protected]

Abstract

Many companies apply software product lines based on explicit variability management. These product lines have existed for more than ten years. While research has progressed during that time and industrial experiences have been extensively reported, there still seems to be a gap between industrial practice and research with respect to explicit software variability management. In this paper, we explain how commonality is managed in industry; how default values can be used to gain control over expanding scope and near-commonality; and discuss how the evolution of default values indicates potential problems in industrial product lines. In addition, we explain what corrective actions can be taken to alleviate the identified problems.

1 Introduction

New products in a software product line can be derived by making requirement selections from a product line model of requirements. However, as the product line evolves, selections are constrained by the design of the existing product line architecture. Such constraints are the result of a substantial investment of resources. Final products are a compromise between what is desired and what can be achieved at appropriate cost. Once established, product lines constantly become more complex, expanding both in terms of features and scope covered [1]. As the number of features increases, the number of variation points increases, requiring more selections to be made during product derivation. Product lines are based on the premise that enough commonality exists. A traditional way to model commonality in product lines is mandatory features [2]. The expanding scope reduces commonality and the "near-commonality" problem emerges. A product line exhibits near-commonality when, for almost every feature, at least one product derived from the product line chooses not to select this feature [3]. Near-commonality dramatically reduces the amount of mandatory features and forces most features to be expressed as variation points. When the number of variation points is counted in the thousands and the relationships between these variation points become more and more complex, the cost of product derivation increases as well, and as a consequence the benefits provided by a software product line start to diminish.

Product line research has focused on improving the current situation by proposing, among others, hierarchical product lines [4] and compositional product line development [5]. Despite the potential benefits, many companies still rely on traditional centralized variability management to orchestrate their product development. In the industrial product lines studied by the authors, default values have been effectively used to create complex software products. This paper explains how default values can be used in centralized variability management to relieve the problems caused by near-commonality. The use of default values allows engineers, during product derivation, to ignore most of the variation points and to focus only on those that define the unique features of the specific product. The contribution of this paper is that we explain how commonality is managed in industry; how default values can be used to gain control over expanding scope and near-commonality; and discuss how the evolution of default values indicates potential problems in industrial product lines. In addition, we explain what corrective actions can be taken to alleviate the identified problems. Our approach maintains a line of sight to the problem domain while providing an effective mechanism for managing the complexity of large product lines.

The remainder of this paper is organized as follows. In the next section, we introduce the research setting for the paper. Then we report our observations on industrial product lines. The fourth section introduces the concept of default values, and the fifth section shows different evolution scenarios that can take place. The section after that describes how these scenarios can be used to find potential problems and suggests corrective actions to alleviate those problems. Finally, we discuss our findings and conclude.


2 Research setting

The research reported in this paper comprises four parts, structured so that the subsequent parts build on the results of the prior ones. In the first part, observations on industrial product lines are reported. These observations have been made by the authors working in close co-operation, in their daily work, with six different product lines in three different domains. In the second part, data about the use of default values in product families was collected from the case company, Nokia. The unit of analysis was the Nokia Devices organization, responsible for creating, among others, the S40 and S60 platforms for mobile phones between 2001 and 2008. The data of this part is based on the observations of the authors working in Nokia Research Center in both management and technical positions. A summary of all mobile phone product lines in Nokia has been previously reported [6]. Both the S40 and S60 product lines produce tens of unique products each year for each geographical area (e.g. Europe, Asia, USA, China, India). The number of unique, customer-deployed software configurations in one year varies between 10 000 and 100 000 depending on the product line. The overall number of realistic configurations is many millions of potential products. The ability to manage the evolution of variability in this context is crucial for the long-term success of these two product lines. In the third part, different scenarios of product line evolution, with respect to feature commonality, were identified within the unit of analysis. The fourth part is more constructive in nature, proposing a synthesized framework for addressing particular variability management issues. This part takes an initial step towards developing technological rules [7] for variability management. The proposal has not been properly field tested, although it is argued that the rules match the authors' experiences from product families. Field-testing the proposed framework in other cases by other researchers is put forward as future work.

Figure 1. Commonality categorization for industrial product lines.

3 Observations on industrial product lines

In product lines, commonality forms the basis for software reuse. The fact that a number of products share some characteristics enables software to be reused. As the products are different, variability needs to be introduced into the shared software assets. One of the most commonly used approaches for variability management in product lines employs feature modeling. The only explicit way of expressing commonality in feature models is the use of mandatory features. However, in real life commonality takes many different shapes. Figure 1 shows a categorization of commonality derived from practice. For many product lines, most of the commonality is not explicitly managed. This may sound problematic from the variability management perspective, but in fact it lowers the cost of reuse. Often large amounts of software are reused in the form of language frameworks, open source software and databases, none of which are explicitly managed as a part of variability management. This is shown as implicit commonality in Figure 1. Next, commonality can be managed as mandatory features. Mandatory features require effort during domain analysis and they are managed as a part of the feature model. For most feature models, mandatory features form the internal structure and are not represented by the leaf nodes. It makes sense to avoid extensive functional decomposition and an excessive number of features, and to refine feature models only as long as variability still exists. Some mandatory features are attached to a variation point. They either have been variable before or are seen as potentially variable in the future.


Adding a variation point increases the cost of the mandatory features, since now we have to implement the actual mechanisms to handle the potential variability. But optional features also represent commonality. A mandatory feature defines what is common to all. An optional feature describes what is potentially common to some products, thus expressing commonality among the set of products that share this feature. To capture the characteristics of optional features more accurately in variability management, the concept of a default value is used. A default value is a property attached to an optional feature to indicate whether the feature is by default selected or not. A feature that is selected by default is called excludable and one that is by default not selected is called includable. By using default values, one can decrease the effort needed in selecting features during derivation, provided the default values match the wanted selections. Each time one needs to override a default value, additional effort is naturally needed. Default values can be attached to more complex variability types. One feature from a set of mutually exclusive (alternative) features can be excludable while the others are includable. This gives the set of mutually exclusive features a default feature that is automatically selected unless the default values are overridden. Similarly, a set of multiple (must select one, but can select many) features may contain many excludable features. In the remainder of the paper, we ignore these more complex cases.

Successful industrial product lines tend to grow in size and complexity over time. One can identify a number of causes for this development. First, the number of different products increases steadily over time. Although, in the case of embedded systems, the lifetime of any product is not necessarily very long, the total number of products that is under development or needs to evolve as part of the product line grows over time. Second, there is a constant tendency to implement what could be considered product-specific features directly into the platform. This can happen because several new products all need the new feature or because, as is the case when the platform is licensed to other companies, the platform itself needs to be differentiating and contain sufficient novel features to compete effectively with other platforms. Third, as the product line architecture erodes over time, new functionality tends to crosscut the architectural decoupling points, further increasing complexity. Fourth, a proven product line can easily become a victim of its own success in that the company frequently aims to stretch the product line way beyond its originally conceived scope, causing a significant architectural mismatch between intended and actual design scope. Fifth and finally, in the course of time, business leaders demand that legacy products and independent products, e.g. obtained through merger and acquisition (M&A) activities, be incorporated in the product line, again increasing complexity and variability to near-unmanageable levels.

Alarmingly, we have seen that the size of the asset base in terms of variation points, supported features, and number of components may increase much faster than the number of products. As a consequence, both the cost of maintaining the asset base and the cost of deriving new products from the product line increase significantly. Over time, the amount of work per product starts to actually increase, deteriorating the product line benefits [1]. In the remainder of this section, we discuss each of the causes for increased size and complexity in more detail.

A product line initiative typically starts with the development of the platform in parallel with the development of the first lead product based on the platform. This allows the organization to validate the viability of the platform initiative while allowing for early revenue generation from the product line investment. Once the first product is in the market, subsequent products are developed on top of the platform. In addition, the products released on the platform often start to release new versions. As a consequence, the amount of variability that the platform needs to support increases, causing the platform to increase in size and complexity, unless it is managed very carefully, at a rate that is worse than linear in the number of products supported by the platform.

Once the platform has proven its initial success, the organization looks to reap the benefits from software reuse wherever it can. One mechanism is to implement features that are product specific directly into the platform. The argumentation is that subsequent products will require this feature anyway and that in this way the commoditization of product-specific functionality into the platform can be bypassed, resulting in a lower overall R&D expenditure. The challenge with this approach is that often one of two outcomes occurs. First, the implementation of the feature is rather product specific, despite being implemented in the platform, and is not sufficiently general for future products. Second, the implementation of the feature is too generic, as the designer overestimates the needed generality of the functionality, causing too many variation points, variants and dependencies. This contributes to unnecessary size and complexity of the platform.

The third factor is architectural erosion. During the initial design of the platform, the architects prepare the architecture for the future extensions that are known at the time of design.


Over time, the accuracy of the predictions obviously starts to decrease and, as a consequence, a growing percentage of the new functionality added to the platform crosscuts multiple components. Especially in the case where this functionality is required to be variable, the complexity implications can be quite challenging. A different, but related scenario is where the organization decides to build products on top of the platform that are significantly outside the initial scope of the platform. In this case, the fundamental design decisions underlying the platform architecture are no longer valid and hence the team needs to take additional design decisions to incorporate the new functionality. The resulting architecture is often significantly degraded from the perspective of conceptual integrity, causing future additions to the platform to require more code, adding size, and to be implemented at a lower productivity level due to the increased complexity.

Finally, businesses tend to organize product development in terms of the business domain addressed by the development team. In the case of M&A and costly legacy products, there often is significant pressure to bring these products into the product line to reduce overall R&D expenditure through the sharing of software assets. Similar to the former factor, this has major implications for the platform architecture in terms of complexity and conceptual integrity and tends to bloat the platform code, as legacy and unrelated code artifacts need to be brought in. Most of the factors discussed above can be addressed by a consistent and continuous investment in architecture refactoring. However, in practice it proves to be very difficult to prioritize refactoring work when it is competing with the implementation of additional functionality. In the worst case, this results in a platform that erodes so rapidly over time that it needs to be sunset long before it should have been from a domain functionality perspective.

Although we have focused on the effects of evolution, the roots of the problem can often already be found during the initial design of the product line. Architects and engineers, often having the frustration of maintaining an eroded architecture fresh in their minds, have a tendency to design new product line architectures in such a way that virtually all perceived future flexibility needs are met. The logical way to achieve this is by introducing a variation point for each feature and hence provisioning it as optional. Rather than just implementing what was required, the tendency is to construct complex frameworks that allow future variance in that feature. These frameworks add complexity and, as they are based on a limited number of examples, they seldom have the right flexibility. This observation is supported by earlier research in software variability management, e.g. [8].

An increase in functionality and product line scope increases the number of connections to the external environment. This environment is digital and evolves steadily. Small changes in external requirements cause an avalanche of variation. Central components vary as changes in mandatory features lead to new versions of the components supporting them. These changes have ripple effects even if internal interfaces were to stay the same. Ripple effects cause updates to other components and corresponding integration and testing problems.

The benefits of a successful product line often provide the technical basis for significant business growth due to the reduced cost of product derivation and hence the company's improved ability to serve specific customer segments. The consequence of its economic success is that the company looks at adjacent domains that it can now afford to invest in. Interested in repeating earlier success, the company expands the product line to include domains that were earlier out of scope and for which the product line architecture has not been designed. Continuous hardware evolution offers opportunities for bill of materials (BOM) optimization. This is often seen as replacement-based optimization rather than optimization of the entire hardware configuration. New hardware components seldom match the logical hardware architecture perfectly, leading to dispersion due to non-functional aspects, such as power management. All corresponding frameworks have to be updated to accommodate new approaches to these aspects.

Variability management is overly concentrated on adding variability and flexibility. There are no incentives for decreasing variability, increasing mandatory components, simplifying frameworks or removing support for obsolete hardware. This leads to a continuously growing number of variation points and a reduction of the constraints on selections of that variability. Although the company has succeeded in building a flexible asset base, overall productivity seems to decrease. Despite all the flexibility, there seems to be a lack of meaningful variability that allows creating differentiating products. The needed variations are hidden under mountains of technical variation points. For practitioners, it is hard to understand what the root cause of the product line's reduced productivity is and what the right corrective actions are. We propose the use of default values to restore intellectual control over variability management and to identify the right corrective actions.


Table 1. Default values in the context of variability

Rule        Variability   In the model   (De)selectable   Selected by default
----------  ------------  -------------  ---------------  -------------------
Proposed    N/A           No             No               No
Planned     As defined    Yes            No               No
Includable  Variable      Yes            Yes              No
Excludable  Variable      Yes            Yes              Yes
Mandatory   Mandatory     Yes            No               Yes
Obsolete    As defined    Yes            No               No
Removed     N/A           No             No               No
4 Default values for variability

Table 1 shows the categorization of default values. They are defined as follows:

Proposed: All features are first introduced as proposed features. This means that they are not yet part of the variability model, but they have been added to the product line management tools as possible future features. These features do not have a defined variability type, since they have typically been proposed by one product program interested in having this particular feature. Defining the correct variability for the feature requires interaction with the reuse organization.

Planned: After features have passed the proposal state, they become planned. Planned features have their variability defined and they are introduced into the variability model. However, they cannot be selected yet. In most cases, planned features, or some features they depend on, have not been implemented yet.

Includable (optional): Includable refers to an optional feature that is by default deselected. An includable feature can be selected by any product program. Typically these are features that are coming into or going out of the variability model, or features that are not part of the mainstream products.

Excludable (optional): Excludable features are similar to includable ones, but they are by default selected. They typically represent mainstream features that some (possibly low-end) products deselect. Both excludable and includable features are also optional.

Mandatory: Mandatory features represent features that cannot be deselected at the moment. However, they may have been optional earlier or may become optional in the future. In fact, mandatory features that can technically be deselected (a variation point is implemented), but remain mandatory throughout the product line lifecycle, are not desired.

Obsolete: Obsolete features are part of the variability model, but cannot be selected anymore. They typically represent features that have been replaced by new, improved features. Obsolete features typically still retain their variability type. Having obsolete features as part of the variability model makes their status very explicit to developers.

Removed: These features have previously been part of the variability model, but they have now been removed.

In this paper, we do not consider the dependencies between variation points, between variation points and variants, or between variants. Although managing these dependencies is important for any product line, considering them when trying to understand potential problems derived from choices in variability and default values would only further complicate the analysis. However, when taking possible corrective actions, the dependencies will have a huge impact on what is possible and how difficult such actions will be to realize. In the next section, we describe the observed evolution of the default values in real product lines.
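As an illustration, the rules of Table 1 can be written down in a few lines of code. The following is a minimal sketch only; the names (Rule, TABLE_1, baseline_configuration) and the example features are illustrative assumptions and not taken from the product line tools discussed in this paper.

```python
from typing import NamedTuple

class Rule(NamedTuple):
    """One row of Table 1."""
    variability: str           # variability type of the feature
    in_model: bool             # is the feature part of the variability model?
    selectable: bool           # can a product program (de)select it?
    selected_by_default: bool  # is it selected unless overridden?

# Default-value categories as defined in Table 1.
TABLE_1 = {
    "proposed":   Rule("N/A",        False, False, False),
    "planned":    Rule("as defined", True,  False, False),
    "includable": Rule("variable",   True,  True,  False),
    "excludable": Rule("variable",   True,  True,  True),
    "mandatory":  Rule("mandatory",  True,  False, True),
    "obsolete":   Rule("as defined", True,  False, False),
    "removed":    Rule("N/A",        False, False, False),
}

def baseline_configuration(features: dict[str, str]) -> set[str]:
    """Features selected when no default value is overridden."""
    return {f for f, category in features.items()
            if TABLE_1[category].selected_by_default}

# Illustrative feature model fragment (feature names are invented).
features = {
    "camera":      "mandatory",
    "fm_radio":    "excludable",   # mainstream, deselected only by low-end products
    "dual_sim":    "includable",   # niche, selected by a few products
    "wap_browser": "obsolete",
}
print(baseline_configuration(features))   # {'camera', 'fm_radio'} (order may vary)
```

The baseline configuration corresponds to the case where no default value is overridden during derivation; only deviations from it require explicit decisions.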

5 Practical examples of default value evolution

Different evolution patterns of default values can provide insight into how product line development happens within the company. In this section we ignore some frequent patterns that are outside the scope of the paper, especially cases where features are cancelled, i.e. moved directly from being proposed to being removed.


It is also important to understand that the selected scenarios and their order do not imply the extent to which each scenario takes place in practice. How many times one particular scenario exhibits itself depends on the product line. We have observed each of these scenarios in real industrial product lines.

5.1 A complete evolution cycle

The first scenario is what we call the complete evolution cycle. It represents a feature lifecycle that goes through all default values in sequence.

Proposed – Planned – Includable – Excludable – Mandatory – Excludable – Includable – Obsolete – Removed

In the complete cycle, features are first proposed and planned for the product line. Importantly, they are first introduced as includable features. This is important since introducing new features as optional allows product differentiation. Typically one lead product introduces the new feature, and this justifies the higher pricing of this new product. The includable default value highlights the fact that most products will (and should) not select the new feature. Eventually, the feature becomes mainstream and the default value changes to excludable. Rather than changing directly to mandatory, an excludable feature allows some, e.g. low-end, products to deselect the feature. The choice between includable and excludable provides much more context than having just optional features. The knowledge of the previous changes in default values and the expected lifecycle assists portfolio management and developers in making correct choices in their daily work. In addition, having default values set for each variation point enables a baseline configuration for the product line. However, it is also important to have mandatory as one step in the lifecycle. Mandatory features provide instant reuse and prove that the scope of the product line is not expanding too much. But it is equally important that mandatory features become optional again. Mandatory features that transition to optional features represent features that are becoming obsolete. But before this, they first become excludable and then includable. Eventually they become obsolete and are then removed. Even though this variability lifecycle sounds easy, in practice it rarely happens. The ability to transition variability through the whole lifecycle imposes huge requirements on the organization. Many product lines never remove variability. Variation points are only added. This constantly increases the complexity of the product line and eventually reduces the productivity of the product line. Transitioning from optional to mandatory and back to optional requires the ability to understand variability evolution. It represents the capability to identify when initial domain analysis results are no longer valid and what used to be variability is now in fact commonality.

5.2 From proposed to mandatory through optional

The second scenario describes a case that initially is identical to the complete evolution cycle, but once variability reaches mandatory, it remains that way forever. The fact that the feature is never removed from the asset base distinguishes this scenario from the first one.

Proposed – Planned – Includable – Excludable – Mandatory

There are multiple reasons why variability evolution may stop at mandatory. In the best case, it is a natural consequence of a product line that expands in scope, in terms of functionality, but in ways that make this functionality common to all product line members. For example, all mobile phones in the Nokia S60 product line have a camera. This means that the camera and gallery applications are both mandatory features for the S60 product line. However, previously both of these were optional. But there are other reasons for this type of evolution as well. Transitioning from mandatory to optional is often very difficult in reuse-heavy organizations. The pursuit of reuse makes these organizations very reluctant to remove commonality [8]. In this case, the mandatory features continuously increase in number, and often also in proportion. If the proportional share of mandatory features increases, then products derived from this product line become more and more similar to each other. This reduces the possibilities for differentiation. The high percentage of commonality gives organizations a false sense of productivity. Although high commonality does increase the amount of reuse, it at the same time reduces the value of the reusable assets. Some features are now added to products that do not really require them and whose customers are not willing to pay for them. This reduces the value of these features when the customers of higher-end products see those features being practically given away for free. This also increases the bill of materials (BOM) for embedded products and therefore reduces the available profit margin.


The large commonality portion is typically reused as a platform where all mandatory features make up the platform release. Having more and more mandatory features typically leads to an increase in the size of the platform, until products become reluctant to use the latest versions of the platform. In industry, this is usually referred to as the "bloated platform" problem. This may lead to a situation where products select different versions of the platform or of some features. In practice, this means simulating variability by having many simultaneous versions of mandatory features or of the whole platform. This phenomenon should be solved by increasing optionality and reducing the size of the platform.

5.3 From proposed to mandatory

The third scenario identifies a case where variability is introduced not as optional but rather as mandatory. This scenario is a variant of the second scenario and the previous discussion applies in this case as well.

Proposed – Planned – Mandatory

When the needed functionality of the product line is expanding rapidly and the number of future products is unclear, there is a temptation to make this functionality a part of the platform in order to have it easily reusable. Introducing functionality as mandatory for all products will maximize reuse and reduce the complexity in planning, analysis and implementation, since variability does not have to be considered. This approach can create a vicious cycle. Because most new functionality is allocated to the reuse organization, it has less and less time to carefully analyze the needed variability. Without proper variability analysis, there is no option to shift some implementation responsibility to products. The remaining options are creating all features as optional (discussed in the fourth scenario) or creating everything as a mandatory part of the product platform. Creating the majority of functionality initially as mandatory may cause major problems, since having the functionality as mandatory prevents products from deselecting it. This limits the ability to differentiate products. Preventing the deselection of features may unnecessarily increase a product's BOM and worsen its market position.

5.4 From proposed to optional

The fourth scenario shows a case where variability remains optional for the lifecycle of the product line.

Proposed – Planned – Optional

Product lines exist because of variability. In many product line organizations, variability is considered to be always beneficial. It is believed that the more variability can be designed into the product line, the more successful it will become. There are good reasons for this belief: variability increases the number of possible product configurations and therefore facilitates creating a more diverse product portfolio. However, there are no guarantees that the variability supports creating products that are significantly different from the perspective of their customers. Often the variability created only for flexibility and possible future products only increases complexity but does not provide meaningful variability for customers. That is, the variability does not provide differentiation. Another valid reason for continuously increasing variability is an attempt to reduce the BOM to as low a cost as possible. To limit BOM cost, only the essential features should be included in the product. Despite good intentions, this approach often fails. Minimizing the BOM for all products of the product line requires a deep understanding of both component and feature dependencies in addition to all possible configurations of the product line [9]. In practice, BOM optimization becomes more difficult when the number of variation points increases. Typically, a high degree of flexibility results in a lack of documentation of feature dependencies. Although the components may be designed (at a cost to productivity) to offer a high degree of flexibility, in reality realizing the customer-visible features requires complex collaboration between components. Having an architecture that does not restrict the way components can be configured makes analyzing and understanding the dependencies very difficult [8]. If most variability is excludable, then one base configuration exists and the product line has a common basis. This means that the product line is experiencing the near-commonality phenomenon. A lot of variability exists that is nearly mandatory, but is still deselected by some products. In fact, excludable can be claimed to be the mandatory of near-commonality product lines. If most optional variability is of type includable, then there is no natural base configuration. The approach resembles reusable libraries that contain mostly independent functions. In product lines, this may imply that the scope of the reusable asset base is too large and steps should be taken to reduce the scope. In the next section, we will explain how identifying the dominant evolution scenarios helps in managing product lines, identifying potential problems and considering corrective actions.
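The scenario sequences above can also be treated as data. The following is a minimal sketch, under the assumption that feature histories are recorded as lists of the default-value states of Table 1; the function names and the prefix-matching rule are illustrative choices, not part of the case company's tooling.

```python
from collections import Counter

# The four evolution scenarios of this section, written as state sequences.
# For simplicity, an optional introduction is written as "includable"; an
# excludable introduction would be handled analogously.
SCENARIOS = {
    "complete evolution cycle": [
        "proposed", "planned", "includable", "excludable", "mandatory",
        "excludable", "includable", "obsolete", "removed",
    ],
    "from proposed to mandatory through optional": [
        "proposed", "planned", "includable", "excludable", "mandatory",
    ],
    "from proposed to mandatory": ["proposed", "planned", "mandatory"],
    "from proposed to optional": ["proposed", "planned", "includable"],
}

def matching_scenarios(history: list[str]) -> list[str]:
    """Scenarios whose sequence begins with the observed history."""
    return [name for name, seq in SCENARIOS.items()
            if seq[:len(history)] == history]

def primary_evolution_type(histories: list[list[str]]) -> str:
    """Most frequent match over a set of feature histories; among several
    matches, the scenario the history has come closest to completing wins."""
    counts = Counter()
    for history in histories:
        matches = matching_scenarios(history)
        if matches:
            best = min(matches, key=lambda n: len(SCENARIOS[n]) - len(history))
            counts[best] += 1
    return counts.most_common(1)[0][0] if counts else "unclassified"

histories = [
    ["proposed", "planned", "mandatory"],
    ["proposed", "planned", "mandatory"],
    ["proposed", "planned", "includable", "excludable", "mandatory"],
]
print(primary_evolution_type(histories))   # from proposed to mandatory
```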


Table 2. Primary evolution scenario types with possible corrective actions

Complete variability lifecycle
  Primary question:    N/A
  Secondary question:  N/A
  Root cause:          N/A
  Corrective action:   None needed

From proposed to mandatory through optional
  Primary question: Are some currently derived product line variants using
                    different versions of the platform or other mandatory assets?
  If yes:
    Root cause:        Simulating variability by versioning mandatory assets
    Corrective action: Prevent versioning of mandatory assets by replacing them
                       with excludable features
  If no:
    Secondary question: Do some products get unnecessary features?
    If yes:
      Root cause:        Bloated platform
      Corrective action: Remove unnecessary features and increase variability
                         for the rest
    If no:
      Root cause:        Expanding scope in functionality
      Corrective action: None needed if profitability is maintained

From proposed to mandatory (1)
  Primary question: Can products achieve differentiation?
  If yes:
    Root cause:        New, truly common, non-differentiating functionality is
                       introduced because of the expanding scope
    Corrective action: None needed if the scope expansion also brings
                       increasing profits
  If no:
    Root cause:        Premature commitment to mandatory shared assets
    Corrective action: Introduce new features first as optional, ideally as
                       includable and then excludable

From proposed to optional (includable/excludable)
  Primary question: Are there groups of products within the product line that
                    define the same sets of features as excludable?
  If yes:
    Secondary question: Is this true for most of the assets?
    If yes:
      Root cause:        There are effectively multiple product families in the
                         asset base
      Corrective action: Introduce a hierarchical product line [4]
    If no:
      Root cause:        Heavily expanding scope in both functionality and
                         products
      Corrective action: Use multiple product line models for different
                         parts [8]
  If no:
    Root cause:        Near-commonality driven by an expanding scope mainly in
                       terms of products
    Corrective action: Introduce compositional product line development [5]

(1) Note that the questions for "From proposed to mandatory through optional" are also applicable here. In this case, the real problem is not that features are not introduced as optional, but the increasing amount of mandatory features in the platform.


6 Guidelines on evaluating product lines based on the default value evolution

For practitioners, there are many different ways to organize the sharing of software assets. When initiating a new product line, several research results can provide guidance. Initially, software product lines always seem to be reasonably simple and controllable. However, after a few years the scope of the product line becomes larger, as more and more products are introduced and a wide range of functionality is added to the asset base. Then solving problems and optimizing reuse becomes very difficult. Also, the support provided by the research community typically deals with specific product lines, and practitioners must try to evaluate whether the results are applicable to their situation.

Although default values help to understand and control product line evolution, their use can be even more important for evaluating existing product lines. Based on our experience, we claim that the primary evolution type of default values can be used to identify a possible root cause for problems in the way the company approaches product line development. In addition, we can propose a corrective action to alleviate the problems identified in the first step. Most product lines exhibit many different types of evolution scenarios. By primary evolution type we mean the dominant way in which the evolution happens in the whole product line or in some coherent subset of assets. The primary evolution type heavily influences how easily new differentiating products can be made, whether the benefits of reuse can be realized and whether the product line remains flexible.

Table 2 describes an analysis of the consequences of having a certain primary evolution type. For each primary evolution scenario, a number of questions are presented. Based on the answers, we can estimate a potential root cause for problems in the product line. We do not claim that this is a complete solution for all possible problems, but it is valid for the software product lines that the authors have experience with. Table 2 provides guidelines on addressing potential problems and provides relevant starting points for corrective actions. By corrective actions we mean a possible roadmap for the future. In large product lines, assets cannot be changed overnight and a plan for future changes is needed.
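To make the use of Table 2 concrete, its decision logic can be sketched as a simple function. This is only an illustrative encoding of the table as reconstructed above; the function name, the question phrasings used as dictionary keys and the exact return strings are assumptions, not part of any tool at the case company.

```python
def diagnose(primary_type: str, answers: dict[str, bool]) -> tuple[str, str]:
    """Return a (root cause, corrective action) pair for a primary evolution
    type, following the structure of Table 2. Unanswered questions are
    treated as 'no'."""
    def yes(question: str) -> bool:
        return answers.get(question, False)

    if primary_type == "complete variability lifecycle":
        return ("N/A", "none needed")

    if primary_type == "from proposed to mandatory through optional":
        if yes("variants use different versions of the platform or mandatory assets"):
            return ("simulating variability by versioning mandatory assets",
                    "replace versioned mandatory assets with excludable features")
        if yes("some products get unnecessary features"):
            return ("bloated platform",
                    "remove unnecessary features and increase variability for the rest")
        return ("expanding scope in functionality",
                "none needed if profitability is maintained")

    if primary_type == "from proposed to mandatory":
        if yes("products can achieve differentiation"):
            return ("truly common, non-differentiating functionality due to expanding scope",
                    "none needed if the scope expansion also brings increasing profits")
        return ("premature commitment to mandatory shared assets",
                "introduce new features first as optional, ideally includable then excludable")

    if primary_type == "from proposed to optional":
        if yes("groups of products define the same sets of features as excludable"):
            if yes("this holds for most of the assets"):
                return ("effectively multiple product families in the asset base",
                        "introduce a hierarchical product line [4]")
            return ("heavily expanding scope in both functionality and products",
                    "use multiple product line models for different parts [8]")
        return ("near-commonality driven by expanding scope mainly in terms of products",
                "introduce compositional product line development [5]")

    raise ValueError(f"unknown primary evolution type: {primary_type!r}")

# Example: features go straight to mandatory and products cannot differentiate.
cause, action = diagnose("from proposed to mandatory",
                         {"products can achieve differentiation": False})
print(cause, "->", action)
```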


7 Discussion and conclusions

Software product lines can provide great leverage of R&D investments. Nokia has managed to increase the number of products released per year by an order of magnitude, while increasing the complexity and feature set of its products simultaneously. Many other companies have similar results [10], resulting not only in efficient R&D organizations, but also in an extended period of business growth for the company due to the competitive advantage created through the adoption of software product lines. Over time, however, software product lines increase in size and complexity due to design erosion, but especially due to unmanaged software variability. As a consequence, the cost of product derivation, platform evolution and the transfer of features from product-specific code to the platform starts to increase, and this leads to a decreasing business benefit of the software product line. Earlier research by one of the authors [11] confirms this specifically for product derivation. However, even in cases where the variability is managed appropriately, without automated support for product derivation the complexity due to the number of variation points, the number of variants and the dependencies between these can still be mind-boggling.

In the literature, including work by one of the authors [12], different approaches to the product derivation process are distinguished. The first approach is assembly centric, which either employs a constructive or a generative style. The constructive style first derives the product-specific architecture, then selects the appropriate components and then configures each component. The generative style uses, as the name implies, a model-driven engineering approach to generate the product from selections at the model level. The second approach to product derivation uses configuration selection, and three tactics can be distinguished. The first tactic is to use an old configuration, i.e. the configuration of a similar product derived earlier, as a basis; this configuration is changed to reflect the difference between the products. The second tactic is to use a reference configuration that has been defined for the software product line. The reference configuration defines a reference product representing a typical or average product in the product line. The third tactic is to use a base configuration, which is a partial configuration capturing the typical configuration of a group of products in the product line. During product derivation, the partial configuration is completed by selecting settings for the unconfigured variation points.

In this paper, we propose the use of default values as a mechanism to significantly reduce the complexity of product derivation. Placed in the context of [12], this can be thought of as using a reference configuration, but in this paper the link between the configuration and the variability model is presented in much more detail. Using default values, the engineers deriving a product only have to consider the variation points that define how the product is different from the standard configuration (a small sketch of this idea is given at the end of this section). In the typical case, this allows the engineers to ignore the majority of the variation points. This approach has been used successfully in Nokia (S40 product line) for over 10 years.

This paper also introduced a different view on commonality. The traditional belief among researchers and practitioners operating simple product lines has been that commonality is never associated with a variation point (see e.g. [13]). While intuitively this seems natural, in practice mandatory features are often associated with variation points and supported by implemented variability mechanisms. If a mandatory feature has been optional, or will potentially be optional in the future, there is no reason to remove the variation point. However, removing even obsolete variation points seems to be a problem. Although the use of default values significantly decreases product derivation effort, it does not absolve the R&D organization from investing a part of its budget in architecture and code refactoring to fight design erosion and to remove obsolete variation points and functionality. If the organization ignores this, the cost of product derivation will continue to rise independent of the adoption of default values. One way to find which variation points to remove and how to optimize the variability in a product line has been reported by Loesch and Ploedereder [14].

The evolution of default values provides great insight into product line development. Even the best organizations may end up in problems despite their efforts in refactoring the architecture and optimizing variability. This may happen because the current way of operating product lines may not match the changing environment. Expanding scope in terms of functionality and products may force the company to redesign the way in which product lines are operated. We proposed a guideline for evaluating the current state of product line development, identifying potential root problems and suggesting corrective actions. Based on our experience, we believe that the use of default values improves understanding of the current state of the product line, improves feature management, and corrects problems by actually removing irrelevant variability. We suggest that other researchers and practitioners verify our suggestions in their own projects.
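As referred to above, the derivation idea behind default values can be illustrated with a small sketch. The feature names and the simple dictionary-based configuration model below are invented for illustration and do not reflect the actual configuration tooling of the case company.

```python
def derive_product(defaults: dict[str, bool],
                   overrides: dict[str, bool]) -> dict[str, bool]:
    """Start from the default selections of all variation points and apply
    only the product-specific overrides. Engineers only have to touch the
    variation points listed in `overrides`."""
    unknown = set(overrides) - set(defaults)
    if unknown:
        raise ValueError(f"unknown variation points: {sorted(unknown)}")
    return {**defaults, **overrides}

# Default selections implied by the default values (excludable -> True,
# includable -> False); feature names are invented for the example.
defaults = {"fm_radio": True, "gallery": True, "dual_sim": False, "push_email": False}

# A low-end product only records its deviations from the defaults.
low_end = derive_product(defaults, {"fm_radio": False})
print(low_end)   # {'fm_radio': False, 'gallery': True, 'dual_sim': False, 'push_email': False}
print(1, "of", len(defaults), "variation points decided explicitly")
```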

8 References

[1] J. Bosch, "The challenges of broadening the scope of software product families", Communications of the ACM, 49(12), 2006, pp. 41-44.
[2] K. C. Kang, S. G. Cohen, J. A. Hess, W. E. Novak, and A. S. Peterson, "Feature-Oriented Domain Analysis (FODA) Feasibility Study", Software Engineering Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA, Technical Report CMU/SEI-90-TR-21, 1990.
[3] R. R. Lutz, "Toward Safe Reuse of Product Family Specifications", Proceedings of the 1999 Symposium on Software Reusability (SSR'99), ACM Press, 1999, pp. 17-26.
[4] J. Bosch, "Software product lines: organizational alternatives", 23rd International Conference on Software Engineering (ICSE 2001), 2001, pp. 91-100.
[5] C. Prehofer, J. van Gurp, and J. Bosch, "Compositionality in software product lines", in Emerging Methods, Technologies, and Process Management in Software Engineering, A. De Lucia, F. Ferrucci, G. Tortora, and M. Tucci, Eds., Wiley, 2007, pp. 21-42.
[6] J. Bosch, "Software Product Families in Nokia", Software Product Lines Conference (SPLC 2005), Springer-Verlag, 2005, pp. 2-6.
[7] J. E. van Aken, "Management research as a design science: Articulating the research products of mode 2 knowledge production in management", British Journal of Management, 16, 2005, pp. 19-36.
[8] J. Savolainen, J. Kuusela, M. Mannion, and T. Vehkomäki, "Combining different product line models to balance needs of product differentiation and reuse", in 10th International Conference on Software Reuse, LNCS 5030, H. Mei, Ed., Beijing, China, Springer, 2008, pp. 116-129.
[9] J. Savolainen, I. Oliver, V. Myllärniemi, and T. Männistö, "Analyzing and Re-structuring Product Line Dependencies", in 31st Annual International Computer Software and Applications Conference (COMPSAC), IEEE, 2007, pp. 569-574.
[10] SEI, "Product Line Hall of Fame", Software Engineering Institute, 2008.
[11] S. Deelstra, M. Sinnema, J. Nijhuis, and J. Bosch, "Experiences in software product families: Problems and issues during product derivation", in Third Software Product Line Conference (SPLC 2004), LNCS 3154, Springer, 2004, pp. 165-182.
[12] S. Deelstra, M. Sinnema, and J. Bosch, "Product derivation in software product families: a case study", The Journal of Systems and Software, 74, 2005, pp. 173-194.
[13] L. Geyer and M. Becker, "On the influence of variabilities on the application engineering process of a product family", in Second International Software Product Line Conference (SPLC), LNCS 2379, G. Chastek, Ed., San Diego, CA, USA, Springer, 2002, pp. 1-14.
[14] F. Loesch and E. Ploedereder, "Optimization of variability in software product lines", in 11th International Software Product Line Conference (SPLC), IEEE, 2007, pp. 151-160.