Empir Software Eng (2007) 12:647–660
DOI 10.1007/s10664-007-9047-3

INDUSTRY EXPERIENCE REPORT

Philips experiences in global distributed software development

Rob Kommeren & Päivi Parviainen

Published online: 2 September 2007
© Springer Science + Business Media, LLC 2007
Editor: Forrest Shull
Abstract Global software development is increasingly common. The main expected benefits are improvements in time-to-market efficiency and access to greater and less costly resources. A number of problems still have to be solved before the full potential of global development can be obtained. This paper describes the experience of over 10 years of globally distributed development at Philips, derived from about 200 projects. We discuss the experience and lessons learnt from multi-site development. The main lesson learned is that explicit agreements and ways of working should be defined for the areas needing the most attention: team coordination and communication, requirements and architectures, integration, and configuration management. In addition, we discuss the experience gained from subcontracting software development to suppliers. The main lesson learned from subcontracting is the need for explicit attention and ways of working with respect to the selection of suppliers, the specification of the work to be subcontracted, and the establishment and content of the contract.

Keywords Philips · Software development · Globally distributed software
R. Kommeren
Philips Applied Technologies, P.O. Box 218 / SFJ-7, 5600 MD Eindhoven, The Netherlands
e-mail: [email protected]

P. Parviainen
VTT, Technical Research Centre of Finland, Espoo, Finland

P. Parviainen (*)
VTT Technical Research Centre of Finland, P.O. Box 1100, 90571 Oulu, Finland
e-mail: [email protected]
1 Introduction

The highly competitive business environment, with the ever increasing functionality of products implemented in software, places intense demands on delivering higher quality software faster. Companies need to use their existing resources as effectively as possible, and they also need to employ multiple development teams on a global scale. The ability of these teams to collaborate has become a critical factor in the software development life cycle. Globally distributed development (Ebert and De Neve 2001; Herbsleb and Moitra 2001; Damian et al. 2004) has a number of potential benefits, including shortening time-to-market cycles by using time zone differences and improving the ability to respond quickly to local customer needs. Globally distributed software development also allows organizations to benefit from access to a larger, qualified resource pool with the promise of reduced development costs. Another potentially positive impact of globally distributed development is innovation: mixing developers with different cultural backgrounds may trigger new ideas. On the other hand, several studies have indicated problems in distributed development (Damian et al. 2004; Boland and Fitzgerald 2004; VA Software 2005), including:
– Poor visibility and control of remote activities,
– Inadequate communication, collaboration and coordination across individuals, teams, time zones and projects,
– Insufficient (or lacking) knowledge and asset management capabilities,
– Language and cultural differences,
– Trust factors, and
– Lack of shared contextual awareness.
Several experience reports on distributed software development have been published over the years, for example from Siemens (Bass and Paulish 2004), Motorola (Battin et al. 2001), Alcatel (Ebert and De Neve 2001), and Lucent Technologies (Herbsleb et al. 2001; Herbsleb and Grinter 1999). Surveys of several projects have also been published (Komi-Sirviö and Tihinen 2003; Paasivaara and Lassenius 2004; Falls 1995). The published experiences are often case studies of one or a few projects, surveys, or focused on a specific topic such as configuration management or incremental development. Only a few comprehensive lessons-learned reports have been published. In this paper we discuss Philips' experience of over 10 years of distributed development, thus providing an aggregate of experience and lessons learnt from a long-term and large-scale development activity. The experience and lessons learnt discussed in this paper have recurred in several projects over time, in different settings, and have been observed by different people; they can therefore be seen as general, common issues occurring in, and because of, distributed development.

1.1 Experience Collection

The Philips experience presented in this paper has mainly been derived by means of audits and evaluations of consumer electronics product development projects (TV, DVD). These products have been developed in increasingly distributed environments, i.e. at various locations all over the world, including third-party products and subcontracting of software development to third parties. The data originates from Philips' Consumer Electronics (CE) division, and was gathered by a central software process office by means of a standardized questionnaire. The questionnaire was filled out by all software development groups of Philips Consumer Electronics, on average 15 groups. Starting in 1999, the questionnaire was filled out twice a year.
From 2002 onwards it was filled out once a year. Groups would provide data on five projects (small groups) to ten projects (large groups). An exception was the software development center in India, which would provide data on about 25 projects. This paper concerns data collected in the period 1999 to 2005, covering about 200 projects in total. The questionnaire covered product roadmap, staffing, software process maturity and improvement, software development environment, effort consumption, project data and product data such as size and defect numbers. Each category contained 10 to 20 questions. About 80% of the questions were open; the rest were closed questions on issues like process maturity and development experience. The CE central software process office collected and analysed the data, including statistical analysis. The results of the analysis were discussed twice a year with the software development managers responsible for the projects, leading to shared conclusions. The issues presented in this paper are based on the conclusions from these meetings. For instance, problems related to planning and progress reporting, such as the lack of a proper work breakdown and of estimation techniques, were combined into a general problem named lack of management capabilities. Statistical analysis was done for issues like lead-time estimation accuracy, effort estimation accuracy, productivity, and defect density. The results of the statistical analysis made clear that the effort distribution figures for multi-site development differed from those for one-roof development; e.g., effort spent on overhead (communication etc.) and integration testing was tens of percentage points higher than in one-roof development. These results prompted further investigation of problems related to multi-site development, which was done in subsequent steps and resulted in the issues presented in this paper. In the following sections we explain Philips' motivation for distributed development, introduce the environment in which the experience has been collected, and explain the terminology used. Then we discuss the experience and lessons learned, first for multi-site projects, meaning projects developed in-house and distributed over several Philips sites, and then for supplier management. Finally, we discuss the conclusions and implications of the presented experience.
2 Philips Reasons for Distributed Software Development

Software size in Philips consumer products typically follows a Moore's-law-like trend: every 6–7 years the software size grows by a factor of 10, as shown in Fig. 1. For instance, the size of TV software grew from 10 KB in 1986 to 100 KB in 1992 and to 1 MB in 1998. Additionally, the functionality and complexity of these products are growing rapidly. Nowadays complex user interfaces, connectivity, 'intelligence', and configurability of the system are common features in appliances that used to have no more than a start–stop button. To develop this kind of software, an increasing number of software staff is required, which is not always available locally. Moreover, the growing functionality and complexity call for specific domain knowledge, and require building and maintaining competences in dedicated teams: it is simply unaffordable to start new product development by building these competences from scratch. Consequently, specific knowledge and product components are often obtained externally from (remote, third-party) groups that are specialized in the development and supply of those software components and that are also often less expensive (e.g., in Asia) than in-house development. The company's make-or-buy policy results in a trend of growing incorporation of third-party software in Philips products, as shown in Fig. 1. The figure also shows clearly that Philips Consumer Electronics hardly develops new software in-house.
This trend also leads to an increasing importance of supplier management, concerning both the incorporation of COTS (Commercial Off the Shelf) software and the subcontracting of software development to third parties.
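As a rough illustration of the growth law stated above, the 'factor of 10 every 6–7 years' can be written as size(t) = size(t0) x 10^((t - t0)/p), with a period p of about 6–7 years. The following small Python sketch is our own illustration, not part of the original report; with p = 6 it reproduces the TV software sizes quoted in Section 2.

# Minimal sketch (illustrative only): extrapolating software size under the stated
# growth law of roughly a factor of 10 every 6-7 years; p = 6 matches the quoted figures.
def extrapolate_size_kb(size_kb_at_t0, t0, t, tenfold_period_years=6.0):
    """Projected size in KB, assuming a tenfold increase every tenfold_period_years."""
    return size_kb_at_t0 * 10 ** ((t - t0) / tenfold_period_years)

for year in (1986, 1992, 1998):
    print(year, round(extrapolate_size_kb(10, 1986, year)), "KB")
# Prints: 1986 10 KB, 1992 100 KB, 1998 1000 KB (i.e., 1 MB)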
3 The Environment

Consumer Electronics (CE) is one of the product divisions of Philips. In 2005, the CE division had 992 software staff members, including both in-house and contracted staff; the ratio of in-house personnel to contracted staff was about 1:1. The software development of Consumer Electronics is done at over 10 sites, and the products include high-volume electronics products like TV sets and DVD recorders. In 2005, CE was mainly based in Asia (73%) and in Europe (27%). Projects of more than 100 person years, with a duration of 1 to 2 years, are increasingly common. The smallest projects were about 5 FTE, while most projects were 10–20 FTE (Full Time Equivalent, meaning a person year). The challenges presented in this paper typically came from the large projects, larger than 20 FTE, as they were the ones carried out in collaboration and in a distributed fashion. Those projects would have project-defined processes, based on existing best practices from the involved sites (at a minimum CMM level 2 compliant). The purpose of the project-defined processes was to standardize the projects' way of working for a number of basic items like progress reporting, configuration management, and change control. The product development projects of the Philips Consumer Electronics division follow defined software development processes. The software development processes fit into a standard product creation process called SPEED and basically follow a waterfall lifecycle model. The lifecycle is defined in such a generic way that it allows for incremental development of the functionality; in fact, in most cases the functionality is developed in increments. In 2005, over 50% of the software development staff was working in groups rated at CMM level 5, and about 30% in groups rated at levels 3 and 2. The distributed software development projects are typically allocated to three to five sites.
Fig. 1 Extrapolated external software size in Philips consumer products (software size in MB plotted for 1990–2010, distinguishing external software from Philips software; growth annotation 10x = 7y, i.e., a factor of 10 every 7 years; MB = megabyte, y = year)
The terminology regarding distributed software development used in this paper is explained next. Distributed development, as discussed in this paper, involves two or more sites, departments or companies that work together to develop a product, where one of the parties has the main responsibility for the final end product. Distributed development can be a purely 'in-house' (single company, multi-site) activity or it can also involve customer–supplier relationships. In customer–supplier relationships, the customer is the organisation buying the software work (and technology and knowledge) from the supplier. The work itself may be based on given requirements, or on modification of existing COTS or open source code. The customer may also hire workers from the supplier, also called body-shopping (not further discussed in this paper). The supplier is the organization that provides the software work to the customer. In this paper, a distinction is made between supplier management and multi-site projects (see Fig. 2). Philips policy is to organize work in multi-site projects by allocating specific tasks to particular groups within Philips with the required competences, e.g., development of the user interface to group A, development of the data management to group B, and development of the driver software to group C. Those projects are virtual organisations in which the overall project leader has direct control over the groups. The difference between multi-site development and the customer–supplier relationship (or supplier management) is that in supplier management the overall project leader does not have direct control over the work carried out by the supplier during development; the supplier is given an assignment and is responsible for how the development of that assignment is organized. The distinction is conceptual: in practice, clear agreements have to be made about the controls the overall project leader really has over the remote teams. We discuss the experience and lessons learnt for both multi-site development and supplier management in the following sections. When we talk of distributed development, we mean both multi-site development and development involving external suppliers (supplier management).

Fig. 2 Distributed development context
4 Experience with Multi-site Software Projects

Multi-site software development provides an important source of experience in distributed software development. In this case, the project is organized in multiple teams within Philips, practically always globally distributed. All teams report to one overall software project leader. This section lists a number of problems and solutions encountered in multi-site software development. The challenges presented were typically from the large projects, larger than 20 FTE, as they were the ones carried out in collaboration and in a distributed fashion. The problems encountered in Philips multi-site software development have been grouped in the following categories:

Team coordination and communication
– Basic management capabilities were not present in teams,
– Dependencies between teams were not made explicit and managed,
– Acceptance procedures of mutual deliveries were not defined,
– Status of teams was not pro-actively checked,
– Escalation mechanisms were not defined,
– Learning curve was underestimated,
– Need for explicit communication was underestimated, and
– Loss of efficiency due to multiple teams.

Managing requirements and architectures
– High impact of unstable requirements,
– No unified understanding of requirements and architecture was reached, and
– Architecture status was not managed.

Integration
– Responsibilities were not assigned clearly and integration strategy and plan were missing,
– Integration effort and time were underestimated,
– Required knowledge and skills were not present in integration team, and
– Integration was not centrally controlled.

Configuration management
– Not enough preparation time taken to set up CM infrastructure,
– Competent configuration managers were not available, and
– Change management procedures were not defined.
The following sections will address the problems and solutions encountered in each of these areas in more detail.

4.1 Team Coordination and Communication

The following problems encountered, causing loss of efficiency instead of the intended efficiency gain, concern team coordination and communication:
– Basic management capabilities were not present in teams,
– Dependencies between teams were not made explicit and managed,
– Acceptance procedures of mutual deliveries were not defined,
– Status of teams was not pro-actively checked,
– Escalation mechanisms were not defined,
– Learning curve was underestimated,
– Need for explicit communication was underestimated, and
– Loss of efficiency due to multiple teams.
Each team should manage its own internal operation locally, and accordingly it should have basic management capabilities. This condition is, however, often (in more than 50%
of the cases) not met in practice. As a minimum requirement, all development teams involved should have CMM level 2 process maturity, as this covers the basic management practices. For example, overall project management was in a number of cases practically impossible, because some of the teams involved lacked the capability to produce proper work breakdowns, adequate estimates and corresponding progress reports. A particular basic management control problem is the lack of configuration management. For example, if the overall project delivers its software by upgrading functionality in multiple increments, it should be able to fall back on previous working baselines if a next increment breaks the build. If even one of the contributing teams lacks configuration management and consequently cannot reproduce its part of that particular baseline, the project is in trouble. The dependencies between teams are often not made explicit. In eight out of ten cases, the management structure of a multi-site development project stops with the organizational chart. The individual teams, however, operate on a peer-to-peer basis, and their mutual dependencies should be managed at a higher level, i.e., by the overall project leader. Most overall project plans lack explicit descriptions of the mutual dependencies and their management, and the corresponding management practices are also missing in 80% of the cases. At least the mutual deliveries should be clearly stated, including "who, what, when, and to whom", and then also tracked accordingly during the project. The responsibilities regarding the acceptance of mutual deliveries are unclear or inaccurately defined in 70–80% of the projects. In addition, acceptance procedures and criteria are also missing. The authority to accept deliveries should lie with the receiving parties, as they are the ones depending on the deliveries. Mutually agreed acceptance procedures should make clear what steps are to be taken, and what (objectively stated) criteria should be met. These steps should include (planned) verification of deliverables by the parties involved, e.g., intermediate reviews and tests. For example, in one multi-site project involving five software teams, development teams would submit software to the central test and integration team without applying acceptance criteria. As a result, the test and integration activities overran planned effort and lead time, partly due to defects that would have been caught by the development teams if proper component tests had been run. The status of teams is not pro-actively checked at the overall project level in more than half of the cases. The project teams basically operate in a self-controlled mode; deviations from their plans may, however, have a major impact on other parties and on the overall project. This requires pro-active checking of the teams by the overall project leader. For example, in one project, one of the teams reported a delay, but the overall project leader did not account for it in the overall project schedule. As a result, the schedule overrun was signaled too late, causing problems for manufacturing and product release. Local circumstances may hamper the contribution of an individual team to the overall project. Escalation mechanisms with explicit management involvement of the collaborating parties are thus required to cope with this problem. These mechanisms should be predefined, yet in practice they are not in place in more than 80% of the cases.
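To make the "who, what, when, and to whom" of mutual deliveries and their acceptance criteria discussed above concrete, the following sketch shows one minimal way such agreements could be recorded and tracked. It is our own illustration, not a Philips tool; the team names and criteria are hypothetical.

# Minimal sketch (illustrative): recording a mutual delivery between teams together
# with explicit acceptance criteria, so the receiving team holds the acceptance authority.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Delivery:
    supplier_team: str                      # who delivers
    receiving_team: str                     # to whom
    deliverable: str                        # what
    due: date                               # when
    acceptance_criteria: list = field(default_factory=list)
    accepted: bool = False

    def accept(self, criteria_met):
        """The receiving team accepts only if every agreed criterion is met."""
        missing = [c for c in self.acceptance_criteria if not criteria_met.get(c, False)]
        if missing:
            raise ValueError(f"cannot accept {self.deliverable}: unmet criteria {missing}")
        self.accepted = True

# Hypothetical example of use
ui_drop = Delivery("UI team", "Integration team", "user interface component, increment 3",
                   date(2005, 6, 1),
                   acceptance_criteria=["component tests passed", "interface spec implemented"])
ui_drop.accept({"component tests passed": True, "interface spec implemented": True})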
The learning curve in multi-site software development is practically always underestimated. It may take years to learn how to work together, to master the domain, and to understand mutual sub-domains. For example, in 1996 Philips started to use low-cost software engineers from its software laboratory in India to develop TV software. It took more than 5 years before the group in India had enough application domain knowledge to co-operate effectively with the TV software integration center in Bruges, Belgium. The need for explicit communication is also usually (in 80% of the cases) underestimated. Persons who need to work together may have wrong assumptions about each other's approaches, attitudes, ways of working, and results when they have never really met. To enable smooth
and efficient collaboration, cultural differences often need to be identified and dealt with accordingly. Moreover, there seems to be a certain threshold for contacting somebody you have never met and whom you only know through mail correspondence and reports. Philips' experience shows that the performance level of inter-team cooperation increases dramatically once the teams have physically met. Commonly applied communication practices like e-mail, instant messaging (IBM Lotus Sametime), team rooms, teleconferences, and videoconferences are simply not enough, yet a physical meeting is often organized only very late in the project, when the project is in real trouble. It can thus be concluded that overall project efficiency is likely to be higher if a meeting is arranged at an early stage of the project, for instance in the form of a kick-off meeting where all parties are present. Overall, Philips' experience shows that working with multiple teams leads to a loss of efficiency. People who work together remotely, across different cultures and ways of working, need to communicate more to really understand each other. The costs associated with communication and meeting physically are often underestimated. Also, in some cases, effort distribution figures indicate a productivity loss in multi-site projects of close to 50% compared to 'one-roof' projects.

4.2 Managing Requirements and Architectures

The following problems have been encountered regarding the management of requirements and architectures:
– High impact of unstable requirements,
– No unified understanding of requirements and architecture was reached, and
– Architecture status was not managed.
The impact of unstable requirements is generally high for any software development project. In distributed development, the consequences of unstable requirements may even be dramatic because of the leverage effect caused by the multiple levels of control. A change in requirements first has to be analyzed at the top level and then transferred for further analysis or elaboration to the (remote) teams involved. All teams have to take into account the consequences of the change for their individual development trajectories in terms of time, effort and functionality, as well as the consequences for the interfaces with other teams. Finally, all of that has to be fed back in order to determine the consequences at the overall project level. It is clear that a lot of effort is easily spent on a single change, resulting in significant delay and a fair chance of introducing errors and misunderstandings; with unstable requirements and numerous changes, this becomes crippling. Experience shows that much effort has to be spent on the right involvement in, and understanding of, requirements analysis by all teams involved. Requirements have to be discussed again and again to achieve a unified interpretation, resulting in optimal designs and software components that can be smoothly integrated. A lack of common understanding of requirements may result in poor design decisions and lead to dramatic delay in the integration phase of the project. The development, maintenance, and evolution of the software architecture appear to be crucial, especially with respect to the definition of interfaces. Lack of continuous and active management of the architectures, including change control with representation of all parties involved, is likely to lead to major problems, which tend to be detected only during the integration stage of the project. Moreover, experience shows that active and continuous communication regarding the architecture is badly needed. The architecture needs to be explained again and again to achieve an accurate and common understanding of it among
all parties involved. In addition, the lack of stable requirements results in unstable architectures, with a comparably dramatic impact on project performance.

4.3 Integration

The following problems have been encountered with integration:
– Responsibilities were not assigned clearly and integration strategy and plan were missing,
– Integration effort and time were underestimated,
– Required knowledge and skills were not present in integration team, and
– Integration was not centrally controlled.
Dividing the work over multiple teams introduces the danger of a lack of explicit attention to the integration of the results of the various teams. Clearly assigned responsibilities, e.g., by appointing an integration architect, and a well-elaborated integration strategy and plan are missing in more than half of the cases. Lack of a good integration strategy can lead to major overruns in integration effort and lead time. For instance, a particular project doing first-of-a-kind development had a bottom-up, component-based integration strategy: building the system by adding fully functional components one by one. Defects in components, in combination with a lack of understanding of the entire system (being first-of-a-kind), led to major overruns in lead time and effort in the initial integration steps. A preferable integration strategy would have been to integrate the system top-down, starting with a 'frame' of the system that fulfils the basic functionality, and gradually filling the frame by adding and replacing components. The advantage of this approach is that it is always possible to fall back on an operational version of the system, and it enables early validation of the architectural concepts. The required effort and time for integration are practically always underestimated. This is partly caused by the lack of a good work breakdown of the integration work, so that it is not clear what all has to be done, and partly by the naive assumption of project teams that everything will be right the first time. Another condition for a good integration plan is the availability of a stable architecture in which the integration requirements of the components are clearly specified. As may be clear from the previous section, this condition is often not met. The integration plan should account for all the things that can go wrong, for multiple reasons as discussed earlier, and that are only revealed during integration. Integration teams often do not have the required knowledge and skills. Competent resources are allocated to development teams, and integration and testing are perceived to be less attractive, low-level jobs. As a result, integration teams often lack the domain and system expertise needed for mastering the complexity of integrating the components into one system. Moreover, the architects are generally poorly involved in the integration phase, even though they are the few who have the badly needed overview of the entire system. Site experts are not actively involved in the integration of the system, as their participation is normally not planned for. A 'throw it over the wall' attitude is usually encountered in distributed software projects: teams think they are only responsible for the delivery of their components, and do not feel any responsibility for the realization of the whole system. Integration activities take place at various sites, introducing inefficiency and extra complexity when finally putting the system together. This can be caused by the lack of an explicit integration plan, for instance. Components may be integrated at multiple sites, but probably not in the same configuration. The exact status of the integration steps taken is also often unclear, which results in integration steps being partly or completely repeated. Thus, integration calls for a centrally controlled approach.
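The top-down 'frame' strategy described above can be pictured as building the system skeleton first with stub components that are gradually replaced by the real ones, so an operational version is always available to fall back on. The sketch below is our own illustration, not Philips code; the component names are hypothetical.

# Minimal sketch (illustrative): top-down integration with a system "frame" whose
# stubbed components are gradually replaced by real implementations.
class StubTuner:
    """Placeholder with just enough behaviour to keep the frame operational."""
    def tune(self, channel):
        return f"stub: pretending to tune to channel {channel}"

class RealTuner:
    """Real component delivered later by a (possibly remote) team."""
    def tune(self, channel):
        return f"tuned to channel {channel}"

class SystemFrame:
    """The integration frame: wiring and basic functionality come first."""
    def __init__(self):
        self.components = {"tuner": StubTuner()}    # start with stubs everywhere

    def replace(self, name, component):
        # Each integration step swaps one stub for a real component; if the build
        # breaks, falling back means reinstalling the previous baseline version.
        self.components[name] = component

    def smoke_test(self):
        return self.components["tuner"].tune(1)

frame = SystemFrame()
print(frame.smoke_test())             # works with stubs: early validation of the architecture
frame.replace("tuner", RealTuner())
print(frame.smoke_test())             # still operational after integrating the real component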
4.4 Configuration Management

The following problems have been encountered with configuration management:
– Not enough preparation time taken to set up CM infrastructure,
– Competent configuration managers were not available, and
– Change management procedures were not defined.
Configuration management is the control area with the most impact on daily operations in distributed software development: on a continuous basis, engineers are checking software in and out, testing it in environments that should include baselines of the contributions of other teams, and promoting their software to components that are finally integrated at system level. Because of all these steps and interactions, a stable, well-thought-out, and often complex configuration management infrastructure is required, and it is required in time, because almost the entire software staff depends on it in their daily work. Experience shows that not enough preparation time is taken to set up this infrastructure, resulting in a lot of time and effort being lost in the early stages of the project. The preparation and control of the complex configuration management systems needed for distributed software development require highly competent configuration managers who work intensively together with the architects and the integrators. Configuration managers of the required level appear to be a scarce resource, not available in 50% of the cases. A configuration management system that is not well prepared tends to cause problems such as huge data transfers between sites, which in turn cause long waiting times and thus irritation among developers. Change control and the handling of problem reports is another area that requires attention and good preparation. In practice, change requests and problem reports are handled at various levels in the project, leading to a lot of inefficiency, confusion, and long throughput times. A clear definition of which changes or problems are to be handled at which level, including criteria for transferring them to other levels, is missing in more than 50% of the cases. In addition, the lack of a standard procedure for handling change requests and problem reports hampers oversight and smooth transfer from one level to another.
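A clear definition of which change requests and problem reports are handled at which level, as called for above, can be captured as explicit routing rules. The sketch below is our own illustration, not the Philips procedure; the level names and criteria are hypothetical.

# Minimal sketch (illustrative): routing change requests to an agreed handling level
# based on explicit criteria, instead of handling them ad hoc at various levels.
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    title: str
    affects_requirements: bool    # changes agreed requirements?
    affects_interfaces: bool      # touches interfaces between teams?
    estimated_effort_days: int

def handling_level(cr):
    """Return the level at which the change request should be handled (hypothetical criteria)."""
    if cr.affects_requirements:
        return "overall project level"
    if cr.affects_interfaces or cr.estimated_effort_days > 10:
        return "architecture / change control board level"
    return "local team level"

cr = ChangeRequest("adjust tuner driver API", affects_requirements=False,
                   affects_interfaces=True, estimated_effort_days=3)
print(handling_level(cr))   # -> "architecture / change control board level"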
5 Experience with Software Suppliers

The problems mentioned in the previous sections are often encountered both in 'normal' distributed software development and in subcontracting software development to suppliers. Specific problems and solutions encountered in subcontracting software to suppliers are discussed in this section. The roots of these problems are sometimes similar to the ones addressed in Section 4.1. The following major problems have been encountered in subcontracting software development to suppliers:
– Supplier was audited after the contract was signed,
– Lack of detailed planning from software supplier,
– Contract was signed too late and at too high an abstraction level, and
– No escalation path was defined.
The subcontracting party should select its supplier based on the supplier's software expertise in the required domain and its capability to control software development projects. In more than 80% of the cases, the supplier was only audited on its capabilities after the contract was
signed, or not at all. As a result, it is concluded too late that another supplier should have been selected. This problem is similar to the problem of lacking basic management capabilities in teams, as described in Section 4.1. A detailed supplier project plan is required in order to give insight into the manageability of the supplier's project, and to track its progress accordingly. In more than 50% of the cases, software suppliers had neither a detailed plan nor project planning and tracking practices in place for controlling their projects. Consequently, visibility of project progress was low. This problem is related to the lack of explicit management of dependencies between teams addressed in Section 4.1. Contracts with suppliers are signed at an early stage of the project. By that time, technical details are usually not clear and the involvement of in-house development parties is generally low. In particular, it is the development teams dealing with interfaces to the supplier's software and with the integration of that software that need agreements on software details and on a mutual way of working. For instance, agreements on the specification and implementation of software interfaces and their change control, and on testing and releasing the supplier's software, are of major importance. However, this level of agreement is often not part of the formal contract, which is likely to result in problems later on: the customer simply has no means of control to adapt the supplier's way of working to its needs. For example, in one project it was decided to subcontract the development of driver software. The contract with the subcontractor was made at a very early stage, when the interface specification of the driver software was not yet defined. As a result, major problems were encountered during test and integration, such as overlapping and uncovered functionality. From a contract point of view the subcontractor was not to blame, since it met the poor specification of the driver software that the contract was based on. The daily interaction with the supplier is carried out on a team-to-team basis: the supplier's project team interacts with the main project on managerial and technical matters on a peer-to-peer basis, with neither party in a position to control the other. Escalation paths to higher levels of management, to address problems that cannot be solved at the level of the project teams, are not defined in at least 80% of the projects. Escalation paths should be included in the formal contract (similar to the escalation mechanisms in multi-site development addressed in Section 4.1). In addition, various kinds of problems are encountered in the daily interaction with the supplier, like vague progress reporting, incomplete and changing specifications, inadequate problem solving by the supplier, and error-prone software deliveries. They all have their roots in one of the major problems addressed above. The next section addresses some vital initial actions to tackle the problems listed above.

5.1 Supplier Management: What To Do?

Supplier selection should be made an integral part of the product roadmapping process. When planning the development of products or product ranges including software from third parties, both the technical and the managerial capabilities of the suppliers need to be known. This knowledge can be established by experience, or should be obtained via audits.
Auditing of suppliers should thus become an in-house competence, and auditing suppliers should be a standard element of the product roadmapping process, used as a basis for selecting suppliers and done prior to signing supplier contracts. In addition, carrying out a risk analysis of supplier capabilities against Philips' requirements is a must. This should be started before selecting the supplier and continued after signing the contract.
A specific aspect of portfolio management to be considered as part of product program management is the definition of fall-back scenarios related to subcontracted software development, i.e., what to do in case the supplier fails to perform according to expectations. The management of supplier agreements should be explicitly assigned to a software subcontract manager. As a minimum, the tasks of the subcontract manager should include reviewing and approving the suppliers' plans and tracking their progress. As a minimum, progress tracking should include examination of the suppliers' progress reports and participation in supplier milestone reviews. In addition, the escalation paths used for addressing problems between the supplier and the subcontracting party that cannot be solved at project level should be defined up front, i.e., as part of the formal contract. A way to organize coordination and escalation is to establish a Steering Board including management representatives from both parties. Predefined escalation paths should include the types of issues to be escalated (organizational, managerial, contractual, technical) and the criteria for escalation.
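As a concrete reading of such a predefined escalation path, the agreement could record, per issue type, the criterion for escalation and the body that handles it. The sketch below is our own illustration; the issue types come from the text, but the criteria and bodies are hypothetical.

# Minimal sketch (illustrative): a predefined escalation path as it might be recorded in
# the contract, mapping issue types and criteria to the body that handles them.
ESCALATION_PATH = {
    # issue type     : (criterion for escalation,                             escalate to)
    "technical":       ("not resolved by the project teams within two weeks", "joint architecture board"),
    "managerial":      ("milestone slip of more than four weeks",             "steering board"),
    "organizational":  ("committed key staff unavailable",                    "steering board"),
    "contractual":     ("disagreement on scope or deliverables",              "steering board, then contract owners"),
}

def escalate(issue_type):
    """Return where an issue of the given type should be escalated, per the agreement."""
    criterion, body = ESCALATION_PATH[issue_type]
    return f"{issue_type} issue (when: {criterion}) -> {body}"

print(escalate("technical"))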
6 Conclusions

Distributed software development is becoming more and more common as a development strategy. The main reasons are the potential improvements in time-to-market and in responsiveness to customer requests, along with access to greater and less costly resources. A number of problems, however, still have to be solved before the full potential of distributed development can be obtained. This paper has discussed the Philips experience of over 10 years of distributed development involving about 200 projects. The outcome is an aggregate of experience and lessons learnt from a long-term and large-scale development activity. Since the experience and lessons learnt discussed in this paper have recurred in several projects over time, in different settings, and have been observed by different people, they can be seen as general, common issues occurring in, and because of, distributed development. The general lesson learnt from this experience is that the reality of distributed software development deviates significantly from the theoretical expectation: the efficiency of distributed software development is perceived to be disappointingly low, whereas increased efficiency was expected. First measurements indicate that up to 50% of the development effort is spent on overhead (such as extra project management and team coordination) and communication. As a result, global distributed development has in practice been two to three times more costly than one-roof development. The preliminary conclusion is that, in general, distributed software development should be avoided as far as possible, although positive effects like the exchange of knowledge and practices are definitely experienced. Distributed development is sometimes a necessity (a number of reasons have been listed earlier), and if the decision to go for multi-site development is made, particular attention should be paid to the preparation and management of team coordination and communication, requirements and architectures, and also integration and configuration management, as argued in this paper. If, in addition, distributed software development involves subcontracting software development to suppliers, capabilities and management processes should be introduced for selecting suppliers, coordinating supplier agreement management, and escalating problems that cannot be resolved at the level of the project teams.
Directions for solutions to most of these problem areas have been indicated in this paper, and are mainly a matter of good preparation, clear agreements and disciplined behavior. Solutions similar to those indicated in this paper are proposed by Cusick and Prasad (2006), based on their experience with distributed development. Other solutions have not yet been proven to be effective in large-scale distributed development; for example, new approaches like agile development are still considered controversial, as discussed by Ågerfalk and Fitzgerald (2006). No solutions have been encountered for the problems of unstable requirements and their consequences for architectures and integration. Perhaps this is a fact of life that fundamentally has no solution in the world of distributed software development.

Acknowledgements The authors would like to thank Hans Aerts from Philips for providing information on distributed product development in Philips Consumer Electronics and Ben Spierenburg from Philips for reviewing the paper.
References

Ågerfalk PJ, Fitzgerald B (2006) Flexible and distributed software processes: old petunias in new bowls? Commun ACM 49(10):27–34
Bass M, Paulish D (2004) Global software development process research at Siemens. In: The 3rd international workshop on global software development, ICSE 2004, Edinburgh, Scotland, May 2004
Battin RD, Crocker R, Kreidler J, Subramanian K (2001) Leveraging resources in global software development. IEEE Software, March/April 2001, pp 70–77
Boland D, Fitzgerald B (2004) Transitioning from a co-located to a globally-distributed software development team: a case study at Analog Devices, Inc. In: The 3rd international workshop on global software development, ICSE 2004, Edinburgh, Scotland, May 2004
Cusick J, Prasad A (2006) A practical management and engineering approach to offshore collaboration. IEEE Software, September/October 2006, pp 20–29
Damian D, Lanubile F, Hargreaves E, Chisan J (2004) The 3rd international workshop on global software development. In: Proceedings of ICSE 2004, Edinburgh, Scotland, May 2004
Ebert C, De Neve P (2001) Surviving global software development. IEEE Software, March/April 2001, pp 62–69
Falls M (1995) Managing collaborative R&D projects. Eng Manag J, December 1995
Herbsleb JD, Grinter RE (1999) Splitting the organization and integrating the code: Conway's law revisited. In: Proceedings of the 1999 international conference on software engineering, 16–22 May 1999, pp 85–95
Herbsleb JD, Moitra D (2001) Global software development. IEEE Software, March/April 2001, pp 16–20
Herbsleb JD, Mockus A, Finholt TA, Grinter RE (2001) An empirical study of global software development: distance and speed. In: Proceedings of the 23rd international conference on software engineering, IEEE, Toronto, 2001. Also in IEEE Trans Softw Eng 29(6):481–494, June 2003
Komi-Sirviö S, Tihinen M (2003) Great challenges and opportunities of distributed software development: an industrial survey. In: The 15th international conference on software engineering and knowledge engineering (SEKE'03), July 1–3, 2003, San Francisco Bay, USA
Paasivaara M, Lassenius C (2004) Using iterative and incremental processes in global software development. In: The 3rd international workshop on global software development, ICSE 2004, Edinburgh, Scotland, May 2004
VA Software (2005) The road to higher development efficiency. White paper, January 2005. Available from: http://www.vasoftware.com/gateway/pollresults.php
Rob Kommeren joined Philips in 1984 to work on computer-aided manufacturing projects at the Centre for Industrial Technologies, both as a software engineer and as a software project manager. Since 1991 he has been working as a consultant and assessor in the area of software process improvement, using the SEI's Capability Maturity Model for Software as a reference. In sixteen years he has gained broad experience both inside and outside Philips. In recent years Rob has mainly been working in highly innovative software development organizations, aiming at lean and mean implementations of software development processes meeting CMM L2 and CMM L3 requirements. Currently he spends the majority of his working time as Development Process Improvement Manager of Philips Digital Systems & Technology in Eindhoven, the Netherlands, one of Philips' largest laboratories for first-of-a-kind embedded system development.
Päivi Parviainen received her M.Sc. in Information Processing Science from the University of Oulu in 1996. She is currently working as a Senior Research Scientist in the Software Development Innovations team at VTT in Oulu, Finland, where she has worked since 1995. She has experience in, for example, software process improvement, measurement, software reuse, software development tools and their integration, systems and software requirements engineering, and global software development practices. She has managed several industrial projects at the national level and has participated in many national and international research projects. She has published several papers in international journals and conferences. Currently she is on a research exchange in Eindhoven, the Netherlands (since September 2005).