Macro-processes Informing Micro-processes: The Case of Software Project Performance

Paul L. Bannerman

NICTA, Australian Technology Park, Sydney, NSW, Australia
[email protected]
Abstract. This paper explores the operational context of software processes and how it can inform the micro-process level environment. It examines the case of software project performance, describing a novel explanation. Drawing on the management literature, project performance is modeled as the contested outcome of learning as a driver of success and certain barrier conditions as drivers of failure, through their effects on organizational capabilities. A case study illustrates application of the theory. Implications of this macro-process case for micro level software process improvement are discussed.

Keywords: software process, macro-process, capabilities, project performance.
1 Introduction

There is growing interest and recognition in the software community of the need to understand the broader context of software processes [23, 33]. For example, the most appropriate software processes are often dependent upon the organizational context. Furthermore, since most software development is done in projects, factors that affect project performance can influence the perceived success of the software artifact produced. The interaction between the macro and micro environments is bi-directional: organizations depend on good software processes to produce high quality software, and software processes depend on their organizational context to be effective; the two are co-dependent. As defined in [33], the macro-process environment is concerned with high level behaviors while the micro-process environment deals with the internal workings of processes. Osterweil proposes that software process research has a role in informing macro-process behaviors [33]. This paper takes the complementary view that the macro-process environment may also inform the micro-process environment of software processes. It does this by examining the case of a novel explanation of the tendency of software projects to perform poorly. It illustrates the explanation with a practical example. Implications are drawn from the case for finer-grained processes.

A distinctive feature of the case is that it draws on the management literature to view processes in the higher level construct of organizational capabilities, which are a composite of tangible and intangible organizational knowledge, processes, skills and other assets that are developed through ‘learning by doing’.

Projects are an important vehicle in developing software and delivering change to organizations. As such, they are critical enablers or inhibitors of business growth and
development, depending on the success of the project. A key sticking point, however, is that software projects have a highly variable performance record. According to Charette [8], software projects have “a long, dismal history” of going awry. In a “Software Hall of Shame”, he lists examples of over 30 major failures or losses that occurred in little more than a decade. He argues that billions of dollars are lost each year on failed or under-performing software projects. According to Charette, such failures can jeopardize organizations, imperil national security, and stunt economic growth and quality of life. Indeed, some reports suggest that software projects are more likely to perform poorly or fail than succeed [24], while others indicate that this problem has existed for decades [7, 51].

Extant explanations of the problem focus on factor- and process-based research to identify the drivers of project success. Implicitly, this research assumes that more of a desirable factor or process, or less of an undesirable ‘risk’ factor or process, improves the likelihood of success. Drawing on management theory (specifically, the resource-based view of the firm and organizational learning theory), this paper extends these views with an alternative theoretical explanation of the problem. It presents a capabilities-based view of software project performance that explains the data in a way the extant approaches have not. This broader conceptualization of processes has implications for the development of thinking at the software process level.

The paper is structured as follows. The next sections review the dominant current explanations of project performance and outline an alternative framing of the problem. Contributing theory to the proposed explanation from the management literature is then introduced, before the extended model is described. Application of the theoretical model is then illustrated with a case study of an Australian state department of motor vehicles (DMV). Finally, implications of the case for the micro level software process environment are drawn and conclusions reached on the contribution.
2 Current Explanations

Extant explanations of IT project performance are dominated by factor and process prescriptions in conjunction with assumptions about accumulating improvements through learning.

Factor research focuses on identifying antecedent variables (factors) for which a relationship can be hypothesized with a performance outcome. Well-known examples include a lack of top management commitment and/or user involvement, poorly defined requirements, gold plating and an absence of project management experience. The underlying logic of factor research is that a desirable factor is associated with a positive performance outcome while an undesirable factor is associated with a negative outcome. The limitations of this approach are that it is difficult to establish a definitive set of factors, as they tend to vary with project type, stage of the project life cycle, organizational context, and cultural context [36, 45]. Also, by focusing on inputs and outputs, it ignores intervening processes.

Process research in information systems examines interactions between stakeholders and events to find more complex causal relationships between factors and project outcomes (e.g., [43]). However, it reports limited convergence and empirical support [44]. In software engineering, process research is concerned with improving
product quality through software processes [16]. Despite many benefits, Jeffery has argued that the performance improvement effect of software process improvement in practice is often smaller than expected when it is pursued in isolation from other enabling drivers of change [22]. Furthermore, these approaches are only weakly supported by theory.

The strategy underlying the two research streams is simple and compelling: identify factors and processes associated with success (or failure) in practice, and then develop prescriptions to emulate (or avoid) them [44]. It is assumed that, if these factors and processes are properly accounted for, quality will be high, failure will be avoided and the project will be a success. The basic argument is that, if you do ‘the right things in the right ways’, the outcome will be success. This has led to a proliferation of methodologies on what to do ‘right’.

Underpinning the factor and process research streams are several assumptions about learning: learning is a prerequisite for project success [25]; organizations learn from experience [49]; and learning accumulates within and between projects [11]. These assumptions reinforce the view that experience accumulates during and across successive projects and refines the application of desirable factors and processes, resulting in continuous improvement and increasing project performance over time.
3 Reframing the Problem

Taken together, current research seeks to explain why software projects fail by focusing on the drivers of success. The logic is that if a project fails, then the project did something wrong: it failed to take sufficient account of all the factors and processes that research and prior experience have shown are necessary for success. This approach has strong logical and practical appeal. The problem with this framing, however, is that it does not fit the empirical data, which show that there has been limited improvement in software project performance over the years [8, 24].

By reframing the problem, new insights can emerge. If we accept the industry data and adopt the view that high variability in outcomes and low success rates should be expected from such high risk ventures, then what is needed is a causal model that includes drivers of success and drivers of failure.

A capabilities-based explanation provides such a model, supported by two bodies of management literature: the resource-based view of the firm and the related capability-based perspective; and organizational learning theory. According to this literature, superior performance is associated with developing and maintaining stocks of firm-specific resources and capabilities (including processes) that are developed primarily through learning from experience. Extant research tends to understate the importance of having ‘the right resources’. This literature also argues, however, that learning and capability development may be inhibited, blocked or made redundant, creating a capability deficit and a propensity to under-perform or fail. These components provide the basis of a capabilities-based explanation of project performance.
4 Contributing Theory

In this section, we profile the management theory underlying the proposed model.
4.1 Resource-Based View and Organizational Capabilities

According to the resource-based view of the firm (RBV), organizational performance is a function of internal resources, which are heterogeneously distributed across firms. Firm-specific idiosyncrasies in accumulating and using differentiated resources drive superior firm performance and sustainable competitive advantage [4]. Resources with high performance potential are characterized as valuable, rare, non-tradable, non-imitable, non-substitutable, causally ambiguous and/or socially complex [5, 12].

The capability-based perspective extends this view by emphasizing building and accumulating resources better and faster than competitors [38]. Capabilities, a subset of firm resources, typically comprise an amalgam of intangible assets, knowledge, skills, processes, routines, technologies and values. A firm is modeled as a learning organization that builds and deploys advantageous, firm-specific capabilities and applies them to achieve superior levels of performance [18]. A firm’s comparative effectiveness in developing and deploying capabilities determines its marketplace success. Furthermore, its ability to adapt and innovate in identifying, building and leveraging new competencies is a capability in itself, called ‘dynamic capability’ [48]. The IT literature reports empirical support for a positive relationship between capability and firm performance [41].

4.2 Organizational Learning

In the management literature, organizational learning is the primary generative mechanism of organizational capabilities. Capabilities are developed through learning from experience, or ‘learning by doing’ [27]. Organizational capabilities are developed and institutionalized in the operating routines, practices and values of organizations in a way that outlives the presence of specific individual members [32]. Routines that lead to favorable outcomes become institutionalized in organizations as capabilities, which are adapted over time in response to further experiential learning. Organizations can also build capabilities through management practices [19, 39].

This view of learning is different from that of the experience factory in software engineering [6]. The latter deals with declarative or explicit knowledge that can be codified in a data repository. In contrast, learning by doing builds tacit knowledge and skill-based capabilities that cannot be encapsulated in a database (other than through links to sources, as in network knowledge management systems). Learning improves the ‘intelligence’ of the organization and, thus, its performance. In this sense, the ability to learn from experience is a critical competency, requiring a deliberate investment [11].

Learning takes two forms [1]. One is continuous, incrementally improving organizational capabilities (called ‘single-loop learning’). The other is discontinuous (called ‘double-loop learning’), resulting in fundamentally different organizational rules, values, norms, structures and routines.

4.3 Learning and Capability Development Barriers

However, the literature also reports that learning and capability development are neither certain nor cumulative. Barrier conditions may arise in the organizational and technological contexts of projects that can inhibit or block learning and capability
development and/or make accumulated capabilities redundant or obsolete in the face of new opportunities and challenges [2]. Conditions that can inhibit organizational learning and capability development include learning disincentives; organizational designs that create artificial barriers; time compression diseconomies (learning is slow, it cannot be rushed); low aspiration levels; absorptive capacity (the ability to absorb new learning); asset mass inefficiencies (learning requires some prior knowledge); and transformative capacity (the ability to share or transfer learning) [10, 12, 17, 28, 50]. Conditions that can block organizational learning and capability development include causal ambiguity (lack of clarity of causal drivers); complexity; tacitness; embeddedness and interconnectedness (capabilities are deeply entrenched in their context so they may be difficult to explicate or share); learning myopia, competency traps and core rigidities (existing learning is favored); the need to unlearn old ways before new learning can occur; organizational inertia; focus diversion (distractions from relevant learning); unjustified theories-in-use (defensive routines); and certain characteristics of projects (e.g., projects are usually too short and focused on other objectives for effective learning to occur within the project) [1, 26, 27, 29, 37, 42, 47]. Finally, conditions that can negate the value of existing organizational capabilities or make them obsolete include newness; technological discontinuities; radical business process reengineering (redesigning from a zero-base); staff loss through turnover, downsizing or outsourcing; organizational forgetting; and asset erosion (capabilities can be lost over time) [9, 12, 15, 20, 21, 28, 40, 46]. Individually and in combination, these organizational learning and capability development barriers can significantly diminish the ability of an organization – or project – to perform according to expectation and in a manner assumed by the best practices that are applied. They can offset positive learning effects and devalue or destroy the accumulated capabilities necessary for project success.
5 Alternative Explanation

Based on this literature, poor project performance is not an exception or simply the result of managers not taking account of all relevant factors and process prescriptions. Also, learning from poor results and improving is not enough to ensure success next time. Rather, project performance can be explained by the generative and regressive mechanisms underlying organizational capabilities. Specifically, project performance is a function of the capabilities applied to the project. Learning drives capability development and project success. Barriers to learning and capability development diminish or negate existing capabilities, driving poor performance and project failure. Project performance, then, is the contested outcome of the two opposing effects on capabilities.

These relationships are shown in the causal model in Figure 1. In this model, learning is the process of developing capabilities through experience. Barriers are conditions that retard, deplete or make organizational capabilities redundant, creating a propensity to fail. Capabilities are the differentiated resources that generate operational and strategic value for an organization. Performance is the extent to which the outcome of an IT project meets stakeholder expectations.
Fig. 1. Model of project performance: Learning →(+) Capabilities; Barriers →(−) Capabilities; Capabilities →(+) Performance
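One way to read the model is as a capability stock that learning builds up and barrier conditions deplete or devalue, with performance depending on the stock available to the project. The following is a sketch formalization introduced here only for illustration; the symbols are assumptions and are not part of the paper’s model:

```latex
% Sketch formalization of Fig. 1 (symbols introduced here, not in the paper)
\[
  C_{t} = (1 - \delta_{t})\, C_{t-1} + \ell_{t} - b_{t},
  \qquad P_{t} = f(C_{t}), \quad f' > 0
\]
% C_t      : capability stock available to the project in period t
% \ell_t   : learning gains from experience ('learning by doing'), >= 0
% b_t      : losses from conditions that inhibit or block learning, >= 0
% \delta_t : share of the existing stock made redundant by discontinuities, in [0, 1]
% P_t      : project performance, increasing in the available capability stock
```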
The individual relationships between organizational learning and capabilities, and organizational capabilities and performance, are accepted, a priori, as hypothesized in the literature and discussed above. The central model-based contributions of this paper are in the negative relationship and the total dynamic of the model. One general barrier effect, a negative association with capabilities, is shown in the model. In fact, two effects were identified earlier. First, barrier conditions reduce or block the learning improvement effect on capabilities; second, and more fundamentally, they can make existing capabilities redundant or obsolete for the needs of new or changed conditions. These effects are of different types. The first effect is continuous, interrupting or slowing the incremental accumulation of capabilities. In contrast, the second effect is discontinuous, making existing capabilities obsolete for requirements under the new condition.

Based on these relationships, two central propositions arise. First, project performance is the contested outcome of the positive effects of learning and the negative effects of learning and capability development barrier conditions on the organizational capabilities available to the project. Second, in software projects, especially those involving large-scale developments, the initial capability stocks plus the learning that occurs on the project can be less than the effect of the barrier conditions experienced during the project. This results in a net competence liability that limits performance. The barrier effects offset the learning effects so that capabilities for the project are sub-optimal, resulting in a poor performance outcome. When these effects persist, both within and between projects, limited learning accumulates. The organization is in no better position to take on a new project than it was at the start of the previous one.

Software projects are particularly susceptible to the disruptive effects of technology and organization changes, which occur both within and between projects. Hardware, software, processes, tools and methods are constantly changing or becoming obsolete, on cycle times that vary from a few months for leading-edge technologies to longer for others. These discontinuities significantly impact the people, processes and technologies currently in use, destroying the value of existing capabilities and requiring resources to be diverted to manage them, as well as affecting the resources needed for future projects. So, if the project cycle time is several years, which is typical for a major system development project, technology will have cycled a number of times, making some capabilities obsolete and refreshing the organization’s competence liability and propensity to fail. On this basis, even when capabilities are developed in the current technologies during a project, they are likely to be obsolete by the time the next project starts.
Different hardware, software and tools may be needed; the project manager and team change; and the business context of the new project may change. Similarly, changes in organizational directions, priorities, processes and management can set back or negate the value of accumulated capabilities, reducing the ability to perform well.

This view of the problem differs markedly from current research approaches. In contrast to the assumption that factors and processes can be increasingly applied to deliver a successful outcome, it argues that the shift in factors that would be necessary for success is unlikely to occur, or does not have the desired result, because of the progressive offsetting effects of barriers to learning and capability development. The unpredictable nature of these conditions, in both frequency and magnitude of effect, can produce substantial variation in outcomes from one project to the next within the same organization.

According to this view, it is not sufficient to ‘do the right things in the right ways’. It is also critical to ‘have the right resources (capabilities)’. Even when an organization has followed the appropriate factor and process prescriptions, the normal ongoing conditions that arise during a software project can damage or destroy the capabilities needed to ensure its success, increasing the likelihood that it will fail. Furthermore, even if the organization is successful, it cannot assume that it can replicate that success in the next project.

This model provides an explanation for the persistent variance in empirical data (e.g., [24]). It also explains why an organization can have an outstanding success with one project and a total failure with the next. This was the experience, for example, of Bank of America with its ERMA and MasterNet projects [30], and of American Airlines with its SABRE and CONFIRM reservation systems projects [34].
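To make the contested dynamic concrete, the following minimal sketch simulates the model over successive periods under assumptions that are not drawn from the paper: learning adds a fixed increment to the capability stock, barrier conditions erode it by a random amount, and occasional discontinuities make part of the stock obsolete, so that the stock (and hence performance, taken here as the ratio of available to required capability) need not accumulate over time. All names and parameter values are illustrative.

```python
import random


def simulate_capability_stock(periods=10, seed=1):
    """Toy simulation of the contested capability dynamic (illustrative only):
    learning adds to a capability stock, barrier conditions erode it, and
    occasional discontinuities make part of it obsolete. Performance is the
    ratio of available to required capability, capped at 1."""
    random.seed(seed)
    capability = 1.0   # initial capability stock (arbitrary units)
    required = 1.0     # capability a project of this scale is assumed to need
    history = []
    for t in range(periods):
        learning_gain = 0.15                                   # 'learning by doing'
        barrier_loss = random.uniform(0.0, 0.2)                # inhibiting/blocking conditions
        discontinuity = 0.5 if random.random() < 0.3 else 0.0  # e.g. a technology shift
        capability = max(0.0, (1 - discontinuity) * capability
                              + learning_gain - barrier_loss)
        performance = min(1.0, capability / required)
        history.append((t, round(capability, 2), round(performance, 2)))
    return history


if __name__ == "__main__":
    for t, capability, performance in simulate_capability_stock():
        print(f"period {t}: capability stock {capability}, performance {performance}")
```

Under such assumptions, when discontinuities and barrier losses recur often enough, the stock hovers around its starting level and performance varies widely from one period to the next, which is the kind of persistent variance the model is intended to explain.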
6 Illustration

Application of the theory is illustrated by a longitudinal case study of a major system development in an Australian State Government department of motor vehicles (DMV). The case is detailed in [2] and summarized here. A project was initiated to replace the department’s inefficient batch processing mainframe system with a new server-based online system for administration of the State’s drivers and motor vehicles. The study examines the initial development and subsequent upgrades from 1989 to 2001.

The initial development was an interesting case of a ‘successful failure’. That is, even though the new system was very late, well over budget and significantly descoped in functionality, it ultimately achieved the Department’s major business objectives, including savings of $20m a year. Consequently, the system was recognized with an industry award for excellence. Due to this anomalous outcome, the case presented an opportunity to examine the substance of the agency’s excellence and performance in system development. One implication of the award is that the initial development ‘failure’ was a ‘glitch’, so capabilities would have accumulated and exemplary performance would dominate subsequent development. However, this was not evident. To understand why, the case was examined for evidence of learning accumulation and barrier effects.
The study period featured many organizational and technology changes that occurred concurrently with or as a direct result of ongoing development of the system. The major changes are shown in Figure 2. These occurred in conjunction with a stream of ongoing software version upgrades, escalating design complexity and architecture fragmentation, and other people movements not shown in the figure. To resource the project, the DMV relied heavily on contract developers. While they were highly skilled and experienced professionals, there was no incentive for them to transfer knowledge and skills to staff.
Fig. 2. Chronology of change at the DMV (timeline 1989–2001, showing organization changes and technology changes)
Changes in the status of eight IT core capabilities adapted from [13] were analyzed in four phases across the study period [2]. The capabilities were: leadership, business systems thinking, building business relationships, infrastructure management, making technology work, building vendor relationships, managing projects, and managing
change. Capability strength was measured as low, medium or high for each capability in each phase of the study. Barrier conditions associated with each capability were also identified in each phase. Measurement was aided by empirical indicators developed for each capability and barrier condition, described in [2].

The analysis found that very little cumulative learning occurred in the capabilities during the study period. Furthermore, barrier conditions were found to be high and significantly associated with each capability across the study. In particular, newness discontinuities were found to negatively impact over 80% of the capabilities over the study period. The study concluded that any organizational learning and capability development that occurred during the study period was offset by the effects of recurrent barrier conditions, resulting in the maintenance of a cumulative net competence liability rather than accumulated organizational learning and capability development. The level of capability stocks at the end of the period was no greater than at the start (rather, it was slightly lower). Accordingly, in moving forward to any other large software project in the future, such as the next generation administration system, the DMV was in no better position to improve its likelihood of success than it had been for the previous system project. The earlier result appeared not to be a ‘glitch’ but a reflection of an underinvestment in organizational learning and capability development.

The longitudinal scale of this case highlights the learning and barrier effects, but the logic of the theory makes the propositions equally relevant to smaller projects.
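As a hypothetical illustration of this style of analysis (the ratings below are invented for illustration and are not the study’s data), the phase-by-phase capability assessments could be tabulated and checked for cumulative learning roughly as follows:

```python
# Hypothetical tabulation of a phase-by-phase capability analysis.
# Ratings are invented placeholders, NOT the DMV study's actual assessments.
RATING = {"low": 1, "medium": 2, "high": 3}

# One rating per capability for each of four phases (invented values).
phase_ratings = {
    "leadership":                      ["medium", "high", "medium", "medium"],
    "business systems thinking":       ["low", "medium", "medium", "low"],
    "building business relationships": ["medium", "medium", "low", "medium"],
    "infrastructure management":       ["medium", "medium", "medium", "medium"],
    "making technology work":          ["high", "medium", "medium", "medium"],
    "building vendor relationships":   ["low", "low", "medium", "low"],
    "managing projects":               ["medium", "low", "medium", "medium"],
    "managing change":                 ["low", "low", "low", "low"],
}

# Compare each capability's final-phase rating with its initial rating to see
# whether any cumulative learning is evident across the study period.
for name, ratings in phase_ratings.items():
    delta = RATING[ratings[-1]] - RATING[ratings[0]]
    trend = "accumulated" if delta > 0 else ("eroded" if delta < 0 else "flat")
    print(f"{name:34s} {ratings[0]:>6s} -> {ratings[-1]:>6s}  ({trend})")
```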
7 Discussion and Conclusion

Motivated to consider the interrelationship between macro- and micro-processes and gain insights that might contribute to the software process environment, this paper examines the problematic case of software project performance and presents a novel approach to modeling project outcomes. Accepting the empirical data as indicative of what might be expected rather than as an exception, drivers of project failure are recognized as well as drivers of project success. Drawing on management literature, a capabilities-based theory is proposed that models learning as the driver of success and learning-related barriers as the driver of failure. Project performance is the contested outcome of the two effects on organizational capabilities.

As a new proposition, the model has limitations. First, it is validated only through application to a case study. While illustration of its explanatory and predictive power provides a high degree of face validity, empirical testing is required to establish the bounds of its relevance. Second, the study is based on a public sector organization, and the findings may not generalize to private sector projects. However, there is nothing inherent in the model that is sensitive to sector, and there is evidence to suggest that public sector practices are not radically different [14]. Third, the model is not applied to any specific organizational capabilities (apart from in the illustration). Other research, however, reports progress in identifying capabilities relevant to software engineering [3]. Finally, future research could extend the model by considering, for example, how capabilities are built, their inter-relationships and complementarities, or how they aggregate.
The case has several implications for micro level software process research. The most immediate implication is that extant factor- and process-based research approaches to macro level problems may not be sufficient on their own to adequately explain the phenomenon. They may benefit, for example, from considering drivers of failure as well as drivers of success, and the organizational resources needed to make them effective. Specifically, the effectiveness of software process best practices may be a reflection of the competencies that underlie and enable their impacts.

Second, software process can itself be an organizational capability. It can be an undifferentiated resource for an organization or a distinctive capability; the choice is the organization’s, based on its strategic goals and positioning.

Third, learning does not necessarily accumulate continuously. Software processes, as capabilities, are subject to the same contested dynamic that was described in the case. Strategies for continuous process improvement are likely to be disrupted by discontinuities in learning and capability development.

In support of [33], another implication is the importance of the interaction between macro-processes and micro-processes to the effectiveness of each other. The ultimate ‘quality’ of software is its fitness for purpose, determined at the macro level, but that quality cannot be achieved or sustained without improving the processes employed to build it. As Jeffery has argued [22], effective software processes occur within a framework of multiple organizational interactions. Also, software project success factors, processes and capabilities may only be some of the critical elements that, in aggregate, reinforce each other to drive project outcomes. Further research is needed to unlock these dimensions and interactions. Furthermore, because of the delivery context of software (that is, through projects in organizations), the perceived effectiveness of software processes, as reflected in the developed software artifact, may be influenced by many external factors that have nothing to do with software quality. This reinforces the need to understand the macro context and minimize impacts on software process performance from other contingent domains.

A final implication arises for experience-based process improvement and process modeling. The case illustrates that intangibles, as well as explicit process capabilities, contribute to performance outcomes. Mechanisms are needed to capture and represent such capabilities in the software process domain. This is partially recognized by [52].

This paper has highlighted the importance of contextual awareness. Benefiting from these implications will require ongoing interaction between the micro- and macro-process domains and continued recognition that neither environment is an island.

Acknowledgments. The paper benefits from comments provided by Liming Zhu. NICTA is funded by the Australian Government through the ICT Research Centre of Excellence program.
References

1. Argyris, C., Schön, D.A.: Organizational Learning: A Theory of Action Perspective. Addison-Wesley, Reading (1978)
2. Bannerman, P.L.: The Liability of Newness. PhD Dissertation, UNSW, AGSM (2004)
3. Bannerman, P.L., Staples, M.: A Software Enterprise Capabilities Architecture. NICTA Working Paper (2007)
4. Barney, J.B.: Firm Resources and Sustained Competitive Advantage. J. Manage. 17(1), 99–120 (1991)
5. Barney, J.B.: Gaining and Sustaining Competitive Advantage, 2nd edn. Prentice Hall, Upper Saddle River (2002)
6. Basili, V.R.: The Experience Factory and its Relationship to Other Quality Approaches. Advances in Computers 41, 65–82 (1995)
7. Brooks Jr., F.P.: The Mythical Man Month: Essays on Software Engineering. Addison-Wesley, Reading (1975)
8. Charette, R.N.: Why Software Fails. IEEE Spectrum 42(9), 42–49 (2005)
9. Christensen, C.M.: The Innovator’s Dilemma. HarperBusiness, New York (2000)
10. Cohen, W.M., Levinthal, D.A.: Absorptive Capacity: A New Perspective on Learning and Innovation. Admin. Sci. Quart. 35(1), 128–152 (1990)
11. Cooper, K.G., Lyneis, J., Bryant, B.J.: Learning to Learn, from Past to Future. Int. J. Project Manage. 20(3), 213–219 (2002)
12. Dierickx, I., Cool, K.: Asset Stock Accumulation and Sustainability of Competitive Advantage. Manage. Sci. 35(12), 1504–1511 (1989)
13. Feeny, D.F., Willcocks, L.P.: Core IS Capabilities for Exploiting Technology. Sloan Manage. Rev. 39(3), 9–21 (1998)
14. Ferlie, E.: Quasi Strategy: Strategic Management in the Contemporary Public Sector. In: Pettigrew, A., Thomas, H., Whittington, R. (eds.) Handbook of Strategy and Management, pp. 279–298. Sage Publications, London (2002)
15. Fisher, S.R., White, M.A.: Downsizing in a Learning Organization. Acad. Manage. Rev. 25(1), 244–251 (2000)
16. Fuggetta, A.: Software Process: A Roadmap. In: Proceedings of the Conference on The Future of Software Engineering, pp. 25–34. ACM, New York (2000)
17. Garud, R., Nayyar, P.R.: Transformative Capacity: Continual Structuring by Intertemporal Technology Transfer. Strategic Manage. J. 15(5), 365–385 (1994)
18. Hamel, G., Heene, A. (eds.): Competence-Based Competition. John Wiley & Sons, Chichester (1994)
19. Hamel, G.: The Concept of Core Competence. In: Hamel, G., Heene, A. (eds.) Competence-Based Competition, pp. 11–33. John Wiley & Sons, Chichester (1994)
20. Hammer, M.: Reengineering Work: Don’t Automate, Obliterate. Harvard Bus. Rev. 68(4), 104–112 (1990)
21. Hannan, M.T., Freeman, J.: Structural Inertia and Organizational Change. Am. Sociol. Rev. 49(2), 149–164 (1984)
22. Jeffery, D.R.: Achieving Software Development Performance Improvement through Process Change. In: Li, M., Boehm, B., Osterweil, L.J. (eds.) SPW 2005. LNCS, vol. 3840, pp. 43–53. Springer, Heidelberg (2006)
23. Jeffery, D.R.: Exploring the Business Process-Software Process Relationship. In: Wang, Q., Pfahl, D., Raffo, D.M., Wernick, P. (eds.) SPW 2006 and ProSim 2006. LNCS, vol. 3966, pp. 11–14. Springer, Heidelberg (2006)
24. Johnson, J.: My Life is Failure. Standish Group International (2006)
25. Kotnour, T.: A Learning Framework for Project Management. Project Manage. J. 30(2), 32–38 (1999)
26. Leonard-Barton, D.: Core Capabilities and Core Rigidities: A Paradox in Managing New Product Development. Strategic Manage. J. 13, 111–125 (1992)
27. Levitt, B., March, J.G.: Organizational Learning. Annu. Rev. Sociol. 14, 319–340 (1988)
28. Lyytinen, K., Robey, D.: Learning Failure in Information Systems Development. Inform. Syst. J. 9(2), 85–101 (1999)
29. March, J.G.: Exploration and Exploitation in Organizational Learning. Organ. Sci. 2(1), 71–87 (1991)
30. McKenney, J.L., Mason, R.O., Copeland, D.G.: Bank of America: The Crest and Trough of Technological Leadership. MIS Quart. 21(3), 321–353 (1997)
31. McWilliams, A., Siegel, D.: Event Studies in Management Research: Theoretical and Empirical Issues. Acad. Manage. J. 40(3), 626–657 (1997)
32. Nelson, R.R., Winter, S.G.: An Evolutionary Theory of Economic Change. Harvard University Press, Cambridge (1982)
33. Osterweil, L.J.: Unifying Microprocess and Macroprocess Research. In: Li, M., Boehm, B., Osterweil, L.J. (eds.) SPW 2005. LNCS, vol. 3840, pp. 68–74. Springer, Heidelberg (2006)
34. Oz, E.: When Professional Standards are Lax: The CONFIRM Failure and its Lessons. Commun. ACM 37(10), 29–36 (1994)
35. Pettigrew, A.M.: Longitudinal Field Research on Change: Theory and Practice. Organ. Sci. 1(3), 267–292 (1990)
36. Pinto, J.K., Slevin, D.P.: Critical Success Factors Across the Project Life Cycle. Project Manage. J. 19(3), 67–74 (1988)
37. Polanyi, M.: The Tacit Dimension. Peter Smith, Gloucester (1966)
38. Prahalad, C.K., Hamel, G.: The Core Competence of the Corporation. Harvard Bus. Rev. 68(3), 79–91 (1990)
39. Purcell, K.J., Gregory, M.J.: The Development and Application of a Process to Analyze the Strategic Management of Organizational Competences. In: Sanchez, R., Heene, A. (eds.) Implementing Competence-Based Strategies, pp. 161–197. JAI Press, Stamford (2000)
40. Quinn, J.B., Hilmer, F.G.: Strategic Outsourcing. Sloan Manage. Rev. 35(4), 43–55 (1994)
41. Ravichandran, T., Lertwongsatien, C.: Effect of Information Systems Resources and Capabilities on Firm Performance. J. Manage. Inform. Syst. 21(4), 237–276 (2005)
42. Reed, R., DeFillippi, R.J.: Causal Ambiguity, Barriers to Imitation and Sustainable Competitive Advantage. Acad. Manage. Rev. 15(1), 88–102 (1990)
43. Sambamurthy, V., Kirsch, L.J.: An Integrative Framework of the Information Systems Development Process. Decision Sci. 31(2), 391–411 (2000)
44. Sauer, C.: Deciding the Future for IS Failures. In: Currie, W., Galliers, R. (eds.) Rethinking Management Information Systems, pp. 279–309. Oxford University Press, Oxford (1999)
45. Schmidt, R., Lyytinen, K., Keil, M., Cule, P.: Identifying Software Project Risks: An International Delphi Study. J. Manage. Inform. Syst. 17(4), 5–36 (2001)
46. Stinchcombe, A.L.: Social Structure and Organizations. In: March, J.G. (ed.) Handbook of Organizations, pp. 142–193. Rand McNally College Publishing Company (1965)
47. Szulanski, G.: Exploring Internal Stickiness: Impediments to the Transfer of Best Practice within the Firm. Strategic Manage. J. 17(S2), 27–43 (1996)
48. Teece, D.J., Pisano, G., Shuen, A.: Dynamic Capabilities and Strategic Management. Strategic Manage. J. 18(7), 509–533 (1997)
49. Turner, J.R., Keegan, A., Crawford, L.: Learning by Experience in the Project-Based Organization. In: Proceedings of the PMI Research Conference, pp. 445–456. PMI, Sylva (2000)
50. Winter, S.G.: The Satisficing Principle in Capability Learning. Strategic Manage. J. 21(10–11), 981–996 (2000)
51. Yourdon, E.: Death March. Prentice Hall PTR, Upper Saddle River (1997)
52. Zhu, L., Osterweil, L.J., Staples, M., Kannengiesser, U., Simidschieva, B.I.: Desiderata for Languages to be Used in the Definition of Reference Business Processes. J. Softw. Informatics 1(1), 37–65 (2007)