Markus Alsleben
Creating Dynamic Capabilities: R&D Network Management for Globally Distributed Research and Development in the Software Industry
Creating Dynamic Capabilities: R&D Network Management for Globally Distributed Research and Development in the Software Industry
Copyright © 2012 by Markus Alsleben
Cover design by Markus Alsleben; cover graphic by Aleksandar Velasevic
Cover graphic and illustrations licensed from iStockphoto.com
1st Edition November 2012
All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means including photocopying, recording, or information storage and retrieval without permission in writing from the author.
ISBN-13: 978-1480121997
ISBN-10: 1480121991
Book Website: www.alsleben.com
Email: [email protected]
Printed in the U.S.A.
PREFACE

The journey that led to the creation of this book started more than six years ago in one of China's largest software R&D centers, the SAP Labs China in Shanghai. Being responsible for strategic planning and business development, a major part of my role was to attract development work to China. Despite the highly skilled and motivated Chinese colleagues, this was often a tough sell, considering the high level of professionalism of senior developers in the German headquarters and the presence of another large-scale R&D center in a low-cost country that SAP had already established several years earlier in Bangalore, India. While my intention was to provide development managers with full transparency about the capabilities of developers and the quality of facilities and infrastructure to allow informed decisions, I often felt disquieted by how location decisions were actually made and hence how globalization occurred. In my opinion, many of those decisions did not provide the best solution for the organization as a whole.

This disquiet represented the starting point of my six-year research into the phenomenon of globally distributed software development and into methods to improve allocation and overall R&D network design on a global scale. It was my desire to develop a thorough understanding of the globalization phenomena that, fueled by the worldwide demand for software systems and the industry's search for talented developers, have led to an increasing global dispersion of software research and development in recent years. The organizational transformation and change initiative at SAP provided a perfect platform for this research, with access to key stakeholders and resources. I would therefore like to take the opportunity to thank everybody at SAP who contributed to or supported this research. Without this support, the open discussions and the seamless collaboration, such a comprehensive case study and its findings would not have been possible.

While the journey took longer than initially anticipated, I believe the time was well spent, as it yielded richer descriptions and insights for application in practice and for future research into global R&D network management and optimization. While this book is primarily written for an academic audience, I believe that the theoretical foundations of globally distributed R&D in the software industry and the findings from the longitudinal case study provide managers of global R&D with a template and detailed solutions for global organizational transformation and change to improve the setup of global R&D organizations across industries.
The doctoral research at the City University of Hong Kong provided an outstanding framework to guide my academic aspirations and the intended research. I would therefore like to express my sincere appreciation to my mentor and supervisor Prof. Kuldeep Kumar for his guidance, encouragement and continuous support throughout this research. Prof. Kumar not only shared his substantial experience of globally distributed work, but also brought me in contact with inspiring people from all over the world as part of a community of practitioner academics who, through rigorous research, bring real-life experiences and thus relevance into the academic domain.

This work would never have been possible without the support of my family. I would like to thank my wife and children for their patience during this long endeavor, which required many long night and weekend shifts.
Markus Alsleben
ABSTRACT

The sustainable long-term growth and survival of a corporation can only be achieved through the ongoing creation of innovative products and services. This is especially true in today's global software industry, which is characterized by ever-shorter product cycles, intense global competition and knowledge-intensive research and development (R&D). The effective organization and management of a corporate R&D function in a software enterprise has thus become a key success factor for sustainable competitive advantage.

This dissertation designs, implements and evaluates an organizational transformation and its supporting framework for a globally dispersed R&D organization in the software industry as part of a two-year longitudinal case study at SAP, one of the world's largest software companies. Based on the action design research methodology, this cross-disciplinary participatory study draws from the areas of strategic management, global R&D management, organizational theory and organizational change management to take a systematic approach to investigating this phenomenon and to obtain a normative model of global R&D organizational enhancement, thus informing managerial practice related to the improvement of global R&D networks. Although it focuses on global R&D network improvement in the software industry, the major research outcomes can likely be generalized and applied to other industries that follow a globally distributed R&D model.

The findings of this thesis indicate that the successful organizational transformation of a global R&D network requires that R&D network management be established as a new dynamic capability to achieve and secure competitive advantages in high-velocity environments. Dynamic capabilities are especially critical in responding to the frequent disruptive innovations, mergers and acquisitions and new business models that characterize the global software industry, as they allow for the ongoing reconfiguration of the firm's tangible and intangible assets. In this study, the dynamic capability of R&D network management is understood to comprise the key abilities of sensing, seizing and transforming in the tradition of Teece et al. (2009). The case study conducted as part of this thesis describes how the enterprise under study established these key abilities through an organizational intervention. First, sensing incorporates the definition and frequent updating of a location strategy for each of the three archetype business models (product, customer, infrastructure) found inside each company, including guidelines and policies for global work design and the allocation of R&D resources. Second, seizing provides geographical information system designs and the location-specific consolidation of internal and external key performance indicators to create global organizational transparency and to identify global location portfolio trends at an early stage. Through the action design research, which the author actively conducted with a dedicated project team, the enterprise under study established the ability to seize by setting up a dedicated location management organization with cross-functional team members to run an integrated location management process in which individual location decisions are made and reviewed in accordance with location strategy and guidelines. The enterprise's ability to transform itself in the global R&D network management context was achieved by defragmenting software development teams around the world, employing communication efficiency as the efficiency criterion and principles of lean work organization as major design criteria, and by rationalizing historical locations that no longer fit the firm's location strategy.

This study makes a significant contribution to the strategic management field and its theory of dynamic capabilities by describing a model for implementing dynamic capabilities through organizational interventions. The study identifies the modular design of R&D networks as the next evolutionary stage in R&D globalization and shows how a fragmented global R&D organization was transformed into a modular global R&D function, an organizational form with significantly lower coordination costs. As global R&D networks continue to actively manage and reconfigure their global location portfolios and team allocations, this study shows how global R&D network management can be introduced as a dynamic capability through organizational interventions such as the one implemented at SAP.
TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION
1.1. Context of research ..... 1
1.2. Problem statement and significance ..... 6
1.2.1. Initiating Problems and their Significance in the Context of the Researched Enterprise ..... 6
1.2.2. Limitations of previous research ..... 9
1.3. Research Objective and Research Questions ..... 11
1.4. Research Purpose ..... 12
1.4.1. Personal Purpose ..... 12
1.4.2. Practical Purpose ..... 13
1.4.3. Academic Purpose ..... 13
1.5. Thesis Organization ..... 15

CHAPTER 2 LITERATURE ABOUT THE PHENOMENON
2.1. R&D in the Context of the Software Industry ..... 17
2.2. The Organization of Research and Development ..... 24
2.3. The Software Artifact ..... 27
2.4. Software Development ..... 28
2.4.1. Software Engineering ..... 31
2.4.2. Software Development Methodology ..... 33
2.4.3. Principles of Software Engineering ..... 38
2.4.4. Software Architectures ..... 41
2.4.5. Organizational Considerations for Software Development ..... 43
2.4.6. Economic Considerations in Software Development ..... 47
2.5. Globally Distributed Software Development ..... 53

CHAPTER 3 THEORETICAL UNDERPINNINGS
3.1. Organizational Theory ..... 62
3.1.1. Work Design ..... 65
3.1.1.1. Classic work design theory ..... 65
3.1.1.2. Work Design in the Context of Collocated Software Development with Static Work Division ..... 74
3.1.1.3. Work Design in the Context of Globally Distributed Software Development with Static Work Division ..... 86
3.1.2. Organization Design ..... 92
3.1.3. Organization Design of a Global R&D Organization ..... 94
3.1.4. Networks and the Network Organization ..... 100
3.1.5. R&D Networks ..... 106
3.1.6. R&D Network Improvements ..... 110
3.1.7. Organizational Transformation and Change ..... 116
3.1.8. Organizational Learning ..... 129
3.2. Strategic Management ..... 137
3.2.1. Historical Perspective and Definition ..... 138
3.2.2. Emergent Strategies and the Resource Allocation Process ..... 141
3.2.3. Foundations of strategy – from the market based view (MBV) to the resource based view (RBV) ..... 147
3.2.4. Strategic management in high velocity markets ..... 149
3.3. Internal Dynamics ..... 164
3.3.1. Informal Order ..... 164
3.3.2. Political Processes in Organizations ..... 165
3.3.3. Cultural Aspects of Globally Distributed Work ..... 171
3.4. External Dynamics ..... 180
3.4.1. Globalization ..... 180
3.4.2. Disruptive innovations ..... 183
3.4.3. Mergers and Acquisitions ..... 185

CHAPTER 4 INITIAL SOLUTION ARCHITECTURE
4.1. Scope and boundaries of this research ..... 188
4.2. Elements of the Initial Solution Architecture ..... 191

CHAPTER 5 RESEARCH METHODOLOGY
5.1. The Pragmatic Epistemological Stance ..... 199
5.2. Research Methodology ..... 205
5.2.1. Action Research ..... 205
5.2.2. Design science research ..... 214
5.2.3. Action Design Research ..... 218
5.2.4. Case Study Research ..... 224
5.2.5. Data Acquisition and Analysis ..... 226
5.2.6. Research Quality and Validity ..... 233
5.2.7. Ethical research considerations ..... 242
5.2.8. Summary – Research Methodology ..... 247

CHAPTER 6 CASE STUDY
6.1. Introduction to the Enterprise SAP Under Study ..... 252
6.1.1. Drivers of Globalization at SAP ..... 252
6.1.2. The SAP Labs Network ..... 259
6.1.3. Organizational Features of SAP ..... 266
6.1.4. The Evolution of the COO Board Area ..... 273
6.1.5. Evolution of the Location Strategy and Management Project ..... 278
6.1.6. Dynamics in the Project Environment ..... 280
6.2. Problem Formulation Phase of the SAP LSM Project ..... 283
6.2.1. Location Management Problems at SAP ..... 283
6.2.2. Root Causes of Location Management Problems ..... 289
6.2.3. Lessons Learned from Previous Projects ..... 297
6.2.4. Stakeholder Analysis ..... 300
6.2.5. Project Organization ..... 303
6.2.6. Reflection and Learning – Problem Formulation ..... 304
6.3. Building, Intervention and Evaluation Cycles / Reflection ..... 306
6.3.1. Strategy & Processes ..... 307
6.3.1.1. BIE – Strategy & Processes ..... 307
6.3.1.2. Reflection – Strategy & Processes ..... 322
6.3.2. Data and ICT Tools ..... 326
6.3.2.1. BIE – Data and ICT Tools ..... 326
6.3.2.2. Reflection – Data and ICT Tools ..... 333
6.3.3. Defragmentation ..... 337
6.3.3.1. BIE – Defragmentation ..... 337
6.3.3.2. Reflection – Defragmentation ..... 352
6.3.4. Quick Wins ..... 357
6.3.4.1. BIE – Quick Wins ..... 357
6.3.4.2. Reflection – Quick Wins ..... 359

CHAPTER 7 FORMALIZATION OF LEARNINGS
7.1. Classes of Problems and Classes of Solutions ..... 363
7.2. Share Outcomes and Assessment with Practitioners ..... 368
7.3. Generalizing from ADR Research ..... 371
7.4. Formalization of Learning in Light of Theory ..... 375
7.5. Conclusion ..... 388
7.6. Limitations of this research ..... 390
7.7. Directions for Future Research ..... 392

BIBLIOGRAPHY ..... 396
APPENDIX A: Informed Consent – Introduction ..... 434
APPENDIX B: Informed Consent – Interview ..... 439
TABLE OF FIGURES

Figure 1: SAP revenue per employee from 1988 to 2009; own calculations based on (SAP AG, 2010) ..... 7
Figure 2: Thesis Organization ..... 16
Figure 3: Overview of Research Questions and related Literature Review (own graphic) ..... 18
Figure 4: Main problems in creating and managing transnational R&D processes (Boutellier et al., 2008e) ..... 24
Figure 5: Five major trends drive the evolution of international R&D organizations (Gassmann & von Zedtwitz, 1999) ..... 25
Figure 6: Waterfall model with the software development phases (Royce, 1970) ..... 34
Figure 7: The Rational Unified Process (RUP) (Rational Software, 1998, p. 3) ..... 35
Figure 8: Spiral development model (Boehm, 1988) ..... 36
Figure 9: Iteration cloud metaphor of agile project management (Oestereich & Weiss, 2008, p. 3) ..... 37
Figure 10: Dependencies between software engineering principles (Balzert, 2009a, p. 49) ..... 39
Figure 11: Cohesion and coupling in modular design (own graphic) ..... 40
Figure 12: High level architecture overview of SAP ERP (R, 2008) ..... 42
Figure 13: Quality model for external and internal quality (Al-Qutaish, 2009) ..... 49
Figure 14: Sources of inspiration for organizational theory (Hatch & Cunliffe, 2006, p. 6) ..... 64
Figure 15: Evolution of work design in software development ..... 66
Figure 16: Classic taxonomy of task interdependence (Kumar et al., 2009, p. 647) ..... 67
Figure 17: Task analysis and task synthesis (Brauchler & Landau, 1998) ..... 71
Figure 18: Intra-organizational communication paths (Kosiol, 1962) ..... 73
Figure 19: Sticky forms of task interdependence (Kumar et al., 2009, p. 653) ..... 79
Figure 20: An illustration of the relative importance of team interface management and project structuring and support during the concept and the development phases (Hoegl & Weinkauf, 2005) ..... 83
Figure 21: Independent clustering of materials interactions for climate control system (Pimmler & Eppinger, 1994) ..... 84
Figure 22: The CAGE framework at the country level (Ghemawat, 2007) ..... 87
Figure 23: The CAGE framework at the industry level: correlates of sensitivity (Ghemawat, 2007) ..... 89
Figure 24: Distributed work environments and sticky task interdependencies ..... 92
Figure 25: Unaligned organizational design (Galbraith et al., 2002, p. 5) ..... 93
Figure 26: Four phases of organization design (Galbraith et al., 2002, p. 10) ..... 94
Figure 27: Methodical framework for the design of a global R&D organization (Gerpott, 1991, p. 61) ..... 95
Figure 28: A small network with eight vertices and ten edges (Newman, 2003) ..... 101
Figure 29: Abstraction of the Konigsberg Seven Bridge Problem to a network diagram (Newman et al., 2006) ..... 101
Figure 30: Continuum of network structures (Siebert, 1991) ..... 105
Figure 31: Basic global R&D network model (Gerpott, 1991) ..... 107
Figure 32: The metanational R&D process (Doz et al., 2001) ..... 109
Figure 33: Communication-economic network model (Fisch, 2003, p. 1386) ..... 115
Figure 34: Change Process according to Lewin (Hatch & Cunliffe, 2006) ..... 120
Figure 35: Greiner's six phased dynamics of successful organizational change (Greiner, 1967) ..... 121
Figure 36: Kotter's eight step model of organizational transformation (Kotter, 2007, p. 99) ..... 123
Figure 37: A theoretical model of the dynamics of planned organizational change (Robertson et al., 1993, p. 621) ..... 124
Figure 38: The five phases of Growth (Greiner, 1972, p. 41) ..... 126
Figure 39: The self-design strategy (Cummings & Worley, 2005, p. 494) ..... 128
Figure 40: The complete cycle of choice (March & Olsen, 1979, p. 13) ..... 130
Figure 41: Single, double loop and meta learning; own graphic based on (Argyris, 1994; Visser, 2007) ..... 131
Figure 42: How organizational learning affects organizational performance (Cummings & Worley, 2005, p. 499) (based on the study of Snyder & Cummings, 1998) ..... 132
Figure 43: Strategic planning process (Welge & Al-Laham, 2008, p. 186) ..... 139
Figure 44: Process of strategy formulation and implementation (Christensen & Dann, 1999, p. 4) (based on (Bower, 1986)) ..... 143
Figure 45: A research model of dynamic capabilities (Wang & Ahmed, 2007) ..... 151
Figure 46: Components of dynamic capabilities (own graphic modeled on the concept of (Teece et al., 1997)) ..... 153
Figure 47: Foundations of dynamic capabilities and business performance (Teece, 2009, p. 49) ..... 154
Figure 48: Learning, dynamic capabilities and operational routines (own graphic based on the model of (Zollo & Winter, 2002)) ..... 158
Figure 49: A dual-process model of capability dynamization (Schreyögg & Kliesch-Eberl, 2007) ..... 160
Figure 50: A model of the politics of strategic decision making in high-velocity environments (Eisenhardt & Zbaracki, 1992) ..... 168
Figure 51: A variance model of the political aspects of strategic decision making (Nutt & Wilson, 2010) ..... 171
Figure 52: Levels of culture and their interaction (Schein, 1984) ..... 175
Figure 53: Correlation between dimensions of organizational culture in two software companies and quality/productivity, n=464 (Mathew, 2007) ..... 177
Figure 54: The eclectic paradigm of international production (own graphic based on the framework of (Dunning, 1988)) ..... 181
Figure 55: The basic mechanism of internationalization (Johanson & Vahlne, 1977, p. 26) ..... 182
Figure 56: The impact of sustaining and disruptive technological change (Christensen, 2003, p. xvi) ..... 184
Figure 57: Initial solution architecture for this study (own graphic) ..... 190
Figure 58: Expanded paradigm contrast table comparing five points of view (Teddlie & Tashakkori, 2009, p. 87) ..... 201
Figure 59: Action research interacting spiral (Stringer, 2007, p. 9) ..... 207
Figure 60: The SSM learning cycle (Checkland, 1999, p. 13) ..... 210
Figure 61: The general form of a purposeful activity model (Checkland, 1999, p. 8) ..... 211
Figure 62: Design science research cycles (Hevner, 2007, p. 88) ..... 215
Figure 63: Origin of action design research principles (own graphic based on the studies of (Davison et al., 2004; Hevner et al., 2004; Sein et al., 2011)) ..... 218
Figure 64: Action design research: stages and principles (Sein et al., 2011) ..... 219
Figure 65: The generic schema for an organization-dominant BIE cycle (Sein et al., 2011) ..... 221
Figure 66: Data analysis in qualitative research (Creswell, 2007, p. 185) ..... 230
Figure 67: Coding Framework of the SAP LSM Case Study (own graphic) ..... 233
Figure 68: Integrated research design quality framework of this study (own graphic) ..... 234
Figure 69: Detailed time line of events of the longitudinal ADR case study ..... 250
Figure 70: Details of the interviews conducted at SAP ..... 251
Figure 71: SAP global R&D locations as of June 2010 ..... 259
Figure 72: Number of locations for each of SAP's major product development programs (SAP AG, 2006) ..... 262
Figure 73: Coffee corner at the SAP campus (own photo) ..... 268
Figure 74: Bridges connecting main buildings on the SAP campus (own photo) ..... 270
Figure 75: Internal satirical illustration of the 'green traffic light syndrome' at SAP (adapted from "Dilbert" by Scott Adams, source unknown, translation by author) ..... 273
Figure 76: Project organization and organizational arrangements surrounding the SAP LSM Project (own graphic) ..... 279
Figure 77: Dynamics in the LSM Project Environment (own graphic) ..... 280
Figure 78: Voice of Customer Analysis (n=43, multiple choices could be given; own graphic) ..... 302
Figure 79: Split of LSM Project into four distinct Work Streams (own graphic) ..... 303
Figure 80: Final Version of the SAP Location Strategy Cycle ..... 321
Figure 81: Final version of the complete location strategy and management process ..... 322
Figure 82: SAP location information system architecture (own graphic) ..... 326
Figure 83: Screenshot of the SAP location Dashboard ..... 330
Figure 84: Model of the globally dispersed software development process at SAP (own graphic) ..... 339
Figure 85: Conceptual framework of cost components in globally distributed development over multiple locations (own graphic) ..... 341
Figure 86: Estimated effectiveness of distributed development over several locations (own graphic based on project data) ..... 347
Figure 87: Estimated effectiveness of distributed development over several time zones (own graphic based on project data) ..... 348
Figure 88: Effectiveness estimation of distributed development over several time zones (own graphic based on project data) ..... 350
Figure 89: Defragmentation process (own graphic) ..... 352
Figure 90: Generalization of problem/solution instances ..... 364
Figure 91: Presentation of project artifacts to project stakeholders, Walldorf, 23 February 2010 (own picture) ..... 368
Figure 92: Final solution architecture process view; own graphic drawing from (Thom & Wenger, 2010, p. 22) ..... 372
Figure 93: Decision process for the acquisition of dynamic capabilities (own graphic) ..... 377
Figure 94: Foundations of the dynamic capability R&D network management (own graphic based on the study of (Teece, 2009, p. 49)) ..... 382
Figure 95: Evolution of R&D organizations (own graphic based on the study of (Gassmann & von Zedtwitz, 1999)) ..... 385
ABBREVIATIONS

ADR – Action Design Research
ASEAN – Association of Southeast Asian Nations
BRIC – Brazil, Russia, India and China
CAGE – Cultural, Administrative, Geographic and Economic Distance
CAR – Canonical Action Research
CASE – Computer Aided Software Engineering
COO – Chief Operating Officer
ERP – Enterprise Resource Planning
EU – European Union
GDP – Gross Domestic Product
GDSD – Global Distributed Software Development
IBP – Integrated Business Planning
ICT – Information and Communication Technology
ISO – International Organization for Standardization
LoC – Lines of Code
LSS – Large Scale Software
M&A – Merger and Acquisition
MNC – Multinational Corporation
NAFTA – North American Free Trade Agreement Zone
NATO – North Atlantic Treaty Organization
NIH – Not Invented Here
OECD – Organization for Economic Co-operation and Development
R&D – Research and Development
SMB – Small and Medium-sized Business
SSM – Soft Systems Methodology
TCO – Total Costs of Ownership
USA – United States of America
CHAPTER 1 INTRODUCTION

1.1. Context of research
The sustainable long-term growth and survival of a corporation can only be achieved through the ongoing creation of innovative products and services. This is especially true in today's global software industry, characterized by ever-shorter product cycles, intense global competition and knowledge-intensive research and development (R&D). The effective organization and management of a corporate R&D function in a software enterprise thus becomes a key success factor for a sustainable competitive advantage (Boutellier, Gassmann & Zedtwitz, 2008c). The objective of a corporate R&D organization is to support the innovation process effectively and efficiently so that the results can be sent to market as rapidly as possible (Großmann, 1994). An effective R&D organization allows companies to create a sustainable competitive advantage and ensures their long-term success (Boutellier, Gassmann & Zedtwitz, 2008c).

Originally, corporate R&D featured dedicated R&D centers in company headquarters. In the last two decades, especially among multinational high-tech companies, the organization and management of corporate R&D activities, which were once limited to the home countries of multinational companies (MNCs), have become globally distributed. This global distribution is the result of the search for talented candidates at low cost, collaboration with international research communities, the distribution of risk, and support for production or sales activities in emerging or industrialized countries (Agerfalk, Fitzgerald, Holmstrom Olsson & Conchuir, 2008; Boutellier, Gassmann & Zedtwitz, 2008b; Brockhoff, 1998; Gerpott, 1991; Kuemmerle, 1997; Kuemmerle, 1999). As Picot and Ashkenas pointed out (Ashkenas, 2002; Picot, Reichwald & Wigand, 1996), the organization's R&D has become boundaryless, a phenomenon that the literature review in chapter two will explicate. In recent years, this transition of the R&D organization from local to global has occurred in line with recent changes to other corporate functions such as manufacturing and sourcing. MNCs today increasingly conduct research and development activities in a globally dispersed setup, often with decentralized responsibility for R&D activities (Boutellier et al., 2008e).
The factors and preconditions that led to this global dispersion of R&D activities have been extensively researched (Belderbos, Lykogianni & Veugelers, 2008; Brockhoff, 1998; Gerpott, 2005; Khurana, 2006). Noteworthy precursors of this development were advances in technology, the opening and rapid development of previously closed economies (such as those of India, China and Eastern Europe), the stronger modularization of products, and advances in methodologies for coordinating and controlling globally dispersed R&D activities, such as through a combination of ICT support and managerial practices (Argyres, 1999; Hoegl & Weinkauf, 2005).

A globally dispersed R&D setup can provide substantial benefits for knowledge generation. It poses, however, significant challenges for the overall effectiveness of the corporate R&D organization. Increased coordination efforts over multiple time zones, cultural differences, knowledge management, and varying demographics and levels of R&D staff seniority are examples of the challenges that managers of globally distributed R&D must address (Boutellier, Gassmann & Zedtwitz, 2008a; Conchuir, Helena, Par & Brian, 2006; Helena, Eoin, Par & Brian, 2006; Mistrík, Grundy, Hoek & Whitehead, 2010). In addition to these challenges, the organizational structure of the R&D function in MNCs is often the complex result of long-term organic and inorganic growth, which makes effective global R&D management even more difficult. Boutellier describes this as "jungle-growth" (Boutellier et al., 2008c) to indicate that the currently evolved structure is often the result of various opportunistic short-term decisions that may not have been made in accordance with a long-term strategic plan (Gerpott, 1991; Kuemmerle, 2005).

Questions about organizational structure have a long history in organizational research and organizational theory. An organizational structure can be defined as "the sum total of the ways in which it divides its labor into distinct tasks and then achieves coordination among them" (Mintzberg, 1979, p. 2). Adam Smith's (Smith, 1776) account of the division of labor, Frederick W. Taylor's (1911) concept of scientific management to achieve higher efficiencies, and Henri Fayol's (1917) general organizational principles are examples of classic organizational theory. Organizational theory at that time was mainly concerned with creating structures for large industrial corporations that produced large volumes of physical products for growing markets. The organizational structures in those days were mainly hierarchical, often organized either by function or by division.
With recent dynamic changes in the business environment, companies have been examining ways to further optimize their organizational structure. Organizational structures should thus not be seen as static constructs, but as the dynamic results of ongoing and never-ending searches for an efficient division of labor and coordination (Picot, Dietl & Franck, 2008; Schreyögg, 2008). As companies search for new approaches, some eventually arrive at a more effective way of arranging and coordinating resources (Miles & Snow, 1992), such as through network organizations (Siebert, 1991) or virtual organizations (Mowshowitz, 1994). These organizational structures have emerged with the promise of better suiting a globally competitive and ever-changing environment through the flexible recombination of resources rather than the previous hierarchical arrangements.

Similar trends also characterize dynamic changes in the corporate R&D function, which is required to deliver innovative products and services within ever-shorter timeframes and with increasingly complex products and environments. It should therefore be no surprise that many global corporations have evolved the organizational structures of their global R&D function into the form of a global R&D network (Gassmann & von Zedtwitz, 1999). A global R&D network is defined as an organizational structure consisting of "many interdependent R&D units that are closely interconnected by means of flexible and diverse coordination mechanisms" (Gassmann & von Zedtwitz, 1999, p. 243). In contrast to a traditional centralized R&D organization, in which foreign R&D units are often used as listening posts or extended workbenches without ownership of research or product topics, in the R&D network organization foreign R&D units typically assume strategic roles that affect the entire company through the ownership of technologies or component development based on their individual capabilities. R&D networks thus possess benefits similar to those of the network organization (Siebert, 1991) or the virtual organization (Mowshowitz, 1994), because "flexible connections and relations between network partners enable better utilization of available competencies, contribute to the realization of specialization and scale effects, thus reduce the risk of duplicate development" (Gassmann & von Zedtwitz, 1999, p. 244).

The network as a new organizational structure can also be applied to the R&D functions of the global software industry, a relatively new industry that has already seen major paradigm shifts in work organization during the last four decades
(Campbell-Kelly, 2003), such as the fundamental departure from the waterfall-based paradigm of software engineering in the mainframe age to iterative, agile and lean software development today (Poppendieck & Poppendieck, 2006; Rajlich, 2006). This paradigm change has resulted in a transition from the typical organizational design along task differentiation to a more fluid organizational design.

Software development defies many of the assumptions that classic organization theory presupposed for work organization. For instance, Taylor suggested separating conception from execution and widening the application of the division of labor (Schreyögg, 2008a; Taylor, 1911). Taylor's principles, however, have very limited application in software development due to the high degree of skill, knowledge and collaboration required to create the software artifact (Allen, James & Gamlen, 2007; Whitehead, Mistrík, Grundy & Hoek, 2010). Software development is also non-repetitive (Mistrík et al., 2010) and non-deterministic. The software design process itself can create modified or new requirements that necessitate design changes. Software developers thus often use heuristics rather than clear design or work instructions (McConnell, 2004). It is important to note that despite such fundamental discrepancies between classic organizational theory and the special characteristics of software, development presently occurs in traditional hierarchical structures such as functional, divisional and matrix organizations.

R&D networks in the global software industry were initiated through the recent global dispersion of R&D in general and the evolution of global locations and their capabilities in particular over the last two decades. Recent academic research into R&D networks has already provided some insights into their genesis and structure (Allen et al., 2007; Gassmann & von Zedtwitz, 2003; Gassmann & von Zedtwitz, 1999; Hellström, Eckerstein & Helm, 2001; Perks & Jeffery, 2006). Additional research into the growth of R&D networks, whether through the allocation of new research topics and teams among research locations or by setting up new research locations, has yielded models predicting dispersion and allocation (Fisch, 2001; Fisch, 2003).

As many MNCs have dispersed their R&D activities around the world and established global R&D networks, they must ensure that they maintain the efficiency and competitiveness of their global R&D organization and take steps to improve it. Gerpott identified seven major challenges that require global R&D organizations to reevaluate their global dispersion and establish an integrated framework for global R&D location decisions (Gerpott, 1991, pp. 53-57):
• Changes in the overall business structure;
• Structures grown by the R&D organization, often in an uncoordinated manner;
• Consolidation of international acquisitions/joint ventures;
• Pressure to shorten innovation cycles;
• Establishing a presence close to centers where new technologies are developed ("pockets of innovation");
• Being in the vicinity of customers as a competitive advantage;
• Circumventing politically motivated market barriers.

Gerpott's contribution is of critical importance, especially for large complex organizations, which often have difficulty changing and reacting appropriately to new challenges. Large corporations often try to address problems and challenges through isolated patchwork efforts, while shunning large restructurings or modifications. As Miles more specifically notes:

"Research over the past decade has increasingly confirmed what managers and organizational theorists have long understood—organizations, particularly large, complex firms, have a difficult time responding to changes in their competitive environment. Instead of adapting incrementally as market and/or technological changes occur, managers tend to wait until environmental demands accumulate to crisis proportions before attempting a response, and then they often fail. When managers do behave incrementally, they frequently make patchwork alterations to the existing organization as each new market or technological shift occurs but without considering the ultimate systemic impact. Such adjustments gradually move the organization away from its core structural logic, creating an idiosyncratic system highly dependent on a few key individuals or units to function" (Miles & Snow, 1992, pp. 69-70).

While practitioners have addressed the problem of systemic global R&D network improvement in isolated cases (Gerpott, 1991), rigorous academic research has so far failed to yield insights into how to improve global R&D networks or a normative model of the same. One potential reason for this unsatisfactory situation is that such major systemic restructurings of R&D network structures occur only infrequently or gradually over time, thus reducing the opportunity to
observe the phenomenon and conduct rigorous research. Another explanation for the lack of systematic research may be that few companies have reached the stage of reorganizing their R&D network after a long phase of R&D network growth, as the first signs of consolidation and reorganization have appeared only recently (Ricknäs, 2008).

This thesis addresses this lack of past research through the systematic study of global R&D network improvement in a large multinational software company, thus enabling the development of a normative model, including strategies and a framework supporting global R&D network management and improvement, based on rigorous academic research. The restructuring or reorganization of a global R&D network organization is a substantial undertaking, especially for large MNCs, and may take several years from initial conception to implementation until a favorable organizational structure is established and measurable positive impacts are achieved. The research design must consider such complexity and the duration of the change efforts required. Therefore, this thesis employs an action design research approach to identify the methods, frameworks and resources necessary to successfully transform a global R&D network in one of the largest software companies in the world.

1.2. Problem statement and significance

1.2.1. Initiating Problems and their Significance in the Context of the Researched Enterprise
Typically, global software companies such as SAP, Microsoft and Oracle are required to develop large-scale software (LSS)¹ products in ever-shorter cycles to accommodate changing customer, statutory and technological requirements. To meet these challenges, MNCs in the global software industry have to design and maintain an effective business strategy and global organizational structures for R&D that allow for the effective division of labor and the coordination of software development projects across multiple geographies. In this research we focus on SAP, a German-headquartered business software MNC with multiple globally distributed locations in Europe, the Americas, East Asia, South Asia and Australia.
¹ LSS refers to delivered software systems developed by a team of developers, intended to be in sustained operation for a long time, and typically representing 50K-500K+ source code statements (Scacchi, 1995, p. 38).
SAP, after two decades of continuous employee and revenue growth, was beginning to experience productivity and agility problems in its globally dispersed R&D organization in 2008. The average revenue per employee² had become almost flat after a decade of continuous improvement (see Figure 1). At the same time, it was observed that development cycles for SAP's core ERP product had become longer over time, with a cycle now taking more than 1.5 years from beginning to end, thus making it more difficult to react to changing customer needs or technological developments. To address these and other problems, the SAP board appointed a Chief Operating Officer, who in April 2008 initiated a company-wide initiative, the "COO Program and Projects".

[Figure 1 here: line chart titled "No economies of scale after 1997"; y-axis: Employee Revenue in thousand EUR; x-axis: years 1988 to 2009]
Figure 1: SAP revenue per employee from 1988 to 2009; own calculations based on (SAP AG, 2010)

² The annual total revenue of SAP divided by the total number of employees in the corresponding year.
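Spelled out, the measure in footnote 2 is the simple ratio below; the 2009 cross-check is an approximation using figures of the order reported in SAP's published annual accounts (roughly EUR 10.7 billion in total revenue and about 47,500 employees), not data produced by this study:

\[
\text{revenue per employee}_t \;=\; \frac{\text{total revenue}_t}{\text{number of employees}_t},
\qquad
\frac{\text{EUR } 10.7 \text{ billion}}{47{,}500} \;\approx\; \text{EUR } 225 \text{ thousand for } t = 2009,
\]

which is consistent with the flat right-hand end of the curve in Figure 1.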
As one project in the COO P&P initiative, the Location Strategy and Management Project was a key board-sponsored project initiated to address urgent issues in the global R&D organization, especially those related to the division and allocation of development projects among the global R&D locations. Past experience at SAP had shown that a lack of global strategies, or, more importantly, of an overall common policy framework, led to the uncoordinated allocation of development projects and teams to locations, with most development projects being spread out over three or more locations around the world, making effective coordination and the software development itself a challenging task. In the context of this thesis, a location strategy should be understood as:
The strategic framework of a company, including comprehensive information about how the development of different software components is divided and assigned to teams in different geographies, considering software product requirements and specific location characteristics. A location strategy should also include a governance structure with principles guiding the development work and processes to enforce it.

The research presented here identifies which foundations are required: processes, the consideration of dynamic factors, and data. The lack of a common policy framework resulted in several problems, including the following:

• A low level of effectiveness among global development teams due to high communication overheads and friction in the development process across multiple locations, leading to high overall software development costs;
• Products being developed not according to specifications and global standards, as a consequence of inter-site misunderstandings and cultural differences. This problem was further amplified by inexperienced management, as well as a lack of global standards and processes;
• Conflicting ad hoc business policies such as sudden starts and halts in hiring through "head count freezes", policies of primarily hiring for open positions in low-cost locations, and strong financial pressure to shift workforces rapidly to low-cost countries despite ongoing commitments to the workforce and product release schedules;
• A lack of information on location options, including the financial, human resource and macroeconomic indicators used to evaluate and compare such options for the allocation of R&D activities among potential R&D centers.

In an initial interview, one development executive clearly described the R&D organization problems facing SAP for its future development:

"Given the way SAP runs things [in software development] and its [R&D organizational] structure, I think that what you have to realize is that this [organizational structure] is a really expensive way to run the company. If margin pressure becomes a critical success factor for us, we're going
to be at [a] disadvantage." (American SAP Executive Vice President of SRM Development)

In the problem formulation phase of the action design research methodology of this study, we will provide further evidence of this problem.
1.2.2. Limitations of previous research
R&D internationalization and the global dispersion of the R&D function have been intensively studied over the last three decades; the precursors giving rise to these developments are widely agreed upon (Belderbos et al., 2008; Brockhoff, 1998; Gerpott, 2005; Khurana, 2006; Thursby, 2006). Various models now exist to describe the "whys", or the causes of dispersion, from different theoretical angles. Kuemmerle (1997) differentiates between a home base augmenting approach, in which foreign R&D sites augment the knowledge of the home base, and a home base exploiting approach, in which existing home-based knowledge is localized for a particular foreign market to support production or market activities in that market. Furthermore, the "hows", or what organizational setups MNCs adopt to conduct global R&D activities, have also been studied; for example, the empirical research of Gassmann and von Zedtwitz (1999) into R&D organizational macro archetypes offers good insights into why companies have adopted one or more of the five models and how these models have evolved over time. Research aimed at understanding how MNCs transform their global R&D organizations to find a better fit and meet external challenges remains primarily the domain of practitioners (Gerpott, 1991) and suffers from a lack of rigor, especially in the global software industry.

The objective of this dissertation is to address the above limitation of R&D organizational transformation research and propose a rigorous academic research framework enabling us to better understand the global R&D transformation phenomenon and the factors that must be addressed to conduct a successful transformation, and thus to achieve greater efficiency through the reconfiguration of existing organizational resources, i.e., a more efficient division of labor and coordination of R&D activities.

The multiple stakeholders involved in the R&D organizational shaping process often have conflicting definitions of what constitutes an "efficient" R&D organization. These conflicting definitions are typically influenced by both their functional role in the organization and the strategies and goals of their
particular organizational unit. Therefore, the first challenge when transforming organizational structures is arriving at a common understanding of the definition of efficiency and identifying the other goals of the intended transformation. Only after a clear definition has been commonly agreed among all stakeholders can an effective model and methods for the dispersion of research activities and resource allocation be designed and implemented. Fisch (2001, 2002, 2003) defines three types of efficiencies actors pursue when internationalizing R&D and allocating R&D activities to various global locations: physical, decision and communication efficiencies. However, Fisch's framework, which is described in detail in section 3.1.6, has not been applied in the global software industry. This study thus empirically validates Fisch's framework in the context of a global software company to understand which efficiencies are major drivers in the transformation of global R&D organizations. The inherent difficulty of measuring productivity in software development makes this issue one of special interest (Balzert, 2009).

Other limitations of previous research relate to the dynamics surrounding the R&D organization, which prior studies in the globally dispersed R&D domain have not adequately addressed. Over time, R&D networks are subject to socioeconomic and internal organizational dynamics that change the size and structure of the existing R&D network. One major source of such dynamics is merger and acquisition (M&A) activity, which has been a considerable force behind the internationalization of R&D through the incorporation of external intellectual property and the takeover of established customer bases, product portfolios or R&D resources (Dörrenbächer & Wortmann, 1991; IBM, 2009; Oracle, 2009; Wate, 2009). In addition to M&A activities, internal organic growth across various geographies adds size and complexity to corporate R&D networks. It is not uncommon today for a specific R&D project to be allocated globally among multiple locations spanning several time zones. Such global dispersion, however, increases the need for coordination, trust building and efficient communication, thus increasing coordination costs when R&D activities are dispersed around the world (von Hippel, 1994; Williamson, Winter & Coase, 1991). These coordination costs often offset the benefits gained through globalization and the offshoring of R&D (Conchuir et al., 2006).

Prior research has focused more on new R&D activity dispersal decisions and less on improving existing networks and the allocation of R&D activities (Battin,
Crocker, Kreidler & Subramanian, 2001). The contemporary literature lacks a conclusive framework or normative model describing how MNCs can efficiently establish and continuously improve their global R&D networks, subject as they are to ongoing organizational changes and both macro- and microeconomic fluctuations. Literature related to this topic has been rather generic and not specific to any particular industry. In contrast, this thesis focuses on the global software industry to address these limitations and to provide new insights.
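The coordination-cost argument above can be made concrete with a simple combinatorial illustration (a standard observation, not a finding of this study): among $n$ collaborating locations, the number of potential pairwise communication paths grows quadratically with $n$,

\[
P(n) \;=\; \binom{n}{2} \;=\; \frac{n(n-1)}{2},
\qquad
P(3) = 3, \quad P(6) = 15.
\]

Doubling the number of locations involved in a development project from three to six thus quintuples the communication paths that must be maintained, which is one reason the benefits of dispersion are so easily offset by coordination costs.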
1.3. Research Objective and Research Questions
The objective of this study is to improve the globally dispersed R&D organization of an MNC operating in the global software industry. We do so by utilizing an action design research approach (Sein et al., 2011) in a project group set up within the enterprise. The focus of this study is on the spatial allocation of R&D activities, overall R&D network characteristics, and continuous R&D network improvement. As previously pointed out in section 1.2, these areas pose significant challenges for the researched enterprise and require further inquiry.

The study is aimed at building an understanding of the factors, strategic frameworks and governance models MNCs employ when transforming a global R&D organization. Furthermore, this thesis considers the dynamic nature of the high-tech software industry to verify how such strategies and models can be used to anticipate and react to dynamic changes, thus ensuring a continuous process of adaptation to new realities. The objective of this study is therefore to design, implement and evaluate a model of organizational transformation and its supporting framework for a globally dispersed R&D organization in the software industry. To accomplish this research objective, the following research question is addressed:

• How and under the application of what strategies, processes and governance models does the selected company SAP, operating in the software industry, establish, maintain and improve its global R&D organization?

To build a strategic global R&D management foundation, this study first inquires into the overall R&D portfolio, R&D strategy and R&D work design by seeking answers to the following sub-questions:
• Which R&D activities are being conducted, and how are these guided by corporate and R&D strategy?
• How are major R&D activities effectively divided given their structural requirements?
• How are R&D activities allocated among sites (R&D network nodes), and which criteria and guiding principles are employed?
• How can R&D networks be characterized, and to what internal and external dynamics are they subjected?
• How and according to what criteria can R&D networks be efficiently managed and continuously improved to support business strategy and operations, especially considering dynamic factors such as socioeconomic changes?
1.4. Research Purpose
1.4.1. Personal Purpose
With more than 12 years of experience in the information technology field gained while working for MNCs in Europe and Asia, I have worked as part of a project group tasked with defining the location strategy and governance model for SAP, the world's largest business software company. As an element of the action design research approach selected for this thesis, the study utilizes a real-life setting that allows conclusions to be drawn and insights gained to be fed back directly into the global transformation project as part of the research process. The direct application of concepts and understanding gained through this study and from prior research, as well as the study of related theories, helped to build a better understanding of the phenomena involved and allowed me to effectively guide the project team towards the goal of successfully transforming the global R&D organization and deriving an R&D location strategy, including a governance model, for this company. This thesis represents a unique opportunity to enhance my understanding of effective global R&D management, to quickly validate the insights and findings gained in the study through discussions with key stakeholders at SAP, and to implement governance models and processes.
1.4.2. Practical Purpose
The changing economic environment, with its stronger focus on more cost-effective global R&D activities, has triggered many MNCs to reassess not only their global R&D product portfolios but also their R&D locations and the allocation of projects among them. Through the application of rigorous scientific principles, this study addressed these challenges and improved the global R&D network of a corporation in the global software industry using practical feedback as part of the action design research methodology. The results of this study provided the researched company with a better understanding of the determinants of effective global R&D dispersion, with the strategies and governance model required to improve decision making when allocating new development projects, and with guidance on how to reorganize its existing R&D activity dispersion to improve its organizational resources and better support the strategic goals of the enterprise. A clear location strategy framework also provided all employees across its global R&D sites with greater transparency on how projects are globally dispersed and allocated and on what future directions specific R&D sites will take. From a local development center perspective, this model increases our understanding of how to leverage the strengths of a particular location while addressing its weaknesses to increase its overall attractiveness and ecosystem. Although the focus here is on global R&D network improvement in the software industry, it is believed that certain factors, if not major elements of this study, can be generalized and applied to other industries characterized by the global distribution of R&D work, such as the pharmaceutical or high-tech industries. R&D network improvements are of particular interest to MNCs, as they see the potential for significant improvements to be made by optimizing their R&D networks to accelerate the innovation process, increase the quantity of innovations, and reduce innovation costs (see also section 3.1.5) (Booz, Allen & Hamilton, 2006).
1.4.3. Academic Purpose
Multinational companies have been increasingly dispersing their R&D activities around the globe over the last two decades. Most of this dispersion has involved
a shift from a centralized R&D function to a decentralized "networked" setup (Gassmann & von Zedtwitz, 1999). The majority of academic research has focused on the increasing globalization of R&D activities, with individual R&D projects being used as the unit of analysis. However, longitudinal studies of the process of transformation towards an integrated R&D network setup as part of a global R&D organization remain scarce. While the spatial aspects of more operational activities such as supply chain management have been extensively researched, few studies have addressed R&D network transformations at the strategic management level. As pointed out previously, multinational companies conducting globally dispersed R&D have been subject to continuous external and internal dynamics. Most previous studies of this issue provide an ex post analysis of the phenomenon and do not put forward a conclusive model that allows companies to anticipate dynamics and to ensure the ongoing optimization of their R&D organizations in response to organizational and business environment changes. There is therefore a need to provide a systematic understanding of how MNCs can effectively transform and manage their global R&D organizations, and of the strategies and governance models they can employ to effectively coordinate and continuously improve their globally dispersed R&D functions. This cross-disciplinary study, which draws from the areas of strategic management, global R&D management, organizational theory and organizational change management, represents possibly the first systematic attempt to investigate this phenomenon and obtain a normative model of global R&D organization enhancement, thereby informing managerial practice. The action design research methodology (Sein et al., 2011) employed in this thesis represents an innovative combination of the established methodologies of action research and design research. The findings of this study are also expected to contribute to the ongoing development and progression of this new research methodology. This methodology and its application in the research are discussed further in Chapter 5.
1.5. Thesis Organization
This thesis employs an action design research methodology (Sein et al., 2011) to achieve the research objective of improving the globally dispersed R&D organization of SAP, an MNC operating in the global software industry. Chapter 1 thus provides an introduction to the globalization of R&D activities in the software industry and to R&D networks and their management. It also presents the problem statement of this study, its significance, the research objective, and the research questions. The following chapters construct a theoretical foundation for the study. The action design research methodology employed is prescription-driven and thus solution- rather than problem-focused. This study therefore requires the construction of an initial solution architecture that guides the action design research. To construct such a solution architecture, Chapter 2 reviews literature concerning global R&D network management in the software industry, whereas Chapter 3 reviews the theoretical underpinnings of this phenomenon, drawing mainly from the areas of organizational theory and strategic management. The findings of both chapters are then utilized to create the initial solution architecture in Chapter 4, which includes tentative theoretical assumptions that guide the empirical inquiry. Chapter 5 then provides a detailed account of the selected empirical research methodology, action design research, of the case study design, and of the methods of data acquisition and analysis. Chapter 5 also describes the criteria adopted to ensure research quality, validity and ethical conduct so that the results derived are both valid and ethically obtained. Chapter 6 outlines the single case study that serves as the empirical foundation of this thesis. The final chapter, Chapter 7, analyzes the results of the case study to achieve the research objective and answer the research questions laid out in Chapter 1. It also discusses contributions to academia and practice, the limitations of the research and directions for future research. Figure 2 exhibits the overall structure of this dissertation.
Figure 2: Thesis Organization
CHAPTER 2
LITERATURE ABOUT THE PHENOMENON
2.1. R&D in the Context of the Software Industry
This literature review provides an overview of and introduction to the phenomenon of distributed global software development (Chapter 2) and the theoretical underpinnings (Chapter 3) relevant to this research, thus enabling the construction of the initial solution architecture (Chapter 4) framing this thesis.
Literature Search Strategy
Various sources have been searched for relevant information to answer the research questions stated above. These sources included:
• The City University Online Library, with a focus on academic books and papers;
• The Online Library of the University of Mannheim, to access relevant German-language papers and books;
• General references from articles and books provided by Professor Kumar and associates;
• Google Scholar, to search for articles containing the keywords below and to verify cross-references and article citations.
Terms used for the search included "global distributed software development", "R&D management", "allocation of R&D", "internationalization of R&D", "work design", "organizational design", "network organizations", "network design", "organizational transformation and change", "strategic management", "organizational dynamics", "location", "location analysis" and related terms. Relevant references were downloaded and stored together with their citation details in the Sente 6.5 reference management software to facilitate the searching, categorization, annotation and citation of sources. Books and papers deemed relevant that were not available in a digital format were scanned and subjected to optical character recognition (OCR) to build a keyword index facilitating a full-text search of all papers in the reference database.
Structure of Literature Review
Given that the objective of this study is to formulate an improved global R&D network organization design, I looked for studies that would enable me to accomplish the research objective and its sub-objectives. Before a network improvement concept can be designed, a comprehensive understanding of R&D activities in the global software industry is required. The inquiry process exhibited in Figure 3 builds up this understanding in accordance with the research questions laid out in Chapter 1 in six steps: the definition of R&D activities, their division and integration, their spatial allocation, R&D networks, R&D network improvement and a strategic framework for ongoing improvement.
Figure 3: Overview of Research Questions and related Literature Review (own graphic)
The literature review first examines the characteristics and context of R&D activities undertaken in global software development and how they are guided by strategic management. The review of literature concerning this phenomenon provides definitions of R&D in the global software industry context, describes the historical development path and determinants of R&D internationalization,
sets out the unique characteristics of the software artifact, and yields principles and organizational forms of software development (sections 2.1 – 2.5 and section 3.2).
Second, considering the size of large-scale software products and the unique characteristics of software, the next step in the inquiry process is to investigate the division and integration of R&D activities. Drawing on work design and organizational theory, it builds an understanding of how large projects can be effectively divided and integrated given structural requirements and how organizational design can occur on a global level (sections 3.1.1 – 3.1.3).
Third, following the review of work design and organizational design, literature that describes how R&D activities are allocated across the global R&D network is examined. Here, the resource allocation process and the role of emerging strategies provide insights into the allocation modes adopted in global enterprises and the criteria utilized for R&D allocations (section 3.2.2).
Fourth, because the R&D network is the center of inquiry in this study, the next stream of literature reviewed examines the properties of generic networks, social science networks, and the characteristics of network organizations and global R&D networks (sections 3.1.4 and 3.1.5). Considering the dynamic environment in which software R&D occurs, literature examining internal and external dynamics is reviewed to understand how such dynamics may influence R&D networks (sections 3.3 – 3.4).
Fifth, studies examining the criteria used for R&D network efficiencies and R&D network improvements are reviewed to provide the theoretical foundations of actual global R&D network improvements (section 3.1.6). Literature on organizational learning and organizational transformation and change is then reviewed to provide the theoretical underpinnings of the subsequent improvement process and to ensure R&D network improvements are successful (sections 3.1.7 and 3.1.8).
Sixth, literature in the strategic management domain is reviewed to provide insights into how to encapsulate R&D network improvements in a strategic framework that ensures a continuous process of redesign can react to changes and adapt the R&D network to new realities (section 3.2).
The insights gained through this six-step literature review and inquiry process enable the initial solution architecture described in Chapter 4 to be created and to guide the empirical inquiry. The initial solution architecture evolves throughout the
empirical research process as understanding grows and initial assumptions are modified. Following the literature review process exhibited in Figure 3, this section and sections 2.2 and 2.3 review literature concerning research activities undertaken in the context of the global software industry. Before a global R&D organization can be improved, it is important to obtain a working definition of R&D in the context of the enterprise under study. Section 2.2 then reviews literature on R&D management and R&D internationalization to further illuminate the context and historical development of global R&D organizations. R&D activities in the global software industry relate to the conception, design and implementation of the software artifact. Section 2.3 therefore reviews literature that presents the unique characteristics of software artifacts.
Defining R&D
Framing this research with a clear definition of "R&D" in the context of the global software industry is problematic, as the term is often used ambiguously. The software R&D process "differs from other technology R&D as there is no tooling or manufacturing phase of product development; rather, when R&D is finished, the program is ready to copy, ship and use" (Barr & Tessler, 1996, p. 1). One of the most commonly adopted definitions of R&D was drafted by the Organization for Economic Co-operation and Development (OECD) in its Frascati Manual in 1963 to standardize R&D measurement and facilitate the comparison of R&D statistics:
Research and development (R&D) comprise creative work undertaken on a systematic basis in order to increase the stock of knowledge, including knowledge of man, culture and society, and the use of this stock of knowledge to devise new applications. (OECD, 2002, p. 30)
While initially focused on institutionally structured R&D in the natural and engineering sciences that produces tangible technological innovations in primary and secondary industries, the sixth revision of the Frascati Manual, released in 2002, also added definitions of R&D for newer industries, especially for the R&D of intangible technological innovations such as those seen in the software industry.
According to the OECD (2002, p. 46), "for a software development project to be classified as R&D, its completion must be dependent on scientific and/or technological advance, and the aim of the project must be the systematic resolution of scientific and/or technological uncertainty". The main focus is novelty and the accumulation of a stock of knowledge through, for example, the development of new technologies, programming methods or tools. In this context, however, the Frascati Manual explicitly excludes application software and information system development, as it is assumed to involve the use of existing methods and tools (OECD, 2002, p. 33):
Software-related activities of a routine nature which do not involve scientific and/or technological advances or resolution of technological uncertainties are not to be included in R&D. Examples are:
• Business application software and information system development using known methods and existing software tools.
• Support for existing systems.
• […]
• Adaptation of existing software
• Preparation of user documentation.
The OECD thus assumes in its Frascati Manual that business application development, the "D" in R&D, is work of a routine nature that lacks novelty or the advancement of knowledge and should therefore not be considered an R&D activity. The Frascati Manual further assumes that a clear organizational distinction can be made between innovative "R", prototyping "D" and execution-driven "M" for manufacturing, with clear handover points, a setup more commonly found in industrial R&D than in the global software industry. This also applies to SAP, where research is not exclusively conducted in its dedicated research organization, SAP Research, which accounts for less than 1% of the total workforce and a small fraction of SAP's overall patent applications. Most innovation activities occur within dedicated application development units in charge of developing applications or smaller sub-modules. SAP's most recent innovative business applications, such as "Business One" or the on-demand solution "BusinessByDesign" for small and medium-sized businesses (SMBs),
have been almost exclusively conceived and developed within the development organization rather than in SAP Research. Business application development at SAP cannot, therefore, be considered a routine activity, as various incremental or radical innovations are typically made in the development of software, especially considering the continuous technological changes that affect the development process. The OECD acknowledges both the difficulty of pinpointing the R&D component in software development and the incremental nature of R&D in the software development process:
The nature of software development is such as to make identifying its R&D component, if any, difficult. Software development is an integral part of many projects which in themselves have no element of R&D. The software development component of such projects, however, may be classified as R&D if it leads to an advance in the area of computer software. Such advances are generally incremental rather than revolutionary. Therefore, an upgrade, addition or change to an existing programme or system may be classified as R&D if it embodies scientific and/or technological advances that result in an increase in the stock of knowledge (OECD, 2002, p. 46).
The main requirement of the OECD's Frascati framework for regarding software development activities as R&D is that such activities must lead to an advance in the area of computer software. The assumption that business software application development lacks novelty and is of a repetitive nature is not applicable to the enterprise under study, given the high percentage of advanced development undertaken in the development organization rather than in the firm's dedicated research unit. Considering the remaining ambiguity in the OECD framework with regard to software development, and given that SAP also conducts R&D within its regular business units and not exclusively in a dedicated research unit, this dissertation adopts Matheson's broader definition of R&D, which better represents the existing situation at SAP (Matheson & Matheson, 1998). Matheson employs the term R&D in the broadest sense to mean "any technologically related activity that has the potential to renew or extend present business or generate new ones, including competency development, technological innovation, and product or process improvement" (Matheson & Matheson, 1998, p. 1). This dissertation
thus considers software development in the SAP context as R&D in the Matheson sense, due to the high degree of novelty and advanced development that occurs in its development organization.
R&D Phases
The OECD (2002, p. 30) differentiates between three distinct classes or phases of R&D in its Frascati Manual: basic research, applied research and experimental development:
Basic research is experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundation of phenomena and observable facts, without any particular application or use in view. Applied research is also original investigation undertaken in order to acquire new knowledge. It is, however, directed primarily towards a specific practical aim or objective. Experimental development is systematic work, drawing on existing knowledge gained from research and/or practical experience, which is directed to producing new materials, products or devices, to installing new processes, systems and services, or to improving substantially those already produced or installed. R&D covers both formal R&D in R&D units and informal or occasional R&D in other units.
However, Gerpott (2005) points out that the OECD approach, and similar attempts to classify the different forms of R&D, is of limited use for the selection and management of R&D activities, as management decisions are instead based on the duration of the intended R&D activity and the degree of certainty about the application of the R&D activity to specific products or processes in the enterprise. According to Gerpott, activities that target an application in the medium to long term and carry a higher degree of uncertainty qualify as research activities, whereas activities that target a short- to medium-term application and carry a lower degree of uncertainty qualify as development activities. Gerpott thus suggests defining company-specific rules for differentiating between phases in the R&D process rather than applying a stringent external framework. This thesis follows Gerpott's suggestion, given that SAP defines R&D according to the time to application: research activities typically have a time to application of about five years, while development activities have application time frames closer to one year or less.
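To make this decision rule concrete, the following minimal Python sketch encodes it; the five-year and one-year thresholds follow the SAP practice just described, while the function name and the two-level uncertainty scale are illustrative assumptions rather than part of Gerpott's framework.

```python
def classify_rnd_activity(years_to_application: float, uncertainty: str) -> str:
    """Company-specific classification rule in the spirit of Gerpott (2005).

    A long time to application combined with high uncertainty suggests
    research; a short time to application with low uncertainty suggests
    development.
    """
    if years_to_application >= 5 and uncertainty == "high":
        return "research"
    if years_to_application <= 1 and uncertainty == "low":
        return "development"
    return "unclassified: apply further company-specific criteria"

print(classify_rnd_activity(5, "high"))   # research
print(classify_rnd_activity(0.5, "low"))  # development
```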
2.2. The Organization of Research and Development
The effective management of globally dispersed R&D processes is a multi-layered problem in which R&D managers have to ensure the overall success of R&D projects despite often contradictory goals and developments in the various layers of the R&D organization. Figure 4 shows a distillation of this multi-layered problem into the four layers most relevant to the management of globally dispersed R&D processes. The successful generation of innovation depends to a considerable extent on the organizational structure in which R&D processes are performed (Großmann, 1994). Global R&D management must simultaneously consider and navigate regional and legal frameworks to achieve effective integration into local markets, and must take into account the internal organizational structures of hierarchies and project organizations to ensure the overall success of global R&D processes. Global R&D managers also need to understand and navigate the informal networks of researchers, a factor confirmed by recent research on social network analysis in R&D projects (Allen et al., 2007; Allen & Cohen, 1969; Amrit, 2008). These multiple layers thus create the boundaries within which global R&D management must define an effective organizational structure and formulate management practice.
Figure 4: Main problems in creating and managing transnational R&D processes (Boutellier et al., 2008e)
While no general prescriptive model exists for R&D organizations that provides a best-fit organizational structure or management framework, the study by Gassmann and von Zedtwitz (1999) of the R&D organizations of 33 MNCs reveals five common archetypes of R&D organizations and traces overarching trends in their evolution over time. Gassmann and von Zedtwitz show that in most cases, R&D internationalization starts with an ethnocentric centralized R&D function in which all R&D activities are concentrated in the company's home country. While this model typically offers great efficiencies and reduced coordination costs, internal isomorphism, "defined as the similarity in management systems that may exist between organizations which interact with each other" (De Meyer, 1993, p. 111), can lead to an insensitivity to signals from foreign markets and establish a not-invented-here syndrome (Katz & Allen, 1982). While the evolution of R&D organizations may follow various paths over time (see Figure 5), one potential path is to develop into a geocentric centralized R&D organization in which tightly coordinated listening posts are established to provide the central R&D unit with experience and information from foreign markets, thus allowing for the adjustment of products and processes to international environments (arrow "1" in Figure 5).
Figure 5: Five major trends drive the evolution of international R&D organizations (Gassmann & von Zedtwitz, 1999)
Another potential path is from an ethnocentric centralized R&D function towards an R&D hub model, often also described as the "hub and spoke" model. Here, the R&D organization wants to tap into foreign resource markets and conduct
decentralized R&D under the tight control and coordination of a central R&D center (arrow "2" in Figure 5), either to satisfy foreign market requirements or because foreign technological development has become too significant to ignore. While efficiencies in this model can remain significant, coordination costs increase in comparison with those incurred under the ethnocentric centralized R&D model. In their study, Gassmann and von Zedtwitz (1999) identify an overall trend towards an integrated R&D network characterized by authority for technology or component development being based on the individual capabilities of R&D units, which grow over time (arrow "3" in Figure 5). Despite the intended benefits, such as a free flow of information and the idea that R&D centers act as primus inter pares (Latin: the first among equals) based on their competence, Gassmann and von Zedtwitz note that this model requires considerable multidimensional coordination and control efforts to work effectively. Still, they argue that companies are increasingly adopting the integrated R&D network model, as it offers a balance between coordination costs and the costs of achieving local market efficiencies. For some companies, an integrated R&D network of equal R&D units does not mark the end of their R&D evolution process. Driven by cost pressures, these companies have recentralized parts of their integrated R&D networks to create a small number of leading research centers, thus offsetting some of the coordination challenges and achieving better control over the R&D organization through the improved exploitation of scale effects (arrow "5" in Figure 5). While Gassmann and von Zedtwitz identify the cost pressure that forces MNCs to adjust their global R&D organizations, they do not describe the organizational form to which this may lead. This thesis provides an organizational design prescription in Chapter 7 to address increased cost pressure. A special form of R&D organization is the polycentric decentralized R&D organization, which represents a decentralized federation of R&D organizations without a central supervising R&D center. Polycentric organizations either result from merger and acquisition activity or originate from the initiative of local subsidiaries in response to localization requests from local customers, and offer a high degree of local market orientation. According to Gassmann and von Zedtwitz (1999), the polycentric decentralized R&D organization is a "dying form" due to its high costs and a lack of cross-R&D unit synergies, and is not considered further in this study.
While Gassmann and von Zedtwitz's (1999) definition of R&D organizational macro archetypes offers empirical evidence of how and why companies adopt the five models, explanations of the micro structures of the R&D organization, especially in terms of the allocation of individual R&D projects to the macro organization, structural dynamics and the transformation processes of a given form, do not form part of their study. Here, Großmann (1994) provides insights by relating R&D projects to R&D organizational structure based on their position in the technology life cycle. He argues, for example, that new technology projects in which the company occupies a relatively low technological position should be conducted using an international network model, whereas projects late in the technology life cycle in which the firm has a strong technological standing should be conducted under a centralized approach (Großmann, 1994). In summary, after a phase of strong R&D internationalization over the last three decades, today's MNCs increasingly operate their R&D organizations in a network setup. An R&D network in the context of this dissertation should be understood as a global organization in which R&D projects are subdivided into subtasks, the subtasks are allocated to multiple R&D locations that work on them collaboratively, and the results are later integrated. Each R&D location thereby represents a node in the R&D network.
2.3. The Software Artifact
The software artifact is a critical resource that is deeply embedded in the fabric of modern-day economic value creation. The design, manufacturing and distribution of innovative products and services are no longer conceivable without software, making it an essential resource in the information age. Several German authors even argue that "the competitiveness of the German economy critically depends on the production of software-intense products and services with the highest quality" (Broy, Jarke, Nagl & Rombach, 2006, p. 210). Ensuring the supply of software is challenging for a number of reasons, as software is becoming increasingly complex, is subject to ever-higher quality standards and is constrained by the limited supply of skilled employees in developed countries (Balzert, 2009). Enterprises are increasingly securing a sufficient supply of software through the use of standard software packages that avoid
major development projects and reduce uncertainty in comparison with individually developed software, as well as through the increased use of software development outsourcing to countries like China and India. Software is defined as "Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system" (IEEE, 1990, p. 66) and possesses several unique characteristics. First, software is an intangible, non-material good that can be transferred and duplicated at very low cost. Second, software is not subject to wear and tear as other technical products are. Bugs and design flaws are not fixed with spare parts; they have to be corrected in the source code used by the customer. However, as software can be changed more easily and quickly than physical products, the challenge lies not in making the change but in the bug fixing itself and in the distribution of updates (software maintenance) (Balzert, 2009). Third, software ages: as requirements and technologies change over time, software systems are gradually rendered obsolete (Parnas, 1994), so that today's software systems become tomorrow's legacy systems. Software systems thus require continuous maintenance and improvement. Fourth, software is difficult to measure through specific metrics, as no direct relationship can be established between such metrics and the quality of software (Balzert, 2009). Fifth, software development is not subject to economies of scale (Jackson, 1998). The development of software is a collaborative effort (Whitehead et al., 2010), and highly skilled developers are required to conceive and program software, as it is a complex artifact that often comprises millions of lines of code (for example, Windows XP: 45 million; Windows Vista: 50 million; Mac OS X 10.4: 86 million; SAP (ABAP): 238 million; source: Sergio Ferrari, http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/12853). Due to the size and complexity of modern software systems, developers use software architectures, development methodologies and tools to secure the development of high-quality software that fulfills both functional (i.e. business functions) and non-functional (i.e. reliability, response time) customer requirements. These building blocks of modern software systems are reviewed in the next section.
2.4. Software Development
Software development is defined as "the process by which user needs are translated into a software product. The process involves translating user needs
into software requirements, transforming the software requirements into design, implementing the design in code, testing the code, and sometimes, installing and checking out the software for operational use" (IEEE, 1990, p. 67). In both academic and practitioner literature, however, various metaphors have been used to describe software development. Some have considered it a science (Gries, 1981), others an art (Knuth, 1997) or a process (Humphrey, 1989), or even a bazaar in the context of open source development (Raymond, 1999). In the development of business applications, the metaphor of software building is frequently used. The building metaphor implies the need for prerequisite tasks such as planning, resource coordination and project management, and the presence of an overall architecture to reduce risk and successfully complete the software building process (McConnell, 2004). Other metaphors describing the software development process have also been suggested (McConnell, 2004, pp. 13-19):
• Software penmanship: writing software code. This metaphor has been criticized, as it could imply that multiple unfinished drafts are thrown away during the process, making the method too expensive and ineffective for the development of large software projects;
• Software farming: growing a system. This metaphor could imply that only incremental adjustments occur throughout the development process, with a lack of control during the growth phase – software just grows without clear control or direction;
• Software construction: a building metaphor that implies various stages of planning, preparation and execution. Its use has been criticized, as large and complex structures require a disproportionately greater amount of such activities.
As powerful as such metaphors are in describing the large-scale development of business applications, Starr points out that "the use of a metaphor illuminates a similarity but doesn't imply equivalence" (Starr, 2003, p. 40). Rather than adopting various apparently useful metaphors, Starr suggests carefully examining such similarities and "[discarding] an attractive relationship if it ultimately does not contribute to the production of better software" (Starr, 2003, p. 40). One of the most important limitations of the building metaphor is that software building is not a repetitive task but rather a unique event, unlike the process of building a structure or a car, where either identical products are produced or
identical routines or skills are used. Therefore, analogies to other industries or manufacturing processes must be carefully examined for their applicability to software development and their contribution to the development of better software. Furthermore, the often-used construction metaphor conveys an impression of simplicity and clear determination that may lead to the assumptions that software consists of interchangeable building blocks, that their required number can be determined in advance, and that even moving them around represents a value-adding activity, which is simply not the case (Braithwaite, 2007). Software development can be characterized as an "intensive technology" (Thompson, 1967) whereby solutions to unstructured or weakly structured problems are developed. The selection and combination of activities and methods depends on the object to be constructed and is unknown a priori. Building on Thompson's conceptualization, Stabell and Fjeldstad's "value shop" provides a good representation of the software development process as an iterative system of value generation (Stabell & Fjeldstad, 1998). It is iterative because at the beginning of software development not all requirements or their technical implementation are clear; the degree of certainty increases throughout the development process. Simon (1973) sees the solution to such unstructured problems, or as he calls them "ill-structured problems", not in the development and application of formal methods such as sophisticated algorithms, but rather in the use of heuristics. Heuristics are problem-solving techniques based on experience. Rather than searching for an optimal solution, heuristic methods provide an approximation that is hopefully close to the best answer, obtained at higher speed or with fewer resources than devising and applying an exact algorithm would require. A well-known example is the "divide and conquer" heuristic of first decomposing a problem, working on its sub-problems, and recombining the sub-solutions into a final solution. Other examples of heuristic methods include analogy, generalization, induction and specialization (Pólya, 1971).
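As a concrete illustration of the divide-and-conquer approach just described, the following minimal Python sketch sorts a list by decomposing the problem, solving the sub-problems, and recombining the sub-solutions; the example is purely illustrative and not drawn from the case study.

```python
def merge_sort(items):
    """Divide and conquer: split the problem, solve the halves, recombine."""
    if len(items) <= 1:                  # trivially solved sub-problem
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])       # decompose and solve sub-problems
    right = merge_sort(items[mid:])
    return merge(left, right)            # recombine the sub-solutions

def merge(left, right):
    """Combine two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```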
Problem solving, especially in large-scale software development, the context of this thesis, requires ongoing collaboration and communication, the cost of which grows steeply with team size, as the number of potential communication channels among n developers is n(n−1)/2. Software development projects have increased considerably in scale and scope over recent decades, with some development projects exceeding 6,000 man-years (Balzert, 2009). The increasing size and complexity of software artifacts has led to considerable budget and time overruns in software development projects, or has even led to their premature termination. In a study of over 600 companies conducting software development projects, Boehm (1989, p. 1) found that 35% of participants had at least one software development project they considered a "runaway project", in which not only had budgeted development costs and/or time been exceeded, but the project was completely out of control. Software development is rather unique among engineering disciplines, as it is not restricted by physics and material properties but is only constrained by the degree of complexity and overall development costs (Young & Faulk, 2010). Therefore, the management of complexity and development costs is of paramount importance to safeguard new software development. In modern software development projects, increasing size and complexity are managed through the application of software engineering principles, the use of software architectures that partition a large software artifact under development into modules or packages that encapsulate functionality and allow for parallel execution, and software development methodologies that structure, plan and control the software development process. These elements of modern software development are reviewed in detail below to further guide the inquiry into the global management of R&D in the large-scale software development context.
2.4.1. Software Engineering
This section reviews the principles of software engineering for two reasons. First, the enterprise under study builds software, and understanding the process of software development and its underlying principles is important when evaluating organizational design alternatives. Second, principles of software engineering are applied in the organizational redesign exhibited in the case study in Chapter 6. Software engineering is a very young field. Its beginnings lie in the 1940s and 1950s, when most software development was conducted on an ad hoc basis by single developers or small teams in a setup similar to a workshop. Increasingly large software development projects, however, required more formal methods, which led to the rise of software engineering, as the methods used for small developments did not scale up to large-scale developments (Jalote, 2005). The principles and methods of software engineering originated from engineering, mathematics and adjacent disciplines. The term "software engineering" was first used at a NATO-sponsored conference in 1968 to raise awareness of the increasing importance of software and the difficulties surrounding software development, calling for action to improve the quality of such development (Bauer, 1969).
Software engineering is defined as: "(1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1)" (IEEE, 1990, p. 67). Contemporary software engineering is a differentiated discipline in which participants are required to master a diverse range of skills and knowledge to conceive, implement and maintain software. Key areas of knowledge in modern software engineering are requirements management, design and construction, and testing and maintenance, as briefly discussed below (Abran, 2005). Software is developed to contribute to solving real-world problems (Kotonya & Sommerville, 1998). Specifying and validating such real-world problems and the software required to solve them is defined as software requirements management (Abran, 2005). To achieve a complete specification of requirements, "all the functionality, interfaces and constraints have to be specified before the software development has commenced" (Jalote, 2005). This undertaking is especially difficult in the context of large standard software development projects, where software size, complexity and the diversity of customer requirements often impede a common understanding between customers and developers of what the software should accomplish and how it should behave when used. After the software requirements have been specified, the preliminary software design process can commence. This is defined as "the process of defining the architecture, components, interfaces, and other characteristics of a system or component" (IEEE, 1990, p. 56). The first step in software design is to design the software architecture, a work breakdown structure that describes the decomposition of the software product into components and specifies their interrelationships (Abran, 2005). The second step of software design is the detailed definition of components to prepare for their subsequent implementation in the software-coding phase. Software coding is the activity central to the production of software code in accordance with the software architecture and detailed design prescriptions. Software coding relies heavily on the use of software tools to conduct ongoing unit and integration testing of finished or partly finished components and to identify software defects and problems. "Software testing consists of the dynamic verification of the behavior of a program on a finite set of test cases, suitably selected from the usually infinite executions domain, against the expected behavior" (Abran, 2005).
While software testing has long been seen as an isolated activity undertaken after software coding has been completed (see the description of the waterfall model below), it has more recently been conducted concurrently with the development process. Preventive testing that discovers defects at an early stage is less costly than fixing defects at later stages, for example once they have already been delivered to customers (see section 2.4.6). As previously discussed, software ages, so regularly updated versions containing defect fixes or newly required functionality must be provided. Software maintenance thus ensures that software evolves and stays up to date after it has been delivered to customers. It is estimated that software maintenance accounts for around 60-80% of total lifecycle costs, more than initial development (Jalote, 2005), requiring software companies to recover these maintenance costs through paid maintenance contracts with their customers.
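To make the notion of verifying behavior on a finite, selected set of test cases concrete, the following minimal sketch uses Python's standard unittest framework; the function under test and its cases are hypothetical and not taken from the case study.

```python
import unittest

def discount_price(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountPriceTest(unittest.TestCase):
    # A finite set of test cases selected from the (practically infinite)
    # input domain, each checked against the expected behavior.
    def test_typical_discount(self):
        self.assertEqual(discount_price(200.0, 25), 150.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 120)

if __name__ == "__main__":
    unittest.main()
```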
2.4.2. Software Development Methodology
Large-scale software development requires effective organization to produce the software artifact. Both the organizational structure and the value-generating business processes need to be defined and managed to ensure software is developed with a high degree of efficacy. The main value-generating process of a software enterprise is the software development process. Software engineers have long studied and designed software development methodologies that provide an organizational framework for the software development process. The software development process defines the order in which activities are undertaken, lists required development tasks, defines semi-finished products and their completion criteria, sets out responsibilities, and identifies required skills and the tools, notations and standards to be applied to develop the software artifact (Balzert, 2008). The software development process is also described as the software development life cycle to stress its evolutionary nature. Software engineers use a variety of different development processes and continuously evolve such approaches based on experience gained from development projects. In this respect, software engineering represents a form of meta-engineering and differs considerably from other engineering disciplines, as Young and Faulk (2010, p. 440) point out: "Every field of engineering has explicit design processes that are often formalized, studied and taught to at least some degree. What is less common is treating the process of design itself as a thing to be designed. It is commonplace in software engineering not only to
contrast different approaches to software development, but also to reason about what makes one approach more suitable in some situations (say, when requirements are unclear because the application domain is relatively unprecedented) and another approach more suitable in others (say, when the software will control a safety-critical system). It is commonplace not just to adopt and follow a prescribed "best practice", but also to combine and alter features of different design processes in systematic ways". The selection of a development process has a considerable impact on overall software quality, the error rate and maintenance over the entire software life cycle (Balzert, 2008). Considering the typical maintenance period of ten years from when a version of the software is first delivered to the end of its maintenance period, decisions about the development process adopted have a considerable economic impact on the software enterprise. One of the earliest development process models is the so-called waterfall model, first described by Royce (1970), although he did not name it as such. In this sequential model, later phases depend on prior ones and are started only when a prior phase is finished. Initially designed without feedback loops, the model was later extended to include them, indicating that development can revert to prior phases in the event of changed requirements (see Figure 6).
Figure 6: Waterfall model with the software development phases (Royce, 1970)
The major critique of the waterfall model is that its assumption that clear specifications can be obtained prior to developing the software is unrealistic, as this presumes near-perfect knowledge of customer requirements at the start of the development process, whereas such requirements typically surface during later phases. Furthermore, the clear distinction between phases has been criticized because, in modern software development, phases tend to run concurrently, as previously pointed out in the case of software testing. Despite these criticisms, the waterfall model is often used in development projects with clear and fixed requirements or for software updates with a clearly defined additional development scope. As a model that evolved from the waterfall model, the IBM Rational Unified Process (RUP) (see Figure 7) describes the concurrent nature of software development, in which the various disciplines of software engineering, with their diverse skills and activities, interact throughout the phases of software development with varying intensity. The RUP clearly acknowledges that development activities run concurrently through the phases of software development with varying workloads and cannot be clearly encapsulated as suggested by the waterfall model.
Figure 7: The Rational Unified Process (RUP) (Rational Software, 1998, p. 3)
The spiral development model proposed by Boehm (1988) captures the iterative nature of software development better than the sequential waterfall model. It combines methods of rapid prototyping, in which functionally incomplete prototypes are developed and presented to obtain user feedback for further development.
Figure 8: Spiral development model (Boehm, 1988)
Various other software development process models have been designed and used in attempts to deliver better software. Examples include reusable software development processes that create software components and store them in a repository for later reuse, throwaway prototypes that are used to validate the various phases of the software development process and are then "thrown away" and replaced by a new, different prototype, and other forms of prototypes such as evolutionary or operational ones (Bersoff, 1984; Davis & Bersoff, 1991). Agile methods have more recently been adopted in an attempt to address the shortcomings of traditional software processes, which often led to the software development delays, cost overruns or failures previously described. The reasons for such failures have been manifold, including "the lack of customer involvement, poor requirements, unrealistic schedule, a lack of change management, lack of testing and inflexible and bloated [development] processes" (http://www.objectmentor.com/omSolutions/agile_why.html). Agile development methodologies (Beck et al., 2001) are designed to address these shortcomings and "aim to accept customer changes any time; have short daily communications between customers and developers, develop and deliver small increments instead of one release at the end of the development; and value working software over documentation though only projects with 10 or fewer people were considered to be appropriate" (Huen, 2007, p. 17).
SOFTWARE DEVELOPMENT Agile development assumes that the initial development goal is fuzzy at the beginning and that uncertainty reduces step-by-step in iterations throughout the project (see Figure 9). Every iteration can be seen as a learning loop at the end of which results are analyzed and compared with the initial goal for the current iteration. In each iteration all development activities such as requirements analysis, software design, coding and testing occur concurrently (Oestereich & Weiss, 2008). At the end of each iteration an incremental, but workable piece of software has been developed. Milestones at the end of each iteration help align intermediate results in larger development projects to identify and address inconsistencies and software bugs. The length of iterations varies, ranging from weeks to months. In the implementation phase, such iterations can occur on a daily basis, with a new software build packaged and compiled for automatic testing purposes at the end of each day - a practice Microsoft used in designing the Windows NT operating system (Microsoft, 1999).
Figure 9: Iteration cloud metaphor of agile project management (Oestereich & Weiss, 2008, p. 3)
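The following minimal Python sketch illustrates what such a daily build-and-test job might look like; the make targets and the script as a whole are illustrative assumptions and not a description of Microsoft's or SAP's actual toolchain.

```python
import datetime
import subprocess

def nightly_build() -> None:
    """Package a new build and run the automated test suite once per day."""
    stamp = datetime.date.today().isoformat()
    # Hypothetical build and test targets; real projects would invoke their
    # own build system or continuous integration server here.
    subprocess.run(["make", "build"], check=True)
    subprocess.run(["make", "test"], check=True)
    print(f"Daily build {stamp}: packaged and passed automated tests")

if __name__ == "__main__":
    nightly_build()  # typically triggered by a scheduler such as cron
```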
More recently, agile development methods have been combined with elements of the lean philosophy to form the lean software development approach. The term "lean" was first popularized by the study of Womack et al. (1992) on Japanese manufacturing practices such as the Toyota production system. Lean software development employs the following principles adopted from the lean philosophy (Poppendieck & Poppendieck, 2006):
• Eliminate waste: reduction of extra software features, requirements churn and organizational boundaries;
• Build quality in: specification instead of requirements, code developed for automated testing, continuous integration and nested synchronization;
• Create knowledge: use of the scientific method (hypothesis, experimentation and selection of the best alternative), challenging and changing existing standards, developing the organizational capability to respond rapidly to changes;
• Defer commitment: starting development with incomplete specifications, change-tolerant code, use of flexible software architectures, deferral of irreversible decisions until the last responsible moment;
• Deliver fast: reduction of cycle times with shorter iterations – fast and high-quality development are not trade-offs;
• Respect people: engaged, thinking people provide the most sustainable competitive advantage in software development;
• Optimize the whole: focus on the entire value stream of software development, measurement of the unified process rather than its parts, delivery of a complete product, not just software.
The overall aim of lean software development is to accelerate development and improve software quality through the reduction of activities that do not add value and through shorter cycle times.
2.4.3. Principles of Software Engineering
Software engineering principles are the foundations that govern the actions of developers in constructing the software artifact; they are based on the very principles human beings use to solve complex problems (Constantine, 1995). The following eight key principles of software engineering are widely applied in the specification, design, implementation and evaluation of modern software (Balzert, 2009) and exhibit the interdependencies laid out in Figure 10. These principles partly represent conflicting goals; for example, modularization and locality may conflict with one another and constrain the overall efficiency of the software system. Therefore, goals need to be prioritized before the start of the software development process. In the context of this thesis, the
principles of abstraction, cohesion, coupling and modularization are reviewed in more detail, as they contribute to a better understanding of the structural requirements for the design of software development work (see section 3.1.1). Abstraction represents a sophisticated task that isolates the main features of a concrete entity so that development can focus on relevant characteristics. Two common abstraction mechanisms are used for software systems: functional abstraction and data abstraction (Jalote, 2005). Functional abstraction partitions the overall function of a system into smaller functions that together represent the overall system function; decomposition then occurs in functional modules. Using data abstraction, a system is considered as a set of objects that provide services; system decomposition with data abstraction thus occurs with respect to the objects in the system. The choice of abstraction mechanism has a direct relationship with the characteristics of the software design process and of the software artifact, such as its changeability (Parnas, 1972). Abstraction is thus a foundation for the principle of modularization.
Figure 10: Dependencies between software engineering principles (Balzert, 2009a, p. 49)
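To make the two abstraction mechanisms concrete, the following minimal sketch decomposes the same small task once by function and once by data; the payroll task and all names are hypothetical, chosen purely for illustration.

```python
# Minimal sketch (hypothetical task and names): the same payroll
# calculation decomposed by functional abstraction and by data
# abstraction.

# Functional abstraction: the overall function is partitioned into
# smaller functions that together represent the system function.
def gross_pay(hours: float, rate: float) -> float:
    return hours * rate

def tax(amount: float, tax_rate: float = 0.2) -> float:
    return amount * tax_rate

def net_pay(hours: float, rate: float) -> float:
    gross = gross_pay(hours, rate)
    return gross - tax(gross)

# Data abstraction: the system is a set of objects providing services;
# decomposition follows the objects, whose internal data stays hidden.
class Employee:
    def __init__(self, hours: float, rate: float) -> None:
        self._hours = hours  # internal representation is not exposed
        self._rate = rate

    def net_pay(self, tax_rate: float = 0.2) -> float:
        gross = self._hours * self._rate
        return gross - gross * tax_rate

print(net_pay(160, 50.0))             # functional decomposition
print(Employee(160, 50.0).net_pay())  # object-based decomposition
```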
Coupling refers to “how strongly different modules of a software system are interconnected. […] Highly coupled modules are joined by strong interconnections, while loosely coupled modules have weak interconnections” (Jalote, 2005b, p. 255). Loose coupling between the modules of a software system is a desired property in software development. It facilitates changes in and the
maintenance of individual modules, as they can be worked upon in isolation from other modules. To achieve loose coupling, the number of interfaces between modules and the complexity of each interface need to be kept low. Cohesion refers to “how tightly bound the internal elements of the module are to one another. Cohesion of a module gives the designer an idea about whether the different elements of a module belong together in the same module” (Jalote, 2005a, p. 257). Coupling and cohesion are related to each other: a software system is considered to have a simple structure if coupling is minimized and cohesion is maximized (see Figure 11). Simple structures mean “low complexity, good comprehension, quick learning and easy changeability” (Balzert, 2009a, p. 38). Modularization refers to “a special form of design that intentionally creates a high degree of independence or loose coupling between component designs by standardizing component interface specifications” (Garud, Kumaraswamy & Langlois, 2003, p. 364), and to a high degree of cohesion in which elements are tightly bound within modules. Modularization is a major principle of modern software engineering, and is “clearly a desirable property in a system [as it] helps in system debugging—isolating the system problem to a component is easier if the system is modular; in system repair—changing a part of the system is easy as it affects few other parts; and in system building—a modular system can be easily built by ‘putting its modules together’” (Jalote, 2005a, p. 253).
Figure 11: Cohesion and coupling in modular design (own graphic)
Effective modularization cannot be achieved through arbitrary work partitioning and abstraction, but “each module needs to support a well-defined abstraction and have a clear interface through which it can interact with other modules” (Jalote, 2005), a task that requires substantial experience and is typically performed by senior software developers and software architects.
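A minimal sketch of such a well-defined interface is shown below; the module names are hypothetical, and the point is only that the two modules interact solely through a small, explicit interface (loose coupling), while each keeps its closely related elements together (high cohesion).

```python
# Minimal sketch (hypothetical module names): two modules interact only
# through a narrow, explicit interface.
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The sole, well-defined interface between the two modules."""
    @abstractmethod
    def charge(self, amount: float) -> bool: ...

class CreditCardGateway(PaymentGateway):
    """Cohesive: everything related to card processing lives here."""
    def charge(self, amount: float) -> bool:
        return amount > 0  # card-specific processing would go here

class OrderService:
    """Coupled only to the PaymentGateway interface, not to any concrete
    implementation, so either module can be changed in isolation."""
    def __init__(self, gateway: PaymentGateway) -> None:
        self._gateway = gateway

    def place_order(self, total: float) -> str:
        return "confirmed" if self._gateway.charge(total) else "rejected"

print(OrderService(CreditCardGateway()).place_order(99.90))
```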
2.4.4.
Software Architectures
Software architecture is the high-level element of software design, the framework that holds together the more detailed parts of the design process, which are founded on the principles of software engineering reviewed above. The term originated in the early 1970s and is attributed to Fred Brooks, well known for his work on the IBM mainframe, who asked a colleague whether “architects” was a suitable metaphor for what they did in software (Weinberg, 1971). Software architecture refers to “the fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution” (The Institute of Electrical and Electronics Engineers, Inc., 2000, p. 3). The quality of software architecture determines the conceptual integrity of the system as a whole, and ensures the scalability, extendibility, robustness and performance of the software product (Brooks, 1995). Software architecture is a precondition to the division of labor, as it provides for a high-level partitioning of work so that multiple developers or development teams can work independently on a software system (McConnell, 2004). The metaphor “architecture” has been used widely in software engineering as “important decisions may be made early in system development in a manner similar to the early decision-making found in the development of civil architecture projects” (The Institute of Electrical and Electronics Engineers, Inc., 2000, p. 2). Due to the unstructured problem-solving nature of software development, it is not possible to develop an optimal software architecture. Software architectures are thus only an approximation that should be “good enough”, logically consistent and address the above-mentioned purposes (Lang, 2004). As Conway pointed out: “It is an article of faith among experienced system designers that given any system design, someone someday will find a better one to do the same job. In other words, it is misleading and incorrect to speak of the7 design for a specific job, unless this is understood in the context of space, time, knowledge, and technology. The humility which this belief should impose on system designers is the only appropriate posture for those who read history or consult their memories” (Conway, 1968). Software architectures can be described through four structures (Balzert, 2009). The logical structure of the software architecture is based on functional requirements (see Figure 12), whereas the process structure, the implementation structure and the physical structure are based on both functional and non-functional requirements.
7 Underlined by the author of this thesis.
Figure 12: High-level architecture overview of SAP ERP (R, 2008)
Software architectures are critical to the development of modern, large-scale software systems, as they allow for the decomposition of complex problems. Software architectures also facilitate the coordination and control of software development through modularization: if managerial responsibility is organized by module, coordination can focus on interfaces. In complex development projects, they also facilitate communication between stakeholder and development groups, as they create common ground on which to clarify requirements and the expected behavior of the final software product. Software architectures also support reusability, both of the architecture itself and of its internal components, for future developments. Given the rapid evolution of customer requirements and technologies, software architectures are critical to the realization of non-functional requirements such as overall changeability, ease of maintenance and ease of functional enhancements. Reusability and changeability are especially important to the development of large-scale software systems, as they affect overall development costs and ensure end users pay a manageable total cost of ownership (TCO) over the whole life cycle of the software system.
Outlining the importance of software architectures, Bass et al. (2003, p. xii) state: “A software architecture is the development product that gives the highest return on investment with respect to quality, schedule, and cost. This is because an architecture appears early in a product’s lifetime. Getting it right sets the stage for everything to come in the system’s life: development, integration, testing, and modification. Getting it wrong means that the fabric of the system is wrong, and it cannot be fixed by weaving in a few new threads or pulling out a few existing ones, which often causes the entire fabric to unravel”. Despite this wealth of advantages, software architectures also bring several disadvantages that need to be considered. First, development using modular software architectures is less efficient overall than using an integrated development approach. Second, designing good software architectures requires very senior and experienced developers with a comprehensive understanding of the intentions of stakeholders, the business context, technology and the potential changes in the corporate environment that software must accommodate both today and in the future. Such experience also facilitates the resolution of conflicts between different software engineering principles and non-functional requirements. Third, modular software architectures are prone to single points of failure where they use a “broker approach” in which modules broker information flows on behalf of other modules; the failure of a broker module can result in the failure of the entire software system. 2.4.5.
Organizational Considerations for Software Development
Various approaches have been suggested to improve the speed, quality or predictability of software projects through improvements in: • Development methodologies: LEAN (Poppendieck & Poppendieck, 2006), AGILE (Beck et al., 2001; Fogelström, Gorschek, Svahnberg & Olsson, 2009) or Software Factories (Greenfield & Short, 2004). • Software architectures: LEAN architectures (Coplien & Bjørnvig, 2010). • Development tools: Model-based development, collaboration tools, and integrated development environments (Balzert, 2009). • Organizational design: Heuristic decision model for the software industry (Lang, 2004).
While these are all valid approaches, this thesis focuses on organizational design improvements to software development, especially in the context of a globally distributed development organization. Successful organizational design in the software industry depends on competitive strategies, and thus on strategic management and the specifics of software architectures (Lang, 2004). Research into the correlations between organizational design and product architectures has a long history in software engineering. Conway (1968) points out that the developed software artifact is a representation of the organization that develops it, a phenomenon that has later been described as “Conway’s law”: “[…] We have demonstrated that there is a very close relationship between the structure of a system and the structure of the organization which designed it. In the not unusual case where each subsystem had its own separate design group, we find that the structures (i.e., the linear graphs) of the design group and the system are identical. In the case where some group designed more than one subsystem we find that the structure of the design organization is a collapsed version of the structure of the system, with the subsystems having the same design group collapsing into one node representing that group”. In his conclusion, Conway (1968) suggests utilizing communication needs as the main design criterion for organizations engaged in designing an artifact: “[…] We have found a criterion for the structuring of design organizations: a design effort should be organized according to the need for communication. This criterion creates problems because the need to communicate at any time depends on the system concept in effect at that time. Because the design which occurs first is almost never the best possible, the prevailing system concept may need to change. Therefore, flexibility of organization is important to effective design. Ways must be found to reward design managers for keeping their organizations lean and flexible. There is need for a philosophy of system design management which is not based on the assumption that adding manpower simply adds to productivity”. Furthermore, Conway describes the design of the artifact not only as iterative, but also as a mutual adjustment process between organizational design and product design, so that unison can be achieved with an optimized communication pattern within and between the two systems. Bass et al. (2003, p. 29)
also recognize this interplay between software and organizational architectures: “Not only does architecture prescribe the structure of the system being developed, but that structure becomes engraved in the structure of the development project and sometimes the structure of the entire organization”. Weick (1976, pp. 6-8) sees several advantages in transferring this concept of loose coupling from software development to organizational design:

Persistence: Loose coupling allows some portions of an organization to persist. It lowers the probability that the organization will have to, or be able to, respond to each little change in the environment that occurs;

Sensing mechanism: Loosely coupled systems may provide a better sensing mechanism, as they preserve many independent sensing elements and therefore “know” their environments better than do more tightly coupled systems, which have fewer externally constrained, independent elements;

Localized adaptation: If all of the elements in a large system are loosely coupled to one another, then any one element can adjust to and modify a local unique contingency without affecting the whole system. These local adaptations can be swift, relatively economical, and substantial;

Diversity: In loosely coupled systems, where the identity, uniqueness and separateness of elements are preserved, the system can potentially retain a greater number of mutations and novel solutions than would be the case with a tightly coupled system. A loosely coupled system could preserve more “cultural insurance” to be drawn upon in times of radical change than is the case for more tightly coupled systems;

Fail safe: If there is a breakdown in one portion of a loosely coupled system, then this breakdown is sealed off and does not affect other portions of the organization;8

Self-determination: If it is argued that a sense of efficacy is crucial for human beings, then a sense of efficacy might be greater in a loosely coupled system with autonomous units than it would be in a tightly coupled system where discretion is limited;

Inexpensive: A loosely coupled system should be relatively inexpensive to run, because it takes time and money to coordinate people.

8 This suggested advantage of a loosely coupled organization may be debatable, as loose coupling can lead to single points of failure, as previously pointed out in section 2.4.4.
Considering the dynamic nature and environment of software development, these are all highly desirable characteristics that speak for the adoption of a modular organizational design in global software development. As the definition of interfaces in a modular software design takes place in either the architecture or the component design, at the beginning of the design phase, the development organization could be designed accordingly before the bulk of the development effort begins. If a horizontal division of labor occurs according to the implementation structure of the software artifact, it could be assumed that the interfaces, and therefore the coordination efforts, between organizational units are low (Sanchez & Mahoney, 1996). Most coordination efforts would focus on interface specifications between modules and subsystems. Despite the advantages of a modular organizational design that matches the product architecture of the software artifact, Bass et al. (2003) identify several limitations that must be considered when this software engineering principle is applied to organizational design. First, there are several cases in which the horizontal division of labor is not necessarily derived from the implementation approach, as in (i) a customer-oriented organization or (ii) a function-oriented organization. In a customer-oriented organization, customer requests may trigger architectural changes, which lead to a customer-oriented design rather than one oriented on software architecture. Several software enterprises such as Microsoft (Cusumano, 1995) use a function-oriented organization derived from the logical view of the software architecture as the principle on which work is organized; however, the definition may be somewhat blurred between a logical approach and the implementation view, especially in large projects where differences exist between logical and implementation structures, such as in function sharing. Second, when considering a match between software architecture and organization, it must be remembered that the longevity of software architectures might be limited, subject to frequent changes due to technological advances or changing customer requirements, whereas the organizational structure is usually expected to be long-term, necessitating increased coordination and work efforts over time as architecture and organization diverge (Lang, 2004). While the first limitation can be attributed to the choice of a particular organizational design, the second limitation can be rectified through continuous monitoring and adjustment. Here Sosa et al. (2004) provide methodologies to identify misalignments between product architecture and organizational design.
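To illustrate the kind of analysis such methodologies enable, the following simplified sketch compares a product design structure matrix with an observed pattern of team communication to flag interfaces that lack a corresponding communication link. It illustrates the general idea only, not Sosa et al.'s actual procedure; all modules, teams and matrix entries are hypothetical.

```python
# Illustrative sketch only (not Sosa et al.'s actual procedure): flag
# module interfaces whose owning teams do not communicate.
owner = {"UI": "Team A", "Logic": "Team B", "Persistence": "Team C"}

# Product design structure matrix: pairs of modules with an interface.
design_interfaces = [("UI", "Logic"), ("UI", "Persistence"),
                     ("Logic", "Persistence")]

# Observed inter-team communication (hypothetical): only A and B talk.
team_communication = {frozenset(["Team A", "Team B"])}

for m1, m2 in design_interfaces:
    t1, t2 = owner[m1], owner[m2]
    if t1 != t2 and frozenset([t1, t2]) not in team_communication:
        print(f"Potential misalignment: {m1}-{m2} interface, "
              f"but {t1} and {t2} do not communicate")
```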
It must, however, be considered that a continuous adjustment of an organization may cause significant adjustment costs in terms of relearning, or severance costs for employees who choose not to be part of a frequently changed system. Especially in the context of large-scale software development with an established knowledge base, such high costs of disengagement and relearning may prohibit a flexible development organization featuring constant reengagement. In sum, it can be seen that “underlying all major software analysis and design methods is a very small set of basic principles. […] It’s all based on how human beings solve complex problems” (Constantine, 1995). While not all of these guiding principles can be transferred individually from the immaterial world of software to the material world of organizations due to the above-mentioned limitations, they provide an initial framework for this study of global R&D network improvement. 2.4.6.
Economic Considerations in Software Development
Software companies follow the same economic principles as companies in other industries: the creation of high-quality, innovative products with a high degree of efficiency to ensure the long-term growth and survival of the organization. This is inherently challenging, not only due to the complex nature of software development, characterized by the high skill requirements of development work and the previously discussed difficulties of managing software projects, but also because of the ongoing global environmental and technological changes to which software development is subject. Managing a successful software business is a challenging task, as documented by the large number of software projects abandoned or stricken with considerable cost overruns (Charette, 2005) or, as in the case of Baan, by the financial collapse of the software enterprise (Baker, Spiro & Hamm, 2000). Key factors in the successful management of a software business include, among others, the quality of the software artifact and the productivity of the development organization.

Software Quality

Globally interconnected business is constantly conducted online, in many cases 24 hours a day, seven days a week. Software errors that trigger system or process downtime are immediately visible and cause economic damage, decrease brand value, and in critical cases can endanger lives. In addition, the interdependence of connected computer systems in the modern world amplifies the effects of individual software errors, which in turn affect other systems
in addition to the one running the erroneous software. Particularly in industries with real-time systems managing high transaction volumes, such as the financial sector or the airline industry with its high number of passengers, firms critically depend on high-quality software to run their core business processes around the clock. They practically require error-free software, as errors in a single line of code can lead to severe failures, as shown by the AT&T network outage in 1991 and the explosion of an Ariane 5 rocket in 1996 (Charette, 2005). While in the early days of software development an error rate of 7-10 errors per 1000 lines of code (LoC) was quite common, error rates have been reduced tremendously through improvements in software development methodologies and quality assurance such as automated testing and error tracing. Balzert (2009) estimates that current error rates in software sit at 0.2-0.05 defects per 1000 LoC, a reduction of roughly two orders of magnitude. However, compared to defect targets in discrete and process manufacturing, such as the 3.4 defect parts per million set by total quality strategies like the Six Sigma method pioneered by Motorola, software quality still lags behind by more than an order of magnitude. The currently high software defect rate, however, needs to be put into perspective, as not every such code error manifests as a failure that leads to serious consequences such as accidents or outages. Most software errors remain dormant in the system or affect only a limited number of users. Still, software quality must be further improved to reduce the costs of system outages, maintenance costs and the costs of lost business, which Charette (2005) estimates at billions of USD each year. The later the stage at which a software error is discovered and fixed, the higher the costs associated with the fix. It is thus imperative to fix software errors as early as possible in the development process, as costs rise exponentially over time: fixing an error in the maintenance phase after shipment is a hundred times more expensive than fixing it in the initial requirements gathering phase (Balzert, 2009).
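The defect figures cited above can be made comparable with a small back-of-the-envelope calculation converting the quoted defect densities into defects per million lines of code (illustrative arithmetic only, using the rates given in the text):

```python
# Convert the cited defect densities (per 1000 LoC) into defects per
# million LoC and compare them with the Six Sigma target.
early = (7, 10)          # errors per 1000 LoC in early software development
current = (0.05, 0.2)    # Balzert's (2009) estimate, per 1000 LoC
SIX_SIGMA = 3.4          # Six Sigma target: defects per million

for label, (low, high) in (("early", early), ("current", current)):
    print(f"{label}: {low * 1000:,.0f} to {high * 1000:,.0f} "
          f"defects per million LoC")

# Even the best current rate (0.05 per 1000 LoC = 50 per million)
# exceeds the 3.4 Six Sigma target by more than an order of magnitude.
print(f"gap to Six Sigma: {0.05 * 1000 / SIX_SIGMA:.0f}x")
```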
Software quality is an ambiguous concept, and can only be measured indirectly through the creation of models that define quality factors. These quality factors are measured individually before being combined to provide an overall assessment of the quality of a software product (see the sketch below). The ISO-9126 standard was established on the basis of a software quality management study by McCall et al. (1977) that provides a model and quality factors to assess the quality of software products (see Figure 13). ISO-9126 has recently been updated, revised and incorporated into the new ISO standard on software quality measurement, ISO 25000:2005. The new standard addresses the shortcomings of its predecessor by providing a comprehensive quality framework for software through a holistic approach that assesses not only the internal quality of the software, but also its quality in use (Al-Qutaish, 2009). Software quality management standards are a necessary but not sufficient condition for ensuring quality, as they also need to be enforced. However, enforcement is often difficult due to the volume of standards and the subjective nature of judgments on conformance to standards (Card & Glass, 1990).
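The following minimal sketch illustrates this two-step logic of scoring quality factors individually and combining them into an overall assessment. The factor names follow the ISO-9126 quality characteristics; the individual scores and weights are hypothetical and would in practice be derived from measured sub-characteristics.

```python
# Illustrative sketch only: quality factors are scored individually
# (0-1) and then combined into a single weighted assessment.
scores = {
    "functionality": 0.90, "reliability": 0.80, "usability": 0.70,
    "efficiency": 0.85, "maintainability": 0.60, "portability": 0.75,
}
weights = {  # hypothetical weights, summing to 1.0
    "functionality": 0.25, "reliability": 0.25, "usability": 0.10,
    "efficiency": 0.15, "maintainability": 0.15, "portability": 0.10,
}
overall = sum(scores[f] * weights[f] for f in scores)
print(f"overall quality assessment: {overall:.2f}")
```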
Figure 13: Quality model for external and internal quality (Al-Qutaish, 2009)
While the quality criteria exhibited in Figure 13 are important to the development of high-quality software, quality from the customer perspective also includes a low total cost of ownership (TCO). In addition to software license fees, TCO also includes the cost of installation services, operational expenses and maintenance fees, which in the case of business software represent an annual payment of approximately 20% of the initial purchase price.

Productivity in Software Development

Customers of software products also face the challenges of a globally interconnected world with ever-shorter product cycles and competitive pressure. To address these challenges, they seek to increase overall productivity, cost efficiency and innovative capabilities through the implementation of software products. Business software customers are typically conservative when deploying software in their enterprise, as they want to implement proven and stable software products to run business-critical processes such as financial and supply chain management. At the same time, however, they also look for new “cutting edge” functionality that provides them with new processes or capabilities that may help them differentiate themselves from their competitors.
Achieving a higher level of cost efficiency, by either producing the same volume of goods at lower cost or increasing production at the same cost, is one of the basic business management challenges posed by a market economy. The globalization of software development has further increased the pressure for cost-effective development, with developing countries such as India and China offering highly skilled labor at a fraction of the cost of developers in developed economies such as the countries of the EC or the USA. With the labor costs of highly educated software developers being the largest cost driver of software development, labor cost reductions have clearly been the focus of software enterprises in improving overall cost efficiency and productivity. However, rather than leading to the release of employees or a reduction in software development costs, labor arbitrage has led to more complex software through additional innovations or functional improvements due to the strong demand for new products and functionalities (Balzert, 2009).

Productivity in its most basic form is measured as the ratio between output factors and input factors. Measuring productivity in software development is inherently difficult: while the input factors of software development can be measured precisely, output factors are difficult to quantify. Empirical research has long sought to determine which specific attributes of software development, such as developer skills, tools, techniques or a combination thereof, yield significant improvements in large-scale software development productivity. Boehm’s often-cited COCOMO (constructive cost model) identifies several development cost drivers, grouped into the categories of product, computer, personnel and project attributes, that negatively impact development productivity (Boehm, 1981, pp. 347-473). The product attribute cost drivers identified are software reliability, database size and product complexity, while personnel cost drivers are mainly attributed to the capabilities and experience of analysts and programmers. Overall, Boehm finds that “team capability and product complexity had the greatest effect in driving software costs” (Scacchi, 1995).
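The basic form of Boehm's estimation relationship can be sketched as follows; the mode constants are those commonly published for basic COCOMO, and the effort adjustment factor stands in for the cost-driver multipliers of the intermediate model (the example inputs are hypothetical):

```python
# Sketch of the basic COCOMO effort equation (constants as commonly
# published; the 100 KLOC example and the eaf value are hypothetical).
MODES = {                      # effort = a * KLOC**b  (person-months)
    "organic": (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (2.8, 1.20),
}

def cocomo_effort(kloc: float, mode: str = "organic",
                  eaf: float = 1.0) -> float:
    """Estimate effort in person-months; eaf stands in for the product
    of cost-driver multipliers used in the intermediate model."""
    a, b = MODES[mode]
    return a * kloc ** b * eaf

# Unfavorable cost drivers (e.g. high product complexity, low team
# capability) scale the estimate through the adjustment factor:
print(f"{cocomo_effort(100, 'embedded', eaf=1.3):.0f} person-months")
```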
In a comprehensive study of 44 programming projects undertaken in 17 International Telephone & Telegraph (ITT) subsidiaries in nine different countries, Vosburgh et al. (1984) analyze nearly 100 environmental variables to determine their correlation with development productivity. The authors group these environmental factors into two categories: product-related factors and project-related factors. Product-related factors are driven by user requirements and are not under the control of project management, whereas project-related factors are driven by the tools and methodologies selected for the project, project personnel and project management (Vosburgh et al., 1984, p. 145). To improve overall software development productivity, Vosburgh et al. (1984, pp. 151-152) conclude that: “[…] substantial programming productivity improvements can be achieved by applying technologies and management practices that affect the factors identified in this study. […] To be successful, a productivity improvement program must address the entire spectrum of productivity issues. Key features of such a program are management commitment and an integrated approach [as] there is no panacea for the programming productivity problem. No single technology can guarantee large productivity gains in all cases”.

While the research of Vosburgh et al. provides a solid understanding of the impact of input factors on productivity, research on output factors remains unsatisfactory to the present day. Two major measures have been suggested for output-related productivity: lines of code (LoC) and function points (Albrecht, 1979). Software is a product comprising compiled source code lines, and since the early days of software development, the number of lines of code has been used to measure the output of software developers. While LoC is a useful measure providing a highly accurate gauge of the size of a software system, it is not a sound indicator of development productivity. Differences in languages and formatting conventions lead to a high degree of variation in the number of lines of code. In addition, the same problem can be solved by a compact program or a lengthy program with many LoC. Previous research has also shown that individual productivity among developers in a development team typically varies by orders of magnitude (Sackman, Erikson & Grant, 1968). Because well-designed code is typically very compact, the LoC measure of development productivity punishes developers who produce effective, high-quality source code. This paradox reflects the study of Jones, who showed at an early stage that when machine language is replaced by high-level languages, the number of LoC produced falls, but development costs are reduced (Jones, 1978). Another set of suggested output criteria is the function point method (Albrecht, 1979), based on customer-required functions and program attributes (Balzert, 2009a, p. 691), where “a function point is a composite measure of a number of
program attributes including the number of inputs, outputs, function calls, file accesses, etc. that are multiplied by weighting factors then added together” (Scacchi, 1995).
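A simplified, unadjusted function point count can be sketched as follows, using the commonly published average weights for the five attribute types; the counts themselves are hypothetical, and Albrecht's full method additionally applies complexity and value adjustment factors.

```python
# Illustrative unadjusted function point count (commonly published
# average weights; the counts themselves are hypothetical).
WEIGHTS = {
    "external inputs": 4,
    "external outputs": 5,
    "external inquiries": 4,
    "internal logical files": 10,
    "external interface files": 7,
}
counts = {
    "external inputs": 20,
    "external outputs": 15,
    "external inquiries": 10,
    "internal logical files": 8,
    "external interface files": 4,
}
ufp = sum(WEIGHTS[item] * counts[item] for item in WEIGHTS)
print(f"unadjusted function points: {ufp}")  # 80 + 75 + 40 + 80 + 28 = 303
```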
The function point method has been criticized in the following terms: it is “based solely upon program source code characteristics and does not address production process or production setting variations nor their contributing effects” (Scacchi, 1995, p. 9). Fowler (2003) further criticizes the value of function points by arguing that because not all function points in a software system provide customer utility, they cannot be used as a unit measuring “true” productivity. For him, software must ultimately provide utility and value for the customer and profit for the software enterprise, or as Hasso Plattner, founder of SAP, has noted: “I’m not interested in whether we are better than the competition. The real test is, will most buyers still seek out our products even if we don’t market them?” (Kim & Mauborgne, 1997, p. 106).

Acknowledging the limitations of output factors in the measurement of software development productivity, Balzert (2009, p. 209) suggests a more pragmatic approach along the lines of that of Vosburgh, improving software development on the basis of input rather than output criteria: “I think of the below formula and target an improvement in all parameters:

Development Effort = Complexity / (Process × Personnel × Environment × Quality)”

Productivity improvements in software development in a globally distributed context are of special interest in this thesis. Here, the study of Ramasubbu and Balan (2007) is instrumental, as it shows that work dispersion, even in high process maturity environments, negatively affects development productivity, with an exponential decrease in productivity as dispersion between two locations increases. They also find that dispersion has an indirect effect on conformance quality because of its endogeneity with productivity. “Distributed team members often had difficulties in managing uncertainties caused by interdependent tasks. Uncertainty in the observed projects’ interdependent tasks arose primarily because of 1) information asymmetry between the remote teams, and 2) ambiguous authority. Information asymmetry (either related to customer initiated changes, updated schedules, etc.) between remote teams hinders coordination and task orchestration, which in turn affects project performance. Ambiguous authority refers to the breakdown in a planner’s decision-making authority because of a lack of complete control over the processes at both the remote sites. Ambiguous authority leads to poor project management and hence eventually impacts project performance” (Ramasubbu & Balan, 2007, p. 129). Ramasubbu and Balan’s study investigates dispersion between two locations only; it can be assumed that dispersion beyond two locations further amplifies the phenomenon and decreases productivity at an even greater exponential rate. 2.5.
Globally Distributed Software Development
The benefits and challenges of globally distributed software development (GDSD) have been well researched in the past decade (Agerfalk et al., 2008; Battin et al., 2001; Conchuir et al., 2006; Ebert & De Neve, 2001; Prikladnicki, Audy & Evaristo, 2006; Simons, 2006). Key drivers identified have been access to highly skilled software engineers, often combined with substantial cost benefits derived through the allocation of R&D activities to developing countries, and access to “markets of the future” - rapidly developing countries such as the so-called BRIC nations of Brazil, Russia, India and China.

Benefits of GDSD

Given that labor costs are the major cost driver in software development, GDSD offers MNCs considerably reduced development costs by moving development work to low-wage countries where developer wages are a fraction of those of their US or European counterparts. Conchúir et al. (2006, p. 2) note that the “base annual salary of US$15,000 for a software developer in India, is one quarter of the salary of an Irish developer, who in turn earns half that of a developer in the US”. Unsurprisingly, such a factor cost advantage in the intense and highly competitive software industry presents a cost reduction opportunity many managers can hardly resist, despite the fact that “cost-benefit tradeoffs for GDSD are still not well understood” (Conchuir et al., 2006, p. 3), and offsets to factor cost advantages may therefore occur.

GDSD allows for the leveraging of time-zone effectiveness, also known as the “follow the sun principle” (Treinen & Miller-Frost, 2006). With a global R&D organization spanning several time zones, software development activities can be handed over from one unit at the end of its workday to another unit where the workday has just started. While this concept has been widely applied in the context of software maintenance, where tasks are clearly defined, in the
ambiguous context of “development projects, global software development makes a project multi-site and multi-cultural and introduces a new set of communication, technical, managerial, and coordination challenges” (Jalote & Jain, 2004, p. 1); the concept thus has limited application in development projects.

Changing demographics in industrialized countries have led to an inadequate supply of talented employees, creating the highly competitive executive search environment coined the “war for talent” by Chambers et al. (1998). In this context, GDSD can be seen as a method to mitigate the demographic risk of an aging workforce and a decreasing labor pool of skilled employees (Dyroff, 2009). It gives access to large skilled labor pools outside established industry clusters, avoiding the “war for talent” in industrialized countries by recruiting in emerging economies.

Corporate software packages require statutory, functional and language localization to satisfy the requirements of foreign users and governments. Proximity to markets and customers facilitates requirements gathering, the accurate development of required software characteristics, and sales of the finished software product in the local market (Balzert, 2009).

Another often assumed benefit of GDSD is innovation and the sharing of best practice as part of a global development project. However, Conchúir et al. (2006) point out that global distribution often inhibits the free flow of information, thus stymying innovation and best practice sharing, and that high-wage employees often feel threatened by their low-wage colleagues, making information sharing and the fostering of innovation unlikely.

One motivation for the globalization of R&D, observed in Japanese companies in particular, is to break through internal isomorphism, as noted by De Meyer (1993). Internal isomorphism is “defined as the similarity in management systems that may exist between organizations which interact with each other” (De Meyer, 1993, p. 111). Isomorphism potentially leads to the rejection of innovative new ideas that can be critical to the long-term survival of the enterprise, a phenomenon also dubbed the “not invented here syndrome” (see also section 3.1.7). This is especially critical in the development of new, disruptive technologies, which require overcoming the limitations of the organizational context, as disruptive technologies (see also section 3.4.2) can only be conceived outside the existing structural and geographical context (Kuemmerle, 2005).
In addition to the well-known and researched benefits of GDSD, Agerfalk et al. (2008) outline several benefits of GDSD that they claim were previously unknown: improved resource allocation through the reassignment of high-wage employees after transferring work to low-wage countries, structured forms and records of communication, improved documentation and clearly defined processes. The authors also see improved task modularization as an “unknown” benefit, as GDSD enforces work partitioning that is “splitting their work across feature content into well defined independent modules [thus R&D] sites having responsibility for the whole lifecycle of particular functions/modules” (Agerfalk et al., 2008, pp. 5-6). One question that can be asked, however, is what is cause and what is effect: are these “unknown” benefits a result of GDSD or an enabler thereof?

Challenges of GDSD

Managers often face multiple challenges in ensuring the successful management of globally distributed software development projects, as the anticipated benefits of GDSD often fail to materialize either in part or in whole. Cramton and Webber (2005, p. 762) find in their study that “teams with geographically dispersed members report significantly less effective performance than collocated teams”, as the geographical dispersion of software development projects leads to a number of management challenges.

First, geographical, temporal and cultural distances lead to a loss of richness in communication. Software development is an interactive and iterative task that requires frequent person-to-person communication. Informal communication is particularly important for aligning intermediate work results, clarifying questions and jointly solving problems (Herbsleb, 2003). The global dispersion of a development project makes such person-to-person communication more difficult. Herbsleb et al. (2000, p. 326) note that “diminished communication across distance and the loss of the subtle modes of face-to-face communication and coordination that co-located work affords, appear to have rather dramatic and unfortunate consequences […] a significant slowdown of work that spans sites”. Communication can also be negatively affected by cultural factors that “are expressed in different languages, values, working and communication habits and implicit assumptions [and] are believed to be embedded in the collective knowledge of a specific culture” (Kotlarsky, 2005, p. 38, quoting Baumard, 1999). These cultural factors can have a considerable impact on the way in which people interpret a certain situation and how they react to it (Kotlarsky, 2005), which in turn can cause additional delays (Herbsleb, 2003).
Second, while the general costs of “coordination and control of software development arise from the underlying technical dependencies among work artifacts; as well as the structure of the development process” (Mistrík et al., 2010, p. 393), a globally dispersed setup causes additional coordination costs “due to overhead management, separated and dysfunctional processes, tools and teams” (Ebert, 2006, p. 10). Ebert (2006, p. 10) estimates that in a two-site setup “overhead costs can add 35% to development costs due to interface control, management, replications, frictions etc.”, thus rendering a significant portion of the factor cost benefits initially envisaged ineffective.

Third, GDSD requires a critical mass of developers to be effective, as large-scale software development requires a significant knowledge base; “dispersing R&D makes it more difficult to preserve the integrity of the historic knowledge base of the firm” (De Meyer, 1993, p. 110). In GDSD utilizing several sites and small team sizes, critical knowledge becomes fragmented and often cannot be obtained when required.

Fourth, R&D units in developing countries such as China, India and Brazil face significantly higher employee attrition rates than do their counterparts in industrialized countries. Considering the importance of knowledge stock in the development of large-scale software products, higher employee turnover creates an outflow of knowledge that needs to be rebuilt through expensive training programs or knowledge transfer sessions delivered by senior developers, or bought in the form of additional salary increases or other retention benefits, thus tying up critical resources and potentially creating delays in the development project.

Fifth, distributing software development work among countries that lack intellectual property (IP) protection creates the risk of IP appropriation by competitors (Yang, 2003), especially when combined with high attrition rates that facilitate the outflow of critical IP. Secrecy for new products or critical software functions is easier to manage in a co-located setup (De Meyer, 1993).
Mitigation strategies

The challenges posed by GDSD must be addressed to successfully deliver the software artifact in accordance with predefined quality standards and to leverage the benefits of globally dispersed development. Several authors have thus suggested mitigation strategies, including the use of architectures (Arora, Gambardella & Rullani, 1997; Clerc, Lago & van Vliet, 2007; Huen, 2007; Ramesh &
Dennis, 2002), a standardized global software process (Vanzin, Ribeiro, Prikladnicki, Ceccato & Antunes, 2005), ICT tools (Malhotra, 2001), shared understanding and knowledge management (Brandon, 2004; Carlile, 2003; Takeishi, 2002), boundary objects (Carlile, 2002), iterative and incremental development (Huen, 2007), managerial practices (Malhotra, 2001), and collaborative organizational practices (Orlikowski, 2002) to mitigate the risks of GDSD and to ensure effective coordination and control, thus safeguarding the success of large-scale global software development projects. Carmel termed these mitigation strategies “centripetal forces” (e.g. the use of ICT) that counteract the previously mentioned challenges, or “centrifugal forces”, to which GDSD is subject (Carmel, 1999).
Summary – Review of Literature on R&D Activities in the Global Software Industry

Sections 2.1 to 2.5 at the beginning of this chapter are aimed at building an understanding of the globally dispersed R&D phenomenon in the context of the global software industry to guide the action design research project on improving a global R&D organization described later in this thesis. Obtaining a definition of R&D applicable to the enterprise under study has proved difficult, as the most commonly applied definition, in the OECD’s Frascati Manual, intentionally excludes software development, as it assumes a lack of novelty in the development of software. As software development in the enterprise under study involves a considerable degree of novelty, the definition of Matheson et al. (1998, p. 1) has been chosen; R&D is thus defined as “any technologically related activity that has the potential to renew or extend present business or generate new ones, including competency development, technological innovation, and product or process improvement”. The literature review on global R&D management in section 2.2 presented global R&D management as a multi-layered problem that simultaneously considers and navigates the regional and legal framework, to establish effective integration into local markets, and the internal organizational structures of hierarchies and project organizations, to ensure the overall success of global R&D processes (Boutellier et al., 2008e). The review of literature on R&D internationalization yielded multiple insights, most notably Gassmann’s model of R&D internationalization, which defines archetypes of global R&D organizations and the evolutionary paths by which they are reached (Gassmann & von Zedtwitz, 1999).
The review of literature concerning the software artifact reveals the intangible and immaterial properties of software and the importance of software to economic value creation today (Broy et al., 2006). Furthermore, software is found to be increasingly complex and subject to higher quality standards due to increasing customer and statutory requirements (Balzert, 2009). Despite the non-material nature of software, technological advances mean that software ages (Parnas, 1994), so that software systems require ongoing maintenance (Balzert, 2009), which can account for up to 60-80% of total lifecycle costs (Jalote, 2005). Software development has been characterized as an “intensive technology” (Thompson, 1967) in which value is generated iteratively (Stabell & Fjeldstad, 1998) and ill-structured problems (Simon, 1973) in the development process are often addressed with heuristics (Pólya, 1971). The development of software artifacts is knowledge-intensive and thus highly collaborative (Allen et al., 2007; Whitehead et al., 2010); because it is also non-deterministic and non-repetitive (Mistrík et al., 2010), no finite work breakdown structure is available at the start of development. Software development does not provide economies of scale (Jackson, 1998); it is labor-intensive, with some projects exceeding 6000 man-years (Balzert, 2009), and is often prone to cost overruns and late delivery (Boehm, 1989).

The review of literature on development methodologies stresses the importance of employing suitable development methodologies in large-scale software development to produce a software artifact of the required quality. Software development methodologies have evolved considerably from the initial “waterfall” methodology (Royce, 1970) to concurrent (Rational Software, 1998), iterative (Boehm & Papaccio, 1988), agile (Beck et al., 2001; Huen, 2007; Oestereich & Weiss, 2008) and most recently lean (Poppendieck & Poppendieck, 2006) development methodologies that enable the construction of ever-larger and increasingly complex software artifacts. In addition to software development methodologies, engineering principles are employed in modern software development to conceive, develop and maintain software (IEEE, 1990, p. 67), thus ensuring the quality of the software artifact. These principles are of special importance in the large-scale software development context under study (Jalote, 2005). Unlike other engineering disciplines, which are typically constrained by physical laws, software development is constrained only by complexity and costs (Young & Faulk, 2010). Among the software engineering principles revealed by the literature review, those of cohesion, coupling and modularization are considered most
relevant to this thesis. Cohesion describes how tightly bound the internal elements of a software module are, while coupling refers to how strongly modules are interconnected (Jalote, 2005). Cohesion and coupling are principles that enable the modularization of the software artifact. Modularization creates loose coupling between components of the software artifact and promotes a high degree of cohesion within modules. Modularization is a highly desirable property of modern software systems, as it facilitates changeability, maintenance, division of labor, and coordination of software development (Jalote, 2005). Modularization lays the foundation for software architectures (Weinberg, 1971), which describe the “organization of [a software system] embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution” (The Institute of Electrical and Electronics Engineers, Inc., 2000, p. 3). High-quality software architectures ensure the scalability, extendibility, robustness and performance of the software product (Brooks, 1995), and play a critical role by providing “the highest return on investment with respect to quality, schedule, and cost” (Bass et al., 2003, p. xii).

One of the most insightful papers examined in the review of literature on software engineering is Conway’s classic study entitled “How do committees invent?”. In 1968, Conway pointed out the critical role of communication in software development by suggesting that “a design effort should be organized according to the need for communication”, thereby establishing a linkage between software engineering and organizational design. The relationship is so evident in software enterprises that “architectures [prescribe] the structure of the system being developed, but that structure becomes engraved in the structure of the development project and sometimes the structure of the entire organization” (Bass et al., 2003, p. 29). Several authors suggested modular organizational design based on modular product architectures at an early stage (Berg, 1975; Sanchez & Mahoney, 1996; Weick, 1976), as it offers multiple benefits that are especially desirable in dynamic market environments, such as persistence, effective environmental sensing, adaptability, diversity, fail-safeness and lower costs (Weick, 1976). While organizational design based on modular product architectures has desirable properties, this conceptualization also has limitations where the organization is built around a strong customer or functional focus (Cusumano, 1995). In such cases, modular organizations can exhibit higher overhead costs through process breaks and introduced frictions (Bass et al., 2003). Second, this concept may
prove expensive in organizations where the underlying product architecture is subject to frequent change, as this would require the costly ongoing adaptation of organizational structures (Lang, 2004). To operationalize the organizational transition towards a modular organization, the study of Sosa et al. (2004) provides methodologies for measuring gaps through continuous monitoring and facilitating adjustments to close gaps between product architectures and organizational structures.

Organizational improvement must be quantifiable, and it must be measured by comparing metrics before and after an improvement has been made. However, performance metrics used in software development, such as productivity, are difficult to obtain. While input factors including labor and infrastructure costs can be readily obtained, measuring output factors through, for example, lines of code (Albrecht, 1979) or function point metrics has been found ineffective in the measurement of productivity (Jones, 1978; Scacchi, 1995). Moreover, these metrics do not relate to utility for end customers, or to software enterprise profit (Fowler, 2003). Authors have provided different suggestions on how to improve software development productivity, either by improving input rather than output criteria (Balzert, 2009) or by addressing the entire spectrum of productivity issues with sufficient management commitment using an integrated approach, as there is no single solution or technology that addresses productivity problems (Vosburgh et al., 1984).

Software development in the contemporary world is often globally distributed; the literature review indicates that multiple benefits can be attributed to GDSD, such as labor cost arbitrage (Conchúir et al., 2006), increases in innovative capabilities and flexibility (Moitra, 2008), time-zone effectiveness (Treinen & Miller-Frost, 2006), avoiding demographic gaps (Dyroff, 2009), competing for talented developers (Chambers et al., 1998), proximity to market to improve requirements gathering (Balzert, 2009), avoiding isomorphism (De Meyer, 1993), improved resource allocation (Agerfalk et al., 2008) and process formalization (Agerfalk et al., 2008). However, research also indicates that many of these assumed benefits fail to materialize, as geographically dispersed teams often report less effective performance than do collocated teams (Cramton & Webber, 2005), with performance decreasing exponentially as dispersion between two locations increases (Ramasubbu & Balan, 2007). The less effective than expected performance of GDSD is often attributed to the multiple challenges GDSD faces. The literature review reveals several such challenges, including distances that interfere with effective communication (Herbsleb et al., 2000), task dependencies increasing
overhead costs (Ebert, 2006; Mistrík et al., 2010), the lack of critical mass in foreign R&D operations (De Meyer, 1993), higher attrition rates (Dyroff, 2009), and the risk of appropriation of intellectual property (Yang, 2003), as secrecy is more easily managed in a collocated setup (De Meyer, 1993). Authors have thus suggested various strategies by which the potential risks of GDSD can be mitigated: architectures (Arora et al., 1997; Clerc et al., 2007; Huen, 2007; Ramesh & Dennis, 2002), a standardized global software process (Vanzin et al., 2005), ICT tools (Malhotra, 2001), shared understanding and knowledge management (Brandon, 2004; Carlile, 2003; Takeishi, 2002), the use of boundary objects (Carlile, 2002), iterative and incremental development (Huen, 2007), managerial practices (Malhotra, 2001), and collaborative organizational practices (Orlikowski, 2002). The effectiveness of such mitigation strategies remains questionable, as Ramasubbu and Balan’s (2007) study indicates that productivity decreases due to geographical dispersion occur even in the highly mature environments in which such mitigation strategies are typically applied.

This concludes the first step of the literature review process presented in Figure 3. With a solid understanding of the phenomenon under study gained through the literature review, the next chapter builds on the theoretical underpinnings of this thesis by reviewing theories related to the intended organizational improvement.
CHAPTER 3
THEORETICAL UNDERPINNINGS
The literature review presented in Chapter 2, following the review process exhibited in Figure 3, has provided a comprehensive understanding of R&D activities in the global software industry through an extensive review of literature concerning R&D, software artifacts and software development. Sections 3.1.1 to 3.1.3 of this chapter now review literature concerning organizational theory, work design and organizational design, subjects that help explain how large R&D activities are effectively divided and integrated, taking their structural requirements into consideration, and draw on several theories from the fields of economics, organizational theory, strategic management and software engineering. 3.1.
Organizational Theory
Modern organizations produce products and services to cater for the needs of their customers. While needs can be almost unlimited, the resources available to produce such products and services are typically scarce. The conflict between the fulfillment of people’s needs and the scarcity of resources is the origin of the organizational problem in its general form. As one of the earliest accounts addressing the organizational problem, Adam Smith’s famous pin factory example shows how the division of labor and the specialization of workers yield substantial productivity gains (Smith, 1776). However, it is typically not possible to retain all productivity gains, as after being divided, specialized labor must be coordinated or traded in the form of exchanges, which in itself requires resources and hence reduces productivity (Picot et al., 2008). Furthermore, it cannot simply be assumed that workers willingly and completely adopt the organizational structures produced by the division of labor. Motivational efforts and resources are therefore required to make the division of labor and specialization effective, potentially resulting in a further reduction in initial productivity gains. Maximizing the net productivity gain, that is, the initial productivity gain achieved through the division of labor less the costs of coordination, exchange and motivation, is a challenge that any organization must face. How to design and maintain an effective organization that addresses the organizational problem is the subject of organizational theory, a major field of study within the social sciences.
Given Scott's (1961, p. 20) belief that organizational theory is located on the periphery of general system theory, he suggests adopting the same approach as that used in general system theory when studying organizations and investigating:
1. their parts (individuals) in aggregates, and the movement of individuals into and out of the system;
2. the interaction of individuals with the environment found in the system;
3. the interactions among individuals in the system;
4. general growth and stability problems in the system.
While Scott's suggestions are of a rather generic nature, Schreyögg (2008) defines organizational theory more specifically according to the five main research problems it addresses. According to Schreyögg, the first research problem of organizational theory is how to structure tasks using analytical methods and organizational models to find the right amount and most suitable type of formalization required to effectively organize the execution of tasks. The second research problem is how to integrate individuals with the organization, aligning individual and organizational needs in such a way that high levels of motivation become available to the organization, allowing it to achieve or exceed given organizational goals. Organization and environment, the third research problem, is concerned with how organizations can thrive in challenging environments and the particular role played by organizational design in this context. In addition to formal structures, informal structures, more recently called emergent processes, also take shape in organizations. Schreyögg defines the fourth problem of organizational theory as how these emergent processes function in organizations and how they relate to the formal structures of the organization. The final problem with which organizational theory is concerned is organizational change and transformation. This problem includes both the implementation of organizational changes and the framework adopted to ensure the organization remains open to inputs from within and outside, and is able to engage in a process of continuous change. This literature review builds up an understanding of these five areas to provide a theoretical foundation for the ADR case study of this thesis. Although the term "organizational theory" might imply "that there is a single, integrated, overarching explanation for organizations and organizing— there are in fact many organization theories and they do not always fit neatly together" (Hatch & Cunliffe, 2006, p. 5). Organizational theory is in no way a unified body of thought; rather, it is a term that subsumes the broad array of available organizational theories that over time have been infused by ideas and contributions from other fields of science (see Figure 14).
Figure 14: Sources of inspiration for organizational theory (Hatch & Cunliffe, 2006, p. 6)
The availability of these various organizational theories should not, however, be seen as a burden but, as Hatch and Cunliffe point out, as a benefit enabling researchers and managers to make sense of their organization from various perspectives simultaneously: "Because of the complexity and pluralism of organizations, managers who make sense of and use multiple perspectives are better able to bring their knowledge of organization theory to bear on the wide range of analyses, decisions and plans their organizations make each and every day" (Hatch & Cunliffe, 2006). Considering the breadth of available organizational theories, this study focuses on organizational theories that contribute unique perspectives on concepts critical to this thesis as laid out by Schreyögg (2008): first, work design, the structuring of tasks through work decomposition and integration; second, organizational design, the structural dimension of organizations that refers to the formation of organizational units based on the results of initial work design analysis and synthesis; third, organizational learning, the behavioral dimension of organizations concerned with the establishment of organizational routines that allow the organization to utilize experience in future decisions and actions; and fourth, organizational transformation and change, which refers to organizational interventions that adjust organizational strategies and structures to changed internal or external environmental factors and modify organizational behavior and culture, thus enhancing the performance of the organization.
3.1.1. Work Design
Because organizations produce and deliver goods and services whose creation exceeds the capacity of individual employees, the work of conceiving and producing such products and services has to be divided and allocated among multiple employees utilizing various organizational forms such as teams, departments, divisions and subsidiaries. "How work is conceived in broad terms, translated across organizational levels, and structured for the units and the individuals who perform the work is referred to as work design" (Torraco, 2005, p. 85). Work design is the first step taken before larger organizational structures can be designed and implemented. It consists of two major activities: work decomposition, in which the main tasks involved in producing a product or service are divided into task components and subtasks; and work integration, in which subtasks are allocated to individual actors or organizational units and are integrated into the final work product.
3.1.1.1. Classic work design theory
Building on classic work design theories such as those developed and advanced by Smith (1776), Taylor (1911), Fayol (1917) and Kosiol (1978), this chapter reviews work design in the software development context. Together with classic work design, three other forms of work design in software development are reviewed more closely, each characterized by three concepts: mode of locality, mode of work allocation and degree of stickiness (see Figure 15). Together, these forms of work design represent a historic progression from the physical work of industrial manufacturing to the knowledge work of modern software development, and from the previously collocated knowledge work of software development to a globally dispersed form of knowledge work initiated by the advent of globalization in the software industry.
Figure 15: Evolution of work design in software development
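As a hedged aside, the classification sketched in Figure 15 can be encoded along its three dimensions. The attribute values below are assumptions inferred from the surrounding text, not values read directly from the figure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkDesignForm:
    name: str
    mode_of_locality: str         # collocated vs. globally distributed
    mode_of_work_allocation: str  # static vs. dynamic division of work
    degree_of_stickiness: str     # how costly information transfer is

FORMS = [
    WorkDesignForm("classic industrial work design",
                   "collocated", "static", "low"),       # assumed: low knowledge content
    WorkDesignForm("collocated software development",
                   "collocated", "static", "high"),
    WorkDesignForm("globally distributed software development",
                   "globally distributed", "static", "high"),
]
```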
Classic Work Design - Work Decomposition
Work decomposition creates both a horizontal division of labor, in which one large task is subdivided into smaller tasks, and a vertical division of labor, in which planning tasks are separated from execution tasks (Taylor, 1911). Subtasks can be further subdivided until an individual employee or an organizational unit can perform a task. The first step in the decomposition of a large undertaking is task analysis, in which tasks and their various attributes are analyzed. This includes the analysis of the actions/processes to be performed, the work objects involved, the tools required, the purpose of a particular task, task repetition including its timing and frequency (Brauchler & Landau, 1998), the necessary sequence in which these basic tasks are undertaken in cases where this is not arbitrary, and the identification of tasks that can be performed in parallel (Schreyögg, 2008). Based on the results of the task analysis, tasks are divided into smaller subtasks. Division typically occurs according to the type of activity, the work object, the rank of a task, the requirements of physical adjuncts, location and time (Kosiol, 1962; Kosiol, 1978). The result of task decomposition is a work-breakdown structure, an overlap-free hierarchical tree structure in which each hierarchy level represents a similar degree of work decomposition. Task partitioning not only makes the construction of a large artifact possible in the first place, but also increases productivity through economies of scale, economies of scope and specialization, a fact famously illustrated by Adam Smith's pin factory example, where the introduction of the division of labor increased productivity by a factor of 240 in comparison to a worker performing all required steps by himself (Smith, 1776).
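The following minimal sketch models such a work-breakdown structure as an overlap-free tree of tasks whose leaves are the units assignable to workers or organizational units; all class and method names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    subtasks: list["Task"] = field(default_factory=list)

    def decompose(self, *names: str) -> list["Task"]:
        """Horizontal division of labor: split this task into subtasks."""
        self.subtasks = [Task(n) for n in names]
        return self.subtasks

    def leaves(self):
        """Leaf tasks are the assignable units of the work breakdown."""
        if not self.subtasks:
            yield self
        for sub in self.subtasks:
            yield from sub.leaves()

# Adam Smith's pin factory, decomposed one level deep:
pins = Task("make pins")
pins.decompose("draw wire", "cut wire", "sharpen points", "attach heads")
print([t.name for t in pins.leaves()])
```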
Although tasks may have been partitioned, they still have various degrees of interdependency. "Interdependence between two tasks is the extent to which performance and outcome of one task is impacted by, or needs interaction with, performance and outcome of the other task" (Kumar et al., 2009, p. 644, quoting Crowston, 1997). These interdependencies must be coordinated and controlled during production to achieve the integration of a final product or service, making task interdependencies an important contingency in classic work design. According to coordination theory, coordination can be defined as managing dependencies among activities, where different forms of dependency require alternative coordination processes to manage them (Malone et al., 1999). These additional coordination and control efforts partially offset the initial productivity gains realized by task partitioning. The classic taxonomy of task interdependencies (see Figure 16) was developed in the 1960s and 1970s, and has most often been researched in the context of collocated setups in either manufacturing or clerical work (Thompson, 1967; van de Ven, Delbecq & Koenig, 1976). Thompson defines three modes of interdependence: pooled or independent interdependence, where subordinate tasks are processed in a pool and each task is executed independently of the others; sequential interdependence, where subordinate tasks are processed in a sequential order, with subsequent tasks depending on the output of previous tasks; and reciprocal interdependence, where subordinate tasks depend on each other, with work alternating between subordinate tasks and task outputs becoming the inputs of others. By adding the fourth mode of team interdependence, in which work enters the unit and tasks are performed simultaneously, van de Ven et al. (1976) further extended Thompson's conceptualization.
Figure 16: Classic taxonomy of task interdependence (Kumar et al., 2009, p. 647)
Classic Work Design - Integration / Coordination
Classic coordination theory puts forward three modes of achieving the coordination of tasks (March & Simon, 1958; Thompson, 1967); organizations use these modes in an additive, rather than exclusive, fashion to achieve overall task coordination. The first mode, coordination through standardization, is based on a hierarchical design of authority relationships, in which rules and regulations constrain the actions of each actor or unit to ensure consistency with those undertaking an interdependent task. One key assumption is that rules are internally consistent, which requires a limited number of stable, repetitive situations so that the number of overall rules that need to be established is also limited. Mintzberg (1979) differentiates between three forms of standardization: the first is the standardization of work processes, where work content is specified and programmed; the second is the standardization of outputs, where work results are specified and interfaces among tasks are predetermined; and the third is the standardization of skills and knowledge, where the kind of training required to perform work is predefined to create a shared understanding and similar mindsets, thus enabling the production of similar output. The second mode of coordination is coordination by plan, where schedules and plans govern the actions taken to complete interdependent tasks. Coordination by plan does not require the high degree of standardization needed for coordination through standardization, and is thus more suitable for more dynamic situations, especially in changing task environments, the task environment referring to "those parts of the environment that are relevant or potentially relevant to goal setting and goal attainment" (Dill, 1958). In the third mode, coordination by mutual adjustment, actors transfer new information during the action process to mutually adjust their actions and achieve coordination between tasks. Such a transfer of information may involve communication across hierarchical lines. Interpersonal coordination by mutual adjustment is used to an increasing extent in variable and unpredictable situations that frequently render plans and schedules obsolete (March & Simon, 1958). These various forms of interdependence commonly influence how task partitioning, coordination and control occur. Thompson argues that coordination modes relate to specific types of interdependency: coordination by standards is appropriate in the case of pooled task interdependence; coordination by plan is suitable in the case of sequential task interdependency; and coordination by mutual adjustment is applicable in the case of reciprocal task interdependencies (Thompson, 1967). Thompson's key propositions about coordination modes at work were later tested in an empirical study by van de Ven et al. (1976), who analyzed the task interdependencies and coordination modes applied in 197 work units within a large agency. Their study generally confirms Thompson's key propositions. They observe that with increasing work flow interdependence, from pooled to sequential and, finally, reciprocal interdependency, actors increasingly make use of all coordination mechanisms, but significantly more so in the form of personal and group coordination and interaction rather than via rules and plans. While it is possible to program pooled or sequential interdependence via standards and schedules, this is not feasible in the case of reciprocal interdependencies due to the potentially high number of combinatory elements, and it is especially impossible in the case of team interdependence, which requires instant feedback and communication to accomplish coordination. The reason for this relationship between task interdependence and coordination modes is that organizations organize themselves to minimize coordination costs, using extensive rules and plans wherever possible, while more expensive forms of coordination are used less frequently (Thompson, 1967) and only in cases of more complex task interdependencies such as reciprocal and, in particular, team interdependence, which require much higher levels of continuous inter-actor awareness, communication, information processing, mutual knowledge, trust, and mutual adjustment (Kumar et al., 2009). Thompson points out that "standardization requires less frequent decisions and a smaller volume of communication during a specific period of operations than does planning, and planning calls for less decision and communication activity than does mutual adjustments. There are very real costs involved in coordination" (Thompson, 1967, p. 56). Once established, standardization incorporates the coordination of parts of the work program, leading to a corresponding reduction in the need for continuous coordination (March & Simon, 1958). Drawing on transaction cost theory, which is grounded in institutional economics (Coase, 1937; Williamson et al., 1991), Clemons and Row (1992) define coordination costs as the costs of coordinating decisions and operations among economic activities to improve resource efficiency. The amount of coordination cost thus becomes a key criterion for selecting the mode of coordination for a given task.
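Thompson's propositions summarized above can be captured in a simple lookup, as sketched below; the enum and mapping follow the text, with team interdependence added per van de Ven et al. (1976).

```python
from enum import Enum, auto

class Interdependence(Enum):
    POOLED = auto()
    SEQUENTIAL = auto()
    RECIPROCAL = auto()
    TEAM = auto()  # van de Ven et al.'s (1976) extension

# Each form of interdependence maps to the cheapest adequate coordination
# mode (Thompson, 1967); richer modes subsume the cheaper ones.
COORDINATION_MODE = {
    Interdependence.POOLED: "standardization (rules and standards)",
    Interdependence.SEQUENTIAL: "plan (schedules)",
    Interdependence.RECIPROCAL: "mutual adjustment (interpersonal)",
    # Team interdependence requires instant feedback and communication,
    # i.e. group coordination by mutual adjustment:
    Interdependence.TEAM: "mutual adjustment (group interaction)",
}

print(COORDINATION_MODE[Interdependence.SEQUENTIAL])
```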
Classic Work Design - Integration / Task Synthesis
As an analog to task analysis, which uses various approaches and criteria to decompose tasks, task synthesis combines previously decomposed tasks with systems of work integration. These systems of work integration can be conceived of as forms of coordination that combine structural elements previously decomposed according to various criteria. Coordination is a core organizational principle and integrative design method adopted by organizations (Kosiol, 1962), whereby both the stability and flexibility of organizational and business process designs are established and secured. Schreyögg (2008) argues that from the main task perspective, task decomposition represents interruptions of the value chain; tasks are completed by different actors in various locations at different times, which ultimately creates the problem of how to recombine all of these tasks into the main task and produce a finished product. Thus, organizational integration is, to a great extent, the planning and execution of coordination. It is clear that the higher the degree of work decomposition and the greater the number of subtasks generated, the more complicated work integration will be. Work integration should, however, not be seen as a purely "mechanical problem" in which subtasks are simply assembled to form a larger structure, but should also take into consideration the diverging interests of actors and their environment (Schreyögg, 2008).
Task Decomposition Relationship
Organizational design creates a complex system comprising various relationships; Kosiol (1962) identifies five key relationships and criteria on the basis of which task synthesis is conducted. The first step in the task integration process, the work breakdown structure, is based on the results of the previous task analysis. According to the work breakdown structure, individual tasks or groups of tasks are assigned to individual job positions that are then filled by suitably skilled employees who perform these tasks (see Figure 17). In addition to the work breakdown structure, several criteria are utilized to integrate work undertaken in job positions or organizational units (Kosiol, 1968, p. 49ff), as illustrated in the sketch following this list:
• Action: Similarity of the actions to be performed;
• Object: Actions to be performed on similar products or groups of products;
• Phases: Grouping according to the timing of the overall task (e.g. planning, manufacturing, distribution);
• Fixed asset utilization: Optimization of the use of tangible assets (e.g. a flexible production line);
• Employees: Skills and capabilities of employees;
• Location: Grouping by location (e.g. manufacturing plant, logistics center or R&D center).
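The sketch below schematically applies such synthesis criteria, grouping decomposed tasks into candidate job positions; the sample tasks and dictionary keys are illustrative assumptions, not Kosiol's own notation.

```python
from collections import defaultdict

# Illustrative decomposed tasks, each annotated with synthesis criteria.
tasks = [
    {"name": "assemble pump", "action": "assemble", "object": "pump", "location": "plant A"},
    {"name": "assemble valve", "action": "assemble", "object": "valve", "location": "plant A"},
    {"name": "test pump", "action": "test", "object": "pump", "location": "plant B"},
]

def synthesize(tasks, criterion: str):
    """Group tasks into candidate job positions by one synthesis criterion."""
    positions = defaultdict(list)
    for task in tasks:
        positions[task[criterion]].append(task["name"])
    return dict(positions)

print(synthesize(tasks, "action"))  # functional grouping by similar actions
print(synthesize(tasks, "object"))  # grouping by product/work object
```

Applying different criteria yields different groupings, which is precisely why the order and choice of criteria matter for the resulting organization.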
Task synthesis also determines the degree of centralization or decentralization. Overall, a combination or centralization of tasks is useful where standardized operations are required and provide an economic benefit (e.g. cost reductions, economies of scale or scope), whereas a decentralization of tasks is practical in cases where a centralized unit would be overwhelmed by the complexity or volume of information.
Figure 17: Task analysis and task synthesis (Brauchler & Landau, 1998)
Supervision Relationship
The work breakdown structure initially contains only execution tasks directly related to the highest task node - the production of products or services. In its original form, it does not include the management and coordination tasks required to achieve overall work integration. However, as previously seen, task interdependencies require such coordination tasks to ensure the integration of the overall product or service. To identify these required coordination tasks and integrate them as part of the work design, task synthesis utilizes the task ranking information produced by the task analysis. According to Kosiol's (1962) task analysis methodology, tasks have different ranks referring to the vertical division of labor, in which supervisory tasks are separated from execution tasks and carry higher ranks. Based on this ranking information, supervisory tasks can be combined into management positions that supervise and coordinate the execution of subordinate tasks; in the case of large products and services requiring a higher degree of coordination, such management positions can be further aggregated into management or control units that supervise and coordinate larger parts of the product or service generation process (such as the production line management team or the corporate control team).
By subordinating execution tasks to superior coordination tasks and assigning both types of tasks to job positions, task synthesis creates execution job positions subordinated to supervisory job positions, and thus creates departments as the first structures of the organizational design.
Support Staff Relationship
Managers and employees performing supervisory tasks often receive support from staff who provide them with assistance or support functions. Support tasks are therefore often separated from supervisory tasks to allow supervising managers and employees to dedicate their time to their main supervision tasks and to save on factor costs through the delegation of lower value-adding tasks to lower-paid employees. Task synthesis acknowledges this practice and creates support job positions to which these supporting tasks are assigned. Relationships between such support staff positions or support staff departments can develop into their own coordination networks (communities of practice), in which the exchange of information about best practices can enhance the overall efficacy of support staff.
Additional Relationships
The three relationships outlined above—task decomposition, supervision and support staff relationships—together represent the organizational system. Based on these relationships, task synthesis integrates tasks to create job position and department plans. Despite the separation of task synthesis into three distinct steps, these steps are in practice integrated and inseparable. The three key work integration relationships are complemented by two additional types of relationships considered only indirectly in task synthesis: work relationships and council relationships.
Work Relationships
While task analysis creates a hierarchical work breakdown structure, task synthesis integrates tasks into an organizational structure that will, for the production of large products or services, have a similarly hierarchical structure. Despite the integration of coordination and supervisory tasks in supervisory positions, continuous communication and information exchange, not only within departments but also between departments, is required to enable an integrated work process. This notion of interdepartmental communication (McCann & Galbraith, 1981) and information exchange is closely related to how the enterprise's processes are organized, where business processes that manifest the value stream of the organization run across the traditional departmental structure (see Figure 18). Such lateral coordination mechanisms are today defined as lateral coordination networks.
Figure 18: Intra-organizational communication paths (Kosiol, 1962)
Kosiol compares the communication system of an organization with a traffic system: analogous to the transportation of material work objects, immaterial orders, information and decision proposals flow between departments and job positions. The exchange of information is especially important for intellectual work on immaterial knowledge objects, which requires a larger amount of information.
Council Relationships
Council relationships represent a special case of the organizational communication system formed through work relationships, often through casual talks over time that establish an intra-organizational network. Unlike work relationships, council relationships are occasionally formed through work on special assignments across departmental boundaries, and exist only for a limited time. In all instances, councils are established to obtain a diverse range of opinions from different angles and to achieve faster and more flexible coordination and decision making than is possible via the "official" path. Cross-functional coordination is often beneficial, as many decisions require cross-departmental teamwork. Kosiol differentiates between various forms of councils: decision councils, consultative councils, information councils and executive councils.
Kosiol's critically acclaimed work design framework, featuring the decomposition (task analysis) and integration (task synthesis) of work, represents one of the first integrated work design frameworks in the German organizational literature. Although this generic framework has been widely applied, especially in German-speaking countries, it is nevertheless subject to inherent limitations because, as Berg (1975) points out, it does not provide a clear methodology stating in which order the task analysis criteria should be applied to create a work breakdown structure. Applying the task analysis criteria in different sequences results in different work decompositions with varying coordination, resource and other cost implications for the enterprise. Furthermore, the creation of a work breakdown structure poses an inherent dilemma: estimating coordination efforts and costs is only possible after a complete work breakdown structure has been created and implemented. It is thus difficult, if not impossible, to make an ex ante cost estimate for large work breakdown structures. Here, the value of individual experience and organizational learning cannot be overemphasized, as they guide both task analysis and task synthesis to ensure that feasible options are selected and that decomposition criteria are applied in an effective order in task analysis. Experience is also highly influential in the synthesis of tasks into job positions. Experienced actors choose work decompositions and integrations that can also be effectively coordinated and controlled, thus increasing the likelihood of the overall work design and output being highly efficacious (Berg, 1975). One common example is the experience-based functional decomposition of enterprises and the integration of purchasing, manufacturing, sales and distribution, and finance and control tasks into functional departments.
Summary: Classic Work Design
Although the work of Kosiol provides a useful initial framework for organizational inquiry and design in the context of this thesis, in redesigning tasks this framework has to be augmented by additional criteria and methods. The framework is not fully conclusive, as the decomposition and integration of work requires past experience among decision makers to achieve feasible and effective results. While Kosiol includes the organization's communication system as an integral part of the framework, he does not consider it a distinct design criterion for work decomposition or integration.
3.1.1.2. Work Design in the Context of Collocated Software Development with Static Work Division
As discussed in Chapter 2, the specific characteristics of software and software development mean that work design in collocated software development differs from the classic work design of collocated industrial manufacturing in several respects and requires different approaches to work decomposition and integration. Classic organizational design is mainly concerned with the formalization of organizational structures to create a rigid, functionally structured organization that reaps the benefits of the division of labor, while addressing coordination and transaction costs mainly through functional structure, clear job descriptions and organizational rules. This differentiation of work was until recently the main theme of organizational theory. In modern times, especially in the production of complex and knowledge-intense products, organizational design is more concerned with the integration of work and organizational substructures to improve quality, agility and overall productivity (Schreyögg, 2008). While the basic principles of work decomposition and integration still apply in large software developments, one main difference lies in the nature of software as a non-material, knowledge-ingrained artifact whose development requires highly skilled workers and intense team communication. In classic organizational theory, the horizontal division of labor achieves economies of scale, economies of scope and specialization, while the vertical division of labor separates planning from execution. While this form of organizing creates considerable specialization advantages and increases productivity in an industrial or manufacturing environment, software development does not offer economies of scale through the gradual reduction of the fixed costs of fixed assets via higher volumes and utilization, as it is a design task rather than a manufacturing task (Jackson, 1998). Software development can be classified as an intensive technology (Thompson, 1967) that creates value by solving customer problems through a spiral flow rather than the sequential flow of chains seen in classic work design (Stabell & Fjeldstad, 1998). Unlike the mere assembly of pre-produced parts on a conveyor belt observed in classic work design, developing software is a creative task with inherently incomplete work breakdown structures and job descriptions. One key characteristic of software development especially relevant to work design is the large amount of knowledge required to conceive of and develop software. Software developers require substantial knowledge to perform their tasks, which also involve simultaneous planning and coordination. Thus neither the vertical division of labor and separation of planning from execution in Taylor's scientific management method nor the rank criteria Kosiol uses in his task analysis methodology, with workers serving as pure executors, are fully applicable to work design for collocated software development.
The knowledge-intense nature of software development dictates the use of heuristic rather than deterministic approaches to develop high-quality code. Not all problem-solving knowledge is therefore codified, and a considerable amount of tacit, non-codifiable knowledge remains an essential part of the technology (Polanyi, 1966). Software development is thus often compared to artisan workshops with skilled craftsmen who build up their tacit knowledge through development projects and collaboration with more experienced senior craftsmen. This results in substantial variations in productivity among software developers, with the productivity and quality output of developers diverging by orders of magnitude (DeMarco & Lister, 1987; Sackman et al., 1968). To "solve a problem, needed information and problem solving capabilities must be brought together - physically or 'virtually' - at a single locus" (von Hippel, 1994, p. 429). Classic economic theory assumes that, unlike transfers of physical goods, the cost of transferring information equals zero. However, Teece's (1977) empirical study of 26 international technology transfer projects across process and discrete industries finds that the costs of technology transfers range from 2% to 59% of total project costs, with an average of 19%. Considering the pure knowledge nature of technology transfer and the complexity of large-scale software development, it is reasonable to assume that knowledge transfer costs in the software industry are in the upper range of those observed in Teece's empirical study, which indicates that significant costs are incurred when transferring information in large-scale software development. These information transfer costs are comparable to friction in mechanical engineering, where additional energy is required to overcome resistance. Von Hippel (1994, p. 430) defines this phenomenon in the information transfer context as stickiness, "an incremental expenditure that is required to transfer a unit of information to a specified locus in a form usable by a given information seeker". When these costs are low, information stickiness is considered low; when these costs are high, stickiness is considered high (von Hippel, 1994). Various factors contribute to the degree of information stickiness, with the related efforts and costs depending on the nature and amount of the information, or on the attributes of the seeker and provider of information. In the software development setting, all of these characteristics can create stickiness: the nature of the information can be related to complex product architectures and problem-solving routines, the amount of information can be linked to the large software product to be developed, and the attributes of the sender and receiver of information contribute to knowledge gaps between junior and senior developers jointly working on a development project. The high degree of information stickiness in the software development process, especially in innovative software product development where initial specifications and architectures are still vague and intense communication is required, has led to a tendency "to carry out the innovation related problem solving activity at the locus of sticky information" (von Hippel, 1994, p. 432). Therefore, until the 1990s, many large global software companies like Microsoft (Redmond, USA), Oracle (Redwood Shores, USA), Apple (Cupertino, USA) and SAP (Walldorf, Germany) developed their software products almost exclusively in teams collocated at their global headquarters. Effective communication is a key success factor for problem solving in R&D teams in general (Allen & Cohen, 1969; Allen & Henn, 2007) and in software development in particular (Amrit, 2008; Lang, 2004). In a collocated development setup in which developers share the same room, building or campus, fellow developers can be reached immediately or at short notice to discuss and solve problems relating to the overall design and coding of a software module. The opportunity for developers to discuss issues and align their work face-to-face facilitates the transfer of tacit information and accelerates issue resolution in software development projects (Herbsleb, 2003).
Work Decomposition in Collocated Software Development
The large-scale development of software products with millions of lines of code clearly requires a division of labor, as these efforts exceed the capacity of any individual developer. While an individual developer can handle a small development project alone, larger projects require teams of software developers and a work design that subdivides a large undertaking and allocates it to development teams. In software development, however, work design takes a different course to that seen in its classic application in the industrial volume-manufacturing context, as software development does not increase productivity through economies of scale and economies of scope (Jackson, 1998). Software quality and the structural integration of the software artifact therefore become the work design priorities. Considering the frequent changes that occur in customer requirements and technology, one important software quality criterion also relevant to work design is the re-usability and changeability of large software systems, which keeps development costs manageable and ensures a manageable total cost of ownership (TCO) over the whole life cycle of the software system for end users (Balzert, 2009). To achieve this goal, large-scale software development projects
use modular software architectures, where changes can be contained to one or a few modules rather than affecting the whole software product. Modular software architectures can be seen as work breakdown structures that apply the basic software engineering principles of information hiding, loose coupling and cohesion. These are referred to as good programming practices that reduce maintenance and modification costs over time (Constantine, 1995). Especially in large-scale software development, where communication and coordination efforts increase exponentially with team size, modules that encapsulate functionality help to contain most team communication within a particular module development area. Work decomposition in software development thus encapsulates functionality and communication within the development team through modular design and the application of the software engineering principles of coupling and cohesion, reducing the overall coordination effort and thus coordination costs. The communication requirements of socio-technical systems can therefore be seen as key criteria for work decomposition, and ultimately for effective work design, in the context of knowledge- and communication-intense R&D projects like software development (Berg, 1975; Galbraith, 1973). As a design parameter, communication requirements are a major extension to the work of Kosiol (1962), who acknowledges the importance of communication in the organization, comparing it to traffic in the organization; however, he considers communication structure only peripherally as a work design parameter, viewing it instead as a design principle of work design. Work decomposition in software development is an iterative process in comparison to the near-complete work decomposition seen in classic work design, and it often stops at a relatively high-level software module or functionality cluster assigned to a team rather than an individual. In work design for software development, iterative work decomposition and integration are undertaken at a similar hierarchical level to the given software architecture, and further details of work decomposition and work integration are left to the assigned teams to ensure an effective work design (Berg, 1975).
Task Interdependencies in Collocated Software Development
Software development requires that information and problem-solving capabilities be brought together to solve various problems throughout the software development process. As previously discussed, such transfers cause additional expenditure on knowledge and information transfer during the software development process; such costly knowledge or information transfer can be referred to as sticky information. Based on the works of von Hippel (1990, 1994), Kumar et al. (2009) revise the classic taxonomy of task interdependence. They introduce sticky forms of task interdependence, as the phenomenon of stickiness has a considerable impact on task interdependencies in collocated software development (see Figure 19). "Sticky task interdependencies appear when the work is novel, ambiguous, uncertain, equivocal and complex" (Kumar et al., 2009, p. 655) - characteristics that precisely describe the work of software development.
Figure 19: Sticky forms of task interdependence (Kumar et al., 2009, p. 653)
While transferring work from one activity to the other requires only a minor effort in the context of non-sticky, simple tasks, with "sequential, integration, and reciprocal interdependence, work hand-offs require intense communication, information sharing and work transfer activities" (Kumar et al., 2009, p. 655) and thus increase communication costs. Such work hand-offs occur frequently in the software development process, for instance among development teams between different phases of development (specifications to development) or within teams working on various aspects of a software module, such as the business process logic and the graphical user interface. In addition to these sticky forms of task interdependence, Kumar et al. also introduce a fifth form of interdependence, namely integration interdependence. Although this form of task interdependence was previously described in a case study Galbraith (1968) conducted in the manufacturing operation of Boeing's commercial plane division, it was neither clearly specified nor included in van de Ven's (1976) analysis of task interdependencies. Integration interdependence, also termed fit dependency by Malone et al. (1999, p. 429), describes a form of task interdependence "where multiple activities collectively produce a single resource". This form of interdependence is typically found in engineering R&D projects, such as aircraft manufacturing or software development, where different components or modules are developed in parallel and have to be integrated into a single product under one or a set of common constraints (such as the total liftoff weight of an aircraft, or the total cost of ownership or transaction performance in software development). Compared to pooled interdependence, where tasks are independent of each other, tasks with integration interdependence depend on each other to produce a finished product. Integration interdependence requires a fitting process to ensure that dependencies between tasks such as component development are identified and managed throughout the development process, thus ensuring fit in the integration of the finished product; in larger R&D projects this occurs through dedicated integration managers who identify and manage task interdependencies. Information and communication technology (ICT) is frequently used to support integration management, especially in large-scale R&D projects, as seen in the form of computer-aided design and simulation in the highly complex and collaborative design of the B-2 stealth bomber (Argyres, 1999), or through the creation of daily software builds such as those introduced by Microsoft in the Windows NT development (Microsoft, 1999). In this way, finished and partly finished program modules are combined on a daily basis to test their functionality and identify integration issues through automated testing of the daily build. Among the various forms of task interdependence in large-scale modern software development, integration interdependence can be considered the most distinctive. While all other forms of task interdependence can be found throughout the software development process, especially at operational levels, the modular design of contemporary software architectures requires a high level of integration management (the fit process) to ensure the production of high-quality software that integrates the various software modules and components into the final software product. The sticky nature of information transfer, due to the knowledge-intense nature and complexity of large software artifacts, requires that special attention be paid to work decomposition and subsequent work integration. Decomposing work by applying criteria that "slice" through encapsulated software modules has the potential to create a high number of sticky interfaces and thus result in high communication and coordination costs that can endanger the successful development and marketing of the software artifact. In collocated software development, the concept of stickiness applies to all forms of interdependence discussed above; while their impact on coordination and communication costs still exists, it can often be contained by the spatial vicinity of developers in a room, building or campus, which allows for rapid face-to-face communication to transfer information or resolve issues. To address stickiness in collocated software development, von Hippel (1994) suggests avoiding a work decomposition that creates sticky partitioning, or at least choosing a work decomposition that minimizes sticky partitioning, to achieve a work breakdown structure that incurs low coordination and communication costs. Considering the requirement for flexibility and change throughout the development process, von Hippel explains the value of an optimized work decomposition that reduces stickiness: "Changes introduced to task specifications after task work is under way can be costly because they often make what is already done valueless, and/or may degrade the solution ultimately arrived at, as project participants strive to "save" work already done by making suboptimal adaptations to change. I propose that the cost of such changes will be less, other things being equal, if tasks are arranged so as to reduce the problem-solving interdependence among them" (von Hippel, 1990, p. 409). Development teams use ICT to simulate or actually conduct near-instant integration and manage such integration interdependence between development tasks. ICT can also be used to reduce the overall stickiness of tasks by decreasing the costs of transferring information, such as through the provision of central knowledge repositories, collaborative tools or communication infrastructure to exchange required information efficiently. Enterprises can also invest in the management and enablement of residual sticky information (von Hippel, 1994), such as through the codification of tacit knowledge9 (Nonaka, 1991), to further reduce the stickiness of tasks.
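As a minimal sketch of this cost logic, assuming purely illustrative numbers, the model below treats each sticky interface created by a work decomposition as an incremental information transfer expenditure that codification or ICT support can reduce.

```python
def coordination_cost(interfaces: int,
                      stickiness_per_interface: float,
                      codification_discount: float = 0.0) -> float:
    """Total incremental transfer cost of a given work decomposition.

    codification_discount: assumed fraction of stickiness removed by
    codifying tacit knowledge or by better ICT support (0.0 to 1.0).
    """
    return interfaces * stickiness_per_interface * (1.0 - codification_discount)

# Comparing two decompositions of the same product under assumed values:
print(coordination_cost(interfaces=12, stickiness_per_interface=5.0))  # "slicing" through modules
print(coordination_cost(interfaces=4, stickiness_per_interface=5.0))   # module-aligned decomposition
print(coordination_cost(interfaces=4, stickiness_per_interface=5.0,
                        codification_discount=0.3))                    # plus codification/ICT investment
```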
Further Work Design Contingencies in Collocated Software Development
In addition to the various forms of task interdependence, software development is also highly susceptible to additional contingencies requiring consideration in work design (Fenema, 2002). The first contingency is uncertainty, a ubiquitous feature of software development seen, for example, in the form of uncertain customer requirements or the uncertain feasibility of architectures or new technologies. Work decomposition and integration in software development can thus no longer be deterministically planned and executed, as such detailed plans would require frequent revision throughout the development process. Software architectures, or the work breakdown structure in software development, thus cannot be set out in full detail down to the level of lines of code (comparable to a hand movement in classic work design); the developer must be left with considerable freedom as to how to develop a particular function. While task interdependencies change throughout software projects (Hoegl & Weinkauf, 2005), their management is especially important in the early phases of software development (such as the concept and development phases), where uncertainty is high and every aspect of the project is still in flux, with frequent changes in the nature of the work. As product architectures stabilize and the project progresses, software development becomes more execution-driven, and while less team interface management is required, more project structuring and support is needed (see Figure 20).
9 This suggestion made by Nonaka is the subject of controversy in academic circles, as many scholars object to the idea of the codification of tacit knowledge, which by definition is non-codifiable. However, Nonaka outlines approaches taken by Japanese companies that invest in this transformation.
Figure 20: An illustration of the relative importance of team interface management and project structuring and support during the concept and the development phases (Hoegl & Weinkauf, 2005)
Uncertainty increases the need for information processing, which results in coordination modes shifting from rules and standards to hierarchical communication and goal setting (Galbraith, 1973). Software developers therefore increasingly default to personal and informal means of coordination to share information as uncertainty increases (Kraut & Streeter, 1995). Addressing the need for increased information processing through personal coordination, often in the form of face-to-face meetings, can however create a considerable burden, especially in large-scale software development projects. Other methods of addressing the need for additional information processing include the provision of slack resources, enhancing information processing capacity through investments in information systems, and creating interdepartmental communication links (Galbraith, 1973). While providing additional resources may not always be a feasible option, as it tends to increase complexity and project duration, especially in the late phases of a development project (Brooks, 1995), contemporary software development makes extensive use of ICT to achieve increased interdepartmental alignment, such as in the form of computer-aided software engineering (CASE) tools for collaboration and engineering change management. Complexity is the second contingency to be considered in collocated software development. It results from the differentiation of large tasks and the assignment of subtasks to multiple interrelated units (McCann & Galbraith, 1981). Complexity refers to the number of elements connected and the number of relationships established (Haeckel & Nolan, 1993); as size increases the number of elements and relationships, it leads to an overall increase in complexity, which typically increases communication and coordination costs. The third contingency, differentiation, refers to the functional diversity of the workforce through different backgrounds and experiences that enable staff to approach problems from diverse angles. Functional differentiation increases the need for information processing, as developers with different experiences and backgrounds need to communicate with each other to establish a common basis of understanding, which makes informal coordination modes such as interpersonal coordination more important than formal rules (Lawrence & Lorsch, 1986).
Work Integration in Collocated Software Development
Similar to the work integration strategies of classic work design, the starting point of work integration is the work breakdown structure created through previous task analysis and work decomposition. The work breakdown structure in contemporary software development is the modular software architecture (see Chapter 2). Modular architectures employ the software engineering principles of cohesion and coupling to achieve a product breakdown structure that clusters interactions (see Figure 21) and communications, and thus provides a structure with lower coordination and communication costs than a non-modular, integrated structure. Applying the principle of loose coupling makes sub-modules more or less independent of each other and provides the flexibility to change parts of the software system over time, with changes then limited to one or a few modules rather than requiring the complete software system to be rewritten.
Figure 21: Independent clustering of materials interactions for climate control system (Pimmler & Eppinger, 1994)
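In the spirit of the clustering shown in Figure 21, the sketch below evaluates a candidate modularization against a design structure matrix (DSM) by summing the interactions that cross module boundaries; the matrix and clusters are assumed sample data, not values from Pimmler and Eppinger's study.

```python
# Interaction strength between components A..D (symmetric, 0 = none).
dsm = {
    ("A", "B"): 2, ("A", "C"): 0, ("A", "D"): 1,
    ("B", "C"): 0, ("B", "D"): 0, ("C", "D"): 2,
}
clusters = [{"A", "B"}, {"C", "D"}]  # a candidate module assignment

def cross_cluster_coupling(dsm, clusters):
    """Sum of interaction strengths that cross module boundaries."""
    def same_cluster(x, y):
        return any(x in c and y in c for c in clusters)
    return sum(w for (x, y), w in dsm.items() if not same_cluster(x, y))

# Lower values indicate fewer sticky interfaces between teams:
print(cross_cluster_coupling(dsm, clusters))  # -> 1 (only A-D crosses)
```

A decomposition that minimizes this cross-cluster coupling keeps most communication inside module teams, which is precisely the work design goal described above.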
Modularization is thus the leading principle of both work decomposition and work integration in collocated software development. The uncertainty prevalent in the development process leads to a coarser work decomposition in which "chunks of work" (Pimmler & Eppinger, 1994) are assigned to teams, rather than individual tasks being assigned to individual workers as seen in classic work design. Modular product designs and modular software architectures in software development lead to a modular organization in product development (Sanchez & Mahoney, 1996). Using software architectures as a design prescription for work design and organizational design in software development is a concept suggested many years ago (Berg, 1975; Conway, 1968; Parnas, 1972). While it seems obvious to integrate work based on the appropriate software architecture, several limitations of this approach need to be considered (Bass et al., 2003). High-quality software architectures require considerable effort to develop, and they need software architects with considerable architectural knowledge, which is typically built up through long experience. Such experience is required in both work decomposition and work integration, as experienced software architects already consider work integration and its feasibility in the early stages of work decomposition for software artifacts. More importantly, unlike product architectures for material goods, where the physical properties of the final product and the dependencies of sub-modules can be precisely defined as product architectures are created, product architectures in the software industry, which is subject to rapid technological change, are less stable and keep changing over time, making frequent adjustments of organizational structures inevitable. Another limitation of using software architectures as design prescriptions for organizational design is the "locking in" or "Taylorism" of software development, which limits knowledge transfer and innovation by focusing development resources on a particular development area or functional module rather than allowing frequent changes in assignments that give developers the opportunity to build up a more diverse skill set over time. The development methodology selected also has a profound impact on work design and on how work is decomposed and integrated (Kraut & Streeter, 1995). While the traditional waterfall approach, which assumes stable development requirements and a sequential step-by-step process, partitions work according to development phases to produce functional software only at the end of the development process, agile development methods partition work in such a way that fully functional software is produced at the end of every development cycle iteration (Highsmith & Cockburn, 2001).
3.1.1.3. Work Design in the Context of Globally Distributed Software Development with Static Work Division
The phenomenon commonly described as globalization was triggered by technological improvements, especially in communication technology and transportation, and by policy changes that accelerated the development of previously underdeveloped countries (Ghemawat, 2007). The euphoria of globalization that many multinational companies experienced in recent years promised the advent of a borderless world of global economic activity and exchange, leading, as some claimed, to the death of distance (Cairncross, 1997), a flat world (Friedman, 2006) and the advent of the boundaryless organization (Ashkenas, 2002; Picot et al., 1996). This led to the expectation of operating in a frictionless world where the perceived benefits of globalization could be fully realized. Despite this promise, today's reality is that "most types of economic activity that can be conducted either within or across borders are still quite localized by country" (Ghemawat, 2007), and the term semi-globalization would thus be more appropriate to describe the current state, in which frictions still exist through borders and barriers. This coincides with the observations of Florida, who refers to the current status as a "spiky world" in which substantial differences in economic activity exist between and inside countries (Florida, 2005). As described in Section 2.5, the main benefits of globalization in the software industry have been access to highly skilled software engineers, often combined with substantial cost benefits through the allocation of R&D activities to developing countries, and access to "markets of the future", rapidly developing countries such as the so-called BRIC states of Brazil, Russia, India and China. Most organizations assess the benefits of a global distribution of work on a strategic level through traditional business cases including risk/benefit analysis, required investments, estimated cost benefits, available subsidies and the regulatory framework in foreign countries (Lewin & Peeters, 2006). In these business cases, however, benefits are expected to be fully realized, and the considerable impact of distance is underestimated, mainly, as Ghemawat (2007, p. 39) points out, due to the lack of a common analysis framework for strategic decision-making: "The evidence just presented suggests that distance effects can be huge. So let's look at existing tools for "country analysis"–for example, the kinds of diligence that the company would conduct before deciding to set up shop in a new country–and see how well they account for the effects of distance. The answer, basically, is that they don't!" To address this shortcoming, Ghemawat (2007) developed the CAGE framework, in which factors relevant to distance are assessed on both a country and an industry level to apply discount factors to the estimated gross benefits of globally dispersed work. This results in the calculation of more realistic net benefits when globally dispersing work. Ghemawat's study is of special importance, as this thesis builds a framework that supports decision-making for the allocation of development resources; its importance to this study cannot, therefore, be over-emphasized. The CAGE framework supports decision makers in identifying four distinct distances that should be considered when globally distributing work. While most applied measures of distance are multilateral, such factors should be bilateral, comparing, for example, the home country with a foreign country to assess the negative or even positive impact of such differences on benefit realization. In addition to the effects of geographic distance, Ghemawat added cultural distance, administrative distance and economic distance to his CAGE framework to facilitate a focused bilateral comparison of countries (see Figure 22).
Figure 22: The CAGE framework at the country level (Ghemawat, 2007)
The first type of distance measured in the CAGE model is cultural distance, which has the potential to diminish economic exchange between countries due to differences in religion, a lack of trust or variations in egalitarianism. Similarly, administrative distance in the form of differences in laws and governmental institutions can negatively impact a calculated gross benefit, especially when laws change frequently, as is typical in transition economies such as the BRIC countries. Weak governments unable to fully enforce laws and policies can especially dampen bilateral economic activity. Administrative distance can, however, be reduced through trading blocs such as the North American Free Trade Agreement (NAFTA) zone, the European Union (EU) and the Association of Southeast Asian Nations (ASEAN), all of which create a common political and economic framework to foster cross-border economic activity and thus reduce administrative distance. Third, geographic distance, the most obvious type of distance, refers to differences in time zones, climate, access to the ocean, topography, and transportation and communication infrastructure. The most obvious impact of geographic distance is to increase transportation costs. However, Ghemawat points out that foreign direct investment decreases with increasing distance not only through higher transportation costs, but also via steeper communication costs, i.e., through travel costs and travel time. Fourth, economic distance in the CAGE framework refers to the economic wealth (GDP) and size of an economy, factor costs for labor, and skill and education differences between employees. Employing the CAGE framework to compare a potential foreign location with the home country of an organization yields an initial understanding and assessment of the potential impacts of that foreign location on benefit realization. In addition to country-specific distance, more specific information can be obtained in a second step by applying the CAGE framework at the industry level (see Figure 23). The global software industry provides a good setting for this framework, as certain factors, such as the favorable economic distance created by the lower costs of skilled labor in India and the low cultural distance resulting from widespread English language skills there, positively offset others such as administrative and geographic distance.
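To make the discounting logic concrete, the following minimal sketch shows how bilateral distance scores and industry sensitivity weights might be combined into discount factors applied to an estimated gross benefit. All numbers, the 0-1 rating scales and the multiplicative functional form are illustrative assumptions, not Ghemawat's own operationalization:

```python
# Illustrative CAGE-style discounting (all values and the functional form
# are assumptions; distance and sensitivity are rated on a 0-1 scale).

def net_benefit(gross_benefit, distances, sensitivities):
    """Discount a gross benefit once per CAGE dimension.

    distances:     dimension -> bilateral distance score (0 = none, 1 = maximal)
    sensitivities: dimension -> industry sensitivity to that dimension
    """
    discount = 1.0
    for dimension, score in distances.items():
        # A fully distant, fully sensitive dimension halves the benefit here;
        # the factor 0.5 is an arbitrary illustrative choice.
        discount *= 1.0 - 0.5 * sensitivities[dimension] * score
    return gross_benefit * discount

# Hypothetical home-country-to-India comparison for software R&D.
distances = {"cultural": 0.4, "administrative": 0.5, "geographic": 0.8, "economic": 0.7}
sensitivities = {"cultural": 0.6, "administrative": 0.4, "geographic": 0.3, "economic": 0.2}

print(round(net_benefit(100.0, distances, sensitivities), 1))  # ~64.8 instead of 100
```

A fuller treatment would also allow favorable distances, such as labor cost differentials, to increase rather than discount the estimated benefit.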
Figure 23: The CAGE framework at the industry level: correlates of sensitivity (Ghemawat, 2007)
Cultural distance matters to a considerable extent in the software industry, as differences in language comprehension, especially of the industry’s global standard language, English, negatively impact software development projects due to the high knowledge content of software artifacts and the need for ongoing coordination and communication between countries during the software development process. Administrative distance can occur in the form of weak intellectual property protection legislation, which is critical for the long-term success and survival of software companies. Trade barriers and protectionism can hamper the success of global software companies in foreign countries. This is the case in China, where the central government has a strong desire to build up national champions in the software industry through preferential treatment, posing considerable challenges to global software companies. While direct production-related transportation costs such as those incurred for raw materials can be neglected in the global software industry, geographic distance nevertheless increases information costs through the travel time and costs of managers and developers traveling to face-to-face meetings. Economic distance in the software industry impacts both supply- and demand-side costs. On the supply side, factor costs, which in the global software industry consist mostly of personnel expenses for highly skilled software developers, can be up to six times lower in developing countries than in industrialized countries. These substantial cost benefits provide a strong incentive to disperse work globally to developing countries despite
the negative impacts of geographic and administrative distance, which are often outweighed by such labor cost arbitrage. On the demand side of the software business, economic distance affects the addressable market in terms of sales margins and volumes, as these are typically associated with the per capita income of a foreign country. While the CAGE framework can be applied both internationally and intra-nationally to compare regional or locational differences in larger countries (such as the BRIC countries) or particular regional markets against each other, Ghemawat does not explicitly suggest applying the framework at a lower level, such as between cities. However, it should be possible to compare individual locations (cities) against each other to support allocation decisions, as some countries exhibit huge regional or city-level differences. Greater granularity at the location level is thus required.
Work Design in Globally Distributed Software Development
After the opportunities and challenges of globally dispersed work have been reviewed in a thorough strategic analysis and the general decision to disperse activities has been made, the decision still has to be executed on an operational level by development managers and their support staff (finance, control, human resources, ICT, etc.), who analyze and decide on the details of the work design. As previously described, work decomposition in software development is mainly predetermined by the software architecture of the product to be developed. However, strategic aspirations to reap the benefits of globally dispersing work, such as those obtained through labor arbitrage, can strongly influence the minutiae of how tasks should be decomposed and allocated among locations, and how they are finally integrated. Efforts to translate these strategic aspirations into successful operational steps and work designs repeatedly suffer from a lack of prescriptive top-down corporate strategies, managerial experience and processes, so that the perceived benefits of globalization on a strategic level are offset by friction and diseconomies on an operational level (Kumar et al., 2009). As globally distributed work is a new phenomenon, effective management practices must be built up through learning by doing and trial and error. Operational implementation thus resembles random experiments and improvisations more than planned activities (Lewin & Peeters, 2006). Experience plays a paramount role in effective work decomposition and integration (Berg,
1975); a lack of such experience in the context of globally distributed work can lead to a dysfunctional or ineffective work design that causes longer development cycles in multi-location development than those seen in collocated development (Herbsleb, 2003; Herbsleb et al., 2000), and can even result in failure considering the dynamic market environment in which software companies operate today. The major difference in work design between globally distributed software development and collocated development is clearly the distance between developers and teams. In the context of this thesis, distance should be understood as incorporating the four types of distance outlined in the CAGE framework that can increase or reduce the benefits of globally dispersed work (Ghemawat, 2007). These distances change the characteristics of the task interdependencies previously discussed. When work is decomposed and globally dispersed to multiple locations, the task interdependencies of the distributed work tasks govern how teams in different locations work and communicate with each other and how coordination between the teams is achieved. With more than one work location involved in software development, the locus of problem solving needs to move iteratively among these locations, whether physically or virtually, and as software development involves highly sticky content, this results in increased information and coordination costs compared to collocated software development (von Hippel, 1994). Stickiness increases further with the four distances:
1) Cultural distance between development teams in different locations may impede the establishment of a common understanding of the issues at hand, problem-solving approaches and issue resolution;
2) Geographic distance impedes the informal meetings and direct observation of task performance possible in a collocated setup, so that frequent status updates via email, long-distance calls or video conferences suddenly become necessary (Kumar et al., 2009);
3) Administrative distance may take the form of different visa regulations or statutory HR policies that limit flexibility in arranging face-to-face meetings between collaborating team members, especially at short notice;
4) Considering the high cost differential between developing and industrialized countries, economic distance between team members can create job loss fears among highly paid software developers who train lower paid colleagues (Ebert, 2006). Such fears can increase stickiness to a
considerable extent, as developers from industrialized countries may resist transferring their knowledge as a means of retaining their jobs.
[Figure 24 depicts five distributed work environments, covering pooled, sequential, reciprocal, intense and integration interdependence, showing how work enters and leaves the units involved and how the transferred content ranges from partially sticky to fully sticky.]
Figure 24: Distributed work environments and sticky task interdependencies
The greater stickiness of problem-solving information caused by the four distances in globally distributed software development increases the effort required to transfer such information. Effective task partitioning to reduce problem-solving interdependence (von Hippel, 1990) and contain cost increases becomes even more important in a globally distributed context. The high communication requirements of software development (Allen & Henn, 2007; Grinter, Herbsleb & Perry, 1999; Herbsleb, 2003) make it necessary to create a coherent global work design. In such a work design, work items are allocated to a particular location in a coherent and collocated manner to avoid splitting encapsulated modules, thereby reducing stickiness and thus information and coordination costs (Ebert, 2006). Ebert (2006) suggests splitting globally distributed software development work according to feature content such as defined customer requirements, “which allows sending a team that can implement a set of related functionality – as opposed to artificial architecture splits”. In short, development teams that work on a coherent task should not be split across locations.
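The intuition behind this rule can be made tangible with a toy calculation. The sketch below is a deliberately simplified illustration rather than a validated cost model: the task names, interdependency weights and the cross-site multiplier standing in for the combined CAGE distances are all invented. It merely shows why cutting a tightly coupled task pair across locations is costlier than cutting a loosely coupled one:

```python
# Toy model: coordination cost of a work allocation given pairwise task
# interdependencies (all weights, names and the multiplier are invented).

interdependence = {
    ("ui", "api"): 3.0,  # tightly coupled, e.g. reciprocal interdependence
    ("api", "db"): 2.0,
    ("ui", "db"): 0.5,   # loosely coupled, e.g. pooled interdependence
}

# Stickiness multiplier applied when an interdependent pair is split across
# locations, loosely standing in for the combined effect of the four distances.
CROSS_SITE_MULTIPLIER = 4.0

def coordination_cost(allocation):
    """allocation: task -> location; returns the total coordination cost."""
    cost = 0.0
    for (a, b), weight in interdependence.items():
        split = allocation[a] != allocation[b]
        cost += weight * (CROSS_SITE_MULTIPLIER if split else 1.0)
    return cost

# Splitting the tightly coupled ui/api pair costs more than splitting api/db.
print(coordination_cost({"ui": "DE", "api": "IN", "db": "IN"}))  # 16.0
print(coordination_cost({"ui": "DE", "api": "DE", "db": "IN"}))  # 13.0
```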
3.1.2. Organization Design
After the work design process has been completed, larger organizational structures can be defined to create the framework in which work will occur. Galbraith defines organization design as “the deliberate process of configuring structures,
processes, reward systems and people practices and policies to create an effective organization capable of achieving the business strategy.” (Galbraith et al., 2002, p. 2). In most cases, however, organizational design does not represent a new creation from scratch, but rather a redesign of existing structures and processes. Therefore, the term “organizational design” is used in this thesis synonymously with organizational redesign. The reasons for undertaking organizational design are plentiful; it is often prompted by a misalignment among the five elements of strategy, structure, processes, reward systems and people practices (see Figure 25).
[Figure 25 depicts Galbraith’s star model of unaligned organizational design: if strategy is missing, unclear or not agreed upon, the result is confusion (no common direction, people pulling in different directions, no criteria for decision making); if the structure is not aligned to the strategy, the result is friction (inability to mobilize resources, ineffective execution, lost opportunity for competitive advantage); if the development of coordinating mechanisms is left to chance, the result is gridlock (lack of collaboration across boundaries, long decision and innovation cycle times, difficulty sharing information and leveraging best practices); if the metrics and rewards do not support the goals, the result is internal competition (wrong results, diffused energy, low standards, frustration and turnover); and if people are not enabled and empowered, the result is low performance (efforts without results, low employee satisfaction).]
Figure 25: Unaligned organizational design (Galbraith et al., 2002, p. 5)
Remedying misalignments of organizational design is critical, as empirically grounded contingency theory posits a correlation between various contingencies, such as strategy (Chandler, 1990) or environment (Lawrence & Lorsch, 1986), and organizational effectiveness (Donaldson, 2001). The creation and adjustment of structural features to improve organizational effectiveness (compare Chapter 3.1.4) has been the main focus of organizational design since its inception. More recently, however, the focus has shifted to behavioral features (Schreyögg & Sydow, 2010) such as the self-designing organization or the learning organization, which adjust organizations to today’s high-velocity environments. These behavioral concepts of organizational design will be further discussed below
in Chapters 3.1.7 and 3.1.8. To deal with the multidimensional complexity of modern organizations, organization design requires a structured approach. According to Chandler (1990), structure follows strategy. Galbraith et al. (2002) therefore suggest a generic four-phase approach (see Figure 26) that starts from the current strategy to determine the design framework, conducts a high-level design, works out the details and implements the design.
[Figure 26 lays out the four phases and their guiding questions: I. Determining the design framework (leader and executive team), covering strategy (what organizational capabilities do we need to deliver on the strategy?) and a current state assessment (what is the gap between where we are and where we want to go?); II. Designing the organization (leadership team), covering structure (what structure and organizational roles meet our strategic design criteria?), process and lateral capability (how will work get coordinated and integrated across business units?), reward systems (how do we measure and reward performance at an individual, team and organizational level?) and people (how do we select and deploy people into new roles, manage their performance and support their development?); III. Developing the details (steering committee and workgroups): what are the details, and how do all the pieces work together?; IV. Implementing the new design (whole organization): how are we going to make the transition?]
Figure 26: Four phases of organization design (Galbraith et al., 2002, p. 10)
Galbraith suggests a participatory approach in which affected stakeholders are invited to take part in the design process to ensure they accept the final solution, a practice widely proposed in the context of organizational change and transformation (see also section 3.1.7) and also applied in this thesis.
3.1.3. Organization Design of a Global R&D Organization
Gerpott (1991) notes that many MNCs globally disperse their R&D organizations opportunistically to capitalize on ad hoc opportunities rather than deriving requirements from business strategy. Task duplications, task overlaps, the lack of a research focus and the subcritical mass of foreign R&D units are thus frequently observed phenomena in the R&D organizations of MNCs (Gerpott, 1991, pp. 53-54). He suggests that a structured organization design process be adopted to
develop a concept for a global R&D organization (see Figure 27).
Figure 27: Methodical framework for the design of a global R&D organization (Gerpott, 1991, p. 61)
The model focuses on the core task of R&D, the acquisition of new technologies, and the best research setup through which to acquire them. The starting point in developing a global R&D organization concept is to assess the MNC’s R&D strategy and the technologies to be researched or acquired (Boxes 1 and 2 in Figure 27). Gerpott suggests establishing new R&D units in foreign countries to acquire technologies that are attractive and will be important in the future in areas in which the MNC has a relatively strong technology position. For less attractive technologies, or areas in which the MNC has a weak technology position, he suggests other forms of cooperation such as joint ventures, the acquisition of patents and licenses, or an extension of home country research capacity. The selection of R&D locations depends on the required alignment with external sources, which in Gerpott’s model is determined by the maturity of the researched technology and proximity to customers (Box 3 in Figure 27). Immature technologies require proximity to research clusters such as universities or high-tech parks and, with the increasing importance of customer interaction, proximity to innovative users of such new technologies. More mature technologies can be researched close to existing production facilities, as often observed in the chemical and pharmaceutical industries, and, with the increasing importance of customer interaction, R&D can occur close to key markets for such technologies. As Chiesa (1995, p. 27) notes,
“The selection of a location is a key issue. Locational decisions are the result of multidimensional cost/benefit analyses, matching R&D-related and non-R&D-related factors. Locational decisions can be explicit or implicit. As a matter of fact, designing a global R&D structure may require placing new labs abroad (explicit locational decision) but may also call for existing units to evolve, changing their mission, area of activity and role over time, and exploiting the specific resources that develop locally at each subsidiary for global benefit (implicit locational decision)”. Other side restrictions that need to be considered in the development of a global R&D organization are the existing global R&D location portfolio, the positive and negative impacts of globalization, the MNC’s location strategy for geographical focus areas, and the overall requirements of coordinating and managing the global R&D network (Boxes 5-7 in Figure 27). Once these side restrictions have been analyzed and incorporated into the concept, it can be implemented accordingly. Here, in a manner similar to Galbraith (1973), Gerpott (1991) suggests a participatory approach in which all managers and employees concerned are involved in planning and implementation to ensure acceptance of the concept and its implementation. Although Gerpott’s model provides an actionable framework for the design and implementation of a global R&D organization, it has several shortcomings that need to be addressed in actual implementation. First, the model focuses on technology acquisition as the key strategic lever of R&D globalization, whereas other strategic considerations (see Chapter 2.5) such as access to talented resources or high-growth markets are not considered or are considered only partially. This is no surprise, as Gerpott formulated his concept in 1991, before the major shift of R&D activities to India and China commenced. Second, the importance of internal alignments and assessments, not only in the implementation phase but especially in the planning phase, is not entirely reflected in his model. Because R&D globalization concerns the entire MNC, a cross-functional approach is required that considers all relevant perspectives (i.e. legal, facility management, finance, human resources, IT, etc.) to avoid costly surprises or the failure of the newly founded R&D unit. Although Gerpott’s model certainly has shortcomings for design in today’s high-velocity world, it still offers a solid starting point for the design of a global R&D organization if these shortcomings are addressed in actual implementation.
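Read as a decision rule, Boxes 1 and 2 of Gerpott's framework reduce to a mapping from technology attractiveness and relative technology position to an acquisition mode. The following sketch paraphrases that logic in code; the boolean inputs and the mode labels are illustrative simplifications, not Gerpott's own operationalization:

```python
# Hedged paraphrase of the acquisition-mode logic in Gerpott's framework
# (boolean inputs and mode labels are simplifying assumptions).

def acquisition_mode(technology_attractive, strong_technology_position):
    if technology_attractive and strong_technology_position:
        # Attractive technology plus a relatively strong own position:
        # research it in-house, potentially via a new foreign R&D unit.
        return "establish or extend own (possibly foreign) R&D unit"
    # Less attractive technology or a weak position: cooperate instead.
    return "joint venture, acquire patents/licenses, or extend home capacity"

print(acquisition_mode(True, True))
print(acquisition_mode(True, False))
```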
Summary – Division and Integration of R&D Activities
Sections 3.1.1 to 3.1.3 review the literature on organizational theory, work design and organizational design to explain how large R&D activities are effectively divided and integrated, taking their structural requirements into consideration. The review of the work design literature is grouped into three categories representing the evolutionary progression from classic collocated work design in the industrial age to collocated software development and, finally, work design theories for globally distributed software development. Work design refers to “how work is conceived in broad terms, translated across organizational levels, and structured for the units and the individuals who perform the work” (Torraco, 2005, p. 85). Work design entails two major activities: work decomposition, the partitioning of work into smaller tasks, and work integration, in which tasks are grouped and assigned to organizational units. Several authors have provided criteria for the decomposition of work and the generation of a work breakdown structure in an industrial context. Brauchler and Landau (1998) propose that an initial task analysis be undertaken as the basis for task-based decomposition, while Kosiol (1962, 1978) includes, in addition to task properties, characteristics of the surrounding environment. While the division of labor increases overall work productivity, task partitioning through work decomposition creates task interdependencies (Crowston, 1997) that require coordination and control, which in turn offset initial productivity gains (Malone et al., 1999). The review of Thompson’s (1967) work provides the basic taxonomy of task interdependencies, comprising pooled, sequential and reciprocal task interdependencies, a taxonomy later extended by van de Ven, Delbecq and Koenig (1976) through the inclusion of team interdependence, and subsequently by Kumar et al. (2009) to include integration interdependence. Task interdependencies represent one of the key contingencies in co-located work design; they determine the required modes of coordination (Mintzberg, Raisinghani & Théorêt, 1976) and thus total “coordination costs, the costs of coordinating decisions and operations among economic activities in order to improve resource efficiency” (Clemons & Row, 1992, p. 13). The more complex task interdependencies are, the higher the required levels of continuous inter-actor awareness, communication, information processing, mutual knowledge, trust and mutual adjustment (Kumar et al., 2009), and thus the higher the coordination costs. After a work breakdown structure has been generated and task interdependencies have been identified, the second step in the work design process is the integration or synthesis of tasks and their assignment to organizational
units. Kosiol (1962) defines six criteria according to which task synthesis occurs: similar actions, objects, phases, fixed asset utilization, employees and locations. The grouping of tasks is typically based on the work breakdown structure, the ranks of tasks, and the type of task (core/support task) (Kosiol, 1962). However, the work design framework proposed by Kosiol imposes considerable limitations, as the costs of a particular work design can only be assessed on an ex post basis, and decision makers require considerable experience to arrive at a cost-effective, feasible plan for the decomposition and integration of work (Berg, 1975). In the context of co-located software development, work design also applies the basic principles of decomposition and integration. Software development is not, however, the execution of sequential manufacturing tasks, as previously assumed, but a knowledge-intense design task (Jackson, 1998) with inherently incomplete work breakdown structures (Stabell & Fjeldstad, 1998; Thompson, 1967) due to the iterative problem solving it involves. This iterative, collaborative problem solving, together with the uncertainty, complexity and differentiation of the workforce (Fenema, 2002), typically creates tasks with a high degree of interdependency, as seen in reciprocal interdependencies that cause high coordination costs. In this collaborative and highly knowledge-intense context, senior developers are similar to senior craftsmen in that they possess a considerable amount of tacit knowledge (Polanyi, 1966) acquired through prior experience. This results in variations in productivity and quality among software developers of up to orders of magnitude (DeMarco & Lister, 1987; Sackman et al., 1968), and underlines the importance of effective communication as a key success factor in R&D teams in general (Allen & Cohen, 1969; Allen & Henn, 2007; Berg, 1975; Galbraith, 1973) and in software development in particular (Amrit, 2008; Lang, 2004). The transfer of information between collaborating parties entails costs (Teece, 1977), especially where tacit information is transferred. Von Hippel (1994, p. 430) refers to this phenomenon as “stickiness”, “an incremental expenditure that is required to transfer a unit of information to a specified locus in a form usable by a given information seeker”. The high coordination costs of the highly interdependent tasks software development involves, combined with a high degree of stickiness in R&D activities, have led software enterprises to develop primarily “at the locus of sticky information” (von Hippel, 1994, p. 432) – the headquarters of global software enterprises. Therefore, in addition to task interdependencies, the degree of stickiness needs to be considered as an
additional contingency in the work design of co-located software development. Work integration typically occurs on the basis of the work breakdown structure of the software artifact, which in modern software development is represented by modular software architectures. Scholars have long suggested using software architectures as design prescriptions for organizational structure (Berg, 1975; Conway, 1968; Parnas, 1972). Research has shown that modular product architectures lead over time to modular organizations (Sanchez & Mahoney, 1996). However, Bass et al. (2003) also point to limitations of this approach in cases where product architectures change frequently due to rapid technological advances, causing organizational adjustment costs, or where a strong customer or functional focus increases process breaks and inefficiencies. In globally dispersed software development, work design adds distance as an additional contingency. While the benefits of a global dispersion of work have often been overestimated (Cramton & Webber, 2005), the impact of distance has often been underestimated (Ghemawat, 2007), with dispersion often resembling random experiments (Lewin & Peeters, 2006) rather than planned actions. The perceived benefits of globalization on a strategic level are often offset by friction and diseconomies on the operational level due to a lack of prescriptive top-down corporate strategies, managerial experience or global dispersion processes (Kumar et al., 2009). Ghemawat’s (2007) CAGE framework provides an analytical structure that informs this inquiry about the considerable impacts exerted by cultural, administrative, geographic and economic distances on the global dispersion of work. Distances amplify the effects of high task interdependencies and high degrees of stickiness on the work design of globally distributed software development, leading Kumar et al. (2009) to propose that special, globally sticky forms of task interdependencies exist. Undertaking globally dispersed software development at multiple locations around the world requires that the locus of problem solving move iteratively among those locations, whether physically or virtually, as software development involves highly sticky content (von Hippel, 1994). This results in increased information and coordination costs compared to collocated software development, and led Ebert (2006) to the proposition that architectural splits in global work design should be avoided wherever possible to ensure development teams working on coherent tasks are not split across locations. After the work design has been completed, larger organizational entities can be constructed. The review of the literature on organizational design frameworks shows
that the structured approach of Galbraith et al. (2002) provides a comprehensive, prescriptive framework applicable to this thesis. Organizational design rarely involves designing from scratch, but rather consists of an organizational redesign that changes the existing organizational structure. Redesigns often originate from misalignments among strategy, structure, processes, reward systems and human resource practices (Galbraith et al., 2002). It is critical that misalignments of organizational design be remedied, as empirically grounded contingency theory shows a correlation between various contingencies, such as strategy (Chandler, 1990) and environment (Lawrence & Lorsch, 1986), and organizational effectiveness (Donaldson, 2001). Despite the extensive review of the literature on global R&D organizational design and management undertaken here, global R&D organization design prescriptions and models that could inform this study remain scarce. Gerpott’s (1991) methodical framework is the only design process that provides a prescription for the design of a global R&D organization. As his model focuses on the acquisition of technology as the rationale for globally dispersed R&D and centers to a lesser extent on labor arbitrage or the efficient design of globally dispersed projects, his suggestions cannot be applied in their entirety and only partially inform this thesis. This completes the review of the literature on theories of the division and integration of R&D activities in the global software industry. The following sections of this chapter review network and allocation theories, as previously laid out in the research process illustrated in Figure 3.
3.1.4. Networks and the Network Organization
The literature review undertaken in Chapters 2 and 3, following the review process exhibited in Figure 3, yields a comprehensive understanding of R&D activities in the global software industry (sections 2.1 to 2.5), of how these R&D activities are effectively divided and integrated taking account of their structural requirements (sections 3.1.1 to 3.1.3), and of the strategies and processes employed in the allocation of tasks among global organizational units (section 3.2.2). The entity examined by this thesis is the global R&D network of an MNC in the software industry. Before design prescriptions can be generated, it is critical to understand the characteristics of networks and their importance in the social sciences. This section thus briefly reviews the literature on networks and network organizations before further inquiry into R&D networks and their improvement commences.
Network Topology
In its most basic form, a network is a set of fundamental items called vertices, or sometimes nodes, with connections between them called edges (see Figure 28).
Figure 28: A small network with eight vertices and ten edges (Newman, 2003)
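In code, such a network reduces to little more than a set of vertices and a set of edges. The minimal sketch below (the vertex labels and edge list are arbitrary, chosen only to match the eight vertices and ten edges of Figure 28) stores the network as an adjacency list and derives the degree of each vertex:

```python
# A small undirected network with eight vertices and ten edges,
# stored as an adjacency list (labels are arbitrary).
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 5),
         (4, 5), (4, 6), (5, 7), (6, 7), (7, 8)]

adjacency = {}
for a, b in edges:
    adjacency.setdefault(a, set()).add(b)
    adjacency.setdefault(b, set()).add(a)

# The degree of a vertex is the number of edges incident to it.
degrees = {vertex: len(neighbors) for vertex, neighbors in adjacency.items()}
print(degrees)
```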
Systems that can be represented in network form are found in various scientific disciplines and applications throughout the world (Newman, 2001; Newman, 2003): in the form of social networks such as friendship or business relationship networks, in information networks such as the World Wide Web or citations between academic papers, in technological networks including electricity or telecommunication networks, in biological networks like neural networks, metabolic networks and food webs, and in distribution networks such as blood vessels. Networks have been studied in the form of mathematical graph theory since Leonhard Euler’s famous Seven Bridges of Königsberg problem was stated in 1736¹⁰, where the parts of the city of Königsberg were represented as vertices (nodes) and the connecting bridges as edges.
Figure 29: Abstraction of the Königsberg Seven Bridges Problem to a network diagram (Newman et al., 2006)
10 The problem was to find a walk through the city that would cross each bridge once, and only once. The islands could not be reached by any route other than via the bridges, and every bridge had to be crossed completely every time. Euler showed that such a walk did not exist, and his negative resolution of the problem led to the first theorem of graph theory.
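Euler's negative result is easy to verify computationally: a connected graph admits a walk crossing every edge exactly once only if it has zero or two vertices of odd degree. The minimal sketch below (the land-mass labels A-D are arbitrary) applies this parity test to the seven bridges:

```python
# Parity test for an Euler walk on the Königsberg bridges. A connected graph
# has such a walk only if zero or two vertices have odd degree.
from collections import Counter

bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]  # four land masses, seven bridges

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [vertex for vertex, d in degree.items() if d % 2 == 1]
print(odd)                                         # all four vertices are odd
print("Euler walk possible:", len(odd) in (0, 2))  # False, as Euler showed
```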
While Euler studied a simple network of only four nodes (the parts of Königsberg) and seven edges (the bridges), recent advances in information storage and processing have enabled network theory researchers to study ever-larger networks such as the World Wide Web, social networks and telecommunication networks, some of which have billions of nodes and edges (Leskovec & Horvitz, 2008; Strogatz, 2001). For Newman et al. (2006b), this new ability to study real-life networks signals not only an opportunity to extend previous research, but a complete departure from the “old science of networks” to a “new science of networks” in which real-life networks are studied and perceived as dynamic and evolving according to dynamic rules, so that networks are ultimately not just topological objects, but also provide a framework upon which distributed dynamic systems are built (Newman, Barabási & Watts, 2006b; Watts, 2004). In addition to the sheer size of some networks, further complications often make it difficult to gain an understanding of networks (Strogatz, 2001, pp. 268-269):
(i) Structural complexity: the wiring diagram could be an intricate tangle;
(ii) Network evolution: the wiring diagram could change over time. On the World Wide Web, pages and links are created and lost every minute;
(iii) Connection diversity: the links between nodes could have different weights, directions and signs. Synapses in the nervous system can be strong or weak, inhibitory or excitatory;
(iv) Dynamical complexity: the nodes could be nonlinear dynamical systems. In a gene network or a Josephson junction array, the state of each node can vary across time in complicated ways;
(v) Node diversity: there could be many different kinds of nodes. The biochemical network that controls cell division in mammals consists of a bewildering variety of substrates and enzymes;
(vi) Meta-complications: various complications can influence each other. For example, the present layout of a power grid depends on how it has grown over the years, a case where network evolution (ii) affects topology (i). When coupled neurons fire together repeatedly, the connection between them is strengthened; this is the basis of memory and learning. Here nodal dynamics (iv) affect connection weights (iii).
This new perception of the dynamic nature of networks cannot be emphasized enough, as graphical representations of a network are often incapable of showing changes in the temporal dimension, i.e. network evolution (ii) or network dynamics (iv), which may lead to the false impression that a network is static in nature. More recent work has recognized that networks evolve over time (Barabási & Albert, 1999): “Many networks are the product of dynamic processes that add or remove vertices or edges over time to the existing network structure” (Newman et al., 2006b, p. 7). While the structure and properties of a simple network such as Euler’s are more obvious, analyzing the properties of large-scale real-life networks and characterizing their structure is inherently more complex. Strogatz (2001) indicates that the structure of a network affects its function; thus, an analysis of a network’s anatomy and properties provides an understanding of how it functions in terms of performance, robustness and stability, qualities particularly important for building and managing critical networks such as electricity or computer networks.
Networks in Social Sciences
Networks have been studied extensively in the social sciences since the early 1930s. Newman (2003, p. 174) points out that “of the academic disciplines, the social sciences have the longest history of the substantial quantitative study of real-world networks”. The study of friendships, relationships and other links between people has utilized much of the terminology and concepts provided by graph theory to analyze empirical data and address questions of status, influence, cohesiveness, social roles and identities in social networks (Newman et al., 2006). Perhaps the most recognized early study of social networks is Travers and Milgram’s (1969) famous “small world experiment”. Interestingly, their research problem did not originate from the scientific field, but surfaced in the form of
the short novel “Chains” by Frigyes Karinthy (1929), who claimed that the world was becoming smaller in social terms. Karinthy wrote that people were increasingly connected via their acquaintances, and that these acquaintances formed a dense web of friendship surrounding the earth in which everyone was only five acquaintances away from anybody else (Newman et al., 2006). Leskovec and Horvitz’s (2008) recent analysis of social networking data from the Microsoft MSN service, comprising over 30 billion conversations of 240 million users, showed an average path length of 6.6 among various other properties, indicating that “seven degrees of separation” would be a better approximation¹¹. Travers and Milgram’s (1969) study of a network structure and its properties not only laid the foundation for the new science of networks, but specifically established the basis for modern social networking. A strong indication of the importance of their groundbreaking investigation is the exponential growth in the number of publications related to social networks in sociological journals since their study was published in 1969 (Borgatti & Foster, 2003).
Organizational Networks
Considering the rapid changes in the global economy during recent decades, many companies, especially those in mature industries, have looked for new organizational models to address the challenges they face and to find a setup that offers more flexibility, reduced costs and faster time to market than previous hierarchical structures. An organizational structure is the result of a search for effective coordination of economic activity. “As environmental changes accumulate, existing organizational forms become less and less capable of meeting the demands placed on them. Managers begin to experiment with new approaches and eventually arrive at a more effective way of arranging and coordinating resources” (Miles & Snow, 1992). The network organization can be seen as a result of such experiments, and many authors believe it is the answer to the many challenges managers face in today’s globalized world (Sydow, 2010). Network organizations can be found in various areas of the modern business world, as seen in supply chain management in the form of supply and demand networks (William, 2007), and inter- and intra-company networks have been the focus of intense organizational research over the past three decades (Ahuja & Carley, 1998; Allee, 2000; Axelsson & Easton, 1992; Miles & Snow, 1992; Siebert, 1991a; Sydow, 2010).
11 A more recent study by Backstrom et al. that analyzes the interlinkages of 720 million Facebook users finds an average distance of 4.74 degrees of separation between members of this online community (Backstrom, Boldi, Rosa, Ugander & Vigna, 2011).
In traditional forms of organization, markets and hierarchies serve as the main organizational constructs, whereas networks can be seen as a hybrid organizational form of economic activity between them (see Figure 30). In a network organization, not all activities are tied into hierarchical control; rather, hierarchical control is augmented by market elements (Thorelli, 1984; Williamson, 1991).
Figure 30: Continuum of network structures (Siebert, 1991)
While this thesis is mainly concerned with how to manage networks, previous research has focused on the question of why enterprises establish network organizations, mainly drawing on economic and institutional-economic reasoning (Siebert, 1991a; Williamson, 1991). The goal of enterprise networks is to achieve a collective efficiency gain through a collective strategy for several formally independent enterprises and thus enhance the competitive position of each individual enterprise (Siebert, 1991). As every member of the network can specialize (division of labor) in the work it performs best, greater economies and efficiencies are derived through cooperation or a market setup, with the added benefit of flexibility. In the management science context, the concept of a network describes cooperation in and/or between autonomous organizations, companies or organizational units simultaneously embedded in a net of relationships (Sydow, 2010). Miles and Snow (1992, p. 64) differentiate between stable networks, in which an organization forms market-based linkages between upstream and downstream partners, internal networks, in which intra-organizational units utilize market mechanisms to allocate resources, and dynamic networks, in which independent business elements form temporary alliances for value creation. Dynamic networks are also described as virtual organizations (Mowshowitz, 1994), in which the dynamic assignment of satisfiers to abstract requirements
based on predefined assignment criteria ensures ongoing high efficacy along the value chain (Mowshowitz, 1997). Internal networks are created when previously hierarchical organizations introduce transfer pricing between organizational units to establish market mechanisms, thus creating an intra-organizational hybrid between hierarchical and market control. Despite their benefits in resource coordination and efficiency gains, network organizations are not immune to failure, as Miles and Snow (1992, p. 66) point out, but are prone to extension and modification failures: “Internal networks thus can fail from overextension, but they can fail perhaps even faster because of misguided modification. The most common managerial misstep in internal networks is corporate intervention in resource flows or in the determination of transaction prices. Not every interaction in the internal network can and should flow from locally determined supply and demand decisions. Corporate managers may well see a benefit in having internal units buy from a newly built or acquired component, even though its actual prices are above those of competitors in the marketplace. Such prices may be needed to sort out the operation and develop full efficiency. However, the manner in which corporate management handles such “forced” transactions is a crucial factor in the continuing health of the network.” Miles and Snow suggest that, rather than forcing transactions or simply trusting the “invisible hand” of internal market allocation, managers of internal networks should manage the internal economy and correct internal market mechanisms where appropriate; startup units, for example, may require internal “subsidies” to be competitive in internal resource and task allocation. This allows adjustments to be made where they are strategically required. The causes of potential network organization failures are of critical importance to this inquiry, as intended modifications of a global R&D network must take shortcomings in internal market mechanisms and managerial practice into account.
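This prescription, correcting the internal market rather than forcing transactions, can be caricatured in a few lines of code. In the sketch below, all unit names, prices and the subsidy rule are invented for illustration; tasks simply go to the unit with the lowest effective price after a strategic corporate subsidy keeps a startup unit competitive:

```python
# Toy internal market: each task goes to the unit with the lowest effective
# price; a corporate subsidy corrects the market for a startup unit
# (all names and values are invented for illustration).
internal_prices = {"hq_unit": 100.0, "mature_sub": 85.0, "startup_sub": 110.0}
subsidies = {"startup_sub": 30.0}  # strategic correction, not a forced transaction

def allocate(task):
    effective = {unit: price - subsidies.get(unit, 0.0)
                 for unit, price in internal_prices.items()}
    winner = min(effective, key=effective.get)
    return task, winner, effective[winner]

print(allocate("develop search module"))  # startup_sub wins at 80.0
```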
3.1.5. R&D Networks
As previously discussed, MNCs increasingly organize their global R&D organizations in the form of a network (see Chapter 1). A global R&D network in its most basic form can be depicted as shown in Figure 31, with an R&D unit in the corporation’s home country serving as a network node and supplemental R&D units in foreign countries acting as additional network nodes. Nodes represent research facilities in a location, their research and support staff, and available
infrastructure. Edges, which in global R&D networks represent various types of interlinkages such as research collaborations, reporting lines or communication, connect these nodes. These interlinkages can be uni- or bi-directional and vary in their intensity. In many current global R&D networks, the home country R&D unit still represents the largest node, as R&D globalization is a recent phenomenon.
Figure 31: Basic global R&D network model (Gerpott, 1991)
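Translated into data, such a network is naturally represented as a weighted, directed graph with attributed nodes. The sketch below is purely illustrative: the locations, headcounts and link intensities are fabricated, and the weighted degree computed at the end is just one simple indicator of how central a node is within the network:

```python
# A global R&D network as an attributed, weighted, directed graph
# (locations, headcounts and intensities are fabricated examples).
nodes = {
    "Walldorf":  {"headcount": 8000},  # home-country unit, the largest node
    "Bangalore": {"headcount": 4000},
    "Shanghai":  {"headcount": 1500},
    "Palo Alto": {"headcount": 1000},
}
edges = {  # (source, target): interlinkage intensity, e.g. collaboration
    ("Walldorf", "Bangalore"): 0.9,
    ("Bangalore", "Walldorf"): 0.7,
    ("Walldorf", "Shanghai"): 0.6,
    ("Palo Alto", "Walldorf"): 0.4,
}

for node in nodes:
    weighted_degree = sum(w for (s, t), w in edges.items() if node in (s, t))
    print(node, nodes[node]["headcount"], round(weighted_degree, 1))
```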
“R&D network” is an ambiguous term that refers to both inter- and intra-organizational networks, describing R&D collaboration among corporations in the case of inter-organizational networks and referring to a corporation’s own R&D units and their relationships in the case of intra-organizational R&D networks.
Inter-organizational R&D networks
The formation of inter-organizational R&D networks can offer substantial economic incentives, as it allows for the removal of horizontal competitive, technological, vertical and network externalities (Colombo, 1998) and fosters innovation (Chesbrough, 2006) to expand existing markets and create new ones. Inter-organizational R&D networks have been the subject of intense academic research in recent years, revealing several topological characteristics that König (2010, p. 17) subsumes in his work:
(i) Networks are sparse, that is, of all possible connections between firms, only a small subset is realized;
(ii) Networks are characterized by high clustering, meaning that the collaborating partners of a firm are likely to be connected among each other;
(iii) The distribution of links over firms tends to be highly heterogeneous, with only a few firms being connected to many others;
(iv) Highly connected firms form the core of the R&D network, while firms on the periphery are connected to this core by only a few links.
In his study, König deduces mathematically that inter-organizational R&D networks face the risk of becoming locked into inefficient network structures, as firms face a tradeoff between the benefits of new collaborations that extend the network and the costs thereof; high collaboration costs thus induce less collaboration and hence ineffective networks (König, 2010, p. 169). Through computer simulations, König was able to replicate network structures that resemble empirical R&D networks by assuming low costs of collaboration and high severance costs for deleting links in his model. More importantly, by incorporating environmental dynamics into his model and introducing link decay, as uncertain innovations change the profitability of some R&D collaborations in a rapidly changing technological environment, König achieved stationary networks that were “continually adapting their links to the changing environment” (König, 2010, p. 169), thus compensating for link decay. While inter-organizational R&D networks are not the focus of this study, and computer simulations only provide an abstraction of reality, these insights into efficient setups and reactions to dynamics inform the inquiry into intra-organizational R&D networks, especially the insight that networks tend to “lock in”.
Intra-organizational R&D networks
In addition to inter-organizational forms of R&D collaboration, many corporations also resort internally to network organizations as a means of organizing their global R&D. For such intra-organizational R&D networks, Doz et al. (2001) identify three basic functions, combined in a “metanational R&D process” (see Figure 32), that a global R&D network must perform to “leverage metanational innovation rather than execute home country orthodoxies [,] ensuring...successful R&D”. The first function is the idea-driven acquisition of new capabilities and market knowledge, where network nodes plug into local academic and technological communities to sense the world and learn locally embedded knowledge. These newly acquired capabilities and ideas are put into operation in the second function of the R&D network: the operations function. Here, the problem-solving
capabilities of researchers and the management practices of R&D managers drive the performance of actual R&D activity.
Figure 32: The metanational R&D process (Doz et al., 2001)
To ensure ideas and capabilities are brought together to capitalize on market opportunities, Doz et al. introduce a middle, mobilizing function that dynamically combines the other two functions in a manner similar to a magnet, attracting ideas acquired by the sensing units and research capabilities from the operations function of the R&D network. R&D networks thus require active management that ensures the continuous creation, (re-)alignment and deletion of idea-sensing and operating units. With its ongoing sensing of new ideas and seizing of opportunities, managing the transformation of an R&D network can be understood as a dynamic capability (Teece, 2009) critical to the long-term success of the network (see also section 3.2.4).
Summary – R&D Networks
Sections 3.1.4 and 3.1.5 review the literature on network theory, networks in the social sciences, organizational networks and R&D networks to identify the properties and beneficial characteristics of R&D networks relevant to this study. Networks refer to systems of interconnected nodes and are found in various scientific disciplines (Newman, 2001; Newman, 2003). Contemporary network research perceives networks not as static entities, but as frameworks
upon which distributed dynamic systems are built (Newman et al., 2006b; Watts, 2004). Several contingencies such as structural complexity, network evolution, connection diversity, dynamical complexity, node diversity and meta-complications often impede understanding of real-life networks (Strogatz, 2001). In organizational science, networks are seen as a hybrid form of coordination between the market and the hierarchy, in which not all activities are tied into hierarchical control, but are augmented by market elements (Thorelli, 1984; Williamson, 1991). A network organization can be highly effective, not only because network members can achieve greater economies of scale through specialization (Siebert, 1991), but also because the structure of the network and its participating nodes can change dynamically (Mowshowitz, 1997). Despite their merits, network organizations can fail, as Miles and Snow note, from overextension or corporate intervention in transfer pricing between network participants (Miles & Snow, 1992, p. 66); organizational networks thus require a “visible hand” to ensure the overall health of the network. Although R&D networks come in both intra- and inter-organizational forms, the focus of this thesis is the former. Doz et al.’s (2001, 2006) conceptualization of the metanational R&D process provides an accurate description of the R&D network in the enterprise under study, SAP, and is used in the context of this study. Most importantly, Doz et al. describe the mobilizing function of the R&D network as a dynamic capability, a notion of critical importance for improving the R&D organizational structure of the enterprise under study, as reviewed in the following sections.
3.1.6. R&D Network Improvements
The literature review in Chapters 2 and 3, following the review process exhibited in Figure 3, provides a comprehensive understanding of R&D activities in the global software industry (sections 2.1 to 2.5), of how these R&D activities are effectively divided and integrated considering their structural requirements (sections 3.1.1 to 3.1.3), and of the strategies and processes employed in the allocation of tasks among global organizational units (section 3.2.2). Sections 3.1.4 and 3.1.5 then review the literature concerning networks and their roles in the social sciences and organizational theory to obtain an understanding of how R&D networks can be characterized and the dynamics to which they are subject.
At this point, the dissertation has established a comprehensive understanding of R&D activities in the global R&D network organization context and their underlying theories; sections 3.1.6 to 3.1.8 thus go on to review theoretical models of R&D network efficiency to obtain design prescriptions for the intended improvements. Furthermore, theories of organizational transformation and organizational learning are reviewed to ensure the intended organizational improvements are effectively implemented and sustainable.
Review of R&D Network Models
A stronger focus on the effective allocation of resources and the consolidation of locations has become a key concern among the R&D managers of high-tech MNCs aiming to achieve greater R&D effectiveness and efficiency. Faced with the ever-increasing speed of the global economy and “jungle growth” in various R&D organizations, many MNCs see significant potential for improvement in their R&D networks with regard to speed, quality of innovation, and cost reductions in the innovation process (Booz, Allen & Hamilton, 2006). As Doz et al. (2001) note, R&D networks require active management to adapt to environmental dynamics and remain competitive. However, questions arise about which efficiency criteria to apply to global R&D networks when assessing their current efficiency or improving them to reach a higher level of efficiency. Improvement should be understood here as the use of an improvement process to increase the efficiency of the R&D network through structural or other changes according to defined efficiency criteria, giving special consideration to organizational and environmental side restrictions. Various models have been used to explain the global distribution of R&D. Pearce (1989) utilizes physical and decision-based models to explain global R&D allocation and define efficiency criteria:
Physical Efficiency (Chiesa, 1995, p. 21; Pearce, 1989, p. 38ff)
In the first model, the global R&D network is described through a physical specification in which R&D network nodes represent physical bodies subject to gravitational forces among them, with larger nodes creating stronger gravitational forces than smaller ones. In such a physical efficiency model, global R&D allocation is efficient if the gravitational forces of push and pull factors between an MNC’s headquarters and its subsidiaries are in equilibrium. In this model, push factors represent positive
factors of R&D globalization such as factor cost benefits and access to talented resources, whereas pull factors refer to negative factors such as increases in coordination and communication efforts. Network improvements would therefore be targeted at correcting the imbalance between pull and push factors and adjusting them accordingly. While this model provides a plausible explanation for empirical observations, it remains questionable whether an analogy to physics provides a sound foundation for building robust theory in the social sciences, and it is debatable whether this model can be operationalized to improve a global R&D network.
Decision Efficiency (Pearce, 1989)
Global R&D management requires that a multitude of decisions be taken. Some of these questions are: is an R&D unit required in a foreign country? If so, where should it be located? Which research projects should be carried out using which research resources in which locations? The decision model of R&D globalization thus treats efficient global allocation as a decision problem in which efficiency is achieved when individual R&D location decisions are made utilizing all available information. While decision theory provides this model with solid theoretical foundations via its large body of empirical evidence, it is obvious that the sum of efficient individual R&D decisions is not necessarily efficient overall. Furthermore, it needs to be considered that human beings are boundedly rational (Simon, 1982) and that information asymmetries exist that lead to principal-agent problems (Eisenhardt, 1989a), casting doubt on the validity and efficiency of any individual R&D decision. In summary, this makes decision efficiency a poor criterion for operationalizing R&D network improvements.
Economic Efficiency
Economic efficiency refers to the relationship of goods outputs to resource inputs; an economic system can be defined as economically efficient if it either maximizes the output for a given amount of resources or minimizes the input of resources required to produce a given output. Subsets of economic efficiency include allocation efficiency, where resource allocation is measured according to predefined criteria, and Pareto efficiency, where a Pareto-efficient economic system is one in which resource allocation cannot be improved without putting some participants at a disadvantage. In the global R&D network context, economic efficiency could be utilized as an optimization criterion by stating that
the development of novel, innovative ideas or software functionality should be maximized for a given resource budget, or the amount of resources utilized for a predefined set of software functions should be minimized; this criterion is formalized in a brief sketch following the list below. The application of this basic economic principle, however, poses significant challenges: as discussed in section 2.4.6, the quantitative measurement of productivity, the ratio of output to input, is inherently difficult in software development. Utilizing economic or allocation efficiency to steer the global allocation of resources in an R&D network toward quality software or cost-effective research is thus infeasible, especially when used ex ante as a design criterion. While economic efficiency is the underlying principle ensuring the long-term survival of an enterprise, and an R&D network ultimately has to contribute to business success, using economic efficiency as a design criterion has some major shortcomings:
- While input factors in software development such as labor, infrastructure and IP can be measured relatively easily, measuring output is complex and inherently difficult (see section 2.4.6);
- Not all costs are typically quantifiable or taken into consideration (e.g. coordination costs), and the benefits of R&D in monetary terms are often uncertain, especially for new product development (see section 3.1.1);
- Most costs are unknown or highly uncertain in the design phase (e.g. actual labor costs in new locations);
- Economic efficiency overestimates the value of low-cost locations such as India and China, with their readily quantifiable cost differentials, and underestimates the value of high-cost ecosystems such as Silicon Valley. Companies such as Google do not manage their R&D operations by economic efficiency or budget because of the high value they gain from this sophisticated ecosystem and its highly talented human resources (Samsonowa, 2010).
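Stated compactly, the criterion reads as follows; this formalization is only an illustrative sketch, and the symbols are introduced here for the example rather than taken from the sources cited above:
\[
\max_{x}\; O(x) \quad \text{s.t.}\quad I(x) \le B
\qquad\text{or}\qquad
\min_{x}\; I(x) \quad \text{s.t.}\quad O(x) \ge F,
\]
where \(x\) denotes a candidate allocation of R&D work, \(O(x)\) its output in terms of novel ideas or software functionality, \(I(x)\) its resource input, \(B\) a given budget and \(F\) a predefined set of software functions. Analogously, an allocation \(x^{*}\) is Pareto efficient if no alternative \(x\) improves the outcome for one participant without worsening it for another. The shortcomings listed above arise chiefly because \(O(x)\) cannot be measured reliably, especially ex ante.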
Innovation Efficiency (Edvinsson et al., 2004; Hollanders, 2007)
Given that the main objective of an R&D organization is the development of innovative ideas, it is clear that every organization intends to maximize the amount and quality of the innovative ideas it generates in its R&D setting. Innovation efficiency could therefore be considered a design criterion for establishing and continuously enhancing R&D networks. However, theoretical foundations for this approach remain scarce. Hollanders developed the concept of innovation
efficiency (Hollanders, 2007) as a macroeconomic performance indicator to compare innovation efficiency between countries, whereas Edvinsson et al. (2004) frame innovation efficiency in a microeconomic context. Edvinsson et al. state that higher innovation efficiency can be obtained by improving efficiency in six dimensions: stakeholder contributions, reuse of assets, exploitation, invention of assets, operating context and performance. While the authors provide a holistic innovation management conceptualization, it is too generic to deduce prescriptive design criteria and too multivariate to improve global R&D networks. While innovation efficiency is highly desirable in today’s hyper-competitive global software industry, its operationalization and adequate measurement as a design criterion remain questionable. Innovation efficiency also lacks the solid theoretical and practical underpinnings that would allow researchers and practitioners to apply it as a major design criterion for global R&D networks.
Communication-Economic Network Efficiency (Fisch, 2001; Fisch, 2003)
Effective communication was identified early on as a critical success factor for effective R&D (Allen & Cohen, 1969; Allen & Henn, 2007; Kuemmerle, 1997). Addressing the shortcomings of previous models explaining global R&D distribution and allocation, Fisch (2001, 2003) created a model that focuses on communication-economic efficiency in an R&D network. His communication-economic network model (see Figure 33) is grounded in network theory, information theory, organizational learning theory and transaction cost theory, and has been empirically validated in a study of 15 German-speaking MNCs in Germany, Austria and Switzerland. It defines R&D by its information processing and communication function.
Figure 33: Communication-economic network model (Fisch, 2003, p. 1386)
The model consists of R&D subsidiaries (sub_i) that have information processing demand, a local knowledge base (kb_i) and access to information supply from the local environment (env_i). Information processing is limited by the information processing capacity (ipc_i) of the local R&D unit. All of these elements are connected within the country via local broadband channels, and only a limited amount of information is lost, as both explicit and tacit information can flow within the same R&D unit. International connections, however, are narrow-band, and information is lost due to geographic and cultural distance (compare with section 3.1.1). In Fisch’s model, the global R&D network configuration is communication-efficient if communication losses are minimized and the total information processing capacity of the R&D network is maximized. In this model, R&D network improvements target the reduction of information losses in the overall R&D network. While architectures, standards, ICT tools and knowledge management systems can contribute to the reduction of information losses (compare with section 2.5), it can be hypothesized that the selection of a global work design based on information loss minimization could lead to a communication-efficient global R&D network. As noted in section 3.1.1, modern software development utilizes the principle of modularization to encapsulate
communication-intense development within software modules, while developer communication between modules is less intense due to the use of standardized interfaces between them. Because large-scale software development projects typically exceed the capacity of a single R&D unit, global distribution is required. Applying software engineering principles such as modularization to the organizational design of global R&D networks would result in the allocation of communication-intense module development work to areas of high-bandwidth communication, i.e. within a single R&D unit, while other modules could be allocated to other R&D units, as inter-module communication is less intense. Communication efficiency as a design principle for organizational design would thus lead to a modular organizational design. Having reviewed the models of R&D efficiency criteria put forward in the literature, the model and criteria proposed by Fisch (2001, 2003) are chosen as the theoretical foundations for efficiency and design criteria in seeking to improve the global R&D network considered in this thesis.
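To make this design logic concrete, the following Python sketch treats the work design decision as a search for the module-to-site assignment that minimizes the information lost on narrow-band cross-site links, subject to site capacity. It is a minimal illustration under stated assumptions: the module names, communication volumes, capacities and the simple proportional loss model are invented for the example and are not drawn from Fisch (2001, 2003).

from itertools import product

# Pairwise communication intensity between modules (arbitrary units): high
# values mark communication-intense development, low values reflect the
# standardized interfaces between modules.
comm = {
    ("ui", "app"): 8.0,
    ("app", "db"): 9.0,
    ("ui", "db"): 1.0,
    ("ui", "analytics"): 0.5,
    ("app", "analytics"): 2.0,
    ("db", "analytics"): 1.5,
}
modules = ["ui", "app", "db", "analytics"]

# Modules each site can host; the project as a whole deliberately exceeds
# the capacity of any single R&D unit, forcing global distribution.
capacity = {"site_a": 2, "site_b": 2}

# Fraction of information lost on narrow-band links between sites;
# broadband intra-site communication is treated as lossless here.
CROSS_SITE_LOSS = 0.6

def information_loss(assignment):
    """Information lost when module pairs must communicate across sites."""
    return sum(
        volume * CROSS_SITE_LOSS
        for (a, b), volume in comm.items()
        if assignment[a] != assignment[b]
    )

def feasible(assignment):
    """Respect each site's capacity limit."""
    return all(
        sum(1 for site in assignment.values() if site == s) <= cap
        for s, cap in capacity.items()
    )

def communication_efficient_design():
    """Exhaustive search over all assignments (fine for tiny examples)."""
    candidates = (
        dict(zip(modules, placement))
        for placement in product(capacity, repeat=len(modules))
    )
    return min(filter(feasible, candidates), key=information_loss)

design = communication_efficient_design()
print(design, information_loss(design))

For realistically sized networks the exhaustive search would be replaced by a graph-partitioning heuristic, but the objective, minimizing communication losses across narrow-band links while respecting each unit's information processing capacity, remains the same.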
3.1.7.
Organizational Transformation and Change
The research objective of this dissertation is to deliver a transformed global R&D network. Therefore, it is critical to review the literature in the field of organizational transformation and change, a subset of organizational theory, to understand theories and concepts that enable and safeguard organizational transformation. Tushman et al. (1986, p. 29) point out that a “fit of external opportunity, company strategy, and internal structure is a hallmark of successful companies [; however,] the real test of executive leadership is to maintain this alignment in the face of changing competitive conditions”. Organizational transformation and change is therefore defined as the ongoing adjustment of organizational strategies and structures in response to internal or external environmental factors, directed at changing the basic character or culture of an organization. While organizational change is often used in the context of incremental changes, organizational transformation is distinguished from other types of strategic change by its scope, in which a majority of individuals in the organization must change their behavior (Cummings & Worley, 2005).
This review of literature on organizational transformation is aimed at gaining an overview of theories and concepts that support organizations in performing organizational change to achieve an ongoing fit in times of changing competitive conditions. Organizational transformation and change was not always seen as an implementation problem. Traditional organizational theory conceived of it as mainly a planning problem, and assumed that after an organizational change had been decided, it would also be implemented as such (Schreyögg, 2008). When organizational transformation and change is reduced to a mere planning problem, the main task is to select the optimal organizational design. As research on the success of organizational transformation and change projects has shown, however, it cannot simply be assumed that all planned changes will be implemented to their fullest extent after planning has been finalized (Greiner, 1967; Kotter, 1995). Organizational changes take a long time, during which members of the organization may object to the new design or unforeseen events may occur, so that the original organizational design is no longer implementable (Schreyögg, 2008). These findings led to the development of new theories of organizational transformation and change acknowledging that the success of newly developed organizational structures depends to a great extent on their acceptance by members of the organization. Understanding the concerns of members of the organization and overcoming resistance to change are important factors in the successful implementation of organizational change (Benne & Chin, 1969). According to Benne and Chin (1969), change resistance can be divided into two categories: individual and organizational resistance.
Individual Resistance
Several psychological theories describe why people resist change. One frequently applied theory is that people tend to maintain behavioral patterns they are used to, as they derive satisfaction from them (Allport, 1937). When people are required to change these behavioral patterns, they see it as limiting their need satisfaction and therefore resist change. As an alternative explanation for individual resistance, cognitive map theory states that previous negative experiences of organizational change create a predisposition to resist future organizational changes (Backman, 1974). Another explanation is the so-called frustration-regression effect, which states that change programs typically devalue previously acquired routines and skills, which no longer guarantee success in the changed environment. The resulting frustration leads to a clinging to old ways of doing things (Schreyögg, 2008).
Organizational Resistance
Over time, organizations establish norms and patterns of collective orientation, also described as organizational culture. Organizational culture is defined as “the specific collection of values and norms that are shared by people and groups in an organization and that control the way they interact with each other and with stakeholders outside the organization” (Hill & Jones, 2007, p. 381). Change programs that question these norms and values often face intense resistance; the stronger the established organizational culture, the greater the potential for resistance. Phenomena of organizational resistance include:
“Not invented here” syndrome (NIH) (Katz & Allen, 1982)
The “not invented here” syndrome is defined as “the tendency of a project group of stable composition to believe it possesses a monopoly of knowledge of its field, which leads it to reject new ideas from outsiders to the likely detriment of its performance” (Katz & Allen, 1982, p. 7). Especially in the context of global R&D, the not invented here syndrome hampers the adoption of outside innovation, a serious obstacle to open innovation (Chesbrough, 2006) and thus to access to critically needed innovation.
Deep structure (Gersick, 1991; Romanelli & Tushman, 1994)
Gersick (1991, p. 14) defines a deep structure as “the set of fundamental ‘choices’ a system has made of the basic parts into which its units will be organized and the basic activity patterns that will maintain its existence”. A deep structure can be simply understood as the design of the playing field and the rules of the game. This determination of fundamental choices provides an equilibrium and inherently establishes “resistance to change as subunit managers seek to maintain a complex network of commitments and relationships” (Romanelli & Tushman, 1994, p. 1144).
Structural inertia (Hannan & Freeman, 1984)
Organizations spend considerable resources on stabilizing business processes and routines and on building up external relationships, which creates structural inertia that helps organizations to perform in a given business environment and become survivors of an evolutionary selection process. External sources of structural inertia are “legal barriers to entry and exit from realms
of activity and exchange relations with other organizations that constitute an investment that is not written off lightly” (Hannan & Freeman, 1984, p. 149). Trying to radically change this structure often threatens legitimacy and creates resistance to change.
Path dependency (Arthur, 1990; Arthur, 1994; Arthur, Ermoliev & Kaniovski, 1987; David, 1994)
Drawing on evolutionary processes in biology, Arthur et al. (1987) transfer to economics the finding that self-reinforcing events create structures into which a system locks. Successful choices (allocations) that companies make can become strong attractors (Arthur et al., 1987), so that economies of scale, learning and network effects create an ever-stronger pull to follow this path. As a company increases its reliance on such a path, it becomes increasingly locked in. New ideas, structures and processes are less likely to be accepted, and resistance to change is reinforced. In a changing market environment, companies can become victims of their own prior success by not being able to adapt to new conditions.
Design of Change Processes
Given the many forms of change resistance seen in organizations, designing and implementing an effective change process that addresses and overcomes these forms of resistance is of vital importance for the overall success of the planned organizational transformation and change program. Kurt Lewin’s (1943, 1958) early experimental research on changing food habits during World War Two resulted in ground-breaking insights into how to overcome resistance to change and into the design of an archetypal change process. According to Lewin, each change process is cyclical and consists of three phases: an “unfreezing” phase that creates a sense of urgency for change and questions previous practice, a “movement” phase in which the change is conducted, and a “refreezing” phase in which the performed change is stabilized. Greiner (1967) sees the “unfreezing” phase as especially critical to overall success, as the failure of change projects is often related to a lack of unfreezing, where an attempt is made to achieve transformation targets too quickly.
Figure 34: Change process according to Lewin (Hatch & Cunliffe, 2006): Unfreeze, Movement, Refreeze
Lewin used group participation extensively in the design of his change process. As changes need to be accepted and supported, stakeholders and affected employees should actively participate in the change process and receive early information about the planned changes. Change processes in groups are less frightening; when groups are used as the change medium and cooperation among the affected individuals is established, changes are on average conducted more quickly.
Patterns of Successful Transformation Processes
Greiner (1967) advanced Lewin’s change process design through a secondary analysis of 18 case studies of successful or failed change processes. Based on his findings, he derived a model of successful transformation with six distinct phases (Greiner, 1967, p. 126).
Figure 35: Greiner’s six-phase dynamics of successful organizational change (Greiner, 1967)
The first “pressure and arousal” phase unfreezes the organization and creates a willingness to change among stakeholders as a reaction to either external or internal pressure. While the presence of crisis signals typically increases the willingness to change, Greiner finds that change is more successful when stakeholders confirm the need to change. The second “intervention and reorientation” phase initiates the organizational intervention through a new outsider brought in at the top of the company to provide a new angle on existing problems. Greiner suggests that the external consultant should not provide ready-made solutions from the start, but should rather support members of the organization in jointly creating a new solution. In the third “diagnosis and recognition” phase, managers start to collect information from discussions with subordinates and try to identify causes of problems
in group sessions. Greiner finds that the participation of decision-makers is very important to demonstrate to all members the importance of the intended change efforts, especially to show that ideas provided from the bottom up have been discussed and accepted. In the fourth “invention and commitment” phase, new solutions are generated through widespread and intensive searches for creative solutions, with the newcomer at the top playing an active role; collaboration and participation are key concepts in the development of new solutions. Testing of the new solution occurs in the fifth “experimentation and search” phase, in which tests are conducted to assess whether the new solution is functioning and whether top management support is really being provided. In the sixth and final “reinforcement and acceptance” phase, positive results and continuous information about the success of the new solution encourage the team to expand it permanently within larger units, so that new structures become daily practice. Based on his study of over 100 US American companies, Kotter (1995, 2007) defined, in a manner similar to Lewin and Greiner, a phased change process consisting of eight steps for the successful performance of organizational transformation and change (Kotter, 1995; Kotter, 2007; Kotter & Schlesinger, 2008). While the change processes reviewed above focus mainly on the required steps, considerably more information is needed to guide how these steps should be performed in specific situations. Effective organizational change requires a systematic approach that identifies the underlying causes of organizational issues. Preconceptions often exist as to what the problems are, and applying quick fixes to an organizational subunit instead of to larger organizational areas can guide organizational interventions in the wrong direction (Cummings & Worley, 2005). “In situations requiring complex organizational changes, change is a long-term process involving considerable innovation and learning on site. It requires a good deal of time and commitment and a willingness to modify and refine changes, as the circumstances require” (Cummings & Worley, 2005, p. 40). Organizations typically demand quick fixes to their problems rather than investing time and resources to understand them, evaluate potential actions and implement them thoroughly.
Figure 36: Kotter’s eight-step model of organizational transformation (Kotter, 2007, p. 99)
Therefore, an overarching theoretical framework is needed to explain organizational factors and their interrelationships that affect the success of organizational transformation and change efforts. Robertson et al.’s meta-analysis of 47 organizational change initiatives (Robertson et al., 1993) provides a comprehensive and empirically validated theoretical model of the dynamics of planned change (see Figure 37).
Figure 37: A theoretical model of the dynamics of planned organizational change (Robertson et al., 1993, p. 621): intervention activity acts on the organizational work setting (organizing arrangements, social factors, physical setting, technology), which shapes individual behavior and thereby organizational outcomes (organizational performance and individual development)
Central to Robertson’s theoretical model is the individual behavior of members of the organization. As organizations comprise the collective actions of their members, implementing organizational change to positively influence organizational outcomes is only possible when individual change occurs. In their model, intervention activity is not aimed directly at individual behavior, but is channeled through the organizational work setting that creates the framework and foundation of individual behavior in organizations: interventions first have an impact on the various work setting variables, which in turn have a positive or negative impact on the individual behavior of members of the organization. Work setting factors may include:
Organizing arrangements: organizational structure, management levels and span of control, workers’ council;
Social factors: organizational culture, level of collaboration, social networks inside the organization;
Physical setting: spatial arrangement of employees worldwide, work materials, intranet access, work space;
Technology: availability of decision support and reporting systems.
It could be said that the work setting variables and their relationships create a lens that either focuses or distorts the impact of intervention activities and thus the quality of change in individual behavior. Understanding organizational work settings thus becomes a critical prerequisite of every organizational transformation. Intervention activities must be tailored to the specific situation and, if required, must be flanked by supportive activities that adjust the organizational work setting to ensure the organizational transformation is successful overall. While various additional change management processes have been developed and implemented by practitioners in recent years, especially through specialized business consulting companies, they share many of the characteristics of the four models presented above, such as an intense research phase to study organizational problems and the environment, a phased approach, extensive group participation, up-front information sharing about planned changes, and the tailoring of change actions to specific organizational work settings.
New Trends in Change Management
Initial research on organizational transformation saw organizational transformation and change as large-scale planned efforts mainly dependent on external resources to successfully trigger and implement a change process. However, these assumptions were subsequently challenged. Greiner’s (1972) meta research based on studies written and analyzed at the Harvard Business School indicated that organizational development should be seen as a reaction to crisis situations encountered on the growth path of organizations rather than as a planned process of organizational transformation. Greiner identifies five distinct phases of growth through which every enterprise must advance, overcoming several crisis situations along the way (see Figure 38). Based on insights from Greiner’s research and considering the dynamic environment of today’s business
world in particular, Schreyögg (2008) postulates that organizations should see organizational transformation and change as a core responsibility of management and not as an intervention planned by external parties.
Figure 38: The five phases of growth (Greiner, 1972, p. 41): plotted against the age and size of the organization, each evolution stage (growth through creativity, direction, delegation, coordination and collaboration) ends in a revolution stage (crisis of leadership, autonomy, control, red tape, and an as yet unnamed fifth crisis)
Similar to the findings of Greiner, Romanelli and Tushman’s (1994) research finds that organizations alternate between long periods of convergence and short intermezzos of discontinuous, system-wide shifts to sustain their survival, or in Tushman’s formulation, “a pattern of convergence punctuated by upheaval” (Tushman et al., 1986, p. 31). Phases of convergence are relatively stable periods with fewer requirements for change, in which organizational changes amount more to a fine-tuning of existing operations than to radical change. Times of discontinuous change or upheaval require a fundamental transformation of the organization as a reaction to major changes in the company, its internal structures or the company’s environment.
The theories of Lewin and Tushman have been criticized, as they see organizational transformation and change as an exception and regard phases of convergence and stability as the rule. This is especially questionable in fast-moving and dynamic sectors like the global software industry, which are subject to an ongoing product innovation cycle. When organizations invest considerable resources to transform and change their organizational structures, managers expect such investment to be sustainable and to deliver benefits and savings not only as a one-off effect, but also on an ongoing basis. Organizational theorists have proposed various concepts by which organizational transformation and change can be sustained in such a dynamic environment.
The self-designing organization (Hedberg, Nystrom & Starbuck, 1976; Huber, 1991; Weick, 1977)
Several researchers have proposed that “organizations should operate themselves as ‘experimenting’ or ‘self-designing’ organizations, i.e., should maintain themselves in a state of frequent, nearly-continuous change in structures, processes, domains, goals, etc., even in the face of apparently optimal adaptation” (Huber, 1991, p. 93). Hedberg, Nystrom, and Starbuck (1976) see this mode of operation as efficacious or even required to survive in high-velocity environments. The inherent flexibility of the state of flux in which such an organization would operate could facilitate adaptation to changing or new environments. One example often cited is Weick’s (1977) concept of the self-designing organization, which he developed by studying a coordination breakdown in the Skylab III space mission caused by the excessive coordination requirements mission control imposed in the form of work lists, which ultimately led to a one-day strike among astronauts in space. Weick suggests that if organizations encounter a problem, they should not hire external consultants to conduct the required transformation, but, by “turning students [into] teachers”, should implement the transformation themselves. In the case of the Skylab incident, he suggests the astronauts should have been given the authority to determine and implement alternative routines within a given framework rather than being micro-managed by mission control.
Figure 39: The self-design strategy (Cummings & Worley, 2005, p. 494)
Cummings and Worley (2005) describe a process by which a self-designing organization can be achieved in three interacting stages:
1. Laying the foundation. First, the organization acquires knowledge about how it functions, principles for organizational high performance, and the self-design process itself. Typically starting with senior executives, this information is then cascaded down to lower-ranked managers and employees. The second step is to formulate a value statement (or transformation charter) to define corporate values and intended outcomes, as well as the resources and organizational conditions required for successful implementation. Third, the current state of the organization is diagnosed to determine the required changes in line with corporate strategy and the transformation charter.
2. Designing. Rather than providing a highly detailed design, in this second stage of self-design only broad parameters of the organization are defined with minimum specifications, on the understanding that the design will be adjusted by the executing units and that the changes made will require further adjustments over time.
3. Implementing and assessing. The implementation phase consists of an ongoing cycle of action research in which structures and behaviors are changed, progress is assessed and modifications are made. A feedback loop, in which information is collected about how well implementation is progressing and the degree of fit of the organizational design, provides for ongoing learning throughout the implementation phase and beyond.
This continuous flow of feedback initiates subsequent design, diagnosis, valuation and knowledge acquisition activities that give the organization the ability to transform itself continually (see Figure 39). According to this self-design concept, organizations are in a process of continuous transformation at various levels, leading to a fluid and continuously changing organization that Weick calls “the chronically unfrozen organization” (Weick, 1977). Weick (1977, p. 41) sees it as a self-organizing, ever-changing entity: “The real subtlety in a chronically unfrozen system is that it may never have to redesign itself. With its steady diet of improvisation, its continual rearrangements of structure, its continual updating to meet changing realities, it may never need a major redesign”. Weick’s thesis can be seen as a precursor of the subsequently developed concept of the network organization (see 3.1.4), where the “network approach to organizational design marks a shift from thinking about stable patterns of interaction to recognizing the need for constant change in support of organizational adaptation to environmental complexity and dynamism” (Hatch & Cunliffe, 2006, p. 308). According to this concept, an organizational structure is no longer the result of transformation and change initiatives in times of crisis, but the result of a continuous, overlapping learning process that leads to problem awareness and ongoing organizational adjustments similar to Weick’s self-design of organizations, producing a fluid organization (Schreyögg & Sydow, 2010). Despite the obvious advantages of creating a flexible organization that can adapt to changes in a dynamic environment, Weick also warns that “continuous improvisation and anarchies are costly in time, costly in coordination costs, expensive in dollars, and costly in terms of the demands they make on people’s attention” (Weick, 1977, p. 40). Such an organization requires a considerable amount of attention and creates a state of permanent alertness to make sense of what is happening, leading to persistent stress. Lounamaa and March (1987) criticize this approach by stating that modifying a situation before it can be comprehended is likely to lead to random drift rather than improvement, especially when a considerable number of changes occurs in a short timeframe. Therefore, they suggest reducing the frequency or magnitude of change so that results can be analyzed before a new change occurs.
3.1.8.
Organizational Learning
Organizations that quickly adapt to environmental changes are typically more successful than organizations that do not adapt to such changes. The previous section suggests that organizational transformation and change projects are
means of adaptation by which companies identify gaps between their organization and its environment to initiate change management efforts transforming their organization, thereby achieving a better fit and hence becoming more successful. However, the effectiveness of such “one-off” planned organizational changes and strategic planning initiatives, into which organizations invest considerable resources, remains questionable. Well-intended strategies often fail (Mintzberg, 1994a); unintended and confusing effects emerge, unforeseeable changes occur and the organizational environment changes rapidly (Klimecki & Laßleben, 1998). Since planned organizational interventions often fail, a new mode of organizational change is required that constantly adapts the organization to external and internal changes. In organizational theory, “learning is conceived as a possibility to cope with constantly changing environments, thereby ensuring success and survival” (Klimecki & Laßleben, 1998, p. 1). March and Olsen (1979) were the first to transfer lessons learned from behavioral research on individual learning to define the organizational choice cycle (see Figure 40).
Figure 40: The complete cycle of choice (March & Olsen, 1979, p. 13)
In this cycle, organizational learning occurs as a reaction to changed environmental factors (stimulus) in the form of choices taken that lead to environmental actions (response) (March & Olsen, 1979). Learning occurs in the organizational choice cycle in four stages. At the start of the cycle, members of the organization possess cognitions and preferences that form their understanding or “model of the world” based on previous organizational experiences. If they recognize a discrepancy between the current and a desired state, they formulate a problem and design potential solutions, after which they initiate an organizational decision process. Choices (decisions) are then made and action is implemented, resulting in environmental responses that can restart the cycle. This adaptive rational learning cycle, also described as single-loop learning, leads members of the organization to develop ever-better choices or actions based on their previous
experience, and thus results in continuous organizational change through this feedback loop. March and Olsen, however, point out that this cycle might not always be performed perfectly; various disturbances may occur in any of the four stages and influence the outcomes of particular stages or the overall cycle of choice, potentially resulting in an imperfect or open cycle. This initial model of organizational learning describes reactive learning, whereas more modern concepts focus on proactive learning, in which learning is seen as the acquisition and development of cognitive structures (Greeno, 1980). While single-loop learning depicts a stimulus-response model of learning from past experience, it only works well in an environment where problems and issues are openly discussed. Here, error identification and correction create an effective learning cycle. However, an open environment is not the norm, as members of the organization may hide or ignore information (Argyris, 1985; Hedberg, 1981) or overestimate their capabilities in addressing problems through actions (Argyris, 1977).
Figure 41: Single-loop, double-loop and meta learning (own graphic, based on Argyris, 1994; Visser, 2007)
Argyris (1977) thus suggests a double-loop learning model (see Figure 41) whereby not only are corrective actions taken in the case of an event, but the overall assumptions, governing variables and theories of action underlying the learning loop are also questioned to derive new qualities of action. Double-loop learning is often the result of “a crisis precipitated by some event in the environment, a revolution from within (new management) or from without, or a crisis created by existing management to shake up the organization” (Argyris, 1977, p. 117). Double-loop learning thus describes a more effective form of learning in dynamic contexts, reacting to environmental changes not only with a fixed response, as in single-loop learning, but also with new actions resulting from challenging previous theories.
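Argyris famously illustrates the distinction with a thermostat, and the following Python sketch renders that analogy in code; the specific set point, revision rule and values are illustrative assumptions, not taken from Argyris (1977).

# Single-loop learning: correct deviations from a fixed governing variable.
def single_loop_step(temperature, set_point):
    """Act to close the gap; the set point itself is never questioned."""
    heating_on = temperature < set_point
    return heating_on, set_point

# Double-loop learning: question the governing variable before acting.
def double_loop_step(temperature, set_point, assumption_challenged):
    """Revise the set point when its underlying assumption proves inadequate
    (e.g. an energy crisis), then act on the revised governing variable."""
    if assumption_challenged:
        set_point -= 2.0  # new quality of action: the goal itself changes
    return single_loop_step(temperature, set_point)

print(single_loop_step(19.0, 21.0))        # (True, 21.0): heat, goal unchanged
print(double_loop_step(19.0, 21.0, True))  # (False, 19.0): goal revised first

A third level, reflecting on whether this whole control loop is the right way to learn at all, is introduced below as meta learning.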
Nevertheless, a more comprehensive review of the whole complex of learning in an organization might be required to overcome a crisis or improve performance. Such so-called meta learning is defined as “the reflection on and inquiry into the process of (single-loop and double-loop) learning at the individual and group level in organizations. This form of learning is discontinuous, cognitive, and conscious. It is, to a large extent, amenable to steering and organizing. It is directed at organizational and individual improvement” (Visser, 2007, p. 665). Meta learning thus provides a reflection on learning contexts, the organizational framework of learning, to ensure learning occurs continuously in an organization.
In academic literature, the terms organizational learning and knowledge management are often not clearly distinguished from each other, and are sometimes even used synonymously. Organizational learning (OL) enhances an organization’s ability to acquire and develop new knowledge, whereas knowledge management (KM) focuses on how such knowledge can be organized and used to improve performance (see Figure 42). Organizational learning emphasizes organizational structure and learning, and is thus typically one of the responsibilities of an organization’s human resource function, whereas KM is more closely related to tools and techniques that enable an organization to collect, organize and translate information into useful knowledge (Cummings & Worley, 2005).
Figure 42: How organizational learning affects organizational performance (Cummings & Worley, 2005, p. 499; based on the study of Snyder & Cummings, 1998)
Organizational learning and knowledge management must be aimed at promoting a general increase in the organization’s problem-solving ability so they can become a source of strategic renewal and enhance the organization’s ability to acquire and apply knowledge more quickly than its competitors, thereby achieving sustained competitive advantages (Cummings & Worley, 2005). The study of Klimecki and Laßleben (1998, p. 30) shows that “a high degree of networking activities at the managers’ side corresponds with a high degree of networking activities on the side of the reporting organization members. In our view, this
indicates that communicativeness and responsiveness of managers contribute to large and dense OL networks, thus catalyzing OL processes”.
Knowledge Management
Organizations are systems of knowledge; they acquire and produce knowledge to continuously enhance their knowledge base (Grant, 1996). Knowledge management (KM) has been defined as “the formal management of knowledge for facilitating creation, access, and reuse of knowledge, typically using advanced technology” (O’Leary, 1998, p. 34) and is assumed to have a positive impact on performance (Vera & Crossan, 2005). Knowledge management is closely linked to the management of information technology; common examples of KM tools include wikis, data warehouses, knowledge repositories, electronic document systems, best practice databases, collaborative tools and decision support systems. There is a range of different philosophical views and conceptual paradigms about what knowledge is and how it can be studied. For Polanyi (1966), knowledge is not a static construct, but is instead a dynamic activity, which could be better described as the process of knowing. He differentiates between explicit knowledge, which is articulated and specified either verbally or in writing, and tacit knowledge, which is unarticulated, intuitive and non-verbalizable (e.g. perception). There is growing interest in studying the alignment between the firm’s knowledge and its strategy, structure, environment and leadership (Bierly & Chakrabarti, 1996; Hedlund, 1994; Sanchez & Mahoney, 1996; Zack, 1999). One conclusion reached in the literature is that learning and knowledge only lead to better performance when they support and are aligned with the firm’s strategy, or as Vera and Crossan (2005, p. 136) note, “the co-alignment between a firm’s learning–knowledge strategy and its business strategy positively moderates the relationship between learning–knowledge and performance”. In sum, organizational learning (OL) focuses on learning as a process of change, whereas organizational knowledge (OK) stresses knowledge as a resource that provides competitive advantages and involves studying the process associated with its management (Vera & Crossan, 2005).
The learning organization (Garvin, 1993; Pedler, Boydell & Burgoyne, 1989a; Senge, 1990)
“Firms that purposefully construct structures and strategies so as to enhance and maximize organizational learning have been designated ‘learning organizations’” (Dodgson, 1993, p. 337). Pedler et al. (1989, p. 2) define a learning organization as “an organization, which facilitates the learning of all its members and continually transforms itself”. They point out that a self-learning organization emphasizes individual and organizational self-development that goes beyond traditional measures of training. Senge (1990) identifies the systems thinking approach (Checkland, 1999; Checkland, 2010) as the instrumental discipline in the learning organization. As part of a government-sponsored research project, Pedler et al. (1989, pp. 3-4) identify distinct characteristics of a learning company, which:
(i) has a climate in which individual members are encouraged to learn and to develop their full potential: people perform beyond competence, taking initiatives, using and developing their intelligence and being themselves in the job; and which...
(ii) extends this learning culture to include customers, suppliers and other significant stakeholders wherever possible: some Total Quality programs for example have buyer-supplier workshops, invite customers to join in-organization training and development programs and so on; but which also...
(iii) makes human resource development strategy central to business policy so that the processes of individual and organizational learning become a major business activity, such as in IBM where the CEO is reputed to have said ‘Our business is learning and we sell the by-products of that learning’; which involves...
(iv) a continuous process of organizational transformation harnessing the fruits of individual learning to make fundamental changes in assumptions, goals, norms and operating procedures on the basis of an internal drive to self-direction and not simply reactively to external pressures.
Pedler et al. collected feedback from key personnel at seven large British companies on the reasons organizations were interested in becoming “learning organizations”. Interviewees mentioned the increasing pace of change,
competitive pressures to compete/survive or grow, and the failure of previous superficial organizational restructuring as the main reasons for rethinking their current organization to achieve a process of changing “the way we do things here”. As a preliminary roadmap, the authors make nine recommendations for implementation (Pedler, Boydell & Burgoyne, 1989b, p. 7):
1. Strategy formation, implementation and evaluation should be structured like a learning process with a feedback loop to learn from the consequences of strategic decision-making;
2. Open discussions should be held over organizational strategy, and differences should be recognized and constructively resolved to reach decisions;
3. Management control systems should assist learning from the consequences of managerial decisions;
4. Through information and automation, information and communication technology (ICT) empowers the members of the organization to question current operating assumptions and seek information for individual and collective learning about organization norms, goals and processes;
5. Engage in cross-organizational exchange of information on expectations and feedback on satisfaction among all members of the organization to assist learning (similar recommendations are given as part of total quality and lean methodologies; see, for example, Poppendieck & Poppendieck, 2006);
6. Members of the organization with outside contacts should act as ‘environmental scanners’ for the organization and feed this information back to other organization members;
7. Organizational members ought to share information and jointly learn with ‘significant others’ outside the organization, i.e. key customers and suppliers (similar to the “Voice of Customer” in the quality function deployment (QFD) methodology; Mizuno, Akao & Ishihara, 1994);
8. The culture and management style within the organization should
encourage experimentation, learning and development from successes and failures;
9. Resources and facilities for self-development should be available for all.
While it is acknowledged that a learning organization is a highly desirable organizational form, most critics of the concept point to the lack of real-life implementation, with the concept being perceived as more of a vision than a wholly applicable idea. Garvin (1993) criticizes the “utopian view” among academics as being far too abstract to provide a framework for action. The concept also mainly focuses on organizational culture, without adequately considering other organizational dimensions that must be taken into account in an organizational transformation as part of a holistic approach (Robertson et al., 1993). It remains unclear how the suggestions provided all link up to support an organization’s strategic objective and how their contribution could be measured (Finger & Brand, 1999). To ensure overall strategic control of organizational learning, Finger and Brand (1999, p. 147) suggest “establishing a true management system of an organization’s evolving learning capacity […] by defining indicators of individual and collective learning and by connecting them to all other indicators that help monitor the progress towards an organization’s strategic objectives”.
Summary – R&D Network Improvement
Sections 3.1.6 to 3.1.8 review literature on theories of global R&D networks, organizational transformation and change, and organizational learning to build an understanding of which improvement criteria should be chosen to improve a global R&D network, how to design the improvement process, and how to integrate organizational learning to ensure sustainable and ongoing R&D network improvement. Authors have suggested that many types of efficiency criteria, such as physical efficiency (Chiesa, 1995, p. 21; Pearce, 1989, p. 38ff), decision efficiency (Pearce, 1989) and communication efficiency (Fisch, 2001; Fisch, 2003), can be adopted to explain the formation of R&D networks. R&D in global software development is a highly collaborative and knowledge-intense activity. Early studies highlight effective communication as a critical success factor for effective R&D (Allen & Cohen, 1969; Allen & Henn, 2007; Kuemmerle, 1997). Therefore, in the context of this thesis, communication efficiency is chosen as the key criterion to achieve the optimal dispersion of R&D activities in a global R&D network (Fisch, 2003).
Fisch’s (2003) conceptualization of a communication-efficient R&D network draws on the concepts of cohesion and coupling to address narrow-band communication among international R&D site locations and create a modular organization. The advantage of such a loosely coupled or modular organization (Weick, 1976) is that it reduces overall coordination and control efforts. The foregoing extensive review of literature on organizational transformation and change makes valuable contributions to the design of a global R&D network improvement process. The archetype change process suggests a step-by-step approach (Greiner, 1967; Kotter, 1995; Kotter, 2007; Kotter & Schlesinger, 2008) should be taken to transform an organization by creating an awareness for change (unfreezing), conducting the change process, and establishing a new organization (refreezing) (Lewin, 1943; Lewin, 1958). Considering the high-velocity environment in which global software development occurs, the R&D network must continuously adapt to new realities. The organizational learning literature reviewed above provides us with various concepts such as the self-designing organization (Hedberg et al., 1976; Huber, 1991; Weick, 1977) and total dynamism (Eisenhardt & Martin, 2000). While the operationalization of such concepts has been criticized (Schreyögg & Kliesch-Eberl, 2007), studies show that single-loop, double-loop or even meta learning can be applied to achieve sustainable organizational change (Argyris, 1994; Visser, 2007). By reviewing the properties of R&D activities, theories of work design, resource allocation, and R&D networks, and most recently in this section, theories of R&D network improvement and organizational transformation and change, the literature review so far has established a solid understanding of the phenomenon under study and provided the theoretical underpinnings required to improve a global R&D organization in the software industry. The next section reviews theories of strategic management to establish a link between R&D network improvements and the overall conception, implementation and control of strategy.
3.2.
Strategic Management
Previous sections reviewing literature on organizational theory examine methods of effective organizational transformation and change, and show how organizations can continuously change and adjust to new realities through organizational learning.
This section applies these findings to the strategic management domain by looking at how strategic management can include learning mechanisms to ensure sustainable competitive advantages for both the R&D function and the entire organization. Before addressing this question in the context of dynamic capabilities in section 3.2.4, section 3.2.1 reviews the historical genesis of the strategic management field, section 3.2.2 contrasts intended strategies with emergent strategies, and section 3.2.3 examines the foundations of sustainable competitive advantage.
3.2.1.
Historical Perspective and Definition
The modes by which enterprises conduct corporate planning and management to decide on resource allocations and corporate goals have always reflected the firm’s socioeconomic environment and changes in market structure or consumer demand. Therefore, such modes have evolved through various paradigm shifts in the socioeconomic environment of organizations during the last century. Corporate planning was mainly dominated by financial planning in the relatively stable and predictable markets that prevailed until the end of the 1950s. Thereafter, dynamics and changes became more common features of the external environment, as increasing market growth rates and greater consumer awareness made it necessary for organizations to extend their previous planning approach by adopting more of a long-term planning focus that includes external environmental factors. This long-term planning approach extrapolated trends over a medium term of approximately five years to support strategic decisions, a feasible approach in times of stable growth and limited impact of environmental factors on corporate planning (Zahn, 1981, p. 149). In tandem with disruptive events such as the end of the Bretton Woods Agreement, which saw the introduction of flexible exchange rates among the leading currencies, and the global oil crisis, the increasing complexity of organizations required a more comprehensive conceptualization of corporate planning and management to enable them to react flexibly to changes in technological or market developments. Strategic planning was seen as the answer to the dynamic changes in the environment, as it represents a concerted attempt to understand phenomena relevant to the environment, especially market developments, and to develop strategies to shape the environment and react to changes therein (Welge & Al-Laham, 2008, p. 13).
Figure 43: Strategic planning process (Welge & Al-Laham, 2008, p. 186)
The strategic planning process (see Figure 43) is based on the rational, normative model of strategic choice (Andrews, 1980; Ansoff, 1965; Hofer & Schendel, 1978). It has been widely applied in corporate planning departments, and “has had enormous impact on how strategy making processes are conceived in practice” (Mintzberg, 1990, p. 171). This sequential model suggests that managers should start strategic planning by defining their corporate goals and mission, after which they should “analyze both their external environment and internal operations […] to define strategy in the context of opportunities and threats,
and firm strengths and weaknesses […] to optimize achievements of the firm’s goals” (Hitt & Tyler, 1991, p. 329). After the strategic analysis phase, an overarching corporate strategy should be formulated and broken down hierarchically into business unit strategies, where business units define their strategies to meet overall corporate goals and make them consistent with the corporate strategy. “Business-unit strategy, however, is only an expression of intentions, until people in the operating-departments of the company carry it out. […] So a bridge between business-unit strategy and department operations is crucial” (Newman, Logan & Hegarty, 1989, p. 136). Functional strategies such as R&D strategy, marketing strategy and human resource strategy serve as such a bridge. They define more tactical directions taken to support and operationalize overall business unit and corporate strategy. In this thesis, R&D strategy is of special interest given the growing importance of differentiation (Kim & Mauborgne, 2005) through innovative products to achieve competitive advantages in a highly competitive, constantly changing environment. R&D strategy defines the areas in which R&D activities are to be concentrated, the distribution between basic research and applied R&D, and the medium- and long-term adjustment of R&D capacity (Brockhoff, 1999). After strategy has been formulated in the strategic planning process, it is assumed that it will be implemented, with strategy controls used to measure its impact and success. However, strategic planning was later heavily criticized for failing to deliver on its promises, with one critic stating “companies had set up elaborate planning systems and devised sophisticated strategies, but little or nothing had changed in corporate performance” (Wilson, 1994, p. 13). Furthermore, setting up a dedicated strategy and planning department separated from the actual operational business created acceptance problems of “imposed strategies” (Mintzberg, 1994a; Mintzberg, 1994b). Strategic planning was lamented for being more concerned with strategy formulation and less concerned with the actual implementation of strategy and the provision of resources to put it into action. Mintzberg (1987, p. 66) therefore sees defining and implementing strategy not as a planning exercise, but as an iterative activity – a craft: “The crafting image better captures the process by which effective strategies come to be. The planning image, long popular in the literature, distorts these processes and thereby misguides organizations that embrace it unreservedly”.
Scholars and practitioners later advanced strategic planning in the direction of strategic management to address the problem of strategy implementation. Strategic management is a rather new term, coined at a conference in 1977 (Schendel & Hofer, 1979); the field was previously described as business policy. While strategic management intersects with other well-developed fields such as “economics, psychology, political and behavioral sciences” (Bowman, Singh & Thomas, 2002, p. 35), the term remains ambiguous. In their peer evaluation of 447 abstracts of academic journal articles through a panel of strategic management scholars, Nag, Hambrick and Chen (2007, p. 942) formulated an implicit definition of strategic management, later confirmed through a survey of scholars in adjacent fields whose explicit definitions showed no significant difference: “The field of strategic management deals with (a) the major intended and emergent initiatives (b) taken by general managers on behalf of owners, (c) involving utilization of resources (d) to enhance the performance (e) of firms (f) in their external environments.” Their definition of strategic management points to performance enhancement as the ultimate goal. It indicates that strategic management is directed towards the future of an enterprise, and is “avowedly normative [as] it seeks to guide those aspects of general management that have material effects on the survival and success of the business enterprise” (Teece et al., 1997, p. 28).
3.2.2. Emergent Strategies and the Resource Allocation Process
The literature review in Chapters 2 and 3, which follows the review process exhibited in Figure 3, yields a comprehensive understanding of R&D activities in the global software industry and how such R&D activities can be effectively divided and integrated given their structural requirements. This section reviews the roles of emergent strategies and the resource allocation process in the allocation of R&D activities among global R&D organizational units, and how this shapes the overall global R&D organization over time.
Emerging vs. Intended Strategies
The formulation of intended, deliberate strategies often takes considerable effort and resources. Nevertheless, not all intended strategies are implemented, and of those implemented, not all are successful. Strategy implementation often
faces serious challenges, as according to Mintzberg and Waters (1985, p. 258), strategies can only be implemented as designed if:
(i) strategic intentions have been precisely articulated in each important detail, so the desired goal is clear to everybody with no ambiguities remaining;
(ii) strategic intentions are commonly accepted by members of the organization, either in the form of shared intentions or via acceptance from leaders through some form of control; and
(iii) they are undisturbed by outside forces that interfere with or alter outcomes, in an environment that is either perfectly calm, predictable or under the full control of the organization.
These conditions are rarely met; if at all, they are more likely to occur in stable environments where companies can effectively enforce strategy implementation requirements. More often, however, strategies develop in an unintended way from the actions and decisions taken in the enterprise – they emerge and can only be recognized as such ex post as a consistent "pattern in a stream of decisions" (Mintzberg & Waters, 1985, p. 257). Few strategies in organizations can be clearly labeled as intended or emerging; various forms of strategies exist in the continuum between the two extremes (Mintzberg & Waters, 1985). Dynamics further impede a clear distinction, as intended strategies can be altered and therefore become emergent, while emergent strategies can be formalized and selected by managers as intended strategies (Welge & Al-Laham, 2008, p. 21). Emergent strategies "do not mean that management lost control over organizational events, but that it is prepared and capable to learn as a reaction to the environment and the processes in the organization" (Ortmann, 2010, p. 3). Management should thus remain constantly aware of emergent strategies, identify them and potentially support them if they are congruent with the organization's overall targets and goals. In this thesis, the emergent strategy perspective is considered valuable in observing strategy phenomena outside the bounds of formal strategy and in understanding strategy formation in organizations. It demonstrates that strategic management is not only an objective and quantitative pursuit, but also possesses subjective and qualitative elements, as Welge and Al-Laham (2008, p. 21) note: "strategic management is always a vision, a firstly unspecified comprehension about in what direction the enterprise should develop".
While Mintzberg and Waters' (1985) work provides insights into the real world of strategy formulation, it has also been criticized: from the emergent strategy perspective, every action or decision can be viewed as strategic, causing strategic management to lose its focus on truly strategic initiatives as everything becomes strategic (Welge & Al-Laham, 2008). Furthermore, one question that needs to be asked is how emerging strategies "relate to the overall goal system of strategic management, how they consider strength and weaknesses of the organization, the market environment" (Welge & Al-Laham, 2008, p. 22) and ultimately contribute to enhancing an organization's performance.
The Resource Allocation Process
Given the wealth of intended and emerging strategies that exist in an organization, a decision must be made as to which of these strategies are ultimately selected and implemented. Decisions of this type are typically made in the resource allocation process (Bower, 1986) (see Figure 44).
Figure 44: Process of strategy formulation and implementation (Christensen & Dann, 1999, p. 4) (based on Bower, 1986)
The resource allocation process is the key organizational process that filters intended and emergent strategies and determines which of the proposed strategies receive funding and approval for development and implementation and which do not (Christensen & Dann, 1999). The resource allocation process in most organizations is managed by a corporate finance committee that reviews and approves capital projects. Bower (1986) examines the considerations that influence such committees in deciding whether a particular project is approved through the ranks, whether as part of an intended strategy or as part of an emerging strategy that "bubbles up" (Christensen & Dann, 1999). Bower and Gilbert (2005, pp. 27-33) define three distinct subprocesses that underlie the resource allocation process – definition, impetus and structural context:
- Definition is the process by which the technical and economic characteristics of a project are determined. New projects represent a response to a problem or opportunity perceived by operational managers.
- Impetus is the force that moves a project towards funding. Decisions to fund a project often do not depend on its predicted net present value, but on the track record of the middle manager between operations and top management who determines the selection of a project. As predictions about the future are inherently difficult, top managers are more likely to approve projects that have been successful in the past. Middle managers therefore act as a filter: they carefully weigh risk and reward and typically sponsor projects that have a low probability of failure.
- Structural context is the set of forces that shapes the previous processes of definition and impetus. The structural context, with its organizational roles, responsibilities and incentive systems, therefore shapes strategy.
While middle managers act as facilitators between operational-level project proposals and top management, they also act as a filter in selecting which projects to support. In addition to such management hierarchies that serve as a filter, other mechanisms also filter the selection of projects by setting the boundaries of the filtering process. What is defined as strategically relevant and which strategic options will be considered are determined by the rules and regulations of a firm (Ortmann, 2010). Not only do existing resources limit the range of possible options, but the perception of the world in an organization also determines
what will be considered possible, relevant and meaningful (Ortmann, 2010). Future knowledge acquisition also depends on past knowledge acquisition, as Cohen and Levinthal (1990) have shown in their conceptualization of absorptive capacity. While filtering mechanisms ensure an effective resource allocation process, the process can break down in certain scenarios requiring an organizational intervention (Eisenmann, 2005, pp. 299-300): in dynamic market environments that require quick decision making, in disinvestment scenarios, in the development of disruptive technologies, and in partial ownership situations such as joint ventures and alliances. "Overall, it seems challenging for established firms to keep the resource allocation process flexible and transparent over time, particularly when it comes to investment projects in untried areas that require improvisation" (Kuemmerle, 2005, p. 184, quoting Scott, 1987). It is therefore important to continuously monitor and adjust the internal structural context to ensure the right ideas "bubble up" and move towards the resource allocation process, as "a key problem […] is that ideas get intercepted and do not bubble up from the frontline of management to the top of a firm" (Kuemmerle, 2005, p. 184). Research on the resource allocation process of MNCs provides important insights into the genesis of global R&D networks. As previously discussed, decisions made by most MNCs on the global expansion of R&D units have not been entirely rational; opportunistic, short-term decisions (Gerpott, 1991) have led to uncoordinated "jungle growth" (Boutellier et al., 2008c). Kuemmerle (2005, p. 181) finds that "only in very few instances did managers involved in an expansion decision really consider overall firm strategy and synergies with the rest of the firm's existing international network". The reasons for this phenomenon relate to the structural context of the resource allocation process for international R&D expansions in MNCs. MNCs typically use sophisticated evaluation systems to assess the return on investment of proposed expansion projects. "Not surprisingly, most expansion projects that reach the relevant decision-making body show that they pass these previously set hurdles. Often this is not a reflection of the true nature of the project, but a result of 'gaming' the system – forecasts get tweaked so that the project passes the hurdle. This happens particularly when the firm does not have a reliable system of ex-post evaluation for past expansion decisions" (Kuemmerle, 2005, p. 181).
This phenomenon is more prevalent in large enterprises or when a large number of decision makers are involved (Kuemmerle, 2005) due to micro-political processes (Narayanan & Fahey, 1982) of negotiation and bargaining that foster such a "gaming culture" in MNCs. It is thus no surprise that managers often systematically underestimate and under-report the cost of geographic expansion (Kuemmerle & Ellis, 1999). The lack of ongoing monitoring of the progress and success of expansions and the lack of the ex post analysis alluded to above impede learning from good practice in many companies. They also encourage bad practice, as managers are motivated to game the system without fear of sanctions. The absence of feedback loops in the structural context of the resource allocation process impedes an effective compliance check against strategic goals and prevents expansion (structure) from being brought back in line with strategy, thus fostering the previously mentioned "jungle growth" of international R&D expansion. To address these shortcomings, changes to the structural context are necessary to ensure an effective, strategy-compliant resource allocation process that utilizes double-loop learning. Based on his extensive study of global R&D organizations, Kuemmerle (2005) provides several recommendations to achieve this goal. First, managers and corporate staff should see the resource allocation process as a positive opportunity and should support ideas "bubbling up" through the hierarchy, encouraging open communication so staff point out items that require the attention of top management. Second, as investment proposals are inherently uncertain, managers should initiate intra-firm discussions within cross-functional teams to encourage information gathering for uncertainty reduction. Third, highly uncertain investments such as international R&D expansions should receive intensive nurturing, attention and ownership through regular reviews.
Summary – Allocation of R&D Activities
Following the literature review process exhibited in Figure 3, this section has reviewed literature illuminating how the R&D resource allocation process is conducted in organizations. The studies of Mintzberg and Waters (1985) and Welge and Al-Laham (2008) show the limitations of intended strategies and the presence of emerging strategies in strategic management. Based on a previous study by Bower (1986), Christensen and Dann (1999, p. 4) attribute the realization of strategy to the resource allocation process in organizations, where intended and unintended strategies receive funding for execution.
Bower's study is especially instrumental for this thesis, as it clearly describes the framework in which resource allocation decisions are made and how such allocations must be influenced in improvement projects. Managing and improving an R&D network must be closely integrated into the strategic management process, as both (1) are aimed at raising performance, typically as part of a company-wide initiative; (2) are closely related to the functional R&D strategy of an MNC; and (3) inform strategic management by highlighting opportunities and limitations of the global R&D portfolio setup. Managers and project teams intending to improve R&D networks need to be aware of emergent strategies and political processes (see also section 3.3.2) to effectively influence decision makers in the resource allocation process and safeguard the funding required for R&D network improvements. Kuemmerle (2005) recommends that ongoing improvements be made to an R&D network through an effective resource allocation process and double-loop learning, suggestions considered in this thesis.
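To make the filtering mechanics reviewed above concrete, the following minimal Python sketch models Bower and Gilbert's three subprocesses (definition, impetus, structural context) as successive filters over project proposals and adds the ex post feedback loop Kuemmerle calls for. It is an illustrative reading, not an implementation of the cited studies; all names (Proposal, hurdle_npv, sponsor_track_record) and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Proposal:
    name: str
    predicted_npv: float         # "definition": technical and economic characteristics
    sponsor_track_record: float  # "impetus": 0..1, past success of the sponsoring middle manager
    fits_strategy: bool          # "structural context": what the firm's rules count as strategic

def passes_filter(p, hurdle_npv=0.0, min_track_record=0.6):
    # A proposal is funded only if it survives all three subprocesses.
    if p.predicted_npv <= hurdle_npv:              # definition: economic hurdle
        return False
    if p.sponsor_track_record < min_track_record:  # impetus: sponsor credibility
        return False
    return p.fits_strategy                         # structural context: strategic fit

def ex_post_review(p, realized_npv):
    # Double-loop learning: realized outcomes adjust the sponsor's standing,
    # so that 'gaming' the forecast carries a cost in future rounds.
    delta = 0.1 if realized_npv >= p.predicted_npv else -0.1
    return max(0.0, min(1.0, p.sponsor_track_record + delta))

proposals = [
    Proposal("R&D site expansion", 2.5, 0.8, True),
    Proposal("Opportunistic offshoring", 1.0, 0.4, False),
]
funded = [p for p in proposals if passes_filter(p)]
print([p.name for p in funded])                     # ['R&D site expansion']
print(ex_post_review(funded[0], realized_npv=1.5))  # 0.7: forecast overshot, standing drops

The point of the sketch is the feedback loop: without ex_post_review, the structural context never learns, and tweaked forecasts go unsanctioned, which is precisely the failure mode described above.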
3.2.3. Foundations of strategy – from the market-based view (MBV) to the resource-based view (RBV)
Strategic management is concerned with the question of how companies succeed. Companies are assumed to be successful if their performance is better than that of the weakest competitor in the market. Superior performance is not, however, sufficient to ensure the long-term survival of the company; superior performance also needs to be sustainable. Similar to contingency theory as previously discussed (see section 3.1.2), under the market-based view, the environment surrounding a company is analyzed and a planning and implementation process is undertaken to fit the organization to its environment. From the market-based view, competitive advantage is "due to competition arising from the structure of the market" (Makhija, 2003, p. 433). One analytical framework commonly used to derive design prescriptions through analysis of the environment is Porter's (1979, 1998) five forces model. The five forces – the threat of new entrants, the bargaining power of customers, the bargaining power of suppliers, the threat of substitute products or services and the rivalry among competitors – determine the competitiveness of a particular industry and thus the overall rents (profits) that can be achieved in it. Porter (1998, pp. 35-40) provides three distinct strategic options for companies to improve performance in the marketplace and outperform other firms in an industry: cost leadership to provide products and services to markets at significantly lower cost, differentiation to provide a new, unique offering to customers
that differentiates the company from its competitors, and focus on a niche, be it a particular buyer group, product line or geography, to serve a particular target well. The market-based view has been challenged in recent years as "industry environments have become unstable, so internal resources and capabilities rather than external market focus has been viewed as a securer base for formulating strategy. It has become increasingly apparent that competitive advantage rather than industry attractiveness is the primary source of superior profitability" (Grant, 2010, p. 125). Especially in the context of high-velocity environments such as the global hi-tech industry with their high rates of change, "new companies are built around specific technological capabilities" (Grant, 2010, p. 127). This resource-based view of a company sees the company's resources, rather than its market position, as the main reason for achieving superior performance. One early proponent of the resource-based view, Penrose (1959, p. 23), saw the firm as a "collection of productive resources the disposal of which between different uses and over time is determined by administrative decision. When we regard the function of the private business firm from this point of view, the size of the firm is best gauged by some measure of the productive resources it employs. The physical resources of a firm consist of tangible things—plant, equipment, land and natural resources, raw materials, semi-finished goods, waste products and by-products, and even unsold stocks of finished goods. There are also human resources available in a firm—unskilled and skilled labor, clerical, administrative, financial, legal, technical, and managerial staff." While some of these resources can be considered a source of superior performance, they require additional attributes to offer sustainable competitive advantage. These resource attributes are valuable, rare, imperfectly imitable and non-substitutable (VRIN), features that "enable or limit the choice of markets it may enter, the levels of profit it may expect" (Wang & Ahmed, 2007, p. 32). Possession of these VRIN resources allows firms to implement strategies that cannot be imitated by competitors and thus create a sustainable competitive advantage (Barney, 1991). This VRIN framework is widely applied in the strategic management domain despite its limited empirical validation (Schroeder, Bates & Junttila, 2002).
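As a purely illustrative reading of the VRIN test, the sketch below screens a resource portfolio against the four attributes; the resource names and boolean judgments are invented for the example and would in practice be the outcome of careful strategic analysis.

from typing import NamedTuple

class Resource(NamedTuple):
    name: str
    valuable: bool
    rare: bool
    imperfectly_imitable: bool
    non_substitutable: bool

def sustains_advantage(r):
    # All four VRIN attributes must hold for a resource to support
    # a sustainable competitive advantage (Barney, 1991).
    return all((r.valuable, r.rare, r.imperfectly_imitable, r.non_substitutable))

portfolio = [
    Resource("proprietary development platform", True, True, True, True),
    Resource("commodity server hardware", True, False, False, False),
]
for r in portfolio:
    print(r.name, "->", "VRIN" if sustains_advantage(r) else "no sustained advantage")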
Resources are not, however, sufficient in themselves; they need to be managed and integrated into distinctive capabilities so the firm can make effective use of its resources (Penrose, 1959). Capabilities represent a repository of historical experiences and organizational learning (Winter, 2000); therefore, developing capabilities can take a long time and is often costly. In the context of the firm, an organizational capability is defined as "a high-level routine (or collection of routines) that, together with its implementing input flows, confers upon an organization's management a set of decision options for producing significant outputs of a particular type" (Winter, 2000, p. 983). Capabilities are assumed to provide the foundation both for sustainable competitive advantage and for strategy formulation. Capabilities decisive for a firm's value creation have been described as core competencies14 (Prahalad & Hamel, 1990). "Core competencies make a disproportionate contribution to customer perceived value, they are a differentiator as they are competitively unique and they are extendable as they allow [the firm] to enter new product markets beyond established ones" (Prahalad & Hamel, 1990, pp. 202-207). The resource-based view and the concept of organizational capabilities have been criticized in the context of high-velocity markets, where competitive advantages gained through organizational capabilities and core competencies may not suffice (Eisenhardt & Martin, 2000; Teece, 2009), as such core competencies often become core rigidities (Leonard-Barton, 1992). This critique led to the development of new concepts to determine sustainable competitive advantage in the context of such high-velocity markets. These concepts are reviewed in the following section.
14 Capabilities and competencies are often used synonymously. Hamel and Prahalad (1992) argue that "the distinction between competencies and capabilities is purely semantic", "although strictly a capability refers to the potential and competence suggests an applied and well-practiced capability" (The Open University, 2006).
3.2.4. Strategic management in high-velocity markets
The Importance of Dynamic Reconfiguration: Dynamic Capabilities (Eisenhardt & Martin, 2000; Helfat, 2007; Teece, 2009; Teece et al., 1997; Wang & Ahmed, 2007)
Porter's assumption that industry structure and product market share, mediated by enterprise behavior, determine enterprise performance is a concept of a rather static nature. It ignores important intra-organizational factors that restrain choices (i.e. path dependency), factors that impact imitation and appropriability issues, the role of network effects, and the blurred nature of industry boundaries, which are better described as an ecosystem (the community of organizations and institutions that impact the enterprise, its customers and suppliers) in the context of high-velocity markets (Teece, 2009). These omissions make Porter's model one of limited utility, especially in high-velocity markets. Most importantly, the five forces model views market structures as exogenous, when in fact market structures are the (endogenous) result of innovation, learning and interaction among market participants (Teece, 2009), leading to an insufficient appreciation of the nature of innovation, which creates opportunities and changes the rules of the game. Especially in the high-velocity global software industry, markets are not a given; they are created and shaped by companies and are thus endogenous. Furthermore, Teece (2009) argues that the resource-based view of an organization in today's fast-moving global environment is not sufficient to provide insights into how to achieve sustainable competitive advantages. This is especially applicable to the global software industry, which operates in high-velocity markets also described as hypercompetition, where "market stability is threatened by short product life cycles, short product design cycles, new technologies, frequent entry by unexpected outsiders, repositioning by incumbents and radical redefinitions of market boundaries as diverse industries merge" (D'Aveni & Gunther, 1994, p. 13). The RBV breaks down in such markets; as the duration of a resource-based advantage is inherently unpredictable, maintaining a competitive advantage becomes a major strategic challenge (Eisenhardt & Martin, 2000). In such environments, the flexible recombination of organizational resources becomes a key differentiator in reacting to dynamic market changes and thus achieving sustainable competitive advantage. Teece et al. (1997, p. 515) describe winners in the global marketplace as companies "that can demonstrate timely responsiveness, rapid and flexible product innovation, coupled with the management capability to effectively coordinate and redeploy internal and external competences". Teece et al. (1997, p. 515) define these qualities of recombination as dynamic capabilities: "dynamic" as companies continuously renew their competencies to adapt to changing market environments, and "capabilities" as they are
STRATEGIC MANAGEMENT “adapting, integrating, and reconfiguring internal and external organizational skills, resources and functional competences” to achieve this goal. Examples of important dynamic capabilities mentioned include strategic decision making (Eisenhardt, 1989a), resource allocation routines (Burgelman, 2002), knowledge acquisition (Cohen & Levinthal, 1990), new product development (Helfat & Raubitschek, 2000) and most importantly, R&D network management (Doz et al., 2001).
Component Factors of Dynamic Capabilities and Organizational Performance (Wang & Ahmed, 2007)
Wang and Ahmed (2007) identify three correlated component factors of dynamic capabilities in their meta-analysis of empirical research on dynamic capabilities: adaptive capability, absorptive capability and innovative capability. Adaptive capability refers to the ability of an enterprise not only to identify and capitalize on emerging market opportunities, but also to adapt to environmental changes and align internal resources with external demand. According to Wang and Ahmed (2007), the presence and development of adaptive capability often coincides with an evolution of the organizational form to adjust to changes in the external environment.
Figure 45: A research model of dynamic capabilities (Wang & Ahmed, 2007)
The second capability, absorptive capability, was earlier identified by Cohen and Levinthal (1990, p. 128) through an empirical study of R&D activity in the American manufacturing sector. It refers to the organizational ability "to recognize the value of new information, assimilate it, and apply it to commercial ends". Their study finds that firms with higher absorptive capacity, moderated through the firm's own R&D activity and prior knowledge, have a greater ability to acquire external knowledge from partners and to integrate and transform it into embedded knowledge, an ability especially important in high-velocity markets with rapid technological change. The third capability, innovative capability, "links a firm's innovativeness to marketplace-based advantage in terms of new products and/or markets" (Wang & Ahmed, 2007, p. 39). Empirical evidence indicates that each of the three component factors of dynamic capabilities, and thus dynamic capabilities themselves, have a positive correlation with organizational performance (Rindova & Kotha, 2001; Wang & Ahmed, 2007). However, Wang and Ahmed (2007) link organizational performance to dynamic capabilities only indirectly: dynamic capabilities create and shape a firm's resource position (Eisenhardt & Martin, 2000) and capabilities, which determine the overall success of its products in the marketplace and thus drive organizational performance (see Figure 45). They also point out that "dynamic capabilities are more likely to lead to better firm performance when particular capabilities are developed in line with the firm's strategic choice" (Wang & Ahmed, 2007, p. 42). Teece et al.'s (1997) initial concept of dynamic capabilities has been further advanced in recent years through the research of Eisenhardt and Martin (2000) and Zollo and Winter (2002). This literature review thus first introduces Teece's initial concept before providing an overview of the two more recent conceptualizations of dynamic capabilities. It also considers an alternative conceptualization that Schreyögg and Kliesch-Eberl (2007) developed as a critique of the previous three models.
Teece’s Integrative View of Dynamic Capabilities (Teece et al., 1997) Teece’s et al. (1997) integrative view of dynamic capabilities combines the traditional concept of organizational competences with the concepts of organizational learning and self-renewal to explain how organizations not only build up competencies or capabilities, but also reconfigure them over time to achieve
sustainable competitive advantage. The view is integrative in that Teece et al. use both static and dynamic elements in their conceptualization of dynamic capabilities. At the core of Teece et al.'s concept lie organizational processes that serve three distinct purposes (see Figure 46). First, organizational processes provide static patterns of coordination and integration that are idiosyncratic to each organization and provide guidance on how such tasks are typically performed. The importance of such patterns in achieving competitive advantages has been confirmed by empirical research finding distinct operational routines for integration and coordination (Clark & Fujimoto, 1991), such as where lean manufacturing (Womack et al., 1992) leads to higher performance than that achieved in peer organizations. Second, the organizational process includes a dynamic component of organizational learning that ensures the organization remains congruent with the changing external environment through ongoing adjustments. Once learning occurs, the third component of the organizational process, the reconfiguration and transformation of organizational resources, takes place to initiate and accomplish the transformation. As change is costly, Teece suggests that firms must develop processes to reduce low-payoff change as part of organizational learning.
Figure 46: Components of dynamic capabilities (own graphic modeled on the concept of Teece et al., 1997). [Figure elements: Positions – technological and organizational resources and market positions; Paths – past decisions focus strategic choices; Processes – static: organization-specific patterns of coordination and integration; dynamic: organizational learning; reconfiguration. Processes of dynamic capabilities reconfigure organizational resources; learnings improve decision making.]
With the term “positions”, Teece subsumes organizational resources and assets that provide the basis for a company’s competitive advantage. Examples are
technological assets like intellectual property, financial assets, brand assets, and market assets such as market share. In Teece's definition, "positions" also include organizational boundaries that separate the internal organization from markets, and more broadly, its ecosystems. In Teece's conception, "paths" describe the path-dependent nature of dynamic capabilities due to past decisions that focus organizational choices, such as through prior investments in certain technologies or enterprises. Paths are typically amplified through increasing returns that reinforce the path taken, as in network effects or economies of scale in the production of goods. In sum, "organizational processes, shaped by the firm's asset positions and molded by its evolutionary and co-evolutionary paths, explain the essence of the firm's dynamic capabilities and its competitive advantage" (Teece et al., 1997, p. 519).
Key Abilities of Dynamic Capabilities: Sensing, Seizing and Transforming (Teece, 2009)
Teece states that enterprises should behave more like biological systems that sense, seize and transform to react to environmental changes. He identifies these as the three key abilities that characterize dynamic capabilities: first, the ability to sense and shape opportunities and threats; second, the ability to seize these opportunities; and third, the ability to maintain a competitive position through the transformation of organizational resources. While already present in his original concept (Teece et al., 1997), Teece (2009) expands the concept of sensing, seizing and transformation in his later studies to move towards a comprehensive framework for enterprise-level competitive advantage in times of rapid technological change. "The need to sense and seize opportunities, as well as reconfigure when change occurs, requires the allocation, reallocation, combination and recombination of resources and assets" (Teece, 2009, p. 48) (see Figure 47).
Figure 47: Foundations of dynamic capabilities and business performance (Teece, 2009, p. 49)
SENSING
In dynamic sectors such as the global software industry, very little depends on optimization against known constraints; the driving force is innovation, with its discovery and exploitation of new ideas. To find innovative ideas, companies need to scan and search through technologies and markets. Organizations must process internal and external information and rate its value, thus demonstrating a high absorptive capacity (Cohen & Levinthal, 1990). Modern organizations involve their ecosystem of partners, customers and suppliers to drive open innovation (Chesbrough, 2006). Teece notes that decentralized organizations are typically better than centralized organizations in sensing opportunities and threats, as they are closer to changes in markets, technologies or the ecosystem.
SEIZING
To seize market opportunities, organizations need to select the right product architectures or business models at the right time with the right investments. Here, Teece emphasizes pairing top management's skill in influencing decision making with entrepreneurial thinking that overrides established path-dependent decision-making patterns, patterns that typically hamper innovation, the seizing of opportunities and their necessary funding. Early investment and seizing are required especially when opportunities offer increasing returns through network effects or as a result of platform strategies where the winner takes all.
MANAGING THREATS / TRANSFORMING
Organizations that demonstrate the ability to sense and seize are likely to be successful in the marketplace. Over time, however, successes reinforce patterns of organizational routine and decision making that can create path dependencies (Arthur, 1994; David, 1994) or transform core competencies into core rigidities (Leonard-Barton, 1992). These issues often require organizations to reorganize or readjust their business models to regain evolutionary fitness for long-term survival. Teece (2009) postulates that managers should find the right balance between decentralized autonomous decision making and centralized coordination through semi-constant readjustments to reach a state that Simon (2002) describes as "near decomposability", a property of biological systems that describes how to delineate one module from another and is related to the concept of modularization (Baldwin & Clark, 2000) (see also Chapter ). In simpler terms, managers should find the right
degree of granularity in modularizing the organization to achieve a balance between effective coordination and specialization through the division of labor. Co-specialization, the interlinking of previously unrelated assets to create unique offerings, is another foundation of the dynamic creation of competitive advantages over competitors that lack the ability to create co-specialized assets. Excellence in the orchestration of these three abilities by managers supports an enterprise's capacity to successfully innovate and deliver superior long-term financial results (Teece, 2009). Such excellence, however, requires top management leadership skills and support to orchestrate the organization, and thus to ensure that dynamic capabilities are effective in driving organizational renewal, as the orchestration of dynamic capabilities runs across traditional organizational boundaries and is subject to intra-organizational emerging phenomena such as organizational politics (see section 3.3.2). While the simplicity of Teece's concept of adding the dimensions of organizational learning and self-renewal to an existing competency framework is quite appealing, Schreyögg and Kliesch-Eberl (2007) point out a major paradox in the endeavor to develop a concept that describes sustainable competitive advantage in dynamic environments. Competencies are defined as distinct and reproducible patterns of action or repertoires of practice. If, according to Teece's concept, these patterns become dynamic and their structure changes case by case, they lose their character as reproducible patterns and become, according to Schreyögg, spontaneous acts of improvisation. Dynamic capabilities dissolve these patterns; dynamism thus goes too far, as the patterns lose their character and the strategic power attributed to them by the resource-based view.
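Teece's sense–seize–transform cycle can be read as a simple control loop, as in the hedged Python sketch below. The signal sources, value estimates and budget heuristic are assumptions made for illustration; they are not elements of Teece's framework.

def sense(environment):
    # Scan markets, technologies and the ecosystem; keep only signals rated
    # valuable enough to act on (a crude stand-in for absorptive capacity).
    return [s for s in environment if s["estimated_value"] > 0.5]

def seize(opportunities, budget):
    # Commit resources to the most valuable opportunities first, within budget.
    funded, spent = [], 0.0
    for o in sorted(opportunities, key=lambda s: -s["estimated_value"]):
        if spent + o["cost"] <= budget:
            funded.append(o)
            spent += o["cost"]
    return funded

def transform(routines, funded):
    # Reconfigure organizational resources to support what was seized.
    for o in funded:
        routines[o["name"]] = "resources reallocated"
    return routines

environment = [
    {"name": "in-memory computing", "estimated_value": 0.9, "cost": 3.0},
    {"name": "legacy fax gateway", "estimated_value": 0.2, "cost": 1.0},
]
print(transform({}, seize(sense(environment), budget=4.0)))
# {'in-memory computing': 'resources reallocated'}

Note that the loop itself is static; in Teece's terms, the dynamic capability lies in continuously re-running and revising it as the environment changes.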
Total Dynamism (Eisenhardt & Martin, 2000)
Based on Teece's work, Eisenhardt and Martin (2000, p. 1107) define dynamic capabilities as: "The firm's processes that use resources—specifically the processes to integrate, reconfigure, gain and release resources—to match and even create market change. Dynamic capabilities thus are the organizational and strategic routines by which firms achieve new resource configurations as markets emerge, collide, split, evolve, and die."
Eisenhardt and Martin suggest two different categories of dynamic capabilities depending on the velocity of the market in which the organization is operating. In moderately dynamic markets, existing competencies available for recombination should be maintained and subjected to incremental adjustments; in high-velocity markets, competencies should have a high degree of variability. While the first category utilizes existing patterns, potentially with minimal adjustments, Eisenhardt and Martin suggest that in high-velocity markets, depending on the situation, resource combinations should occur without utilizing existing recombination patterns, suggesting a fully flexible and open system that responds to environmental signals with new processes of improvisation and self-organization (Eisenhardt & Martin, 2000). This concept of a stateless form of establishing combination patterns, however, creates the paradox that it does not describe how to build up and manage dynamic capabilities, but postulates total flexibility and the case-by-case recombination of resources, which can be considered adhocracy (Mintzberg, 1979) or, more moderately, an ongoing learning process (Zollo & Winter, 2002). Schreyögg and Kliesch-Eberl (2007) criticize this conceptualization of dynamic capabilities for going too far and being too abstract to be actionable, as organizations require boundaries and reference patterns for the selection and recombination of resources. They also point out that this idealized model of total flexibility foregoes efficiency gains, specialization and synergies, and neglects recombination costs.
The Role of Organizational Learning in the Genesis of Dynamic Capabilities (Zollo & Winter, 2002)
Zollo and Winter (2002) conceptualize dynamic capabilities as higher-order innovation routines that systematically and continuously create, modify and enhance organizational routines (see Figure 48). According to their concept, dynamic capabilities emerge from organizational learning when organizations adjust their operating procedures "through a stable activity dedicated to process improvements [i.e.] an organization that develops from its initial experiences with acquisition or joint ventures a process to manage such projects in a systematic and predictable fashion" (Zollo & Winter, 2002, p. 340). Dynamic capabilities emerge as a meta-competency of methods, practices and change processes that innovatively modify the lower-order routines of organizational competencies.
According to Zollo and Winter (2002), learning occurs in the form of experience through the accumulation of tacit knowledge (learning by doing) or through knowledge articulation (collaborative discussions) and codification (in the form of manuals, blueprints or databases). Experience accumulation is effective in stable environments with a stable workforce, making it possible to accumulate experience over time. In such a stable environment, a single learning episode may suffice to provide an organization with adequate operational routines. Here, dynamic capabilities are mostly unnecessary: they are costly to establish and maintain, and operational processes may already provide sufficient competitive advantage.
Figure 48: Learning, dynamic capabilities and operational routines (own graphic based on the model of Zollo & Winter, 2002)
In high-velocity environments, however, operational routines, “dynamic capabilities and even the higher-order learning approaches […] will need to be updated repeatedly [as] failure to do so turns core competencies into core rigidities” (Zollo & Winter, 2002, p. 341). Organizational learning, especially in the form of knowledge articulation and codification, may require considerable human or financial resources. While opportunity costs have to be considered, they are often used as an argument to suppress learning when it is most valuable and needed, such as in project debriefings to capture knowledge. Zollo and Winter state that learning investment in dynamic capabilities will be at its lowest when firms count on the experience accumulation process of “learning by doing”, while learning investment will be
at its highest level when the organization relies on knowledge articulation to improve a certain activity. In their study, Zollo and Winter identify several contingencies that moderate the effectiveness of organizational learning, and hence the creation or improvement of dynamic capabilities. Environmental features such as technological change and organizational features such as acceptance of change or the properties of the task at hand co-determine the most effective form of organizational learning, in the form of either experience accumulation or knowledge articulation and codification. Zollo and Winter point in particular to the role of knowledge codification as a critical moderator of organizational learning. While the codification of operational routines is common practice in organizations, organizations rarely codify lower-frequency tasks such as re-engineering projects. Zollo and Winter point out that managers would rather consider whether investment in the project itself is worthwhile than engage in knowledge codification of the learning experience, and hence in the development or improvement of a dynamic capability. The meta-competencies Zollo and Winter present are expected to result in the continuous revision of existing organizational capabilities and the systematic creation of new ones, thus constantly adjusting the organization to new external and internal requirements to achieve the fluid state of a dynamic organization in a high-velocity environment. Schreyögg (2008) sees the separation of competencies from innovation routines as a practical concept employed to establish dynamism in the organization without dissolving organizational capabilities, as suggested by Eisenhardt and Martin (2000). It must, however, be questioned whether the idea of "systematic innovation" Zollo and Winter prescribe can address organizational capabilities that are dysfunctional or do not adequately fit changed market requirements. Innovations can neither be planned nor routinized; routinization is especially unlikely in the context of high-velocity markets that require the continuous readjustment of organizational capabilities.
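Zollo and Winter's separation of operational routines from a higher-order learning routine can be sketched as follows; the performance scores, debriefing mechanism and improvement step are illustrative assumptions only.

operational_routines = {"code_review": 0.6, "release_process": 0.4}  # performance, 0..1

lessons = {}  # codified knowledge: routine -> list of observed project outcomes

def debrief(routine, outcome):
    # Knowledge articulation and codification after a project.
    lessons.setdefault(routine, []).append(outcome)

def learning_routine(routines, threshold=0.5, step=0.2):
    # The higher-order (dynamic) capability: systematically revise
    # underperforming operational routines based on codified lessons,
    # rather than through ad hoc improvisation.
    for name in routines:
        history = lessons.get(name, [])
        if history and sum(history) / len(history) < threshold:
            routines[name] = round(min(1.0, routines[name] + step), 2)
    return routines

debrief("release_process", 0.3)
debrief("release_process", 0.4)
print(learning_routine(operational_routines))
# {'code_review': 0.6, 'release_process': 0.6}

The codification cost Zollo and Winter emphasize corresponds to the effort of keeping the lessons store populated at all; skip the debriefings and the higher-order routine has nothing to learn from.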
Competence Monitoring (Schreyögg & Kliesch-Eberl, 2007)
Based on their critique of previous conceptualizations of dynamic capabilities, Schreyögg and Kliesch-Eberl (2007) suggest the concept of competence monitoring as an alternative perspective on dynamic capabilities. To dynamize the organization, they envision a control function whereby the enterprise is continuously
scanned for dysfunctional or ineffective operational capabilities to identify misalignments with the internal and external environments. This conceptual separation of dynamic capabilities from operational capabilities allows the firm to exploit the power of patterned problem solving (organizational capabilities) while safeguarding against dysfunctional tipping over. An alerting surveillance function identifies required areas of change, counterbalancing patterned selection capabilities with dynamization processes to ensure organizational renewal in areas that require it while maintaining effective established problem-solving patterns in areas that require no change (see Figure 49).
Figure 49: A dual-process model of capability dynamization (Schreyögg & Kliesch-Eberl, 2007). [Figure elements: operational level – capability practices evolving over time t0, t1, t2, t3, t4 … tn, with lock-ins? inertia? cognitive traps?; observational level – capability monitoring of the internal and external environment.]
The monitoring function should be designed to be flexible, as it should be a counterbalance to routinized operational capabilities. To guarantee a firm’s responsiveness and flexibility, therefore, the scanning process has to be in flux. Monitoring should primarily search for weak signals, such as via conferences, customer feedback, etc., to alert the firm to potential misalignments. Weak signals still allow for the timely adjustment of organizational capabilities, while strong signals such as a crisis typically originate at a very late stage that does not provide sufficient time to adjust. Managers must continuously listen to and learn from weak signals and keep the stream flowing; however, in reality, managers’ initial reactions are often to send out threatening messages or ignore such signals to keep such streams under control (Schreyögg & Kliesch-Eberl, 2007). Once misalignments have been identified, a decision about the required
change must be made: whether the existing problem-solving pattern should be used or abandoned in favor of a new approach. In reality, on the continuum from total change to maintaining the status quo, various change options exist that should be considered before implementation. Rather than postulating a permanent state of transformation, Schreyögg's model adopts a more differentiated approach in looking at misaligned elements of organizational capabilities to decide which areas to change and which to retain. Especially in cases where the impact cannot be assessed or change signals are too weak or vague, Schreyögg recommends staying with the established pattern of organizational capabilities. Change costs in such situations may be too high, and as monitoring itself causes costs, accepting a higher level of risk might be a valid option in some cases.
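A minimal sketch of this dual-process idea, assuming invented signal strengths and thresholds, is given below: operational capabilities stay untouched unless the monitoring function classifies a signal as weak but actionable, mirroring the recommendation to adjust on weak signals and to retain established patterns when signals are too vague.

def monitor(signals, weak=0.3, strong=0.7):
    # Observational level: classify each signal and decide per capability
    # whether to retain the pattern, adjust it in time, or face a crisis.
    decisions = {}
    for s in signals:
        if s["strength"] < weak:
            decisions[s["capability"]] = "retain pattern (signal too vague)"
        elif s["strength"] < strong:
            decisions[s["capability"]] = "adjust capability (weak signal, still timely)"
        else:
            decisions[s["capability"]] = "crisis (strong signal arrives too late)"
    return decisions

signals = [
    {"capability": "on-premise delivery model", "strength": 0.5},
    {"capability": "GUI toolkit", "strength": 0.1},
]
print(monitor(signals))
# {'on-premise delivery model': 'adjust capability (weak signal, still timely)',
#  'GUI toolkit': 'retain pattern (signal too vague)'}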
Conclusion – Strategic Management in High-Velocity Markets
Traditional strategic management perspectives lack descriptive power in the context of high-velocity markets such as the global software development industry. Previously considered sources of sustainable competitive advantage, such as those suggested by the resource-based view (RBV) or the VRIN framework, no longer suffice in such an environment, as market dynamics render the competitive
advantages of resources and operational routines obsolete. A dynamization of operational capabilities is thus required to readjust existing operational resources and capabilities to new realities in the changing market environment. The foregoing review considers several models that offer conceptualizations of dynamism in strategic management; these models need to be reviewed in terms of their operationalization in the corporate environment considered by this study:
- Teece et al. (Teece et al., 1997; Teece, 2009) propose the concept of dynamic capabilities that integrate, allocate and modify organizational capabilities to fit new market realities, promising evolutionary fitness and thus the overall sustainable advantage of the enterprise. Teece envisions the enterprise almost as a biological system that senses and seizes opportunities, addresses threats and transforms itself accordingly. However, it needs to be pointed out that managing dynamic capabilities is a daunting task. Changing static organizations and processes requires senior management skills, legitimation and authority provided by top management, and considerable negotiation skills, as dynamic capabilities cannot be executed against the business; consent that this approach is required must be obtained to ensure long-term sustainable development. It is therefore suggested that responsibility for dynamic capabilities vital to the success and long-term survival of the enterprise lie with top management to ensure legitimation and authority, and that these roles be staffed by experienced senior managers who have previously mastered organizational transformations.
- Eisenhardt and Martin (2000) challenge the traditional view of capabilities as historically established patterns of practice and postulate a concept of total dynamism of organizational capabilities, a radical approach similar to Mintzberg's adhocracy whereby each dynamization of organizational capabilities is treated as a new case and thus triggers considerable adjustment costs. It remains questionable whether this concept of a stateless organization provides an actionable framework for organizational design, as total dynamism completely defies organizational design and is costly over time: new problem-solving routines need to be developed for each new case, offering no learning curve or "economies of problem solving".
- Zollo and Winter (2002) see dynamic capabilities as a higher-order corrective of organizational competencies that innovatively modifies organizational competencies based on organizational learning. Criticism of this concept targets the routine nature of such modifications, as innovative solutions to organizational problems lack the ability to be routinized and such routines may not reach the organizational capabilities most in need of dynamism.
- Schreyögg and Kliesch-Eberl's (2007) approach of capability monitoring represents an attempt to address the operational shortcomings of the concept of dynamic capabilities through an additional organizational function whereby the organization is continuously scanned for inhibitors to dynamism, such as path dependencies, structural inertia or the not-invented-here syndrome. Nevertheless, uncertainty over the effectiveness and the potentially high costs of this approach make it hard to justify in a typical corporate context.
While the concepts reviewed above are undoubtedly important ideas that ensure the Schumpeterian vision of "creative destruction" (Schumpeter, 1934) through the renovation and adjustment of operational capabilities, they lack clear guidelines for their operationalization or implementation in a real corporate situation. Questions about an operating model, resourcing and their position in the organization remain unanswered.
One feature shared by all these concepts is that change is imposed by a higher authority, a body that does the controlling and triggers the start of a new phase of recombination, rather than being triggered and solved by organizational members closer to the problem, in line with the previously discussed LEAN philosophy. The role of internal emerging phenomena is thus not as adequately considered as it is in other organizational conceptualizations such as the previously mentioned self-designing organization (see section 3.1.7). It remains unclear whether the centralistic management of dynamic capabilities or the "grass roots" approach seen in the self-designing organization would be more successful in terms of practical implementation.
3.3. Internal Dynamics
3.3.1. Informal Order
In this study, organizational behavior has so far primarily been seen as the result of planned, intended organizational design, with organizational members fully complying with designed processes and policies. In the context of the global division of labor, however, additional spontaneous forms of organizational integration occur which are of great importance for organizational performance, although they are not planned activities (Daft, 2009). Such forms of organizational integration, corporate practices, patterns and routines establish themselves over time and are often interrelated (Gherardi & Nicolini, 2002; Orlikowski, 2002). While these have previously been described as informal processes and structures, more recently they have also been described as emerging processes and structures. Two characteristics of emerging phenomena are that they cannot be traced back to a single intervention and that their results are not predictable, as the structure that leads to a result only establishes itself during the process (Krohn & Küppers, 1992). Traditional methods of organizational design are thus inadequate to deal with emerging processes and structures, as they defy clear planning and execution; they emerge. Successful organizational design must, however, consider and include such emerging structures and processes and overcome this paradox. They are especially important to consider in the context of organizational transformation, as they may have a greater impact on transformation than planned structures and processes, both in enhancing and in eroding organizational performance (Schreyögg, 2008). Informal developments were traditionally considered harmful, as they establish their own principles of order without legitimation against the organizational power monopoly secured through contractual relationships between the organization and its members (Schreyögg, 2008). This perspective has changed in recent years; informal processes and structures are now seen as complementary and essential counterbalances to formal organizational design (Grün, 1980). Formal and informal structures do not exclude each other; they augment each other instead. Luhmann (1964) remarks that the ongoing consequential and consistent formalization of organizations can lead to a path dependency that narrows the focus of the organization over time, a development that has the potential to endanger
the survival of an organization, especially considering today's complex and fast-moving environment. Here, he sees informal structures and processes as instrumental in overcoming such a path dependency, as they open up the organization to alternative approaches, allow for the flexible handling of formal expectations, and enable the firm to handle conflicting requirements from within the organization and its environment. Luhmann uses the term "functional symbiosis" to describe the interplay of formal and informal organizational structures sustaining the organization and its performance. This leads to the paradox that organizations on the one hand establish a formal structure to operate and, at the same time, have to accept and even support an informal organizational structure to reach organizational performance goals (Schreyögg, 2008). In the formal structure, rules and policies define the boundaries of individual behavior; by contrast, the informal structure lacks clear codes of conduct, providing a grey area in which employees interact. Here, Luhmann (1964) speaks of a "useful illegality" to describe the benefit for the organization as its members move between formal and informal structures and processes to solve organizational problems, which allows problems to be worked on more smoothly than would be possible by simply following a rigid framework. To be effective in organizations, members need to learn and navigate both formal and informal structures. Learning through interaction with other organizational members is especially important for informal structures and processes, whose invisible boundaries are defined through interrelationship patterns such as collegiality or organizational culture. These interrelationship patterns ensure that the informal structure is not used excessively as a constant shortcut that threatens the formal structure and thus endangers the overall functioning of the organization. Informal structures can allow for ambiguous interpretations and thus occasionally cause conflicts. In such cases, members of the organization typically revert to formal structures and processes as a last resort to resolve such conflicts.
3.3.2. Political Processes in Organizations
Personal interests and personal power are important elements of emerging structures and processes. They are studied in the context of political processes, with research findings showing that organizational decisions result from specific
unpredictable dynamics between individuals and groups in every organization (Pettigrew, 1973). Problems of confidentiality, partial involvement and post hoc rationalization make it particularly difficult to uncover the political aspects of strategic decision making through convenient and conventional organizational research methods (Nutt & Wilson, 2010) such as questionnaires or interviews. Political processes are caused by diverging interests among organizational members and the limited amount of resources available to satisfy all such interests. Decisions tend to become political when they have a non-determined outcome that allows all participants to see a chance that the decision will be realized, at least partly, according to their interests (Schreyögg, 2008). The larger the available decision space, the more political decisions tend to become, as outcomes require coalitions, negotiations (Lewicki & Lewicki, 2004) and tactics between participants in the political process. Allison (1971) describes this as "game playing", as actors in political processes have a decision space and rules that determine the overall framework for action in a manner similar to real games. The motives for members of the organization to engage in political processes are manifold: the fight for power and prestige, career development, enforcement of one's own interests and ideas, anxiety about losing face, etc. (Schreyögg, 2008). Political behavior is defined as: "The activities of organizational members [...] when they use resources to enhance or protect their share of an exchange [...] in ways which would be resisted, or ways in which the impact would be resisted, if recognized by the other parties to the exchange" (Frost & Hayes, 1977, p. 8). There is a lack of uniformity in defining organizational politics, as indicated by Gandz and Murray's (1980) meta-analysis of 26 studies about workplace politics. Their study analyzes the politicization of organizational processes and the locus of political decision making, finding that interdepartmental coordination, promotions and transfers, the delegation of authority, and the allocation of facilities and equipment, among others, are perceived to be political. In the researched sample, Gandz and Murray find that the climate is considered more political at higher managerial levels and less political among lower managerial and non-managerial groups. Study respondents agreed that the existence of organizational politics was commonplace in most organizations and that successful executives had to be good politicians.
Therefore, managers need to recognize this phenomenon and acquire and exercise political awareness for successful organizational design and transformation, as political processes influence many organizational decisions. Hayes (1984, p. 23) notes, "politically competent managers influence others and exercise power while keeping in mind survival and growth of the organization, they are able to realistically assess the reciprocity and interdependence of inter-organizational working agreements and bargains". Political awareness is particularly important in strategic decision making, where interests, conflicts and power between individuals and groups intensively coincide. Nutt and Wilson note, "Strategic decisions are complex, significant and subject to uncertainty by their nature. Their complexity legitimates multiple views as to appropriate outcomes or solutions, as well as providing a power base for groups with special knowledge or skills to deal with the complexity" (Nutt & Wilson, 2010, p. 105). Wilson (2003) thus characterizes the strategic decision-making process as political in nature, a perspective shared by Mumford and Pettigrew (1975, pp. 20-21), who see organizational politics as a natural phenomenon15: "As long as organizations continue as resource sharing systems where there is an inevitable scarcity of those resources, political behavior will occur [...] such political behavior is likely to be a special feature of large-scale innovative decisions. These decisions are likely to threaten existing patterns of resource sharing."

Rationality has long been recognized as a central characteristic of strategic decision-making and strategic planning. Although strategic decision-making triggers political behavior and is subject to a high level of uncertainty, decision makers typically apply a high degree of rationality by collecting and analyzing information so that they are perceived as capable managers. Managers are expected to be especially rational when deciding upon issues of great relevance to the success or survival of the organization. Rationality and political behavior should, however, not be seen as irreconcilable, but rather as complementary in strategic decision making, or as Eisenhardt and Zbaracki (1992, p. 35) describe it, strategic decision-making is "an interweaving of both boundedly rational and political processes".

In the minds of most researchers and practitioners, organizational politics have a negative connotation (Nutt & Wilson, 2010). Employees often intuitively perceive organizational politics as a negative characteristic of an organization, as they observe that politics negatively affect decision making and overall organizational performance, a circumstance confirmed by Eisenhardt and Bourgeois (1988) in their study of the high-velocity microcomputer industry. Their study shows that power centralization and conflict are positively correlated with politics. Interestingly, they find that politics in companies in the microcomputer industry lead to stable alliance patterns supported by demographic similarities, which in turn lead to information restrictions. These information restrictions and the overall amount of time consumed by political processes have a strong negative correlation with organizational performance (see Figure 50), a strong indication that organizational politics can lead to poor organizational performance in the microcomputer industry.

15 This perspective, however, can introduce a fatalistic view that has the potential to obstruct organizational transformation and change.
Figure 50: A model of the politics of strategic decision making in high-velocity environments (Eisenhardt & Zbaracki, 1992)
Elbanna (2006) reports similar findings in a comprehensive meta-analysis, in which three distinct problem areas of organizational politics in strategic decision-making are identified:

1) INTRODUCTION OF BIAS
Political tactics adopted by actors in the political process often influence decision making. When actors provide selective and biased information as part of such tactics, the information required for strategic decision making may be distorted, which can alter the outcome of the strategic decision-making process. The best outcome of the strategic decision-making process is typically achieved through open discussion and the sharing of information among decision makers. Organizational politics may thus lead managers to
make decisions based on incomplete information, resulting in disappointing outcomes (Dean & Sharfman, 1996).

2) CREATION OF DISAGREEMENT
The sometimes divisive nature of the political process may inhibit actors from agreeing on key strategic concepts and their effective implementation if these result from a political decision process. Zahra finds that the more organizational politics are observed in a manufacturing industry enterprise, the less likely it is that decision-makers will reach a consensus on making and implementing strategic decisions (Zahra, 1987). The time and resources required to reach a consensus in a political decision process therefore result in considerable opportunity costs, leading Mintzberg (1985) to the conclusion that, overall, political behavior is a waste of organizational resources.

3) INCOMPLETE UNDERSTANDING OF ENVIRONMENTAL CONSTRAINTS
Because organizational politics serve mainly internal interests and positions, decision makers are less likely to consider external constraints that limit alternatives, as these may conflict with the interests of powerful individuals in the organization. With such a bias towards internal affairs, political processes are unlikely to encourage a complete and accurate analysis of strategic decisions, which increases the possibility of poor performance and unsuccessful choices.

Considering the strong negative characteristics of the political process found in research on strategic decision making, it must be asked why strategic decision making is continuously subject to political influence, and why corrective mechanisms such as feedback from other members of the organization or external sources do not correct these shortcomings over time. The reason for this paradox lies in the nature of strategic decisions. Given that they are long-term, complex and taken under conditions of uncertainty, the time between the making of a strategic decision and the point at which its results become available hampers the establishment of a feedback loop that would allow organizational learning to improve the strategic decision-making process.

Despite this negative perspective on politics, some authors argue that political processes also have positive consequences. Eisenhardt and Bourgeois (1988) argue that despite the strong negative correlation found between organizational politics and the organization's performance, organizational politics may be beneficial in a rapidly changing environment, as they serve as a mechanism for organizational adaptation. Similarly, Zahra (1987) finds that organizational
politics may enhance the quality of long-term planning, effective strategy selection and effective strategy implementation. A broader perspective on organizational politics may thus be required, as political behavior can encourage the examination of multiple perspectives and assumptions that may result in better decisions. Simmers (1998) therefore differentiates between competitive and collaborative politics. Competitive politics are concerned with the fight for power and interests at the expense of others, a zero-sum game with a win-lose outcome. In contrast, collaborative politics assume people will support each other despite conflict: organizational members are concerned with organizational well-being and development, and engage in win-win competition. Winning or losing should not, however, be seen as absolute, but should instead be regarded as a temporary state in one of many parallel games; as Schreyögg (2008) points out, a winner in one political game might lose in another, and vice versa.

Organizational politics should thus be approached with a balanced view and "evaluated according to their effect on the ability of an organization to pursue the appropriate mission efficiently in the long term" (Elbanna, 2006, p. 14, quoting Mintzberg, Ahlstrand & Lampel, 1998). Such a pragmatic view of political processes focuses on outcomes rather than on the means by which political processes play out in organizations. In this context, Nutt and Wilson's (2010) variance model provides a structured view of the political impact on strategic decision making by differentiating between antecedent conditions, the form of political behavior exercised, and the consequences for organizational performance (see Figure 51). As already pointed out, political behavior typically has a negative correlation with organizational performance. However, Nutt and Wilson show that if organizational managers have the political skills to handle conflict and political behavior constructively, so that they produce a diverse range of arguments while preserving a collaborative culture, active organizational politics may assist an organization to learn and adapt. Politics can become a positive attribute if applied constructively by skillful managers.
Figure 51: A variance model of the political aspects of strategic decision making (Nutt & Wilson, 2010)
While political processes are an important aspect of emerging processes, several longitudinal studies have indicated that their outcomes and influence might be overrated, as suggested, for example, by the relatively similar distribution patterns observed in budgeting processes; Schreyögg (2008) thus suspects that organizational inertia (Hannan & Freeman, 1984) and path dependency (Miller, 1993) also apply to political processes. This conclusion is analogous to the finding that political processes establish coalitions that are relatively stable over time (Eisenhardt & Bourgeois, 1988).
3.3.3.
Cultural Aspects of Globally Distributed Work
The increasingly global distribution of work has led to greater cross-country interaction, both intra- and inter-company. This increase in cross-country and cross-cultural interaction has required global companies to find ways to understand cultural differences and provide guidance to their foreign subsidiaries on how to manage cross-cultural operations effectively. Several authors refer to the concept of "cultural intelligence", which encompasses the cultural strategic thinking, motivation and behavior required to manage cross-cultural situations effectively (Earley, 2006). While culture is generally understood as referring to national culture, this social concept can also be observed among organizations and companies, where it is referred to as organizational culture. As they have continued expanding globally, many global companies have rolled out their organizational cultures and values to foreign subsidiaries and adapted them to meet the requirements of
a local environment (Collins, 2001). Here, a global organizational culture meets a local national culture, requiring local managers and employees to adopt the global culture and, if necessary, adjust to differences to avoid cultural misunderstandings and establish an effective local organizational culture and operations. In what follows, the terminology of culture is first discussed before the concept of organizational culture is presented. The discussion is based on research conducted by Hofstede (1991), with the research of O'Reilly et al. (1991) and Mathew (2007) being specifically acknowledged in conceptualizing and operationalizing organizational culture.
Definition of Culture
The term "culture" originates from ethnology, where many ambiguous definitions are used. In their article entitled "Culture: A Critical Review of Concepts and Definitions", Kroeber and Kluckhohn (1952) bring together 164 definitions of culture. A more recent approximation to this social construct states that "Culture can be thought of [as] a set of cognitions shared by members of a social unit [with] common elements [...] including fundamental assumptions, values, behavioral norms and expectations and larger pattern[s] of behavior" (O'Reilly, Chatman & Caldwell, 1991). Given that a software company is the object of observation in this thesis, Hofstede's definition of culture as "software of the mind" seems a valid definition for the topic at hand. He defines culture as "[...] the collective programming of the mind which distinguishes the members of one group or category of people from another" (Hofstede, 1991, p. 6). The analogy of culture as software of the mind is especially meaningful because, similar to human interaction, today's business software is required to work across various geographies and cultures, to cover local and global requirements, and to operate in various languages to achieve globally consistent data and results. Hofstede's cultural study of IBM operations across more than 50 countries (Hofstede, 1991) allows him to empirically validate cultural dimensions, conceptualized twenty years earlier (Inkeles & Levinson, 1969), that explain differences in national cultures. The bipolar dimensions identified are:
- Power distance (small to large);
- Collectivism vs. individualism;
- Femininity vs. masculinity;
- Uncertainty avoidance (weak to strong);
- Long-term vs. short-term orientation.
The primary data were retrieved from databases of surveys IBM had conducted internally. The responses were aggregated and averaged at the national level to provide a national culture profile with "typical" national cultural characteristics, which was then compared with the cultural dimensions of other nations. One potential benefit of such profiles is that they allow cross-cultural managers to anticipate potential differences in cultural dimensions when interacting with members of a different culture, and to prepare accordingly (Hofstede, 1991). Although Hofstede's study has been the subject of controversy in academia (McSweeney, 2002), it is still considered a standard work in cultural research, partly because few studies have been conducted with a similar breadth or an improved methodology.
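To illustrate the mechanics of such a profile comparison, consider the following minimal sketch; the country names and all scores are invented placeholders, not Hofstede's published index values:

```python
# Minimal sketch: comparing national culture profiles built from
# aggregated survey scores. All numbers are invented placeholders,
# not Hofstede's published index values.

DIMENSIONS = ["power_distance", "individualism", "masculinity",
              "uncertainty_avoidance", "long_term_orientation"]

# Hypothetical per-country survey averages, already aggregated
# to the national level (0-100 scale).
profiles = {
    "CountryA": [70, 25, 60, 45, 80],
    "CountryB": [35, 75, 50, 65, 30],
}

def profile_distance(a, b):
    """Euclidean distance between two national culture profiles,
    a rough indicator of how much cross-cultural adjustment two
    collaborating sites might require."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

print(f"Cultural distance: {profile_distance(profiles['CountryA'], profiles['CountryB']):.1f}")
```

Such a distance measure is, of course, a simplification; it merely makes concrete how aggregated dimension scores allow national profiles to be set side by side.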
Organizational Culture
The notion of an "organizational culture" is quite recent: the term has been used in the English language only since 1960, and only more recently has it appeared in the context of commercial organizations, where Deal and Kennedy (Deal & Kennedy, 1982) point out that the content and type of organizational culture differ between successful and less successful companies. Similar to the numerous definitions of culture, the concept of organizational culture has been delineated in a number of ways (Martin, 2002). In one quite straightforward definition, organizational culture is described as "the way things get done around here" (Deal & Kennedy, 1982). Davis provides a more comprehensive definition: "[Corporate] culture is the pattern of shared beliefs and values that give members of an institution meaning, and provide them with the rules for behavior in their organization." (Davis, 1990, p. 1) Schreyögg summarizes six key characteristics of organizational culture (Schreyögg, 2008a, pp. 365-366):
1) Implicit - Organizational cultures are implicit, as they rest on joint beliefs that are taken for granted in the daily business of an organization. Organizational culture is lived, not thought about, and usually not consciously reflected upon;
2) Collective - Organizational cultures refer to collective orientations, values [and] action patterns that shape the actions of individuals in the organization. Organizational culture therefore makes organizational practice consistent and coherent [to a certain degree];
3) Conceptual - Organizational culture presents a conceptual world that provides purpose and orientation in a complex environment through patterns of selection, the interpretation of events, and reactions in the form of predefined actions;
4) Emotional - Organizational cultures also provide norms for emotions - what is loved, what [is] hated and what will be rejected aggressively. Organizational culture is holistic, not only analytic;
5) Historical - Organizational culture is the result of continuous historical learning processes. Certain actions lead to accepted problem solutions, others to less accepted solutions, creating and reinforcing over time selection patterns for members of the organization that are soon taken for granted;
6) Interactive - New organizational members learn organizational culture from the shared practices of established members of the organization, who demonstrate how to act in accordance [with] the organizational culture - learning organizational culture is therefore a process of socialization.

Organizational culture can be seen as the collective knowledge base of an enterprise that grows over time from its basic assumptions and is continuously in motion, as organizational learning is never-ending (see Figure 52) (Schein, 1984).
Figure 52: Levels of culture and their interaction (Schein, 1984)
Given the way groups of individuals or nations establish cultures, it could be assumed that organizations establish organizational cultures in the same way. However, Hofstede (1991) makes a clear distinction between national culture and organizational culture, stating that because an organization is not a nation, the use of the term culture for both can be misleading. One major difference he points out is that people can, most of the time, freely choose which organization to join, unlike a nation, where belonging is determined by birth. To distinguish between the two, Hofstede subdivides culture into national, occupational and organizational levels and argues that national culture is based mainly on values and less on practice, whereas occupational and especially organizational culture are based more on practice than on values, as the acquisition of values in human beings is largely completed by the age of 10. Values are mainly established through socialization with family members, whereas organizational practice is acquired through socialization with co-workers in an organization, through which its symbols, heroes and rituals are learned (Hofstede, 1991). It is therefore argued that most values enter an organization through hiring, whereas organizational practice is trained and acquired through socialization with members of the organization, which establishes an organizational culture. Consequently, managers in an organization play an important role in maintaining organizational values. Another important characteristic of organizational culture is its strength. A culture can be described as weak or strong depending on the clarity of its boundaries and the discrimination between inside and outside (fuzzier vs. clearer). An extremely
strong organizational culture may show the characteristics of a cult. In a study conducted by Collins (2000), 13 out of 18 observed "visionary companies" with strong organizational cultures were found to have cult-like characteristics such as a fervently held ideology, indoctrination, tightness of fit and elitism. Although this finding might raise the concern that such organizations have an authoritarian culture of total control, Collins describes this cult-like organizational culture as being accompanied by high levels of operational autonomy, and therefore as stimulating progress. In line with Hofstede's definition of organizational culture, the different dimensions of organizational culture also require different measurement practices, with more focus on the practices of the organization and less on its values. The IRIC study (Hofstede, Neuijen, Ohayv & Sanders, 1990) of 10 organizations in Denmark and the Netherlands provides a six-dimensional model of organizational culture as perceived common practices:
1. Process oriented vs. results oriented;
2. Employee oriented vs. job oriented;
3. Parochial vs. professional;
4. Open systems vs. closed systems;
5. Loose control vs. tight control;
6. Normative vs. pragmatic.
These dimensions are not to be seen as normative descriptions of how an organizational culture should be constructed (e.g., that a results orientation is superior to a process orientation), but should instead be viewed as descriptions of a particular organizational culture. Gordon (1991) identifies the industry to which an organization belongs as an important moderator in framing organizational culture. Changes in the industry environment or customer requirements are likely to change the organizational culture, as some traits and behaviors that were effective beforehand may no longer be so, and thus require adjustment. While Gordon identifies industry as an important moderator of organizational culture, studies conducted in the software industry are still quite rare. Especially in this dynamic and knowledge-intense industry, organizational culture may have an impact on outcome variables. To gain a better understanding of the
relationship between organizational culture and productivity and quality in the software industry, Mathew (2007) employs a mixed-methods approach in a study of two Indian companies. He identifies eight dimensions of organizational culture at these two software companies:
1. Empowerment;
2. Agreement (on issues on the basis of mutual give and take);
3. Integrity or core values;
4. Knowledge sharing or organizational learning;
5. Concern for employees and trust;
6. Mission (vision, strategic direction and emphasis on goals and objectives);
7. Customer focus; and
8. High-performance work orientation.
Mathew analyzes the data and calculates correlation factors between the cultural dimensions he identifies and quality and productivity:
Figure 53: Correlation between dimensions of organizational culture in two software companies and quality/productivity, n=464 (Mathew, 2007)
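The mechanics of such a correlation analysis can be illustrated with a minimal sketch; the respondent scores below are invented placeholders and do not reproduce Mathew's survey data:

```python
# Minimal sketch of correlating an organizational culture dimension
# with an outcome variable such as quality or productivity.
# All scores are invented placeholders, not Mathew's survey data.

from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical per-respondent ratings (1-5 Likert scale).
knowledge_sharing = [4, 5, 3, 4, 5, 2, 4]
quality_rating    = [4, 5, 3, 3, 5, 2, 4]

print(f"r = {pearson_r(knowledge_sharing, quality_rating):.2f}")
```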
As would be expected in a knowledge-intense industry, knowledge sharing, and the sharing of tacit information in particular, is positively related to the quality of software development. Knowledge sharing was frequently observed in both companies and was practiced not only formally, but also informally in coffee corners and over lunch. The complexity and short product life cycles (versioning) of software products required high-performance teams to deliver the product on time and in line with quality standards. In the Indian companies Mathew studied, a high-performance orientation was evident among employees, characterized by long work hours to meet such tight timelines. A clear mission and adherence to standards and processes were communicated, practiced and reinforced by top
management throughout the organization. This vision and corporate alignment were identified as having a positive correlation with both quality and productivity, as they helped to reduce ambiguity and internal friction. In a particularly interesting finding, ambient artifacts such as parks, gardens and gyms contributed to a productive environment by allowing developers "to refresh the mind" when working on complex software development problems.

Relevance of Organizational Culture in Business Management
In today's corporate environment, managers are subject to increasing pressure to deliver more innovative products in ever-shorter cycles at a high level of profitability. Business managers thus have to engage in a continuous search for ways to build, increase or optimize the company's capabilities to successfully sustain its position or excel in this competitive environment. Practitioners try to understand what makes great companies successful, often in the hope of emulating their practices or procedures to drive better performance in their own company. It is clear that in a highly competitive environment, the efficient management of resources (financial, human and other) is critical for the overall success of the company. A mismatch between people and the organizational environment is costly, as it leads to higher staff turnover, lower job satisfaction, and greater strain among non-fitting employees who struggle to cope and to excel in their environment. Costs arise through the replenishment of resources for open positions created by employee turnover, reinvestment to rebuild know-how, and the initial lower productivity of new employees when performing new tasks (the learning curve). During the last three decades, various researchers have studied successful enterprises to understand the elements and factors that make them superior to their competitors (Collins, 2001; Collins, 2000; Peters & Waterman, 1982; Womack et al., 1992). One of the moderators of success identified in this context is the impact of organizational culture on output variables and the overall success of an enterprise. It is generally assumed that an organization's culture supports the company's strategy or its ability to execute that strategy better than its competitors.
Positive and Negative Aspects of Organizational Culture
Similar to formal organizational structures, informal organizational structures like organizational cultures can have either positive or negative implications for organizational innovation, flexibility and performance. These different implications have been the subject of recent organizational culture studies (Alvesson, 2002; Mathew, 2007a; Saffold, 1988; Sørensen, 2002; Wiener, 1988) that provide insights into their nature. The positive aspects brought forward by these studies emphasize that organizational culture creates organizational unison and hence reduces friction in the organization, thus increasing operational efficiency. Among the positive implications of organizational culture identified are seamless communication, quick decision making, fast implementation, reduced control efforts, mobility, team spirit and stability. Most of the negative implications of organizational culture relate to the inability of an organization to react to changes in the external environment and to stay innovative and flexible. Examples mentioned here include a tendency toward isolation, the devaluation of new orientations, barriers to change, remaining fixed on traditional success patterns, and enforcing one's own culture at the expense of diversity. These negative aspects of organizational culture surface and become an invisible barrier especially when organizations seek to embrace organizational transformation and change (Lorsch, 1986).

Organizational Culture and Performance
Since Deal and Kennedy (1982) raised the notion that organizational patterns and culture differ between successful and less successful organizations, both academics and practitioners have been interested in understanding the link between organizational culture and an organization's performance. The common belief is that an organization benefits from a highly motivated workforce committed to common organizational goals and shared values, and that the stronger the organizational culture, the better the organizational performance. However, Sørensen (2002, p. 70) shows that this assumption is not universally valid, as "strong-culture firms excel at incremental change but encounter difficulties in more volatile environments". It can therefore be assumed that strong organizational cultures function in similar ways to strong formal organizations, sharing the benefits of efficiencies in
established markets and the challenges of highly volatile markets, thus focusing an organization on knowledge exploitation rather than knowledge exploration.
3.4.
External Dynamics
The global software industry is a highly dynamic environment that has been characterized by constant change throughout its short existence, driven by phenomena such as globalization, disruptive innovations, and mergers and acquisitions. While these phenomena have been mentioned in previous sections of the thesis, this section provides a brief outline and definitions of them to create a foundation for the solution architecture set out in Chapter 4.
3.4.1.
Globalization
The term "globalization" has often been used ambiguously to describe a phenomenon, a cause or a state (Bathelt & Glückler, 2003). This thesis understands globalization as "a social process in which the constraints of geography on economic, political, social and cultural arrangements recede, in which people become increasingly aware that they are receding and in which people act accordingly" (Waters, 2001, p. 5). This receding of constraints that Waters mentions, e.g. through advances in transportation and information and communication technology (ICT), leads to a compression of time and space (Harvey, 2004) that accelerates the business environment and requires companies to accelerate their business operations in response. Contrary to the widespread belief that globalization has leveled the playing field of economic activity (Friedman, 2006), globalization is still an ongoing process, leaving the world presently in a semi-globalized state (Ghemawat, 2007) with a "spiky distribution" of clustered global economic activity (Florida, 2005). Of the several theories of firm globalization, two are discussed as exemplars here: Dunning's (1988) eclectic model, which draws on transaction cost economics among other disciplines, and the Uppsala model of globalization (Johanson & Vahlne, 2009; Johanson & Vahlne, 1977), which takes an organizational learning perspective on firm globalization. Dunning combines several theories in his theoretical framework to explain the globalization of firm activities. In his model, firms choose one of three forms of market entry (export, licensing or foreign direct investment)
depending on the presence of the three advantage categories: ownership, location and internalization (see Figure 54).
Figure 54: The eclectic paradigm of international production (own graphic based on the framework of Dunning (1988))
Ownership advantages refer to the competitive advantages firms possess through their ownership of, for example, trademarks, IP and R&D capabilities. Locational advantages refer to factor advantages such as cost differentials between foreign and home markets, and to preferential tariffs, taxation or subsidies offered by foreign governments. In Dunning's model, internalization is based on transaction cost theory and provides an advantage if goods or services can be produced internally at lower cost than would be possible through their procurement via foreign markets or through knowledge transfer to external parties. According to Dunning's model, firms decide to make foreign direct investments, and thus globalize, in the presence of advantages in all three categories.
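The entry-mode logic implied by the eclectic paradigm can be summarized in a short sketch; the mapping below follows the common textbook reading of Dunning's framework and is an illustration rather than a definitive implementation:

```python
# Sketch of the market-entry logic implied by Dunning's eclectic
# paradigm: the combination of advantage categories present
# suggests an entry mode (common textbook reading of the model).

def entry_mode(ownership: bool, location: bool, internalization: bool) -> str:
    if not ownership:
        return "no foreign engagement"  # no competitive advantage to exploit
    if internalization and location:
        return "foreign direct investment"
    if internalization:
        return "export"     # internalize production, but keep it at home
    return "licensing"      # sell the ownership advantage to foreign parties

print(entry_mode(ownership=True, location=True, internalization=True))
# -> foreign direct investment
```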
While Dunning's model has been criticized for its neglect of dynamic factors such as firm strategy and environmental influences, its lack of empirical validation, and its assumption of rational decision making that leaves aside aspects of behavioral science, including political behavior (Perlitz, 1997), it provides basic insights into the market entry considerations of firms that can be exploited to inform this thesis. The second model explaining firm globalization is the Uppsala model of globalization. Based on an empirical study of Swedish firm internationalization between 1966 and 1975, Johanson and Vahlne (1977) developed a model that explains
how firms globalize through gradual engagement and experience building in foreign markets. In their process model, market knowledge and market commitment, the latter of which encompasses already committed resources and the degree of commitment to the particular market, affect commitment decisions and how globalization activities are performed (see Figure 55).16
Figure 55: The basic mechanism of internationalization (Johanson & Vahlne, 1977, p. 26)
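The self-reinforcing loop of this basic mechanism can be made concrete with a deliberately simplified simulation; the update rules and coefficients below are invented for illustration and are not part of Johanson and Vahlne's model:

```python
# Deliberately simplified simulation of the Uppsala mechanism:
# market knowledge and market commitment reinforce each other in
# small increments. Update rules and coefficients are invented.

knowledge, commitment = 0.1, 0.1  # initial state in a new foreign market

for year in range(1, 6):
    commitment += 0.2 * knowledge   # commitment decisions grow with knowledge
    knowledge += 0.3 * commitment   # current activities generate new knowledge
    print(f"Year {year}: knowledge={knowledge:.2f}, commitment={commitment:.2f}")
```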
In this model, firm globalization follows a step-by-step approach to keep the risk of globalization manageable in the face of initial uncertainty, a lack of knowledge of the foreign market, and the costs of information acquisition. Unlike Dunning's model, the Uppsala model provides a dynamic, longitudinal perspective of firm globalization that incorporates behavioral aspects. It has, however, been criticized for the unilinear path it describes, which leaves little room for strategic decision making and might thus be more applicable to describing the early stages of firm globalization (Andersen, 1993). This thesis utilizes both theories in constructing the solution architecture: Dunning's model provides advantage categories that can easily be verified in this study of the globalization of SAP, while Johanson and Vahlne (1977) provide a more dynamic model that can be used to describe and verify SAP's rapid global expansion in recent years. The two theories are thus seen as complementary in the search for a more comprehensive theory of globalization, as they provide different angles on SAP's globalization and global R&D network management.
16 It could be argued that the learning mechanism described in the model of Johanson and Vahlne functions in a similar way to the concept of absorptive capacity, where present knowledge acts as a modulator of future knowledge acquisition (Cohen & Levinthal, 1990).
3.4.2.
Disruptive Innovations
Rapid technological change has been omnipresent throughout global high-tech industries in recent decades, and such change has also shaped the global software industry through, for example, shifts in technology from mainframes to client-server systems and, more recently, to software as a service (SaaS). Along the way, many companies in the high-tech area once considered industry leaders, such as Digital, Baan and, more recently, Nokia, have failed to capitalize on new technologies and faded away. Observing this scenario of Schumpeterian creative destruction (Schumpeter, 1934), Bower and Christensen (Bower & Christensen, 1995; Christensen, 2003) investigated this pattern of industry leadership failure and coined the term disruptive technology, or later, disruptive innovation, as a cause of such failure. Most technological innovations fall into the category of sustaining technologies that improve performance but do not change markets. From time to time, however, disruptive innovations occur that offer "a very different package of attributes from the one mainstream customers historically value, and they often perform far worse along one or two dimensions that are particularly important to those customers. As a rule, mainstream customers are unwilling to use a disruptive product in applications they know and understand. At first, then, disruptive technologies tend to be used and valued only in new markets or new applications; in fact, they generally make possible the emergence of new markets" (Christensen, 2003, p. 15). While disruptive innovations target future customer needs, they rarely address the needs of an established customer base. Only as further R&D investments are made and the disruptive technology matures can it serve the mainstream market (see Figure 56). It is therefore no surprise that compelling business cases for disruptive innovation can rarely be made in the resource allocation process of established companies (Bower & Gilbert, 2005), as this would require diverting resources from highly profitable engagements with existing customers to invest in disruptive innovations for customers deemed too insignificant (Christensen, 2003). Christensen thus suggests that companies listen to customers beyond their established customer base to identify new products with the potential for profitable growth and thereby justify investment in disruptive technologies.
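The trajectory logic sketched in Figure 56 can also be illustrated numerically; the slopes and intercepts below are arbitrary illustrative values, not empirical data:

```python
# Illustration of the Figure 56 logic: a disruptive technology starts
# below the performance demanded by the mainstream market but improves
# faster, eventually crossing that demand. All values are arbitrary.

def first_crossover_year(demand0, demand_slope, disrupt0, disrupt_slope, horizon=30):
    """Return the first year in which the disruptive trajectory
    meets the performance demanded by the mainstream market."""
    for year in range(horizon + 1):
        if disrupt0 + disrupt_slope * year >= demand0 + demand_slope * year:
            return year
    return None

year = first_crossover_year(demand0=50, demand_slope=2, disrupt0=10, disrupt_slope=6)
print(f"The disruptive technology can serve the mainstream market from year {year}.")
# -> year 10
```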
Figure 56: The impact of sustaining and disruptive technological change (Christensen, 2003, p. xvi)
Waiting until the market is ready for the disruptive innovation typically means waiting until new entrants are in a position to challenge incumbents and it is too late to react, often creating a winner-takes-all situation through network and lock-in effects caused by the high switching costs customers face, e.g. when moving to competing offerings from other software vendors. As the disruptive technology may not fit within mainstream resources and organizations, Christensen further suggests developing the disruptive technology business in an independent organization and keeping it independent. Christensen's conceptualization of disruptive innovation has been criticized for lacking both a concise definition of disruptive innovation and ex ante predictions for disruptive technologies (Danneels, 2004), issues that are especially important for practitioners. Despite this criticism, Christensen's disruptive innovation framework stresses the importance of proactively managing the resource allocation process to let the right ideas "bubble up" (Kuemmerle, 2005) by providing a suitable structural context, a factor previously pointed out in the review of the resource allocation process in the firm globalization context in section 3.2.2. Disruptive technologies can have a considerable impact on the structure of global R&D networks through the formation of new product development teams working on such technologies and their allocation among R&D sites within the
network, which often results in resource shifts or the termination of obsolete products in product development.
3.4.3.
Mergers and Acquisitions
The term 'merger' refers to the consolidation of two companies to form a new entity, whereas an 'acquisition' refers to the purchase of a target company; both terms are often used interchangeably and subsumed under the acronym "M&A" (Bruner, 2009). The motivation for M&A is typically a combination of various motives, such as operating synergies (economies of scale and scope), financial synergy, diversification, market power, strategic realignment due to technological or regulatory change, hubris, buying undervalued assets, power aspirations or tax considerations (DePamphilis, 2011). The focus of M&A in the global software industry lies in the acquisition of new technologies, products and resources that either complement or extend the existing portfolio, as reflected by several M&A strategy statements:

SAP: "We continue to undertake targeted acquisitions to support and complement our core focus of product and technological innovation." (Popp, 2010)

Oracle: "By combining with strategic companies, Oracle strengthens its product offerings, accelerates innovation, meets customer demand more rapidly, and expands partner opportunity." (Oracle, 2009)

Microsoft: "[M&A of] Companies that could bolster Microsoft's position in categories that it is already a player in but does not dominate."17

17 http://bijansabet.com/post/108164554/met-with-chris-liddell-cfo-at-microsoft-yesterday

The high velocity of the global software industry and the constant threat of disruptive innovations create a high degree of pressure to innovate. As it is often difficult to innovate in large enterprises due to innovation inhibitors such as inertia, path dependence, isomorphism, or bureaucratic structures and control systems that stifle personal autonomy and individual creativity (Carrier, 1994) (compare with section 3.1.7), M&A is typically seen as a path for strategic realignment in dynamic environments to rapidly exploit new products enabled by new technologies (DePamphilis, 2011). However, M&A does not necessarily present a more successful path in the competitive marketplace of the global software industry, as the success of M&A
transactions varies considerably, as shown by several meta-analyses in which, "in aggregate, abnormal returns accruing to acquiring firms in the years following an acquisition are negative or, at best, not statistically different from zero" (Cartwright & Schoenberg, 2006, p. S2), which is "consistent with returns in competitive markets" (DePamphilis, 2011, p. 33). It is unfortunate that academic research has not to date been able to improve M&A practice to bolster the success of such transactions; as Cartwright and Schoenberg (2006, p. S4) note, "M&A research has now been ongoing for over 30 years [...], despite this robust academic interest, empirical data reveal that there has been little change in acquisition failure rates over the same time period".

Mergers and acquisitions have a substantial impact on the global R&D setup of multinational companies; as Boutellier et al. (2008c, p. 718) point out, "often, international R&D sites are a consequence of non-R&D related corporate decisions such as mergers and acquisitions, and tax optimization". Mergers and acquisitions structurally alter the global R&D network of the acquiring company, often creating a polycentric setup through the addition of R&D sites belonging to the acquired company (Grimpe, 2005) and triggering R&D site consolidation. Further structural alterations occur as well, as M&A also affects the resource allocation process: the external acquisition of technological knowledge substitutes for the firm's own R&D activities and thus triggers an alternative use of investment budgets (King, Slotegraaf & Kesner, 2008), which introduces changes to the R&D network structure over time. The global software industry is a high-frequency M&A industry in which multiple acquisitions across the size spectrum typically occur within a given business year, resulting in constant changes in firms' global R&D networks. Among the larger software companies, Oracle has acquired 39 companies since 2005, Microsoft has acquired 85 targets since 2005 (worth around $13 billion), and SAP has made 31 key acquisitions since the 1990s. Some of the larger companies acquired by SAP, such as Sybase and Business Objects, had already conducted multiple acquisitions prior to becoming part of the SAP group.

Conclusion: External Dynamics
The three phenomena of globalization, disruptive innovations and M&A activities exemplify the considerable external dynamics to which global software firms are subject. Globalization has triggered the significant global dispersion of what was previously an ethnocentric, centralized software development function across the software industry. The Uppsala model of globalization describes globalization as a gradual learning process. As firms have learned how to set up
and manage foreign subsidiaries, more R&D activities have gradually shifted to foreign sites, a trend that has accelerated further in recent years, with substantially higher growth rates in foreign subsidiaries than those seen in home country sites. Because globalization remains an ongoing process, it can be assumed that factors like access to talent, labor cost arbitrage and proximity to new customers will continue to drive global decentralization in firms for the foreseeable future. Disruptive innovations, which frequently occur in the global software industry, over time render an existing customer base obsolete. However, large enterprises have difficulty reacting to disruptive innovations in a timely manner, as they are often unable to engage in continuous renovation or creative destruction to divert resources towards new disruptive innovations. The resulting obsolescence of products and the establishment of new product development teams in the organization result in continuous structural alterations of R&D networks through new team allocations among projects and locations. The global software industry has also been characterized by frequent M&A activities, often triggered by the need to acquire new technologies and resources that enable innovation. M&A activities not only add new nodes to global R&D networks, but also tend to substitute existing resources, change resource allocations over time, and thus alter the internal structure of the R&D network.

In sum, external dynamics introduce continuous change to global software enterprises and constantly alter the structure of their global R&D networks. Globalization continues to disperse resources, disruptive innovations affect internal resource allocations, and M&A adds new nodes to R&D networks and substitutes existing R&D resources. Therefore, organizational design and global R&D network management activities must take into consideration the continuous influx of new locations and the constant structural changes resulting from external dynamics. The sustainable effectiveness of global R&D networks requires an awareness of these dynamics and the agility to sense and respond (Haeckel, 1999) to changes. This indicates a clear need for global R&D network management to continuously realign resources and organizational structures through sensing, seizing and transforming, thus establishing global R&D network management as a dynamic capability.
CHAPTER 4
INITIAL SOLUTION ARCHITECTURE
To answer the research question following the principles of the action design research methodology, this chapter constructs the initial solution architecture that guides the empirical enquiry in this thesis. The solution architecture is described as initial, as it continues to evolve as the study progresses and after the thesis has been completed. While Chapter 2 reviews literature describing and analyzing the phenomena of globally distributed software development, Chapter 3 provides the theoretical underpinnings relevant to this thesis. This chapter integrates both streams of literature to form a solution architecture that focuses the empirical enquiry and supports an answer to the research question. The solution architecture shown in Figure 57 draws upon concepts and relationships of work design, R&D network management, and strategic management in the context of an MNC engaged in globally distributed software development. At the core of this study is the R&D network management function, which acts as a broker or coordinator in assigning R&D activities, previously decomposed and integrated through work design in the R&D organization, to suitable R&D locations. This assignment process should occur in line with the strategies adopted by the strategic management function of the MNC under study. Strategic management defines the overall corporate strategy, which is cascaded down into functional strategies: an R&D strategy for the R&D organization, a location strategy for the R&D network function, and other functional strategies such as human resource and facility strategies concerning global R&D locations. All of these functions are subject to the dynamics of both the internal organizational environment and the global environment. Based on the findings of the literature review, a more detailed description of each function and the surrounding environments is developed below.
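The core objects of this architecture can be made concrete with a minimal data-structure sketch; all class names, sites and the matching rule below are hypothetical illustrations rather than elements of the actual solution:

```python
# Minimal sketch of the core objects in the solution architecture:
# work design decomposes a product into tasks, and R&D network
# management allocates tasks to suitable locations. All names and
# the matching rule are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    required_skill: str      # e.g. "JAVA" or "testing"

@dataclass
class Location:
    name: str
    skills: set              # capabilities available at the site

def allocate(task: Task, locations: list):
    """Assign a task to the first location whose capabilities match.
    A real allocation would also weigh location strategy, cost and
    existing team structures."""
    for loc in locations:
        if task.required_skill in loc.skills:
            return loc
    return None

network = [Location("SiteA", {"JAVA", "architecture"}),
           Location("SiteB", {"JAVA", "testing"})]
print(allocate(Task("Test Module 1", "testing"), network).name)  # -> SiteB
```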
4.1.
Scope and Boundaries of this Research
The scope of this study is defined under the following considerations:

R&D activities
This study focuses on R&D activities and does not cover other organizational activities related to supply chain management or administration. R&D activities are especially knowledge-intense activities that often require highly educated and skilled employees and mostly comprise complex tasks involving a high level
of coordination. Therefore, R&D activities are more strategic in nature than the operational tasks of an enterprise, and are of special concern for the long-term development of the enterprise (Gerpott, 2005). In this thesis, R&D activities should be understood in line with the definition of Matheson and Matheson (1998, p. 1), who define the term R&D in the broadest sense to mean "any technologically related activity that has the potential to renew or extend present business or generate new ones, including competency development, technological innovation, and product or process improvement".

R&D in global software development
These R&D activities are studied in the global software development context, characterized by its complexity and interconnected, globally dispersed R&D activities. The software industry is an example of a knowledge-intensive high-tech industry utilizing knowledge workers.

Large-scale in-house global software development
This study is concerned primarily with the in-house development of large-scale software artifacts, and focuses less on the allocation of R&D activities to third-party outsourcing, R&D alliances or collaboration with external R&D networks. The main reason for this focus is that the organization under study does not currently apply these latter concepts on a significant scale in its development function. In addition, the challenges of globally distributed software development only occur in large-scale software development, as the development of smaller software systems is mostly organized in a collocated setup and thus does not provide a suitable environment in which to study the management and improvement of a global R&D organization. Despite the advent of modern concepts such as open source projects, software as a service (SaaS) and cloud computing, the main focus of this study is the development of "traditional" software typically installed on physical machines (servers) in enterprises. It also needs to be pointed out that this study is more concerned with the organizational aspects of software development and focuses less on the different technologies and tools used for globally distributed software development. While R&D networks typically include external collaboration partners, in the context of this thesis the term "R&D networks" is understood to refer to purely internal networks according to the definition of Miles and Snow (1992), with a clear focus on the operations network as defined by Doz et al. (2001).
Figure 57: Initial solution architecture for this study (own graphic)
4.2.
Elements of the Initial Solution Architecture
Strategic Management
Strategic management is concerned with enhancing the performance of an enterprise through intended and emergent strategies to achieve sustainable competitive advantages in the marketplace, thus ensuring the long-term survival of the enterprise (Nag et al., 2007). Because R&D network improvements in MNCs also target improved performance, a close linkage with strategic management is required in this study from both the theoretical and practical perspectives. Along with the changing business environment, strategic management has evolved considerably since its early conception as business policy focused on strategic planning. The intended, deliberate strategies on which strategic management initially focused created problems surrounding the acceptance of such "imposed strategies" (Mintzberg, 1994a; Mintzberg, 1994b). Strategic management scholars have thus acknowledged the need to also incorporate emergent strategies to react to changing environments. The strategic management of an enterprise in the global software industry is a challenging undertaking. In this high-velocity environment (Eisenhardt & Bourgeois, 1988) featuring hypercompetition (D'Aveni & Gunther, 1994), adopting a market-based view "due to competition arising from the structure of the market" (Makhija, 2003, p. 433) and the possession of valuable, rare, imperfectly imitable and non-substitutable (VRIN) resources (Barney, 1991) are no longer sufficient to achieve sustainable competitive advantage (Eisenhardt & Martin, 2000). Dynamic capabilities have been suggested as a means to this end, as they continuously renew competencies to adapt to changing market environments through "adapting, integrating, and reconfiguring internal and external organizational skills, resources and functional competences" (Teece et al., 1997, p. 515). In the context of global R&D network management, two levels of dynamic capabilities exist. The first level represents R&D itself as a dynamic capability (Helfat & Raubitschek, 2000); the second level represents the R&D network management function in multinational companies (Doz et al., 2001). Creating the organizational function of R&D network management requires that it be designed as a dynamic capability enabling the firm to react to environmental factors, thus ensuring a dynamic reconfiguration of adopted capabilities. Teece describes the key abilities of
dynamic capabilities as sensing, seizing and managing threats to transform the enterprise (Teece, 2009). The R&D network management function must therefore incorporate these key abilities for a successful recombination of network R&D resources. Thus, in creating an R&D network management function as part of this study, Teece's conceptualization is adopted to facilitate the design of a flexible, reactive organization for a fluid enterprise (Schreyögg & Sydow, 2010) in the high-velocity global software industry. Several authors have advanced conceptualizations of the components of dynamic capabilities (Teece, 2009, p. 48), how to achieve total dynamism (Eisenhardt & Martin, 2000), how to form dynamic capabilities through learning (Zollo & Winter, 2002), and the implementation of competence monitoring (Schreyögg & Kliesch-Eberl, 2007). This study considers these findings in creating the R&D network management function as a dynamic capability.
R&D Network Management
The management of globally dispersed R&D units is a multilayered problem that operates in geospatial, hierarchical, project process and informal network layers (Boutellier et al., 2008e). Previously collocated R&D management increasingly occurs in the form of global R&D networks (Gassmann & von Zedtwitz, 1999). However, organic growth along with opportunistic behavior often creates uncoordinated "jungle growth" (Boutellier et al., 2008e), which only in a few cases takes account of the firm's intended strategies or potential synergies with existing R&D units (Kuemmerle, 2005, p. 181). This type of opportunistic behavior is a result of emergent strategies that develop in an unintended fashion alongside intended strategies and can only be recognized as such ex post (Welge & Al-Laham, 2008), or, as Mintzberg and Waters (1985, p. 257) describe this phenomenon, a "pattern in a stream of action".

Allocation
Emergent strategies should not be seen in a negative light, as they are a reaction to external and internal environmental stimuli (Ortmann, 2010) and can capture opportunities that would be too volatile for a formal intended strategy. In terms of the allocation of global R&D activities, it can be assumed that the advantages of globalization, such as factor cost advantages and access to talented resources and markets, provided a strong stimulus that fostered opportunistic behavior and initially led to an emergent strategy of R&D internationalization in
This strategy was later formalized as an intended strategy of R&D internationalization in many companies. Emergent strategies can be executed through the provision of required resources in the resource allocation process (Bower, 1986). The resource allocation process determines which emergent and intended strategies receive funding (Christensen & Dann, 1999) and can be executed. Based on Bower's definition of the three sub-processes in the resource allocation process (definition, impetus, and structural context), let us assume that R&D managers propose (define) and lobby for (impetus) allocation decisions to establish new international R&D locations and that the MNC provides an open environment (structural context) so these emergent strategies receive the resources required for implementation. Emergent strategies for geographical expansion, however, often underestimate costs (Kuemmerle & Ellis, 1999), and business cases are often tweaked to pass hurdles set up by the relevant decision-making bodies to ensure the required resources are received (Kuemmerle, 2005, p. 181). Eisenhardt and Bourgeois (1988) identify this political process of "game playing" (Allison, 1971) as beneficial in quickly changing high-velocity environments such as the global software industry. Here, organizational politics may serve as a form of organizational adaptation despite their identified negative correlation with organizational performance and the resulting "jungle" growth (Boutellier et al., 2008e) of R&D networks that require improvement.
Improvement
R&D network improvements are aimed at raising the overall performance and productivity of the R&D network. As discussed in section 2.4.6, measuring productivity in the software R&D context is inherently difficult. Therefore, before R&D network improvement can occur, improvement criteria that take the challenges of productivity measurement into account have to be defined. R&D is a knowledge-intensive activity, and effective communication has been identified as a key success factor in R&D (Allen & Cohen, 1969; Allen & Henn, 2007). Fisch (2003) thus proposes communication efficiency as a key criterion in achieving an optimal dispersion of R&D activities. Among the several desirable properties of loosely coupled (Weick, 1976) or modular organizations is communication efficiency, which reduces overall coordination and control efforts. It can thus be assumed that a communication-efficient setup in a global R&D organization would result in a modular organization. Here, Sosa et al. (2004) have identified a match between product architecture and organizational design as a main criterion. This thesis uses Fisch's study of the optimal dispersion of R&D allocations and Sosa et al.'s investigation of modularity and modular organizations as a foundation for improving a global R&D network.
Improvements need to consider both the historic and future context of the R&D network. While modifications to the resource allocation process can effectively alter future resource allocation outcomes in accordance with defined improvement criteria, a thorough analysis and reorganization of the historically grown portfolio of global R&D locations is also required in line with these criteria. Organizational transformation and change methods typically reflect a step-by-step approach (Greiner, 1967; Kotter, 1995; Kotter, 2007; Kotter & Schlesinger, 2008) in transforming an organization by unfreezing, changing and refreezing (Lewin, 1943; Lewin, 1958). These one-off changes, however, do not suffice in a high-velocity environment that requires constant adaptation. Conceptualizations such as the self-designing organization (Hedberg et al., 1976; Huber, 1991; Weick, 1977), which is based on research into organizational learning, are aimed at achieving a "chronically unfrozen organization" (Weick, 1977) that becomes an ever-changing entity. While the self-designing organization represents an attempt to ready the organization for continuous change, it must be questioned whether such a conceptualization, described in a similar form by Eisenhardt and Martin (2000) as total dynamism, can be operationalized (Schreyögg & Kliesch-Eberl, 2007) and whether the costs of reorganization can be contained. In this context, Sosa et al. (2004) provide methodologies to identify misalignments between product architecture and organizational design. Continuous adjustment of an organization may cause significant adjustment costs, such as relearning or severance costs for employees who choose not to be part of a frequently changed system. While the methods and conceptualizations of organizational transformation and change presented above are either one-off or go too far in dissolving the structure of the organization, organizational learning and adjustment through a single-loop, double-loop or even meta learning process are critical preconditions for sustainable organizational change (Argyris, 1994; Visser, 2007). Therefore, these conceptualizations are adopted in the present study of the organizational design of sustainable global R&D network management.
Work Design
In the context of globally distributed software development, work design needs to consider the contingencies of task interdependencies (Kumar et al., 2009; Thompson, 1967; van de Ven et al., 1976), the stickiness of information (Kumar et al., 2009; von Hippel, 1994) and multiple distances (Ghemawat, 2007) in the decomposition and integration of software development activities. First, the iterative and collaborative problem-solving nature of software development, together with its inherent uncertainty, complexity and workforce differentiation (Fenema, 2002), typically creates tasks with a high degree of interdependency, such as reciprocal interdependencies, that lead to high coordination costs. Second, the transfer of information between collaborating parties causes costs (Teece, 1977), also referred to as "stickiness", requiring additional efforts and/or costs to move the information to the locus of problem solving (von Hippel, 1994). The high coordination costs of highly interdependent tasks in software development, combined with the high degree of stickiness of such R&D activities, led software development to occur primarily in the headquarters of global software enterprises. Third, cultural, administrative, geographical and economic distances (Ghemawat, 2007) amplify the effects of high task interdependencies and high degrees of stickiness on globally distributed software development work design, and thus further increase coordination and information transfer costs. Software development deals with ill-structured problems (Simon, 1973), which are addressed iteratively, such as through heuristics (Pólya, 1971). Therefore, the decomposition of software development work does not yield a complete work breakdown structure; task partitioning stops at a relatively high level, with chunks of work left to teams for further intra-team decomposition (Pimmler & Eppinger, 1994). Work integration typically occurs on the basis of the work breakdown structure of the software artifact, which in modern software development is represented by modular software architectures. Modular product architectures typically lead to modular organizations (Sanchez & Mahoney, 1996) that map responsibility for the development of a particular software module to an organizational module (unit). This approach applies the software engineering principles of cohesion and coupling to organizational design: it achieves both tight inner cohesion within an organizational module and loose coupling between organizational modules, thus focusing communication flows within the organizational unit and ensuring that structured communications with other organizational modules (units) take place via clearly designed organizational interfaces to reduce overall coordination and information transfer costs.
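The cohesion and coupling logic described above can be illustrated with a small numerical sketch. The following Python fragment is purely illustrative: the binary task dependency matrix, the task-to-unit assignment and the simple efficiency ratio are hypothetical constructs chosen for exposition, not measures taken from Fisch (2003) or Sosa et al. (2004).

```python
# Minimal sketch: how well does an organizational design focus communication
# within units, given a (hypothetical) binary task dependency matrix?
from itertools import product

# dep[i][j] = 1 if task i depends on task j (a simple binary DSM)
dep = [
    [0, 1, 1, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 0, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
]
# Assignment of each task to an organizational unit (module)
unit = ["A", "A", "A", "B", "B"]

intra = inter = 0
for i, j in product(range(len(dep)), repeat=2):
    if dep[i][j]:
        if unit[i] == unit[j]:
            intra += 1  # communication stays inside one unit (cohesion)
        else:
            inter += 1  # communication crosses unit boundaries (coupling)

# A communication-efficient design maximizes the share of intra-unit links.
print(f"intra-unit links: {intra}, inter-unit links: {inter}")
print(f"share of communication kept within units: {intra / (intra + inter):.2f}")
```

Comparing this share across alternative task-to-unit assignments gives a rough, quantitative sense of which organizational design keeps communication flows local and which forces coordination across unit interfaces.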
Communication efficiency is thus an important design principle not only in the design of software, but also in the design of knowledge-intensive organizations such as R&D functions in general (Allen & Cohen, 1969; Allen & Henn, 2007; Berg, 1975; Galbraith, 1973) and software development functions in particular (Amrit, 2008; Lang, 2004). Work design in software development requires considerable experience: as a complete work breakdown structure cannot be obtained, work has to be integrated at a relatively high level, and cost-effective or feasible work designs typically cannot be obtained on an ex ante basis. The role of experience and the necessity of organizational learning in obtaining a global work design in the global software industry cannot be overemphasized. Globally dispersed software development undertaken at multiple locations around the world requires that the locus of problem solving, whether physically or virtually, move iteratively among these locations, as software development involves highly sticky content (von Hippel, 1994). This results in increased information transfer and coordination costs compared with collocated software development, and led Ebert (2006) to the proposition that architectural splits in global work design be avoided wherever possible to ensure that development teams working on coherent tasks are not split across locations.
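Ebert's design rule lends itself to a simple automated check. The sketch below is again hypothetical: the component names, sites and the flat list of contributor assignments are invented for illustration, and a real analysis would draw on staffing or repository data rather than a hard-coded list.

```python
# Illustrative check for "architectural splits": flag coherent components
# whose contributors are spread across more than one development site.
from collections import defaultdict

# (component, site) pairs describing where each contributor works
assignments = [
    ("billing", "Walldorf"),
    ("billing", "Shanghai"),
    ("ui-shell", "Bangalore"),
    ("analytics", "Shanghai"),
    ("analytics", "Shanghai"),
]

sites_per_component = defaultdict(set)
for component, site in assignments:
    sites_per_component[component].add(site)

for component, sites in sorted(sites_per_component.items()):
    if len(sites) > 1:
        # Split components incur extra coordination and information transfer costs.
        print(f"{component}: split across {sorted(sites)} - candidate for consolidation")
    else:
        print(f"{component}: collocated at {next(iter(sites))}")
```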
Organizational Environment
Organizational design rarely starts from scratch; more often it represents an organizational redesign that occurs in the context of a specific organizational environment with characteristic factors that need to be considered in successful organizational transformation and change. The organizational environment influences how actors interact, what choices they make and what activities they pursue. It is mostly pre-shaped by the industry or industries in which the enterprise operates, its historic development, the behavior of its employees and its organizational culture. Furthermore, an organizational redesign does not enjoy unlimited degrees of freedom; the history of the organization typically limits design options (compare with sections 3.1.7 and 3.2.2). Options are also limited by statutory requirements such as labor laws and codetermination (Fitting, Wlotzke & Wissmann, 1978; Streeck & Kluge, 1999), especially in European countries; such requirements restrict the degree of change and often require compromises in reorganizations.
Insights gained throughout the (re-)design process can inform strategic management and change restrictions on the organizational side, thus creating new or adjusted design principles.
Global Environment
Advances in communication and transportation technology, the opening up and rapid development of previously closed economies, and the liberalization of world trade were factors that initiated the globalization of markets and enterprises. Today's enterprises thus increasingly operate in a global context, either by sourcing and distributing products from various countries or as truly global organizations with operations in various regions. Disruptive innovations and continuous M&A activities in the global software industry have further increased the speed of change to which global R&D organizations and networks are exposed. Therefore, in the formulation and implementation of organizational designs and redesigns, the dynamics and complexity of the global socio-economic environment must be taken into account. Changes in policies, statutory requirements, the competitive landscape, factor prices of resources or the political stability of countries can have an immediate impact on financial performance, supply chains and markets in an interconnected world. In making organizational design decisions, therefore, managers need to identify and capture design and redesign opportunities (e.g., factor cost advantages or access to resources and markets) and mitigate the challenges related to such decisions on a global scale, such as by weighing political risks, long-term increases in factor costs, and increased costs of coordination and control. Change initiatives and change projects occur in the context of the organization, and the potential impact and dynamics of the global environment need to be considered to ensure a successful (re-)design.
CHAPTER 5
RESEARCH METHODOLOGY
Chapter 1 highlights that although multinational corporations increasingly organize their global R&D functions in the form of global R&D networks, and despite the widespread adoption of this specific organizational form in recent years, studies that rigorously inquire into the improvement of such global R&D networks remain non-existent. Thus, this study cannot draw on existing theory of global R&D network improvement for guidance. Chapter 2 reviews the literature concerning phenomena observed in global R&D networks in the software industry, augmented by theories of work design, organizational design, strategic management, internal organizational dynamics and external socio-economic dynamics in Chapter 3. Chapter 4 then develops a solution architecture combining insights gained from the literature review concerning such phenomena and relevant theoretical underpinnings. This thesis employs an action design research methodology to pragmatically generate a design solution aimed at improving global R&D network management practices in the organization. It is also aimed at distilling a theory of global R&D network improvement to inform other academic research into this phenomenon. A participatory action research approach is selected, as this new phenomenon requires in-depth collaboration with actors engaged in the management and improvement of global R&D network organizations. This approach is further augmented with design research to form an action design research methodology (Sein et al., 2011), as successful organizational improvement requires that a variety of artifacts be constructed, such as strategies, policies, processes and ICT solutions. This chapter first provides an overview of the research methodology applied in this empirical study, including the research philosophy adopted and a detailed account of the research methodology chosen. The chapter then goes on to review research quality and the validity of the chosen research methodology, as well as ethical considerations affecting this thesis. It is important to point out that the author's intention is not to exhaustively contrast various research philosophies and paradigms, but to explain how the research paradigm and methods were chosen in light of the nature of this thesis.
5.1. The Pragmatic Epistemological Stance
A pragmatic research approach is adopted in this thesis to achieve the research objective and answer the research questions laid out in Chapter 1. This pragmatic epistemological stance is aimed at creating prescriptive knowledge to improve a given situation and finding "solutions to problems that actually occur in the complex and highly multivariate field of practice [that] are developed in a way that, while valid for a specific situation, need to be adjusted according to the context in which they are to be applied" (Pragmatic Validity, 2010, p. 1). The pragmatic philosophical tradition dates back to the late 19th century, when American authors such as C.S. Peirce, William James, G.H. Mead and John Dewey challenged the dominant positivist paradigm. Pragmatic epistemology "focus[es] on the outcomes of the research - the actions, situations, and consequences of inquiry - rather than antecedent conditions (as in postpositivism)" (Creswell, 2007, p. 22). American pragmatism has been marginalized in philosophical circles and is typically not included as a third point of comparison between positivism and interpretivism, "even though those two compelling points of view do not exhaust the paradigmatic possibilities" (Tashakkori & Teddlie, 1998, p. 22). Therefore, presently, "mainstream research in organization and management is modeled after natural science" (van Aken & Romme, 2009, p. 5) and takes a positivist epistemological stance aimed at gaining a value-free, absolute viewpoint through the application of scientific methods in a deductive and empirical approach. Pragmatists believe, however, that the absolute, objective viewpoint positivism seeks is an ideal whose pursuit should be abandoned, as "the belief that a neutral algorithm underscores all scientific activities rests on a selective and distorted view of science as an accomplished and neatly demarcated activity" (Baert, 2005, p. 148). The researcher is always embedded in a cultural context, with conventions, restrictions and language that make the acquisition of an absolute truth impossible (Straub & Boudreau, 2004). Goles and Hirschheim (2000) note that paradigms change if they no longer provide the desired results. As social research is intended to "map the social world as accurately and completely as possible" (Baert, 2005, p. 151), a positivist epistemological stance utilizing "methods of natural science ignore[s] the meaningful dimensions of social life and as a consequence, do[es] not allow for accurate depicting of the social" (Baert, 2005, p. 152).
As a result, the positivist paradigm has been criticized in the social sciences, as it is believed to lead to academic research lacking relevance to practice (Benbasat & Zmud, 1999), "minimal instrumental use of research literature and low participation in research by practitioners" (Hoshmand & Polkinghorne, 1992, pp. 56-57). In this context, Hoshmand and Polkinghorne (1992, p. 56) note the "need for theories of action that can inform practice and provide more adequate maps of the social realities of practice". Pragmatists strive for these theories of action, and it is thus no surprise that the pragmatist paradigm was recently revived through the work of neo-pragmatists like Donald Davidson, Richard Rorty, Willard Quine and Hilary Putnam (Baert, 2005) to overcome the limitations of the positivist paradigm in the social sciences. Pragmatists like Rorty (1991) challenge the viewpoint of an absolute truth or "god's eye view", as they believe that no one can "step out of history"; any position is situational, not absolute. Pragmatists are skeptical about finding "the one reliable method of science for reaching the truth about the nature of things" (Rorty, 1991, p. 65), as "for pragmatism, truth has no speculative function: all that concerns it is its practical utility" (Durkheim, 1914/2011). Pragmatism goes beyond the mere observation of phenomena as in positivist and interpretivist philosophy; it is intended "to change existence" (Goldkuhl, 2004, p. 1). Pragmatists like Rorty thus propose "an edifying form of philosophy in which we no longer search for atemporal foundations, but redescribe ourselves in conversation with others" (Baert, 2005, p. 126). While it is not the intention here to exhaustively compare the pragmatic paradigm with other research philosophies, several key characteristics are pointed out below. Teddlie and Tashakkori (2009) provide a more comprehensive comparison of the pragmatic paradigm with four other philosophical viewpoints, which is summarized in Figure 58 below. The pragmatic paradigm differs from alternative paradigms in many metatheoretical assumptions about epistemology, axiology and ontology, and often occupies a middle ground between the extreme poles of positivism and interpretivism.
Figure 58: Expanded paradigm contrast table comparing five points of view (Teddlie & Tashakkori, 2009, p. 87)
Epistemology describes the relationship between the knower and the known, or in the social science context, between the researcher and participants. The positivist paradigm asserts that an objective view of the world can be obtained, whereas the interpretivist paradigm states that reality is socially constructed (Guba, 1999) and that only a subjective view can thus be obtained. Pragmatists challenge this binary contrast between the objective and subjective views of reality and see knowledge acquisition as a continuum rather than as constrained by extreme poles (Teddlie & Tashakkori, 2009). This epistemological continuum allows the pragmatic researcher "to select the approach and methodology most suited to a particular research question, providing a conceptual foundation for the use of both quantitative and qualitative tools" (Goles & Hirschheim, 2000, p. 261). The pragmatic paradigm thus overcomes the "tyranny of methods", which is more concerned with how research is conducted than with its results (Gallupe, 2007). Applying mixed methods enables the researcher to interweave different forms of reasoning, or as Peirce notes in his analogy, "reasoning should not form a chain which is no stronger than its weakest link, but a cable whose fibers may be ever so slender, provided they are sufficiently numerous and intimately connected" (Johnson & Onwuegbuzie, 2004, p. 19, quoting Peirce).
Axiology refers to the role of values in research. Axiology differs substantially between the bipolar research philosophies of positivism and interpretivism. While the positivist paradigm asserts that research should be value-free, interpretivist research is bound to values and acknowledges bias and subjectivity (Goles & Hirschheim, 2000). Again, pragmatists occupy a middle ground between these poles. They acknowledge the important role of values in their research, as "values and visions of human action and interaction precede a search for descriptions, theories, explanations, and narratives" (Cherryholmes, 1992, pp. 13-14). While confirming the importance of values, pragmatists are nevertheless critical of the view that all insights, values, and perspectives are equally valid (Wicks & Freeman, 1998). For pragmatists, values are relevant and important only when they influence what to study (units of analysis and variables) and how to study it (research methodology) in accordance with a value system to achieve original outcomes (Goles & Hirschheim, 2000). Pragmatism thus acknowledges the real-life characteristics of social science research, as "this description of pragmatist's behaviors is consistent with the way that many researchers actually conduct their studies, especially research that has important societal consequences" (Teddlie & Tashakkori, 2009, p. 90). Ontology refers to the nature of reality. Pragmatic ontology, unlike interpretivism, acknowledges an external reality that exists outside the researcher. However, pragmatists like Rorty deny that an absolute "Capital 'T' Truth" about this external reality can be obtained. Instead, they suggest that an instrumental or provisional "lower case 't' truth" can be obtained through experience and experimentation (Johnson & Onwuegbuzie, 2004) and applied to specific situations.
Limitations of the Pragmatic Epistemological Stance
Despite its favorable emphasis on practical utility and advancement, it is important, as Johnson and Onwuegbuzie point out, to acknowledge the perceived limitations of the pragmatic epistemological stance (Johnson & Onwuegbuzie, 2004, p. 19):
• Basic research may receive less attention than applied research because applied research may appear to produce more immediate and practical results;
• Pragmatism may promote incremental change rather than more fundamental, structural, or revolutionary change in society;
• Researchers working from a transformative-emancipatory framework have suggested that pragmatic researchers sometimes fail to provide a satisfactory answer to the question "For whom is a pragmatic solution useful?";
• What is meant by usefulness or workability can be vague unless explicitly addressed by a researcher.
Validity of the Pragmatic Stance
For the pragmatic epistemological stance, "the test of knowledge is not whether it corresponds exactly to reality, [...] instead the test for knowledge is whether it serves to guide human action to attain goals" (Hoshmand & Polkinghorne, 1992, p. 58). The test is therefore pragmatic, and the justification for research products rests largely on pragmatic validity, which asks whether "the actions, based on this knowledge indeed produce the intended outcomes" (van Aken & Romme, 2009, p. 7). Worren, Moore and Elliott (2002, pp. 1244-1245) suggest three approaches to assess pragmatic validity:
1) The first is simply to use the level of adoption as an indicator; models that are adopted widely and used extensively likely have some adaptive value for those using them.
2) The second is to assess pragmatic validity more directly, using an experimental methodology: provide tools that differ in design (not in content) to two groups and analyze performance differences that may be attributed to the degree of pragmatic validity of the model employed.
3) The third is to ask the users of the tools about their opinions.
In sum, the pragmatic paradigm occupies a middle ground between positivism and interpretivism and employs both quantitative and qualitative forms of inquiry. The key assertion of the pragmatic paradigm is that "knowledge is a form of action, which, like any action, brings changes to the world" (Baert, 2005, p. 140). Knowledge in the form of theories also guides action to perform changes as intended, leading to practical utility (Goldkuhl, 2004).
Pragmatists "consider the research questions to be more important than either the method they use or the worldview that is supposed to underlie the method" (Tashakkori & Teddlie, 1998, p. 21).
Why the Pragmatic Paradigm is Selected for this Thesis
Established modes of knowledge acquisition no longer suffice in today's increasingly complex and dynamic field of organizational science. Thus, a new epistemology is needed to acquire knowledge about complex organizational systems in all their richness, rather than limiting research to a "neatly demarcated area"18 in which the investigation can be rigorously conducted (Baert, 2005, p. 148). For Davenport and Markus, it is the predominant positivist paradigm with its focus on rigor that hinders relevance: "A cumulative research tradition hinders relevance in an era of rapid business change" (Davenport & Markus, 1999, p. 20). The objective of this thesis is to improve a global R&D network in an enterprise operating in a complex organizational setting in the software industry. The need to improve R&D networks globally is a new phenomenon on which there is currently insufficient research. Rather than validating existing theories, this study sets out to create theory in the multinational enterprise context through interaction with actors whose work relates to the research topic at hand. Collaboration with actors, through their interaction and tacit knowledge, is critical to achieving the research objective and could not be accomplished through a positivist or purely interpretive approach, as too much of the rich information available through interaction with actors in the MNC would be lost (Baert, 2005). Thus, this phenomenon can only be adequately studied in the improvement project context by participating in an action program to improve the problematic global R&D network management situation. Therefore, a pragmatic epistemological stance is adopted in this thesis to go beyond an observer's perspective, whether that of an outside observer (positivism) or an inside observer (interpretivism), and to become an actor in the improvement program (pragmatism).
18 "Organizations have become increasingly complex and pervasive, yet at the same time more amorphous. The boundaries of the field and the phenomena of interest are shifting and expanding, causing theoretical and empirical difficulties. As organizations and their environment evolve and become more complex, many researchers chose to focus on a smaller set of variables, and to fix or isolate those variables, 'as opposed to (studying) systems of interrelationships among clusters of variables'. Researchers found they could enhance or protect their reputations by narrowing the scope of the problems they investigated. The phenomena of interest they are trying to study is enlarging and evolving. It will not hold still long enough for them to measure it. Metaphorically speaking, it is like trying to nail Jell-O to a wall. To combat this sense of turmoil, they seek solace in a frame of reference where they feel comfortable and in control - functionalism. Another way to look at this is to apply Kaplan's Law of the Instrument: 'Give a small boy a hammer, and he will find that everything he encounters needs pounding'. It comes as no particular surprise to discover that a scientist formulates problems in a way which requires for their solution just those techniques in which he himself is especially skilled" (Goles & Hirschheim, 2000, p. 255)
In sum, the pragmatic paradigm provides a suitable epistemological, axiological and ontological perspective to ensure a successful transformation through its key assertion of "action that improves existence" (Goldkuhl, 2004, p. 1).
5.2. Research Methodology
This thesis employs an action design research methodology (Sein et al., 2011) to achieve its research objective and answer the research questions laid out in Chapter 1. Action design research is deeply rooted in the pragmatic paradigm and presents an integral pragmatic research framework that applies multiple methodologies to achieve the goal of successful organizational improvement. It is a recent and innovative addition to the repertoire of post-positivist research methodologies and has its origins in the two established research methodologies of action research and design science. Therefore, this chapter first reviews the heritage and properties of action research and design science to provide the background to the selected action design research methodology. To operationalize the action design research methodology, this dissertation employs a single longitudinal case study research strategy to answer the research questions and achieve the research objective. The research approach taken in the case study is thus detailed after the methodology employed here has been introduced. The case study augments knowledge gained about the phenomenon under study through interviews with stakeholders and project members to achieve the research objective and improve academic understanding and business practice.
5.2.1. Action Research
The origins of action research lie in the work of Kurt Lewin, who coined the term in 1944 to combine experimental social science with social action programs designed both to advance theory and to progressively solve important social problems (Kemmis, 1980) (also compare with section 3.1.7). Action research is aimed at addressing the frequent failure of organizational transformation and change activities imposed from above; as Stringer (2007, p. 40) notes, "my experience suggests that programs and projects begun on the basis of the decisions and definitions of authority figures have a higher probability of failure; [therefore,] central programs need to be complemented by the creative action of those who are closest to their sources facing the problems on a daily basis".
Action research is defined in various ways:
Action research is a participatory process concerned with developing practical knowing in the pursuit of worthwhile human purposes. It seeks to bring together action and reflection, theory and practice, in participation with others, in the pursuit of a practical solution to issues of pressing concern to people, and more generally the flourishing of individual persons and their communities (Reason & Bradbury, 2001, p. 4).
Action research may be defined as an emerging enquiry process in which applied behavioral science knowledge is integrated with existing organizational knowledge and applied to solve real organizational problems. It is simultaneously concerned with bringing about change in organizations, developing self-help competencies in organizational members and adding to scientific knowledge. Finally, it is an evolving process that is undertaken in the spirit of collaboration and co-inquiry (Shani & Pasmore, 1985, p. 439).
Action research is a systematic approach to investigation that enables people to find effective solutions to problems they want [to solve] in their everyday lives. Unlike traditional experimental/scientific research that looks [at] what generalizable explanations [...] might be applied to all contexts, action research focuses on specific situations and localized solutions (Stringer, 2007, p. 1).
Action research has been praised for its relevance. For example, Avison, Baskerville and Myers (2001, p. 44) note that "no other research approach has the power to add to the body of knowledge and deal with the practical concerns of people in such a positive manner". Despite its merits, action research has also been criticized for its lack of rigor (Cohen, Manion & Morrison, 1980). Davison et al. (2004) address this criticism in their conceptualization of canonical action research (CAR), in which they point out that action research can yield both relevant and rigorous results if it follows five distinct principles. The first principle of canonical action research is the principle of the researcher-client agreement, in which researcher and client have to agree on the research approach and acknowledge that the CAR approach is suitable for the specific organizational problem being addressed. In this process of informed consent, the principles of CAR are laid out, the commitment of the organization is explicitly expressed, and further information about roles and responsibilities and research objectives is provided in a manner similar to a project charter.
The researcher-client agreement creates transparency about the intended research and helps to build trust between researcher and client. The second principle, the principle of the cyclical process model, asserts that action research should follow a cyclical process model to progressively increase understanding and validation in the problem-solving process. As action research is founded on pragmatic epistemology, in which the "aim of the enquiry is not to establish the truth [...] but to reveal the different truths and realities held by different individuals and groups" (Stringer, 2007, p. 41), it allows the researcher to become part of the problem-solving group, guiding and providing input to the problem-solving process. Here, the researcher utilizes a hermeneutic dialectic negotiation process that compares and contrasts the different views held by members of the organization to let new meanings and understandings emerge (Guba, 1999). One such cyclical process model is Stringer's "look, think and act" cycle model (see Figure 59), which is based on Lewin's initial spiral model of "analysis, fact-finding, conceptualization, planning, execution, more fact finding or evaluation; and then a repetition of this whole circle of activities" (Kemmis, 1980, p. 3, quoting Lewin).
Figure 59: Action research interacting spiral (Stringer, 2007, p. 9)
The third principle is the principle of theory, whereby theory is understood, in line with pragmatic epistemology, as knowledge that guides and evaluates action (Goldkuhl, 2004). In the initial diagnosis or planning phase, action researchers are encouraged to establish a preliminary theoretical framework that guides inquiry, utilizing established theories or grounded theories that emerge as part of the research process. As the research progresses, the theoretical framework may evolve through ongoing hermeneutic dialectic negotiations until a final theory emerges when the research is completed.
Davison et al. (2004) point to two benefits of a preliminary theoretical framework in action research: first, it avoids academic research that might be relevant to the client without any significant relevance to the academic community; second, it avoids the researcher getting lost in the overwhelming richness of qualitative research and keeps the research project focused along the way. The fourth principle is change through action. Change is an integral part of action research. As Baskerville (1994, p. 4) notes, "the fundamental contention of the action researcher is that complex social processes can be studied best by introducing changes into these processes and observing the effects of these changes. This change-oriented contention profoundly shapes the action research approach". Successful change requires the commitment of both parties, client and researcher, to form a joint understanding of the scope of the intended change, agreement on the root causes of the problematic situation, and intended actions to address this situation (Davison et al., 2004). Similar to an audit trail, the documentation of change activities starts with a description of the initial situation and continues until the final status at the end of the research project, thereby allowing for analysis of the improved situation and the actions and timeline adopted (Davison et al., 2004). Documentation allows for a review of successful or unsuccessful changes in an action research project. If change was successful, it needs to be reviewed to establish whether it can be wholly attributed to the change action performed or whether it is simply a result of "myriad routine and non-routine organizational actions" (Baskerville, 1999, p. 16). While it may be argued that change in action research is imperative and a lack thereof represents unsuccessful action research, even such projects can yield important theoretical and practical insights that help to shape the next iteration cycle or subsequent research efforts. A lack of change in action research projects may be attributed to the lack of a meaningful problem, or to political and practical obstacles not adequately considered at the beginning of the action research project (Davison et al., 2004). The fifth principle is learning through reflection. Reflection among practitioners is not exclusive to action research; as Schön (1983, pp. viii-ix) notes, "practitioners often reveal a capacity for reflection on their intuitive knowing in the midst of action and sometimes use this capacity to cope with the unique, uncertain, and conflicted situations of practice".
However, practitioners frequently face the paradox that action prohibits reflection in crisis situations or under tight timelines, or as a result of organizational politics in the form of norms and games, thus limiting the effects of reflection on action (Schön, 1983). Here the action researcher plays a crucial role; as Stringer (2007, p. 24) notes: "In action research, the [role of the] researcher is not that of an expert who does research but that of a resource person. He or she becomes a facilitator or consultant who acts as a catalyst to assist stakeholders in defining the problems clearly and to support them as they work towards effective solutions to the issues that concern them". The action researcher therefore acts as a facilitator who helps the client to overcome limitations, empowers participants to reflect in action (Schön, 1983) and supports organizational learning through action (Argyris, Putnam & Smith, 1985). Davison et al. (2004) state that intensive reflection should occur both internally during the research project, in the form of ongoing communication and progress reports on preliminary and final results to stakeholders, and externally, in the form of findings, theoretical frameworks and models that advance theories and consider transferability to other organizational settings. The action researcher thus has two responsibilities in reflecting on the research: the first is to provide the client with solutions to pressing problems, and the second is to contribute to the stock of knowledge in the academic community.
Soft Systems Methodology
A more recent and widely applied branch of action research is the soft systems methodology (SSM) developed by Checkland (1999, 2010), whereby action research is combined with systems thinking to address issues resulting from the application of systems engineering methods to real-world problems. Systems thinking "organizes internalized systems ideas, systems concepts, and principles into an internally consistent arrangement, using a systems way of viewing and understanding, in order to establish a frame of thinking" (Banathy, 1996, p. 156). Checkland and Poulter (2006, p. 22) define SSM as follows (see Figure 60):
SSM is an action-oriented process of inquiry into problematical situations (1) in the everyday world; users learn their way from finding out about the situation to defining/taking action to improve it (4). The learning emerges via an organized process in which the real situation is explored, using as intellectual devices (2) - which serve to provide structure to discussion (3) - models of purposeful activity built to encapsulate pure, stated worldviews.
Figure 60: The SSM learning cycle (Checkland, 1999, p. 13)
The SSM approach differs from other action research methods in that it provides a prescriptive model for conducting action research supported by a unique terminology. In his terminology, Checkland does not talk of 'problems' but of 'problematic situations', as for him the term 'problem' implies the search for a definitive solution. Solutions in complex human situations, however, are to him a "mirage when faced with real world complexity, with its multiple perceptions and agendas" (Checkland, 1999, p. 63). Checkland thus sees the problem definition not simply as a given, but as a concept that needs to be socially constructed. This construction must be conducted through action research, as no universal problem description is typically available when research starts, owing to the different perspectives and opinions individual stakeholders hold about what constitutes the problem. Diverging worldviews are not, however, seen as an obstacle in SSM, but as a "source of strong feelings, energy, motivation and creativity" leading to a rich appreciation of the situation, which Checkland (1999, p. 56) sees as a precondition he explicitly encourages: "if [...] the models you've built are not leading to energetic discussion, abandon them and formulate some more radical root definitions". Under the SSM approach, a separate activity model is built for each divergent worldview to describe the problematic situation from that particular worldview. It encompasses both the operations perspective, with its logically linked activities, and a monitoring and control feedback loop in which performance is measured and corrected if required (see Figure 61).
Figure 61: The general form of a purposeful activity model (Checkland, 1999, p. 8)
Building activity models requires a thorough investigation into the different worldviews of stakeholders, intended change actions (from both a process and a content perspective), values and norms, and the internal political environment. Learning and reflection occur through performance monitoring in the activity model, which relates to both the content of the transformation (SSMc) and the process of transformation itself (SSMp). Activity models are then used to enable structured discussions about the intended change and to improve the problematic situation, "to find a version of the real situation and ways to improve it which different people with different worldviews can nevertheless live with". This approach marks a departure from the usual imperative of achieving consensus among stakeholders whose divergent worldviews are often irreconcilable; instead, SSM is aimed at achieving stakeholder accommodation, a "good enough" or "I can live with this". After stakeholder accommodation has been achieved for both the content (SSMc) and the process of the transformation (SSMp), improvement action can commence to change the perceived problematic situation. Checkland is less prescriptive on the design of possible change actions and vaguely defines three objects of change action: changes in procedures, in structure and in attitudes.
While Checkland sees changes in formal structures and procedures as easy to design and implement,19 he also points out the difficulties of using action research to change informal phenomena such as attitudes, as "these involve many important, but intangible, characteristics that reside in the individual and in the collective consciousness" (Banathy, 1996, p. 83). Change may result in a different or new problematic situation, which again triggers the SSM learning cycle. This cycle is thus a never-ending process of reflective practice and learning, and "once the practitioner has internalized the SSM process [...] then reflective practice becomes built-in [and] the SSM user becomes a reflective practitioner". To assess the results of the transformation process (SSMp), Checkland suggests a meta-monitoring process that monitors three sets of pragmatic criteria and takes improvement action if required. The first is efficacy criteria that assess whether the transformation is working and produces intended outcomes; the second is efficiency criteria that measure whether the transformation utilizes a minimum of resources to achieve its goals; the third and final set of criteria gauges effectiveness to assess whether the transformation achieves higher-level or long-term goals (Checkland, 1999). To strengthen the rigor of the soft systems methodology, Checkland proposes that the principle of recoverability be adopted as a key validity criterion. In the SSM context, recoverability is understood as the ability of a reader to "re-experience" the action research conducted, "to see exactly what was done and how the conclusions were reached [to] make the whole activity of the researcher absolutely explicit (including the thinking as well as the activity), so that an outside observer can follow the whole process and understand exactly how the outcomes came about" (Checkland, 1999, p. 177). In cases where readers choose to disagree with the findings presented in an SSM case, such explication creates the foundation for a coherent discussion between readers and researchers. Soft systems methodology has been successfully applied in various domains such as organizational design, software design, systems engineering and public services (Checkland & Poulter, 2006). Despite its merits, it is important to consider its limitations with regard to the study at hand.
19 While it could be argued that implementing changes in structures and processes might be considered easier than effecting attitude changes, Checkland nevertheless fails to acknowledge the problems of organizational resistance to change, as it cannot simply be assumed that all planned changes will be implemented to their fullest extent after planning has been finalized (Greiner, 1967; Kotter, 1995) (also compare with section 3.1.7).
While SSM directs considerable effort toward the analysis phase of an intended transformation, taking account of various stakeholder worldviews, values, political power situations, and structured discussions about perceived real-world problems, it is less prescriptive when it comes to improvement actions for problematic situations. Checkland (1999, p. 59) sees this as inevitable and attributes it to the variability of human situations, which inhibits clear prescriptions for action: "This is inevitable, and is due simply to the fact that no human situation is ever exactly the same as any other. Once we start exploring the real complexity of a human situation, not simply its logic, then formulae, algorithms and readymade solutions are not available. Even guidelines become fewer".
Summary: Action Research/SSM
Action research in its canonical form provides a methodology that addresses both relevance, through participatory organizational transformation, and rigor, through the application of its five principles: researcher-client agreement, a cyclical process model, theory (infusion), change through action and learning through reflection. It therefore provides a suitable foundation for this thesis, which is aimed at transforming and improving the global organizational structure of a software enterprise. The action research methodology employed in this thesis is further operationalized through the use of soft systems methodology (SSM). SSM provides an actionable process and practical design elements for an effective action research change process, especially in its problem definition stage. The rich situational descriptions used in this study allow recoverability for its readers and reviewers, and thus ensure rigor in the application of soft systems methodology. While a purely SSM-based approach is not feasible in the context of this thesis, due to its limitations in terms of the design of improvement actions and the pre-existing internal project framework set up by SAP, key elements such as the thorough internal analysis of stakeholders' worldview perceptions, values and norms and the internal political environment are used as a foundation for cross-functional structured discussions about the perceived problem and intended change actions. While action research and SSM make valuable contributions to this study in defining the perceived problem and by introducing the concept of accommodation, they do not provide prescriptions for the design and later implementation of improvement actions. Therefore, this thesis augments the action research methodology with design science research, which provides the framework for the design of improvement actions and supportive artifacts. The next section reviews the design science methodology before it is merged with action research approaches in the subsequent section to yield an action design research methodology.
This allows the strengths of action research, the description of the problem and the concept of accommodation, to be combined with the design science framework to formulate improvement actions and various innovative artifacts (whether organizational or physical) to improve the problematic situation.
5.2.2. Design Science Research
Throughout human history, the natural sciences and humanities have established themselves as the main pillars of scientific inquiry, with the natural sciences enjoying a privileged position among academic disciplines (Simon, 1996). Therefore, "mainstream organization research strategies, aimed at understanding organizations and explaining their behavior, are largely based on the approaches of the natural sciences and, more recently, also the humanities" (van Aken & Romme, 2009). These traditional forms of scientific inquiry, however, have been considered a source of "fragmentation and lack of relevance" (van Aken & Romme, 2009). Banathy (1996, p. 1) thus advocates a new perspective, as "improvement or restructuring of existing systems, based on the designs of the industrial machine age, does not work any more. Only a radical and fundamental change of perspectives and purposes, and the redesign of our organizations and social systems, will satisfy the new realities and requirements of our era". Simon (1996) provides such a new perspective by establishing a third archetype of scientific inquiry alongside the natural sciences and humanities: the "science of the artificial", or scientific inquiry into the design of man-made artifacts. He points to fundamental differences between research into the design of artifacts and research that analyzes and explains natural phenomena: "the natural science[s] are concerned with how things are. [...] Design, on the other hand, is concerned with what ought to be, with devising artifacts to attain goals" (Simon, 1996, pp. 114-115). Design science is thus defined as "a research paradigm in which a designer answers questions relevant to human problems via the creation of innovative artifacts, thereby contributing new knowledge to the body of scientific evidence. The designed artifacts are both useful and fundamental in understanding that problem" (Hevner & Chatterjee, 2010, p. 5). Because design science is deeply rooted in pragmatism, research quality is established through pragmatic validity (van Aken & Romme, 2009), which evaluates whether designed solutions indeed fulfill the initially stated requirements. While design science is essentially pragmatic, Hevner (2007, p. 91) points out that "practical utility alone does not define good design science research. It is the synergy between relevance and rigor and the contributions along both the relevance cycle and the rigor cycle that define good design science research".
Design science is "interested in systems that do not yet exist or in improved performance of given systems and is characterized through an emphasis on solution-oriented knowledge, linking interventions or systems to outcomes, as the key to solve field problems" (van Aken & Romme, 2009, p. 7). Given the type of improvement program envisaged for the global R&D organization examined in this thesis, the design science methodology provides a suitable perspective for improving given systems and solving field problems, especially as artifacts are not confined to the realm of information systems research but, as van Aken and Romme (2009, p. 6) state, also cover organizational design: "organizations are also artifacts, shaped through design-based interventions by the founders and other change agents, as well as action systems created and sustained by their stakeholders to combine and coordinate actions to further common goals". Therefore, this study defines the term "artifact" more broadly, in the sense of an ensemble artifact (Sein et al., 2011), to include all artifacts required (organizational design, ICT design, strategy design, etc.) to successfully improve the global R&D organization of the enterprise under study and "extend the boundaries of human and organizational capabilities by creating new and innovative artifacts" (Hevner et al., 2004, p. 76).
The Design Science Process
Design science is aimed at fulfilling both rigor and relevance. Hevner (2007) thus recognizes that design science research is located at the intersection of environmental requirements and preexisting knowledge foundations. In his conceptualization, the design science research cycle consists of three interlocking cycles for relevance, design and rigor (see Figure 62).
Figure 62: Design science research cycles (Hevner, 2007, p. 88)
The relevance cycle serves two main purposes: the definition of requirements for the research and the provision of acceptance and measurement criteria for the generated designs and research results. Design science must address important and relevant business problems (Hevner et al., 2004) and produce a viable artifact in the form of constructs, models, methods, instantiations or better theory (Vaishnavi & Kuechler, 2004). The relevance cycle also includes field testing of preliminary or final designs; here, Cole et al. (2005) suggest the action research methodology as a means to conduct field tests and obtain feedback from stakeholders and customers. As design is iterative, the relevance cycle evaluates initial prototype designs against predefined acceptance criteria and determines whether the result of the design cycle is satisfactory or further iterations are required. The rigor cycle ensures that the design project is appropriately grounded in existing theories and generates creative ideas through the application of scientific theories, engineering methods, previous design experience, and pre-existing artifacts of the application domain (Hevner, 2007). Design science research qualifies as rigorous if it provides clear and verifiable research contributions to domain knowledge, not just routine design tasks that represent best practices. Design contributions typically include the "design artifact, design foundations and design methodologies" (Hevner et al., 2004, p. 83), and "any extensions to the original theories and methods made during the research, the new meta-artifacts (design products and processes), and all experiences gained from performing the research and field testing the artifact in the application environment" (Hevner, 2007, p. 90). At the core of the design science process lies the design cycle, which alternates between the construction and evaluation of the design artifact. Design is seen as a search process that utilizes available resources to achieve desired goals given applicable constraints and laws (Hevner et al., 2004). Design science research calls for rigorous methods to be employed in the construction and evaluation of the design artifact (Hevner et al., 2004), with "design and evaluation theories drawn from the rigor cycle" (Hevner, 2007, p. 91) providing evaluation methods that test the utility, quality and efficacy of a design artifact (Hevner et al., 2004) "in laboratory and experimental situations before releasing the artifact into field testing along the relevance cycle" (Hevner, 2007, p. 91). After the design science process is completed and the artifact has been created, the utility of the designed artifact must be assessed so the design science process can make a contribution to knowledge in the application domain and be deemed rigorous.
sufficient detail, both to technology-oriented audiences to enable construction of the artifact and to management-oriented audiences so the required resources can be allocated for construction and use of the newly designed artifact (Hevner et al., 2004).
Summary: Design Science
Design science research provides a new research methodology that, like action research, is grounded in the pragmatic paradigm. It is concerned with the “science of the artificial”, the creation of man-made artifacts to improve the human condition, and thus addresses relevance in research while also aiming for rigor. This section reviewed an integrated framework proposed by Hevner (2007) and Hevner et al. (2004) that features a design science process and guidelines for rigorous design science research. The inclusion of design science in this thesis is important, as design represents a central theme in this study aimed at formulating multiple artifacts such as strategies, organizational structures, R&D networks and software tools to accomplish the research objective of improving a global R&D network.
However, due to several limitations, the design science methodology provides only one component of the research framework required to achieve the research objective. First, the design science research methodology suffers from a lack of prescription on how to derive requirements and specifications in complex organizational settings. The design science methodology largely ignores the emergent nature of requirements and design in organizational settings (Sein et al., 2011), and suggests that stage-gate models be employed between the design and build phases (Cooper, Edgett & Kleinschmidt, 2002). It assumes the existence of a pre-defined set of requirements, or at least preliminary requirements that can be evolved through multiple iterations of the design research cycle. Design science thus neglects the different worldviews and perceptions of stakeholders on what constitutes the problem. Second, design science is aimed at improving the human condition through the creation of innovative artifacts; however, it neglects the social implications of organizational interventions through artifacts (Cole et al., 2005), which can lead to acceptance problems or political behavior throughout the research project and in later use. Considering these shortcomings and the strengths of the previously discussed action research methodology, it becomes obvious that action research provides
a valuable complement to design science. Sein et al. (2011) propose a framework that integrates action research and design science into action design research, an approach that has received much attention (Figueiredo & Cunha, 2007; Järvinen, 2007; Lee, 2007) and is reviewed in the next section.
5.2.3.
Action Design Research
Sein et al. (2011, p. 38) suggest that the shortcomings of design science be addressed in organizational settings by augmenting the methodology with action research to create a “method that simultaneously aims at building innovative IT artifacts in an organizational context and learning from the intervention while addressing a problematic situation”. To attain this goal, the action design research methodology combines the principles and processes of canonical action research previously presented (Davison et al., 2004) with the design science framework proposed by Hevner et al. (2004) (see Figure 63). The action design research (ADR) approach is perceived to be especially applicable in research situations where the artifact is dynamic and emerges from ongoing organizational, technological and economic practices (Orlikowski & Iacono, 2001; Sein et al., 2011).
Figure 63: Origin of action design research principles (own graphic based on Davison et al., 2004; Hevner et al., 2004; Sein et al., 2011)
Sein et al. (2011) propose that action design research occurs in four stages: problem formulation, BIE (building, intervention and evaluation), reflection and learning, and formalization of learning (see Figure 64). Throughout the ADR project, the problem formulation and BIE stages occur in an alternating pattern
and are accompanied by a simultaneous reflection and learning stage; at the end of the ADR project, research findings are formulated to formalize what has been learned in the project and to communicate these lessons to academic and practitioner audiences.
Figure 64: Action design research: stages and principles (Sein et al., 2011)
Stage 1: Problem Formulation
The starting point of the ADR methodology is the formulation of the problematic situation as perceived by organizational participants or anticipated by the researcher. Action research conducted through, for example, the soft systems methodology is especially helpful at this stage in drawing together the various stakeholder worldviews to arrive at an accommodated definition of the problematic situation, which leads to the initial research question. The researcher should conceptualize the problematic situation as an instance of a class of problems to be able to provide a solution to the wider class of problems it represents. The first stage of the ADR methodology follows the two initial principles of practice-inspired research and the theory-ingrained artifact. Practice-inspired research refers to the real-life problem at the center of the research project, as opposed to an inquiry into a pure knowledge problem. It draws on both the design science postulation that the research problem should be relevant and the researcher/client agreement of canonical action research that defines the roles, responsibilities, scope and approach of the research project to secure long-term commitment from the enterprise under study.
The second principle of a theory-ingrained artifact draws on the design science principle of “design as an artifact” and the theory principle of action research, by which research is grounded in theories from the application domain and other relevant fields. At this stage, the theoretical basis of the ADR is identified. Furthermore, past solutions in the form of prior technological advances such as tools, processes and platforms are reviewed to enable their reuse in and addition to new solutions.
Stage 2: Building, Intervention and Evaluation
The building, intervention and evaluation (BIE) stage is the core element of the ADR methodology. It comprises three sequential activities and builds on the problem formulation derived and the theoretical foundation laid in stage one. When no ready-made artifact is available to solve the formulated problem, a new artifact needs to be created. The researcher then builds the artifact, intervenes by using it in the organizational setting, and immediately evaluates how the artifact works in this setting and how it addresses the defined problem. The BIE stage exposes the artifact to the organizational setting; it is given to end users to allow them to use and reflect on it. This reciprocal shaping, by which design of the artifact is seen as a rigorous iterative search process undergone during interaction with the organizational context, represents the first principle of the BIE cycle. ADR practitioners then collect end-user feedback and evaluate the results. Based on these evaluations, the ADR researcher builds an enhanced understanding of the artifact to enable construction of its next version and to initiate a new iteration. The BIE cycle thus follows the principle of mutually influential roles, as it represents a joint effort among researchers, practitioners and end users who contribute their unique backgrounds, experiences and knowledge to shape the artifact. The number of iterations needed depends on the availability of the ADR team, company resources, the progress made in solving the problem, and the acceptance criteria pre-defined by end users. In the iteration cycles, the ADR team challenges the assumptions and ideas organizational members hold about the artifact to advance the design, establish design principles and derive theory. The BIE stage makes three major contributions:
Scholarly knowledge contribution: knowledge about designing the particular class of systems and knowledge to advance theory;
Organizational contribution: utility through the new artifact that addresses an organizational problematic situation;
Deconstructed artifact: knowledge of improved artifact properties that provide a novel and better solution and allow others to repeat it, gained through the identification of the key technologies that enabled the improvement, knowledge about design principles, and an improved design process.
Depending on the properties of the problematic situation and the artifact to be designed, Sein et al. differentiate between two types of BIE cycle: an IT-dominant cycle and an organization-dominant cycle (see Figure 65).
Figure 65: The generic schema for an organization-dominant BIE cycle (Sein et al., 2011)
IT-dominant BIE cycle: The artifact ensemble possesses a high level of technology content, with initial designs receiving limited organizational exposure. This is done to manage the complexity of technological designs and create a stable prototype without overloading it with too many requirements too early. Only in later iterations does the artifact receive full organizational exposure to obtain organizational feedback for improvement and validation.
Organization-dominant BIE cycle: The artifact ensemble contains a high level of organizational content that requires tighter and earlier integration into the organizational context. Initial versions of the artifact are exposed to members of the organization in the first iteration to obtain feedback enabling the artifact to be shaped.
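To make the iteration logic shared by both BIE cycle types concrete, consider the following minimal Python sketch. It is purely illustrative and not part of Sein et al.'s (2011) specification: the Artifact structure, the stub activities and the halfway-exposure rule used to mimic an IT-dominant cycle are assumptions made for this example only.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Artifact:
    # Illustrative ensemble artifact: a versioned design plus the feedback that shaped it.
    version: int = 0
    design: List[str] = field(default_factory=list)
    feedback: List[str] = field(default_factory=list)

def bie_cycle(build: Callable[[Artifact], str],
              intervene: Callable[[Artifact, bool], str],
              accepted: Callable[[str], bool],
              organization_dominant: bool = True,
              max_iterations: int = 5) -> Artifact:
    # Generic BIE loop: build the artifact, intervene in the organizational
    # setting, and evaluate concurrently. In an organization-dominant cycle the
    # artifact receives full organizational exposure from the first iteration;
    # in an IT-dominant cycle early iterations receive only limited exposure
    # (modeled here, as a simplifying assumption, by a halfway threshold).
    artifact = Artifact()
    for iteration in range(1, max_iterations + 1):
        artifact.version = iteration
        artifact.design.append(build(artifact))                  # build
        full = organization_dominant or iteration > max_iterations // 2
        result = intervene(artifact, full)                       # intervene
        artifact.feedback.append(result)                         # authentic, concurrent evaluation
        if accepted(result):                                     # end-user acceptance criteria
            return artifact
    return artifact

# Minimal usage with stub activities standing in for real project work:
final = bie_cycle(
    build=lambda a: f"design v{a.version}",
    intervene=lambda a, full: "accepted" if full and a.version >= 2 else "rework needed",
    accepted=lambda r: r == "accepted",
)
print(final.version, final.feedback)   # -> 2 ['rework needed', 'accepted']

The loop terminates either when the end users' acceptance criteria are met or when the agreed iteration budget is exhausted, mirroring the dependence of the number of iterations on team availability, resources and progress described above.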
Therefore, it is important to select the appropriate BIE type before the cycle commences, and if required, to adjust the design of the BIE cycle to the specific requirements of the ADR project. The third principle of the BIE cycle is authentic and concurrent evaluation. Evaluation in the ADR context cannot be separated from design; rather than following a staged design process (Cooper et al., 2002), the artifact ensemble emerges through reciprocal shaping in the BIE cycle, requiring immediate, concurrent evaluation. The evaluation should be pragmatic and appropriate to the context in which the ADR project is conducted; authenticity (Guba, 1999; Stringer, 2007) is deemed more important than any predefined evaluation scenario or criteria.
Stage 3: Reflection and Learning
The reflection and learning stage runs simultaneously with the problem formulation and BIE stages. In this stage, the ADR team constantly steps back and analyzes how the intervention is working compared to the initially stated goals, and if necessary, adjusts the intervention based on early feedback by, for example, changing the problem definition or the emerging artifact. The reflection and learning stage serves as a means to identify and abstract emergent contributions to knowledge, assisting the move from creating a single solution to creating a class of solutions for a class of problems. In this stage, the applicable principle is that of guided emergence. Guided emergence describes the interplay between the design and emergence perspectives by which the artifact is shaped through organizational use and authentic and concurrent evaluation. The process is guided as the artifact emerges under the guidance of researchers, practitioners and end users. As significant changes may occur over the course of the project, effective guidance requires continual awareness of and alertness to such changes to find appropriate measures to deal with them throughout the ADR project.
Stage 4: Formalization of Learning
At the end of the third stage, the ensemble artifact has been built and put to use in the organization. Organizational transformation and change projects typically end at this stage; in ADR projects, however, the formalization of learning commences. Formalization allows for knowledge contributions to be made via dissertations and academic articles or presentations to practitioners. In the initial problem formulation stage, the ADR researcher defines the initial research problem as a class of problems. In the formalization of learning stage,
the solution instance that emerged from the organizational context of the ADR project is reconceptualized through generalization and abstraction as a class of solutions that can be transferred and applied to the larger class of problems previously specified. This ADR principle of generalized outcomes draws on the design science principles of communication and research contributions and on the action research principle of learning through reflection, which requires the researcher to make both a practical and a theoretical contribution. Generalizing and abstracting problems is, however, often difficult due to the unique organizational context for which the artifact solution has been designed. Therefore, generalization is considered too strong a criterion in this context, and Sein et al. (2011) suggest transferability as a more suitable criterion for the formalization of learning, meaning the designed artifact can be used in a similar way in other organizational settings. While it is important to identify similarities, it should also be defined under what conditions things need to be done differently. Design principles address such differentiation by clarifying what specific properties of the designed artifact will be different for other organizations and what parts of the designed artifact apply, for example, only to certain types of organization. It is unlikely that each ADR project will generate new core design theories; therefore, Hevner (2007) suggests that design principles be formalized instead, as theories are not necessarily required to qualify a research project as rigorous. The formalization of design principles also adds knowledge to the application domain, as “design principles capture the knowledge gained about the process of building solutions for a given domain, and encompasses knowledge about creating other instances belonging to this class” (Sein et al., 2011, p. 45).
Summary: Action Design Research
Action design research combines the strengths of two research methodologies, design science research and action research, to introduce change in organizations through the creation of innovative artifacts. Considering that the objective of this thesis is to improve a particular problematic situation in the management of a global R&D organization through innovative artifact ensembles, the ADR methodology is deemed the most appropriate research methodology available to achieve the research objective and yield knowledge to inform practice and academia.
This study is aimed at achieving both relevant and rigorous results. First, the ADR methodology selected provides relevance, as its core objective is to improve a real-life problematic situation through the introduction of change. Relevance in terms of the study at hand is attained, as it provides a solution by which the enterprise under study (SAP) can improve its global R&D organizational framework, and it also provides the practitioner community of global R&D managers with a class of solutions for improving global R&D organizations. Second, ADR provides a methodology for rigorous inquiry, as it combines the principles of rigorous design science proposed by Hevner et al. (2004) with those of canonical action research (Davison et al., 2004). Peer review confirms that ADR is a rigorous methodology, as research cases based on this innovative approach have been found to qualify as rigorous (Sein et al., 2011, p. 45). Academic rigor, in the context of this study, relates to the creation of a rigorous DBA thesis comparable to a solution instance, and to improvement of the ADR methodology to provide a class of solutions. Because action design research represents a new branch of post-positivist research methodologies, few projects have applied this innovative approach, affording this thesis an opportunity not only to advance the field of management science, but also to make an overall contribution to research methodology.
5.2.4.
Case Study Research
This thesis utilizes a case study research strategy to operationalize the ADR methodology presented above. The case study method is a qualitative approach to inquiry aimed at gaining a deeper understanding of a contemporary phenomenon to generate new theories and ideas. Case study research has often been taken as a manifestation of qualitative research; however, a case is a unit of analysis, and can thus include both qualitative and quantitative data. Because the overwhelming majority of evidence considered in this thesis is of a qualitative nature, this section focuses on the qualitative side of case study research. Qualitative research puts the researcher in a real-life setting and uses multiple interpretive practices that describe and transform the world “through representations as field notes, interviews, conversations, photographs, recordings, and memos to the self [...] attempting to make sense of, or interpret phenomena in terms of the meanings people bring to them” (Denzin & Lincoln, 2005, p. 3). Qualitative research is intended to generate a thick description of a phenomenon that “accurately describes observed social actions and assigns purpose and
intentionality to these actions, by way of the researcher’s understanding and clear description of the context under which the social actions took place” (Ponterotto, 2006, p. 543). The thick description the researcher creates allows him to generate a thick interpretation of the phenomenon, leading the researcher, participants and stakeholders to thick meanings of the research findings (Ponterotto, 2006), something a purely quantitative research strategy in a social setting would not provide. As a qualitative form of inquiry, the case study method is aimed at generating such thick descriptions, as it investigates bounded systems such as organizational settings over time and collects data from multiple sources to generate a case report (Creswell, 2007) to answer “how” and “why” questions (Yin, 2003). The rich or thick description sought in conducting a case study requires, however, that multiple sources of data be converged through triangulation (Stake, 1995) and “the prior development of theoretical propositions to guide data collection and analysis” (Yin, 2003, pp. 13-14). Case study research is especially useful for inquiries into contemporary phenomena where “boundaries between phenomenon and context are not clearly evident” (Yin, 2003, p. 13). Here, case studies can generate novel theories by contrasting contradictory or paradoxical evidence that can be immediately tested, and thus produce empirically valid theories, as theory construction is tightly interwoven with evidence (Eisenhardt, 1989b).
While case study research provides relevance through rich descriptions and theory building in real-life settings, the rigor of case study research has been questioned, as the extensive use of rich data in case studies may lead to overcomplicated theories or theories that are too narrowly focused on an individual situation (Eisenhardt, 1989b). Other arguments commonly raised to question the rigor of case study research include the perceived inferiority of practical vs. theoretical knowledge generated, a lack of generalization, its greater suitability for generating hypotheses than for testing them, its bias towards verification, and the difficulty of summarizing case study results to develop general propositions (Flyvbjerg, 2004; Flyvbjerg, 2011). Flyvbjerg (2011, p. 304) refutes these claims as misunderstandings, and argues that the concrete case knowledge case studies produce is more valuable than “the vain search for universal theories”. While some authors see the core intent of the case study method as particularization rather than generalization (Stake, 1995), generalization from case studies is possible, as they may be central to scientific development. In addition, partial, naturalistic generalization (Stake,
1980) that utilizes concepts such as transferability or repeatability can provide considerable value (Flyvbjerg, 2011).
This thesis employs a single case study approach to accomplish its research objective. Yin states various rationales for selecting a single case design rather than a multiple case design, such as the significance of the case, which may manifest itself as a critical case, an extreme case, a unique case, a representative case or a revelatory case (Yin, 2003). These criteria, however, are not mutually exclusive, as cases can represent various forms and assume different ones throughout the research process based on the increasing understanding of the researcher (Flyvbjerg, 2011). The rationale for using a single case study approach in this thesis is that the case examined is believed to be critical (Yin, 2003), thus making a significant contribution to knowledge and theory building in understanding what governance models and processes companies utilize to improve global R&D networks. Considering the importance of this MNC in the global software industry, the insights gained here show that it represents a critical case, justifying the use of the single case study methodology over attempts to gain generalizable results from a larger population through a multiple case study.
5.2.5.
Data Acquisition and Analysis
Case study research requires the researcher to collect evidence from multiple sources before writing the study report; this section thus reviews the approach taken in this thesis to obtain access to the selected enterprise, the data acquisition process, data analysis, and how the unit of analysis is defined to focus this research. The unit of analysis is the major entity analyzed throughout the case study; it defines what the “case” is (Yin, 2003) and creates boundaries to define the scope of the study at hand. Cases can use various units of analysis such as “decisions, programs, the implementation process, and organizational change” (Yin, 2003, p. 23). This thesis analyzes organizational changes and improvements intended for a global R&D network that affects a whole global enterprise. Therefore, the unit of analysis in this study is the global enterprise, SAP, a firm operating in the software industry with a globally dispersed R&D network encompassing multiple R&D projects (including one software product) developed across several distributed R&D sites.
The reasons for choosing this unit of analysis are manifold. First, the organizational design challenges of globally distributed software development only occur in large-scale software development, and are of minor or no importance to small or very small software enterprises. Second, the SAP improvement project extends from the core value-generating process of global R&D to cover the entire enterprise. Third, this unit of analysis offers a unique opportunity for longitudinal research on an improvement program never seen before in a global R&D network in one of the largest business software enterprises in the world.
Gaining Access
In my last role as Vice President of Strategic Development at SAP Labs China, my responsibilities included the development and growth of the SAP Labs organization in Shanghai. Working as an ambassador for the Chinese labs, I was able to attract research projects from global product owners, which gave the labs in Shanghai the opportunity to grow from an operation employing only 40 developers in 2004 to one staffed by over 1800 employees in 2010. Throughout the course of my work there, I was always interested in the global allocation of resources and the rationales of stakeholders for selecting sites. In late 2007, I finally decided to make this topic the subject of my DBA research thesis. While I had left SAP in the meantime to concentrate on this thesis, I still had the opportunity to meet board member and Chief Operating Officer Ernie Gunst on the SAP Labs campus in Shanghai on the afternoon of July 25th, 2008. While I initially intended to request support for a survey on global R&D project allocations, Gunst mentioned that he was in the process of setting up a number of projects to improve SAP’s overall organization, and suggested I participate in the Location Strategy and Management project to provide my unique global perspective to the team in Walldorf, Germany. After making initial contact, I negotiated my entry into the project as a freelance researcher over the following months, and eventually joined the project, where I was introduced as a doctoral student and project member at the first fact-gathering workshop in Walldorf on November 6th, 2008. My entry was facilitated by my extensive experience with SAP during my eight-year tenure at the company, by leveraging my existing internal network, and ultimately through the recommendation of my previous manager and board member Ernie Gunst. To gain full access to all project data and pre-studies conducted to date, SAP
required that I sign a non-disclosure agreement to secure trade secrets and confidential information. A dedicated liaison officer from the SAP project was assigned to review research findings and screen the case study for any confidential information that should not be disclosed.
Data Acquisition
Various forms of data were acquired both as part of the project work and through subsequent interviews with stakeholders, including, but not limited to, emails, meeting minutes, project plans, presentations, personal notes, personal communications, interview recordings and transcripts. Having access to the SAP email system through a personal email address allowed me to create an email audit trail covering all communications during the location strategy and management project at SAP. The project team also used a global file sharing tool to collect and structure all project-related information. This central repository facilitated the case study, as it was available to the researcher throughout the duration of this study. I also created a personal report log in which an extensive collection of personal communications, observations, impressions and reflections about the project was recorded for later use in writing up the case study.
Final debriefing interviews were conducted with project members and key stakeholders at the end of the project, giving participants an opportunity to reflect on the project and provide their personal views on project events and achievements. Before each interview, the participant was provided with an informed consent form (see Appendices A and B) and was personally briefed on the scope and purpose of the intended interview. Interviews were conducted in a semi-structured fashion, with interview guides (see Appendix B) being used to frame the discussion and provide an initial starting point (ice breaker) to ensure an open-ended interview with the participant. Interview partners were asked for their consent to record the interview on tape, a request to which all participants acceded. All interviews were subsequently transcribed, yielding approximately 300 A4 pages of raw interview transcripts. Finished transcripts were then sent back to the interview partners to allow them to review the content of our discussion and clarify any questions.
Data Analysis
Case study data analysis can take many forms, as case study methods do not provide strict analytical procedures for the analysis of data obtained (Yin, 2009).
The goal of data analysis in case study research is to make sense of the texts and data collected to enable the researcher to interpret and write the case study report. Data gathered is thus analyzed and interpreted in the context of the real-life setting of the problematic situation this study addresses. Data is typically analyzed concurrently with the data gathering, interpretation and report writing processes (Stake, 1995). This concurrent design can, however, make it difficult for the researcher to maintain oversight of the case study. Creswell (2007) thus suggests that a hierarchical step-by-step process be adopted for case study data analysis to provide the researcher with a structured approach to this complex undertaking (see Figure 66).
After evidence in various forms such as research notes, interviews, emails, photos, etc. has been collected by the researcher, the first step of Creswell’s data analysis process is the initial organization and preparation of raw data through, for example, the transcription of interviews and the sorting and arranging of key data. Case study research typically involves the acquisition of large quantities of data in various formats; data organization allows the researcher to focus the limited amount of time available on the most important data. After data has been arranged, it is important that the researcher obtains a general sense of the information at hand by reading through the organized data. Initial themes and descriptions often emerge from this initial read-through, providing a starting point for the later coding process.
Coding is defined as “the process of organizing the material into chunks or segments of text before bringing meaning to information” (Rossman & Rallis, 2003, p. 171), whereas a code refers most often to “a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data” (Saldaña, 2008, p. 3). The coding process consists of two steps: the segmentation of similar data as evidence into particular categories, and the labeling of these categories in the words of the researched context. Categorization is performed so the researcher can say something meaningful about the data instances assigned to a particular category (Stake, 1995). The first coding cycle typically yields a relatively small number of codes that capture the essence of the case. The number of codes increases throughout subsequent coding cycles, with further refinement and particularization. Creswell (2007) suggests using codes that capture what readers would expect based on the case study introduction and theoretical foundation presented, rather than codes that are unusual or surprising or codes that address a larger theoretical perspective.
Figure 66: Data analysis in qualitative research (Creswell, 2007, p. 185)
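Read end to end, Creswell's process resembles a linear pipeline from raw evidence to interpretation. The sketch below paraphrases the steps in Python for illustration only; the function names and placeholder inputs are invented and do not reproduce Creswell's terminology.

def organize_and_prepare(raw):
    # First step: transcribe, sort and arrange the raw data.
    return sorted(raw)

def read_through(data):
    # Obtain a general sense of the information; note initial themes.
    return data

def code_data(data):
    # Segment the material into chunks and label them with codes.
    return {f"code_{i}": chunk for i, chunk in enumerate(data)}

def describe_and_theme(codes):
    # Build a detailed description and themes from the coded chunks.
    return list(codes.values())

def interconnect(themes):
    # Sixth step: relate themes chronologically or via logic models.
    return themes

def interpret(themes):
    # Final step: attach meaning beyond the properties of the data.
    return f"{len(themes)} theme(s) interpreted"

result = ["interview transcript", "email thread", "meeting minutes"]
for step in (organize_and_prepare, read_through, code_data,
             describe_and_theme, interconnect, interpret):
    result = step(result)
print(result)   # -> 3 theme(s) interpreted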
Researchers usually organize their codes in codebooks that contain code hierarchies and code definitions. Depending on the approach taken and the research context, various code hierarchies are applied and various code types are adopted, such as “setting and context codes, perspectives held by subjects, subjects way of thinking about people and objects, process codes, activity codes, strategy codes, relationship codes & social structure codes, preassigned coding schemes” (Bogdan & Biklen, 2006, pp. 166-67). The use of computer software to organize and facilitate qualitative data analysis is highly recommended, especially for larger case studies. Computer software provides organized storage, quick access to locate information, accelerated coding and multiple analytical functions that help to identify correlations and patterns between codes (Creswell, 2007, pp. 168-169). However, the use of computer software is also subject to several caveats, as the learning curve in learning how to use the software effectively can be steep, and software needs to be selected on the basis of functional requirements (Creswell, 2007). Although computer software empowers the researcher, it does not act as a substitute for him, as “coding can only attend to the verbatim or surface language in the texts, potentially serving as a micro level starting point for doing case study analysis” (Yin, 2009, p. 269).
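A simple sketch can illustrate the two-step coding process and the kind of hierarchical codebook described above. All codebook entries, file names and text chunks below are hypothetical and merely stand in for the kind of evidence a case study would code.

from collections import defaultdict

# Hypothetical codebook: hierarchical code labels mapped to definitions.
codebook = {
    "context/site": "Descriptions of an R&D location and its environment",
    "process/allocation": "How development work is assigned to sites",
    "strategy/location": "Statements about the overall location strategy",
}

# Step 1 of coding: segment similar evidence into categories;
# Step 2: label each segment with a code in the words of the researched context.
segments = [
    ("interview_A.txt", "The lab grew quickly once projects arrived.", "context/site"),
    ("email_B.txt", "Work was often assigned to the lowest-cost site.", "process/allocation"),
]

coded = defaultdict(list)
for source, chunk, code in segments:
    assert code in codebook, f"unknown code: {code}"   # keep the codebook authoritative
    coded[code].append((source, chunk))

# Retrieval by code lets the researcher compare instances across sources,
# supporting categorization and later triangulation.
for code, instances in sorted(coded.items()):
    print(code, "->", [src for src, _ in instances])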
The coding process lays the necessary foundation for formulating a detailed description of the case and its settings that exhibits all relevant conditions, including people, places and events (Yin, 2003). Coding establishes these initial themes and descriptions through the identification of patterns. While many patterns can be relevant to the case study, a pattern may represent “a series of related actions or events” (Yin, 2009, p. 269) that occurs throughout the actual case observed or participated in. In the case of organizational transformation and change, hypotheses may be developed concerning the required changes to draw up the expected patterns that would substantiate such a change, and these expected patterns may then be compared with later observations and the actual outcome of the organizational transformation and change process (Yin, 2009). Alternatively, organizational transformation and change outcomes could also be considered a pattern, enabling comparison between expected and actual outcomes of the change process. An analysis of data on such patterns would compare expected and actual patterns and test them by reviewing relevant variables such as worldviews, designs and opinions to show their thickness and explain the difference or similarity between the expected and actual pattern, thus enabling naturalistic generalizations to be drawn. Here, naturalistic generalizations refer to “conclusions arrived at through personal engagement in life’s affairs or by vicarious experience so well constructed that the person feels it happened to himself” (Stake, 1995, p. 85).
The sixth step of Creswell’s data analysis process interconnects the various themes and descriptions generated through the coding process into a narrative or new theory. Interconnection can be accomplished through the chronological ascertaining and arraying of key events or through the construction and testing of logic models (Yin, 2009). Chronological interconnection explains themes as time-bound cause-and-effect chains, and thus strengthens the logic, understanding and repeatability of the case study. Chronological data are typically easy to obtain considering the ubiquitous availability of computer files and emails with date stamps. Another method of interconnecting themes and descriptions in a case study is to construct logic models that provide an explicit conceptualization of the relationships between various dependent, independent and intervening variables (Yin, 2009). Throughout the case study, the researcher collects data that help to validate, refine and support the logic model. Logic models can also contain rival explanations of events and case study outcomes to provide a thick description of the case and allow the reader to develop their own opinion about events and explanations and relive the case.
The last stage of Creswell’s process of qualitative data analysis is the interpretation of generated themes and descriptions, which provides meaning beyond the properties of the data. The interpretation of qualitative data is defined as “attaching significance to what was found, making sense of findings, offering explanations, drawing conclusions, extrapolating lessons, making inferences, considering meanings, and otherwise imposing order” (Patton, 2002, p. 480). Interpretation identifies lessons learned from a particular case, compares findings against theory, and gives rise to recommendations on future research issues. Interpretations provide compelling explanations to answer the initial “why” and “how” questions of case studies. Interpretations should provide raw data and other exhibits to strengthen logical explanations, and should also incorporate rival explanations (Yin, 2009).
Throughout the case study, the researcher concurrently evaluates the validity and accuracy of the data collected. Here, the qualitative validity process refers to checks the researcher conducts to “determine […] whether the findings are accurate from the standpoint of the researcher, the participant or the reader of an account” (Creswell, 2007, p. 191). Practices commonly used to ensure qualitative research is of sufficient quality include triangulation, the use of thick descriptions, clarification of bias, the inclusion of rival explanations, prolonged engagement, participant validation, and peer debriefing and exchange with the community of practice, concepts discussed in more detail in section 5.2.6 below (Creswell, 2007; Rossman & Rallis, 2011; Yin, 2003).
Data Analysis in the Context of this Thesis
The large amount of data gathered throughout the research project was organized, coded and analyzed with the help of the qualitative research software application NVivo 9.0. An initial read-through of interview transcripts and key documents used in the research project yielded initial themes and descriptions that were elaborated on in several cycles to develop a three-dimensional coding framework (see Figure 67). The three-dimensional coding framework shown below was used to code project data, research notes, interview recordings, transcripts and photographs. The first dimension of codes was determined through the stages of the action design research methodology, with the second dimension capturing the main themes of the case study. Given the case study’s primary focus on organizational transformation and change, the themes selected capture the change process and the journey from problems to solutions across the several stages of the
ADR methodology. The third coding dimension of description codes provides vivid descriptions of the organizational setting and the environment in which the improvement project occurred.
Figure 67: Coding Framework of the SAP LSM Case Study (own graphic)
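As an illustration of how a single piece of evidence can carry codes along all three dimensions, and how date stamps support the chronological interconnection discussed next, the following sketch models coded evidence as a small data structure. The records are invented, and the dimension values only approximate the labels shown in Figure 67.

from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class Evidence:
    source: str
    created: date            # date stamp enabling chronological interconnection
    stage: str               # dimension 1: ADR stage
    themes: List[str]        # dimension 2: from-problems-to-solutions themes
    descriptions: List[str]  # dimension 3: descriptions of the organizational setting

# Invented example records; the dimension values approximate Figure 67's labels.
corpus = [
    Evidence("workshop_minutes.txt", date(2008, 11, 6), "Problem Formulation",
             ["Strategy & Management"], ["Formal Organization"]),
    Evidence("design_review.txt", date(2009, 6, 15), "BIE",
             ["Optimization"], ["Work Design", "Collaboration"]),
]

# Chronological interconnection: ordering coded evidence by date stamp helps to
# reconstruct the time-bound cause-and-effect chain of the change project.
for item in sorted(corpus, key=lambda e: e.created):
    print(item.created.isoformat(), "|", item.stage, "|", ", ".join(item.themes))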
In this study, themes were interconnected by chronological order and logic models. The action design research methodology employed provided a chronology with clear stages of the research undertaking across which themes and descriptions ran. With most data available in digital file format with date stamps, the chronological order allowed for events and actions to be revisited retrospectively, and thus facilitated creation of the case study. In addition, this case study created several logic models to describe, test and explain various phenomena encountered in the case, such as a root cause analysis of problems that emerged and the effect of company policies and practices on global allocation and fragmentation. Interconnected themes provided the foundation on which interpretation occurred.
5.2.6.
Research Quality and Validity
Quality in traditional research is commonly based on rigorous and scientific processes that apply four tests (Yin, 2003):
1) Construct validity: establishing correct operational measures for the concepts being studied;
2) Internal validity: establishing a causal relationship, whereby certain conditions are shown to lead to other conditions, as distinguished from spurious relationships;
3) External validity: establishing a domain to which study findings can be generalized;
4) Reliability: demonstrating that the operations of the study, such as the data collection procedures, can be repeated with the same results.
These tests or constructs employed to assess research quality are, however, founded on the positivist epistemology, and are thus of limited utility in the research design of this study, which employs a pragmatic epistemological stance. This study thus sets out to create an integrated quality framework that applies rigorous quality criteria to all its components, as quality must be present in all parts of the research design to facilitate the rigorous acquisition of knowledge (see Figure 68). The components of the overall research design comprise the epistemological stance, the research methodology and the specific mode of enquiry employed.
Figure 68: Integrated research design quality framework of this study (own graphic)
Epistemological Quality
Epistemology, the theory of knowledge, is concerned with how valid knowledge can be obtained. The test of epistemological validity is whether knowledge obtained is a representation of reality, distinguishing justified belief from opinion. While a pragmatic epistemological stance has been chosen for this thesis, it is
also important to include quality criteria for the interpretive paradigm to interpret meaning in the social context of the organization under study. Therefore, quality is reviewed in the context of both paradigms.
Pragmatic Quality
As previously discussed, the pragmatic paradigm is not concerned with knowledge’s representation of reality, but with whether knowledge “serves to guide human action to attain goals” (Hoshmand & Polkinghorne, 1992, p. 58) and “produce intended outcomes” (Worren et al., 2002). Therefore, this thesis adopts the quality criteria exhibited in section 5.1 (level of adoption and user feedback) to ensure the pragmatic validity of the research findings.
Interpretive Quality
It is also important to include quality criteria for the interpretive paradigm to interpret meaning in the social context of this study. Several authors have put forward principles and conceptualizations to strengthen quality in interpretive research. In the following, the studies of Lincoln and Guba (1985) and Myers (1997) are presented as proxies for related work. Interpretive research uses substitute criteria for the previously mentioned positivist criteria. Lincoln and Guba (1985, p. 43) propose the quality criteria of credibility (instead of internal validity), transferability (instead of external validity), dependability (instead of reliability), and confirmability (instead of objectivity), together with “corresponding empirical procedures that adequately (if not absolutely) affirm the trustworthiness of naturalistic approaches”.
A major component of trustworthiness is credibility. The researcher has to ensure that the study provides credible findings. Credibility starts with the sources of evidence; the researcher has to ensure such sources are credible to be able to establish the overall credibility of the study as a whole. In this study, the researcher went to great lengths to establish the credibility of sources and findings through a prolonged engagement involving persistent observations and triangulations over the course of approximately two years. Furthermore, the research process was regularly reviewed in peer debriefings with my supervisor, other professors and PhD students, whose suggestions often triggered further inquiries and clarifications with members of the research project. Member checking through participants in the project and SAP’s liaison
officer provided another set of checks of the authenticity and credibility of both the case study and the research findings.
The second quality criterion for interpretive research Lincoln and Guba propose is transferability. Transferability concerns how the findings of a study can be transferred to another setting, with the degree of transferability depending on the similarity of the source and the target context. A high degree of similarity between the two contexts may thus suggest the findings from the source context are applicable to a new target context. As the researcher cannot conceive of all future target contexts, they must provide a sufficient description of the source context to facilitate subsequent researchers’ assessments of whether findings from the source context can indeed be transferred to their particular target setting. Lincoln and Guba (1985, p. 125) thus suggest providing a thick description, “which should contain everything that a reader may need to know in order to understand the findings” and judge the transferability of findings to any particular new target context. The findings of this thesis are believed to be transferable to other organizations that have established a globally dispersed R&D organization. To enable subsequent researchers to assess the transferability of the findings of this study, a thick description is provided, including specific details of the researched context such as properties of the products developed by the organization, properties of the artifacts designed to address the problematic situations, and details of the organizational setting and culture.
The third criterion of dependability is used to assess the reliability of research findings and their underlying research process. Lincoln and Guba recommend demonstrating the reliability of the study to establish its dependability through overlapping methods that operate in a similar way to triangulation and the use of an inquiry audit in which the researcher provides evidence that allows the audience to audit the research process and findings independently. In this study, the action design research methodology is rigorously employed, incorporating routines in the BIE and reflection and learning phases that make use of triangulation and checks with members of the organization to verify design outputs and research findings. The study further utilizes various forms of end-to-end audit trails of emails and other evidence stored in the SAP project repository and further refined in the NVivo 9.0 qualitative research database, in which the extensive collection of raw data was stored with date stamps and coding identifiers.
The last criterion of quality in interpretive research is confirmability, which refers to the confirmation of findings. Lincoln and Guba suggest member checks and
peer debriefing to confirm the validity of research findings. In this study, ongoing member checks were conducted throughout the research project through regular project meetings, interviews and steering committee meetings to confirm findings and recommendations. In addition, the case study was reviewed by various members of the organization and the assigned SAP liaison officer to confirm the findings. While member checks ensure confirmability in the context of the researched organization, peer debriefings ensure research findings are confirmed by members of the academic community. Regular meetings with my research supervisor, Prof. Kumar, affiliated PhD students and distinguished professors such as Prof. Samir Chatterjee, Prof. Matti Rossi and Prof. Jos van Hillegersberg confirmed the research approach and the findings obtained, most notably in a research workshop on action design research held in Shanghai on December 6th, 2011.
Myers (1997) proposes that interpretive research should follow several principles to make the research study plausible and convincing to its readers and establish quality standards for field studies. The first and most fundamental principle Myers suggests is the use of the hermeneutic circle in interpretive field research. The hermeneutic circle, similar to the hermeneutic dialectic negotiation process in action research presented previously (compare with section 5.2.1), iteratively contrasts the researcher’s initial preconceptions and prejudices about the meanings of the parts of a complex system with their interrelationships. The researcher starts out with an initial understanding of the parts, moves on to acquire a global understanding of the whole system, then goes back with an improved understanding of its parts. Inquiry thus continuously alternates between the two stages until a comprehensive understanding is accomplished. The second principle Myers lays out is that of contextualization. Contextualization is similar to the concept of recoverability in soft systems methodology, as it requires the researcher to critically reflect and present evidence in its historical and social contexts so readers can re-experience and comprehend how the phenomena emerged. The third principle of researcher-subject interaction states that facts (realities) are socially constructed through interaction between researchers and participants. Researchers need to proactively engage in interactions with participants to construct facts, as participants should not be seen solely as providers of information, but should also be regarded as interpreters and analysts of the context and events.
Fourth, Myers mentions abstraction and generalization as principles to establish the quality of interpretive research. While neither principle is typically associated with interpretive research, generalizations such as the development of concepts, the generation of theory, the drawing of specific implications, and the contribution of rich insights from interpretive research are possible, and should be used by the researcher to strengthen research quality (Walsham, 1995). Generalizations in interpretive research are not, however, assessed by statistical means, but by the plausibility and reasoning used in presenting findings and generalizing from them. The fifth principle of dialogical reasoning enables the researcher to challenge their preconceptions with the data obtained from the study. The principle calls for an awareness of preconceptions and the willingness of the researcher to change them throughout the study as data and findings emerge. The principle of multiple interpretations states that more than one reality exists, and thus allows for the co-existence of multiple and rival interpretations of investigated events and context. Including the richness of multiple interpretations strengthens the quality of the research conducted, as it not only gives the researcher the opportunity to revise initial preconceptions, but also provides the audience with an opportunity to establish their own opinion based on the evidence presented. The final principle of suspicion reveals the effects of “socially created distortions and psychopathological delusions” (Myers, 1997, p. 77) such as personal or organizational agendas and biases. The researcher has to analyze more than just the raw data and evidence obtained, and must also read “between the lines” of such evidence to identify power structures and political interests among participants to uncover the social world behind the evidence.
Research Methodology Quality
Action Research
Action research, being essentially pragmatic, uses Guba’s principle of trustworthiness to ensure rigor. As Stringer (2007, p. 57) points out, “rigor in Action Research is based on checks to ensure that the outcomes of research are trustworthy – that they do not merely reflect the particular perspectives, biases, or worldview of the researcher and that they are not based solely on superficial or simplistic analyses of the issues investigated”. In addition to the four previously presented criteria of credibility, transferability, dependability and confirmability, this thesis also applies the previously discussed canonical action research
(CAR) approach. In CAR, the researcher applies the five principles of researcher/client agreement, the cyclical process model, theory (infusion), change through action and learning through reflection to establish a rigorous inquiry (Davison et al., 2004). Furthermore, this thesis utilizes components of soft systems methodology (Checkland, 1999) in the initial project phase to achieve a thorough understanding of problematic situations and the worldviews of stakeholders and to strengthen the repeatability of the study design for subsequent researchers.
Design Research
Rooted in the pragmatic paradigm, design science is aimed, as previously mentioned, at addressing important and relevant business problems (Hevner et al., 2004) and at producing a viable artifact in the form of constructs, models, methods, instantiations or better theory (Vaishnavi & Kuechler, 2004). To achieve these goals of design science research, this thesis employs the seven principles put forward by Hevner et al. (2004) to establish quality in design science research: design as an artifact, problem relevance, design evaluation, research contributions, rigor, design as a search process and the communication of research.
Action Design Research
In practice, ADR is an augmentation of canonical action research (Davison et al., 2004) and design science research undertaken according to the principles put forward by Hevner et al. (2004). Both methodologies use multiple criteria to ensure research quality. It is thus assumed that ADR, as a combination of action research and design research that incorporates these principles, provides sufficient quality criteria to ensure that research undertakings utilizing the ADR approach indeed produce valid results that change and improve human situations.
Research Strategy Validity
Establishing quality in qualitative research requires that the researcher ensures that “in every qualitative inquiry, findings, interpretations, and conclusions should be assessed for truth value, applicability, consistency, neutrality, dependability, credibility, confirmability, transferability, generalizability, or the like” (Onwuegbuzie & Leech, 2007, p. 246). Various forms of bias or issues of legitimation (Onwuegbuzie & Leech, 2007) can, however, threaten the quality of qualitative research. To mitigate such threats and strengthen rigor in qualitative research, authors generally advocate multiple strategies that resemble those previously
discussed in the context of quality criteria for interpretive research. These strategies include triangulation, the use of rich and thick descriptions, clarification of bias, the inclusion of rival explanations, prolonged engagements, and the use of member checking, peer debriefings and reviews through a community of practice (Creswell, 2007; Rossman & Rallis, 2011). As these strategies have been exhaustively discussed in previous sections, research quality in the context of the case study and interview research strategies selected is discussed only briefly in areas that provide additional insights.
Case Study Validity
In addition to the interpretivistic quality criteria discussed previously (Lincoln & Guba, 1985) and the various related strategies aimed at strengthening quality, authors have suggested different approaches to establish rigor in case study research, such as by means of process (Eisenhardt, 1989b), multiple cases (Eisenhardt, 1989b; Stake, 2005), or longitudinal case studies (Leonard-Barton, 1990), through the application of positivistic quality criteria (Yin, 2003), especially as selection criteria in academic journals (Gibbert, Ruigrok & Wicki, 2008), or through straightforward practical suggestions such as writing up research notes and using a research database (Yin, 2003). To strengthen rigor in this case study, a longitudinal approach was selected, with the researcher being embedded in the research project for almost two years to build up a comprehensive understanding of the intended organizational transformation and change process. Furthermore, the previously mentioned interpretivistic quality criteria of credibility, transferability, dependability and confirmability are used to establish quality in this case study.
Interview Validity
Qualitative research uses observation and qualitative interviewing of participants in their ordinary settings to obtain research evidence (Rubin & Rubin, 2005). Interviewing is more than a collection of questions and answers; the exchange between interviewer and interviewees represents a collaborative effort that creates the interview (Fontana & Frey, 2005) in which “each conversation is unique, as researchers match their questions to what each interviewee knows and is willing to share” (Rubin & Rubin, 2005, p. 9). Interviews are useful if a situation, context or event cannot be described briefly or simply. Researchers often use unstructured interviews to establish initial patterns in the inquiry process, and employ semi-structured or focused interviews after initial insights have been
established that require further clarification. To establish quality in interviewing, Rubin and Rubin (2005) suggest applying the principles of thoroughness, accuracy, believability and transparency. While these criteria mainly echo the previously presented interpretivistic criteria, Rubin and Rubin also provide practical suggestions on how to strengthen the quality of qualitative interviews. The researcher should exhibit thoroughness in the inquiry process by ensuring the completeness of facts and the identification of gaps, requiring, for example, follow-up interviews to establish missing links, the backing up of explanations with evidence from interviews, and the careful preparation and review of transcripts. Interviewees should be selected in such a way that divergent views with rival explanations can also be obtained. The second principle Rubin and Rubin suggest for qualitative interviews is accuracy. Accuracy means that the researcher should be careful in obtaining and recording what is heard in the interview and should produce an exact transcript. Transcripts should be checked for errors and distortions, both by the researcher and by the participants, to ensure their accuracy. Accuracy extends beyond the interview itself, as it also requires the researcher to include the research setting and context in the report to create a thick description of the interview’s circumstances and context. Similar to the principle of credibility put forward by Lincoln and Guba (1985), Rubin and Rubin propose believability as a quality criterion for qualitative interviewing. Believability is sometimes threatened by lies, exaggerations and deceptions that interviewees produce. While these threats to believability can provide important clues when investigating why interviewees use such tactics, the researcher is expected to ensure that what has been said is correct and not a deception, lie or exaggeration. Researchers can counter such tactics through the use of extensive or repeated interviews, the triangulation of multiple sources, and cross-checks with other evidence obtained. While such deception tactics can occur, Rubin and Rubin (2005) point out that interviewees in qualitative interviews tend to be reasonably truthful, as people realize that the researcher also talks to others and thus that triangulation might occur. First-hand information obtained via direct access to key people in the organization is another way to increase believability, as it avoids potential distortions caused by having intermediaries. The researcher should thus secure early access to key people in the research project to obtain first-hand information rather than relying on proxies. The last principle suggested is transparency, which is similar to the interpretivistic criterion of dependability. Transparency allows the
audience to comprehend how data was obtained and analyzed in the research process, and how and in which settings interviews occurred and transcripts were produced, which according to Checkland (1999) enables recoverability. A transparent research process obliges the researcher to stay close to the data and evidence obtained so that they provide sufficient support for the explanations put forward.

Summary

A high-quality research design necessitates the presence of quality in all its components. Therefore, this study applies quality criteria to all components of the research design to achieve an integrated quality framework that ensures the overall rigor and validity of the research findings. The key components of this quality framework are, first, the pragmatic stance selected for this study to ensure the practical utility of its findings; second, the application of interpretivistic quality criteria to ensure trustworthiness is safeguarded in the qualitative case study and interview inquiry process through triangulation, prolonged engagement, member checking, peer debriefings and dialog with the community of practice; and third, the action design research methodology applied, which mandates change through action and the creation of innovative artifacts to address problematic human situations. This study also applies more practical suggestions to establish its quality, such as the extensive use of research notes, audit trails, a research database, and audio recordings that have been transcribed. It is believed that this comprehensive framework establishes the quality of the research design employed in this study, and thus ensures the validity of the research findings obtained. A quality research design should not only be valid, but should also ensure compliance with ethical standards. Ethics in research and the measures applied to ensure ethical research standards are thus reviewed in the next section.
5.2.7. Ethical research considerations
Research in organizations affects people’s lives, and must therefore respect their interests and communities. Social research must be conducted in accordance with generally accepted ethical standards to ensure the well-being of research participants and their organization. This is especially true for the action design research approach selected here, which is aimed at improving a particular organizational situation through close collaboration with practitioners. Failing to follow ethical standards may harm members of the organization, undermine
their credibility in their organization, or even terminate their careers. Following ethical standards not only protects members of the organization, but also creates an open environment that avoids information hiding or distortion and ensures validity in social research; as Sieber (2009, pp. 105-106) notes, “the ethics of social and behavioral research is about creating a mutually respectful, win-win relationship in which important and useful knowledge is sought, participants are pleased to respond candidly, valid results are obtained and the community considers the conclusions constructive”. The interests of organizational members involved in research can differ. The early identification of stakeholders and their interests ensures the latter are considered and respected in the study. Such respect will, over time, build rapport and create trust between the researcher and members of the organization involved in the research. To establish rapport and trust, this study employs an ethical research design by which an initial stakeholder analysis is conducted to identify stakeholders and understand their interests through surveys and discussions undertaken in the initial problem formulation phase utilizing an SSM approach. This is especially critical, as this study is aimed at improving and reorganizing a development organization with a large population of approximately 12,000 software developers. Close interaction with workers’ representatives, such as the workers’ councils in Germany, is critical to ensure mutual respect, the inclusion of their opinions and concerns in the early stages of the organizational redesign process, and successful organizational transformation and change. The major aspects of ethical research design include consent, privacy, the consideration of risks and benefits, and the consideration of specific needs in the case of vulnerable persons (Sieber, 2009). The first aspect, consent, ensures that members of the organization involved in the research freely and willingly agree to participate in the study. Research participants must first comprehend and then voluntarily agree (Israel & Hay, 2006). Researchers should ensure they obtain informed consent so participants understand what they are consenting to and what the researcher expects from them. Informed consent refers to “knowing what a reasonable informed person in the same situation would want to know before giving consent” (Sieber, 2009, p. 111). The consent form should be written in a language research participants understand, and the researcher should be available to clarify questions prior to the intended study. This study uses multiple layers of informed consent to ensure an ethical research approach. At the commencement of the case study process, the researcher and the intended study were introduced in project meetings and to organizational
members interacting with the project team. Informed consent was also obtained from participants prior to the interviews conducted in the later phase of learning formulation. Prospective interview partners were furnished with a comprehensive review of the study and samples of potential questions prior to the planned semi-structured interviews, as provided in Appendices A and B. Participation in interviews was not mandatory for members of the organization; they were free20 to consent based on the informed consent information provided. While signed informed consent in the form of signed paper documents is desirable to document ethical behavior in social science research, such formal signed consent was considered inappropriate for the organization under study. Instead, participants provided email confirmations in reply to interview invitations that contained information on the intended study, together with interview transcript confirmations, as a modern form of signed consent.

20 Interviewees had a choice not to join, and interviews were open and relaxed; however, a possibility remains that participants felt obliged to participate because they also participated in the project.

The second aspect of ethical research design, privacy, allows members of the organization to decide for themselves whether it is appropriate to join the study and what information to disclose to the researcher. An integral part of privacy, confidentiality, enables trust to be built up in the research process; as Fitzgerald and Hamilton (1997, p. 1102) note, “Where there can be no trust between informant and researcher, there are few guarantees as to the validity and worth of information in an atmosphere where confidence is not respected”. While privacy is concerned with people, confidentiality is about data and how they are treated in the research process to avoid harm to participants. In this study, privacy is respected and confidentiality ensured through several measures. First, privacy is respected through the use of informed consent and the ability of participants to freely join or decline interviews in the later stages of the project. Second, statements about data confidentiality were included in the informed consent document to assure potential participants that their details would be treated as confidential and would not be used against them. Third, quotes from members of the organization used in the case study have been anonymized to avoid linkages to individual interview partners. Fourth, all interview transcripts will be deleted after the study has been completed. Fifth, SAP provided a liaison officer who reviewed the case study to avoid organizational harm through the accidental disclosure of information deemed confidential. Sieber (2009, p. 125) notes that the “researcher’s good name and institution may reduce the suspicion of potential respondents”. In this study, the researcher is a former long-term employee of SAP, and built up a considerable internal
reputation in this capacity. In addition, the institution this study is affiliated with, the City University of Hong Kong, is highly regarded as one of the world’s top universities. These factors underscore the professionalism of the researcher and the research institution, thus further reducing any suspicion that might exist. The third aspect of ethical research is the reduction of research risk and the improvement of research benefits, both for the organization under study and the researcher. Research risk refers to the possibility that throughout the research process and thereafter, harm, loss or damage might occur from various sources such as research theory, the research process, the institutional setting, the use of research findings, and the violation of previously described ethical standards (Sieber, 1992). Several research risks have been identified in this thesis and mitigated through appropriate measures. First, this study employs the relatively new research methodology of action design research, the use of which has been documented in only a few cases (e.g., Saarinnen, 2011). Mitigation is achieved through close interaction with the research supervisor, who is familiar with this new methodology, and with one of the co-creators of this methodology, Prof. Matti Rossi from Aalto University, Finland. Second, this thesis represents a longitudinal study conducted over approximately two years with an ambitious research scope and the goal of transforming a global organization affecting close to 12,000 employees. To safeguard the success of this seminal study, regular discussions and feedback cycles with the research supervisor and presentations of preliminary findings to doctoral students ensured continuous feedback and the inclusion of relevant theories and adjustments throughout the research process. Third, considering the intimate nature of SAP’s reorganization, which has exposed various problems and shortcomings of current practice, the use of research findings may be restricted by the organization. In general, SAP can be considered an organization strongly influenced by academia, with over 80% of employees21 holding a university degree and many actively contributing to academic communities. SAP has been the subject of intense academic research in Germany, and maintains close relationships with universities worldwide through its university alliance program and SAP Research. SAP is thus familiar with academic research practices in the organization, and is generally supportive of the publication of internal developments and projects (Boutellier, Gassmann & Zedtwitz, 2008d; Jui, 2010; Neumann & Srinivasan, 2009; Snabe, 2007; Werder, 2006).

21 http://www.sap.com/corporate-de/investors/pdf/GB1997_D.pdf and http://www.diwa-it.de/img/content/081125_sap.pdf
Concerns about the publication of research findings have been further addressed through open communication and interaction with project members and SAP’s liaison officer. The researcher’s empathic interaction, based on long-term experience with the organization, its culture and its key decision makers, has also helped to address such potential concerns as they arise. Contemporary research is not only required to acquire knowledge, but is also expected to be a “moral inquiry” (Kvale, 2007) that improves a particular situation. Maximizing the benefits of research is thus another goal of ethical research design. While the benefit of research is often seen as contributing to a general body of knowledge, Sieber (2009, pp. 132-133) points to a wider range of benefits ethical research can provide, such as valuable relationships, knowledge or education, material resources, training, employment, opportunities for employment, receiving the esteem of others, empowerment, and scientific or clinical outcomes. This thesis provides various benefits to the organization under study, as it goes well beyond the purely observational approach of traditional research. The action design research methodology selected encapsulates organizational improvement as an integral part of the research process. The formalization of learning as part of this innovative research methodology provides a substantial benefit not only for the project, but also for the ongoing renovation of the global R&D organization of the enterprise under study. Furthermore, the formalization of learning will also provide a blueprint enabling other organizations to organize their global R&D units more effectively based on the lessons learned from the SAP case study. The design component of this research further creates artifacts that provide ongoing value for the global R&D organization to ensure its continuous improvement. These material resources include a global strategy, guidelines, IT systems, training resources and other artifacts. It is further assumed that the data transparency provided by this study will allow members of the organization to make better-informed decisions and thus empower employees and managers. In sum, this thesis takes appropriate account of ethical concerns in its design to ensure ethically responsible research outcomes according to generally accepted standards and the ethical research regulations of the City University of Hong Kong.
5.2.8. Summary – Research Methodology
The objective of this chapter is to introduce the epistemological stance and research methodology selected to acquire an understanding of how a global MNC in the software industry can improve its global R&D network. Global R&D network improvement represents a new phenomenon requiring organizational transformation and change through the creation of innovative artifacts. The action design research (ADR) methodology was chosen for this study, as it mandates change through action and the creation of innovative artifacts, and is aimed at accomplishing both rigor and relevance. The ADR methodology is rooted in the pragmatic paradigm, a research philosophy that asserts “action that improves existence” (Goldkuhl, 2004) and utilizes mixed methods for inquiry. As a research strategy that operationalizes the ADR methodology, case study research has been chosen for this thesis in the form of a single longitudinal case study of the SAP location strategy and management project. The case study method represents a qualitative research approach, chosen because a quantitative approach could not yield the rich data required to answer the “how” questions this study is aimed at answering. Concerns over research quality and validity, as well as ethical concerns, have been adequately addressed to obtain valid and ethical research results. The following chapter applies the research methodology presented above and discusses the action design research case study of the SAP LSM project.
CHAPTER 6
CASE STUDY
The previous chapter introduces the action design research methodology and case study research strategy employed to answer the research question stated in section 1.4: “How and under the application of what strategies, processes and governance models does the selected MNC in the software industry establish, maintain and improve its global R&D organization?”. This chapter describes the single case study that provides the raw data for the empirical inquiry. The selected ADR methodology was used in conducting this case study to shape and implement solutions to address the pressing needs of the world’s largest business software company and inform both practitioners and academics. As part of the chosen action design research methodology, active project participation captures the richness of the case and allows for a thick description of the case setting, participants and outcomes – criteria critical for the validity of qualitative inquiry.
The case study of the Location Strategy and Management Project at SAP represents a longitudinal participatory inquiry undertaken over almost two years in a global project that developed strategies, processes and ICT tools to manage and improve SAP’s expansive global R&D network. Direct collaboration with SAP’s board members and strong senior management support allowed unique insights to be gained and solutions to be defined for strategy definition, management, the allocation of teams, and the improvement of globally dispersed R&D networks. It is believed that these unique insights and solutions can be applied to a larger class of problems in other industries, and hence constitute a class of solutions that enables substantial advancement in globally distributed knowledge work. The case study opens with an introduction to the enterprise under study, SAP, the genesis of its global R&D network, and organizational features relevant to the case study. After the introductory sections that structure the case study, the distinct phases of the action design research methodology are described and elucidated: problem formulation, business intervention and evaluation, reflection, and formalization of learning. The final section looks at how the research results may be used in the future and provides recommendations for future research.
The project started with an extensive six-month problem formulation phase in which the newly appointed COO and his roadmap team conducted an in-depth bottom-up review, supported by substantial internal and external research, to identify the most pressing issues SAP needed to address to improve its competitiveness. Along with the need to renovate SAP’s core business process from initial product idea conception to the actual closing of a business deal in the so-called ‘idea to close’ program, five additional supportive projects were identified and approved by the board. This initiative, comprising one program and five projects, was called the COO Program and Projects (COO P&P), with one of the five supportive projects being the Location Strategy and Management (LSM) project, set up to address issues identified in the area of location management and to provide for the effective management and improvement of SAP’s global location portfolio with a focus on its R&D network. The term location management is thus used synonymously with the term R&D network management in this thesis. Based on the findings made in the problem formulation phase, the LSM team initiated initial workshops to clarify how the problem was understood, to exchange experience gained from previous projects aimed at addressing the issues, and to derive initial directions on how to approach the problem. The LSM project was then subdivided into four distinct work streams, with each work stream addressing a subset of the overall issues identified, as different subject matter expertise was required to design solutions in each work stream.
In accordance with the ADR methodology, solutions were designed in iterative business intervention and evaluation (BIE) cycles in each of the four work streams. Frequent formal and informal reflection on project results or feedback obtained led to revisions or refinements in subsequent BIE cycles until a solution was deemed acceptable by project stakeholders. Towards the end of the project, learning was formalized in the form of handbooks, executive presentations and handover training delivered to the newly established Location Strategy and Management organization. The project was successfully closed at the end of April 2010, 21 months after the newly appointed COO took office and set up the COO Program and Projects initiative. Figure 6.1 provides a high-level overview of events in the LSM Project over its two-year duration, with indications of the related ADR phase.
[Figure 69: Detailed timeline of events of the longitudinal ADR case study. The timeline runs from HY2 2008 to HY1 2010 and maps the project events (Gunst appointed COO; Listen and Learn Tour; COO Roadmap Project; board approval for COO P&P; start of the COO-LSM project; LSM planning phase; the LSM work streams Strategy & Process, Data, Defragmentation and Quick Wins; gradual handover to the permanent organization; communication of results and training of the permanent organization; closure of the LSM project) onto the ADR phases of problem formulation, BIE and reflection, and formalization of learnings.]
In addition to the wealth of information provided by participation in a project spanning almost two years, 26 debriefing interviews were conducted with project participants and key stakeholders in the LSM project to facilitate final reflections on the project, the underlying problem causes, the solutions designed, and the future outlook for global R&D network management (see Figure 6-2 for details of the interviews conducted). Quotes derived from these interviews were used extensively and interwoven into the case study to illustrate stakeholder views and sometimes divergent opinions, thus providing a detailed description of the global R&D network enhancement design process. The names of interviewees have been anonymized to ensure their privacy, in line with the informed consent statement and information provided to interviewees prior to the interview (Appendices A and B).
No. | Role | Location | Interviewed on | Means of Interview
1 | LSM-Project Change Manager | Walldorf, Germany | 22 January 2010 | In Person
2 | LSM-Project Director | Walldorf, Germany | 22 January 2010 | Via Telephone
3 | LSM-Project Director | Walldorf, Germany | 29 January 2010 | In Person
4 | LSM-Project Director | Walldorf, Germany | 8 June 2010 | Via Telephone
5 | Development Controller | Walldorf, Germany | 26 January 2010 | In Person
6 | Member of the Support Operations Team | Walldorf, Germany | 29 January 2010 | In Person
7 | COO P&P - LEAN Expert (I) | Walldorf, Germany | 18 February 2010 | In Person
8 | Head of Global Facility Management | Walldorf, Germany | 19 February 2010 | In Person
9 | COO P&P - LEAN Expert (II) | Walldorf, Germany | 23 February 2010 | In Person
10 | SAP Labs Operations Team Member | Walldorf, Germany | 23 February 2010 | In Person
11 | Global Head of SAP Labs Network | Walldorf, Germany | 23 February 2010 | In Person
12 | Project Director - Value Realization Head | Walldorf, Germany | 26 February 2010 | In Person
13 | Former Head of the SAP R&D Effectiveness Program | Walldorf, Germany | 26 February 2010 | In Person
14 | COO Program and Projects Manager | Walldorf, Germany | 1 March 2010 | In Person
15 | Head of Global Facility | Walldorf, Germany | 2 March 2010 | In Person
16 | Manager of the Corporate Strategy Team | Walldorf, Germany | 2 March 2010 | In Person
17 | Senior Vice President HR | Walldorf, Germany | 25 February 2010 | In Person
18 | Chief Operating Officer - ERP Suite | Walldorf, Germany | 2 March 2010 | In Person
19 | Project Executive, Strategic Workforce Planning | Walldorf, Germany | 3 March 2010 | In Person
20 | Head of Development Controlling | Walldorf, Germany | 3 March 2010 | In Person
21 | Head of SAP Research EMEA | Walldorf, Germany | 5 March 2010 | In Person
22 | Head of Global Finance Infrastructure | Walldorf, Germany | 12 March 2010 | Via Telephone
23 | Managing Director SAP Labs Palo Alto | Palo Alto, USA | 13 May 2010 | Via Telephone
24 | Managing Director SAP Labs India | Bangalore, India | 8 June 2010 | Via Telephone
25 | Senior Vice President & Global Head of Business One Development | Shanghai, China | 18 August 2010 | In Person
26 | Program Manager, Business One Release 8.82 | Shanghai, China | 20 August 2010 | In Person
Figure 70: Details of the interviews conducted at SAP
6.1. Introduction to the Enterprise SAP Under Study
As the world’s third largest software company based on market capitalization and the world leader in business software, SAP has researched and developed software in a globally dispersed setup since the early 1990s. Established in 1972 by five former IBM engineers, SAP now has more than 50,000 employees, of whom 15,000 are developers, in 80 locations across more than 50 countries, with annual revenue of more than $10 billion. SAP is currently the market leader in enterprise resource planning software, a suite of business support applications that covers all business and managerial areas, including applications for financial and managerial accounting, materials management, production planning, sales and distribution and many related functionalities, and that is used by more than 183,000 customers in over 130 countries. While SAP has expanded its product portfolio, especially through large-scale acquisitions such as Business Objects, Sybase and SuccessFactors, most revenue is still generated from license sales and maintenance fees for core ERP solutions. To establish a comprehensive understanding of SAP and the decisions made throughout the case study, this introductory section reviews the relevant aspects of SAP: the drivers of globalization at SAP in section 6.1.1, SAP’s global R&D network, the SAP Labs Network, in section 6.1.2, specific organizational features of SAP such as organizational culture in section 6.1.3, the genesis of the COO P&P initiative and the LSM project in sections 6.1.4 and 6.1.5, and finally the dynamics that surrounded the project in section 6.1.6.
6.1.1. Drivers of Globalization at SAP
SAP initially operated R&D activities in an ethnocentric, centralized R&D organization located in the village of Walldorf, Germany. The ongoing success of its integrated business software R/2 in Germany and the German-speaking countries of Austria and Switzerland in the early 1980s led to the founding of SAP International in Biel, Switzerland, in 1984 to coordinate the startup of international activities, with offices opening in these countries in 1986 (Neumann & Srinivasan, 2009). While international expansion and the opening of new offices in Europe and worldwide were first focused on software sales, R&D functions still remained at SAP’s central location in Walldorf. This also applied to the localization of business software, involving the development of country-specific
versions that contained specific statutory functionalities in the language of the target market, which was still done centrally at the company’s headquarters in Walldorf. In line with the literature reviewed in Chapter 2, multiple drivers were identified that triggered the global dispersion of SAP’s R&D organization.

Access to Markets and Innovative Ideas

In 1993, SAP finally opened its first international R&D operation, a technology development center in Foster City, USA. This center later moved to Palo Alto and became SAP Labs America. The move to internationalize its R&D activities through a relatively small listening post in Palo Alto was strongly advocated by Hasso Plattner, one of SAP’s founders, who believed that SAP had to have a presence in this high-technology cluster, where software companies were in the vicinity of many renowned universities and research centers:

“We had clearly planned that we had to go to America. If you’re not successful in America, you can’t be successful in the rest of the world either.” (Hasso Plattner, quoted by Neumann & Srinivasan, 2009)

The mission of SAP Labs Palo Alto was primarily seen as sourcing innovative ideas from the technology hub of Silicon Valley and leveraging the operation’s proximity to the market of the largest economy in the world. This was a unique value proposition, as the managing director of SAP Labs Palo Alto pointed out in an interview:

“Yes it is proximity [to markets]. Obviously the location Silicon Valley is still the technology hub of the world and certainly has been since the inception of the labs in the mid-90s. I think that Hasso realized that growth in the Americas is important and this is the place to be as the heart of hub of the technology universe.”

Access to Talent

In the early 1990s, large multinational companies increasingly adopted ERP solutions, supported by additional international SAP sales offices and SAP partners that sold and implemented SAP solutions internationally, even in countries where SAP itself did not have a presence. The growing worldwide demand for the core SAP ERP product, R/3, created considerable demand for talented software developers to develop new functionalities and country versions to fulfill customer requirements worldwide. The situation was further exacerbated by SAP’s transformation into a multiproduct company, which required additional
software developers to conceive of and develop new large-scale applications. These so-called “new dimension products,” which provided additional functionality such as business intelligence, advanced planning and optimization, customer relationship management and catalogue-based purchasing, could be linked with customers’ existing ERP solutions and extended their functionality considerably. The increasing demand for software developers, however, far exceeded the limited supply of suitable university graduates in SAP’s German home base, which at that time was able to supply only a few hundred skilled computer science graduates each year. At the same time, the Internet boom attracted talented software developers in Germany to join startup companies, a dramatic situation, as the manager of the SAP Labs Network remarked:

“Towards the end of the 1990s the situation was dramatic, we noticed that we couldn’t get any more developers to Walldorf. We [could] no longer develop the products as fast as the market would like to consume them. Then there was the internet boom, startup hype etc. - the people simply moved elsewhere.”

Attracting foreign talent was also difficult due to the limited attractiveness of the provincial city of Walldorf in the international competition for talent, as a senior vice president of HR remarked:

“To move out of Germany and develop somewhere else was a factual problem, as we didn’t get enough people here. Also all that Microsoft has in the USA – an attractive country, attractive location, attractive for foreigners – all that we didn’t have or [was only] very, very limited in Germany.”

He also pointed out that the duration of the hiring process, the “time to fill,” was much longer in Germany than in foreign subsidiaries that offered quicker turnaround times, which accelerated the rapid growth of foreign locations:

“The time to fill [a new position] is considerably shorter in foreign locations. This is due to the considerably shorter notice periods, which are typically around 14 days. Here [in Germany] they are up to six months, and if it is within a quarter of the year you sometimes have to wait nine months, and that is not acceptable in [a] rapidly growing environment.”

At a board meeting in 1997, SAP decided that it needed to reach out globally beyond its existing SAP R&D centers to either attract global talent to existing
R&D centers or open new R&D hubs in new regions to acquire more suitable software developers (Neumann & Srinivasan, 2009). While it was difficult to attract international talent to the provincial village of Walldorf, a task force soon identified two new locations in which to expand SAP’s R&D presence via regional hubs. The first location was Sophia Antipolis in southern France, and the second was Bangalore in India, where SAP already had a small presence through its 1997 acquisition of the sales force automation company Kiefer & Veittinger. Both locations were approved at a board meeting in 1998 and provided with a substantial budget to start operations. SAP initially had the idea of capitalizing on the globalization of software development by applying the “follow the sun” principle, by which software is developed on a concurrent 7 x 24 basis. However, this idea was quickly abandoned due to the strong ties between developers and their code, as one development controlling manager explained:

“The idea, which we had to produce software 7 x 24 that a piece which is developed here and passed on to another [in a different time zone]. That doesn’t work. It fails, because we have too much code ownership and too different cultures how people work. That is all brainpower, you cannot simply pass on a piece of code to another person and say ‘now you work the next 8 hours on that.’”

Despite the global trend toward offshore outsourcing of software development (Moitra, 2008), software development at SAP is done exclusively in-house in the form of captive or intra-firm offshoring rather than as offshore or inter-firm outsourcing. While offshore R&D outsourcing potentially offers additional cost savings and flexibility, R&D was seen as a core competency, and the risks of intellectual property appropriation and the issues associated with integrating outsourced R&D units into the existing development organization were considered too high. To further foster the growth of innovative solutions, SAP concentrated its medium to long-term research activities in 12 locations within the global SAP Research organization. SAP Research’s mission is to identify and shape emerging IT trends and generate breakthrough technologies through applied research (SAP, 2009). While software development focused on the short to medium-term development of new functions to satisfy customer requirements, the 400 researchers in SAP Research focused on new technology trends and ideas with a timescale of about three to five years from conception to application to
contribute to future product development. In sum, it can be seen that access to talent was a decisive factor in SAP’s globalization, required to overcome the limitations of a home country that was unable to supply enough talented developers.

Factor Cost Advantages and Flexibility

While access to talent and proximity to markets were initially the main drivers of the global dispersion of R&D at SAP, the substantial labor cost differential between developers in industrialized and developing countries soon came into focus. Continued cost pressures led the financial community in SAP to advocate low-cost locations for further growth, as the head of development controlling outlined:

“There was clearly the understanding that we cannot continue growing so unrestrained as before as it was simply too expensive. We have the locations in India with well-educated people, a growth market and relatively low labor costs when compared to Germany or America. So let us invest in India and therefore it was said ‘growth please in low-cost or cost-effective locations’, which created an additional run on India.”

SAP labs in low-cost locations such as India initially operated in an extended workbench setup, executing tasks sent from headquarters in Walldorf or from other high-cost locations such as Palo Alto. Their ability to execute these tasks satisfactorily at a considerably lower cost than was possible in high-cost locations further fueled the growth of the labs in Bangalore, as the managing director of SAP Labs India recalled:

“The reason why we came here way back in 1998/99, and I was fortunately there in the early stage of the establishment, we were in many ways a good low cost center. The focus was on ‘how can we have more people to deliver more’. That was a very simple equation on if you have more people you can do more. And in many ways we acted, even if we don’t like to call it that way, as an extended workbench, where a lot of tasks were given to us and we executed it. We were very good at execution. This is how we worked in the last ten years and we saw an explosive growth in Labs in the last seven years. From 150 odd people we reached 4000 people in the last seven years.”

The lower cost structures of locations in developing countries also enabled SAP to execute business cases that would not have been viable in higher-cost
locations, as the head of the SAP Labs Network elaborated:

“We were an early mover in 1998 when we started in India. One reason why certain labs were growing so fast was the cost structure. It was recognized that the costs in India were low for the near future and that certain projects and business cases only were feasible when the project was done in India.”

The large talent pool in Bangalore and the partner companies around SAP Labs India, which SAP itself often describes as its ‘ecosystem’, also offered considerable flexibility to accommodate short-term requests. One senior vice president at SAP used the term ‘breathing locations’ in this context to describe locations that could ramp up the number of employees within a short timeframe to accommodate demand spikes, and could quickly ramp down in the event of weaker demand due, for example, to a cooling global economy. The head of the SAP Labs Network saw this flexibility as a unique value proposition of the labs in Bangalore:

“We have very different ecosystems around us depending on the lab’s location, which we also use differently, [such as] in Bangalore where we have the unique opportunity to balance short-term demand for [the developers we have in] software development. Suddenly something has to be tested with 200 developers, or you have to change the software to accommodate accessibility requirements that were developed with 150 developers from [a] third party in only two weeks. This you can only get done at such a location.”

While many people saw the growth of the labs in Bangalore as explosive, one senior vice president of HR put the growth SAP experienced into perspective by comparing it to other players in the Indian market:

“Compared to some other providers [such] as WIPRO, or if you look at large software companies in the Indian market that grew [by] 10,000 employees per year, really 10,000 employees per year, we have restrained ourselves considerably. Especially in the low margin area, there would have been considerably higher growth possible.”

While low-cost locations such as Bangalore were initially seen as an extended workbench, this view has changed in recent years as the capabilities of employees grew over time, resulting in more innovative work and complete product ownership being allocated to low-cost locations such as Bangalore, as the Chief
Operating Officer of ERP Suite Development recalled:

“Coming from a follow the sun principle [...] that wasn’t practicable, we arrived at the conclusion that the location strategy is rather built on the principle that we distribute topics to globally distributed locations in a way that it makes sense, with until now a strong focus on Walldorf, as that [is the] location where we have most of the business ownership, at least when we talk about the traditional components. In new components also, locations such as Palo Alto and the locations in Asia Pacific like Bangalore or China, partly with a long tradition - coming from an extended workbench scenario to the allocation of complete product topic ownership.”

As many still saw India as an extended workbench only, SAP’s former CEO Leo Apotheker emphatically underscored this shift in a newspaper interview:

“I don’t view India as a labor arbitrage place. That is a fundamental mistake. We view India as a source of very skilled labor so we are looking not necessarily for numbers. Our strategy is to develop high-end capabilities, to use our lab capabilities in India to create complete sets of software products.”

Based on the feedback provided by the interviewees, the firm globalization theories discussed earlier, namely the eclectic paradigm and the Uppsala model of globalization, provide a suitable foundation for explaining SAP’s globalization. According to the eclectic paradigm, foreign direct investment (FDI) occurs in the presence of ownership, location and internalization advantages. In the case of SAP, ownership advantages were seen in the form of IP appropriation risk avoidance, location advantages in proximity to markets, access to talent and factor cost advantages, and internalization advantages in the better integration of foreign developers via captive investments rather than offshore outsourcing. SAP’s globalization also occurred as a gradual process, with low initial commitments gradually extended over time as teams built up their trust in the execution capabilities of foreign subsidiaries. In sum, it can be seen that SAP’s globalization provided the advantages described by the eclectic paradigm and occurred as a gradual process as conceptualized in the Uppsala model of globalization.
6.1.2. The SAP Labs Network – SAP’s Globally Dispersed R&D Network
Since the opening of the first R&D center in Foster City in 1993, SAP has grown its number of R&D centers to over 80. At the beginning of 2010, Germany was still the largest R&D location with over 8000 researchers and developers, followed by Bangalore (close to 5000 developers), Shanghai (approximately 1500 developers) and Palo Alto (approximately 1200 developers). Thirteen of these R&D centers have qualified as “SAP Labs”, and are large SAP development locations of between several hundred and more than 5000 employees (see Figure 6-1). The executive management team has typically promoted development centers to the status of an “SAP Lab” if their size exceeded about 500 employees. In a few cases, development centers have been promoted despite their smaller size if the development center gained strategic importance due, for example, to its location in a strategic market or its proximity to growing research clusters with a strong academic community (such as SAP Labs Brazil in Sao Leopoldo with approximately 400 employees).
[Figure: World map of the SAP Labs network, showing SAP Labs Germany, SAP Labs Canada, SAP Labs US, SAP Labs Brazil, SAP Labs Israel, SAP Labs China and SAP Labs India, with labs classified by size as big (> 500 employees), medium or small; the remaining legend thresholds reference 100 employees but are not fully recoverable from the source.]