Multitask Agency, Modular Architecture, and Task Disaggregation in SaaS

Anjana Susarla, Anitesh Barua, and Andrew B. Whinston

Anjana Susarla is an Assistant Professor in the Department of Information Systems and Operations Management at the Foster School of Business at the University of Washington in Seattle. She received a B.S. in Engineering from the Indian Institute of Technology, Chennai, India, an MBA from the Indian Institute of Management, Calcutta, India, and a Ph.D. in Information Systems from the University of Texas at Austin. Her research has been published or is forthcoming in a variety of outlets, including Information Systems Research, Journal of Management Information Systems, MIS Quarterly, and IEEE Computer. She has received several research awards, including the William S. Livingston Graduate Fellowship for 2002–3 at the University of Texas and the 2009 Microsoft Prize from the International Network for Social Network Analysis, and was a 2007 Steven Schrader Best Paper Finalist at the Academy of Management Conference.

Anitesh Barua is the Mr. and Mrs. William F. Wright Jr. Centennial Professor for Management of Information Technology and University of Texas Distinguished Teaching Professor in the Department of Information Systems, Risk and Operations Management at the McCombs School of Business at the University of Texas at Austin. He is an associate director of the Center for Research in Electronic Commerce. He received his Ph.D. from Carnegie Mellon University. He has published papers in a variety of outlets such as Information Systems Research, Journal of Management Information Systems, MIS Quarterly, Organization Science, Journal of Productivity Analysis, and Sloan Management Review. He has received several awards for his research and teaching, including the William W. Cooper Doctoral Dissertation Award from Carnegie Mellon University and the Stevens Piper Professorship.

Andrew B. Whinston is the Hugh Roy Cullen Centennial Chair Professor of Information Systems in the McCombs School of Business at the University of Texas at Austin. He is also the director of the Center for Research in Electronic Commerce at the University of Texas at Austin. He received his Ph.D. from Carnegie Mellon University. He is the author of over 300 articles in various journals such as the Journal of Economic Theory, American Economic Review, Review of Economic Studies, Journal of Political Economy, Journal of Marketing, Marketing Science, Operations Research, Management Science, Journal of Management Information Systems, IEEE Transactions on Knowledge and Data Engineering, INFORMS Journal on Computing, Information Systems Research, MIS Quarterly, and others. He serves as the editor-in-chief of Decision Support Systems. He is a recipient of the LEO Award from the Association for Information Systems.

Journal of Management Information Systems / Spring 2010, Vol. 26, No. 4, pp. 87–117. © 2010 M.E. Sharpe, Inc. 0742–1222 / 2010 $9.50 + 0.00. DOI 10.2753/MIS0742-1222260404

Abstract: We examine contract choices in the provision of "software-as-a-service" (SaaS), a business innovation that transforms information technology (IT)
resources into a continuously provided service. We draw upon agency theory and modularity theory to propose that one of the central challenges in service disaggregation is that of knowledge interdependencies across client and provider organizations. The resulting lack of verifiability of certain tasks results in a multitask agency problem. Our key research questions involve (1) the suitability of high- versus low-powered incentives in SaaS contracts when the outsourced tasks involve business analytics that are difficult to verify, and (2) how such contract choices are affected by the modularity of interfaces between the client and the provider. Analysis of data collected from 154 providers of SaaS offering a range of IT services supports our contention that when contracting for business analytics characterized by knowledge interdependencies across clients and providers, incentives should be “low powered.” Modularity in the interfaces of the service provider increases the desirability of high-powered incentives in such situations. Our results are robust after accounting for endogeneity issues arising from unobserved matching between service providers and the nature of IT services outsourced by clients. With the increasing importance of information systems in services, this paper suggests that arm’s-length relationships and high-powered incentives may be ineffective in incentivizing providers to perform on complex business analytic tasks, unless accompanied by the modularization of interfaces. Key words and phrases: endogenous matching, information technology, modularity, multitask agency, outsourcing, service science, services.

In recent years, advances in communications technologies have made it significantly easier to structure work outside the traditional boundaries of the firm, leading to the unbundling of the corporation [34]. A related development is modularization, where a tightly integrated hierarchy is supplanted by a loosely coupled network of organizational actors [57], enabling firms to distribute production activities to a global network of suppliers without extensive communication [26]. Information technology (IT) and business service providers are likewise attempting to develop a factory model, where a standardized process can be deployed across multiple organizations [64], enabling firms to access the productive capabilities of specialized suppliers [41]. Motivated by the emergence of service science approaches and the new business models therein [14], this paper seeks to develop a richer understanding of the contracting and task disaggregation problems that arise in these settings. In this study, we examine the economic implications of "software-as-a-service" (SaaS), also known as application service providers (ASP), where IT services are delivered to user organizations over the Internet.1 SaaS transforms IT resources into a continuously provided service [15], where the provider bears final responsibility for service execution. The stream of IT services provided by SaaS is embedded in the business processes of outsourcing firms, creating interlinked organizational roles and relationships across the provider and the client firms. SaaS offers an attractive option to businesses frustrated with the high installation cost and uncertainty pervading traditional IT initiatives [32, 49].2 With the emergence of service science and models of Web
services, the standardization of business processes and frameworks to evaluate process execution [22] promise to usher in a nearly decomposable organization [60]. Fundamentally, however, IT services are not organized like loosely coupled manufacturing processes. The consumption of a service and its production are inseparable to such an extent that the user is often a co-producer [69] and supplies key inputs to the service production process. Thus, roles and responsibilities are not always clearly demarcated between providers and clients, and knowledge interdependencies arise across the two organizations. Standards to measure process execution by the service provider, including processes to trace failures in service execution, do not always exist, leading to difficulties in structuring contracts to reward performance. Industry evidence suggests that in spite of the promise of the SaaS model, its success has been mixed at best. Contractual problems were often cited as an impediment to the proliferation of SaaS [23, 31]. This paper examines the role of knowledge interdependencies between clients and vendors in an SaaS setting and the contracting issues that arise due to difficulties in task disaggregation. Characterizing a provider of IT services as the agent and the user of IT services as the principal, we describe the agency conflict between an SaaS provider and the client. A part of the service provided by SaaS consists of the automation of tasks that involve the substitution of labor and IT for increased efficiency. As an example, consider the outsourcing of a customer relationship management (CRM) application that entails automation of tasks such as campaign and lead management. However, CRM also includes service and marketing analytics and customer service knowledge management (e.g., these are a subset of services offered by www.salesforce.com).
Autoresponse e-mail and e-mail prospecting are examples of tasks that involve substitution of labor and IT for increased efficiency, where the provider should process and update transactions efficiently. Typically, service-level agreements (SLAs) promise a desirable level of service by the SaaS on data center and network consistency, infrastructure and server hardware availability, compensation for outages, and so forth, which reflect business automation needs such as response times [5, 7], availability, and quality of service [7, 24, 55]. By contrast, analytics and business intelligence, global forecasting, and customer service knowledge management involve developing information capabilities that lead to better decision making, which we refer to as business analytic capabilities.3 How well such activities are performed depends on (1) the quality of the analytical tools and (2) the quality of data that are used as inputs. The first requires an understanding of the client-specific routines and processes, and the second needs integration between the SaaS service and the client, once again necessitating knowledge of the client organization. Such knowledge interdependence between the client and the provider is difficult to contract upon, resulting in the provider's effort being observable, but not verifiable, in the parlance of the contracting literature [38, 39, 51]. Contracting for SaaS is either on a fixed price or time and materials basis. In the former, the SaaS charges a fixed fee per month that is specified at the start of the
outsourcing initiative. The latter compensates a provider for the person-hours of effort expended each month. A fixed price contract is an example of a high-powered incentive contract, where a service provider is the residual claimant of cost-saving effort [51].4 Tightly defined contracts that include detailed clauses for service scope and comprehensive measures of performance provide end users greater control over costs. Compensating SaaS based on costs incurred, on the other hand, is a low-powered incentive contract. However, time and materials contracts are advantageous when delivering IT services specific to client needs. In the presence of knowledge interdependencies across organizations, the difficulty of communicating tacit requirements creates a potential for incentive conflict in the form of a multitask agency problem [38]. High-powered incentives on a subset of tasks that are verifiable create a distortion of effort away from those that are not readily verified objectively. The lack of codification and standardization in business processes and the need for communication of localized and situated knowledge creates interdependencies between providers and clients. Thus, it becomes difficult to verify effort on business analytic capabilities. SLAs tend to emphasize aspects of verifiable performance (such as reliability, availability, etc.), accentuating the distortion in incentives faced by the provider. Building upon multitask agency theory [8, 28, 38, 39], a sizable literature in procurement [9, 12, 35, 42], and modularity theory [11, 18, 57, 63], we explore the link between agency costs, modularity in provider architectures, and contractual incentives in disaggregating IT services. Our specific research questions include the following:

RQ1: Are low-powered incentives more desirable than high-powered incentives when the outsourced task has multiple dimensions or facets that differ in verifiability?

RQ2: How does the modular design of interfaces between the client and the provider impact contract choice for tasks with verification challenges?

RQ3: How do firms select service providers for tasks with both automation and analytics dimensions?

Using data from 154 SaaS providers, we find evidence that when process performance is difficult to verify, incentives should be low powered. Our results are robust after accounting for potential endogeneity from unobserved matching between providers and firms. Further, we show that modular architectures may mitigate some of the agency costs in disaggregation of tasks by minimizing interdependencies. Task decomposition along modules reduces the need for communication and the interdependence between provider and client. The interactions between the client and provider become systematized through structured interfaces as a result of modular architectures. Thus, modularization could potentially supplant the low-powered incentives supported by relational exchange. We also find that clients select providers of business analytic IT services based on the latter's capabilities to deal with the special demands posed by the disaggregation of hard-to-verify tasks.

One of the contributions of this study is to demonstrate how the emerging models of IT services can lead to multitask agency challenges in outsourcing. Another contribution is to analyze agency costs in service disaggregation. While firms understand the cost and efficiency considerations in IT services outsourcing, less understood are the difficulties in unbundling IT services. Our results are relevant to the context of IT services and to the emerging area of service science research.

The Context of SaaS

In this section, we describe the interdependencies that are created as a result of service disaggregation, which may lead to multitask agency problems for certain outsourced IT services (discussed in the next section). In the earlier example involving the outsourcing of a CRM service, a customer service knowledge repository must not only be populated with rich content that covers the various contingencies that may arise in interactions with a customer but must also present that information to customer service representatives in formats that are easy to comprehend and use. Similarly, there must be a way for service representatives to learn as more experiences are added. Because CRM is an activity that involves multiple business functions, salespeople should be able to use the SaaS to view customer balances, monthly sales summaries, and overall account histories (which may reside in applications within the client organization), requiring integration of the SaaS with other applications in the organization. To get useful information from the outsourced application, the provider should expend effort understanding the business domain and provide interfaces with the legacy and proprietary applications of the client.5 The service performed by the SaaS must therefore be well integrated into a chain of other functions in the client organization, all of which contribute to the eventual outcome, whether it is lead generation or sales conversion.6 We therefore distinguish between the process verifiability that dictates the choice of contracts and the verifiability of the eventual organizational outcome (such as lead generation). We highlight two factors that lead to tight coupling between the client and the provider: (1) the need for client-specific knowledge and (2) the need for technical interfaces specific to the client organization.
First, context-specific knowledge essential to performing a task is embedded in organizational routines and information channels [36], and is difficult for the provider to elicit and for clients to communicate [65]. Such knowledge interdependencies across clients and providers can be addressed in two ways. One approach is for the client and the provider to work together in creating boundary-spanning problem-solving processes that lead to joint learning and a rich partner-specific experience [25, 44], which translates into improved provision of tasks where context-specific knowledge is important. Another approach is to separate task boundaries from knowledge boundaries [63]. In prior IT outsourcing arrangements, methodologies such as structured requirement definition and frameworks such as the capability maturity model (CMM) were used to describe complex processes and articulate business requirements [20], thus achieving a demarcation of task and knowledge boundaries. That is, the client has the knowledge about the organization while the provider understands the technological aspects of the system, and each knows where expertise resides in the other organization [27].
The second challenge is the difficulty in providing technological interfaces (between the client and the provider) specific to the platforms and systems in the client’s organization. Technological interfaces across systems define interactions across components comprising a technical system [30]. When tasks do not call for in-depth knowledge of the client organization, an organization can easily hand over a process to an SaaS provider. Exchange of information and data can take place through a standard technical interface between the provider and the client; for instance, automating data transfer from a Web-based office productivity tool to databases in the client’s organization. When client-specific knowledge is important and when a business process involves multiple functional areas, providing back-end integration needs to be specific to the platforms and systems in the client’s organization, demanding an in-depth understanding of the proprietary information systems in the client’s organization [16, 37]. Considerable effort needs to be spent by the provider in achieving common data syntax, definition of documents, and interpretation of service definitions [6, 43]. As we note in the next section, these requirements may have a significant impact on the choice of the contract. Table 1 identifies a set of outsourced task characteristics for hosted CRM services that demand client involvement as well as provision of both automation and business analytics. Providing automation capabilities leads to some efficiency benefits through lower labor requirements and increased accuracy and timeliness of customer data. For instance, automated audit trails can speed up processing of customer inquiries and after-sales support. Similarly, business analytics can create value by enabling retention and promotion efforts. However, business analytics are not as effective without high levels of automation. 
For instance, manual provisioning of digital dashboards would be ineffective for the user organization without automated data visualization tools, while high levels of automation with low levels of analytics would not provide benefits in the area of revenue generation. At the same time, without business analytics tailored to the client organization, an automated campaign tool might send a promotional offer to individuals who see no value in the offer, and who might even perceive that they are being spammed. Thus, a client organization outsourcing CRM tasks derives benefits from both the automation and analytics dimensions. However, business analytic tasks are likely to have higher interdependencies (and lower verifiability) than automation tasks, and hence the contract must provide incentives for the provider to exert sufficient effort on analytics. Table 1 also depicts the client-side inputs, which are critical to obtaining business value in deploying SaaS. For complex applications involving business analytics, clients need to be actively involved in a number of activities, such as participating in requirements gathering, mapping the service provided to business goals, ensuring that the stream of services provided by the SaaS can enhance desired organizational outcomes, and so forth. We draw upon discussions in the prior literature on modularity theory and the conceptualization of a design structure matrix in Baldwin and Clark [11] to create a representative task structure matrix in Table 2 for hosted CRM, where the client hands over CRM functions to an SaaS provider. Table 2 depicts task interdependencies for the set of tasks identified in Table 1.

Table 1. Task Structure in Hosted CRM Services

Task 1. Map SaaS to business requirements
  Purely transactional (or automational) component: This task requires informational capabilities rather than automation.
  Analytics component: Configure the SaaS service to function within the stream of processes in the client organization; standardize processes across different sales channels.
  Client’s responsibilities: Communicate local knowledge and business domain.

Task 2. Streamlining customer-facing activities
  Purely transactional (or automational) component: Process automation on all aspects of the customer encounter.
  Analytics component: Analytics tailored to the client organization.
  Client’s responsibilities: Configuration and change management.

Task 3. Forecasting
  Purely transactional (or automational) component: E-mail prospecting; 360-degree view of client transactions.
  Analytics component: Standardize data capture across different sales channels; provide custom reporting tools.
  Client’s responsibilities: Map forecasting to internal performance measures.

Task 4. Provide digital dashboard
  Purely transactional (or automational) component: Data visualization tools; rapid response, reliability, and availability.
  Analytics component: History of customer interaction combined with analytical tools.
  Client’s responsibilities: Communicate process flow to enable customer profiling.

Task 5. Integration with client-side billing systems
  Purely transactional (or automational) component: Provide interfaces to streamline data, automate data transfer, and provide application monitoring scripts.
  Analytics component: Custom tools and integration with client databases; achieve common syntax in data standards and document definitions.
  Client’s responsibilities: Provide inputs on standards and systems architecture in the client organization.

Task 6. Sales tracking
  Purely transactional (or automational) component: Automated campaign management and lead conversion.
  Analytics component: A purely automational task that does not involve analytics.
  Client’s responsibilities: Does not require inputs from the client organization.

Task 7. Autoresponse
  Purely transactional (or automational) component: Automated follow-up with customers; automated audit trail.
  Analytics component: A purely automational task that does not involve analytics.
  Client’s responsibilities: Does not require inputs from the client organization.


Table 2. Task Structure Matrix Representing Interdependencies for the Tasks in Table 1

Notes: The tasks are depicted on both axes. Tasks 1–2 are performed by the client; tasks 3–7 are performed by the provider. Tasks 3–5 denote the business analytics dimension, and tasks 6–7 denote automation tasks. The Xs identify the interdependencies between processes performed within an organization (i.e., within the SaaS or within the client). Xs highlighted in boldface indicate the knowledge interdependencies between processes performed by the SaaS and those performed by the client. The rectangular box indicates a potential task module, i.e., a service dimension.
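The structure of such a task structure matrix can be made concrete in a short sketch. The dependency sets below are illustrative stand-ins consistent with the notes above, not the paper's actual matrix entries: analytics tasks depend on client-side knowledge and on automation tasks, while automation tasks are detachable from the client.

```python
# Sketch of a task structure matrix (design structure matrix), with
# hypothetical dependencies. Task groups follow the notes to Table 2:
# tasks 1-2 are client tasks; 3-5 are provider analytics; 6-7 are automation.
CLIENT_TASKS = {1, 2}
ANALYTICS_TASKS = {3, 4, 5}
AUTOMATION_TASKS = {6, 7}

# depends_on[t] = tasks whose execution task t depends on (illustrative).
depends_on = {
    1: set(),
    2: {1},
    3: {1, 6},  # analytics tasks draw on client knowledge and automation output
    4: {2, 6},
    5: {1, 7},
    6: {7},     # automation tasks depend only on other provider tasks
    7: set(),
}

def cross_org_dependencies(tasks, client_tasks=CLIENT_TASKS):
    """Knowledge interdependencies: provider tasks that depend on client tasks."""
    return {t for t in tasks if depends_on[t] & client_tasks}

print(cross_org_dependencies(ANALYTICS_TASKS))   # analytics is tightly coupled
print(cross_org_dependencies(AUTOMATION_TASKS))  # automation is detachable
```

The sketch reproduces the asymmetry discussed in the text: every analytics task has a cross-organization dependency, whereas the automation tasks form a module with no client-side interdependencies.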

The rows and columns represent processes, and the marked Xs in cells depict dependencies between processes. Client-side processes are shown in the first two columns (and rows), and tasks performed by the provider are depicted in the remaining columns (and rows). Columns 6 and 7 represent automation tasks, and columns 3, 4, and 5 involve business analytics. Two processes are defined to be dependent (an X in the task structure matrix) when the execution of one task depends on the other. For the set of tasks identified in Table 1, knowledge interdependencies between tasks are marked with boldface Xs in Table 2. Automation tasks shown in columns 6 and 7 are easily detachable from clients and thus have no interdependencies with client-side processes. Analytics demands intense communication with the client and boundary-spanning routines to understand processes and localized knowledge that reside in the client's organization. In this example, the performance of business analytic tasks depends on the automation dimension, but not vice versa. For SaaS to effectively provide business analytic capabilities such as forecasting or dashboards that demand business intelligence, it is important to have a thorough understanding of the user organization. Clients need to map the fit between what the provider can deliver and what the organization needs, including streamlining existing processes, which the provider needs to understand in tailoring the service to client needs. For processes that need sticky or localized knowledge, the resulting knowledge interdependence across organizations needs to be managed through interfirm processes and communication [62] or integrated problem solving [65]. Table 3 presents the qualitative evidence gathered from providers in our sample. About half the providers indicate the importance of customization to client needs, long-term partnerships with clients, and understanding the client's outsourcing needs, all of which demand interfirm collaboration.
However, only a few of the providers we interviewed identified mechanisms such as knowledge sharing routines, detailed documentation, or requirement gathering processes that allow a structured representation of knowledge interdependencies, which could ameliorate some of the measurement problems in providing business analytics. By contrast, most providers had in place networking standards to measure compliance with SLAs.

Table 3. Summary of Qualitative Responses from the Interviews with Providers

Description of process*                                                  Number of respondents
Customization of solution to clients                                      51
Tools to ensure reliability, security, response times, availability       98
Strength of industry experience, knowledge of client-specific needs       55
Strength of partnerships, importance of long-term relationships           81
Importance of understanding client's application outsourcing needs        65
Investment in technical standards in privacy and security and training    90
Investment in process standardization such as the CMM, ISO                 6
Documentation and client-centric requirement gathering                     8

* Numbers represent frequency of responses indicating that providers supply these dimensions of service; total = 154.

Theory and Hypotheses

This paper examines how knowledge interdependencies may lead to a multitask agency problem due to the difficulty of verifying some tasks (or task dimensions) and how contracts should be chosen to incentivize providers to perform on such tasks. Automation of business tasks is measured using SLAs on verifiable dimensions of effort [55].7 Measuring the effectiveness of business analytics is complicated by the idiosyncratic technical interfaces specific to a client [30] and knowledge interdependencies that create "synergistic specificity" or tight coupling [58, p. 1154] between the client and the SaaS provider.8 Thus, we assume that performance on the analytics dimension is difficult to verify [38]. The provider's ability to successfully perform each task dimension entails resources devoted to entirely distinct areas of expertise, such as the domain knowledge of the provider and the reliability of the service [46]. It is also difficult to separate the two task dimensions; for instance, a single process such as analyzing the sales patterns of a potential buyer requires that the SaaS solution provide both automation and business analytics. We therefore assume that effort on each task dimension is substitutable in the provider's cost function and that the cost attributable to either dimension is difficult to assess [9], making it challenging to achieve the task separation prescription of Holmstrom and Milgrom [38].
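The contrast between the two dimensions can be illustrated with a minimal sketch: an SLA metric such as availability is mechanically computable from outage logs, and hence verifiable by a third party, whereas no comparable computation exists for analytics effort. The target and figures below are hypothetical, not terms from the contracts in our sample.

```python
# Illustrative sketch of why automation is contractible: an SLA metric like
# availability can be computed mechanically, so compliance is verifiable.
# The threshold is a hypothetical contract term.
SLA_AVAILABILITY_TARGET = 0.999  # "three nines", an assumed target

def monthly_availability(outage_minutes: float, minutes_in_month: int = 43_200) -> float:
    """Fraction of a 30-day month (43,200 minutes) the service was up."""
    return 1.0 - outage_minutes / minutes_in_month

def sla_breached(outage_minutes: float) -> bool:
    return monthly_availability(outage_minutes) < SLA_AVAILABILITY_TARGET

print(sla_breached(10))   # 10 minutes of outage: within target, prints False
print(sla_breached(120))  # 2 hours of outage: breach, prints True
```

No analogous function can be written for "effort spent understanding client routines," which is the verifiability gap driving the multitask problem.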

The Multitask Agency Problem in SaaS

When disaggregating IT services, firms need to confront the costs incurred in designing technical interfaces across the client and the vendor, communicating tacit requirements,
and the difficulties in creating technical interfaces. We follow prior literature that has distinguished between task boundaries and knowledge boundaries [18, 63, 65].9 Automation tasks can be performed with minimal knowledge of the client organization and therefore can be easily detached; the client still retains the overall knowledge of how such automation tasks fit within its organizational landscape, while the provider has the knowledge internally to execute the task. However, provision of business analytics demands knowledge of the client organization, creating knowledge interdependencies across the client and the provider that result in tight coupling across providers and clients [36, 57], posing difficulties in establishing well-defined audit systems to record process failures.10 If the performance on both dimensions can be verified, the optimal choice follows predictions from agency theory [51]. A fixed price contract is a high-powered incentive scheme since the provider is made fully accountable for its cost savings.11 The difficulty in codifying or communicating the service requirements, the lack of a single standard interface resulting in tight coupling, the lack of knowledge sharing routines, and, finally, the measurement problems in process execution combine to make provision of business analytics observable, but not verifiable, in the parlance of the contracting literature [38, 51], which is in sharp contrast with that of verifying effort on the automation dimension. Because effort is substitutable on each dimension, strong incentives on a subset of tasks with verifiable performance measures lead to effort distortion where the agent has incentives to underperform on one service dimension and overdeliver on the other.12 When tasks are substitutes in the agent’s cost function and incentives are provided on easily measurable tasks [38], there is a crowding out of nonmeasurable activities. 
For instance, vendors could spend scarce resources on maintaining uptime versus learning client requirements. The emphasis placed on objective criteria, such as SLAs, on the automation dimension accentuates the distortion in incentives, leading to an incongruence of goals between the client and the provider [28]. Indeed, the failure of many early SaaS providers has been attributed largely to their inability to address the needs of end users [49] rather than to any failure to provide reliable or available service. Because CRM involves the provision of business analytics (in addition to automation tasks), we expect hosted CRM to be associated with time and materials contracts. By contrast, hosted IT services that primarily involve automation tasks (e.g., Web hosting, office productivity applications, etc.) should be governed by fixed price contracts. Thus, we hypothesize:

Hypothesis 1 (The Tasks and Incentives Hypothesis): Tasks that include business analytics are associated with low-powered incentives.
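The crowding-out argument behind Hypothesis 1 can be illustrated numerically. The sketch below is a stylized version of the Holmstrom-Milgrom setup with a quadratic, substitutable cost function; all parameter values are illustrative assumptions, not estimates from our data.

```python
import numpy as np

def agent_best_response(beta1, beta2, gamma=0.5, grid=None):
    """Agent picks efforts (e1, e2) to maximize pay minus cost.
    beta1 rewards the verifiable automation task, beta2 the analytics task.
    The cross term gamma * e1 * e2 makes the two efforts substitutes."""
    if grid is None:
        grid = np.linspace(0.0, 2.0, 201)
    e1, e2 = np.meshgrid(grid, grid, indexing="ij")
    cost = 0.5 * e1**2 + 0.5 * e2**2 + gamma * e1 * e2
    payoff = beta1 * e1 + beta2 * e2 - cost
    i, j = np.unravel_index(np.argmax(payoff), payoff.shape)
    return grid[i], grid[j]

# Analytics carries only a weak implicit reward (beta2 = 0.3, illustrative).
# Raising the explicit incentive on the verifiable task crowds out analytics:
for beta1 in (0.2, 0.6, 1.0):
    e1, e2 = agent_best_response(beta1, beta2=0.3)
    print(f"beta1={beta1:.1f}: automation effort={e1:.2f}, analytics effort={e2:.2f}")
```

In this sketch, raising the incentive on the verifiable dimension from 0.2 to 1.0 drives the agent's optimal analytics effort to zero, which is precisely the distortion that low-powered incentives avoid.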

Relational Norms and Provision of Business Analytics

Actors in exchange relationships are enmeshed in a pattern of social interactions [25]. When knowledge interdependencies exist across the client and the service provider, relational norms motivate greater interaction between clients and providers, leading parties to invest in joint learning and joint problem-solving arrangements [44]. By
increasing trusting behavior, relational norms can also decrease potential opportunism [25] and thereby lessen the problems of effort distortion and measurability associated with provision of business analytics. When disaggregating processes that rely on integrated problem solving, relational norms ensure commitment between the organizations and alleviate the multitask agency problem that is a result of knowledge interdependencies between organizations [8]. Thus, we hypothesize: Hypothesis 2 (The Relational Norms and Task Types Hypothesis): Business analytic tasks are associated with greater relational norms between the client and the service provider.

Modular Architecture and Interaction Across Providers and Clients

An alternate approach to achieving task disaggregation is through the modular architectures and standardized interfaces of SaaS. Interfaces are defined as "detailed descriptions of how the different modules will interact, including how they will fit together, connect, communicate" [11, p. 77]. Interfaces refer to the manner in which tasks are partitioned and how clients and providers interact with respect to task provision. Modular systems have components that are relatively independent of each other, with a standard interface that partitions information into visible design rules and hidden design parameters [56],13 enabling components to connect with little or no loss of performance [11]. Interfaces across modules specify the inputs and outputs of a component, making it possible to have clear task boundaries between the provider and the client. It is posited that only the visible pieces of information need to be communicated, while the hidden design parameters are encapsulated within the modules and thus are not communicated outside the boundaries of the module [11]. Modular architectures thus achieve information hiding, partitioning tasks so that the individual processes performed by the SaaS provider are not transparent but interaction between the provider and the client becomes structured and systematized [57]. The modularity of SaaS and the resultant process verifiability therefore permit easier outsourcing management [56, 58], making it possible to govern outsourcing arrangements through high-powered incentives. Thus, we hypothesize:

Hypothesis 3 (The Modular Architectures and Incentives Hypothesis): Modular architectures of services providers are associated with high-powered incentive contracts.

It is posited that product architectures drive the need for "frequent and appropriately structured task communication" [17, p. 358].
Knowledge interdependence imposes greater information processing and communication requirements across the client and the services provider. The intense volume of interaction and communication (across clients and providers) must be handled through appropriate boundary-spanning processes and metrics to assess the provider's effort. When such metrics and structured communication standards do not exist, the overlaps in knowledge make the quality of the provider's effort difficult to verify. Modularity theory posits that standardized interfaces enable a structured representation of interdependencies and reduce the need for close coordination, creating a predictable workload [11]. As a result, the provider's interaction with the client occurs through a standardized interface, and the client can designate performance specifications to be met by the provider [56]. Modular architectures thus enable a structured representation of interdependencies and streamlined interaction between providers and clients.14 This lowers the synergistic specificity of processes across clients and providers, thereby reducing the difficulty associated with knowledge transfer in business analytic capabilities. Thus, the increased communication load in managing knowledge interdependencies for business analytics is moderated by modularity. Therefore, we hypothesize:

Hypothesis 4 (The Modular Architectures, Task Types, and Incentives Hypothesis): Modularity of the process architecture moderates the relationship between the provision of business analytic tasks and the strength of incentives.

The research model in Figure 1 summarizes the theoretical arguments discussed above. Table 4 outlines how different process architectures deal with knowledge interdependencies.

Data Description and Analysis

Measure Development

Measures were generated based on discussions with managers in firms outsourcing to SaaS, evidence from the trade literature, and prior academic research.

Dependent Measures

• Contract type is a binary variable—that is, either fixed price or time and materials.

Independent Measures

• Business analytics provision: We coded a binary measure, which is 1 if the IT service of the SaaS provider is CRM, and 0 otherwise. Thus, IT services such as e‑fulfillment and credit card processing; productivity applications such as word processing and spreadsheet applications; financial, accounting, or human resource applications; and IT support, which are mainly automation related, are coded as 0. Note that CRM involves both automation and analytic dimensions (as shown in Tables 1 and 2) and is therefore characterized by knowledge interdependencies and difficulty in task verification.
• Relational norms: Relational norms were conceptualized in terms of communication and sharing of information, trust, dependence, and cooperation [54]. We therefore considered factors such as the cooperation and partnership with clients highlighted in the industry and trade press [61].


Figure 1. Research Model

Table 4. Service Type, Knowledge Interdependencies, and Types of Interfaces

Analytics
  Integral architecture (Quadrant I): Tightly coupled processes
  • Knowledge interdependencies
  • Interfaces specific to client
  • Intense client–provider interaction
  • Low-powered incentives
  Modular architecture (Quadrant II): Nearly decomposable processes
  • Some learning about the client
  • Task boundaries coincide with knowledge boundaries
  • High-powered incentives

Automation
  Integral architecture (Quadrant IV): Tightly coupled processes
  • Interfaces specific to client needs
  • Predictable but high volume of interaction
  • Standardized processes
  • High-powered incentives
  Modular architecture (Quadrant III): Loosely coupled processes
  • Predictable data flow between provider and client
  • Standardized processes
  • High-powered incentives

• Modular architecture: The source for this measure was archived Web pages as well as interviews with providers. The use of Web service standards enables a standardized way to define process flows, process execution, and standard systems interfaces [22], and decouples service interfaces from implementation and platform considerations [29]. Indeed, the customer specificity of interfaces and tight coupling has been posited to be a chief limitation of some of the early SaaS models [52]. With Web services, the application program interfaces will be less specific and more reusable, achieving the twin principles of standardized interfaces and task decomposition associated with modularity.
• Knowledge interdependencies: When the type of service outsourced cannot be easily separated or decoupled from what is going on inside the client organization, and when the service provided by the SaaS requires tacit knowledge of the client's organization [65], there must be rich interaction enabling an understanding of client-specific processes and routines, as well as boundary-spanning mechanisms. This measure is obtained by coding binary variables, based on the interview data, denoting that the provider needs to exert effort in obtaining an in-depth understanding of the tacit client context and needs on the following dimensions: (1) investments made by a provider in offering best practices for the client's service requirements, (2) investments by the provider in domain-specific knowledge relevant to a client's context, and (3) customization to client-specific needs.
• Verifiable process performance: It has been posited that a measure of performance should include a broad range of dimensions [53]. Our measures assess service delivery on a host of verifiable aspects of service, such as (1) system uptime [5], (2) implementation and service costs [45, 46], and (3) improved customer service provided by the SaaS [46].

Controls

• Service-level agreements: In line with prior studies that have measured desired service attributes [69], we measured the importance of SLAs between the ASP and the end-user organization on three dimensions—(1) tools to prove SLA achievement, (2) SLAs on network reliability [24], and (3) SLAs on customer response time [55]—as a proxy for the importance assigned to infrastructural dimensions.
• Contract duration is the length of time the provider agrees to provide service to the end-user organization [15].
In a longer-term horizon, an organization's demands from the service provider can be subject to uncertainty, as well as unpredictability in service interaction.
• Expectation of continued interaction is the anticipation that the relationship persists into the future [35, 42], measured by (1) the expectation of future partnership with clients and (2) investment in the future needs of clients. The promise of relationship continuity fosters incentive alignment, inducing the provider to work toward cooperative outcomes, such as exerting effort on difficult-to-verify dimensions of service.
• Completion time (Comp_Time) is defined as the actual months taken to deploy the SaaS. Typically, for projects where business analytics are not important, deployment can sometimes occur in less than a week. On the other hand, projects with some customization and integration can take three to five months [40].
• Uncertainty: Determinants of contract choices could be factors such as difficulty in specifying service scope and requirements that increase the difficulty of estimating costs. Measures from prior studies in software development [13] were adapted for the SaaS context, such as the problems in estimating (1) personalized service to end users [49], (2) compatibility with client applications [24], (3) scalability with future needs of client service requirements from end users [32], and (4) unexpected problems in deployment, leading to delays in service rollout and an uncertain implementation period [49].
• Decision maker: We controlled for the decision maker—that is, whether it is IT, senior management, or both who decide to outsource to an SaaS—which reflects how organizations trade off quality and costs and could affect the choice of contract [9, 12].
• Instrumental variables: Factors important in selecting the SaaS are potential candidates for instruments to remove the bias associated with endogenous matching. The competence of service personnel [69] is an important attribute in contracting for the service, so we measure a provider's partnership with key software vendors. Because the participation of business users is critical in selecting providers [68], we measure the principal client contact (i.e., IT versus business user) when the SaaS first markets to an organization. Likewise, a service provider's investments in capabilities to understand client requirements and its systems design and architectural capabilities are also important. Measures were developed based on discussions in the trade and industry literature. We measured the accreditation of the provider [45] and the ability to provide training to end users [6, 61].

Data Collection

Data were collected through a telephone and mail survey by a professional data collection organization using a survey instrument developed by us. The questionnaire was designed to measure characteristics of a representative service agreement undertaken by a provider of SaaS and was pretested on a sample of 23 providers. Those selected for the final survey were contacted by the research organization for telephone-based screening, to ensure that the respondent understood the survey questions. The profile of informants included executive (chief executive officer, vice president, chief information officer, etc.) or managerial positions, which is consistent with prior studies that recommend using key informants [50, 53]. We used the multiple-informant technique to ensure that the respondents were knowledgeable about the different aspects of the survey.15 As part of the final survey, we also conducted open-ended telephone-based interviews with the SaaS firms to gather details about their organizational practices. Our sample did not contain large corporations (e.g., Siebel, Oracle) that offer hosted IT services, because a large firm would be able to devote sufficient resources to multiple services, thereby contradicting assumptions of task substitutability.

To ensure data integrity, it is recommended that subjective measures be validated against quantitative performance [50]. Responses to the unstructured questions from the open-ended interviews were coded after the data collection was complete and cross-checked for consistency with the responses to the structured questions. Whenever possible, we checked for consistency of the survey responses with information in industry reports and a survey of end users. Out of 600 questionnaires administered, we had 154 usable responses. The data collection was done in two stages. The two-sample Kolmogorov–Smirnov Z‑test [3] found no bias between the responses. The questionnaire used for data collection is available from the authors on request. A confirmatory factor analysis showed that the t‑values for all the indicators exceeded the recommended value of 3.29 at a p‑level of 0.01, supporting convergent validity. The Shapiro–Wilk [59] test does not reject our assumption of normality at the 1 percent level.

We also collected data about the technological standards and history of each provider by using archived sources. Archived Web pages of firms were obtained from the Internet Archive (www.archive.org), which is a nonprofit group that captures snapshots of Web pages across the Internet at weekly intervals. We coded a variable for modular architecture—that is, whether the SaaS uses Web service–based protocols to create a technological interface. Patent is a dummy variable coded as 1 if the SaaS holds a patent. Alliance was coded depending on whether the SaaS had alliances with dominant technology vendors. The survey items are presented in Table 5. Summary statistics and correlations are presented in Table 6. Table 7 summarizes the hypotheses.
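The two-stage nonresponse check rests on the two-sample Kolmogorov–Smirnov statistic: the largest gap between the empirical distribution functions of the two response waves. A minimal sketch (the survey data are not public, so the samples below are purely illustrative):

```python
def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic D: the maximum absolute
    difference between the empirical CDFs of samples x and y."""
    def ecdf(sample, t):
        # Fraction of observations less than or equal to t.
        return sum(v <= t for v in sample) / len(sample)
    pooled = sorted(set(x) | set(y))
    return max(abs(ecdf(x, t) - ecdf(y, t)) for t in pooled)

# Illustrative Likert-style responses from two hypothetical survey waves.
wave1 = [3, 4, 4, 5, 5, 6, 7]
wave2 = [3, 4, 5, 5, 6, 6, 7]
d = ks_two_sample(wave1, wave2)  # small D is consistent with no wave bias
```

Identical samples yield D = 0 and fully disjoint samples yield D = 1; in practice D would be compared against the Kolmogorov critical value for the two sample sizes before concluding there is no response bias.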

Analysis and Results

Econometric Approach

The baseline logit estimation is

C = I{α0 + α1 PERF + α2 Comp_Time + α3 CRM + α4 KNOW + α5 SLA + ε > 0},  (1)

where C denotes the choice of contract, which is either fixed price or time and materials; PERF measures the provider's performance on verifiable dimensions and SLAs; Comp_Time denotes the project completion time (i.e., the time to deploy the SaaS service); CRM is a dummy variable (= 1 if the application is CRM); KNOW measures knowledge interdependencies; and SLA measures the SLAs. We also include appropriate controls.

In outsourcing IT services, potential determinants of observed contract choices are not only multitasking considerations but also factors such as the difficulty in estimating costs, specifying project scope, and the future stream of requirements [9, 12] that increase the uncertainty in service description ex ante. Factors that increase the difficulty of estimating costs and the future stream of requirements could also increase the need to adopt time and materials contracts. We therefore control for such factors in estimating the drivers of contract choice.

One challenge in estimation is that the governance of outsourced processes may not be independent of the processes outsourced. Typically the firm decides the scope of IT services handled by the agent as well as the type of contract. Services providers are heterogeneous in terms of client-specific industry knowledge, or their capacity to understand the client's business needs and translate it into successful service delivery. A client will decide to outsource critical applications only when it is satisfied with


Table 5. Survey Items

1. Project completion time: What is the time taken to fully implement a typical application once the sales cycle is complete? (i.e., time taken to implement and deploy the service)
   < 30 days / 1–6 months / 6 months to 1 year / 1–2 years / > 2 years
2. Duration (the term of the typical contract)
   < 1 year / 1–2 years / 2–3 years / 3–5 years / > 5 years
3. Decision maker: Individual in client organization primarily responsible for the decision to outsource
   IT/MIS decision makers / Senior business decision makers / Both IT and business decision makers
4. Instruments (binary variables)
   (Accreditation) My company is industry accredited: Yes / No
   (Client contact) When your company first markets to a new client, who is the principal target? (1) IT/MIS decision makers, (2) senior business decision makers, (3) both (items 2 and 3 were coded as 1)

The items below are measured on a Likert scale ranging from "strongly disagree" to "strongly agree."

5. Uncertainty: For a typical engagement, how difficult is it to estimate each of the following?
   Implementation period; Personalized service to end users; Scalability; Compatibility with standards in client organization
6. Verifiable performance
   Increased system uptime; Lower implementation and service costs; Improved customer service
7. Instruments (Likert scale)
   We provide all the training and information needed to allow every employee in a client company access to applications (Training)
   We partner with well-known application software vendors to acquire application-critical knowledge (Partnership)
8. Service-level agreements: How strongly do you agree or disagree that the following are important in a client engagement?
   Tools and tracking systems to prove SLA achievement; SLAs on network reliability and security; SLAs on application response time


Table 5. Continued

9. Relational norms
   Client company's senior management is willing to share workload and information with my company
   Client company's senior management thinks the use of my company's services is a good way to obtain IT functionality
   Client company's IT department is willing to share workload and information with my company
   We have strong partnerships with our clients
10. Expectations of continuity
   Our clients are willing to partner with us to meet their future application outsourcing needs
   We invest in future business application outsourcing needs of the client

the provider's demonstrated ability to manage service delivery.16 Econometrically, therefore, there is an unobservable process of matching between clients and providers, where user firms decide on the level of complexity of outsourced processes based on the abilities of the service provider, which is especially important given knowledge interdependencies across providers and clients. The recommended procedure for correcting for unobservable matching is to use instrumental variables estimation that explicitly recognizes the decision-making process of the client in estimating the application type [1]. Because the application type is discrete, we estimate a two-stage limited information maximum likelihood model. We first estimate a logit model of the form

CRM = I{β0 + β1 X + η > 0}.  (2)

In the above equation, X (the instrumental variables) is a set of characteristics important in SaaS selection, but not in the choice of a contract. CRM takes the value of 1 depending upon the values of X. As a check, a Wald F‑test conducted by including the instrumental variables along with the exogenous variables in the contract choice estimation [4] rejected the joint significance of the instrumental variables. In the second stage, the estimated value of CRM is used in the logit model

C = I{α0 + α1 PERF + α2 Comp_Time + α3 CRM + α4 KNOW + α5 SLA + ε > 0}.  (3)
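The two-stage procedure in equations (2) and (3) can be sketched as follows. The data are simulated placeholders (the survey data are not public), and a plain gradient-ascent logit stands in for the limited information maximum likelihood estimator actually used; variable names are illustrative.

```python
import math
import random

def fit_logit(X, y, steps=1500, lr=0.1):
    """Logit estimated by gradient ascent on the log-likelihood;
    an intercept is prepended and the coefficient vector returned."""
    Z = [[1.0] + row for row in X]
    k, n = len(Z[0]), len(Z)
    beta = [0.0] * k
    for _ in range(steps):
        grad = [0.0] * k
        for zi, yi in zip(Z, y):
            p = 1.0 / (1.0 + math.exp(-sum(b * z for b, z in zip(beta, zi))))
            for j in range(k):
                grad[j] += (yi - p) * zi[j]
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

def predict(beta, row):
    z = sum(b * v for b, v in zip(beta, [1.0] + row))
    return 1.0 / (1.0 + math.exp(-z))

random.seed(1)
n = 200
X_inst = [[random.gauss(0, 1)] for _ in range(n)]            # instruments X
crm = [1 if x[0] + random.gauss(0, 1) > 0 else 0 for x in X_inst]

# Stage 1 (equation (2)): logit of the CRM dummy on the instruments.
b_stage1 = fit_logit(X_inst, crm)
crm_hat = [predict(b_stage1, x) for x in X_inst]             # fitted Pr(CRM)

# Stage 2 (equation (3)): the fitted probability replaces the endogenous
# CRM dummy in the contract choice logit, alongside exogenous controls.
controls = [[random.gauss(0, 1)] for _ in range(n)]
contract = [1 if 1.5 * ch + 0.3 * c[0] + random.gauss(0, 1) > 0.75 else 0
            for ch, c in zip(crm_hat, controls)]
b_stage2 = fit_logit([[ch] + c for ch, c in zip(crm_hat, controls)], contract)
```

The design choice mirrors the text: instruments enter only the first stage, so the second-stage coefficient on the fitted CRM probability is purged of the endogenous matching between clients and providers.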

Discussion of Results

Table 8 presents the distribution of applications by contract type and project completion time. CRM applications are significantly more likely to have time and materials contracts (a 23 percent higher probability). Table 9 presents the estimates for application type. The contract estimations are presented in Table 10. The baseline estimation is presented in column 1 of Table 10. Column 2 presents the estimates corrected for endogeneity. Columns 3 and 4 include the interaction tests.

Table 6. Summary Statistics and Correlations (n = 154)

[Means, standard deviations, and the 24 × 24 correlation matrix are not recoverable from this extraction.]

Notes: Variable descriptions: 1 = CRM; 2 = Completion time; 3 = Duration; 4 = Uncertainty–implementation period; 5 = Uncertainty–service; 6 = Uncertainty–scalability; 7 = Uncertainty–compatibility; 8 = Verifiable performance–uptime; 9 = Verifiable performance–network; 10 = Verifiable performance–application response; 11 = Accreditation; 12 = Best practice; 13 = Domain knowledge; 14 = Alliance; 15 = Patent; 16 = Expectations of continuity–partnership with clients; 17 = Expectations of continuity–investing in future; 18 = Relational norms–sharing workload with senior management; 19 = Relational norms–perception of senior management; 20 = Relational norms–sharing workload with IT; 21 = Relational norms–partnerships with clients; 22 = SLA tools; 23 = SLA–application response; 24 = SLA–network. r > 0.10 denotes the 10 percent level of significance; r > 0.15 denotes the 5 percent level of significance.

Table 7. Summary of Hypotheses

H1: Association of tasks and incentives
  Variable: Business analytics
  Theoretical background: Multitask agency theory
  Hypothesized impact: Prevalence of low-powered incentives
  Supported: Yes

H2: Relational norms and service types
  Variable: Relational norms
  Theoretical background: Relational contracts, agency theory
  Hypothesized impact: Association with CRM services
  Supported: Partially supported

H3: Modular architecture and incentives
  Variable: Modular architecture
  Theoretical background: Modularity theory
  Hypothesized impact: Prevalence of high-powered incentives
  Supported: Yes

H4: Modular architecture, tasks, and incentives
  Variable: Interaction term between modular architecture and business analytics
  Theoretical background: Modularity theory, multitask agency theory
  Hypothesized impact: Modular architectures moderate relationship between analytics and contract choices
  Supported: Yes


Table 8. Distribution of Task Type and Contracts

Task type | Time and materials | Fixed price
CRM | 22 | 8
Others | 48 | 76
  Productivity applications, such as word processing, spreadsheets, and database management | 12 | 22
  Payroll applications, fulfillment, financial and accounting applications | 22 | 31
  IT support, hosting, Web, or e-mail applications | 14 | 23

Table 9. Matching Equations of Task Type

Independent variable | CRM (matching equation)
Intercept | –4.35 (2.04)***
Instruments
  Partnership with software vendors | 2.61 (0.51)***
  Client contact | 0.24 (0.16)*
  Accreditation | 0.11 (0.09)*
  Training | –0.17 (0.13)*
  Expectations of continuity–partnering with clients | 0.09 (0.14)
Other explanatory variables
  Expectations of continuity–invest in client needs | –0.28 (0.13)**
  Relational norms–sharing workload | –0.13 (0.10)*
  Relational norms–competence | 0.51 (0.20)***
  Relational norms–sharing workload with IT | –0.39 (0.19)***
  Relational norms–partnership | 0.45 (0.21)***
Association of predicted probabilities and observed responses | 83.2
Pseudo R2 | 0.42

Note: Numbers in parentheses are standard errors; p-values are based on two-tailed tests; *, **, and *** denote significance at the 10 percent, 5 percent, and 1 percent levels, respectively.

The estimates from stage 1 provide evidence of endogenous matching, suggesting that, all else being equal, clients can systematically sort out which providers are more capable of executing CRM. For example, accredited providers that have partnerships with software vendors and provide comprehensive training to users in client organizations are more likely to be entrusted with CRM. We also find that the client's perception of a provider's competence and partnership predicts the nature of tasks contracted out to the SaaS; that is, perceptions of professionalism [33] and the partnership orientation of the provider act as guarantors to elicit effort when output measurement problems exist.

Table 10. Contract Choice Estimation

Independent variable | (1) Baseline | (2) With instrumental variables | (3) Instrumental variables with interactions | (4) Instrumental variables with interactions and controls
Intercept | 0.84 (1.31) | 0.89 (1.5) | 1.02 (1.51) | 2.74 (1.70)*
CRM | 0.48 (0.30)** | 0.49 (0.31)** | 0.77 (0.36)** | 1.01 (0.50)**
Completion time | 0.55 (0.25)** | 0.56 (0.25)*** | 0.61 (0.25)*** | 0.63 (0.20)***
Modular architecture | — | –0.07 (0.30) | –2.86 (1.58)** | –2.88 (1.63)*
Modular * CRM | — | — | –1.98 (1.26)* | –2.05 (1.02)**
Duration | –0.25 (0.13)** | –0.18 (0.11)* | –0.21 (0.13)* | –0.23 (0.13)*
Decision maker | 0.00 (0.16) | –0.02 (0.17) | 0.01 (0.18) | 0.06 (0.18)
Uncertainty–scalability | –0.15 (0.11) | –0.09 (0.10) | –0.10 (0.10) | –0.06 (0.10)
Uncertainty–implementation period | 0.37 (0.11)*** | 0.39 (0.11)*** | 0.38 (0.11)*** | 0.34 (0.12)***
Uncertainty–service | –0.28 (0.12)*** | –0.22 (0.11)** | –0.23 (0.10)*** | –0.22 (0.10)**
Uncertainty–compatibility | 0.16 (0.10)* | 0.11 (0.17) | 0.11 (0.17) | 0.17 (0.16)
Verifiable performance–uptime | 0.21 (0.16)* | 0.23 (0.16)* | 0.28 (0.16)* | 0.32 (0.18)*
Verifiable performance–customer service | –0.23 (0.21) | –0.24 (0.15)*** | –0.22 (0.15)** | –0.25 (0.14)*
Verifiable performance–implementation cost | –0.14 (0.16) | –0.15 (0.16) | –0.17 (0.17) | –0.24 (0.17)*
Patent | — | — | — | 0.16 (0.40)
Alliances | — | — | — | –0.67 (0.50)*
SLA tools | — | — | — | –0.34 (0.18)**
SLA–application response | — | — | — | –0.33 (0.20)*
SLA–network | — | — | — | –0.13 (0.09)*
Knowledge interdependencies–customization | — | — | — | 0.08 (0.20)
Knowledge interdependencies–best practices | — | — | — | –0.15 (0.10)*
Knowledge interdependencies–domain-specific knowledge | — | — | — | 0.37 (0.19)**
Association1 | 67.9 | 69.9 | 71.6 | 79.6
Pseudo R2 | 0.19 | 0.21 | 0.23 | 0.31

Notes: Numbers in parentheses signify standard errors. A positive coefficient indicates a higher probability of time and materials contracts. 1 Denotes association between observed responses and predicted probabilities. Coefficients shown are based on two-tailed tests (*** 1 percent level of significance; ** 5 percent level of significance; * 10 percent level of significance).


Columns 1 through 4 in Table 10 show that CRM services, which demand the provision of business analytics, are significantly associated with the probability of time and materials contracts, supporting the Tasks and Incentives Hypothesis (H1). This confirms the relationship depicted in quadrant I of Table 4. The marginal effects analysis shows that with other explanatory variables held at the sample mean, CRM services are twice as likely to be associated with time and materials contracts than with fixed prices. By contrast, for tasks that mainly involve automation, contracts are three times more likely to be fixed price than time and materials (consistent with quadrant IV in Table 4). Low-powered incentives are more likely with knowledge interdependencies, due to the lack of a clear demarcation between client and provider roles and need for tacit knowledge of the client context. Two of the verifiable performance factors (lower implementation and service costs and improved service) are associated with fixed price contracts, as predicted by our theoretical discussions. Along similar lines, SLAs related to tools and tracking, network performance, and application response times favor fixed price contracts. We find mixed support for the role of relational norms. The partnership between the client and the provider, and the confidence of the client in the provider’s competence, are significantly associated with CRM in Table 9. We thus find partial support for the Relational Norms and Task Types Hypothesis (H2). This is possibly because norms of workload sharing are negatively associated with CRM, maybe because cooperative behavior by the provider may not entirely dispel fears of opportunism at a later date. The overall market being relatively new, the interaction between a provider and its client may not yet be characterized by the deep relationships that lead to cooperation [25]. 
Because the provision of business analytics calls for managing the knowledge interdependencies between organizations (as represented in the task structure matrix in Table 2), relational norms can offset low-powered incentives through implicit rewards or sanctions [10] and through norms of solidarity and trust [42], which assuages concerns about potential opportunism and mitigates effort distortion under low-powered incentives. We note that Web service standards favor high-powered incentives (Table 9), thus finding support for the Modular Architectures and Incentives Hypothesis (H3). Web service standards are designed in a manner so as to encapsulate the functions of components and employ a standard interface that allows components to connect, interact, or exchange with no loss of performance [56, 58]. When the architecture of the SaaS is modular, contracts are eight times more likely to be fixed price contracts, consistent with quadrant III in Table 4. Modular architectures streamline the communication between providers and clients, lessening the knowledge interdependencies between them. Loose coupling provides the flexibility to easily create new and potentially valuable configurations that are inherent in a broader array of inputs [57]. Firms can specialize in a few key activities while accessing other activities through relationships with outside parties. Modular architecture also offers the benefit of reusing components for the services provider. We also obtain support for the moderating effect of modular architectures in the Modular Architectures, Task Types, and Incentives Hypothesis (H4). The interaction
term between modular architecture and application type in Table 10 is significantly associated with high-powered incentive contracts. Modularization mitigates the multitask agency problem by minimizing interdependencies across providers and clients. Interactions between client-side applications and the systems hosted by the SaaS become streamlined and systematized through open standards that decouple applications from the interactions between them [21], such as data exchange between the client and the provider. The streamlined interaction enabled through standardized interfaces shifts the relationship between the client and the provider from a relational model (quadrant I in Table 4) to a loosely coupled model with lower knowledge interdependencies (quadrant II in Table 4). Through the use of standards that enable structured interfaces, and by making the provider solely responsible for its performance, modular architectures ease the challenges of coordination and communication across firms, thereby reducing agency costs. Modular architectures achieve a separation of roles and tasks across the client and provider, demarcating task boundaries in accordance with knowledge boundaries [63], and thereby realizing the task separation prescription of Holmstrom and Milgrom [38].
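The decoupling argument can be made concrete with a toy sketch. The class and method names below are our invention for illustration only (they are not part of any Web services standard): the client codes against a standardized interface, so the knowledge interdependency is reduced to the data exchanged across that interface, and one provider can be swapped for another without changes on the client side.

```python
from abc import ABC, abstractmethod

class AnalyticsService(ABC):
    """A standardized interface: the client depends on this contract,
    not on any provider's internal implementation (loose coupling)."""

    @abstractmethod
    def score_leads(self, transactions):
        """Return one score in [0, 1] per transaction."""

class SaaSProvider(AnalyticsService):
    """One interchangeable implementation of the interface."""

    def score_leads(self, transactions):
        # Toy scoring rule standing in for proprietary analytics:
        # larger transactions score higher, capped at 1.0.
        return [min(1.0, t.get("amount", 0) / 1000.0) for t in transactions]

def client_pipeline(service, transactions):
    """Client-side code sees only the interface; the provider's role is
    reduced to supplying scores across a well-defined boundary."""
    scores = service.score_leads(transactions)
    return [t for t, s in zip(transactions, scores) if s > 0.5]

hot = client_pipeline(SaaSProvider(), [{"amount": 1200}, {"amount": 300}])
print(hot)  # only the high-value transaction crosses the 0.5 threshold
```

Because `client_pipeline` never touches `SaaSProvider` internals, replacing the provider requires only that the replacement honor the same interface, which is the sense in which standardized interfaces demarcate responsibilities and systematize the client-provider interaction.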

Managerial Implications

We examine the role of relational norms between providers and clients, as well as the impact of providers' modular architectures. Strategic partnerships enabled by relational norms could support low-powered incentive contract forms. The results therefore suggest two distinct business models for SaaS providers. Some IT services providers may adopt a partnership model, similar to that identified in quadrant I of Table 4. Alternatively, providers and clients could choose to govern IT service contracts through arm's-length relationships, as depicted in quadrant III of Table 4. Web services frameworks envision a modular organization in which loosely coupled processes are linked through standardized application program interfaces. Modularization could then imply that providers may have to redefine roles and responsibilities for the outsourcing of IT services from a partnership model, where the provider identifies with the outsourcing organization, to a corporate model of stand-alone specialists [2]. The extent to which either model of service outsourcing is preferred depends on the degree of modularization and the extent of knowledge interdependencies between providers and clients. One benefit of modular architectures is the ease of recombination, which allows firms to gain easy access to the specialized capabilities of suppliers. A second benefit of modularization is that it lowers the risk of opportunism and makes organizational disaggregation easier. It needs to be recognized, however, that modularization requires a high degree of formalization and codification [66]. Both providers and clients therefore need to pay close attention to the appropriate task partitioning, task design, and task disaggregation that accompany modularization, particularly when knowledge interdependencies are involved.
In order to partition tasks as suggested in Table 2, clients need to build an understanding of the integral system [18]. It is
equally important to develop incentive systems that reward organizational actors for competence in knowledge across modules [63].

Conclusions and Future Research

The emergence of new models of IT services poses challenges to researchers. New models of IT services demand a richer understanding of the provision of incentives and the design of interorganizational processes to support exchange. In particular, knowledge interdependencies could make it difficult to verify business analytic capabilities. The emphasis placed by SLAs on verifiable performance accentuates the distortion in incentives faced by providers of SaaS. This paper finds that the boundaries of knowledge partitioning and the structure of interfaces dictate the choice of modular organizational forms in IT services. This paper also shows that modular SaaS architectures may partly mitigate the multitasking problem, increasing the desirability of high-powered incentives. Our study makes three contributions to the literature. First, we focus on the interactions within a subsystem of outsourced IT services and the consequent agency problems that arise in disaggregating IT services. Testing of agency theory and contracting issues has been considered challenging, because the predictions involve second-order effects rather than direct effects alone [19, 47]. We highlight the difficulty of interface specification and the nondecomposable nature of problem solving that are important in service-oriented enterprises and IT service innovations. The second contribution of this research is to highlight the agency costs that arise from knowledge interdependencies and from difficulties in standardization and codification. We demonstrate that low-powered incentives may be chosen to address the problem of effort distortion in settings where some of the outsourced tasks present verifiability challenges. The third contribution of this paper is to highlight the role of modular organizational design in the context of IT services.
Task decomposition along modules results in a lower need for communication and lower interdependence between providers and clients, thus systematizing their interaction. One limitation of this paper is that we do not distinguish between different types of verifiability and do not examine situations where there could be a divergence between process verifiability and performance verifiability. We also do not consider alternate types of contracts, such as contracting on outcomes, and how such performance measures could mitigate multitask agency problems. In practice, other incentive mechanisms, such as gain sharing or rewards tied to sales and marketing performance, could also be used to address the multitask agency problem. However, the difficulty in decomposing tasks and the resultant effort distortion may still persist. Multisourcing arrangements, which are becoming increasingly popular, could also exacerbate the problems of effort distortion. Task interdependencies could, in fact, give a services provider the opportunity to strategically shift costs to the customer or to another provider. Relational exchange could address the problems of effort distortion. However, as long as market exchange remains a relatively attractive fallback option, firms may renege on the implicit promises needed to sustain relational exchange [10]. Organizational
designers should be mindful of the difficulty of selective intervention [67]: replicating the useful features of market governance while avoiding the attendant inefficiencies. The emerging models of IT services outsourcing and the unbundling of corporate IT services offer a rich area for future research on contracting issues.

Acknowledgments: The authors thank the special issue guest editors, Indranil Bardhan, Haluk Demirkan, P.K. Kannan, and Rob Kauffman, and the anonymous reviewers for constructive remarks and suggestions. They also thank seminar participants at the Workshop on Information Systems and Economics, the University of Washington, Tulane University, Purdue University, the University of Rochester, the University of Pennsylvania, the International Conference on Information Systems 2008, and participants at the INFORMS 2009 cluster on Global Sourcing for comments on earlier drafts of this paper.

Notes

1. We refer to the IT services provider as the vendor or provider and the firm that outsources as the client.
2. For instance, using SaaS costs about one-fourth of the cost of onsite enterprise software [48].
3. Examples are identifying customers to "cold call," cross-selling opportunities, and customer service management.
4. On the other hand, compensating based on costs incurred is a low-powered incentive scheme whereby the services provider does not have an incentive to exert effort to reduce costs for the end users.
5. While our example refers to CRM, there are similar challenges in providing value-added services to clients.
6. Such interdependencies make it almost impossible to reward the SaaS on outcome-based performance measures, such as the number of leads or closed sales deals. Our interviews with practitioners and a survey of the trade and business press revealed that such output-based pricing schemes are extremely difficult to implement.
7. The service levels are defined for infrastructure availability, such as application response time [7], quality of service [24], and help desk response time [55].
8. Such standards enable a structured representation of business processes, which can then be translated into application development frameworks that are familiar to IT professionals.
9. Task boundaries demarcate who performs the task—what is performed by the client and what is performed by the provider—while knowledge partitions indicate who has the knowledge to perform the task [63].
10. With respect to the SaaS context, for instance, poor customer analytics leading to lost opportunities are difficult to detect and reduce the effectiveness of sales representatives.
11. In this case, the agent bears the residual risk, or the difference between the effort expended and the promised payments to the agent.
12. The performance distortion is that the provider can change the nature of its activities in response to the contract structure in a way that is privately beneficial to the agent but harmful to the principal.
13. Web services allow common standards in defining data, standardizing component interface specification, and creating shared meaning between the SaaS provider and the client [6, 21, 43, 52].
14. In the case of the task structure matrix in Table 2, the SaaS provider's role is redefined to specifying an interface to client-side activities, rather than client-centric processes, for which there needs to be close communication and coordination between providers and clients. For example, we can measure the amount of data from internal applications accessed per day by the provider (such as transactions per day or "line items" per day), which is not possible without a clearly defined interface that demarcates the responsibilities of the client and the provider.
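The effort and performance distortions described in notes 11 and 12 follow the Holmstrom–Milgrom incentive logic [38, 39]. As a hedged illustration, and not a model estimated in this paper, the textbook single-signal version with a CARA agent and a linear contract gives the optimal incentive intensity as follows:

```latex
% Linear contract on a single verifiable signal x of effort t
w = \alpha + \beta x, \qquad x = t + \varepsilon, \qquad \varepsilon \sim N(0,\sigma^{2})
% Optimal piece rate for a CARA agent with risk aversion r
% and convex effort cost C(t)
\beta^{*} = \frac{1}{1 + r\,\sigma^{2}\,C''(t)}
```

In the multitask extension of [38], when a second task is unverifiable and competes for the agent's effort, the optimal piece rate on the verifiable task is muted below this level, reaching zero when the two efforts are perfect substitutes; that muting is the rationale for the low-powered incentives discussed in the text.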
15. For instance, the person in the provider organization who handled the client contact might not have been aware of the data on project completion time, overruns, and so forth. In those cases, we asked for a referral to someone knowledgeable.
16. The client decides whether the provider should provide automation or analytics that require knowledge of the client organization. A provider with sophisticated capability in providing analytics delivers a solution to an organizational problem in addition to automation, while a relatively immature provider will only automate processes.

References

1. Ackerberg, D.A., and Botticini, M. Endogenous matching and the empirical determinants of contract form. Journal of Political Economy, 110, 3 (2002), 564–591.
2. Adams, S.M., and Zanzi, A. The consulting career in transition: From partnership to corporate. Career Development International, 10, 4 (August 2005), 325–338.
3. Anderson, T.W. An Introduction to Multivariate Statistical Analysis. New York: Wiley, 1958.
4. Angrist, J.D., and Krueger, A.B. Does compulsory school attendance affect schooling and earnings? Quarterly Journal of Economics, 106, 4 (November 1991), 979–1014.
5. Apicella, M. SLAs: Shaking hands is not enough. Infoworld, 23, 18 (April 2001), 49.
6. Applegate, L.M.; Austin, R.D.; and McFarlan, F.W. Corporate Information Strategy and Management, 7th ed. New York: Irwin/McGraw-Hill, 2006.
7. Applying the RFP Process to Application Service Providers. NetsEdge Research Group, Los Altos, CA, 2001 (available at www.irg-intl.com/pdf_files/asp_rfp.PDF).
8. Azoulay, P. Capturing knowledge within and across firm boundaries: Evidence from clinical development. American Economic Review, 94, 5 (December 2004), 1591–1612.
9. Bajari, P.L., and Tadelis, S. Incentives vs. transaction costs: A theory of procurement contracts. Rand Journal of Economics, 32, 3 (Autumn 2001), 287–307.
10. Baker, G.P.; Gibbons, R.; and Murphy, K.J. Relational contracts and the theory of the firm. Quarterly Journal of Economics, 117, 1 (February 2002), 39–83.
11. Baldwin, C.Y., and Clark, K.B. Design Rules: The Power of Modularity. Cambridge, MA: MIT Press, 2000.
12. Banerjee, A., and Duflo, E. Reputation and the limits to contracting. Quarterly Journal of Economics, 115, 3 (August 2000), 989–1017.
13. Barki, H.; Rivard, S.; and Talbot, J. Toward an assessment of software development risk. Journal of Management Information Systems, 10, 2 (Fall 1993), 203–225.
14. Bardhan, I.; Demirkan, H.; Kannan, P.K.; Kauffman, R.J.; and Sougstad, R. An interdisciplinary perspective on IT services management and service science. Journal of Management Information Systems, 26, 4 (Spring 2010), 13–64, this issue.
15. Bolton, R.N. A dynamic model of the duration of the customer's relationship with a continuous service provider: The role of satisfaction. Marketing Science, 17, 1 (January 1998), 45–65.
16. Bower, J.L., and Darwall, C. DoubleTwist, Inc. Case Study, Harvard Business School, Boston, 2003.
17. Brown, S.L., and Eisenhardt, K.M. Product development: Past research, present findings, and future directions. Academy of Management Review, 20, 2 (April 1995), 343–378.
18. Brusoni, S.; Prencipe, A.; and Pavitt, K. Knowledge specialization, organizational coupling and the boundaries of the firm: Why do firms know more than they make? Administrative Science Quarterly, 46, 4 (December 2001), 597–621.
19. Chiappori, P.A., and Salanie, B. Testing contract theory: A survey of some recent work. In M. Dewatripont, L. Hansen, and S. Turnovsky (eds.), Advances in Economics and Econometrics. Cambridge: Cambridge University Press, 2003.
20. Crowston, K., and Kammerer, E. Coordination and collective mind in software requirements development. IBM Systems Journal, 37, 2 (April 1998), 227–245.
21. Curbera, F.; Duftler, M.; Khalaf, R.; Nagy, W.; Mukhi, N.; and Weerawarana, S. Unraveling the Web services Web. IEEE Internet Computing, 6, 2 (March–April 2002), 86–93.
22. Davenport, T. The coming commoditization of processes. Harvard Business Review, 83, 6 (June 2005), 101–108.
23. Demirkan, H., and Cheng, H.K. The risk and information sharing of application services supply chain. European Journal of Operational Research, 187, 3 (June 2008), 765–784.
24. Drogseth, D. Managing the challenges of application services. Business Communication Review, 34, 6 (June 2004), 50–55.
25. Dyer, J.H., and Singh, H. The relational view: Cooperative strategy and sources of interorganizational competitive advantage. Academy of Management Review, 23, 4 (October 1998), 660–680.
26. Eisenhardt, K.M., and Tabrizi, B.N. Accelerating adaptive processes: Product innovation in the global computer industry. Administrative Science Quarterly, 40, 1 (March 1995), 84–110.
27. Faraj, S., and Sproull, L. Coordinating expertise in software development teams. Management Science, 46, 12 (December 2000), 1554–1568.
28. Feltham, G.A., and Xie, J. Performance measure congruity and diversity in multi-task principal/agent relations. Accounting Review, 69, 3 (July 1994), 429–453.
29. Ferris, C., and Farrell, J. What are Web services? Communications of the ACM, 46, 6 (June 2003), 31.
30. Gassmann, O., and Mikkola, J.H. Managing modularity of product architecture: Towards an integrated theory. IEEE Transactions on Engineering Management, 50, 2 (May 2003), 204–218.
31. Gomolski, B. Will your ASP bite the dust? Infoworld, 33, 7 (February 12, 2001), 78.
32. Goth, G. The next gold rush: Application service providers stake their claims in a red-hot market. IEEE Software, 17, 2 (March–April 2000), 96–99.
33. Granovetter, M., and Tilly, C. Inequality and labor processes. In N. Smelser (ed.), Handbook of Sociology. Beverly Hills, CA: Sage, 1988, pp. 175–221.
34. Hagel, J., and Singer, M. Unbundling the corporation. Harvard Business Review, 77, 2 (March–April 1999), 133–141.
35. Heide, J., and John, G. Alliances in industrial purchasing: The determinants of joint action in buyer–supplier relationships. Journal of Marketing Research, 27, 1 (February 1990), 24–36.
36. Henderson, R., and Clark, K.B. Architectural innovation: The reconfiguration of existing product technologies and failures of established firms. Administrative Science Quarterly, 35, 1 (March 1990), 9–30.
37. Hild, S.G.; Binding, C.; Bourges-Waldegg, D.; and Steenkeste, C. Application hosting for pervasive computing. IBM Systems Journal, 40, 1 (January 2001), 193–219.
38. Holmstrom, B., and Milgrom, P. Multitask principal agent analysis: Incentive contracts, asset ownership and job design. Journal of Law, Economics and Organization, 7, 2 (Special Issue 1991), 201–228.
39. Holmstrom, B., and Milgrom, P. The firm as an incentive system. American Economic Review, 84, 4 (September 1994), 972–991.
40. Hosted CRM Gains Momentum as Salesforce.com Paces the Market. White paper, Aberdeen Group, Boston, 2003.
41. Jacobides, M.G., and Winter, S.G. The co-evolution of capabilities and transaction costs: Explaining the institutional structure of production. Strategic Management Journal, 26, 5 (May 2005), 395–413.
42. Jap, S.D., and Anderson, E. Safeguarding interorganizational performance and continuity under ex post opportunism. Management Science, 49, 12 (December 2003), 1684–1701.
43. Jardim-Gonçalves, R., and Steiger-Garção, A. Implicit multilevel modeling in flexible business environments. Communications of the ACM, 45, 10 (October 2002), 53–57.
44. Kale, P.; Singh, H.; and Perlmutter, H. Learning and protection of proprietary assets in strategic alliances: Building relational capital. Strategic Management Journal, 21, 3 (March 2000), 217–237.
45. Kaplan, J. ASPs prepare to make comeback. Network World, 20, 33 (August 2003), 47.
46. Kendler, P.B. Gaining the outsourcing edge. Insurance & Technology, 29, 7 (June 2004), 43–44.
47. Masten, S.E. Modern evidence on the firm. American Economic Review, 92, 2 (May 2002), 428–432.
48. McDougall, P. Enhanced services and richer technology mean ASPs deserve a second look. InformationWeek (June 2003) (available at www.informationweek.com/story/showArticle.jhtml?articleID=10100611).
49. Mears, J. Net provider sold on ASP model. Network World, 18, 14 (April 2001), 29–30.
50. Miller, J.G., and Roth, A.V. A taxonomy of manufacturing strategies. Management Science, 40, 3 (March 1994), 285–304.
51. Mirrlees, J.A. The theory of moral hazard and unobservable behavior. Review of Economic Studies, 66, 1 (January 1999), 3–22.
52. Papazoglou, M.P. Web services and business transactions. World Wide Web, 6, 1 (March 2003), 49–91.
53. Poppo, L., and Zenger, T. Testing alternative theories of the firm: Transaction cost, knowledge-based and measurement explanations for make-or-buy decisions. Strategic Management Journal, 19, 9 (September 1998), 853–877.
54. Poppo, L., and Zenger, T. Do formal contracts and relational governance function as substitutes or complements? Strategic Management Journal, 23, 8 (August 2002), 707–725.
55. Recktenwald, J. Negotiating service level agreements with ASPs. TechRepublic.com (April 2000) (available at http://articles.techrepublic.com.com/5100-10878_11-1061324.html).
56. Sanchez, R., and Mahoney, J.T. Modularity, flexibility, and knowledge management in product and organization design. Strategic Management Journal, 17 (Winter 1996), 63–76.
57. Schilling, M.A. Towards a general modular systems theory and its application to interfirm product modularity. Academy of Management Review, 25, 2 (April 2000), 312–334.
58. Schilling, M.A., and Steensma, H.K. The use of modular organizational forms: An industry-level analysis. Academy of Management Journal, 44, 6 (December 2001), 1149–1168.
59. Shapiro, S.S., and Wilk, M.B. An analysis of variance test for normality (complete samples). Biometrika, 52, 3–4 (December 1965), 591–611.
60. Simon, H.A. The architecture of complexity. Proceedings of the American Philosophical Society, 106, 6 (December 1962), 467–482.
61. Software as a service. White paper, Software and Information Industry Association (SIIA), Washington, DC, 2004.
62. Sosa, M.E.; Eppinger, S.D.; and Rowles, C. The misalignment of product architecture and organizational structure in complex product development. Management Science, 50, 12 (December 2004), 1674–1689.
63. Takeishi, A. Knowledge partitioning in the inter-firm division of labor: The case of automotive product development. Organization Science, 13, 3 (May–June 2002), 321–338.
64. Upton, D.M., and Fuller, V. Wipro Technologies: The factory model. Harvard Business School Case Study 606-021, Boston, 2005.
65. Von Hippel, E. Sticky information and the locus of problem solving: Implications for innovation. Management Science, 40, 4 (April 1994), 429–439.
66. Worren, N.; Moore, K.; and Cardona, P. Modularity, strategic flexibility, and firm performance: A study of the home appliance industry. Strategic Management Journal, 23, 12 (December 2002), 1123–1140.
67. Williamson, O. The Economic Institutions of Capitalism. New York: Free Press, 1985.
68. Yager, T. Avoid ASP imprisonment. Infoworld, 22, 43 (October 2000), 55–58.
69. Zeithaml, V.A.; Berry, L.L.; and Parasuraman, A. The nature and determinants of customer expectations of service. Journal of the Academy of Marketing Science, 21, 1 (December 1993), 1–12.
