Organization Science
Vol. 15, No. 1, January–February 2004, pp. 98–118
ISSN 1047-7039 | EISSN 1526-5455 | DOI 10.1287/orsc.1030.0058
© 2004 INFORMS
On the Relationship Between Organizational Complexity and Organizational Structuration

Mihnea C. Moldoveanu
Rotman School of Management, University of Toronto, 105 St. George Street, Rotman Centre 555, Toronto, Ontario, Canada M5S 3E6,
[email protected]
Robert M. Bauer
Institut fuer Unternehmensfuehrung, Johannes Kepler Universitaet Linz, Altenbergerstrasse 69, A-4040 Linz, Austria,
[email protected]
This article represents a contribution to the conceptualization of organizational complexity. The first part of the article relates the concept of complexity to the production tasks of the organization by deriving measures of the complexity of production and planning tasks within the organization. This move allows us to analyze organizational activities in terms of the computational complexity of the tasks that the organization carries out. Drawing on concepts from theoretical computer science, the article introduces a taxonomy of production tasks based on their computational complexity and shows how to use the notion of computational complexity to analyze organizational phenomena such as vertical integration and disintegration, the choice between markets and organizations as performers of particular production tasks, and the internal partitioning of organizational tasks and activities. The article then relates the complexity of the production function of the organization to the ways in which organizations structure themselves. It attempts to bring theorizing about organizational behavior based on complexity theory closer to the conceptual realm of "mainstream" organization theory and to make the concepts of complexity theory more useful to empirical examinations of firm dynamics and organizational behavior.

Key words: organizational complexity; structuration; complexity-coping strategies; organizational production functions
1. Introduction
The application of concepts from complexity theory to the interpretation and understanding of organizational phenomena has made significant progress, as evidenced by the rich and textured models put forth in the Organization Science Special Issue on the Application of Complexity Theory to Organization Science (1999). There seems to exist widespread agreement that concepts from complexity theory have new insights to offer to those who are interested in model-based theorizing about organizational phenomena (Anderson 1999), even though articulating sharp formulations of these insights remains challenging (Sterman and Wittenberg 1999). The metatheoretical commentary, however (Anderson 1999, Cohen 1999), points out that there is at present no underlying "theory" of organizational complexity, even though research on complex organizational processes seems to have made some progress towards providing us with new concepts and interpretative lenses for the analysis of organizational phenomena. There are also difficulties with the "straight application" of concepts from the theory of complex systems—such as Kauffman's NK(C) model (Kauffman 1993)—to the understanding of organizational phenomena, highlighting the need for conceptual work aimed at operationalizing the concepts and theories that organization scholars have appropriated from other disciplines (McKelvey 1999). Finally, there are queries about the testability of the models that emerge from the application of complexity theory to organizational issues (Cohen 1999) that seem to be related to the operationalization problems that McKelvey discusses: How is organizational complexity related to stylized facts relating to the vertical integration and disintegration of productive tasks? To organizational search and exploration processes? To the structure and dynamics of value-linked activity chains at the industry level?

The present article attempts to provide these conceptual bridges by accomplishing four tasks. First, it introduces a notion of organizational complexity that is based on the computational difficulty of simulating the activities and tasks of the organization. Second, it partitions organizational production tasks into "easy," "hard," and "intractable" classes and shows how this distinction can be used to make sense of organizational phenomena. Third, it modulates the empirical focus of "organizational complexity" studies by (a) expanding the applicability of tractability analysis (Garey and Johnson 1979) from manufacturing and logistics tasks to a whole set of algorithmically reducible phenomena and (b) focusing "organizational complexity" studies on production functions comprising critically linked tasks. Finally, it shows how complexity-coping strategies developed by algorithm designers for tackling intractable problems can be deployed to understand ways in which organizations structure their tasks and task-planning processes.
Our analysis builds upon and contributes to the research tradition that seeks to understand firms as chains of routines (Nelson and Winter 1982, Becker 2003) or value-linked activity sets (Knott and McKelvey 1999, McKelvey 1999). In this tradition, reliably repeatable sets of observable organizational behaviors are put forth as the basic units of analysis for understanding both organizational stability and organizational learning and change (Nelson and Winter 1982, Levitt and March 1988). Much work on organizational routines has focused on operationalizing the concept and examining the formation, propagation, and disintegration of routines and routine sets (Becker 2003), with comparatively little attention given to the structure of routines, and in particular to their complexity and its effects on their survivability. We consider, in this work, the algorithmic properties of firm-level activity sets, and in particular examine routines qua canonical algorithms for solving well-defined problems. Drawing on the theoretical computer science literature (Garey and Johnson 1979, Cormen et al. 1993), we distinguish among activity sets that can be described by "simple" and "difficult" algorithms, and between "difficult" and "impossible" algorithms, and show that this distinction has direct implications for the dynamics of organizations, which for the most part seek to organize around computationally "simple" activity sets (rather than "hard" ones) and systematically avoid computationally "impossible" tasks.

Of particular relevance to the analysis in this paper is the analysis of organizational activity sets using the language of coevolving entities, modelled as Boolean networks (McKelvey 1999, Rivkin 2000) with N binary, mutually influencing elements, each of which is, on average, connected to K other such elements (hence, "NK" models). NK models model activity sets as causally linked chains of "simple" events, or events caused by conformity of simple elements to simple, deterministic rules (Kauffman 1993). Their dynamic evolution is periodic, with periodicity determined by the number of interacting elements and the average number of network links per element. They are a species of the broader genus of cellular automata (Wolfram 2002) and hence one of a set of possible models for a universal computational device (Boolos and Jeffrey 1993, Wolfram 2002). Modelling organizations as NK networks amounts to modelling them as computational devices of a particular kind. What our analysis adds to this description of firms is an analysis of the algorithms that run on them in terms of their computational complexity. We show that different organizational activity sets can be modelled by algorithms in different complexity classes, which have different implications for patterns of organizational planning and action. They model organizational production tasks (broadly conceptualized to include the production of "plans" and "self-knowledge" as well as
the production of goods, services, and the logistical support apparatus for them) in a way that is not specific to any one embodiment of a computational device (such as a Boolean network), as canonical algorithms can run on any computational device. Moreover, our analysis of complexity regimes allows us to make predictions about the structuration of organizational tasks and activity sets as a function not only of the previous state of the organization (as is the case with strictly deterministic models such as those of Kauffman 1993), but also of the impending or resulting complexity regime of the organizational activity set. Computational complexity is not, in our analysis, only a dependent variable that emerges from a set of activities, but a strategic decision variable that influences the planning of these activities as well.

We start by giving a representation of the firm's activity set (or production function) that explicitly incorporates the complexity of the firm's production task into its representation. We then derive explicit complexity measures for the algorithms corresponding to different common production tasks, and show how our complexity measure can be operationalized. We show how the ideas of complexity and criticality can be incorporated into the representation of the production task of the organization, using the O-ring production function first introduced in operations research, and more recently (Kremer 1993) used to analyze the optimal schedule of wages in an economy making products in which even small errors are costly to the firm. Next we show how the computational complexity of the production tasks of the organization can be measured, by providing analytical measures of the complexity of common production tasks such as resource allocation, software production, debugging, and strategic planning. We show that production tasks or activity sets can be classified into three different classes according to their complexity measures. We operationalize the computational analysis of organizational production tasks and show how the computational "design" lens can be used to understand the ways in which organizations approach difficult or intractable activity sets. Thus, we extend the application of tractability analysis from problems of scheduling and production (such as the maximally efficient allocation of n tasks to m machines) (Garey and Johnson 1979, Benjaafar and Sheikhzadeh 1997) to a broader class of organizational tasks (such as strategic planning, software design, debugging, and rule-based organizational transformation). Thus, "what the firm does" can be conceptualized using the language of computational complexity theory.

Our analysis is to be distinguished from those based on entropy-like measures of the complexity of manufacturing operations (Deshmukh et al. 1998), which seek to relate expected manufacturing lead times to an index of the structural complexity (number of parts/processes/links/interactions) of the manufacturing
line. Although there are direct links between our representation and entropy-based representations of complexity, there is also an important difference: Our approach stresses a functional (rather than structural) representation of the complexity of organizational phenomena. In our work, simple-to-describe structures can give rise to complex-to-understand patterns, in the same way in which a structurally trivial phenomenon (such as a double pendulum) can give rise to very complex temporal patterns (chaos).
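To make the point concrete, the following sketch (ours, not the article's) uses the one-line logistic map as a computational stand-in for the double pendulum: a structurally trivial, deterministic update rule whose temporal behavior is nevertheless complex and sensitive to initial conditions.

```python
# Illustrative stand-in (not from the article): the rule x -> r*x*(1-x) is
# structurally trivial, yet for r near 4 it produces aperiodic, initial-
# condition-sensitive trajectories, echoing the double-pendulum point above.
def logistic_trajectory(r, x0, steps):
    x, out = x0, []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(round(x, 4))
    return out

if __name__ == "__main__":
    print(logistic_trajectory(3.9, 0.2, 12))     # complex temporal pattern
    print(logistic_trajectory(3.9, 0.2001, 12))  # a nearby start soon diverges
```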
2. Complexity, Criticality, and the O-Ring Production Function
Motivated by the realization that classical production functions in economic theory do not adequately capture the importance of task complexity to the production function of the firm, Michael Kremer (1993) proposed the use of an "O-ring production function" that specifically takes into account the dependence of the production cost of the firm on the probability that a particular step in the production process is misperformed, and on the total number of steps in the production process. The O-ring production function draws its name from the process by which a space shuttle engine is assembled, which has the property that errors introduced into the process by any individual contributor or worker (an O-ring is a small part of an engine) can dramatically influence the quality of the whole product. The space shuttle Challenger exploded a short time after takeoff on January 28, 1986, killing all of its crew members. The cause of the crash was traced to a malfunction of the solid rocket booster, due to a failure of one of the O-rings holding the booster together: a "small" error that caused the breakdown of a "large" system because its effects propagated, catastrophically, through the larger system of which it was a part.

The two important variables in Kremer's (1993) analysis are the probability q that a task will be carried out perfectly and the number N of different and critically linked steps that together make up a production task. Task steps that are critically linked are those that must be successfully completed in sequence in order for the output of the production task that they model to be realized. In the case of the space shuttle, the design of the O-ring, the design of the thruster, and the sequence of decision steps leading to the launch decision are critically interdependent series of tasks.
Table 1    Complexity and Criticality of Different Examples of Production Tasks

                | High Criticality | Low Criticality
High complexity | De novo software design, software testing, semiconductor chip design, neurosurgical operations, aircraft design, aircraft testing | Graduate education
Low complexity  | Taxi driving, automotive repair | Hair dressing, clerical functions
Denoting output per worker by Y, the expected output of the firm by E, capital by K, and the productivity of capital by $\alpha$, the O-ring production function is given by

$$E(Y \mid K) = K^{\alpha}\Bigl(\prod_{i=1}^{N} q_i\Bigr) N Y$$

and describes the expected output of the organization whose production tasks are modelled by the O-ring production function. The cost of less-than-perfect performance will be given by

$$C_P = K^{\alpha}\Bigl(1 - \prod_{i=1}^{N} q_i\Bigr) N Y.$$

For a given number of critically interdependent steps (holding N constant), the higher the probability of error (p = 1 − q) on any given production task, the higher the production cost of the organization as a whole. Varying N allows us to concentrate on the effects of increasing task complexity, corresponding to an increasing number of critically linked steps.

Production tasks with high levels of complexity (high N) and critically interlinked steps (see Table 1) are common in high-technology organizations such as those building software, designing and building aircraft, testing and flying aircraft, performing surgical procedures, designing and building application-specific integrated circuits (ASICs), or producing strategic plans for an organization in a quickly growing or shrinking market. The production tasks associated with undergraduate or graduate education, by contrast, may have high levels of sophistication, but are error tolerant in the sense that single lapses or shortcomings in teaching methods can be easily "made up for" by another instructor or by the perseverance of the student. There are also organizations in which individual task performance is critical to the output of the organization as a whole, but the production task itself is not complicated. These include trades such as taxi or bus driving and automobile repair. There are organizations (hair-dressing shops, convenience stores) where individual task performance is neither critical to the output of the organization nor very complex. By contrast, the slip of the knife of a neurosurgeon, or a miscalculation on the part of a strategist in recommending capacity expansion in a volatile industry with a nonlinear demand curve, can make the difference between years and days of recovery in one case, and a stellar or mediocre quarterly report in the other. Thus, it is possible to classify organizational tasks in terms of their complexity and criticality.
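As a minimal illustration of the O-ring production function discussed above (our sketch, not part of the original article), the code below computes expected output and the cost of imperfect performance. The numerical values, and the assumption that the productivity-of-capital parameter alpha enters as an exponent on K, are illustrative.

```python
# Minimal sketch of the O-ring production function discussed above.
# Symbols follow the text: K = capital, alpha = productivity of capital
# (assumed here to enter as an exponent on K), q = per-step success
# probabilities, N = number of critically linked steps, Y = output per worker.
from math import prod

def expected_output(K, alpha, q, Y):
    """E(Y | K) = K^alpha * (product of q_i) * N * Y."""
    N = len(q)
    return (K ** alpha) * prod(q) * N * Y

def cost_of_imperfection(K, alpha, q, Y):
    """C_P = K^alpha * (1 - product of q_i) * N * Y."""
    N = len(q)
    return (K ** alpha) * (1.0 - prod(q)) * N * Y

if __name__ == "__main__":
    K, alpha, Y = 100.0, 0.5, 1.0          # illustrative assumptions
    # Holding per-step reliability fixed at q = 0.99, expected output erodes
    # and the cost of imperfection grows as N (task complexity) increases.
    for N in (5, 50, 500):
        q = [0.99] * N
        print(N, round(expected_output(K, alpha, q, Y), 2),
              round(cost_of_imperfection(K, alpha, q, Y), 2))
```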
3. Computational Complexity and the Algorithmic Description of Production Tasks
Kremer (1993) concentrates on the effects of the value of q on the equilibrium wages in an economy of firms described by O-ring production functions. We focus
instead on providing measures for N, the number of critically linked steps in a production task that the organization must execute correctly to deliver its product or service to its clients. We will therefore associate N, the number of such tasks, with $K_a$, the computational complexity of the production task of the organization taken as a whole, and will stipulate that N is a monotonically increasing function of $K_a$. Thus, by building computational models of various production tasks and using tools from theoretical computer science to measure their complexity, we will be able to provide measures for N. The definition of task complexity enables generalization to tasks and functions that have not to date been considered as part of the production function of the firm, such as communication between members of the organization or the successful production of ex post rationalizations of firm activities that are critical to the public image and subsequent profitability of the firm.
3.1. Production Functions in Functional and Algorithmic Form
The organization can be seen as a group of people that together resolve the problem of combining and modifying a set of inputs to produce outputs. The organization resolves this problem through the coordination of the individual efforts of its members in the face of perceived market opportunities and resource constraints. Both opportunities and resource constraints on one hand, and the ways in which individuals cooperate and coordinate with one another on the other hand, can be modelled by algorithmic structures: information (the inputs: opportunities and constraints) and an algorithm (a computation) that converges to (stops when it reaches) a set of numbers, quantities, or answers (the outputs). Denoting the M outputs by the vector $Y = \{Y_k\}_{k=1}^{M}$, the N inputs by the vector $X = \{X_k\}_{k=1}^{N}$, and representing the production function of the firm by a generalized (matrix) function F that maps the inputs to the outputs, we can describe the input-output relation of the firm by the equation

$$Y = F(X). \qquad (1)$$

The general relation (1) does not fully specify the nature of the production function of the organization. It specifies only the ways in which the inputs must be modified and combined with each other to obtain the outputs of the firm. A complete specification of the production function must also contain the set of steps by which the firm generates its outputs, or, equivalently, the set of steps by which the function F is computed.

A simple example will illustrate the difference between a functional representation and an algorithmic representation. Suppose that the organization linearly weights and then combines a set of inputs to derive its outputs. This matrix multiplication could model the task of mapping N different products in inventory to
the orders of M different customers according to their orders from the firm's catalogue of its inventory. Y, the M-component output vector, represents the set of product bundles associated with each customer. X, the N-component input vector, represents the availability of each individual product in inventory at a particular point in time. The production task associated with this simple inventory-to-demand matching process can be described by the matrix multiplication

$$Y = AX, \qquad (2)$$
where A is an M × N matrix with nonnegative entries. The matrix multiplication (2) can be performed in several ways. The inputs can be weighted simultaneously or in sequence, and they can be combined in small groups or all together. The set of steps by which the function (2) is computed (by which the outputs Y are synthesized) is not unique to a given A. This means that the algorithm by which (2) is computed is itself not unique, and must be specified if we wish to fully describe the production function of the firm. Whereas the specification of (2) does not imply—or fully determine—the algorithm by which it is computed, a specification of the algorithm itself fully specifies the function it is intended to compute. Briefly put, specifying the function tells us what the economic organization does. Specifying the algorithm tells us how the economic organization does it.

Algorithms and the production functions that they model can be classified according to their computational or algorithmic-time complexity, $K_a$, or the number of distinct operations or steps required for their convergence to an answer. More complex algorithms require a greater number of steps than less complex ones, and therefore a computational device will take longer to process the former than the latter. Theoretical computer scientists use three different complexity classes to distinguish between problems on the basis of the computational difficulty of the algorithms required to produce answers or solutions for them (Garey and Johnson 1979, Cormen et al. 1993).

Polynomial-time algorithms—corresponding to polynomial-time-hard, or P-hard, problems—are algorithms that require, to get to a solution, a number of steps that is no greater than a polynomial function of the number of inputs, m, to the algorithm. That is, for P-hard problems, we have, as a computational complexity measure, $K_a^P(m) \le L(m^k)$ for some k, where L is a linear operator (or an operator that generates linear combinations of the argument $m^k$), and $K_a^P$ is the P-class complexity measure. Examples of production tasks that can be modelled by P-hard problems include sorting and classification, inventory planning, and most manufacturing processes involving the combination of inputs in fixed proportions, as we shall see in the next section.

Non-polynomial-time algorithms—corresponding to non-polynomial-time-hard, or NP-hard, problems—are
algorithms that require a number of steps to get to a solution that is greater than any polynomial function of the total number of inputs. That is, $K_a^{NP}(m) > L(m^k)$ for any k and L, where $K_a^{NP}$ is the NP-class complexity measure. An example of a greater-than-any-polynomial function of its argument is an exponential function, which we shall use throughout the paper as an approximation to the complexity of NP-complex algorithms. Examples of tasks that can be modelled by NP-hard problems include de novo software creation and interface writing among software modules, game-theoretic analysis using iterated dominance reasoning, and strategic analysis of industries and firms based on inference to the best explanation.

Finally, undecidable problems are those problems having a provably infinite computational complexity, in the sense that it is provably true that no finite algorithm will converge to an answer in a finite amount of time. That is, $K_a^{U}(m) = \infty$ for any m, where $K_a^{U}$ is the U-class complexity measure. Examples of such problems include most forms of moral deliberation (for instance, how one would trade off between fairness and efficiency considerations in a reasoned debate in which efficiency supporters only see fairness as a way of furthering efficiency, and vice versa), epistemic deliberation (for instance, deciding on the grounds for adopting one well-corroborated theory over another, given epistemological differences in the ways in which "data" and "theory" were construed by proponents of each), as well as the problem of making predictions about reflexively reflexive social systems (predicting market equilibrium price vectors when the information of various traders is bought and sold on the basis of information that is itself bought and sold on the same market).
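A brief illustrative sketch (not from the article) of what these complexity measures mean in practice: it counts elementary steps for a representative P-class task (sorting, of order m log2 m) and a representative NP-class task (exhaustive subset checking, of order 2^m) as the number of inputs m grows.

```python
# Illustrative step counts for the complexity classes defined above. "Steps"
# are counted with simplified stand-in formulas, not the article's own data.
import math

def sort_step_count(m):
    # A P-class task: comparison sorting needs on the order of m * log2(m) steps.
    return int(m * math.log2(m)) if m > 1 else 1

def subset_check_step_count(m):
    # An NP-class task: checking every nonempty subset of m rules for mutual
    # consistency requires 2^m - 1 subset examinations.
    return 2 ** m - 1

if __name__ == "__main__":
    for m in (10, 20, 40, 80):
        print(f"m={m:3d}  P-class (sorting): {sort_step_count(m):>12,d}  "
              f"NP-class (subset check): {subset_check_step_count(m):,d}")
```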
3.2. Complexity Classes: An Algorithmic Theory of Production Tasks
We now apply the ideas of algorithmic complexity and the taxonomy of algorithms introduced above to the representation and analysis of common production tasks, with the aim of turning the taxonomy of algorithms into a taxonomy of the organizational production tasks described by the algorithms in question. Thus, a "P-class production task" will denote a production task that can be represented by a problem whose algorithmic solution is P-hard, an "NP-class production task" will denote a production task that can be represented by a problem whose algorithmic solution is NP-hard, and a "U-class production task" will denote a production task that can be represented by an undecidable problem. This classification of production tasks makes it possible—in the following section—to produce testable models of the effects of the complexity of the production function of the firm on managerial decisions and subsequent behavior of the organization.
3.2.1. The P-Class: "Simple" Production Tasks. The P-class of production tasks includes inventory planning and resource allocation problems (problems of linear matrix inversion); batch manufacturing processes (processes of matrix multiplication, which model the construction of linear combinations of inputs); and criterion-based sorting, searching, and categorization routines (such as bookkeeping, recording, and classification of technical information, the assignment of specific cases to particular reference classes, and processes where learning takes place in question-and-answer sessions). A summary of the P-class of organizational production functions and the canonical algorithms that model them is shown in Table 2.

The sorting task can be represented by the problem of ordering a set of variables $\{f_1, f_2, \ldots, f_M\}$ into the set $\{f_{i_1}, f_{i_2}, \ldots, f_{i_M}\}$, where $f_{i_1} \le f_{i_2} \le \cdots \le f_{i_M}$. The computational complexity of this production task is approximately given by $K_a = M \log_2 M$ (Traub and Wozniakowski 1980), where M is the total number of variables. Thus, the computational complexity of sorting operations is a log-linear function of the size of the list of objects that are to be sorted, and will therefore increase only slightly faster than a linear function of M.

A variant of the sorting problem has to do with determining the reference class to which a particular instance belongs. Such classification problems come up in situations such as that of a bank manager trying to figure out the "risk class" for a loan applicant for the purpose of determining his or her eligibility for a loan, or that of an insurance provider trying to classify a particular oil shipment in order to get some sense of the probability of having to pay out the full insured amount. Suppose that we are working with a binary alphabet used to encode the risk class for a particular asset or loan using binary variables (such as over 50—"1," under 50—"0"; college-educated—"0," not college-educated—"1"; and so forth). The full "code" assigns different risk measures to different strings of binary symbols—"1000001" might beget a "high" risk factor, for example. The computational complexity of decoding this binary tree is given by the number of operations required to compare, in bitwise fashion, the binary digits of two strings of bits. The process is sketched in Figure 1. If the number of possible input strings is L, and each individual bit takes on the value 0 or 1, then the total number of steps required to completely decode the string of bits will be of the order of $K_a = \log_2 L$, because the total number L of possible input strings will be $L = 2^{K_a}$.

Figure 1    Computational Complexity of a Binary Tree Search. The problem: map an input string such as "001" onto one of the end nodes [000] through [111] of a binary tree by sequentially "going through" the decision tree. N = 8 is the number of possible final states of a search through a binary tree with three levels; K = 3 is the number of levels of the binary tree.
Table 2    Mapping Production Tasks to Associated Canonical Algorithms: The P Class

This production task | can be understood as an instantiation of this algorithm (Cormen et al. 1993) | whose complexity as a function of the number of input variables is of the order of | and is therefore in complexity class
Calculation of expected utility for an N-outcome lottery; demand forecasting for an N-parameter demand function | Linear matrix multiplication for two N-dimensional matrices | $N^2$ | P
Inventory planning for N different products with N different demand functions; resource allocation to N competing projects | Linear matrix inversion for two N-dimensional matrices | $N^3$ | P
Classification of a random data set into N different categories; sorting of N different quantities in increasing or decreasing order | Tree search for a tree with N nodes | $N \log_2 N$ | P
Game of "20 questions" to extract information about a variable between zero and M with maximum error of M/N | Tree search for a tree with $\log_2 N$ nodes | $\log_2 N$ | P
Another common P-hard production task that can be easily formalized as a canonical algorithm is that of reasoning by precedent or analogy, a form of reasoning that often undergirds "fast and frugal" strategic thinking (Schwenk 1984). A large (and common) knowledge base of previous events and relationships between them functions as the organizational memory to which current predicaments are compared in order to derive decision heuristics. Such comparisons can be treated as correlations between vectors (Cormen et al. 1993) or N-dimensional arrays. Their computational complexity is of the order of the product of the sizes of the vectors that are being correlated (i.e., MN, where M and N are the sizes of the two vectors in question, for M × 1 and N × 1 vectors). Hence, reasoning by analogy or "pattern matching," individually or as a group, can be understood as a P-hard production task.
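A hypothetical sketch of the bitwise risk-classification decode described above; the bit variables and risk labels are invented for the example. Decoding walks one level of a binary tree per bit, so L possible codes require only log2 L decision steps.

```python
# Hypothetical sketch of the binary risk-classification decode described above.
# The bit variables (e.g., age and education flags) and risk labels are invented
# placeholders; the point is that L = 2^K codes need only K = log2(L) steps.
def classify_risk(bits, risk_table):
    """Walk a binary decision tree: one decision step per bit."""
    node = 0
    for b in bits:            # K = log2(L) iterations
        node = 2 * node + b
    return risk_table[node]

if __name__ == "__main__":
    # Illustrative 3-bit code, so L = 8 possible strings.
    risk_table = ["low", "low", "medium", "medium",
                  "medium", "high", "high", "high"]
    print(classify_risk([0, 0, 1], risk_table))   # 3 decision steps, not 8
```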
3.2.2. The NP Class: "Hard" Production Tasks. The NP class comprises production tasks that can be represented as processes of solving NP-hard problems: Their solution time increases exponentially as a function of the number of input variables. These production tasks include diagnostic inference problems (inference to the best explanation) (Bylander et al. 1991), pointwise verification of the internal logical consistency of different sets of axioms (as might be found in software engineering and debugging, for instance), and strategic reasoning involving iterated dominance arguments encountered in game-theoretic analyses of strategic predicaments. The class of NP-hard production tasks is summarized in Table 3.
Table 3    Mapping Production Tasks to Associated Canonical Algorithms: The NP Class

This production task | can be understood as an instantiation of this algorithm (as per reference) | whose complexity as a function of the number of input variables is of the order of | and is therefore in complexity class
De novo software design with a finite set N of output constraints; de novo semiconductor chip design with a finite set N of output constraints | Consistency checks with N different rules (Casti 1991) | $2^N$ | NP
Strategic situation analysis based on inference to the best explanation | Graph search of an N-node undirected graph (Cormen et al. 1993) | $2^N$ | NP
Nash equilibrium calculation for M players with K strategies each | Graph search for a KM-node undirected graph (Cormen et al. 1993) | $2^{MK}$ | NP
Logistical map design for N objectives with a nonlinear cost function | Traveling salesman problem with N locations (Garey and Johnson 1979) | $2^N$ | NP

An example of an NP-hard production task is the building of a new piece of software subject to the specification of a set of required outputs (see Abelson and Sussman 1993). For a complete specification of a software design problem, we need a set $S = \{s_k\}_{k=1}^{m}$ of rules of syntax (describing, say, the allowable configurations
of words, phrases, commands, and symbols included in a programming language) and a set $G = \{g_l\}_{l=1}^{n}$ of performance specifications ("goals") for the code (including, but not limited to, run-time requirements, interface specifications, and the availability of certain features). One way to capture the description of the code development problem is that of producing logical consistency between the two sets S and G by iterative refinements of the sets S and G. To get an estimate for the complexity of this task, consider first the problem of verifying whether or not S is consistent with $g_1$, i.e., one of the set of constraints and design goals that the code must satisfy. To determine the complexity of this calculation, form the set $S_1 = S \cup \{g_1\}$ and ask, what is the computational complexity of deciding whether or not all of the elementary propositions in this set are internally consistent? To determine the internal consistency of the set $S_1$, we have to form all of the subsets of $S_1$ that include $g_1$, i.e., $\{g_1, s_1\}$, $\{g_1, s_1, s_2\}$, and so forth, which in turn is equivalent to forming all of the subsets of $S_1$ except for the null subset and checking for the internal consistency of these subsets. There are $2^m$ such subsets. Therefore, we have that the computational complexity of self-consistently adding one constraint or design goal onto the series of design rules for the code involves resolving a problem of computational complexity $2^m - 1$: an exponential function of the number of design rules and the number of design goals and constraints.
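A hypothetical sketch of the subset-enumeration step just described: with m syntax rules and one design goal, a brute-force consistency check examines on the order of 2^m subsets. The rule names and the consistency predicate below are placeholders.

```python
# Hypothetical sketch of the exhaustive consistency check described above.
# S is a set of m syntax rules and g1 a design goal; is_consistent() stands in
# for whatever domain-specific check applies. The point is the 2^m subsets.
from itertools import combinations

def consistent_with_goal(syntax_rules, goal, is_consistent):
    checked = 0
    for r in range(1, len(syntax_rules) + 1):
        for subset in combinations(syntax_rules, r):  # 2^m - 1 nonempty subsets
            checked += 1
            if not is_consistent(set(subset) | {goal}):
                return False, checked
    return True, checked

if __name__ == "__main__":
    rules = [f"s{i}" for i in range(1, 11)]           # m = 10 placeholder rules
    ok, n_checks = consistent_with_goal(rules, "g1", lambda props: True)
    print(ok, n_checks)                               # 2^10 - 1 = 1023 subsets
```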
Another example of an NP-hard production task is that of finding the Nash equilibrium in a competitive game. This can be thought of as the task of an "ideal" (and, perhaps, real) executive team working as a whole, or a single strategic planner within the organization. The problem has been shown to be NP-hard (Gilboa 1989) by showing that it is isomorphic to the problem of finding all of the cliques of size k in an undirected graph G (see Figure 2). We can see, intuitively, that the problem is exponential-time complex when we consider that iterated dominance reasoning requires that each "player" be thinking about the other player's strategies, about one's optimal responses to these strategies, about the other player's optimal responses to one's own optimal responses, and so forth, until equilibration is reached—i.e., until no one player can make himself or herself better off by unilateral deviation from a common strategy set.

Figure 2    Graph-Theoretic Interpretation of the Clique Problem. The problem "Does graph G have a clique of size k?" is NP-hard (Cormen et al. 1993). Graph Gr{x, y, u, z, w, v} is a set of connected nodes and vertices; clique Cl(x, y, z, w) is a clique of size k = 4 of graph G (a clique is a set of fully connected vertices: each vertex is connected to every other vertex). The problem is NP-hard because, given k and G, all possible k-cliques of G must be searched through.

3.2.3. The U-Class. The U-class contains production tasks that can be represented by problems that are provably undecidable. Undecidability does not refer to "ambiguity" or "fog" in the specification of a problem, but rather to the fact that the problem, even though exactly and sometimes quite simply specified, is provably not solvable. An early example of an undecidable problem comes from an epistle from St. Paul to Titus, stating that Epimenides, a Cretan, had informed St. Paul that "most Cretans are liars." In its simplest form (deciding whether or not "I am telling a lie" is true) the resulting problem is not decidable, as the proposition is true if judged to be false and false if judged to be true. Examples of organizational production tasks that can be understood as undecidable problems include solving some moral problem (finding transcendental and final grounding for a set of moral principles) or epistemic problem (the skeptic's problem). Moral reasoning problems would be no harder than simple classification (P-hard) problems if there is ex ante agreement on a set of ultimate principles or values (as all one has to do to get agreement is compare any argument with these ultimate principles). However, they become U-hard problems if there is no such agreement, as an "ultimate justification" that transcends any one set of principles must be given in what would qualify as a "solution." Similarly for epistemic
problems, as theoretical paradigms bring with them their own means of justification of truth claims, different theories are symbiotic with different epistemologies, and the problem of ultimate justification is synonymous with that of building, starting from the premises of a theory, an epistemology that transcends the boundaries of that theory (a U-hard problem). The U-class of organizational production tasks is summarized in Table 4.

A U-class problem of particular relevance is that of determining "reflexive" market equilibria—i.e., equilibrium in markets where information is traded at prices about which information is traded at prices about which information is traded, and so forth. Suppose that n traders participating in a market make their respective buy/sell decisions on the basis of their personal probabilities about the equilibrium price of the good in which they are trading. Suppose that there is only one good and that the traders all want to maximize their individual profits after the trade. Each trader k has a personal set of probabilities
$\Pr_k^i(p_e^0 \mid \theta_k)$, $1 \le k \le n$, $1 \le i \le m$, about the equilibrium price, $p_e^0$, conditional upon her private information $\theta_k$. A trader will trade if the expected value of the trade is positive, and will not trade if the expected value is zero or negative. There is, however, also a market for the information embedded in the personal probabilities of the traders. That is, there is an industry of market analysts, academics, observers, and informants who trade in the information they have about traders' propensities to trade—and therefore in the information they have about traders' personal probabilities. The information embedded in a probability estimate $\Pr_k^i(p_e^0 \mid \theta_k)$ is given by $I_k^1 = -\log_2 \Pr_k^i(p_e^0 \mid \theta_k)$ (Hamming 1987), where the superscript 1 refers to the first-level information of the traders.
Table 4    Mapping Production Tasks to Associated Canonical Algorithms: The U Class

This production task | can be understood as an instantiation of this algorithm (as per reference) | whose complexity as a function of the number of input variables is of the order of | and is therefore in complexity class
Moral deliberation from incommensurable sets of assumptions | Search for ultimate deductive justification of axiomatic base (Putnam 1985) | Infinite | U
Epistemic deliberation from incommensurable sets of representations (market equilibration with information feedback loops) | Search for ultimate deductive justification of axiomatic base (Putnam 1985) | Infinite | U
This second market functions analogously to the first: Trader k trades on the basis of her expectations about the equilibrium price of the information about a particular equilibrium price. This second-order equilibrium price, $p_e^1(I_k^1(\Pr_k^i(p_e^0 \mid \theta_k)))$, will therefore also be subject to traders' personal probabilities, given by $\Pr_k(p_e^1(I_k^1(\Pr_k^i(p_e^0 \mid \theta_k))) \mid \theta_k)$, which will have information content $I_k^2 = -\log_2 \Pr_k(p_e^1(I_k^1(\Pr_k^i(p_e^0 \mid \theta_k))) \mid \theta_k)$. There are also higher-order markets formed analogously. Their formation is only limited by the time, computational resources, and imagination available to each trader. Now the personal information $\theta_k$ for each trader changes with every realization of a higher-order market by the inclusion of the relevant information, i.e., $\theta_k = I_k^0 + I_k^1 + I_k^2 + \cdots$, and so forth. Therefore, trader k's trade decision for the good or commodity in question will change according to his or her beliefs about the equilibrium price of information about his or her beliefs, the equilibrium price of information about his or her beliefs about the equilibrium price of information about his or her beliefs, and so forth.

Does the cascade of markets converge to a global equilibrium? If we assume that the market is a gigantic computational device and that the information available at any point in time is finite, then the problem becomes that of devising a computational process that will converge to an answer or result, given an arbitrary input vector. Traders observe each other, process the relevant information about each other's behavior, and try to take advantage of each other's errors of judgment. Together, they instantiate a system that reflects upon itself using the rules of first-order logic. A price system that converges to a unique price vector for any set of inputs is equivalent to a logical system that can prove or disprove any proposition about itself. It can be shown that such a logical system (and, by implication, price system) does not exist (see Appendix A). This result confirms the intuition that the concept of "perfect equilibria" in trading markets sensitively depends on assumptions about the kinds of algorithms that individual traders use to make their buy-sell decisions. These algorithms have no ultimate justification and do not carry irreversible logical weight. Rather, they are dependent on the historical and psychological context of the "market" in which they operate.

3.3. Operationalization of Complexity Measures
It is reasonable to ask: How are complexity measures to be operationalized in such a way as to provide empiricists with conceptual and analytical tools for the investigation of real organizations and organizational behaviors? There are three steps to an operationalization of the computational approach to organizational analysis.

Step 1 is an ontological step. It invites the researcher to conceptualize and formulate organizational tasks as algorithms. An algorithm is a logical structure consisting of linked elementary steps: Inputs to one step
are the outputs of the previous step. Examples of algorithms are matrix multiplication and inversion procedures, graph-and-tree searches, and linear optimization routines. Organizational tasks such as linear combinations of inputs (matrix multiplication), the calculation of Nash-optimal output in an oligopolistic market (clique searches in undirected graphs), and logistical planning (the Traveling Salesman Problem, see below) can be readily conceptualized as algorithms. Other organizational processes, such as de novo software creation, software debugging, ASIC design, and negotiation processes, are not immediately amenable to algorithmic form, and an algorithmic model of them must be built. We have worked above, as an example of such a construction, through the specific case of software creation to external specifications, which can be modelled as rule-based reasoning with finite sets of rules, where the complexity of the task increases exponentially with the size of the rule sets. Several different algorithms may be used to capture a process, and the choice among these algorithms should subsequently be made either on empirical grounds, related to psychological realism and ecological validity, or on normative grounds, choosing the algorithms with the lowest computational complexity, with the expectation that rational agents would choose the same way. For example, negotiation processes can be modelled as problems of finding a Nash solution ("Nash bargaining") by iterated dominance reasoning (and thus, as clique searches in undirected graphs) or as problems of finding suitable precedents (leading to simpler problems of pattern matching). Which algorithm one uses to understand negotiation processes may depend on the context, complexity, contingency, and payoff structure of the negotiation.

Step 2 is a search step. It involves searching, from the database of canonical algorithms (see, for instance, Cormen et al. 1993, Garey and Johnson 1979), for those algorithms whose computational complexity has been accurately measured in an input-independent fashion, and mapping the "organizational algorithm" to one of these known algorithms. For instance, clique searches in undirected graphs that are fully connected are NP-hard regardless of the information at the disposal of the computational device carrying out the algorithm. However, some organizational algorithms escape such precise mapping. One may find, for instance, an algorithmic description of a particular organizational routine, but no well-known "canonical algorithm" that this algorithmic description maps into.

For this, we need Step 3, which is a measurement step. It involves crafting an algorithmic description of a task (albeit one that does not use a canonical algorithm that comes from a "library") and finding a functional representation of the dependence of the number of steps in the algorithmic representation of an organizational task on the number of input variables to that task. Computation
theorists (Garey and Johnson 1979, Cormen et al. 1993) approach this step by attempting to prove the equivalence of an algorithm whose computational complexity is not known (iterated dominance reasoning, for instance) to an algorithm whose computational complexity is known (clique searches on undirected graphs).

As one example of the application of computational mapping to the study of organizational tasks, consider the task of a small urban courier company using cars, motorcycles, and bicycles to provide same-day mail and parcel service to a large metropolitan area. Its couriers, acting as an organization, seek to optimize, through their actions, the total distance covered within a given day, subject to providing mail service to all of the intended recipients of the parcels. The organization as a whole, then, can be seen to be attempting to resolve, through its behaviors, a "traveling salesman problem" (TSP), one of a handful of problems shown early on in the literature to be NP-hard (Garey and Johnson 1979, Cormen et al. 1993). It is the problem faced by a salesman traveling to N different cities and seeking the shortest route that connects all of the cities. The problem is hard because, to guarantee finding the shortest path, one must in general generate and search through all of the possible paths that connect the N cities. The number of operations required to solve this problem grows exponentially with the number of sites to which the courier company must provide service.

There are at least three possible cases here that are interesting from a modelling perspective, and the computational approach helps us make the relevant distinctions between the three cases and can easily be applied to each case: (a) The first is one in which there is very limited central intelligence in the firm (relegated to taking orders and broadcasting them) and couriers, perhaps equipped with wireless communication devices, try to "find their way" to the optimum cost point. This situation corresponds to that in which the "computation" is carried out by the very actions of each individual courier, none of whom individually must attempt to solve an NP-hard problem. (b) The second is one in which the NP-hard planning task is delegated to a central computer or executive entity that calculates the optimum itineraries and sends them in real time to the individual couriers. This situation is one in which there is an explicit awareness of the TSP qua TSP: The computation is explicit, and its successful solution depends on the timely transfer of information to and from individual couriers and the computational resources of the central processing unit. (c) The third situation is one in which the organization as a whole attempts to solve the TSP (either in a centralized or decentralized fashion) but fails to do so, either because of computational or informational constraints, or because of an incorrect conceptualization of the problem. In this case, the organization as a whole can
still be computationally modelled—not as an ongoing process aiming to solve a TSP, but rather as an ongoing process trying to solve a modified TSP, one with blind alleys or errors. This modeling "move" is a standard one in the literature on computation theory, where not only "ideal" computational processes must be considered, but also processes running on imperfect computational devices, or having nonideal interfaces. From a modelling perspective, it is not necessary that members of the courier organization as a whole "know" they are resolving the problem which—ideally—they should be solving to achieve optimal results. Rather, what is interesting is to measure the organization's progress towards a solution of the "optimal" problem even as it consciously pursues the solution to a different problem.

Now let us examine how we actually measure complexity (rather than complexity class) in this example. Suppose that we concerned ourselves with the situation represented by the first case (case (a), above). We know from the literature on computational complexity that the traveling salesman problem is NP-hard, indeed, NP-complete; i.e., it can be mapped by a simple (P-hard) transformation into any other NP-hard problem (Cormen et al. 1993). This suggests that the problem is irretrievably intractable; i.e., it is almost guaranteed that no P-hard solution to it exists (because any P-hard solution to any other NP-complete problem would also entail that the TSP was solvable in P-time, but no such solution has been found, in spite of many years of concerted search efforts on all other NP-complete problems). Given that the problem is irretrievably intractable, we can assume that the fastest algorithm for solving it exactly will take a number of operations that is at least an exponential function of the number N of locations we are optimizing over; i.e., $K_a = e^N$. For N = 10 locations, the solution algorithm would require 22,026 operations, easily manageable on a modern personal computer. For N = 100, the algorithm would require $2.7 \times 10^{43}$ operations, which, on a machine capable of performing $10^{12}$ operations per second (a postmodern supercomputer), would require an amount of time that is greater than the age of the universe ($3 \times 10^{17}$ seconds, or roughly nine billion years) by a multiplicative factor of $10^{14}$.

Thus, we can use the computational complexity measure in conjunction with an estimate of an organization's computational and planning resources and an adequate algorithmic characterization of its production task to estimate upper bounds on the scale of the task it will likely undertake, and to define a complexity frontier of the organizational production task. It is also possible, using this complexity measure, to quantify precisely the decrease in complexity that satisficing approximations will bring, or to evaluate moves and tactics (in computational complexity space) that an organizational designer might make to deal with irretrievably intractable production tasks. It is these issues that the next section will discuss.
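The back-of-the-envelope estimate above can be reproduced with a few lines of code (our illustration, not the article's); it assumes, as in the text, K_a = e^N operations and a machine performing 10^12 operations per second.

```python
# Illustrative reproduction of the complexity-frontier estimate discussed above,
# assuming K_a = e^N operations and 10^12 operations per second of capacity.
import math

OPS_PER_SECOND = 1e12          # assumed planning capacity of the organization
AGE_OF_UNIVERSE_S = 3e17       # seconds, as used in the text

def operations(n_locations):
    return math.exp(n_locations)

if __name__ == "__main__":
    for n in (10, 20, 50, 100):
        ops = operations(n)
        seconds = ops / OPS_PER_SECOND
        print(f"N={n:3d}: {ops:.3g} operations, {seconds:.3g} s "
              f"({seconds / AGE_OF_UNIVERSE_S:.3g} ages of the universe)")
```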
4. Complexity Regimes and Organizational Structuration
The distinctions among P-, NP-, and U-complex production tasks that we have drawn in the previous section make it possible for us to focus on the ways in which organizations structure their tasks, inference, and decision-making processes between and within complexity regimes, and to examine through the computational lens the effect of task complexity regimes on organizational behavior. The fundamental assumption that pervades our analysis is that organizations seek to minimize the costs of their respective production tasks and, since production costs rise with the computational complexity of a task, that they will consequently seek to mitigate the computational complexity of the tasks they undertake. This assumption is consistent with the basic insights coming from the information-processing view of organizations (Simon 1947, March and Simon 1958, Cyert and March 1963, Simon 1996) and from the literature on organizations as coupled, interacting systems (Thompson 1967, Simon 1962, Weick 1995). This suggests that organizations evolve towards separable (or weakly coupled) systems of interacting elements, lest they lose their ability to carry out their activity sets reliably (Perrow 1984) and predictably (Rivkin 2000), or to implement purposive changes (McKelvey 1999). Our analysis adds to these basic insights a categorization of complexity regimes that allows researchers to make sharper distinctions in organizational patterns of behavior than was previously possible.

Starting from complexity minimization as a reasonably imputable objective function that helps us understand organizational structuration processes, we will examine organizational behavior within different task complexity regimes and across the U/NP and NP/P boundaries, and show that various forms of functional and structural decomposition can be understood through the computational lens as rational adaptations to computationally intractable task regimes.
4.1. Complexity-Driven Task-Partitioning Strategies
We start our analysis of structuration patterns by noting that unlike organizations such as Dell Corporation, which appropriates only marginal profits for putting together boards and components to build PCs by a typically P-hard production process (input-output matching and inventory planning), organizations such as Microsoft, Cisco, Lucent, and Intel manage to appropriate super-normal profits by selling products or services that are outcomes of typically NP-hard organizational production tasks (software design, testing and integration, chip design, firmware, and system design). The organizational behaviors that enable these results can be made intelligible as exercises in task structuration and
the structuration of decision making and planning processes in the face of potentially intractable complexity regimes.

The P/NP distinction sharpens the focus with which we can scrutinize the "simple-complex" or "easy-difficult" boundary among various production tasks. It is a fundamental distinction from a computational perspective, in that NP-hard production tasks cannot be "separated out" into P-hard combinations of P-hard production tasks (Cormen et al. 1993). (This fact can be perceived clearly by realizing that one cannot build a polynomial approximation or exact representation of an exponential function.) If they could be so separated, then NP-hard tasks would really be merely P-hard, because a polynomial function of a polynomial argument is itself a polynomial function. Thus, the P/NP boundary cannot be bridged by clever manipulations of the problem or task variable space. NP-hard organizational tasks can, however, be separated or truncated in ways that we can expect to encounter in the form of complexity coping and complexity management strategies in organizations. These strategies can be understood as organizational adaptations to intractable tasks or problems, and hence our analysis contributes a new family of complexity reduction mechanisms to the analytical (and interpretative) tools that we can use to understand the ways in which organizations structure themselves.

We postulate that there are three sorts of task-partitioning strategies that are particularly useful for reducing the organizational costs of engagement with NP-hard tasks:

(1) The first strategy is to separate out an intractable production task (NP-hard) into a number of associated NP-hard production tasks, each with a significantly smaller variable space. The problem of efficiently separating out NP-hard tasks into NP-hard subtasks is P-hard for a number of well-defined problems (Cormen et al. 1993), which include several of the graph-search problems (including the problem of finding a Nash equilibrium) we have considered here. In such cases, an organization can benefit from a large reduction of the computational complexity of the overall production task achieved through the functional decomposition of its activity systems. Taking away critical links between different organizational task modules (i.e., decomposing the system) leads to an exponential decrease in the computational complexity of the overall production task in the case of NP-hard production tasks.

This NP-to-NP reduction in computational complexity achieved through decomposition (or task modularization) can be easily understood by comparing two different partitionings of an NP-hard production task with eight different input variables. If the task is carried out with no partitioning (assuming exponential dependence of complexity on the number of input variables), then its computational complexity will be $K_a = 2^8 = 256$ operations.
Figure 3. The Benefits of Decomposition of an NP-Hard Task into Multiple NP-Hard Subtasks with Smaller Input Spaces. [Unmodularized task, 8 input variables: complexity ∼ 2^8 = 256 operations. Modularized task, four 2-input NP-hard subtasks feeding one 4-input NP-hard subtask: complexity ∼ 4 × 2^2 + 1 × 2^4 = 32 operations.]
Suppose, however (see Figure 3), that the task is decomposed into four independent subtasks with two input variables each (each of which is NP-hard), whose outputs are the inputs to a fifth NP-hard production task. The computational complexity of this modularized production task will be K_a = 4 × 2^2 + 2^4 = 32 operations. This reduction in complexity does not take into account the cost decrease realized by the parallelization of the subtasks. We would expect, and company studies indicate (Cusumano and Selby 1995), that large software firms such as Microsoft make use of this sort of ability to partition a problem space through (a) the structural modularization of large blocks of software and (b) the functional modularization of the software production tasks, enabling collaboration among programmers for the nonredundant production of a single software module. Such organizations reap the benefits of higher-than-any-polynomial cost decreases of their production tasks as a result of their modularization-based approach to structuration. The benefits of such structuration activities will, of course, also exist in the case of simple production tasks. However, because the task of task structuration is itself costly in many cases and the parallelization of subtasks entails a new set of coordination tasks, we would expect task modularization to be more prevalent in organizations engaged in NP-hard production tasks than in organizations engaged in P-hard production tasks. Moreover, because of the higher expected value of the payoff, we would expect that organizations engaged in NP-hard production tasks would more readily engage in research and development activities aimed at production task modularization and parallelization than
would organizations characterized by P-hard production tasks.
(2) A second strategy for dealing with NP-hard tasks consists of matching the production task to the characteristics of the entity that carries out the task. This strategy is similar to the strategy of computation system designers who match the characteristics of the algorithm to the characteristics of the computational platform on which the algorithm is supposed to run (Cormen et al. 1993). One would not use a Pentium processor, for instance, for digital signal processing operations, nor a digital signal processor as a host for the Linux operating system. Matching the properties of the algorithm to the properties of the device on which the algorithm runs is one of the keys to the efficient implementation of computational intelligence (Cormen et al. 1993). NP-hard problems such as the traveling salesman problem (TSP) are hard in the sense that the time to carry out the steps necessary for solving them is prohibitively long. However, each individual step in the TSP is in itself simple. In the absence of any shortcut to the solution, a mathematical genius would fail to solve the TSP even for a moderately high number of input variables, but a very large number of individually simple but coordinated computational devices could well succeed. Thus, it may be advantageous to an organization to partition its tasks in such a way as to optimize the fit between the computational structure of the task and the computational structure (speed, internal sophistication) of the entity carrying out the task. Organizations and markets can be thought of as instantiations of computational devices on which problems are solved. Production tasks can be understood as
the dynamic processes by which problem-solving algorithms run on their hardware platforms. Organizations can be understood as highly centralized computational platforms (Hayek 1945), whereas markets can be understood as decentralized platforms. When feasible and efficient, some production tasks that can be parallelized and do not require a lot of central intelligence can be “subcontracted” to a component of the market, which provides a decentralized computational platform in which individual efforts are coordinated through the price system (Hayek 1945). Large software companies provide instantiations of this sort of partitioning. They understand that (a) software testing does not require a lot of sophistication, but a large number of trials, and (b) that markets containing a large number of (relatively unsophisticated) users are more likely to thoroughly test a piece of software than a small group of (highly sophisticated) software engineers would (Cusumano and Selby 1995, Cusumano 2000). Thus, they organize for a division of labor between the firm and its own product market. It is expected that large pieces of code will contain bugs that will be fixed by an iterative process in subsequent releases of the software package. It is also known that the process of searching for these bugs is an NP-hard production task, as it involves testing for the responses of the software to all admissible combinations of inputs (a factorial function of the number of possible inputs) (Garey and Johnson 1979). Each testing cycle (laboratory, alpha, beta, general availability) provides for a broadening of the solution search space over which the overall product optimization is performed. Task partitioning via structural partitioning can be carried out within the organization as well, by predefining the couplings among value-linked activity sets (e.g., engineering and production, production and marketing, marketing and business development) and forbidding (or increasing the cost of) the addition of certain links through organizational fiat, rule systems, or incentive structures. An example of forbidding certain couplings would be the use of geographically separate design teams to work on different parts of a system design problem, or the separation of design and test groups within an engineering organization. The complexity of the tasks that can be taken on by the organization will subsequently be constrained by the inherent architecture of the organizational “hardware” meant to carry out these tasks and by the time constraints placed on the organization as a whole by its market environment. This may explain the isomorphism between project structure and project team structure discussed in the project management literature (Wheelwright and Clark 1992, Baldwin and Clark 2000). This strategy does not require a computationally formidable entity at the top of the organization that can optimally allocate tasks to different groups within the organization based on an intimate knowledge of the task.
Rather, complexity constraints are placed on the resulting tasks via the insertion of structural constraints on the entities carrying out the tasks. The importance of this sort of strategy to the overall complexity of the production task of an organization can also be understood through the lens of models of organizations as coupled, value-linked activity systems such as NK(C) networks (Kauffman 1993, McKelvey 1999). Rivkin (2000) exposes the link between computational complexity and the structural complexity of NK systems by showing that the problem of predicting which local optimum a Boolean network of linked activity sets will converge to is NP-hard when the average number of links K > 2. If organizational activity systems are structurally constrained to the regime K ≤ 2 by a system of organizational boundaries on the groups carrying out these activities, then overall task complexity will have been controlled without the need for task-specific intelligence to reside in the executive or operational management of the organization.
(3) A third strategy for dealing with NP-hard production tasks aims to mould or partition them into subsystems of activities that allow for satisficing, P-class approximations. When optimality can only be achieved by consideration of a fully coupled system of activities that poses impossibly large computational complexity costs, exact solutions—whose pursuit is NP-hard—may be replaced with P-hard approximations. Computer boards, for instance, are decomposable into subassemblies, as there are relatively far fewer interactions among the functional elements on a board than among the functional elements of its components, such as the chip (Baldwin and Clark 2000). While it is NP-hard to jointly optimize the feature sets of microprocessors, memory chips, discrete analog components, power components, and so forth on a computer board, once these tasks are decoupled and the individual components are individually optimized subject to compatibility with a set of standard interfaces, it is only P-hard to select the "boundedly optimal" configuration. (One has only to list all of the configurations that are interface compatible and order this list, a tree search that is a P-hard production task.) Thus, modularization carries an "information-hiding" function that reduces the computational complexity of the resulting optimization task (Baldwin and Clark 2000). The idea of satisficing solutions in situations of bounded computational resources is certainly not new. It dates back to the early work of Simon, March, and Cyert (Simon 1947, March and Simon 1958, Cyert and March 1963). However, one particular sort of partitioning emerges as a new structuration strategy from the computational perspective. It involves separating the task of generating possible solutions to NP-hard problems from the task of verifying that these solutions are indeed
correct. As is known in the theoretical computer science field, it is only a P-hard problem to verify that a given solution to an NP-hard problem is sufficiently close to the optimum, provided that sufficiency conditions have been specified (Cormen et al. 1993). If an organization can find a way to avoid the cost of systematically searching through the intractably large solution set of an NP-hard problem and instead appropriate a solution that it has verified, then it will have put into place a mechanism for receiving the benefits of carrying out an NP-hard production task for the cost of carrying out a P-hard production task. Growth-through-acquisition strategies in complex high-technology fields (such as carrier-class communications systems) can be explained as a sort of partitioning between solution generation and solution verification. Large systems integrators such as Cisco, Inc., and Nortel, Inc., grow through acquisitions of smaller firms with finished products that have achieved the first positive market tests of the core technologies embedded in these products (Slater 2003). Given the high payoffs involved in an acquisition by a large systems integrator, there are usually many venture-capital-backed small firms that function as "solution generators" for technological problems. The customers of these firms, the market, are the primary solution verifiers. Large systems integrators monitor market reactions and employ their own technical knowledge to identify the closest and most robust approximation possible to the solution of the NP-hard problem. Once they identify such a solution, the act of acquisition or launch of a generic product reinforces this solution and opens the market for more conservative, risk-averse customer groups.
Such a partitioning strategy is also identifiable in the development paths of large software organizations such as Microsoft, Inc. None of Microsoft's major applications platforms (such as DOS, Windows, and Navigator) were developed de novo within Microsoft (Cusumano and Selby 1995). Rather, these applications were purchased or licensed from their inventors when the consumer market produced sharp signals about their desirability, and aggressively marketed as part of a large suite of applications and platforms with which they became interdependent. This strategy uncouples the task of solution generation (de novo software creation) from the task of solution verification (verification of product fit with a set of syntactic and structural constraints and output goals). The former is carried out by independent producers that act jointly as a source of variation. The latter is carried out by the consumer market, which acts as a selection mechanism. The focal organization (Microsoft) achieves by this strategy a structural partitioning of an NP-hard production task by contracting out to entrepreneurial groups the solution-generation step and to the consumer market the solution-verification step.
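The asymmetry between generating and verifying solutions that underlies this strategy can be made concrete with a small, hypothetical illustration (the hypotheses, data points, and function names below are invented for the example; the covering formulation merely echoes the inference-to-the-best-explanation representation discussed in the next subsection). Checking that a proposed set of hypotheses covers all the data takes time roughly proportional to the size of the proposal, whereas searching for the smallest such set by brute force requires examining a number of candidates that grows exponentially with the number of hypotheses.

```python
from itertools import combinations

# Toy illustration (hypothetical data): each "hypothesis" covers a set of data points.
hypotheses = {
    "h1": {"x", "y"},
    "h2": {"y", "u"},
    "h3": {"u", "z", "w"},
    "h4": {"w", "v"},
    "h5": {"x", "v"},
}
data = {"x", "y", "u", "z", "w", "v"}

def verifies(candidate):
    """P-hard step: checking that a proposed explanation covers all the data
    takes time roughly linear in the size of the candidate set."""
    covered = set().union(*(hypotheses[h] for h in candidate)) if candidate else set()
    return covered >= data

def brute_force_minimum_cover():
    """NP-hard step: exhaustive search over all subsets of hypotheses,
    i.e., 2^n candidates for n hypotheses."""
    for size in range(1, len(hypotheses) + 1):
        for candidate in combinations(hypotheses, size):
            if verifies(candidate):
                return set(candidate)
    return None

print(verifies({"h1", "h3", "h4"}))   # cheap check of a proposed solution
print(brute_force_minimum_cover())    # expensive search for the smallest one
```

An organization that lets others generate candidate solutions and confines itself to the cheap verification step thus captures much of the benefit of the NP-hard search at P-hard cost.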
The same kind of partitioning strategy can be observed in the evolution of open-source software development projects such as Linux (Bonaccorsi and Rossi 2003, von Krogh and von Hippel 2003). The basic syntax of the new operating system provided a framework that allowed for the hierarchical partitioning of the NP-hard task of de novo software development into NP-hard subtasks pursued by independent developers and teams of developers. The market for software products acted as a selection mechanism for new software modules, and permanent changes in the syntax of the operating system code acted as a retention mechanism, which granted legitimacy and staying power to the successful innovations.
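Before turning to planning and decision-making processes, the modularization arithmetic that runs through the examples above can be summarized in a brief sketch. It is an illustrative approximation rather than a formal model: following Figure 3, it assumes that the cost of an NP-hard task grows as 2^n in the number n of input variables, and it compares an unpartitioned task with partitions into parallel NP-hard subtasks whose outputs feed a final NP-hard integration task.

```python
def np_hard_cost(n_inputs):
    # Working assumption from Figure 3: cost of an NP-hard task ~ 2^n operations.
    return 2 ** n_inputs

def partitioned_cost(subtask_inputs):
    """Parallel NP-hard subtasks plus an NP-hard integration task
    that takes one input per subtask."""
    return sum(np_hard_cost(n) for n in subtask_inputs) + np_hard_cost(len(subtask_inputs))

print(np_hard_cost(8))                  # 256 operations, unmodularized
print(partitioned_cost([2, 2, 2, 2]))   # 4 * 4 + 2^4 = 32 operations, as in Figure 3
print(partitioned_cost([4, 4]))         # 2 * 16 + 2^2 = 36; the integration task shrinks to 2 inputs
```

The comparison also anticipates a point made below about functional specialization: the finer the partition, the lighter the load on each subtask but the heavier the residual NP-hard integration task, which no partition can eliminate.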
4.2. Complexity-Driven Structuration of Organizational Planning and Decision-Making Processes
As we have pointed out above, organizational tasks that are purely cognitive in nature—such as the generation of explanations for market and organizational events and the generation and justification of strategic and operational plans—can also be understood as tasks whose computational complexity can be measured and makes a difference to their evolution. The generation of organizational explanations, strategies, and plans can be understood as computational simulation processes (Wolfram 2002) that use information about current conditions and causal mechanisms to generate explanations of past events or predictions (or prescriptions) about future events. This formulation makes it possible for us to apply the computational framework developed in this paper to examine the ways in which organizations structure processes of explaining, planning, and forecasting, and the complexity-coping mechanisms they use to deal with computationally intractable tasks. Before doing so, we must also consider the ways in which the problems of explaining, forecasting, and planning are rendered solvable in the first place, or—more precisely—the ways by which organizations "do away" with provably unsolvable (U-type) problems, which lurk wherever there exists the will to unmask epistemological or moral conflict or ambiguity. We posit that organizations develop "stopping rules" (see Gigerenzer and Selten 2000, Gigerenzer 2003) that allow their managers to state well-posed problems and "safely" ignore U-type problems. We can conceptualize, in complexity language, the contribution of at least two organizational mechanisms for producing such stopping rules: the role of conventions and habits of the mind that are common knowledge within or among organizations, and the use of specialists as guarantors of epistemic warrant (in lieu of ultimate justification or ultimate proof of optimality of a particular assumption or action prescription).
First, culture and tradition can function as sources for effective stopping rules that turn uncomputable problems
into tractable ones. In the example of reflexive price formation in markets in which information is traded, information about information is also traded, and so forth, there is clearly no nonarbitrary point at which a trader, analyst, pension fund manager, or private investor can stop computing, because the problem of computing an "ultimate" set of prices for the traded asset, information about the traded asset, information about the information about the traded asset, and so forth is provably noncomputable. In this case, commonly accepted models in the industry (general equilibrium models, the capital asset pricing model, the Black-Scholes option pricing model for derivatives) supply de facto stopping rules for computation. Notably, solving problems using these models always confronts one with a problem that is no harder than P-hard (Moldoveanu 2003). When these models become common knowledge in an industry (everyone knows them, knows that everyone else knows them, and so forth), a predictive equilibrium develops in which individual managers can make good predictions about each other's behavior without the use of detailed technical models that are more realistic or truthlike (but are computationally intractable). Numerical simulations (Darley and Kauffman 1999) reveal that such equilibria develop precisely at the points where the computational complexities of the models used by all of the agents that are trying to predict each other's behaviors are equal. Thus, a predictive equilibrium is constructed on the foundation of computational isomorphism of the models used by managers to predict each other's behaviors.
Second, the emergence of specialists that provide customized solutions to managerial problems can also be seen as an adaptation to the underlying indeterminacy of solution techniques for managerial problems. Specialists provide epistemic warrants (Toulmin 1983) or credibility tokens in situations in which there is no self-evident or unassailably justifiable way of arriving at a solution. Most epistemic or moral problems are of this sort, and many, if not most, organizational problems can be made to slip down the slope that leads to fundamental epistemic or moral difficulties (for examples, see Moldoveanu and Nohria 2002). Specialists such as strategic, operational, and process consultants can be understood as suppliers of legitimate, solvable problem statements or, in computational terms, of synthetic, often arbitrary, "stopping rules" to processes of reasoning about reasoning (Gigerenzer and Selten 2000).
In examining the ways in which organizations cope with solvable but computationally intractable problems of explanation, forecasting, and planning, we would expect to encounter complexity management strategies that are similar to those we described for coping with organizational tasks, above:
(1) Functional Decomposition into NP-Hard Subtasks. As seen in the case of organizational tasks, turning
NP-hard problems into parallel NP-hard subproblems affords the problem space designer a large net decrease in computational complexity. Four two-input NP-hard subtasks in sequence with one four-input NP-hard subtask require eight times fewer operations (32) than the eight-input NP-hard task (256) that they replace. NP-hard problems of cognitive production, such as iterative dominance reasoning (Gilboa 1989) and consistency checking on finite sets of axioms (Casti 1991)—which can be used to model de novo design of organizational rule systems—can be usefully decomposed into subproblems which, although themselves NP-hard, are nevertheless based on far narrower input variable sets than the original, unrestricted problems. Iterative dominance reasoning, for instance, can be usefully decomposed into sequential subgames (Gintis 2000). Functional specialization on top management teams can be understood as a mechanism of complexity coping based on this kind of problem space partitioning.
Strategic and operational analysis can be understood as an (NP-hard) production task of providing an inference to the best explanation based on a finite but potentially large data set and a finite but potentially large set of covering hypotheses, which together form an explanation for a market or an organizational event. Inference to the best explanation based on minimum-vertex-cover computations has been used to effectively simulate the diagnostic reasoning process in internal medicine (Miller et al. 1982, Pearl 1987) and, more recently, to simulate the process by which strategic managers reason about sudden changes in market or competitive conditions (Moldoveanu 2003). Inference to the best explanation (Bylander et al. 1991) is computationally isomorphic to the problem of finding the minimum vertex cover (see Figure 4) for an undirected graph. A minimum vertex cover is the smallest subset of edges (links) in the graph that together span all of the nodes of the graph. If we position the "data points" at the nodes of the graph and the explanations that cover them on the edges of the graph, then the minimum vertex cover of the graph represents the minimal set of hypotheses that together cover all of the data that needs to be explained. The problem of generating the minimum vertex cover given an undirected graph is NP-hard (and therefore intractable for large data sets). Functional specialization amounts to the decomposition of the large-input-size NP-hard problem of providing an optimal explanation for a large number of disjoint data points into parallel (functionally specialized) NP-hard subtasks with far more restricted input variable spaces. Each executive function yields its own "optimal explanation" for the pattern of evidence that falls under its legitimate cognitive jurisdiction, which becomes an input to the (NP-hard) task of the chief executive. This problem space partitioning highlights the fact
Figure 4. Graph Analysis of Inference to the Best Explanation. [Graph G = {x, y, u, z, w, v} represents data points at vertices and hypotheses at edges. An "explanation" is a set of hypotheses (edges) that together cover all of the data points (vertices); a "best explanation" is the minimal such set, i.e., the "minimum vertex cover" of graph G (Bylander et al. 1991).]
that there is an inverse relationship between the computational load that befalls the functional specialists and the computational load that befalls the chief executive. Partitioning an eight-input NP-hard task into four parallel two-input NP-hard tasks leaves the chief executive to tackle a four-input NP-hard task, whereas partitioning the same eight-input NP-hard task into two parallel four-input NP-hard subtasks reduces the computational load on the chief executive by a factor of four. The novel insight that computational complexity theory affords us into such problem space partitioning phenomena is that the NP property of a problem is conserved and decomposition cannot do away with it. No matter how many functional specialists the chief executive retains in our example, he/she/it must still deal with an NP-hard subproblem, given that he/she/it started out with an NP-hard problem. This is a consequence of the basic property of NP-complex algorithms—that they cannot be broken down into a P-complex combination of P-complex algorithms.
(2) Problem Formulation, Satisficing Approximations, and the Role of Heuristics. Problem formulation (or selection, recognition, or construction) (Pounds 1969, Mintzberg et al. 1976, Lyles and Mitroff 1980, Cowan 1986) has for some time been studied as both an independent and a dependent variable that makes a significant difference in strategy processes—although there is relatively scant agreement on the dimensions along which such a difference should be measured. Our framework suggests such a dimension: Managers face a fundamental decision among the various complexity classes of problem statements that they can use to conceptualize
a particular predicament. A competitive interaction, for instance, can be conceptualized in terms of a series of mutual sequential adjustments of output predicated on short-time historical interactions (a P-hard problem of pattern recognition and rule selection based on the optimal outcome), or in terms of a game-theoretic analysis of the strategic options available to other industry players, followed by iterative dominance reasoning to isolate the Nash equilibria and focal point selection to choose among multiple equilibria (an NP-hard problem; Gilboa 1989), to name but two alternatives. Problem formulation (or selection, discovery, or articulation) can now be studied as a problem of complexity class selection, wherein complexity-related costs are traded off against the benefits of more accurate final solutions. We conjecture that these trade-offs will depend sensitively on the computational complexity of the models used by other agents with whom the organization's managers interact. Both P-hard and NP-hard reasoning schemas are self-reinforcing in the sense that they lead to predictive equilibria in situations in which everyone uses them, as described above. Of course, it is an open question whether predictive equilibria based on P-hard problem statements are resistant to purposive incursions from agents using NP-hard reasoning schemas, and vice versa.
Another way in which computational designers can satisfice is to accept NP-hard problem statements, but cut through to a solution in polynomial time, using a rule of thumb, or heuristic. Ecologically sound heuristics can be thought of as providing "best guesses" or P-hard approximations to computationally intractable problems. The
recognition heuristic (Goldstein and Gigerenzer 2002) is a "positive version" of the well-known "availability bias" (Tversky and Kahneman 1980): In laboratory realizations, the subject infers that the recognized object in a multiple-choice judgment test has a higher value relative to a decision criterion than the unrecognized object. The alternative to using such a heuristic is either to engage in the exercise of building a full-blown causal model that makes the correct prediction (an NP-hard task involving assumption specification and consistency checking among assumptions, among assumptions and hypotheses, and among hypotheses and observation statements) or to build a probabilistic model that conforms to the rules of the probability calculus and the information given in the test case (another NP-hard problem involving consistency checks among information partitions and the rules in question). As Gigerenzer argues (Gigerenzer 2003), the recognition heuristic is "ecologically rational" (rational given the environment in which it is used) when "ignorance is systematic," i.e., when "recognition is strongly correlated with the criterion" (Gigerenzer 2003). It is common when studying managerial cognition (Schwenk 1984) to point to reasoning by example or reasoning by analogy (especially by readily available analogy) as an example of a failure of rationality or of an unjustified "simplification maneuver." However, if strategic managers "cut through" to an answer by using a polynomial-time computation and an analogy whose predictive power for the correct answer (obtainable using an NP-hard computation) is significantly higher than chance, then the "availability bias" turns into the "recognition heuristic," a powerful means of generating informed guesses to intractable problems.
(3) Structural Separation of Solution Generation from Solution Verification and the Structuration of Variation, Selection, and Retention Mechanisms for Solving Intractable Problems. When confronted with intractable problems and limited computational resources, theoretical computer scientists do what most other individuals do: They guess at the answer, try out multiple guesses, and retain the successful ones. Genetic algorithms (Koza 1995, Back 1996) have emerged as a powerful computational paradigm for the solution of problems whose difficulty overwhelms the computational resources available for "brute force" solutions. Not coincidentally, the "NP" denomination of NP-hard problems stands for "nondeterministic polynomial time," rather than "nonpolynomial time" tout court. This is meant to remind researchers that such problems can be solved in polynomial time some of the time, i.e., if one is willing to give up the guarantee that the algorithm in use will converge to the right answer in a finite amount of time and is willing to live with answers that are probabilistically correct. This is precisely what happens when the solution algorithm begins from a guess at the answer, or a "hunch."
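The guess-and-verify logic can be illustrated with a minimal randomized-search sketch (an illustration of the general idea only, not a reconstruction of any algorithm cited here; the city names, distances, and budget are invented). Candidate tours are guessed at random, each guess is checked by a cheap polynomial-time verifier, and the best verified guess so far is retained; what is given up is any guarantee that the retained answer is the true optimum.

```python
import random

def tour_length(tour, distances):
    legs = zip(tour, tour[1:] + tour[:1])
    return sum(distances[a][b] for a, b in legs)

def verify(tour, distances, budget):
    """P-hard step: checking whether a guessed tour meets the budget is cheap."""
    return tour_length(tour, distances) <= budget

def guess_and_retain(cities, distances, budget, trials=10_000, seed=0):
    """Nondeterministic 'guessing': random tours, cheap verification, retention
    of the best verified guess so far. No guarantee of optimality."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        tour = cities[:]
        rng.shuffle(tour)
        if verify(tour, distances, budget):
            length = tour_length(tour, distances)
            if best is None or length < best[0]:
                best = (length, tour)
    return best

cities = ["a", "b", "c", "d"]
distances = {
    "a": {"b": 2, "c": 9, "d": 10},
    "b": {"a": 2, "c": 6, "d": 4},
    "c": {"a": 9, "b": 6, "d": 3},
    "d": {"a": 10, "b": 4, "c": 3},
}
print(guess_and_retain(cities, distances, budget=25))
```

Each trial costs only polynomial time; what is sacrificed is the certainty that the retained answer could not be improved upon.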
In the study of organizations, genetic algorithms have emerged as a powerful method for the study of organizational change processes (Bruderer and Singh 1996, Moldoveanu and Singh 2003) in complex predicaments in which brute-force solutions are computationally infeasible. The success of evolutionary algorithms as a solution paradigm for "hard problems" suggests a third structuration strategy by which organizations cope with NP-hard tasks of cognitive production: the design of efficient evolutionary mechanisms, relying on the separation of the variation, selection, and retention functions. As we have seen above, it is sometimes efficient in high-technology organizations with sophisticated production functions to separate out solution generation (variation) from solution verification (selection) and enforcement (retention), with small firms acting as solution generators (sources of variation), the product market acting as a solution verifier, and the large organization acting as a retainer and legitimator of successful solutions, which are integrated into its product suite.
This separation principle can also be used within the organization to sort through intractable problems of strategic and operational planning and design. As the studies of Burgelman and his collaborators have shown (Burgelman 1983), de novo research and development processes in high-technology organizations (such as Intel Corporation) confront the organization's managers with impossibly large solution search spaces (Simon 1996). This is the same problem as that faced by the algorithm designer confronted with an NP-hard solution task. In such cases, organizations cope with computational complexity through the design of internal mechanisms for the structuration of guessing. They act as internal resource-allocation markets, in which solution generation is delegated to small (and competing) research and development groups, solution verification (or selection) is delegated to lead (alpha and beta) customers and the office of the chief technology officer, and solution retention is carried out by the head office. Intraorganizational mechanisms for the structuration of NP-hard problem solutions mimic the evolutionary mechanisms to which the organization itself is subjected in the marketplace: The organizational planning process becomes a nondeterministic simulation engine for the process of production and competition, which amounts to an enactment of a virtual evolutionary process within the organization.
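A minimal sketch of the separation described here keeps variation, selection, and retention as distinct functions that could, in principle, be assigned to different actors (entrepreneurial units, lead customers, the head office). It is an illustrative toy under invented assumptions (the bit-string "design," the mutation rate, and the fitness function are all made up for the example), not a model drawn from the cited studies.

```python
import random

rng = random.Random(1)

def vary(parent, n_candidates=8):
    """Variation: entrepreneurial units generate mutated candidate solutions."""
    return [[bit ^ (rng.random() < 0.1) for bit in parent] for _ in range(n_candidates)]

def select(candidates, fitness):
    """Selection: lead customers or the market rank verified candidates."""
    return max(candidates, key=fitness)

def retain(incumbent, challenger, fitness):
    """Retention: the head office keeps the better of incumbent and challenger."""
    return challenger if fitness(challenger) > fitness(incumbent) else incumbent

def fitness(design):
    # Toy score over a 12-bit "design" (purely illustrative).
    return sum(design) + 3 * (design[0] == design[-1])

solution = [0] * 12
for generation in range(50):
    solution = retain(solution, select(vary(solution), fitness), fitness)
print(fitness(solution), solution)
```

Keeping the three functions behind separate interfaces is what allows them to be reassigned, inside or outside the organization, without changing the overall evolutionary logic.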
5. Discussion: Complexity and Organizational Structuration
We have developed a model of organizational complexity that connects conceptual work in the theory of computation and organizational complexity to empirically grounded concepts of production task analysis and the
economics of uncertainty and information. By linking the idea of organizational complexity to that of the computational complexity of a model of the production function of the organization, the foregoing analysis makes it possible to talk of organizational complexity in the language of the theory of computational complexity. We showed that it is possible to arrive at analytical measures of computational complexity for various production tasks, and therefore at a taxonomy of organizations derived from the nature of their production functions. We argued that there are natural "break points" in the categorization of organizational tasks, corresponding to the complexity class of the algorithms modelling those production functions. Several organizational structuration regimes were examined in the last section as coping mechanisms in the face of intractable task or problem complexity.
This approach focuses on a different distinction from that on which Thompson's (1967) influential theory of complex organizations is based. Thompson contributes a structural taxonomy of organizational production tasks and distinguishes among long-linked technologies (wherein several steps have to be performed in sequence), mediating technologies (linking interdependent customers), and intensive technologies (which depend on rapid feedback loops for their successful execution). From a computational perspective, one can have simple (P-hard) or complex (NP-hard) production tasks falling into any one of Thompson's categories. For instance, feedback loops iterated on even very simple systems can give rise to chaotic overall behavior if the feedback system is unstable and allows errors to propagate through the feedback cycle (Ott 1993), rendering the problem of making predictions about the long-term evolution of the system NP-hard. Of course, deterministic predictions about the precise state of a system at a point in time can be replaced with statistical ones about distributions of possible states of the system (Ott 1993), but the variance of statistical predictions about samples of chaotic sequences increases exponentially as a function of the farsightedness of these predictions, facing the predictor with a trade-off between higher uncertainty and lower visibility. On the other hand, uncoupling feedback loops from one another and from quickly changing environmental variables can lead to linear complexity regimes in which individual task subsystems achieve self-stabilization. They can be modelled by P-hard computational chains, and the problem of predicting their evolution is also P-hard. Similarly, depending on the network structure of the linked activity sets comprising Thompson's mediating technologies, we can have either computationally simple (P-hard) or complex (NP-hard) instantiations of the overall organizational activity sets (Rivkin 2000). Dense
Boolean networks lead to instantiations of NP-hard computational processes. Sparse ones lead to instantiations of P-hard computational processes. Structural uncoupling can lead to problem-space complexity reduction (from NP to P) when such structuration reduces the average link density in the network of coupled elementary activity sets. Finally, long-linked technologies do not of themselves constrain the computational complexity of the overall organizational production task: The P/NP taxonomy allows us to conceptualize an additional boundary between "short-coupled" (computationally simple) and "truly long-coupled" (computationally hard) production tasks, and to look for organizational behavior within and at the resulting boundaries. Thus, the framework contributes to the research tradition based on Thompson's taxonomy a new distinction within one of his production task classes, based on the complexity class of the overall production function of the organization.
Our approach is—on the other hand—similar to Thompson's approach in that it rests on the implicit assumption that organizations act to minimize the resulting uncertainty associated with their activity sets: Complexity coping or complexity management can be understood as uncertainty coping or uncertainty management if we realize that noise (small variances in inputs) is linearly amplified in computationally simple production tasks and exponentially amplified in computationally difficult production tasks (Cormen et al. 1993). As we pointed out before, one can replace the requirement for deterministic predictions with one for statistical predictions, but noise propagation down NP-hard computational chains quickly broadens the variance of the resulting predictions and forces a choice between the Scylla of very short prediction time horizons and the Charybdis of high-variance predictions. Thus, intractability at the computational level implies a sort of fog at the level of strategic and operational planning. The management of intractable problems can also be understood as the management of uncertainty. The design of organizational structures that deal with intractability is part of the larger problem of the design of organizational structures that minimize uncertainty-related production and transaction costs (Williamson 1985).
We would like to highlight the importance of this work for the ontological and epistemological dimensions of organizational complexity studies. We would like to distinguish between a structural view of complexity that views complexity as a property of a system (Simon 1996) and a cognitive view of complexity that views complexity as a property of the model of the system that either the observer or the actor uses in order to represent the system for explanatory or predictive purposes. Taking the cognitive view of complexity has two key advantages: It allows us to be psychological realists about modeling agents (whose behaviors turn on their
perceptions and beliefs, not necessarily on those of the researcher), and it is congenial to the use of new and useful tools from computation theory for the analysis of complexity. Together these tools provide an enabling new "technology" for the study of organizational complexity. By representing production tasks by the algorithms that most effectively and efficiently represent them, we can analyze the activities of an organization in terms of the computational complexity of its production tasks. By making assumptions about the way in which decision makers incorporate insights about complexity into their deliberations, we can generate testable propositions about the effects of organizational complexity on organizational behavior. The above analysis can be seen as a reasoned argument—at the epistemological and ontological levels—for the incorporation of cognitive features of agents into a model of organizational complexity.
We end with a suggestion for the normative use of the computational complexity measure we introduced for the problem of organizational design and task design. If the algorithmic approach to the representation of organizational tasks is accepted and begins to be used as a planning tool, then it becomes possible for strategic decision makers to choose task structures (or organizational structures) that minimize computational complexity within each complexity class (P, NP) and to choose among task partitions those that minimize the computational complexity of the entire organizational production function. Computational complexity measures can supply a metric for discriminating among different task structures that is analogous to "Occam's Razor" (minimum informational complexity) in providing a decision metric among different models and theories in both basic and applied scientific reasoning. Complexity-wise evaluations of organizational tactics, maneuvers, and restructuring moves (considered as production tasks and represented algorithmically) can give managers valuable a priori information about the costs of such tactics, moves, and maneuvers, costs that were previously kept hidden by the unavailability of a measure for their quantification.
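A sketch of how such a metric might be used as a planning tool, under the same illustrative 2^n cost assumption used earlier (and with invented candidate partitions), is to enumerate admissible task partitions and select the one with the lowest estimated computational complexity.

```python
def np_hard_cost(n_inputs):
    # Same working assumption as before: NP-hard cost ~ 2^n operations.
    return 2 ** n_inputs

def partition_cost(partition):
    # Parallel NP-hard subtasks plus an NP-hard integration task with one
    # input per subtask (a single block means no integration step).
    subtotal = sum(np_hard_cost(n) for n in partition)
    return subtotal + (np_hard_cost(len(partition)) if len(partition) > 1 else 0)

candidate_partitions = [[8], [4, 4], [2, 2, 2, 2], [2, 2, 4], [1] * 8]
best = min(candidate_partitions, key=partition_cost)
print(best, partition_cost(best))   # the complexity-minimizing task structure
```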
Acknowledgments
The authors would like to acknowledge the financial support of the Strategic Management Group at the Rotman School of Management, University of Toronto. They would also like to acknowledge the helpful critical comments of Marshall Scott Poole and two anonymous reviewers, whose joint contributions significantly improved the substance and form of this paper. The authors acknowledge the helpful editorial and research assistance of Stewart Melanson of the Rotman School of Management, University of Toronto.
Appendix
Following Gödel (see Putnam 1985), we would like to assign natural numbers to each proposition (made up of functions, variables, constants, and particles such as connectives and
quantifiers), such that to every proposition there corresponds exactly one number. Gödel proceeds by assigning the first 10 natural numbers to the constant signs (∼, ∧, →, ∃, =, the comma, 0, s, ...), one prime number greater than 10 to each numerical variable, the square of a prime number greater than 10 to each variable denoting a sentence, and the cube of a prime number greater than 10 to each variable denoting a predicate. Gödel then proposed building Gödel numbers for sentences made up of various variables and constants by writing down the Gödel numbers of each individual variable or constant and then concatenating them to form the Gödel number for the entire proposition by the following rule: g(a_1 a_2 a_3 ... a_p) = 2^g(a_1) × 3^g(a_2) × 5^g(a_3) × ... × p_p^g(a_p), where the numbers 2, 3, 5, ..., p_p are the first p prime numbers. Because prime factors are the building blocks of Gödel numbers and each variable begets a distinctive Gödel number, two different formulas cannot have the same Gödel number. If we want to discover whether or not a particular number is a Gödel number, and which formula or proposition it is the Gödel number of, we can factor the number into powers of its prime factors, arrange the prime factors in increasing order, and thus decode the formula. Thus, the Gödel numbering system represents a mirroring of any set of logically connected propositions into the natural numbers.
If we let G(φ) represent the Gödel number associated with an arbitrary proposition φ, then the question about the provability of φ turns into a question about the computability of G(φ). If G(φ) turns out to be computable by a function M, then that function must be Turing-computable (Boolos and Jeffrey 1993). Moreover, a function is Turing computable iff it is recursive. Therefore, G(φ) is computable if the function M that computes it is provably recursive. Now, if we can show that there exists at least one recursive function that cannot be proved to be recursive, then we will have shown that there exists a proposition with a Gödel number whose computability cannot be demonstrated, and therefore that there exists at least one undecidable proposition in any logical system that can be encoded by arithmetic. To do so, we follow Putnam (1985) and let M represent a procedure that proves or disproves the proposition "function m is recursive," for any arbitrary function m. Next, define the matrix function D such that D(n, k) represents the kth value of the nth function that was proved by M to be computable, plus one. D will be recursive. However, if M can prove the proposition "D is recursive," then D should be one of the functions listed in D, in which case we obtain the contradiction D(n, k) = D(n, k) + 1. Hence, M cannot prove that D is recursive, although we can just see that it is. Therefore, there exists at least one true proposition whose Gödel number is not computable, and therefore whose provability cannot be established. It follows that there exists at least one unprovable proposition in a logical system whose propositions can be mirrored in the natural numbers, and therefore that no such logical system can be complete.
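The encoding and decoding steps of the numbering scheme can be made concrete with a short sketch. The symbol codes below are invented for the illustration (they are not Gödel's or Putnam's actual assignment); the essential point is that, because concatenation uses powers of successive primes, the encoding can be inverted by factoring.

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]   # enough for short illustrative formulas

# Hypothetical symbol codes standing in for Gödel's assignment of numbers to signs.
CODE = {"~": 1, "v": 2, "->": 3, "E": 4, "=": 5, "0": 6, "s": 7, "(": 8, ")": 9}
DECODE = {v: k for k, v in CODE.items()}

def godel_number(symbols):
    """Encode a symbol sequence as prime_1^code(s_1) * prime_2^code(s_2) * ..."""
    g = 1
    for p, sym in zip(PRIMES, symbols):
        g *= p ** CODE[sym]
    return g

def decode(g):
    """Invert the encoding by dividing out each prime in turn and counting exponents."""
    symbols = []
    for p in PRIMES:
        exponent = 0
        while g % p == 0:
            g //= p
            exponent += 1
        if exponent == 0:
            break
        symbols.append(DECODE[exponent])
    return symbols

g = godel_number(["0", "=", "0"])    # 2^6 * 3^5 * 5^6 = 243,000,000
print(g, decode(g))                  # one number <-> one formula
```

Decoding by prime factorization is what guarantees that each number corresponds to at most one formula, the "mirroring" property used in the argument above.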
References
Abelson, H., G. Sussman. 1993. The Structure and Interpretation of Computer Programs. MIT Press, Cambridge, MA. Anderson, P. C. 1999. Complexity theory and organization science. Organ. Sci. 10(3). Back, T. 1996. Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York.
Baldwin, C., K. Clark. 2000. Design Rules I: The Power of Modularity. MIT Press, Cambridge, MA.
Hayek, F. 1945. The uses of knowledge in society. Amer. Econom. Rev. 36 1–13.
Becker, M. 2003. The concept of routines twenty years after Nelson and Winter (1982): A review of the literature. DRUID Working Paper 03-06.
Kauffman, S. 1993. The Origins of Order. Oxford, New York.
Benjaafar, S., M. Sheikhzadeh. 1997. Scheduling policies, batch sizes and manufacturing lead times. IIE Trans. 29 159–166. Bonaccorsi, A., C. Rossi. 2003. Why open source software can succeed. Res. Policy 32 1243–1258. Boolos, G., R. Jeffrey. 1993. Computability and Logic. MIT Press, Cambridge, MA. Bruderer, E., J. Singh. 1996. Organizational evolution, learning and selection: A genetic-algorithm-based model. Acad. Management J. 39 1322–1349. Burgelman, R. A. 1983. A model of the interaction of strategic behavior, corporate context and the concept of strategy. Acad. Management Rev. 8 61–70. Bylander, T., D. Allemang, M. C. Tanner, J. Josephson. 1991. The computational complexity of abduction. Artificial Intelligence 49 125–151. Casti, J. 1991. Reality Rules: Picturing the World in Mathematics, Vol. 2. Wiley Interscience, New York. Cohen, M. D. 1999. Commentary on the Organization Science Special Issue on Complexity. Organ. Sci. 10(3) 373–376. Cormen, T., C. E. Leiserson, R. L. Rivest. 1993. Introduction to Algorithms. MIT Press, Cambridge, MA. Cowan, D. A. 1986. Developing a process model of problem recognition. Acad. Management Rev. 11 763–776. Cusumano, M. 2000. Competing on Internet Time: Lessons from Netscape and Its Battle With Microsoft. Scribner Books, New York. Cusumano, M., R. Selby. 1995. Microsoft Secrets: How the World’s Most Powerful Software Company Creates Technology, Shapes Markets and Manages People. Touchstone Books, New York. Cyert, R. M., J. G. March. 1963. A Behavioral Theory of the Firm. Prentice Hall, New York. Darley, V., S. Kauffman. 1999. Natural rationality is bounded. Harvard University Department of Applied Engineering Sciences mimeo. Deshmukh, A. V., J. J. Talavage, M. M. Marash. 1998. Complexity in manufacturing systems, Part I: Analysis of static complexity. IIE Trans. 30 645–655. Garey, M. R., D. S. Johnson. 1979. Computers and Intractability: A Guide to the Theory of NP -Completeness. W. H. Freeman, New York. Gigerenzer, G. 2003. Bounded rationality: The study of smart heuristics. D. Kochler, N. Harvey, eds. Handbook of Judgment and Decision Making. Blackwell, Oxford, U.K. Gigerenzer, G., R. Selten. 2000. Bounded Rationality. MIT Press, Cambridge, MA. Gilboa, I. 1989. Iterated dominance: Some complexity considerations. Games Econom. Behavior. 1 15–23. Gintis, H. 2000. Game Theory Evolving. Princeton University Press, Princeton, NJ. Goldstein, D., G. Gigerenzer. 2002. Models of ecological rationality: The recognition heuristic. Psych. Rev. 109 75–90. Hamming, J. 1987. Coding and Information Theory. J. Wiley and Sons, New York.
Knott, A., B. McKelvey. 1999. Nirvana efficiency: A comparative test of residual claims and routines. J. Econom. Behavior Organ. 38 365–383. Koput, K. W. 1997. A chaotic model of innovative search. Organ. Sci. 8 528–542. Koza, P. 1995. An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA. Kremer, M. 1993. The O-Ring theory of economic development. Quart. J. Econom. 108 551–575. Levitt, B., J. G. March. 1988. Organizational learning. Annual Rev. Sociology 67 23–45. Lyles, M., I. Mitroff. 1980. Organizational problem formulation. Admin. Sci. Quart. 25 102–120. March, J. G., H. A. Simon. 1958. Organizations. John Wiley, New York. Mas-Collel, A., M. Whinston, J. Green. 1995. Microeconomic Theory. Oxford University Press, New York. McKelvey, B. 1999. Avoiding complexity catastrophe in coevolutionary pockets. Organ. Sci. 10(3) 294–321. Miller, R. A., J. E. Pople, J. D. Myers. 1982. INTERNIST-1: An experimental computer-based diagnostic consultant for general internal medicine. New England J. Medicine 307 468–476. Mintzberg, H. 1994. The Rise and Fall of Strategic Planning. Free Press, New York. Mintzberg, H. A., D. Raisinghani, A. Theoret. 1976. The structure of unstructured decision processes. Admin. Sci. Quart. 21 246–275. Moldoveanu, M. C. 2003. Do strategic managers play chess or Nintendo? Working paper, Rotman School of Management, University of Toronto. Moldoveanu, M. C., N. Nohria. 2002. Master Passions: Emotions, Narrative and the Development of Culture. MIT Press, Cambridge, MA. Moldoveanu, M. C., J. Singh. 2003. Use evolutionary logic as an explanation-generating engine for the study of organizational phenomena. Strategic Organ. 1 439–449. Nelson, R. E., S. G. Winter. 1982. An Evolutionary Theory of Economic Change. Harvard University Press, Cambridge, MA. Ott, E. 1993. Chaos in Dynamical Systems. Cambridge University Press, New York. Pearl, J. 1987. Distributed revision of composite beliefs. Artificial Intelligence 33 173–215. Perrow, C. 1984. Complex Organizations. Norton, New York. Popper, K. R. 1959. The Logic of Scientific Discovery. Routledge, London, U.K. Porter, M. 1980. Competitive Advantage. Free Press, New York. Pounds, W. 1969. The process of problem finding. Indust. Management Rev. 11 1–19. Putnam, H. 1985. Reflexive reflections. Erkenntnis 22 143–153. Rivkin, J. 2000. Imitation of complex strategies. Management Sci. 46 824–844. Schwenk, R. 1984. Cognitive simplification processes in strategic decision making. Strategic Management J. 5 111–128. Simon, H. A. 1947. Administrative Behavior. Macmillan, New York.
Simon, H. A. 1962. The architecture of complexity. Reprinted in H. A. Simon. 1996. The Sciences of the Artificial. MIT Press, Cambridge, MA.
Simon, H. A. 1996. The Sciences of the Artificial. MIT Press, Cambridge, MA.
Slater, R. 2003. Eye of the Storm: How Chambers Steered Cisco Through the Technology Collapse. HarperCollins, New York.
Sterman, J., J. Witemberg. 1999. Path dependence, competition and succession in the dynamics of scientific revolutions. Organ. Sci. 10(3) 322–341.
Thompson, J. D. 1967. Organizations in Action: Social Science Bases of Administrative Theory. McGraw-Hill, New York.
Toulmin, S. 1983. The Uses of Argument, 2nd ed. Cambridge University Press, New York.
Traub, J., H. Wozniakowski. 1980. Theory of Optimal Algorithms. J. Wiley, New York.
Tversky, A., D. Kahneman. 1980. Causal schemas in judgements under uncertainty. M. Fishbein, ed. Progress in Social Psychology. Erlbaum, Hillsdale, NJ.
von Krogh, G., E. von Hippel. 2003. Editorial: Special issue on open source software development. Res. Policy 32 1149–1157.
Weick, K. 1995. Sensemaking in Organizations. Sage, Beverly Hills, CA.
Wheelwright, S., K. Clark. 1992. Revolutionizing Product Development. Free Press, New York.
Williamson, O. E. 1985. The Economic Institutions of Capitalism. Free Press, New York.
Wolfram, S. 2002. A New Kind of Science. Wolfram Publishing, New York.
Zajac, E., M. Bazerman. 1991. Blind spots in industry and competitor analysis. Acad. Management Rev. 16 37–56.