On centralized resource utilization and its reallocation by using DEA
Cecilio Mar-Molinero, Diego Prior, Maria-Manuela Segovia & Fabiola Portillo
Annals of Operations Research, ISSN 0254-5330, Volume 221, Number 1
Ann Oper Res (2014) 221:273–283, DOI 10.1007/s10479-012-1083-8
Published online: 18 February 2012 © Springer Science+Business Media, LLC 2012
Abstract  The standard DEA model allows each DMU to set its own priorities for the inputs and outputs that form part of the efficiency assessment. In the case of a centralized organization with many outlets, such as an education authority that is responsible for many schools, it may be more sensible to operate in the most efficient way under a common set of priorities for all DMUs. The centralized resource allocation model does just this. We show that the centralized resource allocation model can be substantially simplified, we interpret the simplifications, and we show how the model works using real data on Spanish public schools. Apart from finding the best way to reallocate resources among the schools, the most desirable operating unit emerges as a by-product of the estimation. This is useful information when planning new schools.

Keywords  DEA · Production planning · Efficiency
C. Mar-Molinero: Kent Business School and Universitat Autonoma de Barcelona, Barcelona, Spain
D. Prior (corresponding author): Universitat Autonoma de Barcelona and IESEG School of Management, Barcelona, Spain; e-mail: [email protected]
M.-M. Segovia: Universidad Pablo de Olavide, Seville, Spain
F. Portillo: University of La Rioja, Logroño, Spain

1 Introduction

Under the standard DEA model, each Decision Making Unit (DMU) sets its own priorities. This is reasonable when DMUs operate in a decentralized regime. Depending on the objectives the units should pursue, DMUs could be considered to be activity, investment, cost, revenue, or profit centers.1 For all of these types, DEA models exist to assess to what extent DMUs are meeting their specific targets.2

The problem appears when a group of units is under the central control of a decision-maker who would like to see a common set of priorities operating over the whole system. Obviously, in these circumstances, standard DEA models do not make much sense. Examples of centralized decision-making, where all operating units would be expected to behave in the same way, are the branches of a bank, the schools in a local authority, and the organization of a local service, such as refuse collection, under the responsibility of a common council. Take, for example, schools: one may ask why a teacher should be valued differently in two different schools when doing the same job, in the same way, for the same education authority. It would be much more reasonable to impose the same model on all of the operating units under the same decision-maker. This is one of the advantages of centralized DEA models.

Previous literature on centralized assessment takes different perspectives, as can be seen in Table 1. Apart from imposing a common set of weights, the articles dealing with weak centralized management assume that any input excess of inefficient units can be reallocated among the efficient units. They also assume that there could be inflexibilities, which would make it difficult to reallocate some outputs and inputs among the components of the group. On the contrary, the articles formalizing strong centralized management accept the reallocation of the majority of inputs and outputs among the complete sample of units in the group (not only the inefficient but also the efficient ones) in order to optimize the overall performance of the whole system.

In this paper, we interpret the Lozano and Villa (2004) model, propose simplifications and extensions, and reinterpret what they define as variable returns to scale (VRS). Our proposal belongs to the strong centralized management stream, as it may well be that, at the optimum, some units disappear because reallocating their inputs to other units improves the overall efficiency of the group. This is an extreme case that, to our knowledge, has not been considered in previous works.

This introduction is the first section of the paper. The second section is concerned with the model and its properties. The paper continues with an example using data from public schools and ends with a conclusion.
2 The model

Lozano and Villa (2004) suggest a variety of models, but here only one of them will be discussed: "Model Phase I/Radial/Input-Oriented." This model can be formulated in the envelopment form and in the ratio form. In order to make the discussion easier to follow, we will reproduce both versions here, starting with the ratio formulation. The notation of the original paper will be preserved.
1 The characteristics and the key variables for these specific centers are detailed in Kaplan and Atkinson (1998).
2 See Färe and Primont (1995) for the different representations of the technology that characterize input- or output-oriented technical efficiency, cost or revenue efficiency, and profit efficiency.
Table 1 Literature on group efficiency and reallocation

Färe et al. (2000). Type of reallocation: assessment of the possible reduction in inputs by a complete reallocation of production among individual firms given their current capacities; output is regulated and current capacities cannot be exceeded. Variables: panel data on annual activity for nine vessels operating between 1987 and 1990; data include one output, two variable inputs (days at sea and man-days), one fixed factor (vessel characteristics), and one allocable factor (stock abundance).

Lozano and Villa (2004). Type of reallocation: reallocations of inputs and outputs are introduced in the complete sample; the number of units in the group remains constant. Variables: example with two inputs, two outputs, and ten DMUs; 62 glass-recycling municipalities.

Gimenez-Garcia et al. (2007). Type of reallocation: reallocations of inputs and outputs are introduced only in inefficient units; the number of units in the group remains constant. Variables: dataset of 54 restaurant locations belonging to a Spanish fast-food chain.

Li and Ng (1995). Type of reallocation: reallocation of inputs and outputs among units is allowed; the number of units in the group remains constant. Variables: dataset of 20 hospitals, members of the Hospital Authority in Hong Kong (government-aided hospitals); dataset of 26 state-owned enterprises in the textile industry.

Athanassopoulos (1995). Type of reallocation: reallocation of inputs and outputs among local authorities is allowed. Variables: dataset consisting of the largest 62 local authorities in Greece with a population over 20,000.

Asmild et al. (2009). Type of reallocation: reallocations of inputs and outputs are introduced in inefficient units; non-transferable outputs and non-discretionary variables can be introduced; the number of units in the group remains constant. Variables: empirical dataset consisting of 16 units of a public service organization that are controlled by a central manager.

Nesterenko and Zelenyuk (2007). Type of reallocation: reallocations of inputs and outputs are introduced in inefficient units; non-transferable variables could exist. Variables: simulated data (two inputs, two outputs) to illustrate the measure of group efficiency.
The formulation of "Model Phase I/Radial/Input-Oriented" is as follows:

$$\max \; \frac{\sum_{k=1}^{p} \nu_k \sum_{r=1}^{n} y_{kr} + \sum_{r=1}^{n} \xi_r}{\sum_{i=1}^{m} u_i \sum_{j=1}^{n} x_{ij}}$$

$$\text{s.t.} \quad \frac{\sum_{k=1}^{p} \nu_k y_{kj} + \xi_r}{\sum_{i=1}^{m} u_i x_{ij}} \le 1, \quad \forall j, \forall r,$$

$$u_i \ge 0, \quad \nu_k \ge 0, \quad \xi_r \text{ free}, \tag{1}$$
where $\nu_k$ is the weight associated with output $k$, of which there are $p$; $u_i$ is the weight associated with input $i$, of which there are $m$; $y_{kr}$ is the quantity of output $k$ generated by unit $r$; $x_{ij}$ is the quantity of input $i$ used by unit $j$; $\xi_r$ is a VRS variable associated with unit $r$; and there are $n$ operating units in the system.

In this model, the objective function values all outputs, irrespective of the unit that generates them, at the same price, $\nu_k$. In this way, the numerator of the fraction gives the total value of the outputs of the system, while the denominator performs the same function with inputs, each input being weighted by $u_i$, irrespective of the unit that uses it. In this sense, the whole organization is treated as a macro unit that uses all of the inputs available in all of the units to generate all of the outputs that the system generates, irrespective of the unit in which they are produced. The constraints are the usual ones in DEA. For each unit, the inputs that are used are valued at the overall price, $u_i$, and the outputs that are produced are also valued at the overall price, $\nu_k$. Note that each constraint carries two sub-indices, $j$ and $r$, reflecting the fact that cross-efficiencies are computed under the returns to scale associated with every unit in the system. We will show that the model can be simplified.

This model cannot be interpreted in the usual DEA way. We are no longer asking every unit to choose the weights that make it look as good as possible, under the constraint that the remaining units should be assessed with the same weights. There is a subtle change: the system, as a global unit, finds the weights that present it in the best possible light and assesses the performance of individual units under these weights.

The equations for the envelopment formulation, also given by Lozano and Villa (2004), are as follows:

$$
\begin{aligned}
\min \; & \theta \\
\text{s.t.} \; & \sum_{r=1}^{n} \sum_{j=1}^{n} \lambda_{jr} x_{ij} \le \theta \sum_{j=1}^{n} x_{ij}, \quad \forall i \\
& \sum_{r=1}^{n} \sum_{j=1}^{n} \lambda_{jr} y_{kj} \ge \sum_{r=1}^{n} y_{kr}, \quad \forall k \\
& \sum_{j=1}^{n} \lambda_{jr} = 1, \quad \forall r \\
& \lambda_{jr} \ge 0, \quad \theta \text{ free}.
\end{aligned} \tag{2}
$$
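To make the dimensionality of formulation (2) concrete, the following is a minimal computational sketch (our own illustration, not code from the paper) that builds and solves the LP with scipy's generic linprog solver. The data layout (numpy arrays X of shape n×m and Y of shape n×p, one row per unit) and the function name are assumptions introduced here for illustration.

```python
# Minimal sketch (illustrative, not the authors' code): centralized DEA,
# Lozano-Villa envelopment program (2), with n^2 lambda variables plus theta.
import numpy as np
from scipy.optimize import linprog

def centralized_dea_lv2004(X, Y):
    """X: (n, m) inputs, Y: (n, p) outputs, one row per unit (assumed layout)."""
    n, m = X.shape
    p = Y.shape[1]
    nvar = n * n + 1                      # lambda_{jr} (index j*n + r) plus theta (last)
    c = np.zeros(nvar); c[-1] = 1.0       # minimize theta

    A_ub, b_ub = [], []
    for i in range(m):                    # inputs: sum_{j,r} l_{jr} x_{ij} <= theta * sum_j x_{ij}
        row = np.zeros(nvar)
        row[:-1] = np.repeat(X[:, i], n)  # coefficient of lambda_{jr} is x_{ij}, for every r
        row[-1] = -X[:, i].sum()
        A_ub.append(row); b_ub.append(0.0)
    for k in range(p):                    # outputs: sum_{j,r} l_{jr} y_{kj} >= sum_r y_{kr}
        row = np.zeros(nvar)
        row[:-1] = -np.repeat(Y[:, k], n)
        A_ub.append(row); b_ub.append(-Y[:, k].sum())

    A_eq, b_eq = [], []
    for r in range(n):                    # one VRS constraint per run r: sum_j lambda_{jr} = 1
        row = np.zeros(nvar)
        row[np.arange(n) * n + r] = 1.0
        A_eq.append(row); b_eq.append(1.0)

    bounds = [(0, None)] * (n * n) + [(None, None)]   # lambdas >= 0, theta free
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=bounds, method="highs")
    return res.fun, res.x[:-1].reshape(n, n)          # theta, matrix of lambda_{jr}
```

With 54 units, for example, this already means 54² + 1 = 2,917 decision variables, which is the quadratic growth discussed below.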
This is not the standard BCC model, since there are two summation signs in the input and output constraints. In fact, this model can easily be obtained from the BCC formulation. Under the standard BCC approach to DEA, it is necessary to run the model for each unit. The Lozano and Villa (2004) formulation just adds up all of the individual BCC equations, for all unit runs, assuming a common value for $\theta$. The end result is that the right-hand side of the input constraints contains all the inputs available to the system, and that the right-hand side of the output constraints contains all the outputs that it has produced. A further consequence is that there is a VRS constraint for each unit.

It is worth noting that the number of unknowns in this formulation, excluding slack variables, is $n^2 + 1$, as each unit, of which there are $n$, creates $n$ lambdas, and the overall efficiency, $\theta$, is also an unknown. The number of unknowns to be estimated therefore increases as a quadratic function of the number of units, so problems with a relatively small number of units become large quite quickly.

There is an interpretation for the equations above. The output constraints indicate that we would like to obtain at least the total amount of outputs that are currently being obtained from the system. The input constraints can be read as if we were prepared to reduce inputs in order to achieve the amount of outputs already available. This reduction would be done keeping the proportion in which the inputs are used.3 The returns to scale constraints attempt to keep the size of units within the observed range of values.

Two questions will be addressed. The first question is how the model can best be interpreted in logical terms. The second question is how the model can be simplified and generalized. We will start with the second question: are there any valid simplifications to this model?

2.1 Simplifications

Take the constraints in the ratio form of the model (1). These can be rewritten as follows:

$$\sum_{k=1}^{p} \nu_k y_{kj} - \sum_{i=1}^{m} u_i x_{ij} + \xi_r \le 0, \quad \forall j, \forall r.$$
In this equation, $y_{kj}$ and $x_{ij}$ are data and, therefore, fixed in advance. The optimization procedure calculates $\nu_k$ and $u_i$. Thus, for a given DMU, $j$, we can define

$$\sum_{k=1}^{p} \nu_k y_{kj} - \sum_{i=1}^{m} u_i x_{ij} = k_j,$$

and the equation becomes

$$k_j + \xi_r \le 0, \quad \forall j, \forall r.$$

Hence, for every DMU, $j$, there could be up to $n$ values of $\xi_r$. This would be perfectly compatible with the equations; but is it possible for each unit to be associated with a variety of $\xi_r$? In that case, a unit could operate at the same time under a variety of variable returns to scale, something that does not make sense. It follows that, for a given $j$, all $\xi_r$ are equal. Since this happens for any value of $j$, it further follows that all $\xi_r$ are equal across the system. We conclude that there is a single value of $\xi_r$, which we may simply call $\xi$.

Thus, in the constraints of the ratio model, the sub-index $r$ can be dropped. Doing this creates, for each unit $j$, $n$ identical constraints. Only one such constraint is needed for each unit, and the remaining $n - 1$ constraints can be dropped from the formulation. The objective function can also be simplified. It becomes

$$\max \; \frac{\sum_{k=1}^{p} \nu_k \sum_{r=1}^{n} y_{kr} + n\xi}{\sum_{i=1}^{m} u_i \sum_{j=1}^{n} x_{ij}}.$$

3 Alternative orientations to what Lozano and Villa (2004) propose are possible. As an example, in Sect. 3, our empirical application is non-radial, as it concentrates on a specific subvector of discretionary inputs.
Duality theory tells us that each constraint in the dual program is associated with a column in the primal program. Saying that dual constraints are not necessary implies that the associated primal variables are not needed either. In fact, if we work backwards, the envelopment form of the model can be simplified to:

$$
\begin{aligned}
\min \; & \theta \\
\text{s.t.} \; & \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta \sum_{j=1}^{n} x_{ij}, \quad \forall i \\
& \sum_{j=1}^{n} \lambda_j y_{kj} \ge \sum_{j=1}^{n} y_{kj}, \quad \forall k \\
& \sum_{j=1}^{n} \lambda_j = n \\
& \lambda_j \ge 0, \quad \theta \text{ free}.
\end{aligned} \tag{3}
$$
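As an illustration of how compact the simplified program is, here is a minimal sketch (again our own illustration, not code from the paper) that solves program (3) with the same generic LP solver; the data layout and function name are assumptions.

```python
# Minimal sketch (illustrative): simplified centralized DEA, program (3),
# with only n lambda variables plus theta.
import numpy as np
from scipy.optimize import linprog

def centralized_dea_simplified(X, Y):
    """X: (n, m) inputs, Y: (n, p) outputs, one row per unit (assumed layout)."""
    n, m = X.shape
    p = Y.shape[1]
    nvar = n + 1                                   # lambda_1..lambda_n, theta last
    c = np.zeros(nvar); c[-1] = 1.0                # minimize theta

    A_ub, b_ub = [], []
    for i in range(m):                             # sum_j l_j x_ij <= theta * sum_j x_ij
        row = np.zeros(nvar)
        row[:n] = X[:, i]
        row[-1] = -X[:, i].sum()
        A_ub.append(row); b_ub.append(0.0)
    for k in range(p):                             # sum_j l_j y_kj >= sum_j y_kj
        row = np.zeros(nvar)
        row[:n] = -Y[:, k]
        A_ub.append(row); b_ub.append(-Y[:, k].sum())

    A_eq = [np.r_[np.ones(n), 0.0]]                # sum_j lambda_j = n
    b_eq = [float(n)]

    bounds = [(0, None)] * n + [(None, None)]      # lambdas >= 0, theta free
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=bounds, method="highs")
    return res.fun, res.x[:n]                      # theta and the lambda vector
```

According to the equivalence argued above, running this sketch and the earlier one on the same data should return the same value of θ; the λ vector returned here can be read directly as how many times each unit is "cloned" in the optimal reallocation.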
This formulation only contains $n + 1$ unknown decision variables, $\lambda_j$ and $\theta$. This is an important simplification with respect to the Lozano and Villa (2004) formulation. We will now proceed to interpret the model.

2.2 Interpretation: cloning the best DMU

The simplified model has a very similar structure to the BCC model, but there are some differences. The similarities are obvious: the left-hand side of the input constraints and the left-hand side of the output constraints remain unchanged with respect to the BCC; the left-hand side of the VRS constraint is also unchanged, and the objective function is the same. Thus, the piece-wise convex VRS technology is preserved. The differences with BCC appear on the right-hand side of the constraints. The right-hand side of the input constraints contains the total amount of inputs available to the system, as in the original Lozano and Villa (2004) model. The right-hand side of the output constraints is also the same as in the Lozano and Villa (2004) model, containing the total amount of output produced by the system. The interpretation of the input and output constraints remains unchanged: the system as a whole would like to produce at least as much as the current level of outputs, and it will do this by reducing the total amount of inputs used.

The constraint that replaces the standard VRS constraint is very interesting. Let us imagine the unlikely situation that the system is already operating under optimal conditions and using common weights. In this case, the best way to produce the outputs is the current one, implying that all of the $\lambda_j$ should be equal to one; the sum of the $\lambda_j$ automatically becomes equal to the number of units in the system. It is easy to conjecture what will happen if the system is not operating under optimal conditions: the most efficient unit will be identified and "cloned" a certain number of times. The relative proportions under which these units operate will, in general, not be the same as the relative proportions under which the system as a whole operates, and other units will be used in order to make up the acceptable balance of inputs and outputs. This is, in fact, the interpretation given by Lozano and Villa (2004), although they do not discuss the issue of VRS. The formulation has a further consequence: it may be possible to identify the most efficient units, as these are the ones "cloned," but it is not possible to know for each unit the returns to scale under which it is operating.

We should think further about modeling issues. Is it really necessary for the sum of the $\lambda_j$ to be equal to the number of units? This appears to be an unnecessarily tight constraint.
One can indeed think of situations where a solution to the problem could be found with a smaller number of units than the current one. The way to model such a situation is straightforward: all that needs to be done is to replace $n$ on the right-hand side of the lambda restriction of linear program (3) with a smaller number, much as is done by Lozano and Villa (2005). An additional idea is that we could operate with more units than we currently use, in order to have smaller but better balanced decision units. It could even be desirable to define a standard unit size that is reproduced as many times as necessary. An example of a situation where this would be desirable is school management; much debate has taken place concerning the optimal size of a school. The optimal size and balance of a school could be deduced from this model, and guidelines could be produced for the building of new schools or for the reorganization of existing schools. We now offer an example of the simplified formulation and compare it with the original formulation.
3 Data and application

In order to illustrate our proposal, a dataset of 54 secondary public schools located in Barcelona was used (recent efficiency studies in the education sector include Ouellette and Vierstraete (2010) and Johnson and Ruggiero (2011)). Table 2 contains the values of the three discretionary inputs (teaching hours per week, $x_1$; specialized teaching hours per week, $x_2$; capital investments in the last decade, $x_3$), one non-discretionary input (the total number of students present at the beginning of the academic year), and two outputs (number of students passing their final assessment, $y_1$, and number of students continuing their studies at the end of the academic year, $y_2$). As the number of students starting the course is a non-discretionary variable (the public network has to provide places for all students interested in continuing their studies), program (3) has to be adapted to work in the presence of both discretionary ($di$) and non-discretionary ($ndi$) inputs:

$$
\begin{aligned}
\min \; & \theta \\
\text{s.t.} \; & \sum_{j=1}^{n} \lambda_j x_{dij} \le \theta \sum_{j=1}^{n} x_{dij}, \quad \forall di \\
& \sum_{j=1}^{n} \lambda_j x_{ndij} \le \sum_{j=1}^{n} x_{ndij}, \quad \forall ndi \\
& \sum_{j=1}^{n} \lambda_j y_{kj} \ge \sum_{j=1}^{n} y_{kj}, \quad \forall k \\
& \sum_{j=1}^{n} \lambda_j = n \\
& \lambda_j \ge 0, \quad \theta \text{ free}.
\end{aligned} \tag{4}
$$
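A minimal sketch of program (4), again our own illustration rather than the authors' code: the discretionary inputs XD are scaled by θ while the non-discretionary inputs XN are only required not to exceed their current total, and the target total for the sum of the lambdas can be passed as a parameter, which also allows the n̂ experiment discussed later in this section. The array layout and names are assumptions.

```python
# Minimal sketch (illustrative): program (4) with discretionary (XD) and
# non-discretionary (XN) inputs, plus an optional target for the sum of lambdas.
import numpy as np
from scipy.optimize import linprog

def centralized_dea_nondiscretionary(XD, XN, Y, n_hat=None):
    """XD: (n, m_d), XN: (n, m_n), Y: (n, p); n_hat defaults to n (program (4))."""
    n = XD.shape[0]
    n_hat = float(n) if n_hat is None else float(n_hat)
    nvar = n + 1                                     # lambdas plus theta (last)
    c = np.zeros(nvar); c[-1] = 1.0

    A_ub, b_ub = [], []
    for i in range(XD.shape[1]):                     # discretionary: <= theta * current total
        row = np.zeros(nvar); row[:n] = XD[:, i]; row[-1] = -XD[:, i].sum()
        A_ub.append(row); b_ub.append(0.0)
    for i in range(XN.shape[1]):                     # non-discretionary: <= current total
        row = np.zeros(nvar); row[:n] = XN[:, i]
        A_ub.append(row); b_ub.append(XN[:, i].sum())
    for k in range(Y.shape[1]):                      # outputs: >= current total
        row = np.zeros(nvar); row[:n] = -Y[:, k]
        A_ub.append(row); b_ub.append(-Y[:, k].sum())

    A_eq = [np.r_[np.ones(n), 0.0]]                  # sum_j lambda_j = n_hat
    b_eq = [n_hat]

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n + [(None, None)], method="highs")
    return (res.fun, res.x[:n]) if res.success else (None, None)

# Hypothetical usage, sweeping n_hat between 0.6n and 1.5n as in Sect. 3:
# for factor in np.arange(0.6, 1.51, 0.1):
#     theta, lam = centralized_dea_nondiscretionary(XD, XN, Y, n_hat=factor * 54)
```

Dropping the equality constraint altogether corresponds to relaxing the sum-of-lambdas restriction, which is how the overall minimum θ reported later in this section could be obtained.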
Basically, we want to compare programs (2) and (4). After running program (2), we obtained the results shown in column (7) of Table 3. The overall efficiency of the group is 0.66, meaning that the outputs of the system could be produced while saving 34% of the discretionary inputs (1 − 0.66 = 0.34).
Table 2 Output and input variables (data corresponding to the year 2008)
(y1 = students passing their final assessment; y2 = students continuing their studies; x1d = teaching hours per week; x2d = specialized teaching hours per week; x3d = capital investments in the last decade; x1nd = students at the beginning of the academic year)

School #   y1       y2       x1d     x2d    x3d   x1nd
1          260.65   378.00   44.00   3.00   8     384.00
2          195.18   213.00   32.01   3.00   18    225.00
3          242.75   429.70   56.98   4.00   84    446.00
4          283.02   350.00   49.50   3.00   39    356.00
5          376.76   650.80   77.50   5.50   61    657.00
6          252.19   429.00   49.40   2.00   56    440.00
7          225.50   247.34   33.15   1.50   43    248.00
8          363.85   364.34   45.90   2.00   36    381.00
9          261.87   272.00   44.37   2.00   24    288.00
10         235.40   251.00   35.49   1.50   51    259.00
11         198.63   223.34   42.00   1.50   46    227.00
12         159.78   248.00   36.96   2.00   2     250.00
13         98.09    193.00   35.20   1.50   55    203.00
14         214.92   219.00   33.60   1.50   32    229.00
15         136.07   269.20   33.80   1.50   54    271.00
16         214.68   346.00   54.39   2.00   33    347.00
17         117.12   196.00   29.00   1.50   7     212.00
18         261.89   334.00   42.40   2.00   47    339.00
19         248.11   329.00   44.88   2.50   28    357.00
20         227.11   257.00   33.48   3.00   4     264.00
21         316.30   350.00   55.48   4.50   20    356.00
22         128.81   233.00   37.44   3.50   13    241.00
23         296.86   365.00   45.90   1.50   49    365.00
24         375.49   483.00   62.40   4.00   78    510.00
25         196.29   315.00   10.80   1.50   31    336.00
26         328.26   371.00   44.80   2.00   27    371.00
27         298.78   496.00   64.40   2.00   54    496.00
28         450.99   459.00   56.98   4.00   69    469.00
29         263.95   376.00   50.94   2.00   9     383.00
30         269.29   297.00   42.90   6.00   47    323.00
31         137.02   211.00   11.39   3.00   24    213.00
32         235.44   331.00   54.60   3.00   37    351.00
33         138.55   201.10   29.70   3.00   29    239.00
34         305.96   465.00   56.98   3.50   35    470.00
35         171.46   304.00   37.80   2.50   9     310.00
36         168.92   251.00   45.90   1.50   9     255.00
37         449.27   453.00   67.76   2.50   40    453.00
38         285.07   350.00   48.83   3.00   65    350.00
39         180.78   335.00   43.52   3.00   87    337.00
40         92.71    170.00   16.02   2.00   58    171.00
41         282.21   361.00   48.07   2.00   17    361.00
42         177.00   275.00   37.44   3.00   37    278.00
43         138.54   180.00   29.20   3.00   37    193.00
44         133.20   214.00   33.80   1.50   12    214.00
45         291.70   328.00   43.20   1.00   79    350.00
46         200.41   332.00   42.99   3.00   73    367.00
47         260.82   275.50   35.98   2.00   64    289.00
48         272.03   371.00   54.00   3.00   57    393.00
49         269.67   367.00   54.00   3.00   17    377.00
50         112.39   219.00   34.97   1.50   25    243.00
51         159.70   265.00   40.00   1.50   83    321.00
52         371.20   526.00   62.88   3.00   48    532.00
53         241.53   279.00   41.40   3.00   18    284.00
54         260.50   368.00   44.00   3.00   10    368.00
Table 3 Results from the application of programs (2) and (4) (n = 54). Columns (2)–(12) report, for each imposed total number of units (optimal n̂, 0.6n, 0.7n, 0.8n, 0.9n, n, 1.1n, 1.2n, 1.3n, 1.4n, 1.5n), the group efficiency and the λ values by school.

(1)             (2)     (3)    (4)    (5)    (6)    (7)    (8)    (9)    (10)   (11)   (12)
                opt. n̂  0.6n   0.7n   0.8n   0.9n   n      1.1n   1.2n   1.3n   1.4n   1.5n
Group effic.    0.64    0.90   0.80   0.71   0.65   0.66   0.71   0.76   0.81   0.88   0.97
λ, school 5     0.00    7.88   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00
λ, school 7     0.00    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.65   0.00   0.00
λ, school 14    0.00    0.00   0.00   0.00   0.00   0.00   6.35   14.10  20.80  25.98  27.44
λ, school 17    0.00    0.00   0.00   0.00   0.00   2.69   2.46   0.00   0.00   2.67   11.18
λ, school 20    0.00    0.00   0.00   0.00   0.00   0.00   0.62   0.85   0.00   0.00   0.00
λ, school 23    0.91    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00
λ, school 25    23.96   0.00   9.73   19.58  24.54  24.26  22.31  20.29  17.15  12.45  4.94
λ, school 26    17.62   0.00   0.00   0.00   9.76   20.02  15.27  8.65   3.22   0.00   0.00
λ, school 28    0.00    8.48   2.68   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00
λ, school 29    7.87    0.00   0.00   0.00   6.82   0.63   0.00   0.00   0.00   0.00   0.00
λ, school 31    0.00    0.00   0.00   0.00   0.00   0.00   0.00   1.38   3.53   4.50   4.99
λ, school 36    0.00    0.00   0.00   0.00   0.00   4.71   0.00   0.00   0.00   0.00   0.00
λ, school 37    0.00    1.12   3.74   8.32   4.40   0.00   0.00   0.00   0.00   0.00   0.00
λ, school 40    0.00    0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   2.90   8.59
λ, school 44    0.00    0.00   0.00   0.00   0.00   1.70   12.40  19.54  24.85  27.11  23.86
λ, school 52    0.00    14.92  21.65  11.45  2.11   0.00   0.00   0.00   0.00   0.00   0.00
λ, school 54    0.00    0.00   0.00   3.85   0.97   0.00   0.00   0.00   0.00   0.00   0.00
Σλ              50.36   32.40  37.80  43.20  48.60  54.00  59.40  64.80  70.20  75.60  81.00
When our simplified model (4) is run, we obtain the same efficiency level, 0.66. Regarding the lambdas, we obtained the values appearing in column (7) ($\sum_j \lambda_j = 54$). It is worth discussing the meaning of this information. In order to globally minimize the discretionary inputs of the system, the most efficient units are cloned. The units to be cloned are 25 (24.26 times), 26 (20.02 times), 36 (4.71 times), 17 (2.69 times), and 44 (1.70 times). This implies the reallocation of inputs and outputs among the 54 schools and would produce, once the inefficiencies are removed, the aforementioned reduction of 34% in the discretionary inputs.
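Reading off which units are "cloned" from a solved λ vector amounts to listing its non-negligible entries. A tiny hypothetical helper (ours, not the paper's) could do this:

```python
import numpy as np

def cloned_units(lam, tol=1e-6):
    """Return {unit number (1-based): lambda value} for non-negligible lambdas."""
    lam = np.asarray(lam)
    return {j + 1: round(float(v), 2) for j, v in enumerate(lam) if v > tol}

# e.g., applied to the column (7) solution of Table 3 this would list
# units 17, 25, 26, 29, 36 and 44 with their respective lambda values.
```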
Fig. 1 Results of the reallocation process
There are situations where the central planner can modify resource allocation by closing the most inefficient units and/or by opening (cloning) new units. When this is the case, both linear programs (2) and (4) are artificially tight because they include a constraint that is not justified (minimizing inputs while maintaining the initial number of units). In our particular case study, we have conducted an experiment aimed at exploring the sensitivity of the solution to this constraint. To do this, program (4) was rerun replacing $n$ with $\hat{n}$, where $\hat{n}$ takes values in the range $\hat{n} = 0.6n$ to $\hat{n} = 1.5n$. The results of this exercise are displayed in Table 3 and Fig. 1.

There are no feasible solutions for $\hat{n} < 32.40$ or $\hat{n} > 81.00$. This means that it is impossible to produce the aggregated level of output with only 32 schools or fewer (in percentage terms, the system requires at least 60% of the original schools) or with more than 81 schools (increasing the original number of schools by 50%). Figure 1 shows that the solution presented in column (7) of Table 3, with $\sum_{j=1}^{n} \lambda_j = 54.00$ and $\theta = 0.66$, is indeed a good solution, but it also shows that if we make $\hat{n} < 54.00$, we can obtain even better solutions. The overall minimum for $\theta$ (0.64) is reached when $\hat{n} = 50.36$ (cloning Unit 25 23.96 times, Unit 26 17.62 times, and Unit 29 7.87 times, as can be seen in column (2)). This solution can be easily obtained by dropping the $\sum \lambda_j$ constraint from linear program (4). It could be argued that Unit 25 is an ideal school for the system, and that if new schools have to be built, an attempt should be made to take School 25 as an example of good practice.
4 Summary and conclusions

The original DEA model studies DMUs one at a time. Its philosophy is that each DMU wants to be seen performing in the best possible way in a decentralized management system. Thus, each DMU is given the flexibility to value inputs and outputs in the way that best suits its "modus operandi". Lozano and Villa (2004) make a significant contribution to the DEA literature by pointing out that this model is unrealistic when all of the units are under the control of a single decision-maker in a centralized management system. They propose a formulation that values inputs and outputs equally, irrespective of the units that use or produce them.

In this paper, we have attempted to interpret the Lozano and Villa (2004) model and to show that it can be substantially simplified. In the original model, the number of unknowns is proportional to the square of the number of units, while in our simplified version, the number of unknowns grows linearly with the number of units. We think that this simplification makes the model easier to implement in many situations; for example, the number of schools under the control of a local authority may be rather large.

We have also argued that the constraint that the sum of the lambdas should be equal to a certain number is unnecessarily restrictive. The link between this constraint and variable returns to scale is lost. This constraint does not serve to compare a unit with a linear interpolation of existing units, since we are no longer assessing individual units. The only purpose of this constraint is to force the number of units used in the final solution, both original and cloned, to a given total. This constraint can be completely relaxed if we wish to discover the optimum size, or it can be given a value decided a priori if we wish to limit the number of components in a strong, centralized management system.
References

Asmild, M., Paradi, J. C., & Pastor, J. T. (2009). Centralized resource allocation BCC models. Omega, The International Journal of Management Science, 37, 40–49.
Athanassopoulos, A. D. (1995). Goal programming & data envelopment analysis (GoDEA) for target-based multi-level planning: allocating central grants to the Greek local authorities. European Journal of Operational Research, 87, 535–550.
Färe, R., & Primont, D. (1995). Multi-output production and duality: theory and applications. Dordrecht: Kluwer Academic.
Färe, R., Grosskopf, S., Kerstens, K., Kirkley, J. E., & Squires, D. (2000). Assessing short-run and medium-run fishing capacity at the industry level and its reallocation. In Microbehavior and macroresults: proceedings of the tenth biennial conference of the International Institute of Fisheries Economics and Trade (IIFET), July 10–14, 2000, Corvallis, Oregon, USA.
Gimenez-Garcia, V. M., Martínez-Parra, J. L., & Buffa, F. P. (2007). Improving resource utilization in multi-unit networked organizations: the case of a Spanish restaurant chain. Tourism Management, 28, 262–270.
Johnson, A. L., & Ruggiero, J. (2011). Nonparametric measurement of productivity and efficiency in education. Annals of Operations Research. doi:10.1007/s10479-011-0880-9.
Kaplan, R. S., & Atkinson, A. A. (1998). Advanced management accounting (3rd edn.). Englewood Cliffs: Prentice Hall International.
Li, S.-K., & Ng, Y. Ch. (1995). Measuring the productive efficiency of a group of firms. International Advances in Economic Research, 1(4), 377–390.
Lozano, S., & Villa, G. (2004). Centralized resource allocation using data envelopment analysis. Journal of Productivity Analysis, 22, 143–161.
Lozano, S., & Villa, G. (2005). Centralized DEA models with the possibility of downsizing. The Journal of the Operational Research Society, 56, 357–364.
Nesterenko, V., & Zelenyuk, V. (2007). Measuring potential gains from reallocation of resources. Journal of Productivity Analysis, 28, 107–116.
Ouellette, P., & Vierstraete, V. (2010). Malmquist indexes with quasi-fixed inputs: an application to school districts in Québec. Annals of Operations Research, 173(1), 57–76.