Journal of the Operational Research Society (2007) 58, 996–1007



© 2007 Operational Research Society Ltd. All rights reserved. 0160-5682/07 $30.00 www.palgrave-journals.com/jors

Technical efficiency, managerial efficiency and objective-setting in the educational system: an international comparison

V Giménez1*, D Prior1 and C Thieme2

1Universitat Autònoma de Barcelona, Barcelona, Spain; and 2Universidad Católica Silva Henríquez, Santiago, Chile

This study uses data envelopment analysis to analyse the efficiency of educational systems in 31 countries. This type of evaluation is of interest both when formulating a model for analysis and when applying such a model empirically. First, the efficiency of an educational system must take into account the students' economic and social background, as this is an environmental factor that decisively influences their performance; this is a highly important aspect, and so we propose a specific evaluative process for it. Secondly, we evaluate the efficiency of educational systems in different countries, an analysis that has few forerunners since the majority of previous research has focused on analysing a single country. The results suggest that, in general, the most efficient management of educational systems can be found in those countries with a Communist past. They also suggest that there is a series of developed countries that, judging by the results obtained, could increase their students' performance with even fewer resources than those currently allocated to their educational systems.

Journal of the Operational Research Society (2007) 58, 996 – 1007. doi:10.1057/palgrave.jors.2602213 Published online 19 July 2006 Keywords: education; data envelopment analysis; efficiency

1. Introduction

As the requirement to implement economic policies aimed at decreasing the deficit and curtailing public expenditure has grown along with the social demand for more and better services, many public administrations all over the world have had to operate more efficiently (Hoxby, 1999). Improving efficiency levels has become an objective in many different government programmes. The educational systems in many countries have not been exempt from this trend; they have been subjected to actions aimed at containing expenses and improving outcomes. One example of this has been the gradual implementation of profound educational reforms, with the introduction, in many countries, of concepts such as 'educational productivity' and 'accountability' (Delannoy, 1998; Harris, 2000). There are two reasons why high priority is placed on education. First, it is believed that a nation's reserve of human capital is an important component when explaining differentials in growth rates, and that education is also an essential factor for promoting equal opportunities among all the members of a society (Hanushek and Kimko, 2000). Second, many countries face high unemployment rates among the young people

*Correspondence: V Giménez, Departament d'Economia de l'Empresa, Universitat Autònoma de Barcelona, Bellaterra (Cerdanyola), Barcelona 8193, Spain.
E-mail: [email protected]

along with the effects of globalization, the alleviation of both of which requires greater competitiveness. In both contexts, policies aimed at improving the performance of the different educational systems have been implemented. As Levin and Kelley (1994) point out, the main question in educational productivity is whether the resources allocated to education are being used effectively. To determine this, there has been growing interest in recent years in evaluating the internal efficiency of schools, especially those operating in the public sector (Mancebón and Bandrés, 1999). Unfortunately, the difficulty of obtaining comparable data and standard metrics of educational output among countries has restricted evaluations to the regional and national level. However, the publication of the results of the Third International Mathematics and Science Study (TIMSS) for 1999 makes it possible to carry out an international comparison along these lines. The evaluation of an educational system should not be limited solely to the knowledge acquired during the school years. Rather, it must also include other equally important skills, attributes and values that favour workplace and social integration: communication and interpersonal skills; respect for the environment; physical fitness; and political, social and personal responsibility (Gray, 1981; Ray, 1991; Thanassoulis and Dunstan, 1994; Rico, 1996; Silva-Portela and Thanassoulis, 2001). However, given the difficulty of aggregating the diverse range of effects produced by the educational production function, the majority of existing studies solely


take into account academic performance (Gray et al, 1986). Despite the limitations of an evaluation based exclusively on academic performance, the need to carry out comparative analyses among countries has led to the emergence of standardized tests of knowledge. In the case of the TIMSS, the available test results only evaluate scientific and mathematical knowledge (Darling-Hammond, 1991). The most significant limitations associated with these tests are that (a) they do not take into account knowledge acquired in the humanities, (b) the other attributes that the educational process adds to students (outcomes) are ignored, and (c) they do not take into account the possibility that different national governments may have disparate objectives. None the less, tests of knowledge are highly useful when attempting to make international comparisons. Their main advantage is that they use a common evaluation criterion for all the countries, a standardization that would be much more complex if non-scientific knowledge were evaluated as well. Another key factor within the educational process is the importance of environmental or contextual factors. This was emphasized by the 1966 Coleman Report, a main conclusion of which was that the resources used by a school could only account for 10% of the academic outcomes obtained by students; the rest could be explained by the characteristics of the social, economic and family environment (Levin and Kelley, 1994). There are a significant number of studies devoted to determining which environmental factors influence student performance (Gray et al, 1986; Jesson et al, 1987; Mayston and Jesson, 1988; Sammons et al, 1993), although there is not yet consensus in the literature on the subject (Bifulco and Bretschneider, 2001). The objectives of this study are to evaluate the technical efficiency and to determine the maximum potential output of the educational systems of 31 different countries.
We understand technical efficiency to be the achievement of maximum academic performance given the available resources and the social and economic conditions obtaining in each country. Nevertheless, once a country's educational system has achieved technical efficiency, it seems reasonable to pose an additional question in the light of the importance of education: could students' academic performance be improved further? The answer to this question is clearly restricted by the environmental conditions and is likely to be associated with the amount and mix of variable resources allocated to the educational system. In other words, what is the maximum potential output of the educational system? The concept of maximum potential output was introduced by Johansen (1968), who specified that 'capacity is the maximum amount that can be produced per unit of time with the existent plant and equipment, provided that the availability of variable factors of production is not restricted'. Johansen's initial proposal was completed by Färe (1984), and Färe et al (1985) generalized the concept to multi-product companies. Further extensions (Färe et al, 1989a, b) relate the maximum potential output to technical change and also to the efficiency


of public organizations where cost minimization is not necessarily an objective. From a political standpoint, the concept of maximum potential output of the educational system is implicit in the European Union documents where the specific objectives of the national educational systems are laid down. Specifically, in the report ‘The Concrete Future Objectives of Education Systems’ (Commission of the European Communities, 2001) it is claimed that the future of the Union—achieving all the aims inherent in the challenge set out in the Lisbon Conclusions, ie, to become the most competitive and dynamic knowledge-based economy in the world capable of sustainable economic growth with more and better jobs and greater social cohesion—requires a solid contribution from the world of education. It requires that education systems can be adapted and developed so as to deliver the basic skills and competences everyone needs in the knowledge society; to make lifelong learning attractive and rewarding; and to reach out to everyone in society, however far from education and training they may consider themselves, with ways of developing their skills and making the best use of them.

From a methodological standpoint, we use data envelopment analysis (DEA). Various authors have used DEA models to analyse efficiency in education, although the majority of them limit themselves to studying the differences found when comparing schools within a certain group. Examples of this type of study are Bessent and Bessent (1980), Barrow (1991), Ludwin and Guthrie (1989), Bessent et al (1982) and Mancebón and Mar-Molinero (2000). In other studies (Sengupta and Sfeir, 1986; Mayston and Jesson, 1988; Bifulco and Bretschneider, 2001), outcomes are compared using parametric and non-parametric techniques. There are also proposals for dealing with contextual factors when evaluating efficiency, several of them applied within the educational sector. Within this group, we can highlight the studies by Ray (1991), Ruggiero et al (1995), Kirjavainen and Loikkanen (1998) and Muñiz (2002). Finally, we should mention the application by Silva-Portela and Thanassoulis (2001), which attributes poor outcomes to factors such as the inherent characteristics of the student, the school and the operational system under which it operates. The text that follows is organized as follows: in the second section, we describe the methodology. In the third section, we present the details of the empirical application, the description of the available database and the procedure used to choose the variables. The results are analysed in the fourth section, following which the main conclusions of the study are laid out.

2. Model for evaluating technical efficiency and maximum potential output

Data envelopment analysis (DEA) was initially developed by Charnes et al (1978) to assess the relative efficiency of a series of homogeneous business units, known as decision-making


units (DMUs), which produce outputs by transforming inputs. Let I be the number of DMUs, each producing m outputs from the consumption of n_sr inputs controllable in the short term and n_lr inputs controllable in the long term, under the assumption of output orientation and a technology with variable returns to scale. The following linear program is solved for each of the DMUs to evaluate its overall technical efficiency:

$$
\begin{aligned}
\text{Max } & \Phi_1 \\
\text{s.t. } & \sum_{j=1}^{I} z_j y_{rj} \geq \Phi_1 y_{rO}, \quad r = 1, \ldots, m \\
& \sum_{j=1}^{I} z_j x_{ij}^{sr} \leq x_{iO}^{sr}, \quad i = 1, \ldots, n_{sr} \\
& \sum_{j=1}^{I} z_j x_{kj}^{lr} \leq x_{kO}^{lr}, \quad k = 1, \ldots, n_{lr} \\
& \sum_{j=1}^{I} z_j = 1 \\
& z_j \geq 0; \quad \Phi_1 \text{ free}
\end{aligned}
\tag{1}
$$

where y_rj is output r of DMU j, and x_ij^sr, x_kj^lr are the inputs inherent in the productive process that are controllable in the short and long term, respectively. The symbol Φ1 represents the efficiency coefficient, with (Φ1 − 1) × 100 being the percentage by which all the outputs of the DMU evaluated could be increased given its current consumption of inputs. The BCC model (Banker et al, 1984) is a special case of Model (1) in which there are no inputs that are adjustable only in the long term. In this initial evaluation, all the DMUs are compared without taking into account the fact that they might be operating under negative environmental conditions that could affect performance; thus, the results obtained relate only to the inputs used. However, an evaluation of educational systems must bear in mind that different environmental and contextual factors could affect their efficiency, especially when very different countries are compared. The literature suggests various alternatives for dealing with environmental factors when evaluating efficiency; there is not yet a consensus on which is the most appropriate in the context of DEA models. The first alternative involves separating the frontiers by grouping the different DMUs based on the most important environmental factors and subsequently constructing an efficient frontier for each group (Charnes et al, 1981; Banker and Morey, 1986a; Brockett and Golany, 1996). The main difficulty here lies in selecting, a priori, the most important characteristic in the operating environment (Fried et al, 1999). The second approach comprises one-stage models, where environmental variables are included directly in the model (Banker and Morey, 1986b; Charnes et al, 1994). In the third and final approach, mixed multiple-stage models

are employed. Here, non-parametric techniques are combined with parametric ones. There are different proposals within this last-named alternative. The majority of them consist of estimating the efficiency coefficients via a standard DEA model and subsequently using econometric techniques to explain the effect of the environmental factors on the calculated efficiency coefficients (Ray, 1991). However, some authors suggest using successive stages aimed at making corrections in the dummy variables or in the initial levels of inputs and outputs, and then recalculating the DEA model once the environmental effects have been corrected for (Fried et al, 1999). A detailed review of these alternatives can be found in Muñiz (2002). In our case, to measure technical efficiency under the influence of environmental variables, we chose a simple method based on the one-stage models, similar to the one previously used in Lozano-Vivas et al (2001) and Lozano-Vivas et al (2002). First, technical efficiency is evaluated taking into account only the inputs inherent in the productive process as well as its outputs. Then, in an effort to make a fairer comparison and incorporate the effect that conditions unique to each country could have on the efficiency coefficients, the environmental variables are included in a second linear program, which yields the technical efficiency of management. The mathematical formulation of this model is

$$
\begin{aligned}
\text{Max } & \Phi_2 \\
\text{s.t. } & \sum_{j=1}^{I} z_j y_{rj} \geq \Phi_2 y_{rO}, \quad r = 1, \ldots, m \\
& \sum_{j=1}^{I} z_j x_{ij}^{sr} \leq x_{iO}^{sr}, \quad i = 1, \ldots, n_{sr} \\
& \sum_{j=1}^{I} z_j x_{kj}^{lr} \leq x_{kO}^{lr}, \quad k = 1, \ldots, n_{lr} \\
& \sum_{j=1}^{I} z_j e_{pj} \leq e_{pO}, \quad p = 1, \ldots, P \\
& \sum_{j=1}^{I} z_j = 1 \\
& z_j \geq 0; \quad \Phi_2 \text{ free}
\end{aligned}
\tag{2}
$$

where e_pj are the environmental variables. As Lozano-Vivas et al (2001) and Lozano-Vivas et al (2002) pointed out, Φ1 = δΦ2, where δ ≥ 1. The coefficient δ encompasses the possible negative impact that the environment could exert on the overall levels of technical efficiency in each country: the higher its value, the greater the negative effect. Only those countries operating under unfavourable conditions will improve their efficiency coefficient in the second stage; if not, Φ1 = Φ2 will hold. Based on (1) and (2), the technical efficiency (both global and of management) of the educational systems analysed is


evaluated. In other words, it is identified whether there are potential increases in observed outcomes, given the current allocation of inputs and the environmental factors in each country.

Figure 1 Evaluation of technical efficiency, management efficiency and maximum potential output. [The figure plots output Y against input X for DMUs a–h, together with the projections g′ and g′′, showing two frontiers: e^o, the frontier given the observed level of the environmental variables of DMUs e, f, g and h; and e*, the frontier assuming the optimal level of the environmental variables.]

However, as mentioned in the introduction, we believe that, when formulating educational policies, it is useful to complement an analysis of technical efficiency with that of maximum potential output: we must know up to what point the outcomes achieved approach the possible maxima. Likewise, another equally important result derived from the analysis of maximum potential output is the identification of countries that allocate excessive resources to their educational systems. To determine the maximum potential output of each country, and the resources that must be allocated to the system to realize such an output, we used the following linear program:

$$
\begin{aligned}
\text{Max } & \Phi_3 + \varepsilon \sum_{i=1}^{n_{sr}} S_i^{sr} \\
\text{s.t. } & \sum_{j=1}^{I} z_j y_{rj} \geq \Phi_3 y_{rO}, \quad r = 1, \ldots, m \\
& \sum_{j=1}^{I} z_j x_{ij}^{sr} + S_i^{sr} = x_i^{sr}, \quad i = 1, \ldots, n_{sr} \\
& \sum_{j=1}^{I} z_j x_{kj}^{lr} \leq x_{kO}^{lr}, \quad k = 1, \ldots, n_{lr} \\
& \sum_{j=1}^{I} z_j e_{pj} \leq e_{pO}, \quad p = 1, \ldots, P \\
& \sum_{j=1}^{I} z_j = 1 \\
& z_j, S_i^{sr}, x_i^{sr} \geq 0; \quad \Phi_3 \text{ free}
\end{aligned}
\tag{3}
$$

where ε is an infinitesimal positive constant; Φ3 is, given the observed environmental conditions (e_pO), the maximum potential increase achievable in all the outputs; x_kO^lr is the observed level of the inputs that are only controllable in the long term; and x_i^sr are the optimal allocations of the inputs controllable in the short term associated with the maximum potential output, in accordance with the definition

by Johansen (1968). In the initial stage, the linear program determines the maximum radial increase of all the outputs without taking into account either the environmental conditions or the amount of inputs inherent in the process that are only adjustable in the long term, leaving free the consumption of factors that are controllable in the short term. In the second stage, once the maximum output has been determined, the linear program determines the lowest level of inputs controllable in the short term associated with this output. In this way, countries can be identified that have allocated insufficient resources to their educational systems, and vice versa. It should be pointed out that including changes in the composition of inputs does not necessarily determine how efficient the current levels of allocation are. We do not deal with this aspect in this study because it requires information on the prices of the inputs; we simply attempt to obtain the maximum potential output with the least possible amount of resources, for which information on prices is not necessary. Figure 1 provides a visual summary of the method suggested for evaluating efficiency and potential output. The DMUs that present a situation of overall technical efficiency are a, b and c, but all three are positively affected by outstanding environmental factors. Taking into account the respective environmental factors, the DMUs that present a situation of technical efficiency of management are a, b, c, d, e and f. Evidently, their respective environments negatively influence units d, e and f, an impact that is represented by the coefficient δ (δ = Φ1/Φ2 > 1). There are two units (c and d) that obtain the maximum potential output because, given their environmental conditions, they manage to obtain the maximum possible outcome for their students.

Units a, b, e and f lie below the maximum potential output: to achieve the maximum output, the resources allocated to education would have to be increased. Finally, if the output of unit g were expanded to point g′′, it would achieve the maximum potential output, albeit inefficiently, since unit d also manages to achieve the same output but with a lower consumption of input. Thus, the concept of maximum potential output must go hand in hand with situations of efficient allocation.

The application of DEA models implies the adoption of certain assumptions, which are not free from debate in their


application to education. Specifically, these are endogeneity bias, homotheticity and convexity. In general, DEA methodology raises serious difficulties for verifying that these three assumptions hold (see Mayston, 2003). The educational centres with the best results may attract the best teachers and students; Orme and Smith (1996) state that although DEA models can distinguish between controllable and non-controllable inputs, they are not immune to such endogeneity bias. In our case, we believe that endogeneity is not a problem, since the objects of the evaluation are countries rather than individual educational centres. An important additional question is the suitability of evaluating technical efficiency when the educational sector is analysed: given the budgetary restrictions of educational centres, and even of countries, it might seem more sensible to orient the evaluation towards a criterion of cost efficiency. A more detailed discussion of these issues can be found in Mayston (2003).
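To make programs (1)–(3) concrete, they can be solved with an off-the-shelf LP solver. The sketch below is our own illustration, not the authors' code: it uses `scipy.optimize.linprog`, made-up data for three hypothetical countries, and assumes the environmental variables are scaled so that larger values denote a more favourable environment (so peers in the reference set cannot, on average, enjoy a better environment than the country evaluated). The two-stage function mirrors the lexicographic reading of program (3): maximize the radial output expansion with short-run inputs left free, then find the cheapest short-run allocation that supports it.

```python
import numpy as np
from scipy.optimize import linprog

def vrs_output_phi(X, Y, o):
    """Output-oriented VRS DEA score Phi for DMU o (program (1)).

    X: (I, n) inputs; appending environmental columns to X yields the
    management-efficiency score Phi2 of program (2).  Y: (I, m) outputs.
    (Phi - 1)*100 is the % by which all outputs could be expanded.
    """
    I, n = X.shape
    m = Y.shape[1]
    c = np.zeros(I + 1)
    c[-1] = -1.0                                  # linprog minimises, so max Phi = min -Phi
    A_ub = np.vstack([
        np.hstack([-Y.T, Y[o].reshape(-1, 1)]),   # Phi*y_O <= sum_j z_j y_j
        np.hstack([X.T, np.zeros((n, 1))]),       # sum_j z_j x_j <= x_O
    ])
    b_ub = np.concatenate([np.zeros(m), X[o]])
    A_eq = np.append(np.ones(I), 0.0).reshape(1, -1)   # sum_j z_j = 1 (VRS)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * I + [(None, None)])
    return res.x[-1]

def max_potential_output(Xs, Xl, E, Y, o):
    """Two-stage sketch of program (3): stage 1 maximises the radial output
    expansion Phi3 with short-run inputs Xs left free; stage 2 finds the
    smallest short-run allocation supporting Phi3."""
    I, m = Y.shape
    fixed = np.hstack([Xl, E])                    # long-run inputs + environment
    phi3 = vrs_output_phi(fixed, Y, o)            # stage 1 (Xs plays no role)
    c2 = Xs.sum(axis=1)                           # stage 2: min total short-run input
    A_ub2 = np.vstack([-Y.T, fixed.T])
    b_ub2 = np.concatenate([-(phi3 - 1e-9) * Y[o],   # hold output at Phi3 (tolerance)
                            fixed[o]])
    r2 = linprog(c2, A_ub=A_ub2, b_ub=b_ub2,
                 A_eq=np.ones((1, I)), b_eq=[1.0], bounds=[(0, None)] * I)
    return phi3, r2.x @ Xs                        # (Phi3, optimal short-run inputs)

# Toy data (ours): 3 countries, 1 output, 1 short-run input,
# 1 long-run input, 1 environmental variable (larger = more favourable).
Xs = np.array([[8.0], [2.0], [2.0]])
Xl = np.array([[4.0], [4.0], [4.0]])
E  = np.array([[2.0], [5.0], [2.0]])
Y  = np.array([[10.0], [6.0], [4.0]])

phi1 = vrs_output_phi(Xs, Y, 2)                   # program (1): environment ignored
phi2 = vrs_output_phi(np.hstack([Xs, E]), Y, 2)   # program (2): environment included
phi3, xs_star = max_potential_output(Xs, Xl, E, Y, 2)
```

In this toy example the third country looks inefficient when the environment is ignored (Φ1 = 1.5, because its peer enjoys a much better environment), is efficient once the environment is controlled for (Φ2 = 1, so δ = Φ1/Φ2 = 1.5), and could multiply its outputs by Φ3 = 2.5 if its short-run input were raised from the observed 2.0 to the optimal 8.0.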

3. Description of the sample and selection of the variables

The data used have been obtained from the Third International Mathematics and Science Study (TIMSS) from 1999 (http://timss.bc.edu/timss1999.html, accessed 7 November 2005). Using a common test for the 38 countries that voluntarily participated, the study evaluates mathematics and science knowledge among students in their eighth year of schooling. In our case, we only had valid information for 31 countries; for the seven remaining countries (England, Finland, Iran-Islamic Republic, Israel, Japan, Slovak Republic and Turkey), information was lacking for some of the variables used. The database contains information on the results obtained by the students, on the environmental variables unique to each country, and on the resources allocated to the educational system. The descriptive statistics of all the variables are shown in Table 1. We can interpret the inputs that appear in Table 1 as follows. The controllable resource 'intensity of teaching resource' represents the total number of teacher hours per student per year. The variable 'quality of teaching staff' corresponds to an average index based on the teachers' self-expressed confidence to teach mathematics and science. More specifically, this index has been constructed by taking the average of the detailed results by subject, with a score of three representing teachers who define their confidence when teaching the subject as high, a score of two representing teachers who admit to a medium level of confidence, and a score of one for teachers who admit to a low level of confidence. The variable 'facilities' can be summarized as an index of the adequacy of teaching facilities, such as the building, heating, air conditioning,

Table 1 Descriptive statistics of the variables

Variable                                                          Average       Min        Max   Std dev
Inputs
  x1sr: Intensity of teaching resource                              35.29     21.27      56.20      7.93
  x2sr: Facilities                                                  54.95     21.00      89.67     19.22
  x3sr: Materials                                                   47.57      8.00      86.42     22.44
  x1lr: Quality of teaching staff                                  216.50    159.00     267.00     26.37
Contextual variables
  Literacy rate in the population over the age of 15 (%)            92.70     45.90      99.00     11.28
  Gross National Product per capita adjusted by
    purchasing power (US$)                                       11179.00   1450.00   29230.00      8.42
  Employment rate over the total workforce (%)                      90.90     61.20      99.10      8.64
  Percentage of the population assessed with expectations
    of pursuing vocational or university studies (%)                70.10     43.00      89.00     11.94
  Weighted index of students' positive attitudes
    towards mathematics                                            227.30    183.00     273.00     21.70
  Weighted index of students' positive attitudes
    towards the sciences                                           226.10    174.00     273.00     25.38
  Percentage of students with a dictionary at home (%)              88.80     51.00     100.00     11.84
  Percentage of students with a desk at home (%)                    85.90     52.00      99.00     12.27
  Percentage of students with a computer at home (%)                43.20      7.00      96.00     29.80
  Percentage of students with more than 25 books
    at home (%)                                                     62.90     25.00      93.00     20.58
  Percentage of students with at least one parent who
    has a university degree (%)                                     20.80      7.00      45.00      9.92
  Percentage of students with a high or medium study
    rate outside the school (%)                                     85.80     58.00      96.00     10.00
  Average time spent studying mathematics on an
    average day                                                      1.10      0.60       1.80      0.38
  Percentage of students with a high self-concept in
    mathematics (%)                                                 17.70      2.00      45.00      9.93
Outputs
  y1: Academic performance in mathematics                          486.00    275.00     604.00     74.84
  y2: Academic performance in the sciences                         484.00    243.00     569.00     72.69


electrical lighting, and teaching areas. Finally, the resource 'materials' is an index that reflects the availability and appropriateness of the following items: teaching materials, budget for services, computers for teaching, audiovisual resources, library materials, and equipment for science laboratories. Of all the controllable resources, the only one that is not adjustable in the short term is 'quality of teaching staff'. Certainly, the levels of training and confidence of the teaching staff can be improved, but this entails undertaking actions whose results can only be expected in the long term. We view the remaining resources as being adjustable in the short term, since they depend on the allocation of sufficient budgetary resources. With respect to the contextual variables, it is important to point out that, as is well known, DEA models lose discriminatory power as the number of inputs and outputs analysed grows. Thus, given the large number of contextual variables available, a factor analysis was carried out with the goal of reducing their number. A preliminary analysis of the correlation matrix indicated that, given the high correlations among the different variables, the application of this technique was advisable. The Kaiser–Meyer–Olkin (KMO) test of sampling adequacy yielded a value of 0.744, well above the value of 0.5 habitually considered the lower limit for accepting the validity of factor analysis (see Morrison, 1990). Bartlett's sphericity test confirmed the high correlations, permitting rejection of the null hypothesis of no correlation among the variables (ie that the correlation matrix is an identity matrix). Finally, the anti-image correlation matrix showed very small off-diagonal values. All of this confirmed the suitability of factor analysis.
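The overall KMO statistic quoted above compares the raw correlations with the anti-image (partial) correlations obtained from the inverse of the correlation matrix. The following sketch is our own illustration with simulated data, not the paper's dataset:

```python
import numpy as np

def kmo(X):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy.

    X: (observations x variables) data matrix.  KMO compares squared
    correlations with squared partial (anti-image) correlations; values
    near 1 indicate the data are suitable for factor analysis.
    """
    R = np.corrcoef(X, rowvar=False)
    V = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(V), np.diag(V)))
    P = -V / d                       # partial correlations, controlling for the rest
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(P, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (P ** 2).sum())

# Simulated example: four indicators driven by one common factor
# should yield a high KMO value.
rng = np.random.default_rng(0)
f = rng.normal(size=(500, 1))
X = f + 0.3 * rng.normal(size=(500, 4))
score = kmo(X)
```

With strongly correlated indicators such as these, the statistic comes out well above the customary 0.5 threshold, mirroring the 0.744 obtained for the paper's contextual variables.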
Given that the variables were measured on different scales, maximum likelihood was chosen as the extraction method. Factors were retained when their eigenvalues exceeded one. For the rotation of factors, the Equamax method was chosen in order to facilitate a better interpretation of factors as well as of variables (Stevens, 1986). Four factors were identified from the factor analysis. The first was made up of the variables 'attitudes towards mathematics', 'attitudes towards the sciences', and 'time spent studying at home'. We have called this 'positive attitude towards studying'. The second factor brought together the variables 'percentage of students with more than 25 books', 'percentage of students with a desk at home', 'parents' educational level', 'self-concept in mathematics', and 'literacy rate'. Generally speaking, these variables are related to the availability of both physical and human resources at home, and we have thus called the factor 'availability of resources at home'. The third factor grouped together the contextual variables 'per capita GNP adjusted by purchasing power' and 'percentage of students with computers at home'. We have called it 'family income level' because it encompasses the variables that determine the degree of wealth of the

1001

country and the distribution of this wealth among families. Finally, we called the fourth factor, which caused the greatest difficulty in terms of interpretation, 'expectations and conception of the difficulty of the subjects'. It is negatively loaded with the variables 'time spent studying at home' and 'expectations of pursuing higher education'. Factor scores for each country were calculated using linear regression. This meant that the scores for some countries on some factors were negative. Since variables in DEA models cannot be negative, the factor scores were transformed in order to ensure strictly positive values (Pastor, 1996). In summary, the variables selected as outputs were:

y1: academic performance in mathematics
y2: academic performance in the sciences

The inputs that are controllable in the short term are:

x1sr: intensity of teaching resource
x2sr: index of facility availability
x3sr: index of material consumption

We also defined an input that is controllable in the long term:

x1lr: quality of teaching staff

As contextual variables, we took the four factors:

e1: positive attitude towards studying
e2: availability of resources at home
e3: family income level
e4: expectations and conception of the difficulty of the subjects
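The positive translation of the factor scores mentioned above works because the DEA model used here includes the convexity constraint Σ z_j = 1, under which such translations of the (input-like) environmental variables leave the efficiency scores unaffected (Pastor, 1996). A minimal sketch of the shift, where the offset of 1 is our own arbitrary choice:

```python
import numpy as np

def to_strictly_positive(F, offset=1.0):
    """Shift each column of factor scores F so every value is strictly
    positive; under the BCC convexity constraint the DEA efficiency
    scores are unaffected by such column-wise translations."""
    return F - F.min(axis=0) + offset

# Illustrative regression-based factor scores for two countries.
F = np.array([[-1.2,  0.5],
              [ 0.3, -0.7]])
P = to_strictly_positive(F)          # every entry now >= 1.0
```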

4. Results

The most noteworthy results of applying the efficiency and maximum potential output models described in Section 2 are shown in Table 2. First, we analyse the technical efficiency of the sampled educational systems and the effect that contextual conditions exert on them. We then comment on the results for maximum potential output and the possible avenues of educational policy that follow from them. Based on (1), the coefficients for overall technical efficiency can be obtained. These results are relevant for two reasons. First, they identify better resource management in absolute terms, singling out the countries that, regardless of the environmental conditions, are obtaining the maximum performance from the resources allocated to their educational systems. Secondly, they reveal the potential negative effect of operating under worse environmental conditions. The efficiency coefficients shown in the column labelled 'Φ1' in Table 2 were calculated without taking into account the environmental variables. It can be seen that the overall technical efficiencies of Chinese Taipei, the Russian Federation, Hungary, Moldavia, the Republic of Korea, Singapore, Thailand, Tunisia, Macedonia, Jordan, and South Africa stand out. Yet other countries, such as Belgium, Bulgaria, Hong Kong, Italy, Latvia,


Table 2 Technical efficiency, management efficiency and the impact of environmental factors on technical efficiency

Country                    Φ1 (1)    Φ2 (2)    δ = Φ1/Φ2 (3)
Belgium                      1.06      1.00      1.06
Bulgaria                     1.02      1.00      1.02
Chinese Taipei               1.00      1.00      1.00
Russian Federation           1.00      1.00      1.00
Hong Kong                    1.01      1.00      1.01
Hungary                      1.00      1.00      1.00
Italy                        1.15      1.00      1.15
Latvia                       1.04      1.00      1.04
Morocco                      1.68      1.00      1.68
Moldavia                     1.00      1.00      1.00
Netherlands                  1.04      1.00      1.04
Czech Republic               1.06      1.00      1.06
Republic of Korea            1.00      1.00      1.00
Romania                      1.15      1.00      1.15
Singapore                    1.00      1.00      1.00
Thailand                     1.00      1.00      1.00
Slovenia                     1.04      1.00      1.04
Tunisia                      1.00      1.00      1.00
Indonesia                    1.30      1.00      1.30
Republic of Macedonia        1.00      1.00      1.00
Jordan                       1.00      1.00      1.00
South Africa                 1.00      1.00      1.00
Lithuania                    1.12      1.03      1.08
Australia                    1.05      1.05      1.00
Canada                       1.07      1.07      1.00
Malaysia                     1.14      1.08      1.05
United States                1.10      1.10      1.00
New Zealand                  1.11      1.11      1.00
Chile                        1.15      1.15      1.00
Cyprus                       1.22      1.19      1.02
Philippines                  1.58      1.46      1.08

Descriptors
Mean                         1.10      1.04      1.06
Max.                         1.68      1.46      1.68
Min.                         1.00      1.00      1.00
Standard deviation           0.16      0.09      0.13

(1) Technical efficiency coefficient. (2) Management efficiency coefficient. (3) Impact of environmental factors on technical efficiency.

Morocco, the Netherlands, the Czech Republic, Romania, Slovenia and Indonesia are not part of the technical efficiency frontier. Nevertheless, when interpreting these results it should be borne in mind that they may be sensitive to any departure from DEA's implicit assumption of homotheticity in the influence of the different inputs, including the environmental variables, on feasible output levels (see Mayston, 2003). The efficiency of management, the result of applying program (2), is presented in the 'θ2' column of Table 2. The average inefficiency coefficient is 1.04, indicating that, once the environmental conditions and the controllable inputs are taken into account, academic outcomes could be increased by an average of 4%. The country with the highest level of inefficiency is the Philippines, with a 46% potential increase

in academic outcomes. Cyprus is next with 19%. It is worth mentioning that some developed countries, namely Australia (5%), Canada (7%), the United States (10%) and New Zealand (11%), appear as inefficient: these countries have not achieved the academic outcomes they should have, despite the greater resources they allocate to education. Once the environmental conditions in which education takes place are taken into account, the remaining 22 countries are efficient. Thus, it can be claimed that, in general, the results obtained from the resources allocated to the educational systems in these countries are nearly optimal. For each country, the negative effect that the environment exerts on efficiency can be found in the column labelled 'θ1/θ2'. The country most harmed by its environmental conditions is Morocco, whose inefficiency increases by 68% when the environmental variables are no longer controlled for. It is followed by Indonesia (30% worse) and by Romania and Italy (both 15% worse). Morocco and Indonesia are thus examples of countries where potential improvements in efficiency could be derived from improvements in the environmental conditions. That there are developed countries whose environmental conditions can be improved is not as surprising as it may seem: many of them are obliged to provide education for a large number of immigrant children, or children of immigrants, with the concomitant effects this has on the educational system. A descriptive analysis shows that the average potential increase in academic outcomes is 10%, with 6% attributable to environmental factors and 4% to the inefficiency of the system itself, demonstrating the greater impact of environmental factors on levels of technical inefficiency.
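Programs (1) and (2) themselves appear in the methodology section rather than in this excerpt. As a rough illustration of the two-stage comparison, the sketch below computes an output-oriented, variable-returns-to-scale DEA score with and without environmental variables handled as non-discretionary inputs, in the spirit of Banker and Morey (1986a). The function name, the toy data and the exact treatment of the environment are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import linprog

def output_efficiency(X, Y, j0, Z=None):
    """Output-oriented, variable-returns-to-scale DEA score (phi >= 1).

    X : (n, m) controllable inputs, Y : (n, s) outputs, one row per country.
    Z : optional (n, k) non-discretionary environmental variables; when
        given, peer combinations may not enjoy a better environment than
        unit j0 (a Banker-Morey-style treatment, assumed here).
    Returns phi: the factor by which unit j0's outputs could be expanded
    while remaining inside the estimated technology.
    """
    n = X.shape[0]
    c = np.r_[-1.0, np.zeros(n)]                 # maximise phi
    A_ub, b_ub = [], []
    for i in range(X.shape[1]):                  # sum_j lam_j * x_ij <= x_i,j0
        A_ub.append(np.r_[0.0, X[:, i]])
        b_ub.append(X[j0, i])
    for r in range(Y.shape[1]):                  # phi * y_r,j0 <= sum_j lam_j * y_rj
        A_ub.append(np.r_[Y[j0, r], -Y[:, r]])
        b_ub.append(0.0)
    if Z is not None:
        for k in range(Z.shape[1]):              # environment held at observed level
            A_ub.append(np.r_[0.0, Z[:, k]])
            b_ub.append(Z[j0, k])
    A_eq = np.array([np.r_[0.0, np.ones(n)]])    # convexity: sum_j lam_j = 1 (VRS)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Toy data: three "countries", one input, one output, one environmental
# variable (higher = more favourable context).
X = np.array([[1.0], [1.0], [1.0]])
Y = np.array([[2.0], [1.0], [1.0]])
Z = np.array([[2.0], [1.0], [2.0]])

theta1 = output_efficiency(X, Y, 1)       # environment ignored (cf. program (1))
theta2 = output_efficiency(X, Y, 1, Z)    # environment controlled (cf. program (2))
print(theta1, theta2, theta1 / theta2)    # the ratio isolates the environmental effect
```

In this toy example unit 1's entire measured inefficiency disappears once its unfavourable environment is controlled for, which is exactly the pattern the θ1/θ2 column is designed to expose.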
In this sense, our conclusions do not diverge from those of previous studies (Levin and Kelley, 1994; Mancebón and Mar-Molinero, 2000; Silva-Portela and Thanassoulis, 2001), which stress the importance of factors such as the family environment, the family's economic level and the student's own innate abilities and qualities. When environmental factors are included, the standard deviation of the efficiency scores falls by almost half. The presumed negative impact of high immigration rates on the performance of the educational systems of the most developed countries might suggest including immigration among the environmental variables. Nevertheless, the impact of immigrant students is not necessarily negative, and it can vary with other factors such as the country of origin (Rivkin, 2000; Gould et al, 2004). Consequently, it cannot be guaranteed that the same proportion of immigrant students in two countries will produce the same effect on academic results; for this reason, immigration rates were not included in this study. Our conclusions about the impact of environmental variables are threefold: (i) environmental variables play a key role in explaining the differences among the results of different countries; (ii) the efficiency differential among countries decreases considerably when environmental

V Giménez et al—Objective-setting in the educational system

Table 3  Maximum potential output and management efficiency

Country                  θ2      θ3      θ3/θ2   X1sr    X2sr    X3sr    Y1      Y2
Belgium                  1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Bulgaria                 1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Chinese Taipei           1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Russian Federation       1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Hong Kong                1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Hungary                  1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Italy                    1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Latvia                   1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Morocco                  1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Moldova                  1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Netherlands              1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Czech Republic           1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Republic of Korea        1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Romania                  1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Singapore                1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Thailand                 1.00    1.00    1.00    1.00    1.00    1.00    1.00    1.00
Slovenia                 1.00    1.04    1.04    1.19    1.58    1.49    1.04    1.04
Tunisia                  1.00    1.08    1.08    1.10    0.98    0.57    1.08    1.14
Indonesia                1.00    1.13    1.13    1.18    0.46    0.30    1.19    1.13
Republic of Macedonia    1.00    1.14    1.14    1.40    1.34    1.88    1.14    1.14
Jordan                   1.00    1.18    1.18    1.34    1.79    1.39    1.22    1.18
South Africa             1.00    1.49    1.49    1.57    0.98    1.06    1.49    1.66
Lithuania                1.03    1.03    1.00    1.03    0.59    0.87    1.04    1.03
Australia                1.05    1.05    1.00    0.80    0.85    0.87    1.13    1.05
Canada                   1.07    1.07    1.00    0.87    0.76    0.99    1.11    1.07
Malaysia                 1.08    1.09    1.01    1.07    0.85    0.93    1.09    1.12
United States            1.10    1.10    1.00    0.80    0.78    0.80    1.17    1.10
New Zealand              1.11    1.11    1.00    0.84    0.82    0.88    1.20    1.11
Chile                    1.15    1.15    1.00    1.00    0.40    0.31    1.20    1.15
Cyprus                   1.19    1.21    1.02    1.06    1.21    0.88    1.21    1.21
Philippines              1.46    1.49    1.02    1.12    0.88    0.95    1.49    1.49

Descriptors
Mean                     1.04    1.08    1.04    1.04    0.98    0.97    1.09    1.09
Max.                     1.46    1.49    1.49    1.57    1.79    1.88    1.49    1.66
Min.                     1.00    1.00    1.00    0.80    0.40    0.30    1.00    1.00
Standard deviation       0.09    0.12    0.09    0.16    0.26    0.28    0.13    0.15

θ2: managerial efficiency coefficient. θ3: maximal potential output coefficient. θ3/θ2: potential increase in output once managerial efficiency is achieved. X1sr, X2sr, X3sr: variation coefficients needed in the inputs 'Intensity of teaching resources', 'Facilities' and 'Materials' to achieve the maximum potential output. Y1, Y2: variation coefficients needed in the outputs 'Results in mathematics' and 'Results in the sciences' to achieve the maximum potential output.

variables are taken into account; and (iii) environmental variables help to form more homogeneous comparison groups. Having analysed the results on technical efficiency and the influence of environmental factors, we now turn our attention to the maximum potential output. Column 'θ3' of Table 3 shows the achievable increases under the assumption that, given the level of the environmental variables, the educational system is allocated a sufficient level of the inputs that are controllable in the short term. Thus, for example, Table 3 indicates that, given the resources currently allocated to the educational system in Cyprus, academic outcomes could be increased by 19% (θ2). However, by modifying the level

and composition of the resources that are controllable in the short term, the increase in outcomes could be as high as 21% (θ3): there is a 2% potential increase in outcomes beyond the output designated as technically efficient (θ3/θ2). The optimal levels of the resources that are controllable in the short term (without changes in the environmental factors) associated with the maximum output can be found in columns 4, 5 and 6 of Table 3. Once again taking Cyprus as an example, we can see that the first two resources should be increased so that 'materials' can be decreased. Columns 7 and 8 show the maximum achievable increases in the scores in mathematics and the sciences, respectively (for Cyprus, this would mean


Table 4  Matrix of classifications and recommendations for improving educational systems

a 21% increase in both outputs). Likewise, South Africa, which is technically efficient, is the country with the greatest potential for improvement in its academic outcomes (49%). To achieve this, it must increase the resources 'intensity of teaching resources' and 'materials' by 57 and 6%, respectively, but it may also decrease the resource 'facilities' by 2%. This suggests problems in the overall allocation of resources, as well as allocation problems within the distribution of the resources earmarked for education. A long way behind South Africa come Indonesia, Macedonia, and Jordan, where the potential for improvement in outcomes falls between 13 and 18%. The situation in these countries differs, however: Macedonia and Jordan have to increase all of their controllable resources, while Indonesia must increase one of them (the intensity of teaching resources) but could decrease the others. On average, the potential increase in performance, given possible modifications in the variable inputs (θ3/θ2), is 4%. For this to happen, the input 'intensity of teaching resources' must increase by 4% and the remaining short-term controllable inputs must decrease by 2 and 3%. However, it is in these two last-named inputs that the greatest increases and

the greatest disparity among countries occur, indicating that these are the inputs most in need of adjustment. The various countries can be divided into groups on the basis of the recommendations for improving their educational systems that derive from the joint analysis of technical efficiency and maximum potential output. We present the groups in Table 4. Group I consists of the countries that best manage their educational systems, with no room for improvement. Such countries are technically efficient both with and without the inclusion of environmental factors. Furthermore, there is no reason to believe either that a greater allocation of resources would improve students' academic performance or that an excess of resources has been set aside for their educational systems (θ1 = θ2 = θ3 = 1). Group I includes Chinese Taipei, the Russian Federation, Hungary, Moldova, the Republic of Korea, Singapore, and Thailand. These are countries with impeccable management and clear benchmarks for the others (in Figure 1 they would be represented by unit c). Like Group I, the countries in Group II also manage their resources efficiently. But if the environmental conditions in Group II countries were


improved, so would the performance. Such countries include Belgium, Bulgaria, Hong Kong, Italy, Latvia, Morocco, the Netherlands, the Czech Republic, and Romania (their situation corresponds to unit d in Figure 1). The countries in Group III, although optimizing their current resources, appear not to obtain the maximum possible outcomes that could have been achieved had they allocated more resources to education. The modifications needed vary in type. Some countries, such as Jordan and Macedonia, need to increase all short-term controllable resources in order to improve outcomes (cf. units a, b, and e in Figure 1); in these countries, budgetary limits have probably held back potential improvements in the outcomes of the educational systems. In other cases, such as Tunisia and South Africa, an increase in certain factors must be countered by a decrease in others. Group IV includes countries that appear to manage their resources efficiently but are harmed by deficient environmental conditions. Just like Group III, in order to achieve the maximum potential output they must modify their allocation of resources, either through a greater allocation of all the factors (the case of Slovenia, whose situation is comparable to units e and f in Figure 1) or through higher allocations of some factors and lower allocations of others (Indonesia). Groups V and VI are empty by the definition of the model. Conversely, the countries in Group VII not only fail to obtain maximum performance from the resources consumed by their educational systems but, judging by the experience of other, more efficient countries, actually allocate excessive resources to them. Group VII includes developed countries such as Australia, Canada, New Zealand, and the United States, as well as Chile.
In other words, these countries are doubly inefficient: even if they did achieve the maximum output, they should be capable of doing so with a lower consumption of resources (this situation is equivalent to the one symbolized by unit g in Figure 1). In short, these countries should increase their outcomes until they achieve the maximum output and simultaneously reduce the resources earmarked for education. Group VIII includes a series of countries (Lithuania, Malaysia, the Philippines, and Cyprus) that first need to increase the efficiency of their educational systems and achieve better results, but must also undertake an in-depth analysis of the resources devoted to education, since additional increases in outcomes would be associated with a reduction in some inputs and increases in others. These are countries in which an insufficient, but above all erroneous, allocation of resources impedes better performance of the educational systems. Finally, a question of potential interest, for assessing the likely effectiveness of investing in the quality of teaching staff in the countries concerned, is the difference between the efficiency scores and potential output levels that would result from allowing variations in the only resource input (quality of teaching staff) that has been considered a long-term

Table 5  Maximum potential output allowing variations in all inputs

Country                  θ3      θ3a     θ3a/θ3
Belgium                  1.00    1.00    1.00
Bulgaria                 1.00    1.00    1.00
Chinese Taipei           1.00    1.00    1.00
Russian Federation       1.00    1.00    1.00
Hong Kong                1.00    1.00    1.00
Hungary                  1.00    1.00    1.00
Italy                    1.00    1.00    1.00
Latvia                   1.00    1.00    1.00
Morocco                  1.00    1.00    1.00
Moldova                  1.00    1.00    1.00
Netherlands              1.00    1.00    1.00
Czech Republic           1.00    1.00    1.00
Republic of Korea        1.00    1.00    1.00
Romania                  1.00    1.00    1.00
Singapore                1.00    1.00    1.00
Thailand                 1.00    1.00    1.00
Slovenia                 1.04    1.04    1.00
Tunisia                  1.08    1.18    1.09
Indonesia                1.13    1.13    1.00
Republic of Macedonia    1.14    1.14    1.00
Jordan                   1.18    1.18    1.00
South Africa             1.49    1.49    1.00
Lithuania                1.03    1.03    1.00
Australia                1.05    1.05    1.00
Canada                   1.07    1.07    1.00
Malaysia                 1.09    1.09    1.00
United States            1.10    1.10    1.00
New Zealand              1.11    1.11    1.00
Chile                    1.15    1.22    1.05
Cyprus                   1.21    1.21    1.00
Philippines              1.49    1.49    1.00

input. For this purpose, a variation of model (3) has been solved, which we shall call model (3a), in which adjustments are also permitted in the 'quality of teaching staff' input. Table 5 compares the results of both models. The second column of Table 5 (θ3a) represents the maximum potential output reachable when every input, including the one adjustable only in the long term, is allowed to vary. Comparing this coefficient with the previously calculated θ3 shows that only two countries, Chile and Tunisia, could increase their potential output by investing in the improvement of the quality of their teaching staff. This result therefore suggests that the quality of the teaching staff is not the variable on which the governments of the countries studied should primarily act; they should act instead on the rest of the inputs, those considered adjustable in the short term.
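Model (3) itself is not reproduced in this excerpt. Its logic, as described in the text (expand outputs while the short-term controllable inputs are free to change and the environment stays fixed, then read the required input plan off the optimal peer weights), can be sketched as follows; the formulation, names and toy data are illustrative assumptions rather than the authors' exact model.

```python
import numpy as np
from scipy.optimize import linprog

def max_potential_output(X, Y, Z, j0):
    """Sketch of a maximum-potential-output evaluation for unit j0.

    Outputs are expanded radially; the short-term controllable inputs X
    are left free (their optimal levels follow from the peer weights),
    while the environmental variables Z stay at their observed values.
    Returns the expansion factor and the input variation coefficients
    (optimal input level divided by the observed level).
    """
    n = X.shape[0]
    c = np.r_[-1.0, np.zeros(n)]                 # maximise phi
    A_ub, b_ub = [], []
    for r in range(Y.shape[1]):                  # phi * y_r,j0 <= sum_j lam_j * y_rj
        A_ub.append(np.r_[Y[j0, r], -Y[:, r]])
        b_ub.append(0.0)
    for k in range(Z.shape[1]):                  # environment is not adjustable
        A_ub.append(np.r_[0.0, Z[:, k]])
        b_ub.append(Z[j0, k])
    A_eq = np.array([np.r_[0.0, np.ones(n)]])    # convexity (variable returns to scale)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (n + 1))
    phi, lam = res.x[0], res.x[1:]
    return phi, (lam @ X) / X[j0]

# Toy data: unit 2 shares unit 1's environment, so it could expand its
# output, but only by also expanding its (freely adjustable) input.
X = np.array([[1.0], [2.0], [1.0]])
Y = np.array([[1.0], [2.0], [1.0]])
Z = np.array([[1.0], [1.0], [1.0]])
phi, input_ratios = max_potential_output(X, Y, Z, 2)
print(phi, input_ratios)
```

Variation coefficients greater than one mirror the X1sr–X3sr columns of Table 3: they indicate a resource that must be increased to reach the maximum output, while values below one indicate a resource that could be released.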

5. Conclusions

In any country, two of the main objectives that the managers of educational systems must achieve are the proper management of those systems and the better education of students. It is widely accepted that both human and intellectual capital are important for the future development and competitiveness of a country.


In this study, we have analysed this dual-objective educational strategy by examining the technical efficiency and maximum potential output of the educational systems of 31 countries. To accomplish this, we have used data from the 1999 Third International Mathematics and Science Study (TIMSS). The main shortcoming of the TIMSS database is that, although it provides a large amount of information, only the scores obtained by students in tests in mathematics and the sciences are reported. Clearly, this is a significant limitation, since it does not incorporate other, more humanistic competencies and abilities. We have applied data envelopment analysis, placing special emphasis on the effect of environmental factors on performance, as suggested by the literature on educational outcomes. Thus, in the methodology section, we used an analytic process that enables the quantification of the impact exerted by environmental factors on levels of technical inefficiency; this approach coincides with that of Lozano-Vivas et al (2001, 2002). We have also proposed a model for measuring the maximum potential output and the changes that must be made in the allocation of resources. To the best of our knowledge, this is an original proposal that has not been posited or used in any previous research. The empirical application clearly demonstrates, first, that environmental factors influence technical efficiency. More specifically, the influence of environmental factors on the technical inefficiency of educational systems is even greater than that of the inefficient management of available resources. One of the most noteworthy sets of results concerns a group of developed countries (Australia, Canada, the United States, and New Zealand) for which a potential increase in academic outcomes coupled with a decrease in resources was detected.
In contrast, some Asian and former Communist countries were found to be the most skilful at managing their educational systems: Chinese Taipei, the Russian Federation, Hungary, Moldova, the Republic of Korea, Singapore, and Thailand. Other countries, including Belgium, Bulgaria, the Czech Republic, Hong Kong, Italy, Latvia, Morocco, the Netherlands, and Romania, were also shown to be good managers; the difference between this latter group and the former is that these countries would experience an improvement in performance if environmental factors were improved. The remaining countries occupy intermediate positions. Their common characteristic is the potential to achieve increases in academic outcomes if greater resources are allocated to their systems and/or the technical efficiency of their educational systems improves. A common criticism of any frontier assessment is that the entire exercise focuses on detecting the efficient units, with a failure to analyse in depth the circumstances of the inefficient units, their operational characteristics and the route they must take to lessen their inefficiency. To this end, a future research project could determine the optimal route an inefficient

unit could follow from its technically inefficient position to the desired position of maximum potential output. Similarly, with the goal of deriving useful tools for the strategic planning of educational services, a system of incentives should be defined that relates increases in resources to improvements in the performance of the educational system, aimed at achieving the maximum outcome in the long term. Finally, we must highlight the caution with which results of this type should be regarded: the small sample of countries participating in the TIMSS study, and the fact that data from only a single year were analysed, limit the significance of the results obtained. A future extension, data permitting, would be to undertake a dynamic analysis.

Acknowledgements: This study is part of a broader research project financed by the Ministry of Science and Technology (ref. SEC2003047707).

References

Banker R and Morey R (1986a). Efficiency analysis for exogenously fixed inputs and outputs. Opns Res 34: 513–521.
Banker R and Morey R (1986b). The use of categorical variables in data envelopment analysis. Mngt Sci 32: 1613–1627.
Banker RD, Charnes A and Cooper WW (1984). Some models for estimating technical and scale inefficiencies in data envelopment analysis. Mngt Sci 30: 1078–1092.
Barrow MM (1991). Measuring local education authority performance: a frontier approach. Econ Educ Rev 10: 19–27.
Bessent A and Bessent E (1980). Determining the comparative efficiency of schools through data envelopment analysis. Educ Admin Q 16: 57–75.
Bessent A, Bessent W, Kennington J and Reagan B (1982). An application of mathematical programming to assess productivity in the Houston independent school district. Mngt Sci 28: 1355–1367.
Bifulco R and Bretschneider S (2001). Estimating school efficiency: a comparison of methods using simulated data. Econ Educ Rev 20: 417–429.
Brockett P and Golany B (1996). Using rank statistics for determining programmatic efficiency differences in data envelopment analysis. Mngt Sci 42: 466–472.
Charnes A, Cooper W and Rhodes E (1978). Measuring the efficiency of decision making units. Eur J Opl Res 2: 429–444.
Charnes A, Cooper W and Rhodes E (1981). Evaluating program and managerial efficiency: an application of data envelopment analysis to program follow through. Mngt Sci 27: 668–697.
Charnes A, Cooper W, Lewin A and Seiford L (1994). Data Envelopment Analysis: Theory, Methodology and Application. Kluwer Academic Publishers: Boston.
Commission of the European Communities (2001). The concrete future objectives of education systems. Final Report from the Commission 59, Brussels.
Darling-Hammond L (1991). Accountability mechanisms in big city school systems. ERIC/CUE Digest 17.
Delannoy F (1998). Reformas en gestión educacional en los 90s [Educational management reforms in the 1990s]. Human Development Department, LCSHD Paper Series No. 21, The World Bank.
Färe R (1984). The existence of plant capacity. Int Econ Rev 25: 209–213.
Färe R, Grosskopf S and Kokkelenberg EC (1989a). Measuring plant capacity, utilization and technical change: a nonparametric approach. Int Econ Rev 30: 655–666.
Färe R, Grosskopf S and Lovell CAK (1985). The Measurement of Efficiency of Production. Kluwer-Nijhoff: Boston.
Färe R, Grosskopf S and Valdmanis V (1989b). Capacity, competition and efficiency in hospitals: a nonparametric approach. J Product Anal 1: 123–138.
Fried HO, Schmidt S and Yaisawarng S (1999). Incorporating the operating environment into a nonparametric measure of technical efficiency. J Product Anal 12: 249–267.
Gould E, Lavy V and Paserman MD (2004). Immigrating to opportunity: estimating the effect of school quality using a natural experiment on Ethiopians in Israel. Q J Econ 119: 489–526.
Gray J (1981). A competitive edge: examination results and the probable limits of secondary school effectiveness. Educ Rev 33: 25–35.
Gray J, Jesson D and Jones B (1986). Towards a framework for interpreting schools' examination results. In: Rogers R (ed). Education and Social Class. Falmer Press: London, pp 51–57.
Hanushek EA and Kimko DD (2000). Schooling, labour force quality, and the growth of nations. Am Econ Rev 90: 1184–1208.
Harris A (2000). What works in school improvement? Lessons from the field and future directions. Educ Res 42: 1–11.
Hoxby CM (1999). The productivity of schools and other local public goods producers. J Publ Econ 14: 1–30.
Jesson D, Mayston D and Smith P (1987). Performance assessment in the education sector: educational and economic perspectives. Oxford Rev Educ 13: 249–266.
Johansen L (1968). Production functions and the concept of capacity. In: Førsund FR (ed). Collected Works of Leiv Johansen. North-Holland: Amsterdam, pp 359–382.
Kirjavainen T and Loikkanen HA (1998). Efficiency differences of Finnish senior secondary schools: an application of DEA and Tobit analysis. Econ Educ Rev 16: 303–311.
Levin H and Kelley C (1994). Can education do it alone? Econ Educ Rev 13: 97–108.
Lozano-Vivas A, Pastor JT and Hasan I (2001). European bank performance beyond country borders: what really matters? Eur Fin Rev 5: 141–165.
Lozano-Vivas A, Pastor JT and Pastor JM (2002). An efficiency comparison of European banking systems operating under different environmental conditions. J Product Anal 18: 59–77.
Ludwin W and Guthrie T (1989). Assessing productivity with data envelopment analysis. Publ Product Rev 12: 361–372.
Mancebón MJ and Bandrés E (1999). Efficiency evaluation in secondary schools: the key role of model specification and of ex post analysis of results. Educ Econ 7: 131–152.
Mancebón MJ and Mar Molinero C (2000). Performance in primary schools. J Opl Res Soc 51: 843–854.
Mayston DJ (2003). Measuring and managing educational performance. J Opl Res Soc 54: 679–691.
Mayston D and Jesson D (1988). Developing models of educational accountability. Oxford Rev Educ 14: 321–339.
Morrison DF (1990). Multivariate Statistical Methods. McGraw-Hill: New York.
Muñiz M (2002). Separating managerial inefficiency and external conditions in data envelopment analysis. Eur J Opl Res 143: 625–643.
Orme C and Smith P (1996). The potential for endogeneity bias in data envelopment analysis. J Opl Res Soc 47: 73–83.
Pastor JT (1996). Translation invariance in data envelopment analysis: a generalization. Ann Opns Res 66: 93–102.
Ray SC (1991). Resource-use efficiency in public schools: a study of Connecticut data. Mngt Sci 37: 1620–1628.
Rico A (1996). Measuring outcome in schools. In: Smith P (ed). Measuring Outcome in the Public Sector. Taylor and Francis: London, pp 118–134.
Rivkin SG (2000). School desegregation, academic attainment, and earnings. J Human Resources 35: 333–346.
Ruggiero J, Duncombe W and Miner J (1995). On the measurement and causes of technical inefficiency in local public services: with an application to public education. J Publ Adm Res Theory 5: 403–428.
Sammons P, Nuttall D and Cuttance P (1993). Differential school effectiveness: results from a re-analysis of the Inner London Education Authority's Junior School Project data. Br Educ Res J 19: 381–405.
Sengupta JK and Sfeir R (1986). Production frontier estimates of scale in public schools in California. Econ Educ Rev 5: 297–307.
Silva-Portela MCA and Thanassoulis E (2001). Decomposing school and school-type efficiency. Eur J Opl Res 132: 357–373.
Stevens J (1986). Applied Multivariate Statistics for the Social Sciences. Erlbaum: Hillsdale, NJ.
Thanassoulis E and Dunstan P (1994). Guiding schools to improved performance using data envelopment analysis: an illustration with data from a local education authority. J Opl Res Soc 45: 1247–1262.

Received January 2005; accepted February 2006 after two revisions