Technical Efficiency and Technologically Independent Sub-Markets†

Jad Chaaban
University of Toulouse, INRA

First version: June 2003. This version: March 2004.

Abstract

This paper aims to provide a consistent empirical methodology for investigating the level of Minimum Efficient Scale (MES) in a single product industry, through the assessment of the relative productive performance of French soft cheese firms. Two deterministic approaches are used to evaluate firm-level technical efficiency: a parametric method, Corrected Ordinary Least Squares (COLS); and a non-parametric one, Data Envelopment Analysis (DEA). The data covers soft cheese producers in 1997, with detailed firm-level cost and production information. Results show a high correlation between the two estimation methods, and the (estimated) scale efficient firms are situated at the minimum of the sample's observed average cost curve, leading to inference about the level of MES. A consistent empirical methodology to identify sub-samples of firms with similar technological characteristics is also proposed, allowing the identification of various levels of MES. This shows how technologically independent sub-markets can coexist in the same industry.

Key Words: Technical Efficiency, Minimum Efficient Scale, Market Structure, Data Envelopment Analysis, Corrected Ordinary Least Squares.
Journal of Economic Literature Classifications: C81, D24, L11, L25, L66, O1.



This work benefited from the very helpful and encouraging comments of Prof. John Panzar, and from the continuous support of Vincent Réquillart and Zohra Bouamra Mechemache. I am also indebted to Prof. John Sutton, Peter Davis, Alberto Salvo-Farre and workshop participants at the LSE IO workshop for very interesting comments and discussions. Previous drafts of this paper also benefited from insightful suggestions at CNIEL (French Dairy Industry Institute) and MAIA (Rural Economics and Sociology Unit – INRA) presentations. The usual caveat applies. - Contact Information: University of Toulouse, INRA, 21 Allée de Brienne, Manufacture des Tabacs, Bat. F, 31000 Toulouse, France. Email: [email protected]


Introduction

Economists have long dwelled on how to conceptualize and measure performance. A central line of research in the past 50 years was directed towards the measurement of the productive efficiency of operating units such as bank branches, schools, retail outlets or government agencies. This has led to the development of analysis techniques that sought to benchmark firms' or operating units' performance against a common best-practice frontier. This efficiency frontier is typically constructed according to parametric or non-parametric methods. The parametric approach relies on econometric techniques and includes simple regression analysis and Stochastic Frontier Analysis. The non-parametric approach uses mathematical programming techniques, and the most widely used non-parametric method is Data Envelopment Analysis (DEA). Theoretical and empirical work has developed heavily around these production benchmarking techniques (see Färe et al. (1994) for an overview), highlighting how each method has its advantages and disadvantages. This has led to the emergence of recent studies showing that a combination of benchmarking techniques offers a better evaluation of performance, by allowing more flexibility in the construction of the best practice frontiers (Coelli and Perelman (1999)). This paper follows this line of research by exposing previously undiscussed advantages of using combinations of comparative efficiency techniques. Making use of an exceptional dataset containing detailed production and cost information about French soft cheese producers, it is shown how technical efficiency analysis carried out on a small sample of single product firms can reveal many interesting results. These include a refinement of benchmarking frontiers and, more importantly, the identification of sub-samples or clusters of firms with identical technological characteristics.
This will be shown to have considerable importance with regard to market structure and public policy. The paper proceeds as follows: Section 1 exposes the theoretical background for the empirical benchmarking techniques used in this study. Section 2 describes the dataset at hand. Section 3 then details the estimation results, highlighting how a consistent empirical methodology combining benchmarking techniques can lead to rather good results, and how breakpoints in the sample can be identified. Section 4 then shows how technologically independent sub-markets with different levels of efficient scale can be identified in a given industry, and finally Section 5 shows the implications this has for the determination of Minimum Efficient Scale (MES), market structure and public policy.

1. Models of Technical Efficiency Estimation

This study uses two deterministic methods to evaluate the technical efficiency of French soft cheese firms in 1997: a programming (non-parametric) technique, Data Envelopment Analysis (DEA), and a statistical (parametric) technique, Corrected Ordinary Least Squares (COLS).

1.1 DEA: Non-parametric deterministic approach

DEA uses linear programming to calculate the piecewise-linear best-practice frontier of a sample of decision-making units (DMUs) or firms. This technology frontier envelops the less efficient firms, and the distance between these firms' position and the calculated frontier provides an indicator of their relative inefficiency, as first computed by Farrell (1957). Farrell decomposed this efficiency into two components: technical efficiency, which reflects the maximum amount of output a firm can produce given its inputs, and allocative efficiency,


which indicates the optimal mix of a firm's inputs given their prices. The combination of these two measurements yields the overall economic efficiency. Technical efficiency can be decomposed into scale, congestion¹, and pure technical efficiency. DEA models can be input- or output-oriented and can be specified under constant returns to scale (CRS) or variable returns to scale (VRS). Input-oriented models typically seek to answer the following question: "By how much can input usage be proportionally reduced without changing the output quantities produced?"; output-oriented models deal with the question: "By how much can output levels be proportionally expanded without modifying the input quantities used?". Detailed discussions concerning DEA and its wide applications can be found in Ali and Seiford (1993), Banker, Charnes and Cooper (1984), Bowlin (1998) and Seiford and Thrall (1990). In the present paper an output-oriented DEA model is used, and only technical efficiency is evaluated: the absence of information on input prices makes the evaluation of allocative efficiency impossible. Concerning the model's orientation, Coelli and Perelman (1999) show that the choice of orientation does not significantly alter efficiency estimation results. Next the theoretical model underlying the estimation procedure is discussed, based on the axiomatic approach to modeling the production technology detailed in Färe, Grosskopf and Lovell (1994). Let x ∈ ℜ_+^N denote a vector of inputs and y ∈ ℜ_+^M the resulting output vector. Assume there are k = 1, …, K decision making units (DMUs), so that the data is given by (x^k, y^k), k = 1, …, K.
There are, among others, two equivalent ways to express the type of reference technology to be evaluated: the Input Requirement Set L(y), which shows all the combinations of inputs that can be used to produce the output vector y; and the Output Possibility Set P(x), which shows all the combinations of outputs that can be produced by the input vector x. Only P(x) is considered in the present exposition. The Farrell Output-Oriented Measure of Technical Efficiency under constant returns to scale C and strong output disposability S² is formally defined by:

Fo(x, y | C, S) = max{ θ : θy ∈ P(x | C, S) }

where P(x | C, S) is the output possibility set defined by:

P(x | C, S) = { (y_1, …, y_M) :
    ∑_{k=1}^{K} z^k y_m^k ≥ y_m ,  m = 1, …, M ;
    ∑_{k=1}^{K} z^k x_n^k ≤ x_n ,  n = 1, …, N ;
    z^k ≥ 0 ,  k = 1, …, K }

1. Congestion implies that an excess of inputs is 'congesting' the output production process.
2. Strong Output Disposability indicates that outputs can be disposed of without cost. This assumption is equivalent to the 'free disposal' property of production technologies. The use of the Weak Disposability assumption allows the evaluation of a congestion component.

The output possibility set is usually assumed convex, includes all data points and envelops observations with minimum extrapolation, i.e. the fit is as "tight" as possible. The z variables are commonly referred to as intensity variables. They define, through linear combinations, the frontier points of the benchmark technology. DEA calculations are designed to maximize the relative efficiency score of each unit, subject to the constraint that the set of intensity variables obtained in this manner for each DMU must also be feasible for all the others included in the sample. In order to determine the efficiency score of each unit, each type of output is scaled up by θ until the frontier is reached, with a simultaneous selection of optimal z variables. DMUs with positive z variables are called the "peers", and they constitute the frontier units with which the DMU under investigation is compared. Notice that in order to obtain an efficiency measure for each unit the above linear programming problem must be solved K times, once for each DMU in the sample. Inefficient firms have output efficiency scores greater than one and efficient firms have scores equal to one. Therefore a DMU is technically output efficient if Fo(x, y | C, S) = 1, and inefficient otherwise. A firm with a score of 1.40, for example, could increase its outputs by 40% if it were operating on the best practice frontier. In this study only strong output disposability S is considered, where outputs can be disposed of without cost ("free" disposal). This assumption is common in efficiency studies, and is opposed to weak disposability of outputs W, which can also be introduced in DEA frameworks (see Färe, Grosskopf and Lovell (1994) for more details). Given this, various types of returns to scale can be imposed on the reference technology by changing the restrictions on the intensity variables z in the definition of the output possibility set. Namely, adding the restriction

∑_{k=1}^{K} z^k ≤ 1

defines non-increasing returns to scale N; and

∑_{k=1}^{K} z^k = 1

defines variable returns to scale V (which allows for increasing, constant and decreasing returns to scale). Banker, Charnes and Cooper (1984) and Seiford and Thrall (1990) provide proofs based on duality theory for these correspondences. Output-oriented technical efficiency under variable returns to scale can thus be defined by:

Fo(x, y | V, S) = max{ θ : θy ∈ P(x | V, S) }

Moreover, Output Scale Efficiency is defined by the following ratio:

So(y, x | S) = Fo(x, y | C, S) / Fo(x, y | V, S)

which is a measure of the deviation from constant returns to scale in the output direction. A firm is scale efficient if its output scale efficiency is equal to one; otherwise it is scale inefficient. To identify the nature of this scale inefficiency, the following rule is used: if So(y, x | S) > 1, then scale inefficiency is due to Increasing Returns to Scale (IRS) if Fo(x, y | N, S) = Fo(x, y | C, S), and to Decreasing Returns to Scale (DRS) if Fo(x, y | N, S) > Fo(x, y | C, S), where

Fo(x, y | N, S) = max{ θ : θy ∈ P(x | N, S) }

is technical efficiency relative to a non-increasing returns to scale technology N. The proof of these results is detailed in Färe, Grosskopf and Lovell (1994). In the empirical results derived below, output-oriented technical efficiency scores for each firm are reported under constant and variable returns to scale specifications, along with the scale efficiency score and the nature of scale inefficiency (IRS or DRS).
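For concreteness, the CRS and VRS programs above reduce to one linear program per DMU. Below is a minimal sketch with scipy.optimize.linprog on an illustrative toy sample; the paper itself uses the OnFront software, not this code, and the data values here are invented:

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y, o, vrs=False):
    """Farrell output-oriented efficiency of DMU o relative to the sample.

    X: (K, N) input matrix, Y: (K, M) output matrix. Returns theta >= 1;
    theta == 1 means the DMU is on the frontier, theta = 1.40 means its
    outputs could be expanded by 40%.
    """
    K, N = X.shape
    M = Y.shape[1]
    # Decision variables: [theta, z_1, ..., z_K]; linprog minimises -theta.
    c = np.zeros(1 + K)
    c[0] = -1.0
    # theta * y_o,m - sum_k z_k y_k,m <= 0   (outputs scaled up to frontier)
    A_out = np.hstack([Y[o].reshape(M, 1), -Y.T])
    # sum_k z_k x_k,n <= x_o,n               (frontier uses no more input)
    A_in = np.hstack([np.zeros((N, 1)), X.T])
    A_ub = np.vstack([A_out, A_in])
    b_ub = np.concatenate([np.zeros(M), X[o]])
    A_eq = b_eq = None
    if vrs:                                  # sum_k z_k = 1 imposes VRS
        A_eq = np.hstack([[0.0], np.ones(K)]).reshape(1, -1)
        b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (1 + K), method="highs")
    return res.x[0]

# Toy sample: 3 single-input, single-output DMUs (illustrative numbers).
X = np.array([[2.0], [4.0], [6.0]])
Y = np.array([[2.0], [4.0], [3.0]])
crs = dea_output_efficiency(X, Y, 2)             # CRS score of DMU 2
vrs = dea_output_efficiency(X, Y, 2, vrs=True)   # VRS score of DMU 2
se = crs / vrs                                   # scale efficiency ratio
```

Solving the program once per DMU, as the text describes, yields the full set of scores; the scale efficiency ratio then classifies each firm exactly as in the IRS/DRS rule above.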

1.2 COLS: Parametric deterministic approach

In parametric frontier estimation, the production process of single product firms (i.e. for M = 1 in the previous notation) is usually modeled as follows:

Y^k = f(X^k, β)

where Y is output, X a set of inputs (e.g. labor, capital and raw materials), β a set of parameters to be estimated, and k refers to producers (DMUs). The usual assumptions found in the literature dealing with production function estimation are made at this point: firms are assumed to be price-takers in input markets, so input prices may be treated as exogenous; the production function is assumed to have all the properties of smoothness, curvature and continuity; and producers are assumed to optimize a well defined objective in the production process. The production frontier described above can be written in a log-linear form:

Q^k = ln Y^k = α + β′X^k + ε^k

This specification encompasses, among others, the translog, generalized Leontief and Cobb-Douglas functional forms, which are all linear in the parameters. Technical inefficiency is represented by the error term ε^k, which is assumed to enter the production model additively after taking logarithms. It indicates the position of each firm relative to the estimated production frontier. It is common in parametric studies to assume that technical inefficiency can be decomposed into two components:

ε^k = v^k − u^k

where v k represents randomness or statistical noise and u k technical inefficiency. This decomposition is motivated by the fact that firms’ position off the production frontier might not be entirely due to optimization failure. Models which take into account this randomness in evaluating firms’ efficiency frontiers are usually referred to as Stochastic Frontier Models. In this present paper only Deterministic Frontier estimation is conducted, where it is assumed that v k above is equal to zero. The reason this study is restricted to deterministic estimation is the fact that stochastic frontier estimation usually requires a large number of observations and is typically conducted on time series datasets. Here only a small cross-section of soft cheese producers is evaluated. Moreover, it is intellectually more appealing to compare a


deterministic parametric efficiency estimation technique with its non-parametric DEA counterpart, although the limitations of deterministic efficiency evaluation should be well kept in mind. Given this, Greene (1980) provides a procedure to obtain efficiency scores for each firm under the present parametric approach, a procedure which has since been called Corrected Ordinary Least Squares (COLS). Assuming that ε^k is a random variable with a constant mean, all the parameters in the frontier model above except the constant term can be estimated consistently by OLS. Moreover, Greene shows that the constant term can be consistently estimated simply by shifting the least squares line upward sufficiently so that the largest residual is zero. The corrected constant term is written as:

α̂_COLS = α̂_OLS + max_k(e^k)

The resulting individual efficiency measure can then be written as:

ε̂^k_COLS = e^k_OLS − max_k(e^k)

which can be rewritten in terms of a technical efficiency (TE) score as:

TE^k = exp(−ε̂^k_COLS)

An efficient firm will have a TE score equal to 1. COLS indicates only one efficient firm (by construction), with the others having their technical efficiency ranked relative to this firm. The COLS method thus shifts the estimated production function up by the amount of the largest residual. A TE score of 1.3, for example, indicates that in order to achieve technical efficiency the firm has to expand its output by ln(1.3) ≈ 26%.
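The COLS correction amounts to a few lines of linear algebra. A sketch on simulated data (the variable names and the simulated technology are illustrative, not the paper's data):

```python
import numpy as np

def cols_te_scores(X, y):
    """Greene's COLS: estimate by OLS, then shift the intercept up by the
    largest residual so that the frontier envelops every firm.

    X: (K, p) log-input regressors, y: (K,) log output.
    Returns TE scores >= 1; TE == 1 only for the single frontier firm.
    """
    Z = np.column_stack([np.ones(len(y)), X])    # prepend the constant term
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta                         # OLS residuals e^k
    eps = resid - resid.max()                    # corrected epsilon^k <= 0
    return np.exp(-eps)                          # TE^k = exp(-epsilon^k)

# Illustrative data: log Cobb-Douglas technology with one-sided inefficiency.
rng = np.random.default_rng(0)
logx = rng.uniform(0, 2, size=(30, 2))
logy = 1.0 + 0.6 * logx[:, 0] + 0.4 * logx[:, 1] - rng.uniform(0, 0.5, 30)
te = cols_te_scores(logx, logy)
```

By construction exactly one simulated firm ends up with TE = 1, and every other score exceeds 1, mirroring the property stated in the text.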

2. The data

Data on the soft cheese industry was obtained by merging two statistical sources: the Annual Firm Survey (EAE) conducted by the French Ministry of Agriculture, which compiles accounting and firm-specific data; and the Annual Dairy Production Survey (EALPRODCOM), also conducted by the Ministry of Agriculture, surveying dairy production by all firms operating on French territory. This survey contains detailed information about the quantities produced by each plant, according to a very narrow product definition. Merging these two datasets allowed the construction of a detailed firm-specific production process database (inputs, outputs, cost, plus other firm characteristics), made available for the first time in studies concerning French industries. Data was only available for the year 1997, so a cross-section of firms is studied in this paper. The choice of working with single product firms is guided by the fact that only the total cost of production for each input is reported (through individual income statements), which makes it impossible to disentangle costs attributed to each activity in multiproduct companies. Working with single product firms also makes it possible to concentrate on a single production technology, thus abstracting from issues related to product differentiation. The soft cheese sector was selected because it contained the highest number of single product manufacturers.


Given this, the Annual Dairy Production Survey contains a very detailed product definition, and therefore each firm's production had to be aggregated into a broad soft cheese output³. The extent to which this output can be treated as homogeneous is one of the issues dealt with in this paper. Assuming first that this aggregation is valid, it is explored ex post whether some heterogeneity in the sub-products' characteristics affects the derived results. The richness of the two available datasets allows the undertaking of this task. Empirical Industrial Organization studies have often neglected the inquiry into the validity of their market or industry definitions, mostly due to data limitations. Market definition in a context of demand-side or supply-side data problems is, as will be shown in the discussion below, a serious concern for empirical inference in the Industrial Organization field. Table 1 shows statistics about the soft cheese industry at hand. In 1997, 57 single product firms produced 60% of the overall soft cheese output. Labor input is proxied by total wages paid, capital is defined as the yearly depreciation of physical assets plus interest paid, and raw materials input is defined as the total purchases of raw materials (the main raw material here is unprocessed milk). These definitions of input variables based on accounting data are common in the literature on productivity analysis (see for example Scully (1999)). It is assumed that firms face the same input prices. This is a reasonable assumption in France because of a high degree of public intervention in the milk sector, making the price of milk, the main input used by soft cheese firms, essentially given for all firms. Cost data show that on average expenditure on raw materials constituted a very large proportion of total production cost, followed by labor and capital expenses.

1. Soft Cheese Sample Characteristics (1997)

Overall number of firms producing soft cheese:        184
Total Production (1000 tons):                         464
Number of single product* firms in the sample:        57
Total Production of these 57 firms (1000 tons):       277
Share of industry production of the 57 firms:         60%
Average Quantity (1000 tons):                         4.8
Average Sales (million FF):                           179
Average Labor (million FF):                           20.3
Average Capital (million FF):                         7.2
Average Raw Materials (million FF):                   133.6

* Single product: 90% (on average) of production is soft cheese

3. The Annual Firm Survey, on the other hand, only contained information about total sales for the overall cheese production of firms, with no product-specific information.
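The dataset construction described in this section amounts to a product-level aggregation followed by a firm-level merge. A sketch of the operation in pandas, where every column name and value is a hypothetical stand-in (the real EAE and EALPRODCOM survey layouts are confidential and not reproduced here):

```python
import pandas as pd

# Hypothetical firm accounts, EAE-style (values invented for illustration).
eae = pd.DataFrame({
    "firm_id": [101, 102],
    "wages": [20.3, 18.0],
    "capital": [7.2, 6.5],
    "raw_materials": [133.6, 120.0],
})
# Hypothetical narrow product lines, EALPRODCOM-style.
prod = pd.DataFrame({
    "firm_id": [101, 101, 102],
    "product": ["camembert", "brie", "camembert"],
    "tons": [3.0, 2.0, 4.1],
})
# Aggregate the narrow soft cheese lines into one broad output per firm,
# then attach the accounting variables from the firm survey.
output = prod.groupby("firm_id", as_index=False)["tons"].sum()
panel = output.merge(eae, on="firm_id", how="inner")
```

An inner merge keeps only firms present in both surveys, which matches the construction of a complete inputs-outputs-cost record per firm.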


3. Estimation and Results

3.1 COLS and DEA methods, full sample

3.1.1 Estimation results

This section starts by presenting the estimation results under the two methods, DEA and COLS, when the total soft cheese sample is considered. It is then shown how a structural break in the estimation results can lead to splitting the sample under consideration. In order to conduct a COLS estimation, the following parametric specification of the production function is retained for the Ordinary Least Squares estimation:

Q_i = β₁ LB_i + β₂ K_i + β₃ RW_i + β₄ (LB·K)_i

where Q is quantity produced, LB labor, K capital and RW raw material input. The βs are the parameters to be estimated and variables are taken in logs. The index i refers to the firm (i = k in the previous section's notation). Table 2a shows the OLS estimation results for the full sample.

2a. OLS Results (Total sample, 57 firms)
Dependent variable: Q
Variable   Coeff.   Std. Err.   t-Stat.   Prob.
LB          0.13      0.08        1.67     0.10
K           0.41      0.09        4.86     0.00
RW          1.20      0.09       12.69     0.00
LB.K       -0.04      0.00      -10.97     0.00
R-squared: 0.96   Adjusted R-squared: 0.95

More flexible technologies, such as different restricted specifications of translog production functions, presented major problems in the significance of their estimated parameters. This is due to the small number of observations in the sample and to the absence of information about input prices. The estimation of full translog functions can be hampered by an important problem of multicollinearity when factor share equations are not available. To address this, various parameter restriction tests were conducted on different specifications of the production technology, and the one kept in this study provided the best results in terms of parameter significance and overall fit. Moreover, the retained specification provides a consistent view of the production technology: labor and capital substitution is possible, and production is only possible at non-zero input levels. Given this, Table 2a reports a good overall fit for the OLS estimation (96%), and all parameters are significant at the 10% significance level. Firm-specific efficiency scores are then computed under the COLS and DEA estimation procedures⁴. The non-parametric output oriented efficiency scores are decomposed into three components: an efficiency score under a constant returns to scale (CRS) assumption, one under variable returns to scale (VRS), and a scale efficiency (SE) score. This decomposition allows, as in Färe, Grosskopf and Lovell (1994), the nature of returns to scale to be inferred at the firm level.

4. The OnFront® software designed by Färe and Grosskopf (2000) is used to obtain the non-parametric results derived in this paper. E-Views® is used for OLS estimations and graphic simulations.


Figure 1 shows the plot of DEA scores under variable returns to scale and COLS scores against an index of increasing output⁵.

[Fig. 1: Efficiency Score, DEA VRS and COLS, Total Sample. DEA VRS and COLS efficiency scores (1.00 to 4.00) plotted against the index of Q (11.5 to 18.5).]

[Fig. 2: Average Cost (Total sample). AC (FF, 15 to 105) plotted against the index of Q (11.5 to 18.5), with the scale efficient firms marked.]

5. The retained output index is not specified due to data secrecy constraints. Firms are ordered by their increasing level of output in the sample.


3.1.2 Consistency of the two estimation methods

Figure 1 shows that the efficiency benchmarking methods, namely COLS and DEA under variable returns to scale (VRS), yield rather consistent results: individual efficiency scores for the full sample follow the same trend, and they do not differ much between the alternative methods. Further investigation will be conducted later in this paper to validate the consistency of the two estimation approaches. Given this, notice that a somewhat high variability exists across firms' efficiency scores, yet there is a peak of inefficiency at mid-output range, indicating the possible existence of two patterns of technical efficiency scores in the data. Further investigation reveals that plotting average production cost against output (Figure 2) yields a pattern similar to that of the efficiency scores: the observed average production cost curve has a peak at mid-output range, indicating that instead of a somewhat U-shaped (or L-shaped) curve, as one might predict in single product industries, the sample at hand presents two successive U-shaped average cost curves. Moreover, notice that in Figure 2 the scale efficient firms are not all situated at the minimum of the average cost function, as one would normally expect. This anomaly also suggests that a different benchmark frontier should be constructed for sub-samples in the industry.

3.1.3 Splitting the sample

In order to confirm the presence of a structural break in the sample, a Chow breakpoint test is conducted on the estimated production function reported in Table 2a⁶. The data is partitioned into two sub-samples around the point where the break seems to occur, and the Chow test reports an F-statistic of 1.31 with a probability of 0.28: the null hypothesis of the presence of a breakpoint is thus accepted. This result is further confirmed by looking at a kernel fit of two scatter diagrams: average cost (AC) against output (LNQ), and average required efficiency increase (AVO1) against output (LNQ). AVO1 is obtained as the simple average of the estimated required percentage increase in output for each firm to attain technical efficiency, under both COLS and DEA. The kernel fit is a non-parametric regression that fits a local polynomial of the first series on the Y-axis against the second series on the X-axis. It is a powerful tool in exploratory statistics for analyzing the structure underlying a given dataset, and especially for uncovering the possible existence of multi-modality, as suspected in the present study (see Simonoff (1996) and Silverman (1986) for more on kernel smoothing methods and non-parametric exploratory statistical analysis). Figure 3 shows the kernel fits for the average cost and the average required increase in output plots. One notices clear bimodality in the sample, and this motivates the division of the data into two samples: one sample dubbed SMALL, for firms with output below the production level of the firm at which bimodality occurs, and another designated LARGE, for firms with higher output. The reason for choosing this firm as the threshold is that it has the highest inefficiency (and average cost) level around the range where the structural break in the data seems to occur.

6. The Chow breakpoint test is based on a comparison of the sum of squared residuals obtained by fitting a single equation to the entire sample with the sum of squared residuals obtained when separate equations are fit to each sub-sample of the data. See Davidson and MacKinnon (1993) for more details.
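The breakpoint statistic is straightforward to reproduce. A sketch on synthetic data with a deliberate intercept break (the paper's firm-level data are confidential, so the series below are invented):

```python
import numpy as np

def chow_f(X, y, split):
    """Chow breakpoint F-statistic: compares the pooled sum of squared
    residuals with the SSRs of the two sub-samples (rows are assumed
    ordered by output). Returns (F, df1, df2)."""
    def ssr(Xs, ys):
        b, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        r = ys - Xs @ b
        return r @ r

    k, n = X.shape[1], len(y)
    s_pool = ssr(X, y)
    s1, s2 = ssr(X[:split], y[:split]), ssr(X[split:], y[split:])
    f = ((s_pool - s1 - s2) / k) / ((s1 + s2) / (n - 2 * k))
    return f, k, n - 2 * k

# Synthetic series with an intercept shift at observation 20 (illustrative).
rng = np.random.default_rng(1)
x = np.arange(40.0)
X = np.column_stack([np.ones(40), x])
y = 2.0 + 0.5 * x + rng.normal(0, 0.3, 40)
y[20:] += 15.0                         # structural break in the intercept
f_break, df1, df2 = chow_f(X, y, 20)
```

With a genuine break the statistic is far above conventional critical values of the F(df1, df2) distribution; on stable data it stays small.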


[Fig. 3: Kernel fits (Epanechnikov, h = 0.8747) of average cost (AC) against LNQ, and of the average required efficiency increase (AVO1) against LNQ.]
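The kernel fit used here is easy to sketch as a Nadaraya-Watson smoother with the Epanechnikov kernel reported in Figure 3 (E-Views' local polynomial fit may differ in detail; this is a minimal local-constant version on invented data):

```python
import numpy as np

def epanechnikov_fit(x, y, grid, h):
    """Nadaraya-Watson smoother with the Epanechnikov kernel
    K(u) = 0.75 * (1 - u**2) on |u| <= 1, bandwidth h."""
    fitted = np.full(len(grid), np.nan)
    for i, g in enumerate(grid):
        u = (x - g) / h
        w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
        if w.sum() > 0:                 # leave NaN where no local support
            fitted[i] = np.average(y, weights=w)
    return fitted

# Illustrative smoothing of a linear relation at a single grid point.
x = np.linspace(0.0, 1.0, 50)
fit = epanechnikov_fit(x, x.copy(), np.array([0.5]), h=0.2)
```

Evaluating the smoother on a fine grid of output values and inspecting the number of local minima is exactly the multi-modality check described in the text.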

3.2 Sub Samples

3.2.1 Estimation results

After partitioning the sample into two sub-samples (of 30 and 27 firms respectively), OLS estimation of the parametric production function showed a good fit and good overall parameter significance (except for the labor variable in Table 2c). Tables 2b and 2c show the estimation results for these two sub-samples.

2b. OLS Results (sample SMALL, 30 firms)
Dependent variable: Q
Variable   Coeff.   Std. Err.   t-Stat.   Prob.
LB          0.48      0.15        3.20     0.00
K           0.75      0.20        3.86     0.00
RW          0.91      0.15        6.01     0.00
LB.K       -0.08      0.02       -5.54     0.00
R-squared: 0.81   Adjusted R-squared: 0.78

2c. OLS Results (sample LARGE, 27 firms)
Dependent variable: Q
Variable   Coeff.   Std. Err.   t-Stat.   Prob.
LB          0.12      0.11        1.08     0.29
K           0.60      0.10        6.17     0.00
RW          1.05      0.10       10.74     0.00
LB.K       -0.04      0.01       -6.06     0.00
R-squared: 0.93   Adjusted R-squared: 0.92

LB: Labor; K: Capital; RW: Raw Materials; Q: Quantity produced


Notice that the estimation outcomes confirm the Chow test result: estimated parameters are rather different between the sub-samples. Also note the high labor coefficient in Table 2b, reflecting a possibly more labor-intensive technology than that of the LARGE sample. Computing COLS and DEA scores for the two sub-samples, one notices that the results are rather different from those for the full sample. As shown in Table 3 below, firms in the LARGE sample are on average more efficient than those in the SMALL sample, as lower scores indicate a position closer to the benchmark production frontier. Results also show that sub-sample scores and returns to scale indicators present additional refinements relative to the full sample treatment.

3. Efficiency Benchmarking Results

               SMALL        LARGE        TOTAL⁷
COLS           1.6 (0.5)    1.5 (0.3)    1.8 (0.6)
DEA CRS        1.5 (0.5)    1.3 (0.4)    1.5 (0.5)
DEA VRS        1.3 (0.4)    1.2 (0.3)    1.4 (0.5)
DEA SE         1.2 (0.3)    1.1 (0.2)    1.06 (0.2)
Nb Eff. CRS    4            5            6
Nb Eff. VRS    11           12           13
Nb SE          4            5            14

COLS: average COLS score. DEA: average score under DEA; CRS: constant returns to scale, VRS: variable returns to scale; SE: scale efficiency. Nb Eff. CRS: number of efficient firms under CRS; Nb Eff. VRS: number of efficient firms under VRS; Nb SE: number of scale efficient firms. Standard deviations in parentheses.

Figures 4 and 5 below show the plot of average cost against the logarithm of output for the two sub-samples (the x-axis is increasing quantity, not reported for data secrecy constraints). Notice that, contrary to the results depicted in Figure 2, Figures 4 and 5 show that the scale efficient firms are all situated at the minimum of the sample's average cost curve. Scale efficiency implies that the firm is operating at a point where constant returns to scale prevail. In single product industries, this point should normally be associated with a minimum level of average cost (given the assumption that firms seek cost minimization). Estimating two different production frontiers has led to better results than fitting only one for a seemingly heterogeneous sample. This finding will be further confirmed below, where the technological differences between the two sub-samples will be highlighted.

7. Efficiency data refers to the initial single-frontier estimation.


[Fig. 4: Average Cost (SMALL). AC (FF, 15 to 115) plotted against LN Q (11.5 to 14.5), with the scale efficient firms (labeled 4, 7 and 23) situated at the minimum of the curve.]

[Fig. 5: Average Cost (LARGE). AC (FF, 15 to 65) plotted against LN Q (14 to 18), with the scale efficient firms (labeled 17, 29 and 36) situated at the minimum of the curve.]

3.2.2 Consistency of the efficiency estimation approaches

It is useful at this stage to compare the two estimated frontiers with respect to the consistency of the methods used to construct them. Rank correlations were therefore computed between output oriented DEA scores and the efficiency scores obtained under COLS. Two non-parametric methods are commonly used in the literature to estimate these correlations: Kendall and Spearman rank correlations⁸. Table 4 shows the results obtained for the SMALL sample. Both the Kendall and Spearman coefficients are positive and significant for all rank correlations between DEA scores under constant and variable returns to scale and the scores obtained under COLS. Similar results were found for the LARGE sample.
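Both coefficients are available in scipy.stats. A sketch on hypothetical score vectors (the actual firm-level efficiency scores are confidential, so the numbers below are invented):

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

# Hypothetical COLS and DEA VRS efficiency scores for six firms.
cols_scores = np.array([1.00, 1.12, 1.35, 1.08, 1.60, 1.21])
dea_vrs     = np.array([1.00, 1.15, 1.28, 1.05, 1.52, 1.30])

tau, tau_p = kendalltau(cols_scores, dea_vrs)   # Kendall rank correlation
rho, rho_p = spearmanr(cols_scores, dea_vrs)    # Spearman rank correlation
```

Because both measures depend only on the rankings, they are well suited to comparing efficiency scores whose cardinal scales (COLS versus DEA) are not directly comparable.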

4. Kendall & Spearman Rank Correlations, SMALL sample

Kendall     COLSs   CRSs   VRSs
COLSs       1.00
CRSs        0.63    1.00
VRSs        0.50    0.53   1.00

Spearman    COLSs   CRSs   VRSs
COLSs       1.00
CRSs        0.81    1.00
VRSs        0.58    0.59   1.00

Note: all results significant.

SMALL:  n* = 38;  n* = 1;  n* = 19
LARGE:  n* = 45;  n* = 1;  n* = 10

qa: output level where the two samples are split.

Notice that the efficient single product soft cheese industry configuration implies a reduction in the number of operating firms (n*). As stressed by Panzar, this concentration prevents any loss of productive efficiency, yet there may be welfare losses due to oligopolistic pricing.
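The rule behind these n* figures (Panzar's Proposition 17, detailed in footnote 13) can be written down directly. The numbers in the example are illustrative placeholders, not the paper's confidential estimates of yM, yX or industry output:

```python
def efficient_firm_number(q_industry, y_mes, y_xes):
    """Efficient number of firms n* under Panzar's Proposition 17:
    one firm is efficient for 0 < qI < 2*yM; for larger industry
    output, n* = qI / yX, where yX is the point of maximum efficient
    scale (where average cost stops being flat)."""
    if q_industry <= 0:
        raise ValueError("industry output must be positive")
    if q_industry < 2 * y_mes:
        return 1
    return q_industry / y_xes

# Illustrative placeholder numbers only.
n_small_market = efficient_firm_number(5.0, y_mes=4.0, y_xes=6.0)
n_large_market = efficient_firm_number(60.0, y_mes=4.0, y_xes=6.0)
```

Applying the rule separately to each technologically independent sub-market, with its own MES, is what generates distinct n* values for the SMALL and LARGE groups.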

5.2 Public policy

It has been shown in the previous section that a careful look at an industry’s structure might reveal sub-groups of technologically efficient firms carrying out viable activities. Weiss (1998) reports a similar finding concerning the evolution of the size distribution of farms in the Upper Austrian farm sector. He finds evidence of the existence of two separate “centers of attraction” of farm size supporting the notion of a “disappearing middle” and the emergence of a bimodal structure of farm sizes. Moreover, Agarwal and Audretsch (1999) report that the persistence of an asymmetric firm-size distribution predominated by small enterprises is an empirical regularity across every manufacturing industry and across developed industrialized nations. All this has considerable policy implications. Public policy – such as antitrust legislation affecting mergers and acquisitions, credit facilitation and financial assistance - can have

13 Proposition 17 in Panzar (1989) states that if minimum efficient scale is achieved at yM and average cost remains constant until the output level yX (a point of maximum efficient scale), then the efficient industry configuration for industry output levels qI with 0 < qI < 2yM can involve only one firm; for larger industry output, n* equals qI / yX.

mitigated effects if a single MES is evaluated in industries where small and large firms are equally technically efficient and economically viable. In addition, results showing that small labor-intensive firms can be technically efficient and occupy a profitable niche encourage public policy geared towards these firms. This is important for the development of rural and less-favored areas typically threatened by human abandonment. Ahmad (2000) argues that in developing countries small-scale industry has a high capacity for labor absorption, especially when confronted with a high cost of capital, and this can have significant impacts on poverty alleviation. More importantly, Ahmad shows that the softening of this capital constraint is a necessary but not a sufficient condition for the development of small-scale firms: "Without a basic regime change, policy instruments such as payroll subsidies, use of shadow prices for labor, and make-work projects in the public sector, are unlikely to have anything but a minor impact. Indirect intervention in goals of development policy which raise the profitability of production in small scale industry may reduce lenders' costs and, thus, encourage entry and competition in lending (Ahmad, 2000, p. 122)." Béranger (1999), Chatellier and Delattre (2003) and Hart et al. (2000) echo this finding in the context of less-favored areas in developed countries, where the combination of state financial assistance and policies promoting profitability (such as quality-promoting policies for agricultural products) has achieved good socio-economic development targets.

Given this, the analysis carried out in the present paper highlighted the importance of Protected Designation of Origin products as a central determinant of a technological cluster of small soft cheese manufacturers. Sylvander et al.
(2000) support this finding, showing that in some countries quality policies have sought to justify the protection of names and/or collective brands by arguing that what differentiates the products are their specific modes of production. An example is organic farming, which is currently defined by specifications laid down in a number of countries, and at the European and world level in the Codex Alimentarius standards. The authors argue that the thinking behind the European PDO regulation and behind other national quality policies (such as the French policy) requires something more than classical "horizontal" differentiation: the difference between products must be attributable above all to the mode of production. All this highlights the way public policy can help sustain a desired mode of production in rural areas through the creation of what can be called "technology"-driven differentiation.

Sylvander et al. (2000) detail in this respect the European PDO Regulation 2081/92. The objectives of this regulation are:
- a uniform legal framework for the protection of geographical names across all countries of the Union;
- clear information for consumers about the origin of the product;
- diversification of agricultural production in order to strike a better market balance between supply and demand (providing a legal framework for differentiation by origin).

This regulation sought to have the following impacts:
- products presenting certain characteristics may become an important asset for the rural world, in particular in less-favored areas, by improving farmers' incomes and maintaining the rural population in these areas;
- added value for producers in exchange for a genuine effort to improve quality.

The PDO regulation thus aims to create a quality rent associated with a particular territory and its specific production techniques. Yet the extent to which consumers value PDO products is a matter of empirical debate. Sylvander et al.
(2000) contains various studies suggesting that the PDO label is an effective signal of quality only in combination with other indicators or signals of quality. Moreover, these studies point to the fact that consumers and supply-chain actors are increasingly responsive to the protection of regional traditional foods. PDO-type legislation appears to have positive impacts on the sustainability of small-scale traditional products, and more empirical research is needed to further validate this point.

Conclusion

This paper has presented an empirical methodology to identify technologically independent sub-markets in a given industry, which can be summarized as follows. First, a combination of parametric and non-parametric technical efficiency evaluations is conducted on the overall sample. Non-parametric tests such as the Kendall and Spearman rank correlations are then used to check the consistency of the two approaches. Structural breaks and multi-modality in average production cost and technical efficiency scores are then identified using non-parametric techniques such as a kernel fit. New estimations are conducted on the suspected sub-samples of firms or units, and finally additional characteristics and information about the processing technology are gathered in order to check the validity of the applied technological clustering.

A key feature of this analysis is that different levels of Minimum Efficient Scale (MES) can be identified in the same industry, which was shown to have significant policy implications. Another novelty in this study is the application of the above methodology to a highly disaggregated French dataset. Even with a cross section containing only a small number of single product firms, the empirical results were significant and consistent. Testing the present methodology on larger samples and in different countries would be most welcome. That said, this study only deals with deterministic production frontiers. A well-known shortcoming of the deterministic approach is that it does not capture noise in the data, as all deviations from the estimated frontier are attributed solely to inefficiency. Exploring the validity of the above technique using alternative benchmarking methodologies such as Stochastic Frontier Analysis (SFA) is a necessary extension of this paper.

In addition, bi-modality and technologically independent sub-markets were identified here using a static cross-section approach only, due to limited data availability. A dynamic identification of technologically independent sub-markets could help explain the emergence and sustainability of profitable niches within the same industry, and the possible persistence of bimodality in average cost over time. Extending this paper's findings to multiproduct settings is also an important line for future research.

The empirical results derived in this paper also raise several theoretical challenges for future research. First, technologically independent and viable sub-markets can help explain why small firms survive alongside bigger ones in the same industry. This finding presents an alternative explanation for small-firm survival to the ones reviewed in Agarwal and Audretsch (1999), namely that, conditional on the product life cycle, firms either occupy profitable niches or are bound to disappear in an evolutionary process. In addition, public policy appears to affect market structure through legislation like the Protected Designation of Origin (PDO) regulation: the government indirectly guarantees sustained rents to a given production technology that is believed to play a central role in the socio-economic development of less-favored areas, thereby affecting market structure. This finding remains to be confirmed by additional theoretical and empirical research.
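The multi-modality screening step of the methodology (a kernel fit on average costs) can be sketched as follows; the cost data are synthetic, drawn from two artificial clusters, and `gaussian_kde` from SciPy is one common kernel estimator:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic average costs drawn from two clusters, mimicking a bimodal
# industry (these values are illustrative, not the paper's data).
avg_costs = np.concatenate([rng.normal(45, 4, 30), rng.normal(90, 6, 27)])

# Fit a Gaussian kernel density and evaluate it on a fine grid.
kde = gaussian_kde(avg_costs)
grid = np.linspace(avg_costs.min(), avg_costs.max(), 400)
density = kde(grid)

# Count interior local maxima of the fitted density: more than one
# mode suggests sub-samples worth estimating separate frontiers for.
modes = [grid[i] for i in range(1, len(grid) - 1)
         if density[i] > density[i - 1] and density[i] > density[i + 1]]
print(f"Detected {len(modes)} mode(s) near {[round(m) for m in modes]}")
```

Detecting two well-separated modes in the average cost distribution is what motivates splitting the sample and re-estimating a frontier for each suspected sub-market.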

References

Agarwal, R. and D. Audretsch, 1999. "The Two Views of Small Firms in Industry Dynamics: A Reconciliation", Economics Letters 62, 245-251


AGRESTE, 1993. "La Technologie de l'Industrie Laitière en 1991", Ministère de l'Agriculture et de la Pêche, études n.23
Ahmad, J., 2000. "Factor Market Dualism, Small Scale Industry and Labor Absorption", Journal of Economic Development 25(1), 111-126
Ali, A.I. and L.M. Seiford, 1993. "The Mathematical Programming Approach to Efficiency Analysis", in Fried, H.O., C. Lovell and S. Schmidt (eds), The Measurement of Productive Efficiency, Oxford University Press, New York, 120-159
Banker, R.D., A. Charnes and W.W. Cooper, 1984. "Some Models for Estimating Technical and Scale Inefficiencies in Data Envelopment Analysis", Management Science 30, 1078-1092
Béranger, C., 1999. "Les Productions Alternatives et de Qualité dans les Zones de Montagne Défavorisées", Cahiers de Recherche de l'Académie de l'Agriculture Française 85(7), 97-109
Bowlin, W.F., 1998. "Measuring Performance: An Introduction to Data Envelopment Analysis (DEA)", Journal of Cost Analysis, Fall, 3-27
Button, K. and T. Weyman-Jones, 1992. "Ownership Structure, Institutional Organization and Measured X-Efficiency", American Economic Review 82(2), 439-445
Chatellier, V. and F. Delattre, 2003. "La Production Laitière dans les Montagnes Françaises: Une Dynamique Particulière pour les Alpes du Nord", INRA Productions Animales 16, 61-76
Coelli, T. and S. Perelman, 1999. "A Comparison of Parametric and Non-Parametric Distance Functions: with Application to European Railways", European Journal of Operational Research 117, 326-339
Davidson, R. and J.G. MacKinnon, 1993. Estimation and Inference in Econometrics, Oxford University Press
Farrell, M.J., 1957. "The Measurement of Productive Efficiency", Journal of the Royal Statistical Society, Series A (General) 120(3), 253-281
Färe, R., S. Grosskopf and C.K. Lovell, 1994. Production Frontiers, Cambridge University Press, Cambridge
Färe, R. and S. Grosskopf, 2000. Reference Guide to OnFront, EMQ. Available at: www.emq.com
Fecher, F., D. Kessler, S. Perelman and P. Pestieau, 1993. "Productive Performance of the French Insurance Industry", Journal of Productivity Analysis 4, 77-93
Ferrier, G.D. and C.K. Lovell, 1990. "Measuring Cost Efficiency in Banking: Econometric and Linear Programming Evidence", Journal of Econometrics 46, 229-245


Geroski, P.A., 1998. "Thinking Creatively About Markets", International Journal of Industrial Organization 16, 677-695
Greene, W., 1980. "Maximum Likelihood Estimation of Econometric Frontier Functions", Journal of Econometrics 13, 27-56
Hart, M., S. McGuinness, M. O'Reilly and G. Gudgin, 2000. "Public Policy and SME Performance: The Case of Northern Ireland in the 1990s", Journal of Small Business and Enterprise Development 7(1), 27-41
Kerkvliet, J., W. Nebesky, C. Tremblay and V. Tremblay, 1998. "Efficiency and Technological Change in the U.S. Brewing Industry", Journal of Productivity Analysis 10, 271-288
Løyland, K. and V. Ringstad, 2001. "Gains and Structural Effects of Exploiting Scale Economies in Norwegian Dairy Production", Agricultural Economics 24, 149-166
Panzar, J.C., 1989. "Technological Determinants of Firm and Industry Structure", in R. Schmalensee and R.D. Willig (eds), Handbook of Industrial Organization, Vol. I, Elsevier Science Publishers
Scully, G.W., 1999. "Reform and Efficiency Gains in the New Zealand Electrical Supply Industry", Journal of Productivity Analysis 11(2), 138
Seiford, L.M. and R.M. Thrall, 1990. "Recent Developments in DEA: The Mathematical Approach to Frontier Analysis", Journal of Econometrics 46, 7-38
Silverman, B.W., 1986. Density Estimation for Statistics and Data Analysis, Chapman & Hall
Simonoff, J.S., 1996. Smoothing Methods in Statistics, Springer-Verlag
Sutton, J., 1991. Sunk Costs and Market Structure, MIT Press
Sylvander, B., D. Barjolle and P. Arfini (eds), 2000. "The Socio-Economics of Origin Labeled Products in Agrifood Supply Chains: Spatial, Institutional and Coordination Aspects", Proceedings of the 67th EAAE Seminar, Le Mans (France), 28-30/10/1998, INRA Economie et Sociologie Rurales, Actes et Communications n.17
Tortosa-Ausina, E., 2002. "Cost Efficiency and Product Mix Clusters across the Spanish Banking Industry", Review of Industrial Organization 20, 163-181
Weiss, C., 1998. "Size, Growth and Survival in the Upper Austrian Farm Sector", Small Business Economics 10, 305-312
XERFI, 2002. Le Marché des Fromages, Diagnostic et Prévisions 2002-2003, Institut Xerfi, Paris


Appendix

A1. Summary Results, Sub-Sample 1 (SMALL)

Obs   PROD   PDO   AP   AvO1   AvO2   RTS   Q
57    V      0     96   198    232    IRS   1
25    V      0     57   59     51     IRS   2
24    B      0     46   81     84     IRS   3
23    C      1     30   6      0      CRS   4
56    V      0     54   41     35     IRS   5
12    V      0     29   3      1      IRS   6
45    B      1     54   51     44     IRS   7
13    C      1     77   97     110    DRS   8
49    H      0     50   66     72     DRS   9
21    A      1     38   29     27     DRS   10
53    H      0     52   72     74     IRS   11
44    C      1     69   110    125    DRS   12
7     A      1     27   0      0      CRS   13
43    F      0     31   4      0      CRS   14
26    G      1     41   21     7      IRS   15
15    D      1     30   7      1      DRS   16
5     H      0     51   52     58     DRS   17
20    E      1     43   43     45     DRS   18
9     B      0     32   16     14     DRS   19
18    A      1     38   22     23     DRS   20
10    E      1     41   30     29     DRS   21
37    A      1     45   34     37     DRS   22
40    V      0     40   31     32     DRS   23
3     H      0     36   11     13     DRS   24
4     H      1     31   3      0      CRS   25
6     A      1     46   25     9      DRS   26
38    A      1     43   33     37     DRS   27
2     –      0     52   41     32     DRS   28
8     D      1     43   25     26     DRS   29
50    G      1     73   69     72     DRS   30

Average            47   43     43           655
SD                 16   40     48           390
Median      H      –    32     32           543

Obs: firm number
PROD: first main soft cheese sub-product (letters used for data secrecy constraints)
PDO: = 1 if the firm produces a PDO (Protected Designation of Origin) sub-product
AP: average price
AvO1: average required % output increase, including COLS
AvO2: average required % output increase, excluding COLS
RTS: returns to scale (IRS: increasing, DRS: decreasing, CRS: constant)
Q: quantity produced in tons (ranking presented only, due to data secrecy constraints)
SD: standard deviation

A2. Summary Results, Sub-Sample 2 (LARGE)

Obs   PROD   PDO   AP   AvO1   AvO2   RTS   Q
11    D      0     57   66     58     IRS   31
33    D      0     31   30     27     IRS   32
54    H      0     67   102    119    DRS   33
51    I      1     33   107    118    DRS   34
17    K      0     22   0      0      CRS   35
14    D      0     39   52     57     DRS   36
32    K      1     27   12     11     IRS   37
52    B      0     24   8      3      IRS   38
22    B      1     31   8      6      IRS   39
34    B      1     29   30     29     DRS   40
35    B      0     34   48     44     DRS   41
48    B      0     31   36     29     DRS   42
41    B      0     24   7      0      CRS   43
31    F      0     22   19     0      CRS   44
28    B      0     27   27     27     DRS   45
42    D      0     45   96     103    DRS   46
46    K      0     27   33     34     DRS   47
29    K      0     20   7      0      CRS   48
19    B      1     29   17     15     DRS   49
55    B      0     34   31     29     DRS   50
39    K      0     46   71     73     DRS   51
47    K      0     32   22     22     DRS   52
27    D      0     22   10     2      DRS   53
30    D      0     23   15     3      DRS   54
16    D      0     46   55     56     DRS   55
36    K      0     27   3      0      CRS   56
1     B      1     31   17     13     DRS   57

Average            33   34     32           9,514
SD                 11   30     35           9,346
Median             –    27     27           5,779

Obs: firm number
PROD: first main soft cheese sub-product (letters used for data secrecy constraints)
PDO: = 1 if the firm produces a PDO (Protected Designation of Origin) sub-product
AP: average price
AvO1: average required % output increase, including COLS
AvO2: average required % output increase, excluding COLS
RTS: returns to scale (IRS: increasing, DRS: decreasing, CRS: constant)
Q: quantity produced in tons (firms indexed by increasing output, due to data secrecy constraints)
SD: standard deviation
