JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 107, No. 1, pp. 183–192, OCTOBER 2000

TECHNICAL NOTE

Generalized Chi Square Method for the Estimation of Weights

Z. S. Xu

Communicated by Y. C. Ho

Abstract. This paper presents a generalized chi square method for the estimation of weights in the analytic hierarchy process and studies its desirable properties, such as invariance under transpose, harmony, and rank preservation. A convergent iterative algorithm is given and its practicality is shown by a numerical example.

Key Words. Analytic hierarchy process, generalized chi square method, judgement matrix, harmony, rank preservation.

1. Introduction

The analytic hierarchy process (AHP) is a decision-aiding method which has received increasing attention in a variety of areas, such as economic analysis and urban or regional planning and evaluation, since it was proposed by Saaty (Ref. 1). In the AHP, a decision process is modelled as a hierarchy. The levels of the hierarchy describe a system from the lowest level (sets of alternatives), through the intermediate levels (subcriteria and criteria), to the highest level (general objective). Using the AHP methodology, it is required to estimate the weights of the elements in a given level with respect to some or all of the elements in the adjacent level above. Up to now, many methods have been proposed for assessing the weights, such as the eigenvector method (Ref. 2), least square method (Ref. 3), gradient eigenvector method (Ref. 4), logarithmic least square method (Ref. 5), etc. The chi square method is mentioned by Jensen, Blankmeyer, and Wang

Footnotes:
1. This research was supported by the National Science Foundation of China. The author is grateful to the referees for valuable suggestions and comments.
2. Associate Professor, Institute of Communications Engineering, Nanjing, Jiangsu, China.

0022-3239/00/1000-0183$18.00/0 © 2000 Plenum Publishing Corporation


(Refs. 6–8). In this paper, a generalized chi square method (GCSM) for determining the weights of decision elements is proposed. Some desirable properties of the GCSM are studied (Section 3), and a convergent iterative algorithm is also given (Section 4).

2. Generalized Chi Square Method

Consider the problem of comparing a set of decision elements with respect to a single criterion. Let A = (a_ij)_{n×n} be a matrix of pairwise comparisons, where a_ij > 0 and a_ij = 1/a_ji for all 1 ≤ i, j ≤ n; A is also called a judgement matrix. Let w = (w_1, w_2, ..., w_n)^T be the weight vector associated with the judgement matrix A, where w_i > 0 and

    Σ_{i=1}^n w_i = 1.                                                      (1)

The judgement matrix A = (a_ij)_{n×n} is called a consistent matrix if

    a_ij = a_ik a_kj,    i, j, k ∈ Ω,                                       (2)

where Ω = {1, 2, ..., n}. By the property of consistent matrices, we have

    a_ij = w_i/w_j,    i, j ∈ Ω;                                            (3)

thus,

    a_ij^α = (w_i/w_j)^α,    for each i, j ∈ Ω, α ≠ 0.

Since (3) can be expressed as

    w_i = a_ij w_j,    i, j ∈ Ω,                                            (4)

from (1) and (4) we can obtain the exact solution of the weight vector w of the consistent matrix A; that is,

    w* = (1/Σ_{i=1}^n a_i1, 1/Σ_{i=1}^n a_i2, ..., 1/Σ_{i=1}^n a_in)^T.     (5)

However, judgements of people depend on personal psychological aspects, such as experience, learning, situation, state of mind, and so forth; hence, the consistency condition is rarely satisfied. As a result, (3) does not hold in the general case. Here, we introduce the generalized deviation element f_ij; i.e., we let

    f_ij = [a_ij^α − (w_i/w_j)^α]^2 / (w_i/w_j)^α,    i, j ∈ Ω, α ≠ 0,

and construct the generalized deviation function

    F(w) = Σ_{i=1}^n Σ_{j=1}^n [a_ij^α − (w_i/w_j)^α]^2 / (w_i/w_j)^α,      (6)

where

    w ∈ D = {w = (w_1, w_2, ..., w_n)^T | w_j > 0, j ∈ Ω; Σ_{i=1}^n w_i = 1},    α ≠ 0.

Obviously, a reasonable weight vector w* should be determined so as to minimize F(w), w ∈ D. We term this approach the generalized chi square method (GCSM). The following conclusion can be obtained for F(w).

Theorem 2.1. The generalized deviation function F(w) has a unique minimum point w* = (w_1*, w_2*, ..., w_n*)^T ∈ D, which is also the unique solution of the following set of equations in D:

    Σ_{j=1}^n (1 + a_ij^{2α})(w_j/w_i)^α = Σ_{j=1}^n (1 + a_ji^{2α})(w_i/w_j)^α,    i ∈ Ω.    (7)
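As a quick numerical illustration (not part of the paper; the function names are my own), the deviation function (6), the residuals of (7), and the exact solution (5) can be checked on a consistent matrix, where F should vanish at the exact weights:

```python
# Illustrative sketch: Eq. (6), the residuals of Eq. (7), and the exact
# solution (5), checked on a consistent matrix a_ij = w_i / w_j.
import numpy as np

def deviation_F(A, w, alpha):
    """Generalized deviation function F(w), Eq. (6)."""
    R = np.outer(w, 1.0 / w)                     # R[i, j] = w_i / w_j
    return float(np.sum((A**alpha - R**alpha)**2 / R**alpha))

def residuals_eq7(A, w, alpha):
    """Left minus right side of Eq. (7) for each i; all zero at the minimizer."""
    R = np.outer(w, 1.0 / w)
    lhs = ((1 + A**(2 * alpha)) * (1.0 / R)**alpha).sum(axis=1)
    rhs = ((1 + A.T**(2 * alpha)) * R**alpha).sum(axis=1)
    return lhs - rhs

# A consistent matrix built from known weights.
w_true = np.array([0.5, 0.3, 0.2])
A = np.outer(w_true, 1.0 / w_true)

# Exact solution (5): w*_j = 1 / (j-th column sum); it recovers w_true,
# and at w* both F(w*) = 0 and the residuals of (7) vanish.
w_star = 1.0 / A.sum(axis=0)
print(w_star)
print(deviation_F(A, w_star, alpha=0.5))
```

For a consistent matrix the j-th column sums to 1/w_j (since the weights sum to one), so the column-sum reciprocals of (5) reproduce the generating weights exactly.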

Proof.

(i) Existence. Since D is a bounded set and F(w) is continuous on D with F(w) ≥ 0 for all w ∈ D, F(w) has an infimum; i.e., there exists a constant d such that d = inf{F(w) | w ∈ D}. Moreover, since F(w) → +∞ whenever some component w_j → 0, the infimum is attained at an interior point; i.e., there exists w* ∈ D such that F(w*) = d. As w* is the solution of min F(w) under the constraint

    Σ_{i=1}^n w_i = 1,    w_i > 0, i ∈ Ω,

we can construct the Lagrange function

    L(w, λ) = F(w) + λ(Σ_{i=1}^n w_i − 1),                                  (8)

where λ is the Lagrange multiplier. Differentiating (8) with respect to w_k, k ∈ Ω, and setting the partial derivatives equal to zero, we get the following set of equations:

    Σ_{j=1}^n 2[a_kj^α − (w_k/w_j)^α][−α(w_k/w_j)^{α−1}(1/w_j)(w_j/w_k)^α]
    + Σ_{j=1}^n [a_kj^α − (w_k/w_j)^α]^2 [α(w_j/w_k)^{α−1}(−w_j/w_k^2)]
    + Σ_{j=1}^n 2[a_jk^α − (w_j/w_k)^α][−α(w_j/w_k)^{α−1}(−w_j/w_k^2)(w_k/w_j)^α]
    + Σ_{j=1}^n [a_jk^α − (w_j/w_k)^α]^2 [α(w_k/w_j)^{α−1}(1/w_j)] + λ = 0.    (9)

Multiplying (9) by w_k, k ∈ Ω, we have

    α Σ_{j=1}^n [(1 + a_jk^{2α})(w_k/w_j)^α − (1 + a_kj^{2α})(w_j/w_k)^α] + λ w_k = 0,    k ∈ Ω.    (10)

Summing (10) over k, it follows that

    α Σ_{k=1}^n Σ_{j=1}^n [(1 + a_jk^{2α})(w_k/w_j)^α − (1 + a_kj^{2α})(w_j/w_k)^α] + λ Σ_{k=1}^n w_k = 0.

The double sum vanishes, since its summand is antisymmetric in j and k; as Σ_k w_k = 1, it follows that λ = 0. Hence, from (10) with λ = 0, we obtain

    Σ_{j=1}^n [(1 + a_jk^{2α})(w_k/w_j)^α − (1 + a_kj^{2α})(w_j/w_k)^α] = 0,    k ∈ Ω;

namely,

    Σ_{j=1}^n (1 + a_kj^{2α})(w_j/w_k)^α = Σ_{j=1}^n (1 + a_jk^{2α})(w_k/w_j)^α,    k ∈ Ω, α ≠ 0,

which completes the proof of (i).

(ii) Uniqueness. Assume that x = (x_1, x_2, ..., x_n)^T ∈ D and y = (y_1, y_2, ..., y_n)^T ∈ D are solutions of Eq. (7). Let z_i = y_i/x_i and z_l = max_{i∈Ω} {z_i}. If there exists j ∈ Ω such that z_j < z_l, then for α > 0 we have

    Σ_{j=1}^n (1 + a_lj^{2α})(x_j/x_l)^α > Σ_{j=1}^n (1 + a_lj^{2α})(x_j/x_l)^α (z_j/z_l)^α
                                         = Σ_{j=1}^n (1 + a_lj^{2α})(y_j/y_l)^α,    (11)

    Σ_{j=1}^n (1 + a_jl^{2α})(x_l/x_j)^α < Σ_{j=1}^n (1 + a_jl^{2α})(x_l/x_j)^α (z_l/z_j)^α
                                         = Σ_{j=1}^n (1 + a_jl^{2α})(y_l/y_j)^α.    (12)

Thus, by (7), (11), (12), it can be deduced that

    Σ_{j=1}^n (1 + a_lj^{2α})(y_j/y_l)^α < Σ_{j=1}^n (1 + a_jl^{2α})(y_l/y_j)^α,

which contradicts (7). Similarly, for α < 0, we can also derive a contradiction. Therefore,

    z_i = z_l,    for each i ∈ Ω;

namely,

    x_1/y_1 = x_2/y_2 = ··· = x_n/y_n.

Also, since x, y ∈ D both sum to one, we get

    x_i = y_i,    i ∈ Ω,

i.e., x = y, which completes the proof of (ii).  □
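The step λ = 0 in part (i) rests on the double sum vanishing by antisymmetry; this can be illustrated numerically (my own check, not part of the proof), for an arbitrary reciprocal matrix and positive weight vector:

```python
# Numerical illustration that the double sum multiplying alpha in the
# summed form of Eq. (10) vanishes for any reciprocal matrix and any
# positive weight vector, forcing lambda = 0 (antisymmetry in j and k).
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 5, 0.7

# Random judgement (reciprocal) matrix: a_ji = 1 / a_ij, a_ii = 1.
A = np.ones((n, n))
for i in range(n):
    for j in range(i + 1, n):
        A[i, j] = rng.uniform(1 / 9, 9)
        A[j, i] = 1.0 / A[i, j]

w = rng.uniform(0.1, 1.0, n)
w /= w.sum()

R = np.outer(w, 1.0 / w)  # R[k, j] = w_k / w_j
# M[k, j] = (1 + a_jk^(2a))(w_k/w_j)^a - (1 + a_kj^(2a))(w_j/w_k)^a
M = (1 + A.T**(2 * alpha)) * R**alpha - (1 + A**(2 * alpha)) * (1.0 / R)**alpha
print(abs(M.sum()))  # ~ 0, since M = -M.T
```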



3. Some Properties

For the judgement matrices A = (a_ij)_{n×n} and C = (c_ij)_{n×n}, if we let

    F(A, C) = Σ_{i=1}^n Σ_{j=1}^n (a_ij^α − c_ij^α)^2 / c_ij^α,

then the following result can be obtained easily.

Theorem 3.1. Invariance under Transpose.

(i) Let A = (a_ij)_{n×n} and C = (c_ij)_{n×n} be judgement matrices. Then, A^T and C^T are also judgement matrices and F(A^T, C^T) = F(A, C).

(ii) Let A = (a_ij)_{n×n} be a judgement matrix, and suppose that C = (c_ij)_{n×n} is a consistent matrix, closest to A, obtained by minimizing F(A, C). Then, C^T is a consistent matrix, closest to A^T, obtained by minimizing F(A^T, C^T).

Theorem 3.1 guarantees that, if w = (w_1, w_2, ..., w_n)^T is the weight vector of A, then the weight vector of A^T is (1/w_1, 1/w_2, ..., 1/w_n)^T, up to normalization. It follows from Theorem 3.1 that the GCSM has this invariance property.

Definition 3.1. Let A = (a_ij)_{n×n} be a judgement matrix, partitioned as

    A = [ A_1  A_2 ]
        [ A_3  A_4 ],

where A_1 and A_4 are square matrices of order r and s, respectively, with r + s = n. Let T(·) be a priority method, and let u = (u_1, u_2, ..., u_r)^T and v = (v_{r+1}, v_{r+2}, ..., v_n)^T be the weight vectors of A_1 and A_4, respectively, where u and v are derived by using T(·). Furthermore, suppose that there exist a > 0, b > 0, a + b = 1, such that a_ij = (a/b)(u_i/v_j), i = 1, 2, ..., r and j = r+1, r+2, ..., n. If the weight vector of A is w = (au_1, au_2, ..., au_r, bv_{r+1}, ..., bv_n)^T, then T(·) is said to be of harmony.

Theorem 3.2. The GCSM is of harmony.

The well-known eigenvector method (EM) does not have the properties above.

Theorem 3.3. The GCSM has rank preservation; i.e., if a_ik ≥ a_jk, k ∈ Ω, then w_i ≥ w_j, with a_ik = a_jk, k ∈ Ω, if and only if w_i = w_j, where w = (w_1, w_2, ..., w_n)^T ∈ D.

Theorem 3.4. If the judgement matrix A is consistent, then the GCSM and the EM derive the same weight vector for the matrix A.
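Part (i) of Theorem 3.1 is easy to confirm numerically, since transposition merely relabels the (i, j) pairs in the double sum. The sketch below is my own illustration, with an arbitrarily chosen α:

```python
# Illustration of Theorem 3.1(i): F(A, C) is invariant under transposing
# both arguments, because transposition only relabels the (i, j) pairs.
import numpy as np

def F(A, C, alpha):
    return float(np.sum((A**alpha - C**alpha)**2 / C**alpha))

def random_judgement(n, rng):
    """Random reciprocal judgement matrix: a_ji = 1 / a_ij, a_ii = 1."""
    M = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            M[i, j] = rng.uniform(1 / 9, 9)
            M[j, i] = 1.0 / M[i, j]
    return M

rng = np.random.default_rng(1)
A, C = random_judgement(4, rng), random_judgement(4, rng)
print(np.isclose(F(A, C, 0.5), F(A.T, C.T, 0.5)))  # True
```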

4. Convergent Iterative Algorithm

Let A = (a_ij)_{n×n} be a judgement matrix, and let k be the number of iterations. To solve Eq. (7), we give a simple convergent iterative algorithm as follows:

Step 1. Given an initial weight vector w(0) = (w_1(0), w_2(0), ..., w_n(0))^T ∈ D, specify the tolerance ε, 0 < ε < 1, and the exponent α (in general, ε ≤ |α| ≤ 2), and let k = 0.

Step 2. Compute

    η_i(w(k)) = Σ_{j=1}^n [(1 + a_ij^{2α})(w_j(k)/w_i(k))^α − (1 + a_ji^{2α})(w_i(k)/w_j(k))^α],    i ∈ Ω.

If |η_i(w(k))| < ε holds for all i ∈ Ω, then w* = w(k); go to Step 5; otherwise, continue to Step 3.

Step 3. Determine the index m such that

    |η_m(w(k))| = max_{i∈Ω} |η_i(w(k))|,

and compute

    T(k) = [Σ_{j≠m} (1 + a_mj^{2α})(w_j(k)/w_m(k))^α / Σ_{j≠m} (1 + a_jm^{2α})(w_m(k)/w_j(k))^α]^{1/(2α)},

    x_i(k) = T(k) w_m(k),    i = m,
             w_i(k),         i ≠ m,

    w_i(k+1) = x_i(k) / Σ_{j=1}^n x_j(k),    i ∈ Ω.

Step 4. Let k = k + 1; go to Step 2.

Step 5. Output w*, which is the weight vector.

Step 6. End.

Theorem 4.1. The above iterative algorithm terminates in a finite number of steps.

Proof. Consider the change of F(w), w ∈ D, in one iteration. Suppose that t > 0, and let

    r(t) = F(x(k)) = F(w_1(k), ..., w_{m−1}(k), t w_m(k), w_{m+1}(k), ..., w_n(k))
         = Σ_{j≠m} [a_mj^α − (t w_m(k)/w_j(k))^α]^2 / (t w_m(k)/w_j(k))^α
         + Σ_{j≠m} [a_jm^α − (w_j(k)/(t w_m(k)))^α]^2 / (w_j(k)/(t w_m(k)))^α
         + Σ_{i≠m} Σ_{j≠m} [a_ij^α − (w_i(k)/w_j(k))^α]^2 / (w_i(k)/w_j(k))^α
         = Σ_{j≠m} [(1 + a_mj^{2α})(w_j(k)/w_m(k))^α (1/t^α)
           + (1 + a_jm^{2α})(w_m(k)/w_j(k))^α t^α − 2(a_mj^α + a_jm^α)]
         + Σ_{i≠m} Σ_{j≠m} [a_ij^α − (w_i(k)/w_j(k))^α]^2 / (w_i(k)/w_j(k))^α.    (13)

Let

    h_0 = Σ_{i≠m} Σ_{j≠m} [a_ij^α − (w_i(k)/w_j(k))^α]^2 / (w_i(k)/w_j(k))^α − 2 Σ_{j≠m} (a_mj^α + a_jm^α),
    h_1 = Σ_{j≠m} (1 + a_mj^{2α})(w_j(k)/w_m(k))^α,
    h_2 = Σ_{j≠m} (1 + a_jm^{2α})(w_m(k)/w_j(k))^α.

Then, (13) can be expressed as

    r(t) = h_1/t^α + h_2 t^α + h_0.

Setting r′(t) = 0, there exists a minimizer t* with r(t*) = min_{t>0} r(t); namely,

    t* = [Σ_{j≠m} (1 + a_mj^{2α})(w_j(k)/w_m(k))^α / Σ_{j≠m} (1 + a_jm^{2α})(w_m(k)/w_j(k))^α]^{1/(2α)},    (14a)

    r(t*) = 2√(h_1 h_2) + h_0.    (14b)

If t* = 1, then from (14a) it can be obtained that

    Σ_{j=1}^n [(1 + a_mj^{2α})(w_j(k)/w_m(k))^α − (1 + a_jm^{2α})(w_m(k)/w_j(k))^α] = 0.

By the definition of m in Step 3 of the algorithm, we have

    Σ_{j=1}^n [(1 + a_ij^{2α})(w_j(k)/w_i(k))^α − (1 + a_ji^{2α})(w_i(k)/w_j(k))^α] = 0,    i ∈ Ω.

From Theorem 2.1, it follows that the algorithm terminates and w* = w(k). If t* ≠ 1, then

    F(w(k)) − F(x(k)) = r(1) − r(t*) = h_1 + h_2 − 2√(h_1 h_2) = (√h_1 − √h_2)^2 > 0.

Since F(w) is a homogeneous function of degree zero, the normalization in Step 3 does not change its value; hence

    F(x(k)) = F(w(k+1)).    (15)

Equation (15), combined with the strict decrease above, shows that

    F(w(k+1)) < F(w(k)),    for all k;

i.e., F(w(k)) is a monotone decreasing sequence. Also, since F(w) is a nonnegative function with an infimum on D, by the principles of mathematical analysis, a monotone decreasing bounded sequence must be convergent. This completes the proof of Theorem 4.1.  □
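The algorithm of Section 4 is short enough to transcribe directly. The sketch below is my own Python rendering of Steps 1–6; the function name, default tolerance, and starting vector are illustrative choices, not prescribed by the paper:

```python
# Sketch of the Section 4 algorithm (Steps 1-6). Parameter names and the
# default tolerance are illustrative choices, not the paper's.
import numpy as np

def gcsm_weights(A, alpha=1.0, eps=1e-9, max_iter=100_000):
    n = A.shape[0]
    w = np.full(n, 1.0 / n)                          # Step 1: w(0) in D
    for _ in range(max_iter):
        R = np.outer(w, 1.0 / w)                     # R[i, j] = w_i / w_j
        eta = ((1 + A**(2 * alpha)) * (1.0 / R)**alpha).sum(axis=1) \
            - ((1 + A.T**(2 * alpha)) * R**alpha).sum(axis=1)  # Step 2
        if np.max(np.abs(eta)) < eps:
            break                                    # Step 5: w* = w(k)
        m = int(np.argmax(np.abs(eta)))              # Step 3: worst index
        num = ((1 + np.delete(A[m, :], m)**(2 * alpha))
               * (np.delete(w, m) / w[m])**alpha).sum()
        den = ((1 + np.delete(A[:, m], m)**(2 * alpha))
               * (w[m] / np.delete(w, m))**alpha).sum()
        x = w.copy()
        x[m] = (num / den)**(1.0 / (2 * alpha)) * w[m]  # T(k) * w_m(k)
        w = x / x.sum()                              # renormalize; Step 4
    return w

# On a consistent matrix, the algorithm recovers the generating weights.
w_true = np.array([0.5, 0.3, 0.2])
A = np.outer(w_true, 1.0 / w_true)
print(gcsm_weights(A))
```

Each pass performs the exact one-coordinate minimization of Theorem 4.1 (the optimal scaling T(k) is precisely the t* of (14a)), so F(w(k)) decreases monotonically until the stopping test of Step 2 is met.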

5. Numerical Example

In this section, a numerical example is given, which is constructed in order to understand more fully the properties of the GCSM. Numerical results are obtained for the Saaty wealth-of-nations matrix given in Ref. 1 and repeated in Table 1. Saaty made estimates of the relative wealth of nations and showed that the eigenvector corresponding to the matrix agreed closely with the GNP. Table 2 compares the Saaty results with those of the

Table 1. Wealth-of-nations matrix A.

            US     USSR   China  France  UK     Japan  WG
    US      1      4      9      6       6      5      5
    USSR    1/4    1      7      5       5      3      4
    China   1/9    1/7    1      1/5     1/5    1/7    1/5
    France  1/6    1/5    5      1       1      1/3    1/3
    UK      1/6    1/5    5      1       1      1/3    1/3
    Japan   1/5    1/3    7      3       3      1      2
    WG      1/5    1/4    5      3       3      1/2    1

Table 2. Comparisons of the weights for the wealth-of-nations matrix A.

                           GCSM
            EM      α=±0.1  α=±0.5  α=±1    α=±1.5  α=±2
    US      0.4271  0.4167  0.4091  0.3971  0.3876  0.3799
    USSR    0.2303  0.2315  0.2318  0.2316  0.2308  0.2295
    China   0.0208  0.0199  0.0210  0.0227  0.0240  0.0252
    France  0.0524  0.0536  0.0555  0.0591  0.0622  0.0650
    UK      0.0524  0.0536  0.0555  0.0591  0.0622  0.0650
    Japan   0.1227  0.1283  0.1293  0.1307  0.1322  0.1339
    WG      0.0943  0.0963  0.0977  0.0997  0.1010  0.1016
    ESS     188     198     162     120     97      85

Notes: Maximum eigenvalue λ_max = 7.608. Consistency ratio CR = 0.077 < 0.10 (rule of thumb).
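The consistency figures in the notes can be reproduced from the stated maximum eigenvalue, using the standard Saaty random index RI = 1.32 for n = 7 (an assumption; the paper does not state which RI value it uses):

```python
# Reproducing the note under Table 2 from lambda_max = 7.608.
# RI = 1.32 is the standard random-index value for n = 7 (my assumption).
lam_max, n, RI = 7.608, 7, 1.32
CI = (lam_max - n) / (n - 1)   # consistency index
CR = CI / RI                   # consistency ratio
print(round(CR, 3))            # 0.077, below the 0.10 rule of thumb
```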


GCSM. Here,

    ESS = Σ_{i=1}^n Σ_{j=1}^n (a_ij − w_i/w_j)^2

is the sum of the squares of the errors.
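The ESS row of Table 2 can be computed from any weight vector as follows (a sketch; the name `ess` is mine):

```python
# Error sum of squares of Table 2: ESS = sum_ij (a_ij - w_i/w_j)^2.
import numpy as np

def ess(A, w):
    return float(np.sum((A - np.outer(w, 1.0 / w))**2))

# For a consistent matrix and its own weights, the ESS is zero.
w = np.array([0.5, 0.3, 0.2])
print(ess(np.outer(w, 1.0 / w), w))   # 0.0
```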

6. Summary and Conclusions

The numerical results tend to show that the GCSM can be used to obtain the weights. Moreover, decision makers can choose a proper value of α in the algorithm according to the decision-making problem at hand. In this paper, it has also been shown that the GCSM has some desirable properties and is preferable to the EM in several important respects.

References

1. Saaty, T. L., A Scaling Method for Priorities in Hierarchical Structures, Journal of Mathematical Psychology, Vol. 15, pp. 234–281, 1977.
2. Saaty, T. L., The Analytic Hierarchy Process, McGraw-Hill, New York, NY, 1980.
3. Jensen, R. E., An Alternative Scaling Method for Priorities in Hierarchical Structures, Journal of Mathematical Psychology, Vol. 28, pp. 317–332, 1984.
4. Cogger, K. O., and Yu, P. L., Eigenweight Vectors and Least-Distance Approximation for Revealed Preference in Pairwise Weight Ratios, Journal of Optimization Theory and Applications, Vol. 46, pp. 483–491, 1985.
5. Crawford, G., and Williams, C., A Note on the Analysis of Subjective Judgement Matrices, Journal of Mathematical Psychology, Vol. 29, pp. 387–405, 1985.
6. Jensen, R. E., Comparisons of Eigenvector, Least Squares, Chi Square, and Logarithmic Least Squares Methods of Scaling a Reciprocal Matrix, Trinity University, Working Paper 127, 1984.
7. Blankmeyer, E., Approaches to Consistency Adjustment, Journal of Optimization Theory and Applications, Vol. 54, pp. 479–488, 1987.
8. Wang, Y. M., and Fu, G. W., The Chi-Square Priority Method of Comparison Matrix, Journal of Decision and Hierarchy Analytic Process, Vol. 2, pp. 19–27, 1992.
9. Xu, Z. S., and Wei, C. P., A Consistency Improving Method in the Analytic Hierarchy Process, European Journal of Operational Research, Vol. 116, pp. 443–449, 1999.