ISSN 0965-5425, Computational Mathematics and Mathematical Physics, 2012, Vol. 52, No. 1, pp. 43–59. © Pleiades Publishing, Ltd., 2012. Original Russian Text © A.P. Nelyubin, V.V. Podinovski, 2012, published in Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 2012, Vol. 52, No. 1, pp. 48–65.

Algorithmic Decision Rule Using Ordinal Criteria Importance Coefficients with a First Ordinal Metric Scale

A. P. Nelyubin (a) and V. V. Podinovski (b)

(a) Blagonravov Institute of Mechanical Engineering, Russian Academy of Sciences, Malyi Khariton'evskii per. 4, Moscow, 101990 Russia
(b) International Laboratory of Decision Choice and Analysis, National Research University 'Higher School of Economics,' Pokrovskii bulv. 11, Moscow, 109028 Russia
email: [email protected], [email protected]
Received June 20, 2011

Abstract—An exact computational method is proposed for the preferability comparison of various solution variants in multicriteria problems with importance-ordered criteria using a common scale along which the growth of preferences slows down.

DOI: 10.1134/S0965542512010101

Keywords: multicriteria decision making problems, importance coefficients, first ordinal metric scale, criteria importance theory.

1. INTRODUCTION

The overwhelming majority of practical methods for the analysis of multicriteria decision making problems use information about the relative importance of criteria, but the concept of criteria importance is not defined in them (see, e.g., [1–4]). The mathematical theory of criteria importance was developed in Russia (see [5] for its history and bibliography). Its first part—the theory of qualitative importance of criteria—relies on rigorous definitions of the concepts 'one criterion is more important than another' and 'criteria are equally important.' These definitions were used to develop, among other things, decision rules, i.e., methods for constructing preference and indifference relations based on the whole information about criteria importance, namely, on a collection of statements about the equality or superiority of certain criteria over others in terms of importance. In the general case, decision rules are combinatorial [6–10] and, hence, rather complicated from a computational point of view. Nevertheless, for problems in which all the criteria are ordered according to importance, an analytical decision rule has been developed [7, 11]. However, it was assumed that the criteria scale is ordinal; i.e., the preferences are only known to increase along the scale. In this paper, an algorithmic decision rule is presented for a frequently occurring practical case in which it is additionally known that the growth of preferences along the criteria scale slows down.

2. MATHEMATICAL MODEL AND PRELIMINARIES FROM CRITERIA IMPORTANCE THEORY

The subsequent presentation is based on the following mathematical model of an (individual) multicriteria decision making situation without uncertainty:

M = 〈τ, X, f1, …, fm, Z0, R〉.
Here, τ is the type of the problem statement (choose one best variant or a given number of best variants, order all the variants according to preferability, etc.); X is the set of variants (the number of variants is no less than two); f1, …, fm are the criteria (m ≥ 2); Z0 = {1, 2, …, q} is the set of scale grades, or, briefly, the scale (q ≥ 3); and R is a nonstrict preference relation. The criterion fi is a function defined on X and taking values from Z0. The criteria fi form the vector criterion f = (f1, …, fm). Each variant x in X is characterized by its vector estimate y(x) = f(x) = (f1(x), …, fm(x)). The set of all vector estimates (both attainable ones, which correspond to the various variants and comprise the set Y = f(X), and hypothetical ones) is Z = Z0^m. Therefore, the comparison of variants in terms of preferability reduces to a comparison of their vector estimates.

The nonstrict preference relation R is defined on the set Z: yRz means that the vector estimate y is not less preferable than the vector estimate z. The relation R induces an indifference relation I and a (strict) preference relation P defined as yIz ⇔ yRz ∧ zRy and yPz ⇔ yRz ∧ ¬(zRy), where ¬(zRy) means that zRy is not true. The relation R is not known beforehand but is constructed in the course of developing the model M based on information about the preferences of the decision maker (DM). In what follows, we assume that R is a partial quasiorder: it is reflexive (yRy holds for any y ∈ Z) and transitive (for any y, z, u ∈ Z, yRz and zRu imply yRu). Then P is a strict partial order (it is irreflexive, i.e., ¬(yPy) for any y ∈ Z, and transitive), while I is an equivalence relation (it is reflexive, transitive, and symmetric: for any y, z ∈ Z, yIz implies zIy).

Assume that each criterion is independent of the others in terms of preference and that its larger values are preferable to smaller ones. In other words, if νk is the (unknown) value of a grade k ∈ Z0 = {1, 2, …, q}, then

ν1 < ν2 < … < νq   (2.1)

(which means that the preferences increase along the scale of criteria). Thus, if the only information about the grade values is that inequalities (2.1) hold, then the scale of criteria is ordinal, and the numbers k in Z0 reflect only the order of grades in terms of preferability.

For vectors from R^n (n ≥ 2), we use the following notation:

a ≧ b ⇔ (ai ≥ bi, i = 1, 2, …, n);    a ≥ b ⇔ (a ≧ b, a ≠ b);    a > b ⇔ (ai > bi, i = 1, 2, …, n).

If no information is available about the DM's preferences, then R is replaced by the Pareto relation R0, which is defined on Z taking into account the assumption that the value of a grade grows with its index, namely, yR0z ⇔ y ≧ z (i.e., yi ≥ zi for all i). Note that the nonstrict preference relation R0 induces an indifference relation I0, which is the relation of equality between vectors, and a (strict) preference relation P0 defined as yP0z ⇔ y ≥ z (i.e., y ≧ z and y ≠ z).

Let Ω be qualitative (nonnumerical) information on criteria importance consisting of statements of the form i ≈ j (fi and fj are equally important) and i ≻ j (fi is more important than fj). On the set Z, the statement i ≈ j induces an indifference relation Ii≈j, while the statement i ≻ j induces a preference relation Pi≻j, defined as

yIi≈jz for y = zij and zi ≠ zj,    yPi≻jz for y = zij and zi < zj,

where zij is the vector estimate obtained from a vector estimate z by interchanging the components zi and zj. For example, if z = (3, 1, 2), then z13 = (2, 1, 3). These relations underlie the definitions of the equality and superiority of one criterion over others used in criteria importance theory (see [6, 10]).

Assume that, in addition to Ω, there is information D that the growth of preferences along the criteria scale slows down; i.e., in addition to (2.1), the grade values νk satisfy the following inequalities for the differences between neighboring grade values:

ν2 − ν1 > ν3 − ν2 > … > νq − νq−1.   (2.2)
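Inequalities (2.1) and (2.2) are straightforward to check for a candidate assignment of grade values; a minimal Python sketch (the function name is ours):

```python
def is_first_ordinal_metric(values):
    """Check a candidate list of grade values v_1, ..., v_q against
    (2.1) (strictly increasing values) and (2.2) (strictly decreasing
    differences between neighboring values)."""
    gaps = [b - a for a, b in zip(values, values[1:])]
    increasing = all(g > 0 for g in gaps)                      # (2.1)
    slowing = all(g1 > g2 for g1, g2 in zip(gaps, gaps[1:]))   # (2.2)
    return increasing and slowing
```

For example, the values 1, 3, 4.5, 5.5 satisfy both chains of inequalities, while the equidistant values 1, 2, 3, 4 violate (2.2).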

Following [12], such a scale is called a first ordinal metric scale. If D is available, then, on the set Z, the statements i ≈ j and i ≻ j induce preference relations Pi≈j&D and Pi≻j&D, respectively, defined as

yPi≈j&Dz ⇔ (y = (z|zi + l, zj − l), zi + l ≤ zj − l) ∨ (y = (z|zj + l, zi − l), zj + l ≤ zi − l),   (2.3)

yPi≻j&Dz ⇔ (y = (z|zi + l, zj − l), zi + l ≤ zj − l),   (2.4)

where l ∈ Z0 and (z|zi + l, zj − l) is the vector estimate obtained from z by replacing zi with zi + l and zj with zj − l. These relations are easy to interpret. Definition (2.3) states that, if fi and fj are equally important criteria, then, after increasing the smaller of the components zi and zj by a positive integer l and simultaneously decreasing the larger component by the same number l so that the increased component is not larger than the decreased one, the resulting vector estimate is preferable to the original vector estimate z. Definition (2.4) says that, if fi is more important than fj and the component zi in a vector estimate z is smaller than the component zj, then, after increasing zi by a positive integer l and simultaneously decreasing zj by the same number l so that the former is no larger than the latter, the resulting vector estimate is preferable to z.
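Relations (2.3) and (2.4) can be tested directly from their definitions; a Python sketch under our reading of them (function names are ours; indices are 0-based, and l ranges over the positive integers that keep the components within the scale):

```python
def shifted(z, i, j, l):
    """(z | z_i + l, z_j - l): the estimate obtained from z by raising
    component i by l and lowering component j by l."""
    y = list(z)
    y[i] += l
    y[j] -= l
    return tuple(y)

def P_equal_D(y, z, i, j, q):
    """y P_{i~j & D} z per (2.3): some transfer l >= 1 between the equally
    important components i and j that moves them toward balance."""
    return any(
        (tuple(y) == shifted(z, i, j, l) and z[i] + l <= z[j] - l) or
        (tuple(y) == shifted(z, j, i, l) and z[j] + l <= z[i] - l)
        for l in range(1, q))

def P_more_important_D(y, z, i, j, q):
    """y P_{i>j & D} z per (2.4): only transfers from j toward the smaller,
    more important component i qualify."""
    return any(tuple(y) == shifted(z, i, j, l) and z[i] + l <= z[j] - l
               for l in range(1, q))
```

With q = 9, the estimate (3, 5) is preferable to (2, 6) under either relation for criteria 0 and 1, since transferring l = 1 moves the pair toward balance; transferring past the balance point does not qualify.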


Definitions (2.3) and (2.4) can also be treated as the definition of the first ordinal metric scale (with the growth of preferences slowing down along the scale) as applied to the problem in question, without using inequalities (2.2).

The nonstrict preference relation RΩ&D induced by the cumulative information Ω & D is defined as the transitive closure of the union of the Pareto relation R0 and all the relations Ii≈j, Pi≻j, Pi≈j&D, and Pi≻j&D induced by statements from Ω in view of D:

RΩ&D = TrCl( R0 ∪ ⋃_{ω ∈ Ω} (Rω ∪ Rω&D) ),   (2.5)

where TrCl denotes the transitive closure of a binary relation; Rω = Ii≈j and Rω&D = Pi≈j&D for ω = i ≈ j; and Rω = Pi≻j and Rω&D = Pi≻j&D for ω = i ≻ j. According to (2.5), yRΩ&Dz holds if and only if there exists a chain of s + 1 vector estimates u^t (with s depending on y and z) of the form

u^0 R^γ1 u^1,  u^1 R^γ2 u^2,  …,  u^{s−1} R^γs u^s,   (2.6)

where u^0 = y, u^s = z, and every γt is 0, i ≈ j, i ≈ j & D, i ≻ j, or i ≻ j & D. Moreover, if a strict relation P occurs at least once in (2.6) (i.e., P0, Pi≈j&D, Pi≻j, or Pi≻j&D), then yPΩ&Dz is true; otherwise, yIΩ&Dz holds. Chains of form (2.6) are also called explaining chains. Information Ω & D is (inherently) consistent if there is no chain of form (2.6) with u^0 = u^s in which P occurs at least once.

Definition (2.5) can be used directly to verify the consistency of Ω & D and to compare attainable vector estimates in pairs in terms of preferability if the binary relations on Z in this definition are represented in matrix form (see [13]). However, this path is unfeasible in practice even for relatively small numbers of criteria m and grades q. For example, if m = 10 and q = 10, the size of the adjacency matrix is 10^10 × 10^10. Therefore, we need another approach to the problem.

3. DECISION RULE FOR PROBLEMS WITH IMPORTANCE-ORDERED CRITERIA ON THE FIRST ORDINAL METRIC SCALE

In what follows, we consider the case when all the criteria are ordered in terms of importance. For notational simplicity, the criteria are assumed to be indexed in nonincreasing order of their relative importance (so that f1 is the most important criterion, while fm is the least important one). According to information Ω of the considered form, by grouping the indices of equally important criteria, the criteria index set M = {1, 2, …, m} can be represented as

M = M1 ∪ … ∪ Mn,   (3.1)

so that, if i ∈ Mp and j ∈ Mr, then i ≈ j for p = r and i ≻ j for p < r. Numbers β1, …, βm are called ordinal importance values of the criteria if they satisfy the conditions

i ≈ j ∈ Ω ⇒ βi = βj,    i ≻ j ∈ Ω ⇒ βi > βj.   (3.2)
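Given the grouping (3.1) listed in decreasing order of importance, values satisfying (3.2) can be produced mechanically; a sketch (the function name and the particular values chosen are ours; any positive, strictly decreasing group values would do):

```python
def ordinal_importance(groups):
    """From the partition M_1, ..., M_n of (3.1), listed in decreasing
    importance, build values satisfying (3.2) -- equal within a group and
    strictly decreasing across groups -- and normalize them so that they
    are positive and sum to 1."""
    n = len(groups)
    beta = {i: n - p for p, group in enumerate(groups) for i in group}
    total = sum(beta.values())
    return {i: b / total for i, b in beta.items()}
```

For example, for the groups [[1, 2], [3, 4], [5]] (i.e., Ω = {1 ≈ 2, 2 ≻ 3, 3 ≈ 4, 4 ≻ 5}), this gives the coefficients 3/11, 3/11, 2/11, 2/11, 1/11.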

If the numbers βi are positive and sum to 1, then they are called ordinal importance coefficients and are denoted by αi.

Theorem 1. The information Ω & D is consistent.

The proofs of the theorems are given at the end of this paper.

Let us derive a constructive form of decision rule (2.5), i.e., a rule defining the relation RΩ&D with the use of the information Ω & D. Consider the numbers

αik(y) = αi if yi > k, and αik(y) = 0 if yi ≤ k,   i = 1, 2, …, m,   k = 1, 2, …, q − 1.   (3.3)

They make up an m × (q − 1) matrix

A(y) = (αik(y))   (3.4)

corresponding to the vector estimate y. If yi = k, then the first k − 1 elements in the ith row of this matrix are equal to αi, while the next q − k elements are zero. Let αk(y) denote the vector composed of the elements of the kth column of A(y):

αk(y) = (α1k, α2k, …, αmk).   (3.5)


Consider the difference between matrices (3.4) corresponding to vector estimates y and z:

C(y, z) = A(y) − A(z).   (3.6)

According to (3.3), the elements of this matrix are given by

cik(y, z) = αik(y) − αik(z) = αi if zi ≤ k < yi; −αi if yi ≤ k < zi; 0 otherwise,   i = 1, 2, …, m,   k = 1, 2, …, q − 1.
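Matrices (3.3)–(3.6) translate directly into code; a sketch (function names are ours; rows and columns are stored 0-based, while the formulas above index k from 1 to q − 1):

```python
def importance_matrix(y, alpha, q):
    """A(y) per (3.3)-(3.4): the entry for row i and grade column k
    (k = 1, ..., q - 1) equals alpha_i if y_i > k and 0 otherwise."""
    return [[alpha[i] if y[i] > k else 0.0 for k in range(1, q)]
            for i in range(len(y))]

def difference_matrix(y, z, alpha, q):
    """C(y, z) = A(y) - A(z) per (3.6): row i carries alpha_i where
    z_i <= k < y_i, -alpha_i where y_i <= k < z_i, and 0 elsewhere."""
    Ay, Az = importance_matrix(y, alpha, q), importance_matrix(z, alpha, q)
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(Ay, Az)]
```

For the data of the example in Section 4 (y = (9, 5, 7, 4, 1), z = (1, 6, 8, 8, 2), q = 9), the resulting 5 × 8 matrix C(y, z) has eight positive and seven negative entries.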

A feature of C(y, z) is that each row i consists of a sequence of zeros followed by a sequence of αi (if yi > zi) or −αi (if yi < zi) and then again by a sequence of zeros. In more detail, for yi > zi, the row i of C(y, z) has the form

column index k:   1 … zi − 1 | zi … yi − 1 | yi … q − 1
row i elements:   0 …   0    | αi …   αi   | 0  …   0

while, for yi < zi, the row i of C(y, z) has the form

column index k:   1 … yi − 1 | yi  …  zi − 1 | zi … q − 1
row i elements:   0 …   0    | −αi …   −αi   | 0  …   0

If zi = 1 (or yi = 1) and yi ≠ zi, then there is no sequence of zeros at the beginning of the row. Similarly, if yi = q (or zi = q) and yi ≠ zi, then there is no sequence of zeros at the end of the row. If yi = zi, then the entire ith row consists of zeros.

If there are equally important criteria, they are indexed so that the corresponding rows of C(y, z) with positive elements have smaller indices; i.e.,

i < j for i ≈ j, yi > zi, yj < zj.   (3.7)

Based on the matrix C(y, z) thus constructed, decision rule (2.5) can be formulated as follows.

Theorem 2. The relation yRΩ&Dz holds if and only if there exists an injective mapping ηyz of the set of negative elements of C(y, z) to the set of its positive elements that maps each negative element cjτ(y, z) = −αj to a positive element cit(y, z) = αi such that i < j and t ≤ τ. Moreover, if the mapping ηyz is such that (i) the number of positive elements in C(y, z) equals the number of its negative elements and (ii) each negative element cjτ(y, z) = −αj corresponds to a positive element ciτ(y, z) = αi with i < j from the same column such that αi = αj, then yIΩ&Dz; but, if at least one of these conditions is violated, then yPΩ&Dz.

Since the criteria are indexed in order of nonincreasing importance and in view of (3.7), if cit(y, z) = ηyz(cjτ(y, z)), then αi ≥ αj. According to Theorem 2, for a mapping ηyz to exist, it is necessary that the number of positive elements of C(y, z) be no less than the number of its negative elements.

Obviously, a similar decision rule can be stated for the importance values βi if the numbers αik(y) are replaced by the numbers

βik(y) = βi if yi > k, and βik(y) = d if yi ≤ k,   i = 1, 2, …, m,   k = 1, 2, …, q − 1,

where d is an arbitrary number smaller than any βi (i.e., d < mini βi). Sometimes, this form is more convenient from a computational point of view.

Using the definition of ηyz in Theorem 2, we design an algorithm 𝔄0(y, z) that verifies the existence of such a mapping and constructs it. Let C+ik(y, z) denote the submatrix of C(y, z) consisting of its first i − 1 rows and first k columns. If cik(y, z) = −αi, then this submatrix contains the positive element corresponding to the indicated negative element in ηyz. Additionally, let C−ik(y, z) denote the submatrix of C(y, z) consisting of its rows indexed by i + 1, …, m and its columns indexed by k, …, q − 1. If cik(y, z) = αi, then this submatrix contains all the negative elements that can be associated with the indicated positive element in all the possible mappings ηyz.


A possible approach is to verify the conditions imposed on ηyz for all possible pairs of negative and positive elements of C(y, z), but there is a more efficient solution. The idea behind the algorithm 𝔄0(y, z) is to sequentially find and fix relations cit(y, z) = ηyz(cjτ(y, z)) that occur in at least one of the possible mappings ηyz, if the latter exist. Then the elements cjτ(y, z) = −αj and cit(y, z) = αi can be removed from consideration (e.g., by setting them to zero in C(y, z)). Importantly, if there is at least one mapping ηyz for the original matrix, then there are also mappings ηyz for the transformed matrix, namely, exactly those that contain the relation cit(y, z) = ηyz(cjτ(y, z)). If all the negative elements of C(y, z) become zero by sequentially applying this procedure, then ηyz is obtained in explicit form. If, at some iteration of the algorithm, the submatrix C+jτ(y, z) has no positive elements for some negative element cjτ(y, z) = −αj, this means that no mapping ηyz exists for the original matrix C(y, z) either.

To find the indicated relation cit(y, z) = ηyz(cjτ(y, z)), we choose a negative element in C(y, z) with the smallest column index τ (if there are several negative elements in this column, then we choose the one with the smallest row index j). The desired positive element must be in the submatrix C+jτ(y, z). Since the first τ − 1 columns of C(y, z) have no negative elements, the set of negative elements that can correspond to an arbitrary positive element cit(y, z) = αi from C+jτ(y, z) in all possible mappings ηyz belongs to C−iτ(y, z); i.e., this set is independent of the column index t ≤ τ. Therefore, the positive element cimax,t(y, z) = αimax with the largest row index imax < j has a preimage in the smallest (under inclusion) set of negative elements, namely, those from C−imax,τ(y, z); i.e., any other positive element of C+jτ(y, z) can have a preimage in the same set and, possibly, among other negative elements. Therefore, if there is a mapping ηyz in which the element cjτ(y, z) = −αj is associated with an arbitrary positive element of C+jτ(y, z), then, in ηyz, this positive element can always be interchanged with cimax,t(y, z) = αimax. In other words, from all the possible mappings ηyz, if they exist, we can always choose one in which cimax,t(y, z) = ηyz(cjτ(y, z)). The desired relation has been found.

Algorithm 𝔄0(y, z)

Step 1. If the matrix C(y, z) has no negative elements, then go to Step 6; otherwise, execute Step 2.
Step 2. Choose a negative element with the smallest column index τ in the matrix C(y, z). If this column contains several negative elements, then choose the one with the smallest row index j.
Step 3. If the submatrix C+jτ(y, z) has no positive elements, then go to Step 7; otherwise, execute Step 4.
Step 4. Choose a positive element with the largest row index i < j in the submatrix C+jτ(y, z). (It does not matter which of the first τ columns it belongs to; to be definite, we choose the element with the smallest column index t.)
Step 5. In the matrix C(y, z), set the elements cjτ(y, z) = −αj and cit(y, z) = αi to zero, and go to Step 1.
Step 6. The algorithm terminates: there is a mapping ηyz.
Step 7. The algorithm terminates: there is no mapping ηyz.

Note that, after executing Step 6, not only has the existence of a mapping ηyz been proved, but such a mapping has also been constructed.

4. EXAMPLE OF APPLYING THE ALGORITHM 𝔄0 TO PREFERABILITY COMPARISON OF VECTOR ESTIMATES

Let m = 5, q = 9, and Ω = {1 ≈ 2, 2 ≻ 3, 3 ≈ 4, 4 ≻ 5}, so that α1 = α2 > α3 = α4 > α5.
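Steps 1–7 of the algorithm admit a compact implementation; the following Python sketch reflects our reading of the steps (names are ours, indices are 0-based, and concrete positive numbers stand in for the symbolic coefficients αi, since only the signs and the index order matter), applied to the data of this section:

```python
def algorithm_A0(C):
    """Verify the existence of a mapping eta_yz for the matrix C = C(y, z)
    and construct it (Steps 1-7).  Returns a list of pairs
    ((i, t), (j, tau)) matching positive to negative elements, or None."""
    C = [row[:] for row in C]                    # keep the caller's matrix
    m, cols = len(C), len(C[0])
    eta = []
    while True:
        # Step 2: negative element with the smallest column index tau,
        # then the smallest row index j
        neg = next(((j, tau) for tau in range(cols) for j in range(m)
                    if C[j][tau] < 0), None)
        if neg is None:
            return eta                           # Step 6: mapping exists
        j, tau = neg
        # Steps 3-4: among rows above j and columns t <= tau, take the
        # positive element with the largest row index (smallest column)
        cand = [(i, t) for i in range(j) for t in range(tau + 1)
                if C[i][t] > 0]
        if not cand:
            return None                          # Step 7: no mapping
        i = max(r for r, _ in cand)
        t = min(c for r, c in cand if r == i)
        # Step 5: remove both elements from consideration
        C[j][tau] = 0
        C[i][t] = 0
        eta.append(((i, t), (j, tau)))

# Data of this section: y = (9, 5, 7, 4, 1), z = (1, 6, 8, 8, 2), q = 9.
q, alpha = 9, [3, 3, 2, 2, 1]                    # stand-in ordinal values
y, z = (9, 5, 7, 4, 1), (1, 6, 8, 8, 2)
A = lambda v: [[alpha[i] if v[i] > k else 0 for k in range(1, q)]
               for i in range(len(v))]
C = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A(y), A(z))]
eta = algorithm_A0(C)                            # seven matched pairs
```

On this data the algorithm performs seven matchings and one positive element remains unmatched, so the strict relation holds.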
Consider the vector estimates y = (9, 5, 7, 4, 1) and z = (1, 6, 8, 8, 2). For them, the 5 × 8 matrices (3.4) are

        ⎛ α1 α1 α1 α1 α1 α1 α1 α1 ⎞
        ⎜ α2 α2 α2 α2 0  0  0  0  ⎟
A(y) =  ⎜ α3 α3 α3 α3 α3 α3 0  0  ⎟
        ⎜ α4 α4 α4 0  0  0  0  0  ⎟
        ⎝ 0  0  0  0  0  0  0  0  ⎠

        ⎛ 0  0  0  0  0  0  0  0 ⎞
        ⎜ α2 α2 α2 α2 α2 0  0  0 ⎟
A(z) =  ⎜ α3 α3 α3 α3 α3 α3 α3 0 ⎟
        ⎜ α4 α4 α4 α4 α4 α4 α4 0 ⎟
        ⎝ α5 0  0  0  0  0  0  0 ⎠

Let us verify the relation yRΩ&Dz. Matrix (3.6) has the form
           ⎛ α1↓1  α1↓2 α1↓3 α1↓4  α1↓5  α1↓6  α1↓7  α1 ⎞
           ⎜ 0     0    0    0     −α2↓3 0     0     0  ⎟
C(y, z) =  ⎜ 0     0    0    0     0     0     −α3↓6 0  ⎟   (4.1)
           ⎜ 0     0    0    −α4↓2 −α4↓4 −α4↓5 −α4↓7 0  ⎟
           ⎝ −α5↓1 0    0    0     0     0     0     0  ⎠

This matrix contains eight positive and seven negative elements. Therefore, the necessary condition for the existence of a mapping ηyz holds, and we can apply the algorithm 𝔄0(y, z). The course of its execution is shown by the arrows attached to the nonzero elements involved: an element marked ↓n becomes zero at iteration n of 𝔄0(y, z) (an element that does not become zero has no arrow). The first and last iterations can be described in detail as follows.

Iteration 1
Step 1. Since matrix (4.1) has negative elements, go to Step 2.
Step 2. Choose the element c51(y, z) = −α5.
Step 3. Since the submatrix C+51(y, z) has a positive element, go to Step 4.
Step 4. Choose the positive element c11(y, z) = α1 in the submatrix C+51(y, z).
Step 5. Set the elements c51(y, z) and c11(y, z) to zero and go to Step 1 of Iteration 2, etc.

Iteration 8
Step 1. The matrix obtained at Step 5 of Iteration 7 has no negative elements (it has only one nonzero element, c18(y, z) = α1 > 0), so go to Step 6.
Step 6. The algorithm terminates: a mapping ηyz exists.

Thus, the following mapping ηyz has been constructed:

c51(y, z) = −α5 → c11(y, z) = α1,
c44(y, z) = −α4 → c12(y, z) = α1,
c25(y, z) = −α2 → c13(y, z) = α1,
c45(y, z) = −α4 → c14(y, z) = α1,
c46(y, z) = −α4 → c15(y, z) = α1,
c37(y, z) = −α3 → c16(y, z) = α1,
c47(y, z) = −α4 → c17(y, z) = α1.

Therefore, yRΩ&Dz holds. Since the number of positive elements in matrix (4.1) is larger than the number of negative elements, according to Theorem 2, we have yPΩ&Dz.

5. ALGORITHM FOR CONSTRUCTING AN EXPLAINING CHAIN

The algorithm 𝔄0(y, z) verifies whether or not the relation yRΩ&Dz holds (and also the relations yPΩ&Dz and yIΩ&Dz), but, if so, it does not construct a chain of form (2.6), whose knowledge is important for explaining the result of the preferability comparison of vector estimates. To construct explaining chains, we can use a recursive algorithm 𝔄(u, v) (see below). Its input data are vectors u, v ∈ Z = Z0^m for which a mapping ηuv exists. The algorithm 𝔄(u, v) either establishes uRΩ&Dv directly or produces a vector estimate w for which uRΩ&Dw or wRΩ&Dv. The algorithm is recursive: if uRΩ&Dw, then the algorithm 𝔄(w, v) starts, and, if wRΩ&Dv, then the algorithm 𝔄(u, w) starts. First, we assume that u = y and v = z.


The algorithm 𝔄(u, v) consists of three stages. First, we describe the basic ideas underlying each stage and then briefly outline the entire algorithm in steps.

Stage 1. Construct the matrix C(u, v). If it has no negative elements, then, obviously, uR0v; if, moreover, there are positive elements, then uP0v, otherwise u = v. Terminate the algorithm.

If the matrix has negative elements, then consider one with the largest column index τ (if this column contains several such elements, then choose the one with the smallest row index j). The existence condition for a mapping ηuv implies that, for the considered element cjτ(u, v) = −αj, there must be a positive element cit(u, v) = ηuv(cjτ(u, v)) in C+jτ(u, v). The case when the column τ of this submatrix contains positive elements ciτ(u, v) = αi, i < j, is considered at Stage 2, while the case of no such elements is treated at Stage 3; i.e., the next two stages are alternatives.

Stage 2. In the column τ, consider the positive element ciτ(u, v) = αi nearest from above to the element cjτ(u, v) = −αj (i.e., the one with the maximum row index i < j). Since the columns τ + 1, …, q − 1 contain no negative elements, the set of negative elements of C−iτ(u, v) that can correspond to the element ciτ(u, v) = αi in all possible mappings ηuv consists only of the elements of the column τ below the row i. Moreover, the negative element cjτ(u, v) = −αj is the uppermost of them. Therefore, in any mapping ηuv, the positive element corresponding to cjτ(u, v) = −αj can be interchanged with the element ciτ(u, v) = αi. In other words, among all the possible mappings ηuv there is always one for which ciτ(u, v) = ηuv(cjτ(u, v)). Then, following the idea behind the algorithm 𝔄0(y, z), we can fix the relation ciτ(u, v) = ηuv(cjτ(u, v)) by setting the elements cjτ(u, v) = −αj and ciτ(u, v) = αi to zero.
However, in contrast to 𝔄0(y, z), they are not set to zero artificially; rather, the vector estimate v is transformed (by varying two of its components) into a new vector estimate w such that these elements of C(u, w) become zero. Since the main goal of the algorithm 𝔄(u, v) is to construct a chain of form (2.6) explaining the validity of uRΩ&Dv, the desired transformation must satisfy the condition wRωv, where ω is 0, i ≈ j, i ≈ j & D, i ≻ j, or i ≻ j & D (or there must exist a chain of form (2.6) explaining wRΩ&Dv). The condition that the elements cjτ(u, v) = −αj and ciτ(u, v) = αi in a single column cancel out is satisfied by the transformation w = vij (recall that this notation means that w is obtained from v by permuting the components vi and vj), for which, by definition, wIi≈jv or wPi≻jv.

In the general case, this transformation affects not only the elements cjτ(u, v) = −αj and ciτ(u, v) = αi but also some of the elements in the first τ − 1 columns. Moreover, it is shown below that, in the transition from the matrix C(u, v) to C(u, w), new nonzero elements can appear in these columns or some of the old elements can become zero. However, this does not matter, since it suffices to ensure that the required elements in the column τ become zero, while the other columns will be considered sequentially at the next iterations of 𝔄(u, v).

It is important that, under this transformation, the existence of a mapping be preserved in the transition from C(u, v) to C(u, w): only then can the algorithm 𝔄(u, w) be recursively executed. In the general case, it turns out that this property is not preserved under the transformation w = vij. For example, it is possible that (τ = 2, j = 4, i = 1)

          ⎛ α1  α1  ⎞             ⎛ 0   0 ⎞
C(u, v) = ⎜ −α2 0   ⎟,  C(u, w) = ⎜ −α2 0 ⎟,
          ⎜ α3  0   ⎟             ⎜ α3  0 ⎟
          ⎝ −α4 −α4 ⎠             ⎝ 0   0 ⎠

where w = v14. Here, ηuv exists for C(u, v), whereas no ηuw exists for C(u, w).

This difficulty can be overcome by decomposing w = vij into a sequence of transformations v1 = v^{j1 j}, v2 = v1^{j2 j1}, …, vs = vs−1^{js js−1}, w = vs^{i js}. Since, under a transformation of the form w = v^{j* j} (i < j* < j), the element cjτ(u, v) = −αj is lifted in C(u, w) to the intermediate row j* (i.e., cj*τ(u, w) = −αj*), the same element is chosen at Stage 1 at the next iteration of 𝔄(u, w). Therefore, this sequence of transformations of vector estimates can be represented as a sequence of iterations of 𝔄(u, v), each, except for the last, involving a transformation of the form w = v^{j* j} and the last involving the transformation w = vij.


Table 1
Row | Variant 1 | Variant 2
i   | αi → 0    | αi → 0
j   | −αj → 0   | 0 → αj

Table 2
Row | Variant 1 | Variant 2
j*  | 0 → −αj*  | 0 → −αj*
j   | −αj → 0   | 0 → αj

To find the intermediate row j*, we consider the submatrix of C(u, v) consisting of the rows {i + 1, …, j − 1} and the columns {vi, …, τ − 1} (here, vi is the index of the first positive element in the ith row). If this submatrix is empty (for example, when i and j are neighboring rows) or has no negative elements, then the transformation w = vij is performed (which completes the zeroing of the elements). If this submatrix has negative elements, then we choose the one with the largest column index τ* (if this column contains several such elements, we choose the one with the smallest row index j*). In this case, the transformation w = v^{j* j} is performed.

In the above example, the submatrix consisting of the rows 2, 3 and the column 1 contains such a negative element: j* = 2, τ* = 1. Then the transformed matrix is given by

          ⎛ α1  α1  ⎞
C(u, w) = ⎜ −α2 −α2 ⎟,
          ⎜ α3  0   ⎟
          ⎝ −α4 0   ⎠

where w = v24. Here, c42(u, v) = −α4 is lifted to the second (intermediate) row with the preservation of a mapping ηuw. At the next iteration of 𝔄(u, w), this negative element is lifted to the first row and becomes zero together with the corresponding positive element.

The procedure completing the zeroing of elements can be described in detail as follows. Consider a vector estimate w = vij for which, by definition, wIi≈jv or wPi≻jv. Let us show that a mapping ηuw exists. It suffices to prove this for the columns k ∈ {vi, …, τ}. For each of them, there are two variants of the transformation in the transition from the matrix C(u, v) to C(u, w) (see Table 1).

For k = τ, we have only variant 1. As k decreases, variant 1 occurs in the kth column as long as cjk(u, v) = −αj. Since these columns of C(u, v) have no negative elements between the rows i and j, from all possible mappings ηuv, we can always choose one such that cik(u, v) = ηuv(cjk(u, v)) for all k for which variant 1 occurs. In C(u, w), these corresponding elements become zero. Therefore, ηuw can be obtained from ηuv by eliminating these correspondences.

Variant 2 occurs in the remaining columns, if any. Consider one of these columns k. Since the submatrix of C(u, v) consisting of the rows {i + 1, …, j − 1} and the columns {k, …, q − 1} has no negative elements, the element cik(u, v) = αi in the mapping ηuv can correspond only to negative elements with a row index larger than j. If there is such a correspondence in ηuv, then ηuw can be obtained from ηuv by replacing cik(u, v) = αi with cjk(u, w) = αj.

Thus, we have explicitly described how to pass from ηuv to ηuw in the transition from the matrix C(u, v) to C(u, w) for each possible variant of the transformation in the columns k ∈ {vi, …, τ}. This means that the existence of ηuv implies the existence of ηuw. Therefore, we can proceed to the execution of the algorithm 𝔄(u, w).
Now we describe the procedure for lifting to an intermediate row j*. Consider a vector estimate w = v^{j* j} for which, by definition, wIj*≈jv or wPj*≻jv. Let us show that a mapping ηuw exists. It suffices to prove this for the columns k ∈ {τ* + 1, …, τ}. For each of them, there are two variants of the transformation in the transition from the matrix C(u, v) to C(u, w) (see Table 2).


For k = τ, we have only variant 1. As k decreases, variant 1 occurs in the kth column as long as cjk(u, v) = −αj. The maximality of τ* implies that the submatrix of C(u, v) consisting of the rows {i + 1, …, j − 1} and the columns {τ* + 1, …, τ} has no negative elements. Moreover, cik(u, v) = αi for all k ∈ {τ* + 1, …, τ}, since τ* ≥ vi. Therefore, from all possible mappings ηuv, we can always choose one such that cik(u, v) = ηuv(cjk(u, v)) for all k for which variant 1 occurs. Then the mapping ηuw can be obtained from ηuv by replacing these correspondences with cik(u, w) = ηuw(cj*k(u, w)).

In the remaining columns, if any, we have variant 2. Consider one of these columns k. Since the submatrix of C(u, v) consisting of the rows {i + 1, …, j − 1} and the columns {k, …, q − 1} has no negative elements, the element cik(u, v) = αi can correspond in ηuv only to negative elements with a row index larger than j. If there is such a correspondence in ηuv, then cik(u, v) = αi can be replaced by cjk(u, w) = αj in the mapping ηuw as compared with ηuv. Then cik(u, w) = αi is free to correspond to the negative element cj*k(u, w) = −αj* appearing in C(u, w); i.e., the correspondence cik(u, w) = ηuw(cj*k(u, w)) is added to ηuw in comparison with ηuv.

Thus, we have explicitly described how to pass from ηuv to ηuw in the transition from the matrix C(u, v) to C(u, w) for each possible variant of the transformation in the columns k ∈ {τ* + 1, …, τ}. This means that the existence of ηuv implies the existence of ηuw. Therefore, we can proceed to the execution of the algorithm 𝔄(u, w).

Stage 3. If the first j − 1 rows of the column τ have no positive elements, then the considered element cjτ(u, v) = −αj corresponds to a positive element cit(u, v) = αi in the first j − 1 rows and the first τ − 1 columns. As at Stage 2, the goal is to make the element cjτ(u, v) = −αj zero by applying suitable transformations of u and v.
In some cases, as will be shown below, for this purpose, the entire row j has to be sequentially made zero, starting at the first element cjτ1(u,v) = −αj, where τ1 = uj. Let i be the index of the row nearest from above to j that contains positive elements in the first τ − 1 columns. Later, this index will be reduced, if necessary. For the current index i, let t1 = vi and t = ui − 1 denote the indices of the first and last positive elements, respectively.
First, we verify whether the element cjτ(u,v) = −αj can be made zero immediately. If there exists a mapping ηuv for which cit1(u,v) = ηuv(cjτ(u,v)), then consider the vector estimate w = (v|vi + 1, vj − 1). By definition, it is true that wPi ≈ j & Dv or wPi Ɑ j & Dv. In the matrix C(u, w), in comparison with C(u, v), the corresponding elements cjτ(u,v) = −αj and cit1(u,v) = αi become zero. A mapping ηuw exists, since it can be obtained from ηuv by eliminating this correspondence. Therefore, we can proceed to the execution of the algorithm ᑨ(u, w). However, a case is possible when the element cit1(u,v) = αi in all the possible mappings ηuv is associated with other negative elements; i.e., cit1(u,v) ≠ ηuv(cjτ(u,v)), for example,

⎛  α1   α1    0 ⎞
⎜   0    0  −α2 ⎟
⎝ −α3    0    0 ⎠.

Here, the element c11(u,v) = α1 is associated with c31(u,v) = −α3. Note that there is always a mapping ηuv for which the associated positive elements in the ith row, if they exist, are arranged sequentially on the left, starting at the first element cit1(u,v) = αi.
Then we verify whether at least part of the row j can be made zero due to the positive elements in the row i. Consider the case of t < τ1. Since ui ≤ uj, we can try to "degrade" the vector estimate u according to (2.3) or (2.4) by subtracting a positive integer from ui and adding it to uj. The maximum positive integer L is found for which there exists a mapping ηuv with ci,t−l(u,v) = ηuv(cj,τ1+l(u,v)) for all l ∈ {0, 1, …, L − 1}. The following variants are possible:
1. The element cit(u,v) = αi in all the possible mappings ηuv cannot correspond to cjτ1(u,v) = −αj, since the former is related to other negative elements. Then we can conclude that all the positive elements of the row i are interrelated and cannot correspond to any negative element of the row j in any mapping ηuv. Therefore, i can be reduced to the index of the next row that contains positive elements in the first τ − 1 columns, and this row is verified.



Table 3

Row | Variant 1 | Variant 2
 i  | αi → 0    | 0 → −αi
 j  | −αj → 0   | −αj → 0

2. There exists a mapping ηuv in which cit(u,v) = ηuv(cjτ1(u,v)). Then L ≥ 1, and we can consider the vector estimate w = (u|ui − L, uj + L), for which, by definition, uPi ≈ j & Dw or uPi Ɑ j & Dw. In the matrix C(w, v), in contrast to C(u, v), the corresponding elements cj,τ1+l(u,v) = −αj and ci,t−l(u,v) = αi (l = 0, 1, …, L − 1) become zero. A mapping ηwv exists, since it can be obtained from ηuv by eliminating these correspondences. Therefore, we can proceed to the execution of the algorithm ᑨ(w, v). Note that, if L is equal to the number of negative elements in the row j of C(u, v), then the entire jth row of C(w, v) becomes zero.
Now consider the case when τ1 ≤ t; i.e., the last positive element in the row i is over one of the negative elements of the row j. Then we necessarily have t < τ; otherwise, Stage 2 would be executed. In this case, some of the negative elements on the left in the row j can be made zero by canceling with the positive elements of the ith row. Consider the vector estimate w = uij, for which, by definition, uIi ≈ jw or uPi Ɑ jw. We show that a mapping ηwv exists. It suffices to prove this for the columns k ∈ {τ1, …, t}. For each of them, there are two variants of the transformation in the transition from the matrix C(u, v) to C(w, v) (see Table 3). Variant 1 occurs for the column t and for decreasing k as long as there are positive elements in the ith row or negative elements in the jth row of C(u, v). Between the rows i and j of C(u, v), there can be only negative elements and associated positive elements, which have no effect on the negative elements in the jth row. Therefore, from all the possible mappings ηuv, we can always choose one such that cik(u,v) = ηuv(cjk(u,v)) for all k for which variant 1 occurs. In the matrix C(w, v), these corresponding elements become zero. Therefore, ηwv can be obtained from ηuv by eliminating these correspondences. If t1 > τ1, then variant 2 occurs for the columns k ∈ {τ1, …, t1 − 1}.
The positive elements that correspond to cjk(u,v) = −αj in ηuv lie in rows above the row i. Therefore, in ηwv, they can correspond to the elements cik(w,v) = −αi. Thus, we have explicitly described how to pass from ηuv to ηwv in the transition from the matrix C(u, v) to C(w, v) for each possible variant of transformations in the columns k ∈ {τ1, …, t}. This means that the existence of ηuv implies the existence of ηwv. Therefore, we can proceed to the execution of the algorithm ᑨ(w, v). Moreover, the next time the algorithm is executed, the ith row of C(w, v) either has no positive elements or satisfies t < τ1.

Now the algorithm is described in steps.

Algorithm ᑨ(u, v)

Stage 1
Step 1.1. Construct the matrix C(u, v) as described in Section 3 with condition (3.7) satisfied. If the matrix has no negative elements, then go to Step 4; otherwise, execute Step 1.2.
Step 1.2. Choose a negative element with the largest column index τ (if this column contains several such elements, then choose the one with the smallest row index j): cjτ(u,v) = −αj.
Step 1.3. If the column τ has positive elements in the rows i < j, then go to Stage 2; otherwise, go to Stage 3.

Stage 2
Step 2.1. In the column τ, choose a positive element ciτ(u,v) = αi with the largest row index i < j.
Step 2.2. Construct the submatrix of C(u, v) consisting of the rows {i + 1, …, j − 1} and the columns {vi, …, τ − 1} (where vi is the index of the first positive element in the ith row). If this submatrix is empty or does not contain negative elements, then execute Step 2.3. If it has negative elements, then go to Step 2.4.



Step 2.3. Construct the vector estimate w = vij (for which wIi ≈ jv or wPi Ɑ jv). Execute the algorithm ᑨ(u, w).
Step 2.4. In the considered submatrix (see Step 2.2), choose a column with the largest index τ* that contains negative elements. In this column, choose a negative element with the smallest row index j* (indexing is with respect to the matrix C(u, v)).
Step 2.5. Construct the vector estimate w = vj*j (for which wIj* ≈ jv or wPj* Ɑ jv). Execute the algorithm ᑨ(u, w).

Stage 3
Step 3.1. Consider the index set I⁺τ−1 of rows of C(u, v) that contain positive elements in the first τ − 1 columns. Define a variable index i ∈ I⁺τ−1, which is first set equal to the maximum index from I⁺τ−1 that satisfies i < j.
Step 3.2. For the current index i, let t1 = vi and t = ui − 1 denote the indices of the first and last positive elements, respectively. If there is a mapping ηuv in which cit1(u,v) = ηuv(cjτ(u,v)), then execute Step 3.3; otherwise, go to Step 3.4.
Step 3.3. Construct the vector estimate w = (v|vi + 1, vj − 1) (for which wPi ≈ j & Dv or wPi Ɑ j & Dv). Execute the algorithm ᑨ(u, w).
Step 3.4. If t < τ1, then execute Step 3.5; if t ≥ τ1, then go to Step 3.7.
Step 3.5. If there exists a mapping ηuv in which cit(u,v) = ηuv(cjτ1(u,v)), then execute Step 3.6; otherwise, reduce i to the next index from I⁺τ−1 and go to Step 3.2.
Step 3.6. Find the maximum positive integer L for which there exists a mapping ηuv such that ci,t−l(u,v) = ηuv(cj,τ1+l(u,v)) for all l ∈ {0, 1, …, L − 1}. The fulfillment of the condition at Step 3.5 ensures that L ≥ 1. Construct the vector estimate w = (u|ui − L, uj + L) (for which uPi ≈ j & Dw or uPi Ɑ j & Dw). Execute the algorithm ᑨ(w, v).
Step 3.7. Construct the vector estimate w = uij (for which uIi ≈ jw or uPi Ɑ jw). Execute the algorithm ᑨ(w, v).

Step 4. The algorithm terminates: it holds that uR0v (moreover, if C(u, v) has positive elements, then uP0v; otherwise, u = v).
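Steps 1.1 and 1.2 can be sketched in code. The sketch below assumes (from identity (7.1) and the worked matrices in Section 6, since the definition of A(·) in Section 3 is not reproduced here) that C(u, v) = A(u) − A(v), with A(u) carrying αi in row i, columns 1, …, ui − 1; the numeric values of the importance coefficients are illustrative placeholders.

```python
# Sketch of Steps 1.1-1.2, assuming A(u) has alpha_i in row i,
# columns 1..u_i - 1 (the form seen in the worked matrices of Section 6).

def build_C(u, v, alpha, q):
    """C(u, v) = A(u) - A(v); entry (i, k) is +alpha_i, -alpha_i, or 0."""
    m = len(u)
    C = [[0.0] * (q - 1) for _ in range(m)]
    for i in range(m):
        for k in range(1, q):                  # 1-based column index k
            a_u = alpha[i] if k <= u[i] - 1 else 0.0
            a_v = alpha[i] if k <= v[i] - 1 else 0.0
            C[i][k - 1] = a_u - a_v
    return C

def pick_pivot(C):
    """Step 1.2: the negative element with the largest column index tau;
    ties are broken by the smallest row index j. Returns (j, tau), 1-based,
    or None when no negative element remains (the algorithm is at Step 4)."""
    for k in range(len(C[0]) - 1, -1, -1):
        for j, row in enumerate(C):
            if row[k] < 0:
                return j + 1, k + 1
    return None

# First iteration of the worked example in Section 6: u = y, v = z.
y, z = (9, 5, 7, 4, 1), (1, 6, 8, 8, 2)
alpha = (3.0, 3.0, 2.0, 2.0, 1.0)              # illustrative coefficients
C = build_C(y, z, alpha, q=9)
print(pick_pivot(C))                           # (3, 7): the element c_37 of Step 1.2
```

The pivot (3, 7) matches the choice made at Step 1.2 of the first iteration in Section 6.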
Let us show that a matrix C(u, v) with no negative elements is produced after a finite number of iterations of ᑨ(u, v); i.e., the algorithm terminates at Step 4.
After a finite number of executions of Stage 2, the corresponding positive and negative elements in the column τ cancel out. If there are no more negative elements in τ, a column with a smaller index is considered, etc. If τ still contains negative elements, then Stage 3 is executed. As a result, either the element cjτ(u,v) = −αj becomes zero at Step 3.3 or the entire row j is sequentially made zero from the left at Steps 3.6 and 3.7. This follows from the fact that, for the current first negative element cjτ1(u,v) = −αj, all the positive elements from C⁺jτ1(u, v) are searched through until the corresponding element is found (which exists in ηuv by assumption). At each iteration of the algorithm, a new vector estimate w is introduced such that uRΩ & Dw or wRΩ & Dv. As a result, the following two chains are obtained:

yR^{γ1}u^1, u^1R^{γ2}u^2, …, u^{p−1}R^{γp}u^p   and   u^{p+1}R^{γp+1}u^{p+2}, …, u^{s−1}R^{γs}z,

where each γt is i ≈ j, i ≈ j & D, i Ɑ j, or i Ɑ j & D. To complete the execution of the algorithm at Step 4, these chains are matched using the Pareto relation u^pR0u^{p+1}. As a result, we obtain a chain of form (2.6) in which u^0 = y and u^s = z. Therefore, yRΩ & Dz holds.

6. EXAMPLE OF CONSTRUCTING AN EXPLAINING CHAIN

To illustrate the performance of the algorithm ᑨ(u, v), we use it to construct an explaining chain for the example from Section 4. For the vector estimates y = (9, 5, 7, 4, 1) and z = (1, 6, 8, 8, 2), it was shown above that a mapping ηyz exists.



Algorithm ᑨ(u = y, v = z)
Stage 1
Step 1.1. Construct the matrix C(u, v) = C(y, z) (see (4.1)).
Step 1.2. Choose the element c37(u,v) = −α3, so that j = 3 and τ = 7.
Step 1.3. The first two rows have a positive element in column 7; hence, execute Stage 2.

Stage 2
Step 2.1. Choose the element c17(u,v) = α1 (i = 1 < j).
Step 2.2. The submatrix consisting of row 2 and columns 1, …, 6 contains a negative element; therefore, go to Step 2.4.
Step 2.4. In this submatrix, choose the element c25(u,v) = −α2, so that j* = 2 and τ* = 5.
Step 2.5. Construct the vector estimate w = v23 = (1, 8, 6, 8, 2), for which wP2 Ɑ 3v. Note that, in the transition from the matrix C(u, v) to C(u, w), variant 1 occurs in column 7 and variant 2 occurs in column 6.

Algorithm ᑨ(u = y, v = (1, 8, 6, 8, 2))
Stage 1
Step 1.1. Construct the matrix

C(u, v) =
⎛  α1   α1   α1   α1   α1   α1   α1   α1 ⎞
⎜   0    0    0    0  −α2  −α2  −α2    0 ⎟
⎜   0    0    0    0    0   α3    0    0 ⎟
⎜   0    0    0  −α4  −α4  −α4  −α4    0 ⎟
⎝ −α5    0    0    0    0    0    0    0 ⎠.

Step 1.2. Choose the element c27(u,v) = −α2, so that j = 2 and τ = 7.
Step 1.3. The first row has a positive element in column 7; hence, go to the next stage.

Stage 2
Step 2.1. Choose the element c17(u,v) = α1 (i = 1 < j).
Step 2.2. Since i and j are neighboring rows, the submatrix is empty; execute Step 2.3.
Step 2.3. Construct the vector estimate w = v12 = (8, 1, 6, 8, 2), for which wI1 ≈ 2v. Note that, in the transition from the matrix C(u, v) to C(u, w), variant 1 occurs in columns 5–7 and variant 2 occurs in columns 1–4.

Algorithm ᑨ(u = y, v = (8, 1, 6, 8, 2))
Stage 1
Step 1.1. Construct the matrix

C(u, v) =
⎛   0    0    0    0    0    0    0   α1 ⎞
⎜  α2   α2   α2   α2    0    0    0    0 ⎟
⎜   0    0    0    0    0   α3    0    0 ⎟
⎜   0    0    0  −α4  −α4  −α4  −α4    0 ⎟
⎝ −α5    0    0    0    0    0    0    0 ⎠.

Step 1.2. Choose the element c47(u,v) = −α4, so that j = 4 and τ = 7.
Step 1.3. The first three rows have no positive elements in column 7; therefore, go to Stage 3.



Stage 3
Step 3.1. Choose the row i = 3 < j from the set I⁺6.
Step 3.2. For this row, t1 = t = 6. Since there exists a mapping ηuv in which c36(u,v) = ηuv(c47(u,v)), execute Step 3.3.
Step 3.3. Construct the vector estimate w = (v|v3 + 1, v4 − 1) = (8, 1, 7, 7, 2), for which wP3 ≈ 4 & Dv.

Algorithm ᑨ(u = y, v = (8, 1, 7, 7, 2))
Stage 1
Step 1.1. Construct the matrix

C(u, v) =
⎛   0    0    0    0    0    0    0   α1 ⎞
⎜  α2   α2   α2   α2    0    0    0    0 ⎟
⎜   0    0    0    0    0    0    0    0 ⎟
⎜   0    0    0  −α4  −α4  −α4    0    0 ⎟
⎝ −α5    0    0    0    0    0    0    0 ⎠.

Step 1.2. Choose the element c46(u,v) = −α4, so that j = 4 and τ = 6.
Step 1.3. The first three rows have no positive elements in column 6; therefore, go to Stage 3.

Stage 3
Step 3.1. Choose the row i = 2 < j from the set I⁺5.
Step 3.2. For this row, t1 = 1 and t = 4. Since the element c21(u,v) = α2 is associated with c51(u,v) = −α5, there exists no mapping ηuv in which c21(u,v) = ηuv(c46(u,v)). Therefore, go to Step 3.4.
Step 3.4. Since t ≥ τ1, go to Step 3.7.
Step 3.7. Construct the vector estimate w = u24 = (9, 4, 7, 5, 1), for which uP2 Ɑ 4w. Note that, in the transition from the matrix C(u, v) to C(w, v), variant 1 occurs in column 4.

Algorithm ᑨ(u = (9, 4, 7, 5, 1), v = (8, 1, 7, 7, 2))
Stage 1
Step 1.1. Construct the matrix

C(u, v) =
⎛   0    0    0    0    0    0    0   α1 ⎞
⎜  α2   α2   α2    0    0    0    0    0 ⎟
⎜   0    0    0    0    0    0    0    0 ⎟
⎜   0    0    0    0  −α4  −α4    0    0 ⎟
⎝ −α5    0    0    0    0    0    0    0 ⎠.

Step 1.2. Choose the element c46(u,v) = −α4, so that j = 4 and τ = 6.
Step 1.3. The first three rows have no positive elements in column 6; therefore, go to Stage 3.

Stage 3
Step 3.1. Choose the row i = 2 < j from the set I⁺5.
Step 3.2. For this row, t1 = 1 and t = 3. Since there is no mapping ηuv in which c21(u,v) = ηuv(c46(u,v)), go to Step 3.4.
Step 3.4. Since now t < τ1, execute Step 3.5.
Step 3.5. Since there exists a mapping ηuv in which c23(u,v) = ηuv(c45(u,v)), execute Step 3.6.



Step 3.6. Since there exists a mapping ηuv in which, in addition to c23(u,v) = ηuv(c45(u,v)), it is true that c22(u,v) = ηuv(c46(u,v)), we have L = 2. Therefore, consider the vector estimate w = (u|u2 − 2, u4 + 2) = (9, 2, 7, 7, 1), for which uP2 Ɑ 4 & Dw.

Algorithm ᑨ(u = (9, 2, 7, 7, 1), v = (8, 1, 7, 7, 2))
Stage 1
Step 1.1. Construct the matrix

C(u, v) =
⎛   0    0    0    0    0    0    0   α1 ⎞
⎜  α2    0    0    0    0    0    0    0 ⎟
⎜   0    0    0    0    0    0    0    0 ⎟
⎜   0    0    0    0    0    0    0    0 ⎟
⎝ −α5    0    0    0    0    0    0    0 ⎠.

Step 1.2. Choose the element c51(u,v) = −α5, so that j = 5 and τ = 1.
Step 1.3. The first four rows have a positive element in column 1; therefore, go to the next stage.

Stage 2
Step 2.1. Choose the element c21(u,v) = α2 (i = 2 < j).
Step 2.2. Since v2 = 1 > τ − 1 = 0, the submatrix consisting of the rows {i + 1, …, j − 1} and the columns {vi, …, τ − 1} is empty. Therefore, execute the next step.
Step 2.3. Construct the vector estimate w = v25 = (8, 2, 7, 7, 1), for which wP2 Ɑ 5v.

Algorithm ᑨ(u = (9, 2, 7, 7, 1), v = (8, 2, 7, 7, 1))
Stage 1
Step 1.1. Construct the matrix

C(u, v) =
⎛ 0  0  0  0  0  0  0  α1 ⎞
⎜ 0  0  0  0  0  0  0   0 ⎟
⎜ 0  0  0  0  0  0  0   0 ⎟
⎜ 0  0  0  0  0  0  0   0 ⎟
⎝ 0  0  0  0  0  0  0   0 ⎠.

There are no negative elements in it; therefore, go to Step 4.
Step 4. Conclude that uP0v, since the matrix C(u, v) has a positive element.

The algorithm produces the following explaining chain:

(9, 5, 7, 4, 1) P2 Ɑ 4 (9, 4, 7, 5, 1),
(9, 4, 7, 5, 1) P2 Ɑ 4 & D (9, 2, 7, 7, 1),
(9, 2, 7, 7, 1) P0 (8, 2, 7, 7, 1),
(8, 2, 7, 7, 1) P2 Ɑ 5 (8, 1, 7, 7, 2),
(8, 1, 7, 7, 2) P3 ≈ 4 & D (8, 1, 6, 8, 2),
(8, 1, 6, 8, 2) I1 ≈ 2 (1, 8, 6, 8, 2),
(1, 8, 6, 8, 2) P2 Ɑ 3 (1, 6, 8, 8, 2).
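As a cross-check of this chain, the function ϕ(y|α) = ∑ αi ln yi from the proof of Theorem 1 must be nonincreasing along any explaining chain: strictly decreasing across P-links and constant across I-links. A minimal numerical sketch, with illustrative coefficients α = (3, 3, 2, 2, 1) chosen to respect the importance information of the example (α1 = α2 > α3 = α4 > α5):

```python
import math

# phi(y|alpha) = sum(alpha_i * ln(y_i)) must not increase along the
# explaining chain; it stays constant only across the I-link.
# alpha = (3, 3, 2, 2, 1) is an assumed, consistent choice of coefficients.

def phi(y, alpha):
    return sum(a * math.log(x) for a, x in zip(alpha, y))

chain = [
    (9, 5, 7, 4, 1),  # y
    (9, 4, 7, 5, 1),  # P, 2 more important than 4
    (9, 2, 7, 7, 1),  # P, 2 more important than 4, with D
    (8, 2, 7, 7, 1),  # P0 (Pareto)
    (8, 1, 7, 7, 2),  # P, 2 more important than 5
    (8, 1, 6, 8, 2),  # P, 3 equal to 4, with D
    (1, 8, 6, 8, 2),  # I, 1 equal to 2 (phi unchanged)
    (1, 6, 8, 8, 2),  # P, 2 more important than 3 -> z
]
alpha = (3.0, 3.0, 2.0, 2.0, 1.0)

values = [phi(u, alpha) for u in chain]
for a, b in zip(values, values[1:]):
    assert a >= b - 1e-12          # phi never increases along the chain
assert math.isclose(values[5], values[6])   # the single I-link keeps phi fixed
print(values[0] > values[-1])               # True: y is strictly preferred to z
```

Every P-link in the chain produces a strict drop in ϕ, so ϕ(y|α) > ϕ(z|α) for these coefficients, in agreement with yPΩ & Dz.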

7. PROOFS OF THE THEOREMS AND LEMMAS

Proof of Theorem 1. On Z, define the function

ϕ(y|α) = ∑_{i=1}^{m} αi ln yi,

in which the ordinal importance coefficients are consistent with the information Ω in the sense indicated above: if i ∈ Mp and j ∈ Mr (see (3.1), (3.2)), then αi = αj for p = r and αi > αj for p < r.



If yP0z, then, in view of (2.1) and y ≠ z, we have

ϕ(y|α) − ϕ(z|α) = ∑_{i=1}^{m} αi(ln yi − ln zi) > 0.

If yPi Ɑ jz, then, in view of y = zij, zi < zj, and αi > αj, we have

ϕ(y|α) − ϕ(z|α) = αi ln zj + αj ln zi − αi ln zi − αj ln zj = (αi − αj)(ln zj − ln zi) > 0.

If yIi ≈ jz, then, in view of y = zij and αi = αj, we have

ϕ(y|α) − ϕ(z|α) = αi ln zj + αj ln zi − αi ln zi − αj ln zj = 0.

Using the mean value theorem from calculus, we find for b > a > 0 that

ln b − ln a = (ln t)′|t=ξ (b − a) = (1/ξ)(b − a), where a < ξ < b.

If yPi Ɑ j & Dz, then, in view of y = (z|zi + l, zj − l), zi + l ≤ zj − l (see (2.4)), and αi > αj, we have

ϕ(y|α) − ϕ(z|α) = αi ln(zi + l) + αj ln(zj − l) − αi ln zi − αj ln zj = l(αi/ξi − αj/ξj),

where ξi ∈ (zi, zi + l) and ξj ∈ (zj − l, zj). Since ξi < ξj and αi > αj, it is true that ϕ(y|α) − ϕ(z|α) > 0.
Let yPi ≈ j & Dz. Since y = (z|zi + l, zj − l), zi + l ≤ zj − l, or y = (z|zj + l, zi − l), zj + l ≤ zi − l (see (2.3)), and αi = αj, performing an analysis similar to the previous one, we conclude that ϕ(y|α) − ϕ(z|α) > 0.
Suppose that there exists chain (2.6) in which P0, Pi Ɑ j, Pi Ɑ j & D, or Pi ≈ j & D occurs at least once. Then we have the inequalities ϕ(u0|α) ≥ ϕ(u1|α), ϕ(u1|α) ≥ ϕ(u2|α), …, ϕ(us−1|α) ≥ ϕ(us|α), among which at least one is strict. Therefore, ϕ(u0|α) > ϕ(us|α). This inequality cannot hold for u0 = us, which means that the information Ω & D is consistent.
Proof of Theorem 2. Necessity. If yR0z, then the matrix C(y, z) has no negative elements. Therefore, a mapping ηyz exists.
Suppose that yPi Ɑ jz, so that i < j and αi > αj. By definition, y = zij and zi < zj. Therefore, only the two rows i and j are nonzero in C(y, z):

column index (k)    1 … zi − 1 | zi … zj − 1 | zj … q − 1
elements of row i   0 … 0      | αi … αi     | 0 … 0
elements of row j   0 … 0      | −αj … −αj   | 0 … 0

Obviously, a mapping ηyz exists in this matrix C(y, z): to each element cjk(y, z) = −αj, it is sufficient to assign the element cik(y, z) = αi.
Let yIi ≈ jz, so that αi = αj. Since Ii ≈ j = Ij ≈ i, we can assume that i < j. Based on an analysis similar to the previous case (for i Ɑ j), we see that a mapping ηyz exists, and each inequality αi ≥ αj holds as an equality.
Let yPi Ɑ j & Dz, so that i < j and αi > αj. By definition, y = (z|zi + l, zj − l) and zi + l ≤ zj − l. Therefore, only the two rows i and j are nonzero in C(y, z):

k       1 … zi − 1 | zi … zi + l − 1 | zi + l … zj − l − 1 | zj − l … zj − 1 | zj … q − 1
row i   0 … 0      | αi … αi         | 0 … 0               | 0 … 0           | 0 … 0
row j   0 … 0      | 0 … 0           | 0 … 0               | −αj … −αj       | 0 … 0

Obviously, a mapping ηyz exists in this matrix C(y, z), since the number l of positive elements is equal to the number of negative elements and all the positive elements are in columns with smaller indices.
Let yPi ≈ j & Dz, so that αi = αj. According to (2.3), Pi ≈ j & D = Pj ≈ i & D; therefore, we can assume that i < j. Finally, by the symmetry of i and j in the two conditions in (2.3), the analysis can be performed only for the first of them. Proceeding as in the previous case (for i Ɑ j), we easily conclude the existence of ηyz.
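The two-row structure of C(y, z) in the i Ɑ j case is easy to reproduce numerically. A small sketch, assuming the form of A(·) inferred from the worked matrices of Section 6 (αi in row i, columns 1, …, yi − 1) and using illustrative data:

```python
def build_C(y, z, alpha, q):
    # C(y, z) = A(y) - A(z), with A(y) assumed to carry alpha_i in
    # row i, columns 1..y_i - 1 (the form seen in the worked example).
    m = len(y)
    return [[(alpha[i] if k + 1 < y[i] else 0.0)
             - (alpha[i] if k + 1 < z[i] else 0.0)
             for k in range(q - 1)] for i in range(m)]

# Illustrative data: swap criteria i = 1 and j = 3 (1-based) of z, z_i < z_j.
alpha, q = (3.0, 2.0, 1.0), 7
z = (2, 4, 5)
y = (5, 4, 2)          # y = z^{13}

C = build_C(y, z, alpha, q)
# Only rows i and j are nonzero; row i holds +alpha_i and row j holds
# -alpha_j exactly in columns z_i, ..., z_j - 1 (columns 2..4 here).
assert all(c == 0 for c in C[1])
assert [k + 1 for k, c in enumerate(C[0]) if c > 0] == [2, 3, 4]
assert [k + 1 for k, c in enumerate(C[2]) if c < 0] == [2, 3, 4]
```

Assigning each −αj in row j the element of row i in the same column yields the mapping ηyz described in the text.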



Let us show that the existence of ηyz is a transitive property in the sense that, if mappings ηuw and ηwv exist for the matrices C(u, w) and C(w, v), then a mapping ηuv exists for the matrix C(u, v). Indeed, we have

C(u, v) = C(u, w) + C(w, v) = A(u) − A(w) + A(w) − A(v) = A(u) − A(v).   (7.1)

If the sets of nonzero elements of C(u, w) and C(w, v) do not intersect, then, as ηuv, we can use a mapping that coincides with ηuw for negative elements taken to C(u, v) from C(u, w) and coincides with ηwv for negative elements taken from C(w, v). It follows from (7.1) that identical positions in C(u, w) and C(w, v) cannot both be positive or both be negative (i.e., it is not possible that cik(u, w) = cik(w, v) = ±αi). If cik(u, w) = −αi and cik(w, v) = αi for some i ∈ {1, 2, …, m} and k ∈ {1, 2, …, q − 1} (for cik(u, w) = αi and cik(w, v) = −αi, the argument is similar), then this element becomes zero in C(u, v): cik(u, v) = 0. Moreover, the positive element corresponding in ηuw to the element cik(u, w) = −αi belongs to the submatrix C⁺ik(u, v). Therefore, in ηuv, it can correspond to a negative element associated with cik(w, v) = αi in ηwv. Thus, ηuv is preserved when C(u, w) and C(w, v) are added.
If yRΩ & Dz, then there exists a chain of form (2.6). For each link u^{t−1}R^{ωt}u^t of this chain, as was established, there exists a mapping η_{u^{t−1}u^t} in the matrix C(u^{t−1}, u^t). The matrix C(y, z) can be represented as the sum of the matrices C(u^{t−1}, u^t). Therefore, by the transitivity property shown above, a mapping ηyz exists.
Sufficiency. It follows from the design of the algorithm ᑨ(u, v) in Section 5. It remains to show that, if conditions (i) and (ii) in Theorem 2 hold for ηyz, then yIΩ & Dz, but if at least one of them is violated, then yPΩ & Dz.
If condition (ii) holds, then Stage 3 of the algorithm ᑨ(u, v) is never executed. Moreover, the vector estimates w = vj*j constructed at Stage 2 are such that wIj* ≈ jv, since αi = αj* = αj. If condition (i) is satisfied, then all the positive elements vanish when all the negative elements become zero. Then Step 1.1 at the final iteration of ᑨ(u, v) produces u = v. Therefore, only Ii ≈ j and I0 occur in the explaining chain, which implies yIΩ & Dz.
If the negative elements in C(y, z) are fewer than the positive ones, then, irrespective of the remaining elements in the explaining chain, the algorithm produces uP0v. Then yPΩ & Dz. If a negative element cjτ(y, z) = −αj corresponds to the positive element cit(y, z) = αi with the column index t < τ, then Stage 3 is executed at one of the iterations of the algorithm ᑨ(u, v). The element cjτ(y, z) = −αj can become zero only at Steps 3.3 and 3.6, but simultaneously vector estimates w are constructed for which uPi ≈ j & Dw, uPi Ɑ j & Dw, wPi ≈ j & Dv, or wPi Ɑ j & Dv. This again implies that yPΩ & Dz. Finally, if ηyz includes a correspondence ciτ(y, z) = ηyz(cjτ(y, z)) in which αi > αj, then Stage 2 of the algorithm ᑨ(u, v) produces the relation Pi Ɑ j. Therefore, again, yPΩ & Dz. Theorem 2 is proved.
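The additive identity (7.1) underlying the transitivity argument can also be checked directly. A minimal sketch with the vector estimates of the worked example, again assuming the form of A(·) inferred from Section 6 and illustrative coefficients:

```python
def A(u, alpha, q):
    """Row i carries alpha_i in columns 1..u_i - 1 (assumed form of A)."""
    return [[alpha[i] if k + 1 <= u[i] - 1 else 0.0 for k in range(q - 1)]
            for i in range(len(u))]

def C(u, v, alpha, q):
    """C(u, v) = A(u) - A(v), elementwise."""
    Au, Av = A(u, alpha, q), A(v, alpha, q)
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(Au, Av)]

def add(M, N):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(M, N)]

# Vector estimates from the worked example, with illustrative coefficients.
alpha, q = (3.0, 3.0, 2.0, 2.0, 1.0), 9
u = (9, 5, 7, 4, 1)
w = (9, 4, 7, 5, 1)
v = (1, 6, 8, 8, 2)

# Identity (7.1): C(u, v) = C(u, w) + C(w, v).
assert C(u, v, alpha, q) == add(C(u, w, alpha, q), C(w, v, alpha, q))
```

The intermediate terms A(w) cancel exactly, which is what allows the mappings ηuw and ηwv to be stitched into ηuv in the proof.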

8. CONCLUSIONS

An algorithmic decision rule was presented for multicriteria decision making problems in which all the criteria are ordered according to their importance and use a first ordinal metric scale. An algorithm was proposed that constructs explaining chains showing why one variant in a pair is preferable to the other or why the two are identical in terms of preferability. The decision rule has been implemented in a new version of the decision support system DASS [14, 15], thus replacing a previously used approximate optimization method.

REFERENCES

1. R. E. Steuer, Multiple Criteria Optimization: Theory, Computation and Application (Wiley, New York, 1986; Radio i Svyaz', Moscow, 1992).
2. T. L. Saaty, The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation (Univ. of Pittsburgh, Pittsburgh, 1988; Radio i Svyaz', Moscow, 1993).
3. O. I. Larichev, Theory and Methods of Decision Making (Universitetskaya Kniga, Moscow, 2006), 3rd ed. [in Russian].



4. A. V. Lotov and I. I. Pospelova, Multicriteria Decision Making Problems (MAKS, Moscow, 2008) [in Russian].
5. V. V. Podinovski, Introduction to the Theory of Criteria Importance in Multicriteria Decision Making (Fizmatlit, Moscow, 2007) [in Russian].
6. V. V. Podinovski, "Multicriterial Problems with Importance-Ordered Homogeneous Criteria," Avtom. Telemekh., No. 11, 118–127 (1976).
7. V. V. Podinovski, "Axiomatic Solution of the Problem of Criteria Importance Estimation in Multicriterial Decision Making Problems," in State of the Art of Operations Research (Nauka, Moscow, 1979), pp. 117–145 [in Russian].
8. V. V. Podinovski, "Multicriterial Optimization Problems with Importance-Ordered Criteria," in Optimization Methods in Economic-Mathematical Simulation (Nauka, Moscow, 1991), pp. 308–324 [in Russian].
9. V. V. Podinovski, "Multicriteria Optimization Problems Involving Importance-Ordered Criteria," in Modern Mathematical Methods of Optimization (Akademie, Berlin, 1993), pp. 254–267.
10. V. V. Podinovski, "Problems with Importance-Ordered Criteria," in User-Oriented Methodology and Techniques of Decision Analysis and Support, Lecture Notes in Economics and Math. Systems, Vol. 397 (Springer-Verlag, Berlin, 1993), pp. 150–155.
11. V. V. Podinovski, "Importance Coefficients of Criteria in Decision-Making Problems: Serial or Ordinal Coefficients," Avtom. Telemekh., No. 10, 130–141 (1978).
12. P. C. Fishburn, Decision and Value Theory (Wiley, New York, 1964).
13. V. A. Osipova, V. V. Podinovski, and N. P. Yashina, "On Noncontradictory Extension of Preference Relations in Decision-Making Problems," Zh. Vychisl. Mat. Mat. Fiz. 24, 831–840 (1984).
14. V. V. Podinovski and M. A. Potapov, "Theoretical Foundations and Systems of Multicriteria Decision Making Support," Proceedings of the 34th International Conference on Information Technologies in Science, Education, Telecommunications, and Business, Gurzuf, Ukraine, May 20–30, 2007, Supplement to the Journal Open Education, 87–89 (2007).
15. V. V. Podinovski, "Analysis of Multicriteria Choice Problems by Methods of the Theory of Criteria Importance, Based on Computer Systems of Decision-Making Support," J. Comput. Syst. Sci. 47, 221–225 (2008).
