The Probabilistic Weighted Average and Its Application in Multiperson Decision Making

José M. Merigó∗
Department of Business Administration, University of Barcelona, 08034 Barcelona, Spain
We present the probabilistic weighted average (PWA), an aggregation operator that unifies the probability and the weighted average in the same formulation. Its main advantage is that it can deal with probabilities and weighted averages in a single formulation, considering the degree of importance that each concept has in the analysis. We extend this approach by using moving averages. We study some of its main properties, and we see that the applicability of the PWA is very broad because it can be applied in a wide range of fields including statistics, economics, and engineering. We focus on a multiperson decision-making application regarding the selection of fiscal policies. We see that with this approach we can unify decision-making problems under risk environments with subjective and objective information. © 2012 Wiley Periodicals, Inc.
1. INTRODUCTION

The weighted average (WA) is one of the most common aggregation operators found in the literature.1 Another interesting concept that can be used as an aggregation operator is the probability.2,3 These two concepts have been used in a number of applications including statistics, economics, engineering, and physics, and they are probably among the most relevant concepts in statistics. However, there are a number of other aggregation operators such as the ordered weighted averaging (OWA) operator,4−6 the quasi-arithmetic mean,7−12 distance measures,13,14 and induced aggregation operators.15−20

In this paper, we present a new approach that unifies the weighted average and the probability in the same formulation. We call it the probabilistic weighted average (PWA). Its main advantage is that it unifies them considering the degree of importance that each concept has in the aggregation problem considered. Thus, we are able to consider subjective and objective information in the same formulation, and hence to assess the information in a more efficient and representative way. It includes a wide range of particular cases including the classical weighted average
∗Author to whom all correspondence should be addressed: e-mail: [email protected].
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, VOL. 27, 457–476 (2012) © 2012 Wiley Periodicals, Inc. View this article online at wileyonlinelibrary.com • DOI 10.1002/int.21531
(WA), the probabilistic aggregation (PA), the arithmetic mean (AM), and many others.

We extend the PWA operator by using moving averages,21 forming the probabilistic weighted moving average (PWMA). Thus, we are able to analyze the subjective and objective information of the problem in a dynamic way where we may consider different periods of time, providing a model closer to the real world, where a number of changes happen over time. We study some particular properties and cases of this new approach.

We study the applicability of the PWA operator, and we see that it is very broad because all the previous studies that use the probability or the weighted average can be revised and extended with this new approach. The reason is that we can always reduce the model to the classical WA or PA. Therefore, if the classical approach is sufficient, we can simply reduce the PWA operator to this formulation. But in the cases where we have more information, the PWA operator lets us deal with it in a more efficient way. We study some theoretical applications in statistics including the expected value, the variance, the covariance, and a linear regression model by using the PWA operator.

We also study its applicability in multiperson decision making,22,23 forming the multiperson-PWA (MP-PWA) operator. This operator is more complete because it permits dealing with the opinions of several persons in the analysis. With this model we are able to form a new decision-making process that we call decision making under subjective and objective risk. We focus on an application in political decision making regarding the selection of the optimal fiscal policy of a government. By using the PWA operator, we obtain a more complete representation of the problem because we are able to consider subjective and objective information in the same problem.

This paper is organized as follows.
In Section 2, we briefly describe some basic preliminaries regarding the weighted average and the probability. Section 3 introduces the PWA operator, and Section 4 extends it by using moving averages. Section 5 studies the applicability of the PWA operator, and Section 6 focuses on a multiperson decision-making application. Section 7 presents an illustrative example in political management, and Section 8 summarizes the main conclusions of the paper.

2. PRELIMINARIES

In this section, we briefly review some basic concepts regarding weighted and probabilistic aggregation operators.

2.1. Weighted Aggregation Operators

Weighted aggregation functions are those that weigh the aggregation process by using the weighted average. Some examples are the aggregation with the weighted average, with belief structures that use the weighted average,24 and with the weighted OWA (WOWA) operator.25 The weighted average can be defined as follows.
DEFINITION 1. A WA operator of dimension n is a mapping WA: R^n → R that has an associated weighting vector W, with w_i ∈ [0, 1] and \sum_{i=1}^{n} w_i = 1, such that

WA(a_1, ..., a_n) = \sum_{i=1}^{n} w_i a_i,     (1)

where a_i represents the ith argument variable.

Other extensions of the weighted average are those that use it with the OWA operator, such as the WOWA operator,25 the hybrid average,26 and the importance OWA operator.27 Recently, Merigó15 suggested another approach called the OWA weighted average (OWAWA) operator. Its main advantage is that it includes the OWA and the WA, considering the degree of importance that each concept has in the aggregation. It can be defined as follows:

DEFINITION 2. An OWAWA operator of dimension n is a mapping OWAWA: R^n → R that has an associated weighting vector W of dimension n such that w_j ∈ [0, 1] and \sum_{j=1}^{n} w_j = 1, according to the following formula:

OWAWA(a_1, ..., a_n) = \sum_{j=1}^{n} \hat{v}_j b_j,     (2)

where b_j is the jth largest of the a_i; each argument a_i has an associated weight (WA) v_i with \sum_{i=1}^{n} v_i = 1 and v_i ∈ [0, 1]; \hat{v}_j = \beta w_j + (1 − \beta) v_j with \beta ∈ [0, 1]; and v_j is the weight (WA) v_i reordered according to b_j, that is, according to the jth largest of the a_i.

Note that other approaches for unifying the OWA and the WA are possible, as suggested in Ref. 15, including an approach similar to the immediate probability. Thus, in the WA we get the immediate weighted OWA operator that could be defined, for example, by using \hat{v}_j = w_j v_j / \sum_{j=1}^{n} w_j v_j or \hat{v}_j = (w_j + v_j) / \sum_{j=1}^{n} (w_j + v_j).
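The reordering step in Definition 2 is the subtle part: the OWA weights w follow the ranking positions, while the WA weights v travel with their arguments. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
# Sketch of the OWAWA operator (Definition 2) for crisp numeric arguments.
# Names are illustrative; this is not code from the paper.

def owawa(args, w, v, beta):
    """OWAWA = sum_j (beta*w_j + (1-beta)*v_j) * b_j, where b_j is the
    j-th largest argument and v_j is the WA weight of that argument."""
    # Rank the argument indices in descending order of their values.
    order = sorted(range(len(args)), key=lambda i: args[i], reverse=True)
    return sum(
        (beta * w[j] + (1 - beta) * v[order[j]]) * args[order[j]]
        for j in range(len(args))
    )

# With beta = 1 this reduces to the OWA; with beta = 0, to the plain WA
# (reordering does not change a weighted sum whose weights follow their
# arguments).
print(owawa([50, 30, 90, 60], [0.4, 0.3, 0.2, 0.1],
            [0.3, 0.3, 0.3, 0.1], beta=1.0))  # 67.0 (pure OWA)
```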
2.2. Probabilistic Aggregation Operators

PA functions are those that weigh the aggregation process by using the probability. Some examples are the aggregation with the expected value, with belief structures that use probabilities,16,27 with OWA operators,23 and with the immediate probability.3,28−30 The PA operator can be defined as follows:
DEFINITION 3. A PA operator of dimension n is a mapping PA: R^n → R that has an associated weighting vector P, with p_i ∈ [0, 1] and \sum_{i=1}^{n} p_i = 1, such that

PA(a_1, ..., a_n) = \sum_{i=1}^{n} p_i a_i,     (3)

where a_i represents the ith argument variable.

The immediate probability is an approach that uses OWAs and probabilities in the same formulation. It can be defined as follows:

DEFINITION 4. An immediate probability (IPOWA) operator of dimension n is a mapping IPOWA: R^n → R that has an associated weighting vector W of dimension n such that w_j ∈ [0, 1] and \sum_{j=1}^{n} w_j = 1, according to the following formula:

IPOWA(a_1, ..., a_n) = \sum_{j=1}^{n} \hat{v}_j b_j,     (4)

where b_j is the jth largest of the a_i; each argument a_i has a probability v_i with \sum_{i=1}^{n} v_i = 1 and v_i ∈ [0, 1]; \hat{v}_j = w_j v_j / \sum_{j=1}^{n} w_j v_j; and v_j is the probability v_i reordered according to b_j, that is, according to the jth largest of the a_i.

The PA operator has been implemented in an astonishingly wide range of applications including statistics, soft computing, engineering, economics, management, and actuarial sciences.

3. THE PROBABILISTIC WEIGHTED AVERAGE

The probabilistic weighted averaging (PWA) operator is an aggregation operator that unifies the probability and the weighted average in the same formulation, considering the degree of importance that each concept has in the aggregation. It is defined as follows:

DEFINITION 5. A PWA operator of dimension n is a mapping PWA: R^n → R such that

PWA(a_1, ..., a_n) = \sum_{i=1}^{n} \hat{p}_i a_i,     (5)

where the a_i are the argument variables; each argument a_i has an associated weight (WA) v_i with \sum_{i=1}^{n} v_i = 1 and v_i ∈ [0, 1], and a probabilistic weight p_i with \sum_{i=1}^{n} p_i = 1 and p_i ∈ [0, 1]; and \hat{p}_i = \beta p_i + (1 − \beta) v_i with \beta ∈ [0, 1] is the weight that unifies probabilities and WAs in the same formulation.
Note that it is also possible to formulate the PWA operator separating the part that strictly affects the probabilistic information from the part that affects the WAs. This representation is useful to see both models in the same formulation.

DEFINITION 6. A PWA operator of dimension n is a mapping PWA: R^n → R that has an associated probabilistic vector P, with \sum_{i=1}^{n} p_i = 1 and p_i ∈ [0, 1], and a weighting vector V that affects the WA, with \sum_{i=1}^{n} v_i = 1 and v_i ∈ [0, 1], such that

f(a_1, ..., a_n) = \beta \sum_{i=1}^{n} p_i a_i + (1 − \beta) \sum_{i=1}^{n} v_i a_i,     (6)

where the a_i are the argument variables and \beta ∈ [0, 1].

Note that if the vector of probabilities or WA weights is not normalized, i.e., P = \sum_{i=1}^{n} p_i ≠ 1 or V = \sum_{i=1}^{n} v_i ≠ 1, then the PWA operator can be expressed as

f(a_1, ..., a_n) = \frac{\beta}{P} \sum_{i=1}^{n} p_i a_i + \frac{1 − \beta}{V} \sum_{i=1}^{n} v_i a_i.     (7)
The PWA operator is monotonic, bounded, and idempotent. It is monotonic because if a_i ≥ u_i for all i, then PWA(a_1, ..., a_n) ≥ PWA(u_1, ..., u_n). It is bounded because the PWA aggregation is delimited by the minimum and the maximum: Min{a_i} ≤ PWA(a_1, ..., a_n) ≤ Max{a_i}. It is idempotent because if a_i = a for all i, then PWA(a_1, ..., a_n) = a.

To understand this new approach, let us look at a simple numerical example.

Example 1. Assume the following set of arguments: A = (50, 30, 90, 60). We assume the following weights for the probability, P = (0.4, 0.3, 0.2, 0.1), and for the weighted average, V = (0.3, 0.3, 0.3, 0.1), with \beta = 0.6. By using Eq. (6) we get

PWA = 0.6 × (0.4 × 50 + 0.3 × 30 + 0.2 × 90 + 0.1 × 60) + 0.4 × (0.3 × 50 + 0.3 × 30 + 0.3 × 90 + 0.1 × 60) = 54.6.

If B is a vector corresponding to the arguments a_i, which we shall call the argument vector, and W^T is the transpose of the weighting vector, then the PWA operator can be expressed as

PWA(a_1, a_2, ..., a_n) = W^T B.     (8)
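The computation in Example 1 can be sketched in a few lines; the function name is illustrative, and the block only restates Eqs. (5) and (6):

```python
# Minimal sketch of the PWA operator (Definition 5); names are illustrative.

def pwa(args, p, v, beta):
    """PWA = sum_i (beta*p_i + (1-beta)*v_i) * a_i  (Eq. 5)."""
    return sum((beta * pi + (1 - beta) * vi) * a
               for a, pi, vi in zip(args, p, v))

# Example 1 from the text:
result = pwa([50, 30, 90, 60], [0.4, 0.3, 0.2, 0.1],
             [0.3, 0.3, 0.3, 0.1], beta=0.6)
print(round(result, 2))  # 54.6 = 0.6*53 + 0.4*57
```

Note that beta = 1 recovers the pure PA (53 here) and beta = 0 the pure WA (57), in line with the particular cases discussed below.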
A further interesting result consists of using infinitary aggregation operators.31 Thus, we can represent an aggregation process with an unlimited number of arguments. Note that \sum_{i=1}^{∞} \hat{p}_i = 1. By using the PWA operator, we get the infinitary PWA (∞-PWA) operator as follows:

∞-PWA(a_1, a_2, ...) = \sum_{i=1}^{∞} \hat{p}_i a_i.     (9)
The aggregation process is very complex in this case because we have an unlimited number of arguments. For further reading, see Ref. 31.

The PWA operator can also be extended following Refs. 13, 15, 22, 32, and 33. Thus, we can develop a generating function for the arguments of the PWA operator that represents the formation of this information, such as the use of a multiperson process. We call this formulation the mixture PWA (MPWA) operator, and it is defined as follows:

DEFINITION 7. An MPWA operator of dimension n is a mapping MPWA: R^n → R that has an associated vector of weighting functions f, s: R^m → R, such that

MPWA(s_y(a_1), ..., s_y(a_n)) = \frac{\sum_{i=1}^{n} f_i(s_y(a_i)) s_y(a_i)}{\sum_{i=1}^{n} f_i(s_y(a_i))},     (10)

where a_i is the argument variable and s_y(a_i) indicates that each argument is formed using a different function.

Another interesting issue is to analyze the measures for characterizing the weighting vector \hat{V}. The entropy of dispersion4,34 measures the amount of information being used in the aggregation. For the PWA operator, it is defined as follows:

H(\hat{V}) = −[\beta \sum_{i=1}^{n} p_i ln(p_i) + (1 − \beta) \sum_{i=1}^{n} v_i ln(v_i)],     (11)

where p_i is the ith weight of the PA aggregation and v_i the ith weight of the WA aggregation. As we can see, if \beta = 1 or \beta = 0, we get the classical Shannon entropy of dispersion.34

By using different manifestations of the weighting vectors, we can obtain a wide range of PWA operators.

Remark 1. If \beta = 0, we get the WA, and if \beta = 1, we get the PA. The closer \beta is to 1, the more importance we give to the PA operator, and vice versa.

Remark 2. If p_i = 1/n and v_i = 1/n for all i, then we get the AM. Note that the AM is also found if \beta = 1 and p_i = 1/n for all i, and if \beta = 0 and v_i = 1/n for all i.

Remark 3. If v_i = 1/n for all i, then we get the arithmetic probabilistic aggregation (A-PA):

A-PA(a_1, ..., a_n) = \beta \sum_{i=1}^{n} p_i a_i + (1 − \beta) \frac{1}{n} \sum_{i=1}^{n} a_i.     (12)

Remark 4. If p_i = 1/n for all i, then we get the arithmetic WA (A-WA):

A-WA(a_1, ..., a_n) = \beta \frac{1}{n} \sum_{i=1}^{n} a_i + (1 − \beta) \sum_{i=1}^{n} v_i a_i.     (13)
Note that other particular cases could be found by using different types of weighting vectors in the probability and in the weighted average.
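The dispersion measure of Eq. (11) is a convex combination of the Shannon entropies of the two weighting vectors. A minimal sketch (function name illustrative):

```python
# Sketch of the entropy of dispersion H(V-hat) for the PWA operator,
# Eq. (11). The function name is illustrative, not from the paper.
import math

def pwa_entropy(p, v, beta):
    """H = -[beta * sum p_i ln p_i + (1-beta) * sum v_i ln v_i]."""
    hp = -sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy of P
    hv = -sum(vi * math.log(vi) for vi in v if vi > 0)  # entropy of V
    return beta * hp + (1 - beta) * hv

# Uniform weights give the maximum dispersion ln(n), for any beta.
print(round(pwa_entropy([0.25] * 4, [0.25] * 4, 0.6), 4))
```

With beta = 1 or beta = 0 this reduces to the classical Shannon entropy of P or V, as noted in the text.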
4. MOVING AVERAGES IN THE PWA OPERATOR

The PWA operator can be extended by using moving averages. Thus, we can represent the information in a dynamic way. By using moving averages,21 we form the PWMA operator. Its main advantage is that it uses probabilities (objective information) and weighted averages (subjective information) in the same formulation, considering the degree of importance that each concept has in the moving average. It can be defined as follows:

DEFINITION 8. A PWMA operator of dimension m + t is a mapping PWMA: R^{m+t} → R such that

PWMA(a_{1+t}, a_{2+t}, ..., a_{m+t}) = \sum_{i=1+t}^{m+t} \hat{p}_i a_i,     (14)

where the a_i are the argument variables; each argument a_i has an associated weight (WA) v_i with \sum_{i=1+t}^{m+t} v_i = 1 and v_i ∈ [0, 1], and a probabilistic weight p_i with \sum_{i=1+t}^{m+t} p_i = 1 and p_i ∈ [0, 1]; \hat{p}_i = \beta p_i + (1 − \beta) v_i with \beta ∈ [0, 1] is the weight that unifies probabilities and WAs in the same formulation; m is the total number of arguments considered from the whole sample; and t indicates the movement done in the average from the initial analysis.

Note that an interesting issue here is that the dynamic processes of the probabilities and the weighted averages may be equal or different. If they are equal, we assume that both concepts are assessing the information in a similar way. If they are different, it means that they are not dealing with exactly the same information, and therefore the dynamic process is not the same. For example, if there is missing information in one of the concepts, the dynamic process has to be treated differently. In this type of problem, it becomes relevant to clearly separate the part that affects the weighted average from the part that affects the probability. In this case, we can modify Eq. (14) using

PWMA(a_{1+t}, a_{2+t}, ..., a_{m+t}) = \beta \sum_{i=1+t}^{m+t} p_i a_i + (1 − \beta) \sum_{i=1+s}^{l+s} v_i a_i,     (15)
where m and l are the total numbers of arguments considered from the whole sample and t and s indicate the movement done in the average from the initial analysis. In these situations, it is worth noting that \beta may change during the dynamic process because the degree of importance of each concept in the aggregation may change over time. Note that for each period (sequence), the probabilities and the weights of the PWMA may also change. Observe that if \beta = 1, the PWMA operator becomes the probabilistic moving average (PMA), and if \beta = 0, the weighted moving average (WMA). Note that a similar extension could be developed by using mixture operators, as explained in Eq. (10). In the following, we present a very simple example.

Example 2. Assume a time series forecasting problem with the information presented in Table I regarding a variable that we want to forecast from period 6 to 11. From a previous experiment (objective information), we have assumed the probabilities P = (0.1, 0.2, 0.2, 0.2, 0.3), while our subjective opinion is V = (0.1, 0.1, 0.1, 0.3, 0.4), due to other considerations such as expectations for the future and some asymmetric knowledge. We assume that both concepts are equally important, that is, \beta = 0.5. Thus, we can develop the forecasts shown in Table II by using the PWMA operator.
Table I. Set of arguments for 10 periods.

Period   1    2    3    4    5    6    7    8    9    10
A        37   46   68   35   28   48   46   49   37   42
Table II. Forecasts by using different types of generalized moving averages.

Period   6       7       8       9       10      11
MA       42.8    45      45      41.2    41.6    44.4
WMA      36.8    42.5    45.9    44.5    41.7    42.2
PMA      41.9    45.2    42.8    42.6    42.5    43.8
PWMA     39.35   43.85   44.35   43.55   42.1    43
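The PWMA row of Table II can be reproduced directly from Eq. (14): each forecast aggregates the five preceding observations with the unified weights. A minimal sketch (function name illustrative):

```python
# Sketch of the PWMA forecasts of Example 2: the forecast for each period
# aggregates the five preceding observations of Table I with the unified
# weights beta*p_i + (1-beta)*v_i. Names are illustrative.

series = [37, 46, 68, 35, 28, 48, 46, 49, 37, 42]   # Table I, periods 1..10
P = [0.1, 0.2, 0.2, 0.2, 0.3]                       # objective weights
V = [0.1, 0.1, 0.1, 0.3, 0.4]                       # subjective weights
beta = 0.5

def pwma_forecast(window, p, v, beta):
    return sum((beta * pi + (1 - beta) * vi) * a
               for a, pi, vi in zip(window, p, v))

# Sliding the 5-period window gives the forecasts for periods 6..11.
forecasts = [round(pwma_forecast(series[t:t + 5], P, V, beta), 2)
             for t in range(6)]
print(forecasts)  # [39.35, 43.85, 44.35, 43.55, 42.1, 43.0]
```

Setting beta = 1 or beta = 0 reproduces the PMA and WMA rows of the table, respectively.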
As we can see, it is possible to use probabilities and weighted averages in the same formulation. Note that the motivation for this is that we may have different assumptions in the problem.

5. APPLICABILITY OF THE PWA OPERATOR

In this section, we analyze the applicability of the PWA operator, focusing on some basic examples in statistics.

5.1. Introduction

The PWA operator can be applied in a number of fields because all the previous studies that use the weighted average or the probability can be revised and extended with this new approach. Thus, we can apply it in

• Statistics: The PWA is a key instrument for revising the majority of the statistical sciences. For example, we can extend it to probability theory and a number of other related areas such as descriptive statistics, hypothesis testing, and inferential statistics;
• Decision theory and operational research;
• Fuzzy set theory and soft computing;
• Economics and politics;
• Business administration;
• Physics and chemistry;
• Biology and medicine;
• Engineering; and
• Other sciences.
Note that we can revise all these theories because the PWA operator includes the probability and the weighted average as particular cases. Therefore, we can always reduce the model to the classical approach and extend it when we have more available information.

5.2. Some Theoretical Examples in Statistics

The PWA operator can be used in key statistical concepts based on the weighted average or the probability. For example, we can revise the average and the variance of a population (discrete case) using the PWA operator. Note that the average (or the weighted average) should be replaced by the PWA operator, using Eq. (5) or (6). Note also that this aggregation also represents the expected value (EV) in a decision-making process. In this case, we assume that the EV has objective and subjective information in the same problem. Thus, it can be formulated as follows for a set of elements X = {x_1, ..., x_n}:

EV-PWA(X) = \beta \sum_{i=1}^{n} p_i x_i + (1 − \beta) \sum_{i=1}^{n} v_i x_i,     (16)

where \beta ∈ [0, 1], p_i represents the objective probabilities of the problem with \sum_{i=1}^{n} p_i = 1 and p_i ∈ [0, 1], and v_i the subjective probabilities of the problem with \sum_{i=1}^{n} v_i = 1 and v_i ∈ [0, 1].

For the variance, we obtain the following formulation:

Var-PWA(X) = \sum_{i=1}^{n} \hat{p}_i (x_i − μ)^2,     (17)
where x_i is the argument variable, μ is the average (in this case, the PWA operator), v_i ∈ [0, 1] and \sum_{i=1}^{n} v_i = 1, p_i ∈ [0, 1] and \sum_{i=1}^{n} p_i = 1, and \hat{p}_i = \beta p_i + (1 − \beta) v_i with \beta ∈ [0, 1]. Note that the variance can also be expressed as

Var-PWA(X) = \beta \sum_{i=1}^{n} p_i (x_i − μ)^2 + (1 − \beta) \sum_{i=1}^{n} v_i (x_i − μ)^2.     (18)

With the PWA operator, we are able to include subjective and objective information in the formulation. Note that Yager35,36 has developed similar models by using the OWA operator in the variance. Obviously, once we have the variance, it is straightforward to obtain the standard deviation (SD) with the PWA operator by using

SD-PWA = \sqrt{\sum_{i=1}^{n} \hat{p}_i (x_i − μ)^2}.     (19)
Following a similar methodology, we can represent the covariance by using the PWA operator as follows:

Cov-PWA(X, Y) = \sum_{i=1}^{n} \hat{p}_i (x_i − μ)(y_i − λ),     (20)

where x_i is the argument variable of the first set of elements X = {x_1, ..., x_n} and y_i the argument variable of the second set of elements Y = {y_1, ..., y_n}; μ and λ are the averages (in this case, the PWA operator) of the sets X and Y, respectively; v_i ∈ [0, 1] and \sum_{i=1}^{n} v_i = 1; p_i ∈ [0, 1] and \sum_{i=1}^{n} p_i = 1; and \hat{p}_i = \beta p_i + (1 − \beta) v_i with \beta ∈ [0, 1]. Note that we could also represent the covariance as follows:

Cov-PWA(X, Y) = \beta \sum_{i=1}^{n} p_i (x_i − μ)(y_i − λ) + (1 − \beta) \sum_{i=1}^{n} v_i (x_i − μ)(y_i − λ).     (21)

From this, we can analyze several measures of correlation, such as the Pearson coefficient (PC). The Pearson coefficient with the PWA (PC-PWA) is formulated as follows:

PC-PWA = Cov-PWA(X, Y) / \sqrt{Var-PWA(X) × Var-PWA(Y)}.     (22)
The PC-PWA is 1 if there is an increasing linear relationship and −1 if there is a decreasing linear relationship. If the variables X and Y are independent, the PC-PWA is 0.

Furthermore, it is possible to formulate a linear regression process using the PWA operator. To construct the linear regression model y_h = α + βx_h, we calculate the slope \hat{β}_{PWA} as follows:

\hat{β}_{PWA} = Cov-PWA(X, Y) / Var-PWA(X).     (23)

Next, we calculate the \hat{α}_{PWA} value as \hat{α}_{PWA} = \bar{y}_{PWA} − \hat{β}_{PWA} \bar{x}_{PWA}, where \bar{x}_{PWA} and \bar{y}_{PWA} are the averages of the sets X and Y calculated by using a PWA operator. Once we have \hat{α}_{PWA} and \hat{β}_{PWA}, we can construct the linear regression model with the PWA operator as follows:

y_h = \hat{α}_{PWA} + \hat{β}_{PWA} x_h.     (24)
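The statistics of Eqs. (16)-(24) chain together naturally: the unified weights feed the mean, which feeds the variance and covariance, which feed the correlation and regression coefficients. A hedged sketch with illustrative data (the weights and samples below are not taken from the paper):

```python
# Sketch of the PWA-based statistics of Eqs. (16)-(24); the data and
# weights are illustrative, not from the paper.

def unified(p, v, beta):
    """p-hat_i = beta*p_i + (1-beta)*v_i."""
    return [beta * pi + (1 - beta) * vi for pi, vi in zip(p, v)]

def mean_pwa(x, w):                     # EV-PWA, Eq. (16)
    return sum(wi * xi for wi, xi in zip(w, x))

def var_pwa(x, w):                      # Var-PWA, Eq. (17)
    m = mean_pwa(x, w)
    return sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x))

def cov_pwa(x, y, w):                   # Cov-PWA, Eq. (20)
    mx, my = mean_pwa(x, w), mean_pwa(y, w)
    return sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))

p = [0.5, 0.25, 0.125, 0.125]           # objective probabilities
v = [0.25, 0.25, 0.25, 0.25]            # subjective weights
w = unified(p, v, beta=0.5)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]                # exactly y = 2x

slope = cov_pwa(x, y, w) / var_pwa(x, w)                         # Eq. (23)
intercept = mean_pwa(y, w) - slope * mean_pwa(x, w)
pc = cov_pwa(x, y, w) / (var_pwa(x, w) * var_pwa(y, w)) ** 0.5   # Eq. (22)
print(slope, intercept, round(pc, 6))   # 2.0 0.0 1.0
```

As expected for a perfect linear relation, the regression recovers y = 2x and the PC-PWA equals 1 regardless of the weights used.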
Note that other existing methods can be revised and extended using the PWA operator in a similar way. It is possible to apply the PWA to many other problems in descriptive statistics, inferential statistics, hypothesis testing, correlation, and regression, as well as in other scientific areas.

6. MULTIPERSON DECISION MAKING WITH THE PWA OPERATOR

6.1. Introduction

In decision theory, there are a number of methodologies for assessing the problem, including multiple criteria decision making, group decision making, and game theory.38,39 In general terms, we can distinguish among three forms of decision-making environments:

• Decision making under certainty: We know exactly what is going to happen in the future.
• Decision making under risk: We know the possible outcomes that will occur in the future. We do not know which of them is going to occur, but we can assess the information with probabilities.
• Decision making under uncertainty: We know the possible outcomes but we do not know which of them is going to occur in the future, and we do not have any probabilistic information.
More specifically, by using the PWA operator we can use subjective and objective probabilities in the analysis. Thus, we can formulate two other decision-making processes:
Table III. Matrix with states of nature and alternatives.

        S_1    ...   S_i    ...   S_n    PWA
A_1     a_11   ...   a_1i   ...   a_1n   T_1
...     ...    ...   ...    ...   ...    ...
A_h     a_h1   ...   a_hi   ...   a_hn   T_h
...     ...    ...   ...    ...   ...    ...
A_k     a_k1   ...   a_ki   ...   a_kn   T_k
PWA     Y_1    ...   Y_i    ...   Y_n
• decision making under subjective risk and
• decision making under objective risk.
Considering the new developments presented in this paper, we can introduce the use of decision-making problems under subjective risk and under objective risk in the same formulation, considering the degree of importance that each concept has in the analysis. Therefore, with the PWA operator we can formulate a new decision-making approach:

• Decision making under subjective and objective risk.
  ◦ If β = 1, we get decision making under objective risk.
  ◦ If β = 0, we get decision making under subjective risk.
Furthermore, it is interesting to consider the meaning of the set of arguments aggregated. In decision-making problems, it is very common to consider a set of arguments that depend on a set of states of nature S and a set of alternatives A, as shown in Table III. As we can see, we can summarize the problem in three types of decision-making methodologies:

• Decision making "ex-ante": Select an alternative (action) and see its potential results (aggregation of a row).
• Decision making "ex-post": Assume that a state of nature occurs and see how we can react (aggregation of a column).
• Decision making "ex-ante" and "ex-post": Mix both cases in the same decision-making process.
Note that if we combine these situations with the previous ones, we could consider a wide range of decision-making approaches, as shown in Table IV. Note also that these approaches could be studied with a multiperson analysis or, more generally, with a multiaggregation process. Thus, we get

• multiperson decision making under certainty,
• multiperson decision making under risk,
Table IV. Decision-making (DM) approaches.

                             DM under certainty                          DM under risk                          DM under uncertainty
DM "ex-ante"                 DM under certainty "ex-ante"                DM under risk "ex-ante"                DM under uncertainty "ex-ante"
DM "ex-post"                 DM under certainty "ex-post"                DM under risk "ex-post"                DM under uncertainty "ex-post"
DM "ex-ante" and "ex-post"   DM under certainty "ex-ante" and "ex-post"  DM under risk "ex-ante" and "ex-post"  DM under uncertainty "ex-ante" and "ex-post"
  ◦ multiperson decision making under objective risk, and
  ◦ multiperson decision making under subjective risk,
• multiperson decision making under uncertainty.
Finally, let us note that there are a number of other frameworks for dealing with decision making that could be considered in the analysis.

6.2. Multiperson Decision-Making Approach

Next, we analyze a political decision-making application regarding the selection of the optimal fiscal policy by using a multiperson analysis. A multiperson analysis provides more robust information because it is based on the opinions of several experts. Therefore, by aggregating the opinions of different experts we can obtain a more representative view of the problem. Note that in the literature we find many other group decision-making models.11−13,15,19,23 The procedure to select fiscal policies with the PWA operator in multiperson decision making is described as follows:

Step 1: Let A = {A_1, A_2, ..., A_m} be a finite set of alternatives and S = {S_1, S_2, ..., S_n} a finite set of states of nature, forming the payoff matrix (a_hi)_{m×n}. Let E = {e_1, e_2, ..., e_q} be a finite set of decision makers. Let U = (u_1, u_2, ..., u_q) be the weighting vector of the decision makers such that \sum_{k=1}^{q} u_k = 1 and u_k ∈ [0, 1]. Each decision maker provides his or her own payoff matrix (a_hi^{(k)})_{m×n}.

Step 2: Calculate the weighting vectors P = (p_1, p_2, ..., p_n) such that \sum_{i=1}^{n} p_i = 1 and p_i ∈ [0, 1], and V = (v_1, v_2, ..., v_n) such that \sum_{i=1}^{n} v_i = 1 and v_i ∈ [0, 1], to be used in the PWA aggregation. For the unified aggregation between V and P, use \hat{P} = \beta P + (1 − \beta) V.

Step 3: Use the WA to aggregate the information of the decision makers E using the weighting vector U. The result is the collective payoff matrix (a_hi)_{m×n}, where a_hi = \sum_{k=1}^{q} u_k a_hi^{(k)}. Note that it is possible to use other types of aggregation operators instead of the WA to aggregate this information.
Step 4: Calculate the aggregated results using the PWA operator explained in Eq. (5).

Step 5: Adopt decisions according to the results found in the previous steps. Select the alternative(s) that provides the best result(s). Moreover, establish a ranking of the alternatives.

This aggregation process can be summarized using the following aggregation operator, which we call the multiperson-PWA (MP-PWA) operator.

DEFINITION 9. An MP-PWA operator is a mapping MP-PWA: R^n × R^q → R that has a weighting vector U of dimension q with \sum_{k=1}^{q} u_k = 1 and u_k ∈ [0, 1], and a weighting vector V of dimension n with \sum_{i=1}^{n} v_i = 1 and v_i ∈ [0, 1], such that

MP-PWA((a_1^1, ..., a_1^q), ..., (a_n^1, ..., a_n^q)) = \sum_{i=1}^{n} \hat{p}_i a_i,     (25)

where each argument a_i has an associated probability p_i with \sum_{i=1}^{n} p_i = 1 and p_i ∈ [0, 1]; \hat{p}_i = \beta p_i + (1 − \beta) v_i with \beta ∈ [0, 1]; and a_i = \sum_{k=1}^{q} u_k a_i^k, where a_i^k is the argument variable provided by each person (or expert).

The MP-PWA operator includes a wide range of particular cases following the methodology explained in Section 3, such as

• the multiperson-PA (MP-PA) operator,
• the multiperson-WA (MP-WA) operator,
• the multiperson-arithmetic mean (MP-AM) operator,
• the multiperson-arithmetic-PA (MP-APA) operator, and
• the multiperson-arithmetic-WA (MP-AWA) operator.
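The five-step procedure above can be sketched end to end on a toy problem; all payoff matrices and weights below are made up for illustration and are not the data of Section 7:

```python
# Illustrative sketch of the five-step MP-PWA procedure; the payoff
# matrices and all weights below are invented for demonstration only.

# Step 1: one payoff matrix per expert (rows: alternatives, cols: states).
experts = [
    [[20, 30], [40, 10]],
    [[24, 26], [36, 14]],
]
U = [0.5, 0.5]          # weights of the two experts
P = [0.6, 0.4]          # probabilistic weights over the states
V = [0.5, 0.5]          # WA weights over the states
beta = 0.7

# Step 2: unified weights P-hat = beta*P + (1-beta)*V.
p_hat = [beta * p + (1 - beta) * v for p, v in zip(P, V)]

# Step 3: collective payoff matrix via the WA over the experts.
m, n = len(experts[0]), len(experts[0][0])
collective = [[sum(U[k] * experts[k][h][i] for k in range(len(experts)))
               for i in range(n)] for h in range(m)]

# Step 4: aggregate each alternative (row) with the PWA operator.
scores = [sum(ph * a for ph, a in zip(p_hat, row)) for row in collective]

# Step 5: rank the alternatives by their aggregated score.
ranking = sorted(range(m), key=lambda h: scores[h], reverse=True)
print(scores, ranking)   # here A2 ranks first
```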
Note that the MP-PWA operator has properties similar to those explained in Section 3, such as idempotency, monotonicity, and so on. Moreover, we could also extend the MP-PWA operator by using moving averages, as explained in Section 4, obtaining the multiperson-PWMA (MP-PWMA) operator.

7. ILLUSTRATIVE EXAMPLE

In this section, we present a numerical example of the new approach in a multiperson decision-making problem regarding the selection of fiscal policies in a government. We focus on the decision of increasing or decreasing the taxes from a general context in a country for the next period.

Step 1: Assume the government of a country has to decide on the type of fiscal policy to use next year. They consider six alternatives:

• A_1 = Develop an extremely strong expansive fiscal policy (increase the taxes 50%).
• A_2 = Develop a strong expansive fiscal policy (increase the taxes 30%).
• A_3 = Develop an expansive fiscal policy (increase the taxes 10%).
• A_4 = Do not develop any change in the fiscal policy.
• A_5 = Develop a contractive fiscal policy (decrease the taxes 10%).
• A_6 = Develop a strong contractive fiscal policy (decrease the taxes 30%).
To evaluate these strategies, the government has brought together a group of experts. This group considers that the key factor is the economic situation of the world economy for the next period. They consider seven possible states of nature that could happen in the future:

• S_1 = very bad economic situation,
• S_2 = bad economic situation,
• S_3 = regular-bad economic situation,
• S_4 = regular economic situation,
• S_5 = regular-good economic situation,
• S_6 = good economic situation, and
• S_7 = very good economic situation.
The experts are classified into four groups. Each group is led by one expert and gives different opinions from the other three groups. The results of the available policies, depending on the state of nature S_i and the alternative A_k that the government chooses, are shown in Tables V–VIII.

Steps 2–3: Now, we can develop different methods for establishing the weighting vector. In this example, we ask the experts of the government to give their preferences, and we find the following absolute values. Note that each person has one vote that can be divided in different ways across the weighting vector, and we assume that
Table V. Opinion of the first group of experts.

      S1   S2   S3   S4   S5   S6   S7
A1    20   27   35   40   37   40   35
A2    22   44   36   29   46   37   42
A3    46   41   25   28   40   36   30
A4    50   66   38   42   46   36   29
A5    39   28   26   60   40   27   27
A6    22   27   32   40   38   30   37
Table VI. Opinion of the second group of experts.

      S1   S2   S3   S4   S5   S6   S7
A1    15   31   34   36   41   36   38
A2    30   42   34   35   44   37   40
A3    42   40   29   28   37   34   26
A4    47   63   36   41   48   37   33
A5    29   40   27   50   37   22   28
A6    24   29   38   36   40   32   39
Table VII. Opinion of the third group of experts.

      S1   S2   S3   S4   S5   S6   S7
A1    25   30   32   42   39   38   32
A2    23   42   40   33   42   37   39
A3    44   39   31   28   35   32   24
A4    43   62   40   43   45   39   32
A5    33   34   26   44   37   25   29
A6    23   32   36   40   39   33   40
Table VIII. Opinion of the fourth group of experts.

      S1   S2   S3   S4   S5   S6   S7
A1    40   28   31   38   35   42   39
A2    21   44   38   31   40   37   43
A3    48   40   27   28   36   34   28
A4    48   61   38   42   45   40   34
A5    35   34   29   42   38   26   32
A6    23   36   34   36   39   29   44
Table IX. Absolute frequency.

P    p1   p2   p3   p4   p5   p6   p7
     3    7    20   25   22   15   8
Table X. Relative frequency.

P    p1     p2     p3     p4     p5     p6     p7
     0.03   0.07   0.2    0.25   0.22   0.15   0.08
everybody is equally important. Note also that we have four groups of experts, each constituted by 25 persons. The results are presented in Table IX. We now normalize the information so that the weighting vector sums to 1; the results are shown in Table X. Next, we carry out a similar analysis for the weighted average: we ask each expert to provide his or her opinion regarding the subjective information of the problem, and we then normalize this information to form the weighting vector. The results are shown in Tables XI and XII.
Table XI. Absolute frequency.

V    v1   v2   v3   v4   v5   v6   v7
     5    10   20   25   20   12   8
Table XII. Relative frequency.

V    v1     v2     v3     v4     v5     v6     v7
     0.05   0.1    0.2    0.25   0.2    0.12   0.08
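The relative frequencies in Tables X and XII are obtained by dividing each absolute vote count by the total number of votes, so that each weighting vector sums to 1. A minimal sketch (the helper name `normalize` is ours, for illustration only):

```python
def normalize(votes):
    """Divide each absolute frequency by the total so the weights sum to 1."""
    total = sum(votes)
    return [count / total for count in votes]

# Absolute frequencies from Tables IX and XI (100 experts in total).
p = normalize([3, 7, 20, 25, 22, 15, 8])   # probabilistic weights, Table X
v = normalize([5, 10, 20, 25, 20, 12, 8])  # WA weights, Table XII
```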
Table XIII. Relative frequency.
Pˆ
pˆ 1
pˆ 2
pˆ 3
pˆ 4
pˆ 5
pˆ 6
pˆ 7
0.044
0.091
0.2
0.25
0.206
0.129
0.08
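As the text explains below, P̂ gives weight β = 30% to the probabilistic information and 1 − β = 70% to the weighted average, that is, p̂i = βpi + (1 − β)vi; this mixture reproduces Table XIII exactly from Tables X and XII. A minimal sketch (the function name is ours):

```python
def unified_weights(p, v, beta):
    """Mix the probabilities p and the WA weights v: beta*p + (1 - beta)*v."""
    return [beta * pi + (1 - beta) * vi for pi, vi in zip(p, v)]

p = [0.03, 0.07, 0.2, 0.25, 0.22, 0.15, 0.08]  # Table X
v = [0.05, 0.1, 0.2, 0.25, 0.2, 0.12, 0.08]    # Table XII
p_hat = unified_weights(p, v, beta=0.3)        # Table XIII
```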
Table XIV. Collective results.

      S1   S2   S3   S4   S5   S6   S7
A1    25   29   33   39   38   39   36
A2    24   43   37   32   43   37   41
A3    45   40   28   28   37   34   27
A4    47   63   38   42   46   38   32
A5    34   34   27   49   38   25   29
A6    23   31   35   38   39   31   40
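Table XIV is the element-wise aggregation of the four group matrices (Tables V–VIII) with the equal weighting vector U = (0.25, 0.25, 0.25, 0.25). A minimal sketch that reproduces it (the function name is ours):

```python
def collective_matrix(matrices, u):
    """Element-wise weighted average of several expert payoff matrices."""
    rows, cols = len(matrices[0]), len(matrices[0][0])
    return [[sum(w * m[i][j] for w, m in zip(u, matrices))
             for j in range(cols)]
            for i in range(rows)]

# Rows A1..A6, columns S1..S7 (Tables V-VIII).
g1 = [[20, 27, 35, 40, 37, 40, 35], [22, 44, 36, 29, 46, 37, 42],
      [46, 41, 25, 28, 40, 36, 30], [50, 66, 38, 42, 46, 36, 29],
      [39, 28, 26, 60, 40, 27, 27], [22, 27, 32, 40, 38, 30, 37]]
g2 = [[15, 31, 34, 36, 41, 36, 38], [30, 42, 34, 35, 44, 37, 40],
      [42, 40, 29, 28, 37, 34, 26], [47, 63, 36, 41, 48, 37, 33],
      [29, 40, 27, 50, 37, 22, 28], [24, 29, 38, 36, 40, 32, 39]]
g3 = [[25, 30, 32, 42, 39, 38, 32], [23, 42, 40, 33, 42, 37, 39],
      [44, 39, 31, 28, 35, 32, 24], [43, 62, 40, 43, 45, 39, 32],
      [33, 34, 26, 44, 37, 25, 29], [23, 32, 36, 40, 39, 33, 40]]
g4 = [[40, 28, 31, 38, 35, 42, 39], [21, 44, 38, 31, 40, 37, 43],
      [48, 40, 27, 28, 36, 34, 28], [48, 61, 38, 42, 45, 40, 34],
      [35, 34, 29, 42, 38, 26, 32], [23, 36, 34, 36, 39, 29, 44]]

collective = collective_matrix([g1, g2, g3, g4], [0.25, 0.25, 0.25, 0.25])
```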
Table XV. Aggregated results.

      Min    Max    AM      PA      WA      PWA
A1    25     39     34.14   36.22   35.66   35.828
A2    24     43     36.71   37.42   37.22   37.28
A3    27     45     34.14   32.15   32.49   32.388
A4    32     63     43.71   42.3    43.07   42.839
A5    25     49     33.71   35.48   35.67   35.613
A6    23     40     33.85   35.79   35.47   35.566
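The PWA column of Table XV follows from applying the weighting vector P̂ of Table XIII to each row of the collective matrix (Table XIV). A minimal sketch, which reproduces those values:

```python
def pwa(row, p_hat):
    """Probabilistic weighted average of one alternative's payoffs."""
    return sum(w * x for w, x in zip(p_hat, row))

p_hat = [0.044, 0.091, 0.2, 0.25, 0.206, 0.129, 0.08]  # Table XIII

# Collective payoffs (Table XIV), rows A1..A6.
collective = [[25, 29, 33, 39, 38, 39, 36],
              [24, 43, 37, 32, 43, 37, 41],
              [45, 40, 28, 28, 37, 34, 27],
              [47, 63, 38, 42, 46, 38, 32],
              [34, 34, 27, 49, 38, 25, 29],
              [23, 31, 35, 38, 39, 31, 40]]

scores = [pwa(row, p_hat) for row in collective]
# rounds to [35.828, 37.28, 32.388, 42.839, 35.613, 35.566]
```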
In this problem, we assume that β = 30%. Thus, we give more importance to the subjective information, obtaining the weighting vector P̂ shown in Table XIII. First, we aggregate the information of the four groups into one collective matrix that represents all the experts of the problem. We assume that all the groups are equally important, that is, U = (0.25, 0.25, 0.25, 0.25). The results are shown in Table XIV.

Step 4: With this information, we can aggregate the expected results for each state of nature to make a decision by using Eq. (6). In Table XV, we present the results obtained using different types of PWA operators.

Step 5: If we establish a ranking of the alternatives, we get the results shown in Table XVI. Note that the first alternative in each ordering is the optimal choice.

Depending on the particular type of PWA operator used, the results may be different. Therefore, the decision regarding the selection of the optimal fiscal policy
Table XVI. Ranking of the fiscal policies.

Min    A4 ≻ A3 ≻ A1 = A5 ≻ A2 ≻ A6
Max    A4 ≻ A5 ≻ A3 ≻ A2 ≻ A6 ≻ A1
AM     A4 ≻ A2 ≻ A1 = A3 ≻ A6 ≻ A5
PA     A4 ≻ A2 ≻ A1 ≻ A6 ≻ A5 ≻ A3
WA     A4 ≻ A2 ≻ A5 ≻ A1 ≻ A6 ≻ A3
PWA    A4 ≻ A2 ≻ A1 ≻ A5 ≻ A6 ≻ A3
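Each row of Table XVI is simply the alternatives sorted by the corresponding column of Table XV, from the best aggregated result to the worst; for instance, for the PWA operator:

```python
# PWA scores from Table XV, keyed by alternative.
scores = {"A1": 35.828, "A2": 37.28, "A3": 32.388,
          "A4": 42.839, "A5": 35.613, "A6": 35.566}

# Order the alternatives from the best aggregated result to the worst.
ranking = sorted(scores, key=scores.get, reverse=True)
# -> ['A4', 'A2', 'A1', 'A5', 'A6', 'A3'], the PWA row of Table XVI.
```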
may be different. However, note that in this example it seems clear that A4 should be the optimal choice.

8. CONCLUSIONS

We have presented the PWA operator. It is a new aggregation operator that unifies the weighted average and the probability in the same formulation. Its main advantage is that it unifies the WA and the probability while considering the degree of importance that each concept has in the aggregation. Moreover, the model can also consider only the probability or only the WA if that is necessary in the analysis. Thus, the applicability of this new formulation is extremely broad, because all previous studies that have used the WA or the probability in the analysis can be revised and extended with the new approach presented in this paper. For example, we can develop a number of applications in statistics, economics, engineering, and physics.

We have focused on an application in multiperson decision making regarding the selection of fiscal policies. We have seen that with the PWA operator we can use subjective and objective probabilities (or information) in the same formulation, thus obtaining a more complete representation of the problem. By using a multiperson analysis, we have been able to form the MP-PWA operator. It satisfies properties similar to those of the PWA operator and includes a wide range of particular cases, including the MP-WA and the MP-PA operators. We believe that this unification is extremely important because two of the main statistical concepts are unified in the same formulation.
Thus, borrowing the language used in physics regarding unification, where relativity and quantum mechanics, and subsequently all the known forces, are to be unified, the new approach presented in this paper could be seen as an initial step toward a "statistics unification." This unification is obtained by bringing together the weighted average and the probability, and subsequently all the research carried out with these two concepts separately. In future research, we will pay more attention to how to understand and develop this unification process, both from the point of view of the PWA operator and from a more complete perspective that considers other aggregation operators in the analysis, such as the OWA operator. Moreover, we expect to present further developments by introducing additional characteristics into the problem, such as the use of generalized means, distance measures, and uncertain information represented with interval numbers, fuzzy numbers, linguistic variables, and expertons.
References

1. Beliakov G, Pradera A, Calvo T. Aggregation functions: A guide for practitioners. Berlin, Germany: Springer-Verlag; 2007.
2. McClave JT, Sincich T. Statistics, 9th ed. Upper Saddle River, NJ: Prentice-Hall; 2003.
3. Merigó JM. Fuzzy decision making with immediate probabilities. Comp Ind Eng 2010;58:651–657.
4. Yager RR. On ordered weighted averaging aggregation operators in multi-criteria decision making. IEEE Trans Syst Man Cybern B 1988;18:183–190.
5. Yager RR, Kacprzyk J. The ordered weighted averaging operators: Theory and applications. Norwell, MA: Kluwer; 1997.
6. Yager RR, Kacprzyk J, Beliakov G. Recent developments on the ordered weighted averaging operators: Theory and practice. Berlin, Germany: Springer-Verlag; 2011.
7. Merigó JM, Casanovas M. The fuzzy generalized OWA operator and its application in strategic decision making. Cybern Syst 2010;41:359–370.
8. Merigó JM, Casanovas M. Fuzzy generalized hybrid aggregation operators and its application in fuzzy decision making. Int J Fuzzy Syst 2010;12:15–24.
9. Merigó JM, Casanovas M. The uncertain induced quasi-arithmetic OWA operator. Int J Intell Syst 2011;26:1–24.
10. Yager RR. Generalized OWA aggregation operators. Fuzzy Optim Decision Making 2004;3:93–107.
11. Zhao H, Xu ZS, Ni M, Liu S. Generalized aggregation operators for intuitionistic fuzzy sets. Int J Intell Syst 2010;25:1–30.
12. Zhou LG, Chen HY. Generalized ordered weighted logarithm aggregation operators and their applications to group decision making. Int J Intell Syst 2010;25:683–707.
13. Merigó JM, Casanovas M. Decision making with distance measures and induced aggregation operators. Comp Ind Eng 2011;60:66–76.
14. Merigó JM, Gil-Lafuente AM. New decision-making techniques and their application in the selection of financial products. Inform Sci 2010;180:2085–2094.
15. Merigó JM. A unified model between the weighted average and the induced OWA operator. Expert Syst Appl 2011;38:11560–11572.
16. Merigó JM, Casanovas M. Induced aggregation operators in decision making with Dempster–Shafer belief structure. Int J Intell Syst 2009;24:934–954.
17. Merigó JM, Gil-Lafuente AM. The induced generalized OWA operator. Inform Sci 2009;179:729–741.
18. Merigó JM, Gil-Lafuente AM, Gil-Aluja J. Decision making with the induced generalized adequacy coefficient. Appl Comput Math 2011;2:321–339.
19. Tan C, Chen X. Induced Choquet ordered averaging operator and its application in group decision making. Int J Intell Syst 2010;24:59–82.
20. Yager RR, Filev DP. Induced ordered weighted averaging operators. IEEE Trans Syst Man Cybern B 1999;29:141–150.
21. Yager RR. Time series smoothing and OWA aggregation. IEEE Trans Fuzzy Syst 2008;16:994–1007.
22. Merigó JM, Gil-Lafuente AM. Fuzzy induced generalized aggregation operators and its application in multi-person decision making. Expert Syst Appl 2011;38:9761–9772.
23. Merigó JM, Wei GW. Probabilistic aggregation operators and their application in uncertain multi-person decision making. Technol Econ Dev Econ 2011;17:335–351.
24. Yager RR, Liu L. Classic works of the Dempster–Shafer theory of belief functions. Berlin, Germany: Springer-Verlag; 2008.
25. Torra V. The weighted OWA operator. Int J Intell Syst 1997;12:153–166.
26. Xu ZS, Da QL. An overview of operators for aggregating information. Int J Intell Syst 2003;18:953–969.
27. Yager RR. Including importances in OWA aggregation using fuzzy systems modelling. IEEE Trans Fuzzy Syst 1998;6:286–294.
28. Merigó JM, Casanovas M, Martínez L. Linguistic aggregation operators for linguistic decision making based on the Dempster–Shafer theory of evidence. Int J Uncertainty Fuzzy Knowl-Based Syst 2010;18:287–304.
29. Engemann KJ, Filev DP, Yager RR. Modelling decision making using immediate probabilities. Int J Gen Syst 1996;24:281–294.
30. Yager RR, Engemann KJ, Filev DP. On the concept of immediate probabilities. Int J Intell Syst 1995;10:373–397.
31. Yager RR. Including decision attitude in probabilistic decision making. Int J Approx Reason 1999;21:1–21.
32. Mesiar R, Pap E. Aggregation of infinite sequences. Inform Sci 2008;178:3557–3564.
33. Mesiar R, Spirkova J. Weighted means and weighting functions. Kybernetika 2006;42:151–160.
34. Torra V, Narukawa Y. Some relations between Losonczi's based OWA generalizations and the Choquet–Stieltjes integral. Soft Comput 2010;14:465–472.
35. Shannon CE. A mathematical theory of communication. Bell Syst Tech J 1948;27:379–423.
36. Yager RR. On the inclusion of variance in decision making under uncertainty. Int J Uncertainty Fuzzy Knowl-Based Syst 1996;4:401–419.
37. Yager RR. Generalizing variance to allow the inclusion of decision attitude in decision making under uncertainty. Int J Approx Reason 2006;42:137–158.
38. Figueira J, Greco S, Ehrgott M. Multiple criteria decision analysis: State of the art surveys. Boston, MA: Springer-Verlag; 2005.
39. Merigó JM, Gil-Lafuente AM, Gil-Aluja J. A new aggregation method for strategic decision making and its application in assignment theory. Afr J Business Manag 2011;5:4033–4043.