A Model Based on Linguistic 2-tuples for Dealing with Heterogeneous Relationship among Attributes in Multi-expert Decision Making

Bapi Dutta, Student Member, IEEE, Debashree Guha, and Radko Mesiar
Abstract—Classical Bonferroni mean (BM), defined by Bonferroni in 1950, assumes a homogeneous relation among the attributes, i.e., each attribute Ai is related to all the remaining attributes A \ {Ai}, where A = {A1, A2, ..., An} denotes the attribute set. In this paper, we emphasize the importance of having an aggregation operator, which we refer to as the extended Bonferroni mean (EBM) operator, that captures heterogeneous interrelationships among the attributes. We interpret "heterogeneous interrelationship" by assuming that some of the attributes, denoted Ai, are related to a subset Bi of the set A \ {Ai}, while the others have no relation with the remaining attributes. Under this interpretation, the operator computes different aggregated values for a given set of inputs as the interrelationship pattern changes. We also investigate the behaviour of the proposed EBM aggregation operator. Further, to deal with multi-attribute group decision making (MAGDM) problems with linguistic information, we analyze the EBM operator in the linguistic 2-tuple environment and develop three new linguistic aggregation operators: the 2-tuple linguistic EBM (2TLEBM), the weighted 2-tuple linguistic EBM (W2TLEBM), and the linguistic weighted 2-tuple linguistic EBM (LW-2TLEBM). A concept of linguistic similarity measure of 2-tuple linguistic information is introduced. Subsequently, a MAGDM technique is developed in which the attributes' weights are in the form of 2-tuple linguistic information and the experts' weight information is completely unknown. Finally, a practical example is presented to demonstrate the applicability of our results.

Index Terms—Linguistic 2-tuple, extended Bonferroni mean (EBM), 2-tuple linguistic extended Bonferroni mean (2TLEBM), multi-attribute group decision making (MAGDM)
Manuscript received April 15, 2014; revised July 23, 2014 and September 9, 2014; accepted October 29, 2014. The first author gratefully acknowledges the financial support provided by the Council of Scientific and Industrial Research, New Delhi, India (Award No. 09/1023(007)/2011-EMR-I). The third author was supported by the grant APVV-0073-10 and by the European Regional Development Fund in the IT4Innovations Centre of Excellence project (CZ.1.05/1.1.00/02.0070). Bapi Dutta and Debashree Guha are with the Department of Mathematics, Indian Institute of Technology, 800 013, Patna, India (e-mail: [email protected]; [email protected]). Radko Mesiar is with the Department of Mathematics, Slovak University of Technology in Bratislava, Radlinského 11, 81368 Bratislava, Slovak Republic. He is also with IRAFM, University of Ostrava, 30. dubna 22, 701 03 Ostrava 1, Czech Republic (e-mail: [email protected]).

I. INTRODUCTION

MULTI-ATTRIBUTE group decision making (MAGDM) has been one of the major research fields in the decision sciences over the last few decades [1]–[13]. It is characterized by a set of experts whose aim is to find the most suitable alternative among a finite set of alternatives assessed on a finite set of attributes, both qualitative and
quantitative. In the process of decision making, experts provide their judgments/opinions on the alternatives with respect to the attributes. However, in many situations, due to lack or abundance of information, subjective estimation, vagueness, or incomplete knowledge about the complex system, experts' preferences cannot be assessed with both precision and certainty. A more realistic approach is then to use linguistic terms instead of numerical values. In such a scenario, a linguistic computational model is required to capture these linguistic terms within a mathematical framework and to facilitate computation between linguistic terms. Several feasible and effective computational models have been suggested in the literature from different perspectives [14]–[17].

We will focus on the linguistic computational model based on an ordinal scale. This model, also called the symbolic model, makes direct computations on linguistic labels using the ordinal structure of the linguistic term set [17]–[19]. Among the different symbolic computation models, the 2-tuple linguistic computational model has been found to be highly useful due to its simplicity in computation and its capability to avoid information loss during the aggregation of linguistic labels. Its justification and a deeper discussion can be found in [20]. Thus, over the last decade, MAGDM problems under the 2-tuple linguistic environment have become an emerging area of research.

The aim of this paper is not to cover the whole range of MAGDM problems under the linguistic environment, but merely to address the aggregation step. Aggregation is an important step of a MAGDM problem: in this step each alternative's overall rating is computed from the alternative's linguistic performances under the different attributes by using suitable linguistic aggregation operators. In view of this, various aggregation operators have been proposed over the last several years for aggregating 2-tuple linguistic information. We provide a brief overview of the existing 2-tuple linguistic aggregation operators in section 2, including the motivation of our approach considering heterogeneous relations among the attributes.

The paper is planned as follows. In section 3, a short survey of the 2-tuple linguistic model is given, including the concept of linguistic similarity measure. In section 4, we define the proposed operator, which we refer to as the extended Bonferroni mean (EBM), and we also discuss its behaviour in some special cases. In section 5, a 2-tuple linguistic extended Bonferroni mean (2TLEBM) is developed and its special cases are studied. This section also introduces two weighted forms of the 2TLEBM operator: the weighted 2-tuple linguistic extended
Bonferroni mean (W2TLEBM) and the linguistic weighted 2-tuple linguistic extended Bonferroni mean (LW-2TLEBM) operator. An approach for solving the MAGDM problem with 2-tuple linguistic information is presented in section 6. To illustrate the working of the proposed MAGDM technique, a site location selection problem is presented in section 7. The results of the problem are also compared with those of other existing aggregation operators, while section 8 concludes the discussion.

II. BRIEF OVERVIEW OF 2-TUPLE LINGUISTIC AGGREGATION OPERATORS AND MOTIVATION OF OUR APPROACH
As mentioned in the introduction, several 2-tuple linguistic aggregation operators have been introduced in the literature. Based on the arithmetic mean, Herrera and Martínez [17] developed the 2-tuple averaging operator, the 2-tuple weighted averaging operator and the 2-tuple ordered weighted averaging operator. In [22], Jiang and Fan introduced the 2-tuple weighted geometric operator and the 2-tuple ordered weighted geometric operator. Wei [5] presented a MAGDM method based on the extended 2-tuple linguistic weighted geometric operator and the extended 2-tuple ordered weighted geometric operator. In [6], Wei proposed three new aggregation operators: the generalized 2-tuple weighted average operator, the generalized 2-tuple ordered weighted average operator and the induced 2-tuple generalized ordered weighted average operator. Wan [8] developed several hybrid 2-tuple linguistic aggregation operators, such as the 2-tuple hybrid linguistic weighted average operator and the extended 2-tuple hybrid linguistic weighted average operator. Park et al. [9] defined the 2-tuple linguistic harmonic operator, the 2-tuple linguistic weighted harmonic operator, the 2-tuple linguistic ordered weighted harmonic operator and the 2-tuple linguistic harmonic hybrid operator. In [7], Wei developed some dependent 2-tuple linguistic aggregation operators in which the associated weights depend only on the aggregated 2-tuple linguistic information. Merigó et al. [23] introduced the induced 2-tuple linguistic generalized ordered weighted averaging operator.

The common characteristic of all the aforementioned aggregation operators is that they emphasize the importance of each input, but they cannot reflect any kind of interrelationship among the aggregated inputs. However, in real-life decision making problems there are interrelationships among the attributes of MAGDM problems, and these interrelationships are reflected in the corresponding arguments. Thus, sometimes more conjunctions have to be taken into account in the aggregation process to model the inherent connection among the aggregated arguments [24]. In view of this, by using the Choquet integral, Yang and Chen [10] developed some new aggregation operators, including the 2-tuple correlated averaging operator, the 2-tuple correlated geometric operator and the generalized 2-tuple correlated averaging operator, in which the correlation between aggregated arguments is measured subjectively by the expert. Based on the power average operator, Xu and Wang [11] developed three new linguistic aggregation operators: the 2-tuple linguistic power average operator, the 2-tuple linguistic weighted power average operator and the 2-tuple linguistic ordered weighted power average operator, which allow the aggregated arguments to support
each other in the aggregation process, and on this basis the weight vector of the aggregated arguments is determined. Both the Choquet integral and the power average operator focus on capturing the interrelationship among the inputs by adopting different strategies to generate the weights of the inputs; they do not directly address the various conjunctions among the attributes. In this respect, the Bonferroni mean (BM) operates directly on the aggregated arguments to capture the interrelationships among them. BM was first introduced by Bonferroni [25] and was generalized by Yager [26] and other researchers [27]–[30]. Yager [26] interpreted BM as a composition of an "anding" and an "averaging" operator and generalized it by replacing the simple averaging operator with other well-known averaging operators, such as the ordered weighted aggregation operator and the Choquet integral [31]. Beliakov et al. [27] explored the modelling capability of BM and showed that it is capable of modelling any type of mandatory requirement in the aggregation process. Considering the interrelationship among three arguments instead of two, Xia et al. [28] defined the weighted generalized BM and the geometric weighted generalized BM. Zhou and He [29] introduced a normalized weighted form of BM. Combining BM with the geometric mean, Xia et al. [32] developed the geometric BM. To aggregate various types of uncertain information, BM has been further extended to intuitionistic fuzzy and hesitant fuzzy environments [24], [28]–[30], [32]–[34].

BM in its inherent structure assumes that each input is related to the rest of the inputs, i.e., when it is used for aggregating alternatives' performances under different attributes, it inherently assumes that each attribute is related to all the remaining attributes. However, in real-life situations such a homogeneous connection among the attributes may not always exist. There may arise situations in which some of the attributes are related to only a non-empty subset of the remaining attributes, while the others have no relation with the remaining attributes. This analysis forms the background of our present study, in which we model this kind of heterogeneous connection among the attributes by extending the concept of BM and introduce the extended Bonferroni mean (EBM) operator. We provide the mathematical description of the heterogeneous relation among the attributes in section IV. We also provide an example to illustrate the working nature of the proposed EBM operator in comparison with other existing aggregation operators. After introducing the concept of the EBM operator, we analyze the proposed operator in the linguistic 2-tuple environment and, subsequently, a MAGDM technique is developed by assuming heterogeneous interrelationships among the attributes.

III. 2-TUPLE LINGUISTIC COMPUTATIONAL MODELS

A. Brief review of 2-tuple linguistic computational models

Let S = {l0, l1, ..., lh} be a linguistic term set with odd cardinality h + 1. Any term li ∈ S denotes a possible value for a linguistic variable. The following properties should hold for the term set S [17], [21]:
• the set S should be ordered, i.e., li ≥ lj if i ≥ j;
• negation of any linguistic term li ∈ S: neg(li) = lj such that j = h − i;
• the maximum of any two linguistic terms li, lj ∈ S: max(li, lj) = li if li ≥ lj;
• the minimum of any two linguistic terms li, lj ∈ S: min(li, lj) = li if li ≤ lj.
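These four operations are purely ordinal, so they can be stated very compactly in code. The following Python sketch is our own illustration (the class name LinguisticTerm and the default 7-label scale are assumptions for the example, not notation from the paper):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LinguisticTerm:
    """A label l_i of an ordered linguistic term set S = {l_0, ..., l_h}."""
    index: int      # position i of the label on the scale
    h: int = 6      # index of the largest label (7-label scale by default)

    def neg(self) -> "LinguisticTerm":
        # negation: neg(l_i) = l_j with j = h - i
        return LinguisticTerm(self.h - self.index, self.h)

def l_max(a: LinguisticTerm, b: LinguisticTerm) -> LinguisticTerm:
    # max(l_i, l_j) = l_i if l_i >= l_j
    return a if a.index >= b.index else b

def l_min(a: LinguisticTerm, b: LinguisticTerm) -> LinguisticTerm:
    # min(l_i, l_j) = l_i if l_i <= l_j
    return a if a.index <= b.index else b

low, high = LinguisticTerm(1), LinguisticTerm(5)
print(low.neg())          # LinguisticTerm(index=5, h=6)
print(l_max(low, high))   # the higher label l_5
print(l_min(low, high))   # the lower label l_1
```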
The cardinality of the linguistic term set S must be small enough so as not to impose useless precision on the users, and it should be rich enough to allow discrimination of the performances of each criterion in a limited number of grades [3], [35]. In fact, psychologists recommend the use of 7 ± 2 labels: fewer than 5 is not sufficiently informative, and more than 9 is too many for a proper understanding of their differences [36]. In view of this, a linguistic term set S with seven labels can be defined as follows:

S = {l0 = very low (VL), l1 = low (L), l2 = moderately low (ML), l3 = normal (N), l4 = moderately high (MH), l5 = high (H), l6 = very high (VH)}

Here, we adopt the 2-tuple linguistic representation model, which was developed by Herrera and Martínez [17], [21] based on the concept of symbolic translation. We recall that a symbolic aggregation operation H on the scale S is any non-decreasing function H : S^n → [0, h] such that H(l0, l0, ..., l0) = 0 and H(lh, lh, ..., lh) = h. As typical symbolic aggregation operations, we recall the mean or the median of the label indices.

Definition 1: Let β ∈ [0, h] be the result of a symbolic aggregation operation on the indices of the labels of the linguistic term set S = {l0, l1, ..., lh}. If i = round(β) and α = β − i are two values such that i ∈ {0, 1, ..., h} and α ∈ [−0.5, 0.5), then α is called the symbolic translation.

On the basis of the symbolic translation [17], [21], linguistic information is represented by means of a 2-tuple (li, αi), where li ∈ S represents the linguistic label and αi ∈ [−0.5, 0.5) denotes the symbolic translation. In view of Definition 1, there are some important observations regarding the ranges of the values of α and β, which are summarized below in the form of a remark.

Remark 1: Observe that if β ∈ [0, 0.5) then i = 0 and α = β ∈ [0, 0.5). On the other hand, if β ∈ [h − 0.5, h] then i = h and α ∈ [−0.5, 0]. Having in mind these constraints, we will still formally consider α ∈ [−0.5, 0.5) for every β ∈ [0, h], to keep the notation as simple as possible. This convention may also be applied to the related scales. The observation indicates that if we take S × [−0.5, 0.5) as the formal domain of (l, α), then the actual domain of (l, α) is {l0} × [0, 0.5) ∪ {l1, l2, ..., lh−1} × [−0.5, 0.5) ∪ {lh} × [−0.5, 0].

With the above observation in the background, the conversion of a symbolic aggregation result into an equivalent linguistic 2-tuple can be done by using the following function.

Definition 2: Let S = {l0, l1, ..., lh} be a linguistic term set and let β ∈ [0, h] be the numerical value obtained from a symbolic aggregation operation on the labels of S. Then the 2-tuple that conveys the information equivalent to β is given by the function ∆ : [0, h] → S × [−0.5, 0.5),

∆(β) = (li, α)
where i = round(β) is the usual rounding operation on the label index, i.e., i is the index of the label closest to β, and α is the value of the symbolic translation:

\[
\alpha = \begin{cases} \beta - i, & \alpha \in [-0.5, 0.5), \text{ if } i \neq 0, h \\ \beta, & \alpha \in [0, 0.5), \text{ if } i = 0 \\ \beta - h, & \alpha \in [-0.5, 0], \text{ if } i = h \end{cases}
\]

Example 1: Assume that S = {l0, l1, l2, l3, l4, l5, l6} represents the linguistic term set described above and that β = 2.7 is obtained from a symbolic aggregation operation. Then, from Definition 2, we can convert β = 2.7 into the linguistic 2-tuple ∆(2.7) = (l3, −0.3) = (Normal, −0.3), which is illustrated in Fig. 1.
Fig. 1. A 2-tuple linguistic representation of (Normal, −0.3) on the scale from Very Low to Very High.
When an expert expresses his/her judgment by a linguistic 2-tuple (l, α), l denotes the nearest linguistic term in the predefined term set S and α represents the expert's deviation from that linguistic term. For example, suppose a company thinks that the possibility of further extension of a location is "almost very high". In this scenario, the location's rating can be quantified by the linguistic 2-tuple (l6, α): the rating of the location is not exactly l6 but a little less than l6, which can be modeled by a negative value of α.

Definition 3: Let S = {l0, l1, ..., lh} be a linguistic term set. For any linguistic 2-tuple (li, αi), its equivalent numerical value is obtained by the following function:

\[
\Delta^{-1} : S \times [-0.5, 0.5) \to [0, h], \qquad \Delta^{-1}(l_i, \alpha_i) = i + \alpha_i = \beta_i
\]

where βi ∈ [0, h].

Example 2: Assume that S = {l0, l1, l2, l3, l4, l5, l6} represents a linguistic term set and that (l3, −0.3) is a linguistic 2-tuple. Based on Definition 3, the equivalent numerical value of (l3, −0.3) is ∆−1(l3, −0.3) = 3 + (−0.3) = 2.7.

From Definition 2 and Definition 3, it follows that any linguistic term can be converted into a linguistic 2-tuple as l ∈ S ⇒ (l, 0). The ordering of two linguistic 2-tuples (lm, αm) and (ln, αn) is done according to the lexicographic order as follows:
(1) If m > n then (lm, αm) > (ln, αn).
(2) If m = n then
(a) (lm, αm) = (ln, αn) for αm = αn;
(b) (lm, αm) > (ln, αn) for αm > αn.
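The conversion functions ∆ and ∆−1 and the lexicographic comparison above are simple enough to state in code. The following Python sketch is our own minimal illustration under the conventions of Definitions 2 and 3 (the function names delta, delta_inv and compare are ours, not from the paper):

```python
def delta(beta: float, h: int = 6) -> tuple[int, float]:
    """Convert a symbolic aggregation result beta in [0, h] to a 2-tuple (i, alpha)."""
    i = int(round(beta))        # index of the closest label
    i = max(0, min(h, i))       # stay on the scale
    return i, beta - i          # alpha is the symbolic translation

def delta_inv(i: int, alpha: float) -> float:
    """Convert a 2-tuple (l_i, alpha) back to its numerical value beta."""
    return i + alpha

def compare(t1: tuple[int, float], t2: tuple[int, float]) -> int:
    """Lexicographic comparison: -1, 0 or 1 as t1 <, =, > t2."""
    b1, b2 = delta_inv(*t1), delta_inv(*t2)
    return (b1 > b2) - (b1 < b2)

print(delta(2.7))                    # (3, -0.3 approximately), i.e. (Normal, -0.3) of Example 1
print(delta_inv(3, -0.3))            # 2.7, as in Example 2
print(compare((3, 0.3), (4, -0.4)))  # -1: (l3, 0.3) < (l4, -0.4)
```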
B. A new concept of linguistic similarity measure of 2-tuples

In the literature, a similarity measure between linguistic 2-tuples was proposed in [7]. However, the existing similarity measure computes the similarity degree as an exact numerical value. As linguistic descriptions are easily understood and interpreted by human beings even when concepts are abstract, a natural question at this stage is whether it is reasonable to represent the similarity between two linguistic terms by a precise value [37]. For example, suppose we want to develop a consensus support system (CSS) for decision making so that it can be useful for a customer who need not be knowledgeable about the computation of linguistic 2-tuples. In a CSS it may be suggested that experts whose opinions are "moderately similar" are in consensus. The customer can comfortably understand this suggestion without knowing the method lying behind the computation of the linguistic term "moderately similar". In view of this, we are of the opinion that the similarity between two linguistic opinions should be expressed in a linguistic manner. This fact motivates us to define the concept of a linguistic similarity measure between two linguistic 2-tuples. Regarding the definition of the linguistic similarity measure, from the axiomatic point of view we are inspired by the idea of Bustince et al. [38]. Namely, the similarity should be symmetric, and the similarity of two identical linguistic 2-tuples should be maximal. On the other hand, the similarity of the most distinct pair, namely (l0, 0) and (lh, 0), should be minimal. Moreover, our similarity should possess a kind of monotonicity: the similarity of the linguistic 2-tuples (li, αi) and (lk, αk) cannot be larger than the similarity of (li, αi) and (lj, αj) whenever (li, αi) ≤ (lj, αj) ≤ (lk, αk). Among several possible choices, we propose the following rather natural definition of a linguistic similarity measure.

Definition 4: Let S = {l0, l1, ..., lh} be a linguistic term set as defined in section 2. Consider a new linguistic term set S′ = {s′0, s′1, ..., s′h} whose term s′i represents the linguistic evaluation of the similarity between any two linguistic terms lp and lr from S such that |p − r| = h − i. Then the linguistic similarity degree between any two linguistic 2-tuples (lp, αp) and (lr, αr) is given as follows:

\[
sim((l_p, \alpha_p), (l_r, \alpha_r)) = \Delta_{S'}\big(h - |\Delta_S^{-1}(l_p, \alpha_p) - \Delta_S^{-1}(l_r, \alpha_r)|\big) \qquad (1)
\]

Example 3: Assume that

S′ = {s′0 = perfectly dissimilar, s′1 = close to perfectly dissimilar, s′2 = moderately dissimilar, s′3 = medium similar, s′4 = moderately similar, s′5 = close to perfectly similar, s′6 = perfectly similar}

is the linguistic term set used to represent the similarity between two linguistic terms of S = {l0, l1, l2, l3, l4, l5, l6}. The similarity between the two linguistic 2-tuples (l3, 0.3) and (l4, −0.4) is

\[
sim((l_3, 0.3), (l_4, -0.4)) = \Delta_{S'}\big(6 - |\Delta^{-1}(l_3, 0.3) - \Delta^{-1}(l_4, -0.4)|\big) = \Delta_{S'}(5.7) = (s'_6, -0.3)
\]

The above evaluation indicates that the linguistic similarity degree of (l3, 0.3) and (l4, −0.4) is slightly less than "perfectly similar".

Based on Definition 4, we can define the similarity between two collections of linguistic 2-tuples in the following way, again inspired by the idea proposed in [38].

Definition 5: Let A = ((lj1, αj1), (lj2, αj2), ..., (ljm, αjm)) and B = ((lk1, γk1), (lk2, γk2), ..., (lkm, γkm)) be two collections of linguistic 2-tuples. Then the linguistic similarity between A and B is defined as follows:

\[
sim(A, B) = \Delta_{S'}\Big(\frac{1}{m}\sum_{i=1}^{m} \Delta_{S'}^{-1}\big(sim((l_{ji}, \alpha_{ji}), (l_{ki}, \gamma_{ki}))\big)\Big) \qquad (2)
\]

Note that, formally, the similarity of linguistic 2-tuples (or of collections of linguistic 2-tuples) could be expressed as a real value from the interval [0, h]. However, the interpretation would then be outside the linguistic scope, which we believe to be preferable for the customers, and thus we prefer to deal with the proposed concept of linguistic similarity.
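A compact way to see Definitions 4 and 5 at work is the sketch below, which operates directly on the numerical 2-tuple values β = ∆−1(l, α). It is an illustration we added (the helper names sim_pair and sim_collection are ours); it reproduces the value (s′6, −0.3) of Example 3.

```python
H = 6  # index of the largest label on both S and S'

def to_beta(term: int, alpha: float) -> float:
    # Delta^{-1}: numerical value of a 2-tuple
    return term + alpha

def to_tuple(beta: float) -> tuple[int, float]:
    # Delta: back to a 2-tuple on the similarity scale S'
    i = int(round(beta))
    return i, beta - i

def sim_pair(a: tuple[int, float], b: tuple[int, float]) -> tuple[int, float]:
    """Linguistic similarity of two 2-tuples, Eq. (1)."""
    return to_tuple(H - abs(to_beta(*a) - to_beta(*b)))

def sim_collection(A, B) -> tuple[int, float]:
    """Linguistic similarity of two equally long collections of 2-tuples, Eq. (2)."""
    betas = [to_beta(*sim_pair(a, b)) for a, b in zip(A, B)]
    return to_tuple(sum(betas) / len(betas))

print(sim_pair((3, 0.3), (4, -0.4)))   # (6, -0.3): slightly below "perfectly similar"
print(sim_collection([(3, 0.3), (5, 0.0)], [(4, -0.4), (5, 0.1)]))
```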
IV. BONFERRONI MEAN AND ITS EXTENSION

In its original form, BM is a mean-type aggregation operator; it was analyzed by Yager [26].

Definition 6: Let (a1, a2, ..., an), n ≥ 2, be a collection of non-negative real values and let p ≥ 0 and q ≥ 0. The general BM of the collection (a1, a2, ..., an) is defined as

\[
BM^{p,q}(a_1, a_2, \ldots, a_n) = \Bigg(\frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ i \neq j}}^{n} a_i^p a_j^q\Bigg)^{\frac{1}{p+q}} \qquad (3)
\]

BM has mainly been used in multi-attribute decision making (MADM) to assess alternatives' performances under inter-related attributes. Here, our main purpose is to analyze BM in the context of the MADM problem. To this aim, the special case of BM with p = q = 1 is considered. Then (3) becomes

\[
BM^{1,1}(a_1, a_2, \ldots, a_n) = \Bigg(\frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ i \neq j}}^{n} a_i a_j\Bigg)^{\frac{1}{2}} \qquad (4)
\]

Here, aj denotes the satisfaction degree of the alternative x with respect to the attribute Aj, and the product operation is used to implement an "anding" of attribute satisfactions. With this assumption in the background, Yager [26] provided an interpretation of BM as computing the average of the satisfaction of each pair of attributes Ai AND Aj. Then, (4) is transformed by Yager [26] into the following equation:

\[
BM^{1,1}(a_1, a_2, \ldots, a_n) = \Bigg(\frac{1}{n}\sum_{i=1}^{n} a_i \Big(\frac{1}{n-1}\sum_{\substack{j=1 \\ j \neq i}}^{n} a_j\Big)\Bigg)^{\frac{1}{2}} \qquad (5)
\]
where \(\frac{1}{n-1}\sum_{j=1, j\neq i}^{n} a_j\) denotes the average satisfaction degree of all the attributes except Ai, and \(a_i\big(\frac{1}{n-1}\sum_{j\neq i} a_j\big)\) models the conjunction of the satisfaction of Ai with the average satisfaction of the rest of the attributes.

The foregoing discussion shows that when BM is used to compute the alternative x's overall performance, it assumes that each attribute Ai has a relationship with the rest of the attributes A \ {Ai}. This relationship is depicted in Fig. 2. However, in real-life MADM problems attributes may not always follow this kind of homogeneous interrelationship pattern. There may be some attributes Ai which are related only to a non-empty subset Bi of the set A \ {Ai}, while others have no relationship with the remaining attributes. So there is a need for an aggregation operator that enables the modelling of this kind of heterogeneous relationship among attributes. This is the aspect that inspired us to extend the concept of BM so that we can model the interrelationships among the attributes in the MADM scenario in a more intuitive manner.
Fig. 2. Interrelationships among attributes, where Ai − Aj represents that Ai is related to Aj.

A. Extended Bonferroni mean

Let A = (a1, a2, ..., an) be a collection of inputs related to the attributes A = {A1, A2, ..., An}; the ai's are non-negative real numbers. The attributes are heterogeneously related to each other. Based on their relationship pattern, they can be classified into two disjoint sets C and D, where each attribute Ai from C is related to a non-empty subset of attributes Bi ⊂ C(⊂ A) \ {Ai} (therefore, attributes from C will be called dependent), while each attribute Aj from D is not related to any other attribute from A \ {Aj} (therefore, attributes from D will be called independent). Let Ii denote the set of indices of the attributes from Bi, let I′ denote the set of indices of the attributes in D, and let card(I′) denote the cardinality of the set I′. With these assumptions and notations in the background, the EBM of the collection of inputs (a1, a2, ..., an) is defined as follows.

Definition 7: For any p > 0 and q ≥ 0, the EBM aggregation operator of dimension n is a mapping EBM : (R+)^n → R+ such that

\[
EBM^{p,q}(a_1, \ldots, a_n) = \Bigg[\frac{n - card(I')}{n}\Bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} a_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q\Big)\Bigg)^{\frac{p}{p+q}} + \frac{card(I')}{n}\Bigg(\frac{1}{card(I')}\sum_{i \in I'} a_i^p\Bigg)\Bigg]^{\frac{1}{p}} \qquad (6)
\]

where the empty sum is 0 by convention (i.e., if card(I′) = 0 this concerns the last sum, and if card(I′) = n this concerns the first sum), and we have made the convention 0/0 = 0 [39]–[41] (in fact, we only need to define 0/0; its conventional real value is not important here). The relationship between the attributes is depicted in Fig. 3, where C = {A_{h1}, A_{h2}, ..., A_{hu}} represents the set of dependent attributes and the subset B_{h1} = {A^1_{h1}, A^2_{h1}, ..., A^{t1}_{h1}} of C denotes the set of attributes that are related to A_{h1}.

Fig. 3. Heterogeneous interrelationships among attributes, where Ai − Aj represents that Ai is related to Aj. The attributes {A_{h1}, A_{h2}, ..., A_{hu}} are dependent and each A_{hi} ∈ C is related to a subset B_{hi} of C.
We shall first describe how we interpret the EBM operator when modelling heterogeneous relationships among the attributes during the aggregation step of the MADM process. For this purpose, we consider the particular case p = q = 1. Let (a1, a2, ..., an) be the satisfaction degrees of an alternative against the attributes {A1, A2, ..., An}. Then, using the proposed aggregation operator (6), denoted as EBM^{1,1}, we get as our aggregated value:

\[
EBM^{1,1}(a_1, \ldots, a_n) = \frac{n - card(I')}{n}\Bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} a_i\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j\Big)\Bigg)^{\frac{1}{2}} + \frac{card(I')}{n}\Bigg(\frac{1}{card(I')}\sum_{i \in I'} a_i\Bigg) \qquad (7)
\]

It is important to note here that \(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j\) indicates the average satisfaction degree of the subset of attributes Bi ⊂ C \ {Ai} which are related to Ai. The expression \(a_i\big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j\big)\) then models the conjunction of the satisfaction of the attribute Ai with the average satisfaction
of its inter-related attributes Bi. The evaluation of \(\Big(\frac{1}{n - card(I')}\sum_{i \notin I'} a_i\big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j\big)\Big)^{1/2}\) then gives the satisfaction of the dependent attributes by averaging the evaluations of each statement "satisfaction of Ai and the average satisfaction of its inter-related attributes Bi". On the other hand, \(\frac{1}{card(I')}\sum_{i \in I'} a_i\) indicates the average satisfaction degree of the independent attributes. Finally, by EBM^{1,1}(a1, a2, ..., an) we compute the average satisfaction degree of the heterogeneously related attributes.

Depending on the nature of the set I′, the proposed EBM operator reduces to three particular cases:

1) if card(I′) = n, i.e., if all input arguments are independent, we obtain the power-root arithmetic mean given below, independent of q:

\[
EBM^{p,q}(a_1, \ldots, a_n) = \Big(\frac{1}{n}\sum_{i=1}^{n} a_i^p\Big)^{\frac{1}{p}}
\]

2) if card(I′) = 0 and each input argument is dependent on all other input arguments, the Bonferroni mean BM^{p,q} is recovered as follows:

\[
EBM^{p,q}(a_1, \ldots, a_n) = \Bigg[\frac{n-0}{n}\Bigg(\frac{1}{n-0}\sum_{i=1}^{n} a_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q\Big)\Bigg)^{\frac{p}{p+q}} + 0\Bigg]^{\frac{1}{p}} = \Bigg(\frac{1}{n}\sum_{i=1}^{n} a_i^p\Big(\frac{1}{n-1}\sum_{\substack{j=1 \\ j \neq i}}^{n} a_j^q\Big)\Bigg)^{\frac{1}{p+q}} = \Bigg(\frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ i \neq j}}^{n} a_i^p a_j^q\Bigg)^{\frac{1}{p+q}} = BM^{p,q}(a_1, \ldots, a_n) \qquad (8)
\]

3) if card(I′) = 0 and each input argument is dependent on some other, but not necessarily all, input arguments, then (6) becomes

\[
EBM^{p,q}(a_1, \ldots, a_n) = \Bigg(\frac{1}{n}\sum_{i=1}^{n} a_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q\Big)\Bigg)^{\frac{1}{p+q}} \qquad (9)
\]
From the construction of the EBM operator, one may realize that the aggregated value computed by EBM depends on the interrelationships among the inputs. To support this idea, we present two examples in which we provide the same set of inputs with different interrelationship patterns and perform the aggregation with both the EBM and the BM operator.

Example 4: Let us consider the inputs a1 = 0.5, a2 = 0.6, a3 = 0.4 and a4 = 0.7. Assume that input a1 is related to the subset of inputs B1 = {a2, a3, a4}, a2 is related to B2 = {a1, a3}, a3 is related to B3 = {a1, a2} and a4 is related to B4 = {a1}; then I1 = {2, 3, 4}, I2 = {1, 3}, I3 = {1, 2} and I4 = {1}. Clearly, all the inputs are dependent, i.e., card(I′) = 0, and every input is dependent on some other, but not all, inputs. Therefore, we can utilize (9) to compute the aggregated value of the inputs. For the sake of simplicity in computation, we take p = q = 1 in (9) and obtain the aggregated value of the inputs as follows:

\[
EBM^{1,1}(0.5, 0.6, 0.4, 0.7) = \Bigg[\frac{1}{4}\Big(0.5\cdot\tfrac{0.6 + 0.4 + 0.7}{3} + 0.6\cdot\tfrac{0.5 + 0.4}{2} + 0.4\cdot\tfrac{0.5 + 0.6}{2} + 0.7\cdot 0.5\Big)\Bigg]^{\frac{1}{2}} = \Big[\tfrac{1}{4}(0.5 \times 0.57 + 0.6 \times 0.45 + 0.4 \times 0.55 + 0.7 \times 0.5)\Big]^{\frac{1}{2}} = 0.527
\]

Using the BM operator ((3) with p = q = 1), the aggregated value of the inputs is

\[
BM^{1,1}(0.5, 0.6, 0.4, 0.7) = \Bigg[\frac{1}{4}\Big(0.5\cdot\tfrac{0.6 + 0.4 + 0.7}{3} + 0.6\cdot\tfrac{0.5 + 0.4 + 0.7}{3} + 0.4\cdot\tfrac{0.5 + 0.6 + 0.7}{3} + 0.7\cdot\tfrac{0.5 + 0.6 + 0.4}{3}\Big)\Bigg]^{\frac{1}{2}} = \Big[\tfrac{1}{4}(0.5 \times 0.57 + 0.6 \times 0.53 + 0.4 \times 0.6 + 0.7 \times 0.5)\Big]^{\frac{1}{2}} = 0.546
\]

Example 5: Let us consider the same inputs as in Example 4 with a different relationship among the input arguments. Suppose input a1 is related to {a3}, input a2 is related to {a3, a4}, input a3 is related to {a1, a2} and input a4 is related to {a2}. As there are no independent attributes and every input is dependent on some other, but not all, inputs, we can again utilize (9) with p = q = 1 to aggregate the inputs:

\[
EBM^{1,1}(0.5, 0.6, 0.4, 0.7) = \Bigg[\frac{1}{4}\Big(0.5\cdot 0.4 + 0.6\cdot\tfrac{0.7 + 0.4}{2} + 0.4\cdot\tfrac{0.5 + 0.6}{2} + 0.7\cdot 0.6\Big)\Bigg]^{\frac{1}{2}} = \Big[\tfrac{1}{4}(0.5 \times 0.4 + 0.6 \times 0.55 + 0.4 \times 0.55 + 0.7 \times 0.6)\Big]^{\frac{1}{2}} = 0.541
\]

The aggregated value obtained by the BM operator ((3) with p = q = 1) remains BM^{1,1}(0.5, 0.6, 0.4, 0.7) = 0.546.
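As a sanity check on these computations, the following self-contained Python sketch implements the card(I′) = 0 case of the proposed operator, Eq. (9), together with the classical BM of Eq. (3), and reproduces the values of Examples 4 and 5 (the function names ebm and bm are our own):

```python
from itertools import permutations

def ebm(a, I, p=1.0, q=1.0):
    """Eq. (9): EBM^{p,q} when every input depends on a non-empty index set I[i] (0-based)."""
    n = len(a)
    total = sum(a[i] ** p * (sum(a[j] ** q for j in I[i]) / len(I[i])) for i in range(n))
    return (total / n) ** (1.0 / (p + q))

def bm(a, p=1.0, q=1.0):
    """Eq. (3): classical Bonferroni mean."""
    n = len(a)
    total = sum(a[i] ** p * a[j] ** q for i, j in permutations(range(n), 2))
    return (total / (n * (n - 1))) ** (1.0 / (p + q))

a = [0.5, 0.6, 0.4, 0.7]
I_ex4 = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}   # relationship pattern of Example 4
I_ex5 = {0: [2], 1: [2, 3], 2: [0, 1], 3: [1]}          # relationship pattern of Example 5

print(round(ebm(a, I_ex4), 3))   # 0.53 (the paper reports 0.527 using rounded averages)
print(round(ebm(a, I_ex5), 3))   # 0.541
print(round(bm(a), 3))           # 0.546 in both cases
```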
In Examples 4 and 5, the same set of inputs is considered, although the interrelationships among the input arguments are different. Obviously, BM yields the same aggregated value in both cases; hence the obtained result is not satisfactory, and a more general aggregation operator is needed. With EBM, we get different aggregated values for the given inputs. This observation clearly indicates that our new aggregation operator models the interrelationship pattern among the inputs in a more intuitive way.

In order to explore the modelling capability of the proposed EBM operator in comparison with some well-known aggregation operators, including BM, we provide another example. Consider Table I, showing two locations with their satisfaction degrees against four attributes evaluated by a multinational company.

TABLE I
SATISFACTION OF LOCATIONS AGAINST MULTIPLE ATTRIBUTES

Attribute | Location 1 (L1) | Location 2 (L2)
Labour characteristic (A1) | 0.3 | 0
Availability of raw materials (A2) | 0.45 | 0.2
Possibility of further extension (A3) | 0 | 0.3
Political scenario (A4) | 0.5 | 0.4

The company finds that the attributes have the following relationships: A1, A2 and A4 are influenced by A3, i.e., A3 is inter-related with A1, A2 and A4, whereas A1, A2 and A4 have no interrelationship among themselves. If the company employs BM for aggregating the locations' total satisfaction, L1, with total satisfaction 0.2915 (obtained by using Eq. (4) from L1's individual satisfactions under the different attributes, i.e., (0.3, 0.45, 0, 0.5) as provided in Table I), would be more suitable even though the satisfaction degrees of all the inter-related pairs of attributes (A1A3, A2A3, A4A3), i.e., (0.3 × 0, 0.45 × 0, 0.5 × 0), are zero. With the proposed operator, we can enforce that the satisfaction degrees of at least one pair among the inter-related pairs of attributes Ai and Aj must be above zero for a non-zero output. Thus, by using the proposed operator, we can model the exact relationship among the attributes. Before computing the total satisfactions of L1 and L2 with the proposed operator, we describe the relationship among the attributes more specifically as follows: A1 is related to {A3}, A2 is related to {A3}, A3 is related to {A1, A2, A4} and A4 is related to {A3}. By employing the proposed EBM operator and the individual satisfaction values of the locations L1 (i.e., (0.3, 0.45, 0, 0.5)) and L2 (i.e., (0, 0.2, 0.3, 0.4)) from Table I, the total satisfactions of L1 and L2 are obtained as 0 and 0.245, respectively. Therefore, by using the proposed operator, we capture the exact relationship among the attributes and can select L2 as the more suitable location.

TABLE II
OVERALL SATISFACTION OF THE ALTERNATIVES UNDER DIFFERENT AGGREGATION OPERATORS

Aggregation Operators | Overall Satisfaction of L1 | Overall Satisfaction of L2
Arithmetic Mean | 0.3125 | 0.225
Geometric Mean | 0 | 0
BM | 0.2915 | 0.2082
EBM | 0 | 0.245

From Table II, we can also see that if the company were to average the satisfactions by using the arithmetic mean, L1's zero satisfaction for A3 would be compensated by the satisfactions for A1, A2 and A4, neglecting their interrelationships. On the other hand, if the company were to average the satisfactions by using the geometric mean, it would be unable to differentiate between L1 and L2, as both possess zero average satisfaction. Thus, EBM
shows a certain advantage by capturing the interrelationship among the aggregated arguments more intuitively than some of the existing aggregation operators.

Now, we investigate the desirable properties of the EBM operator.

Theorem 1 (Idempotency): If all the input arguments are equal, i.e., ai = a for all i, then

\[
EBM^{p,q}(a, a, \ldots, a) = a \qquad (10)
\]

Proof: From (6), we have

\[
EBM^{p,q}(a, \ldots, a) = \Bigg[\frac{n - card(I')}{n}\Bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} a^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a^q\Big)\Bigg)^{\frac{p}{p+q}} + \frac{card(I')}{n}\Bigg(\frac{1}{card(I')}\sum_{i \in I'} a^p\Bigg)\Bigg]^{\frac{1}{p}} = \Bigg[\frac{n - card(I')}{n}\big(a^{p+q}\big)^{\frac{p}{p+q}} + \frac{card(I')}{n}a^p\Bigg]^{\frac{1}{p}} = \Bigg[\frac{n - card(I')}{n}a^p + \frac{card(I')}{n}a^p\Bigg]^{\frac{1}{p}} = a
\]

Theorem 2 (Monotonicity): Let (a1, a2, ..., an) and (b1, b2, ..., bn) be two collections of input arguments such that ai ≤ bi for all i. If both input sets have the same kind of interrelationship among the arguments, then

\[
EBM^{p,q}(a_1, a_2, \ldots, a_n) \leq EBM^{p,q}(b_1, b_2, \ldots, b_n) \qquad (11)
\]

Proof: Since ai ≤ bi for all i, we have

\[
\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q \leq \frac{1}{card(I_i)}\sum_{j \in I_i} b_j^q \quad \text{for all } q \geq 0 \text{ and all } I_i \ (i = 1, 2, \ldots, n),
\]

and hence

\[
a_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q\Big) \leq b_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} b_j^q\Big) \quad \text{for all } q \geq 0 \text{ and } i = 1, 2, \ldots, n.
\]
It follows that

\[
\Bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} a_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} a_j^q\Big)\Bigg)^{\frac{p}{p+q}} \leq \Bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} b_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} b_j^q\Big)\Bigg)^{\frac{p}{p+q}} \qquad (12)
\]

Clearly,

\[
\sum_{i \in I'} a_i^p \leq \sum_{i \in I'} b_i^p \quad \text{for all } p \geq 0 \qquad (13)
\]

From (12) and (13), we obtain EBM^{p,q}(a1, a2, ..., an) ≤ EBM^{p,q}(b1, b2, ..., bn).

Corollary 1 (Boundedness): Let au = max_i ai and al = min_i ai. Then the aggregated value by EBM satisfies

\[
a_l \leq EBM^{p,q}(a_1, a_2, \ldots, a_n) \leq a_u \qquad (14)
\]

Proof: Boundedness is a consequence of idempotency and monotonicity, i.e., Theorems 1 and 2.

V. 2-TUPLE LINGUISTIC EXTENDED BONFERRONI MEANS

Based on (6), we define the 2-tuple linguistic extended Bonferroni mean operator as follows.

Definition 8: Let ST be the set of all linguistic 2-tuples on the linguistic term set S and let (gi, αi) (i = 1, 2, ..., n) be a collection of linguistic 2-tuples. For any p > 0 and q ≥ 0, the 2TLEBM aggregation operator of dimension n is a mapping 2TLEBM : (ST)^n → ST such that

\[
2TLEBM^{p,q}((g_1, \alpha_1), \ldots, (g_n, \alpha_n)) = \Delta\Bigg(\Bigg[\frac{n - card(I')}{n}\Bigg(\frac{1}{n - card(I')}\sum_{i \notin I'} \beta_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} \beta_j^q\Big)\Bigg)^{\frac{p}{p+q}} + \frac{card(I')}{n}\Bigg(\frac{1}{card(I')}\sum_{i \in I'} \beta_i^p\Bigg)\Bigg]^{\frac{1}{p}}\Bigg) \qquad (15)
\]

where βi = ∆−1(gi, αi).

It is important to note that, based on the cardinality of the set of independent arguments, we can derive the following three particular cases:

1) if card(I′) = n, i.e., all the arguments are independent, we obtain the 2-tuple linguistic power-root arithmetic mean:

\[
2TLEBM^{p,q}((g_1, \alpha_1), \ldots, (g_n, \alpha_n)) = \Delta\Bigg(\Big(\frac{1}{n}\sum_{i=1}^{n} \beta_i^p\Big)^{\frac{1}{p}}\Bigg)
\]

2) if card(I′) = 0 and each argument is dependent on all other arguments, 2TLEBM reduces to the 2-tuple linguistic Bonferroni mean (2TLBM):

\[
2TLEBM^{p,q}((g_1, \alpha_1), \ldots, (g_n, \alpha_n)) = \Delta\Bigg(\Big(\frac{1}{n(n-1)}\sum_{\substack{i,j=1 \\ i \neq j}}^{n} \beta_i^p \beta_j^q\Big)^{\frac{1}{p+q}}\Bigg) \qquad (16)
\]

3) if card(I′) = 0 and each argument is dependent on some other, but not necessarily all, arguments, then (15) becomes

\[
2TLEBM^{p,q}((g_1, \alpha_1), \ldots, (g_n, \alpha_n)) = \Delta\Bigg(\Big(\frac{1}{n}\sum_{i=1}^{n} \beta_i^p\Big(\frac{1}{card(I_i)}\sum_{j \in I_i} \beta_j^q\Big)\Big)^{\frac{1}{p+q}}\Bigg) \qquad (17)
\]

In the following list, we consider some special cases of the 2TLEBM operator obtained by taking different values of the parameters p and q.

1) When q = 0, the 2TLEBM operator defined in (15) reduces to the 2-tuple linguistic power arithmetic mean (the proof is given in Appendix A). In this case, no interrelationship between the 2-tuple linguistic information is captured.

2) When p = 1 and q = 0, the 2TLEBM operator defined in (15) reduces to the 2-tuple linguistic arithmetic mean:

\[
2TLEBM^{1,0}((g_1, \alpha_1), \ldots, (g_n, \alpha_n)) = \Delta\Big(\frac{1}{n}\sum_{i=1}^{n} \beta_i\Big) \qquad (18)
\]

3) When p → 0 and q = 0, the 2TLEBM operator defined in (15) reduces to the 2-tuple linguistic geometric mean operator:

\[
\lim_{p \to 0} 2TLEBM^{p,0}((g_1, \alpha_1), \ldots, (g_n, \alpha_n)) = \Delta\Bigg(\lim_{p \to 0}\Big(\frac{1}{n}\sum_{i=1}^{n} \beta_i^p\Big)^{\frac{1}{p}}\Bigg) = \Delta\Bigg(\Big(\prod_{i=1}^{n} \beta_i\Big)^{\frac{1}{n}}\Bigg) \qquad (19)
\]

A. Properties of 2TLEBM

The desirable properties of the 2TLEBM aggregation operator are described in the following theorems (their validity follows from Theorem 1, Theorem 2 and Corollary 1).

Theorem 3 (Idempotency): If all the linguistic 2-tuples are equal, i.e., (gi, αi) = (l, α) (i = 1, ..., n), then

\[
2TLEBM^{p,q}((g_1, \alpha_1), (g_2, \alpha_2), \ldots, (g_n, \alpha_n)) = (l, \alpha) \qquad (20)
\]

Theorem 4 (Monotonicity): Consider two collections of linguistic 2-tuple input arguments ((g1, α1), (g2, α2), ..., (gn, αn)) and ((g′1, α′1), (g′2, α′2), ..., (g′n, α′n)) with (g′i, α′i) ≤ (gi, αi) for all i. We further assume that both input sets have the same kind of interrelationship among the input arguments.
Then

\[
2TLEBM^{p,q}((g'_1, \alpha'_1), (g'_2, \alpha'_2), \ldots, (g'_n, \alpha'_n)) \leq 2TLEBM^{p,q}((g_1, \alpha_1), (g_2, \alpha_2), \ldots, (g_n, \alpha_n)) \qquad (21)
\]

Theorem 5 (Boundedness): For any collection of linguistic 2-tuples ((g1, α1), (g2, α2), ..., (gn, αn)),

\[
\min_i (g_i, \alpha_i) \leq 2TLEBM^{p,q}((g_1, \alpha_1), (g_2, \alpha_2), \ldots, (g_n, \alpha_n)) \leq \max_i (g_i, \alpha_i) \qquad (22)
\]

B. Weighted forms of 2TLEBM

In the aforementioned analysis, only the input arguments and their pattern of interrelationships are considered in the aggregation process; the importances of the inputs are not emphasized. In many practical applications the input arguments may have different importances or weights. Thus, it is worthwhile to define the aggregation operator by considering the weights of the input arguments. In view of this, we define two weighted forms of 2TLEBM.

1) Numerical weighted form of 2TLEBM: In this case, the weights of the inputs are completely known and they are represented by exact numerical values.

Definition 9: Let ((g1, α1), (g2, α2), ..., (gn, αn)) be a collection of linguistic 2-tuples. For any p > 0 and q ≥ 0, the weighted 2-tuple linguistic extended Bonferroni mean (W2TLEBM) aggregation operator of dimension n is a mapping W2TLEBM : (ST)^n → ST such that

\[
W2TLEBM^{p,q}((g_1, \alpha_1), \ldots, (g_n, \alpha_n)) = \Delta\Bigg(\Bigg[\Big(1 - \sum_{i \in I'} w_i\Big)\Bigg(\frac{1}{1 - \sum_{i \in I'} w_i}\sum_{i \notin I'} w_i \beta_i^p\Big(\frac{1}{\sum_{j \in I_i} w_j}\sum_{j \in I_i} w_j \beta_j^q\Big)\Bigg)^{\frac{p}{p+q}} + \Big(\sum_{i \in I'} w_i\Big)\Bigg(\frac{1}{\sum_{i \in I'} w_i}\sum_{i \in I'} w_i \beta_i^p\Bigg)\Bigg]^{\frac{1}{p}}\Bigg) \qquad (23)
\]

where βi = ∆−1(gi, αi) and wi (i = 1, 2, ..., n) indicates the relative importance of (gi, αi), with wi ≥ 0 and Σ_{i=1}^{n} wi = 1.

When each (gi, αi) (i = 1, 2, ..., n) has equal importance, i.e., w = (1/n, 1/n, ..., 1/n)^T, then (23) becomes the 2TLEBM operator (15) (the proof is given in Appendix B). When p = 1 and q = 0, (23) reduces to the weighted 2-tuple linguistic arithmetic mean (W2TLAM) (the proof is given in Appendix C).

2) Linguistic weighted form of 2TLEBM: In some situations, it is not possible to determine the relative importances of the inputs as exact numerical values due to lack of information, time pressure and incomplete knowledge of the studied system. For example, in MADM it may be a difficult task for the experts to assign the attributes' weights as exact numerical values, whereas they may be fully comfortable providing the importances of the attributes in linguistic terms. Thus, an aggregation operator is needed to compute the overall ratings of the alternatives when the importance of the attributes is expressed in linguistic terms. In order to take account of linguistic weights of the inputs, we define the linguistic weighted form of 2TLEBM as follows.

Definition 10: Let ((g1, α1), (g2, α2), ..., (gn, αn)) be a collection of linguistic 2-tuples and let (ui, θi) (differing from (l0, 0) for at least one i) be the associated linguistic importance of the i-th input. Then, for any p > 0 and q ≥ 0, the linguistic weighted 2-tuple linguistic extended Bonferroni mean (LW-2TLEBM) aggregation operator of dimension n is a mapping LW-2TLEBM : (ST)^n → ST such that

\[
LW\text{-}2TLEBM^{p,q}((g_1, \alpha_1), \ldots, (g_n, \alpha_n)) = \Delta\Bigg(\Bigg[\Big(1 - \sum_{i \in I'} w'_i\Big)\Bigg(\frac{1}{1 - \sum_{i \in I'} w'_i}\sum_{i \notin I'} w'_i \beta_i^p\Big(\frac{1}{\sum_{j \in I_i} w'_j}\sum_{j \in I_i} w'_j \beta_j^q\Big)\Bigg)^{\frac{p}{p+q}} + \Big(\sum_{i \in I'} w'_i\Big)\Bigg(\frac{1}{\sum_{i \in I'} w'_i}\sum_{i \in I'} w'_i \beta_i^p\Bigg)\Bigg]^{\frac{1}{p}}\Bigg) \qquad (24)
\]

where w′i = ∆−1(ui, θi) / Σ_{j=1}^{n} ∆−1(uj, θj). Clearly, w′j ≥ 0 and Σ_{j=1}^{n} w′j = 1.

As earlier, based on the cardinality of the set of independent arguments, we derive three particular cases:

1) if card(I′) = n, i.e., if all the arguments are independent, we obtain the linguistic weighted 2-tuple linguistic power-root arithmetic mean:

\[
LW\text{-}2TLEBM^{p,q}((g_1, \alpha_1), \ldots, (g_n, \alpha_n)) = \Delta\Bigg(\Big(\sum_{i=1}^{n} w'_i \beta_i^p\Big)^{\frac{1}{p}}\Bigg)
\]

2) if card(I′) = 0 and each argument is dependent on all other arguments, LW-2TLEBM reduces to the linguistic weighted 2-tuple linguistic Bonferroni mean (LW-2TLBM):

\[
LW\text{-}2TLEBM^{p,q}((g_1, \alpha_1), \ldots, (g_n, \alpha_n)) = \Delta\Bigg(\Big(\sum_{\substack{i,j=1 \\ i \neq j}}^{n} \frac{w'_i w'_j}{1 - w'_i}\,\beta_i^p \beta_j^q\Big)^{\frac{1}{p+q}}\Bigg) \qquad (25)
\]

3) if card(I′) = 0 and each argument is dependent on some other, but not necessarily all, arguments, then LW-2TLEBM becomes

\[
LW\text{-}2TLEBM^{p,q}((g_1, \alpha_1), \ldots, (g_n, \alpha_n)) = \Delta\Bigg(\Big(\sum_{i=1}^{n} w'_i \beta_i^p\Big(\frac{1}{\sum_{j \in I_i} w'_j}\sum_{j \in I_i} w'_j \beta_j^q\Big)\Big)^{\frac{1}{p+q}}\Bigg) \qquad (26)
\]
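For readers who prefer code to formulas, the following Python sketch is one possible self-contained implementation of the LW-2TLEBM operator of Definition 10 (Eq. (24)) under our own naming conventions; dependent inputs are described by a dict of index sets I, and inputs with an empty entry are treated as independent. It is an illustrative sketch with an assumed toy relationship pattern, not the authors' implementation.

```python
def delta_inv(term: int, alpha: float) -> float:
    return term + alpha

def delta(beta: float) -> tuple[int, float]:
    i = int(round(beta))
    return i, beta - i

def lw_2tlebm(tuples, ling_weights, I, p=1.0, q=1.0):
    """Eq. (24): tuples and ling_weights are lists of 2-tuples (label index, alpha);
    I[i] is the set of indices the i-th input depends on (empty => independent)."""
    beta = [delta_inv(*t) for t in tuples]
    wsum = sum(delta_inv(*u) for u in ling_weights)
    w = [delta_inv(*u) / wsum for u in ling_weights]       # normalized linguistic weights
    dep = [i for i in range(len(beta)) if I[i]]
    ind = [i for i in range(len(beta)) if not I[i]]
    w_ind = sum(w[i] for i in ind)
    dep_part = 0.0
    if dep:
        s = sum(w[i] * beta[i] ** p *
                (sum(w[j] * beta[j] ** q for j in I[i]) / sum(w[j] for j in I[i]))
                for i in dep)
        dep_part = (1 - w_ind) * (s / (1 - w_ind)) ** (p / (p + q))
    ind_part = sum(w[i] * beta[i] ** p for i in ind)
    return delta((dep_part + ind_part) ** (1.0 / p))

# toy usage: four inputs, the first three related to the fourth, the fourth independent
x = [(3, 0.0), (4, 0.0), (2, 0.0), (5, 0.0)]            # assessments
u = [(4, 0.0), (3, 0.0), (5, 0.0), (4, 0.0)]            # linguistic weights
print(lw_2tlebm(x, u, {0: [3], 1: [3], 2: [3], 3: []})) # roughly (4, 0.07)
```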
Moreover, it can easily be proved that both the numerical and the linguistic weighted forms of 2TLEBM satisfy the idempotency, monotonicity and boundedness properties of an aggregation operator.

VI. AN APPROACH FOR MAGDM WITH LINGUISTIC ASSESSMENTS

In this section, we propose a method for solving the MAGDM problem with 2-tuple linguistic information. A MAGDM problem with linguistic information is depicted as follows. There is a group of t experts {J1, J2, ..., Jt} and a set of m alternatives X = {X1, X2, ..., Xm}. The experts' aim is to choose the best alternative among the m alternatives depending on n attributes A = {A1, A2, ..., An}. We further assume that the attributes are heterogeneously interrelated, i.e., some of the attributes, denoted Ai, are related to a subset Bi of the set A \ {Ai} and the others are independent. Each expert comes from a different background and possesses a different level of knowledge and ability, which makes their importances different in the decision making process. The weights of the experts are completely unknown here. Assume that the experts provide the weight vector of the attributes in 2-tuple linguistic form: let Wk = ((w1k, η1k), (w2k, η2k), ..., (wnk, ηnk)) be the 2-tuple linguistic weight vector of the attributes given by the expert Jk (1 ≤ k ≤ t), where wjk (1 ≤ j ≤ n) belongs to the predefined linguistic term set S and ηjk ∈ [−0.5, 0.5). The k-th expert Jk provides his/her rating of an alternative Xi (1 ≤ i ≤ m) with respect to the attribute Aj as a 2-tuple d_ij^{(k)} = (g_ij^{(k)}, α_ij^{(k)}), where g_ij^{(k)} belongs to the predefined linguistic term set S and α_ij^{(k)} ∈ [−0.5, 0.5). Expert Jk's ratings of the alternatives are summarized in the 2-tuple linguistic decision matrix Dk = (d_ij^{(k)})_{m×n}:

\[
D_k = \begin{array}{c|cccc}
 & A_1 & A_2 & \cdots & A_n \\ \hline
X_1 & d_{11}^{(k)} & d_{12}^{(k)} & \cdots & d_{1n}^{(k)} \\
X_2 & d_{21}^{(k)} & d_{22}^{(k)} & \cdots & d_{2n}^{(k)} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
X_m & d_{m1}^{(k)} & d_{m2}^{(k)} & \cdots & d_{mn}^{(k)}
\end{array}
\]

On the basis of the above decision inputs, an algorithm for solving the MAGDM problem is presented here. The objective of the algorithm is two-fold: (1) obtaining the experts' unknown weights by utilizing the linguistic similarity measure; (2) selecting the best alternative from the alternative set X with respect to the attribute set A. The steps of the proposed algorithm are as follows.

Step 1: From the k-th expert's 2-tuple linguistic decision matrix Dk = (d_ij^{(k)})_{m×n}, the alternative Xi's overall performance value (r_i^{(k)}, α_i^{(k)}) is calculated by utilizing the LW-2TLEBM operator (24), which takes the form of (27) shown at the beginning of the next page. The parameters of (27) are β_{ij1}^{(k)} = ∆_S^{−1}(g_{ij1}^{(k)}, α_{ij1}^{(k)}) and w′_{j1 k} = ∆^{−1}(w_{j1 k}, η_{j1 k}) / Σ_{j=1}^{n} ∆^{−1}(w_{jk}, η_{jk}). The above evaluation can be summarized in the matrix form

\[
O = \begin{array}{c|cccc}
 & J_1 & J_2 & \cdots & J_t \\ \hline
X_1 & (r_1^{(1)}, \alpha_1^{(1)}) & (r_1^{(2)}, \alpha_1^{(2)}) & \cdots & (r_1^{(t)}, \alpha_1^{(t)}) \\
X_2 & (r_2^{(1)}, \alpha_2^{(1)}) & (r_2^{(2)}, \alpha_2^{(2)}) & \cdots & (r_2^{(t)}, \alpha_2^{(t)}) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
X_m & (r_m^{(1)}, \alpha_m^{(1)}) & (r_m^{(2)}, \alpha_m^{(2)}) & \cdots & (r_m^{(t)}, \alpha_m^{(t)})
\end{array}
\]

where the k-th column r^{(k)} = ((r_1^{(k)}, α_1^{(k)}), (r_2^{(k)}, α_2^{(k)}), ..., (r_m^{(k)}, α_m^{(k)}))^T of the matrix O represents expert Jk's overall ratings of the alternatives X1, X2, ..., Xm.

Step 2: In this step, we construct the similarity matrix, which plays a key role in determining the experts' weights. The aim is to calculate the similarity between each pair of experts' overall ratings of the alternatives. For this purpose, consider any two experts Jk1 and Jk2 (1 ≤ k1, k2 ≤ t) among the t experts. From the matrix O, the overall ratings of the alternatives provided by the experts Jk1 and Jk2 are r^{(k1)} = ((r_1^{(k1)}, α_1^{(k1)}), ..., (r_m^{(k1)}, α_m^{(k1)}))^T and r^{(k2)} = ((r_1^{(k2)}, α_1^{(k2)}), ..., (r_m^{(k2)}, α_m^{(k2)}))^T, respectively. Since the experts provide their assessments in linguistic terms, the similarity between the experts' opinions may also be expressed in linguistic terms. In view of this, the similarity between the overall opinions of Jk1 and Jk2 on the alternatives can be calculated by the proposed linguistic similarity measure (2) of linguistic 2-tuples. Let sm_{k1 k2} be the linguistic similarity between the opinions of the experts Jk1 and Jk2, derived as follows:

\[
sm_{k_1 k_2} = sim(r^{(k_1)}, r^{(k_2)}) = \Delta_{S'}\Big(\frac{1}{m}\sum_{i=1}^{m}\Delta_{S'}^{-1}\big(sim((r_i^{(k_1)}, \alpha_i^{(k_1)}), (r_i^{(k_2)}, \alpha_i^{(k_2)}))\big)\Big) \qquad (28)
\]

The evaluation of the linguistic similarity between each pair of experts can be presented in matrix form as

\[
S = \begin{array}{c|cccc}
 & J_1 & J_2 & \cdots & J_t \\ \hline
J_1 & sm_{11} & sm_{12} & \cdots & sm_{1t} \\
J_2 & sm_{21} & sm_{22} & \cdots & sm_{2t} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
J_t & sm_{t1} & sm_{t2} & \cdots & sm_{tt}
\end{array}
\]

Step 3: Calculate the average linguistic similarity of each expert Jk (k = 1, 2, ..., t) with the rest of the experts as

\[
\overline{sm}_k = \Delta_{S'}\Big(\frac{1}{t-1}\sum_{\substack{k_1=1 \\ k_1 \neq k}}^{t}\Delta_{S'}^{-1}(sm_{k k_1})\Big) \qquad (29)
\]

Step 4: Determine the relative importance, i.e., the weight, of each expert Jk as

\[
\lambda_k = \frac{\Delta_{S'}^{-1}(\overline{sm}_k)}{\sum_{u=1}^{t}\Delta_{S'}^{-1}(\overline{sm}_u)} \qquad (30)
\]

Clearly, λk > 0 and Σ_{k=1}^{t} λk = 1.

Step 5: Finally, for each alternative Xi (i = 1, 2, ..., m), calculate the group overall rating oi = (ri, αi) (i = 1, 2, ..., m)
\[
(r_i^{(k)}, \alpha_i^{(k)}) = LW\text{-}2TLEBM^{p,q}((g_{i1}^{(k)}, \alpha_{i1}^{(k)}), \ldots, (g_{in}^{(k)}, \alpha_{in}^{(k)})) = \Delta\Bigg(\Bigg[\Big(1 - \sum_{j_1 \in I'} w'_{j_1 k}\Big)\Bigg(\frac{1}{1 - \sum_{j_1 \in I'} w'_{j_1 k}}\sum_{j_1 \notin I'} w'_{j_1 k}(\beta_{i j_1}^{(k)})^p\Big(\frac{1}{\sum_{j_2 \in I_{j_1}} w'_{j_2 k}}\sum_{j_2 \in I_{j_1}} w'_{j_2 k}(\beta_{i j_2}^{(k)})^q\Big)\Bigg)^{\frac{p}{p+q}} + \Big(\sum_{j_1 \in I'} w'_{j_1 k}\Big)\Bigg(\frac{1}{\sum_{j_1 \in I'} w'_{j_1 k}}\sum_{j_1 \in I'} w'_{j_1 k}(\beta_{i j_1}^{(k)})^p\Bigg)\Bigg]^{\frac{1}{p}}\Bigg) \qquad (27)
\]

by using (31) as follows:

\[
(r_i, \alpha_i) = W2TLAM((r_i^{(1)}, \alpha_i^{(1)}), (r_i^{(2)}, \alpha_i^{(2)}), \ldots, (r_i^{(t)}, \alpha_i^{(t)})) = \Delta\Big(\sum_{k=1}^{t} \lambda_k \beta_{ik}\Big) \qquad (31)
\]

where β_{ik} = ∆_S^{−1}(r_i^{(k)}, α_i^{(k)}) and λ = (λ1, λ2, ..., λt)^T is the weight vector of the experts.

Step 6: Rank the alternatives Xi (i = 1, 2, ..., m) based on their group overall rating values (ri, αi) (i = 1, 2, ..., m) by using the comparison method of linguistic 2-tuples described in Section 2, and choose the best alternative according to the ranking order.
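Steps 2–4 (similarity matrix, average similarity, expert weights) are straightforward to prototype. The sketch below is our own Python illustration working on the numerical values β of the experts' overall ratings; it assumes a 7-label scale (h = 6) and mirrors Eqs. (28)–(30). Feeding it the rounded overall ratings that appear later in Table VII reproduces, up to rounding, the weights reported in the example of Section VII.

```python
H = 6  # largest label index of the scales S and S'

def expert_weights(overall):
    """overall[k][i] is the numerical value beta of expert k's overall rating of alternative i.
    Returns the expert weights lambda_k of Eq. (30)."""
    t, m = len(overall), len(overall[0])
    # Eq. (28): pairwise linguistic similarity, kept as numerical values on S'
    sm = [[H - sum(abs(overall[k1][i] - overall[k2][i]) for i in range(m)) / m
           for k2 in range(t)] for k1 in range(t)]
    # Eq. (29): average similarity of each expert with the other experts
    avg = [sum(sm[k][k1] for k1 in range(t) if k1 != k) / (t - 1) for k in range(t)]
    # Eq. (30): normalize the averages into expert weights
    return [a / sum(avg) for a in avg]

# overall ratings of 4 alternatives by 3 experts, taken from Table VII (rounded)
overall = [
    [3.18, 2.70, 3.59, 3.55],   # expert J1
    [3.11, 3.15, 3.76, 3.94],   # expert J2
    [3.20, 2.98, 3.77, 3.60],   # expert J3
]
print([round(w, 4) for w in expert_weights(overall)])
# approximately [0.3324, 0.3318, 0.3358], the weights obtained in Step 4 of Section VII
```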
We next present a real-life example to illustrate the proposed MADM technique.
A1
A4
A2
A6
A3
A7
A7
A4
A1
A5
A6
A6
A7
A2
A7
A4
A3
A5
Fig. 4. Interrelationships among attributes where Ai − Aj represents they are inter-related TABLE III 2- TUPLE LINGUISTIC DECISION MATRIX D1
A1
A2
A3
A4
A5
A6
A7
X1 (l3 , 0) (l4 , 0) (l2 , 0) (l5 , 0) (l3 , 0) (l2 , 0) (l3 , 0) X2 (l1 , 0) (l3 , 0) (l4 , 0) (l4 , 0) (l2 , 0) (l3 , 0) (l2 , 0)
VII. A PRACTICAL EXAMPLE
X3 (l4 , 0) (l3 , 0) (l5 , 0) (l3 , 0) (l5 , 0) (l4 , 0) (l3 , 0)
A U.S. based major bicycle manufacturer company is planning to expand their business in the Asian market due to high demand of their products in this region and possibility of further growth of the company. Currently, the company is facing difficulty to meet the overseas demand as company has only one production unit in U.S. Owing to the lower production cost in Asia and the potential of high growth, the management of company has decided to open a new production unit in Asia. After initial screening, the management of company has found four possible potential locations in four different Asian countries to set up their production unit. In order to identify the best suitable location, the management of company has formed a committee, which consists of three experts, J1 , J2 and J3 . Analyzing all possible factors, the management of company has found a set of seven attributes to evaluate the alternatives, i.e., the four locations X1 , X2 , X3 and X4 . The attributes are as follows: A1 : Market A2 : Business climate A3 : Labor Characteristic A4 : Infrastructure A5 : Availability of raw materials A6 : Investment cost A7 : Possibility for further expansion The attributes are inter-related and interrelationship among them is presented in Fig. 4. From Fig. 4, we observe that all the attributes are dependent, i.e., there is no independent attributes. We also note that each attribute is dependent on only a subset of the attribute set. In this scenario, LW-2TLEBM (26) may be more intuitively handle the interrelationship pattern among the attributes and, therefore, to compute the alternatives’ overall performances, we will utilize (26) further.
As the information is highly uncertain, the experts are unable to express their preferences as numerical values. They decide to provide their preferences as 2-tuple linguistic information over the following linguistic term set:
S = {l0 = very low (VL), l1 = low (L), l2 = moderately low (ML), l3 = normal (N), l4 = moderately high (MH), l5 = high (H), l6 = very high (VH)}
The linguistic assessments of the four locations with respect to all the attributes, given by the experts, are presented in Tables III-V. The experts also use linguistic variables from the same term set S to assess the relative importance of the attributes; the attribute weight vectors provided by the experts are summarized in Table VI. Now, the proposed MAGDM method can be applied for the selection of the best location.
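For illustration (ours, not the authors'), the linguistic term set S and expert J1's decision matrix D1 from Table III can be encoded directly as 2-tuples:

```python
# Seven-term linguistic scale S = {l0, ..., l6}; a 2-tuple is (term index, symbolic translation).
S = ["VL", "L", "ML", "N", "MH", "H", "VH"]

# Expert J1's decision matrix D1 (Table III): rows X1..X4, columns A1..A7, all with alpha = 0.
D1 = [
    [(3, 0), (4, 0), (2, 0), (5, 0), (3, 0), (2, 0), (3, 0)],   # X1
    [(1, 0), (3, 0), (4, 0), (4, 0), (2, 0), (3, 0), (2, 0)],   # X2
    [(4, 0), (3, 0), (5, 0), (3, 0), (5, 0), (4, 0), (3, 0)],   # X3
    [(5, 0), (4, 0), (3, 0), (3, 0), (4, 0), (5, 0), (2, 0)],   # X4
]
```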
TABLE IV
2-TUPLE LINGUISTIC DECISION MATRIX D2

        A1       A2       A3       A4       A5       A6       A7
X1   (l2, 0)  (l5, 0)  (l3, 0)  (l4, 0)  (l5, 0)  (l3, 0)  (l2, 0)
X2   (l2, 0)  (l4, 0)  (l5, 0)  (l2, 0)  (l1, 0)  (l2, 0)  (l5, 0)
X3   (l1, 0)  (l4, 0)  (l6, 0)  (l4, 0)  (l4, 0)  (l3, 0)  (l4, 0)
X4   (l4, 0)  (l3, 0)  (l3, 0)  (l4, 0)  (l5, 0)  (l6, 0)  (l3, 0)
TABLE V
2-TUPLE LINGUISTIC DECISION MATRIX D3

        A1       A2       A3       A4       A5       A6       A7
X1   (l4, 0)  (l3, 0)  (l2, 0)  (l5, 0)  (l4, 0)  (l2, 0)  (l3, 0)
X2   (l3, 0)  (l5, 0)  (l4, 0)  (l3, 0)  (l2, 0)  (l4, 0)  (l1, 0)
X3   (l4, 0)  (l5, 0)  (l5, 0)  (l2, 0)  (l4, 0)  (l5, 0)  (l3, 0)
X4   (l4, 0)  (l3, 0)  (l5, 0)  (l3, 0)  (l3, 0)  (l5, 0)  (l3, 0)

TABLE VI
2-TUPLE LINGUISTIC WEIGHTS OF THE ATTRIBUTES

        A1          A2          A3       A4          A5          A6         A7
W1   (l4, 0)     (l3, 0.3)   (l4, 0)  (l5, 0)     (l3, −0.5)  (l4, 0)    (l2, 0)
W2   (l4, −0.5)  (l3, −0.5)  (l4, 0)  (l4, −0.3)  (l5, 0)     (l4, 0.4)  (l4, 0)
W3   (l3, 0)     (l4, 0)     (l5, 0)  (l4, 0.3)   (l5, 0)     (l6, 0)    (l3, 0)
TABLE VII
ALTERNATIVES' INDIVIDUAL OVERALL RATINGS

        J1            J2            J3
X1   (l3, 0.18)    (l3, 0.11)    (l3, 0.20)
X2   (l3, −0.30)   (l3, 0.15)    (l3, −0.02)
X3   (l4, −0.41)   (l4, −0.24)   (l4, −0.23)
X4   (l4, −0.45)   (l4, −0.06)   (l4, −0.4)
Step 1: From each expert's decision matrix Dk = (d_ij^(k))_{4×7} (k = 1, 2, 3) (provided in Tables III-V), by utilizing (26) with p = q = 1 (for the sake of computational simplicity) and the linguistic weight vector Wk (k = 1, 2, 3) (provided in Table VI), we compute the overall ratings of the alternatives Xi (i = 1, 2, 3, 4). The aggregation results are summarized in Table VII.
Step 2: We construct the similarity matrix S of the experts by utilizing (28). The similarity matrix is presented in Table VIII.
Step 3: The average linguistic similarity of each expert is computed by using (29) as given below:
sm1 = (s'6, −0.21), sm2 = (s'6, −0.22), sm3 = (s'6, −0.14).
TABLE VIII
SIMILARITY MATRIX OF THE EXPERTS

        J1             J2             J3
J1   (s'6, 0)       (s'6, −0.28)   (s'6, −0.14)
J2   (s'6, −0.28)   (s'6, 0)       (s'6, −0.16)
J3   (s'6, −0.14)   (s'6, −0.16)   (s'6, 0)
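As a numerical cross-check (ours, not part of the paper), the expert weights obtained in Step 4 below and the group overall ratings of Step 5 can be approximately reproduced from the average similarities sm_k and Table VII, assuming that (30) simply normalizes the numeric equivalents of the sm_k; small deviations from the published figures arise because the tabulated intermediate values are rounded.

```python
# Cross-check of Steps 4-5 from the rounded values reported in the text and Table VII.
sm = [6 - 0.21, 6 - 0.22, 6 - 0.14]      # numeric equivalents of sm1, sm2, sm3
lam = [s / sum(sm) for s in sm]          # assumed normalization in (30)
print(lam)                               # ~[0.3322, 0.3316, 0.3362]; published: (0.3324, 0.3318, 0.3358)

x1 = [3 + 0.18, 3 + 0.11, 3 + 0.20]      # J1, J2, J3 ratings of X1 from Table VII
beta_x1 = sum(l * b for l, b in zip(lam, x1))
print(round(beta_x1, 2))                 # ~3.16, i.e. (l3, 0.16); published: (l3, 0.17)
```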
Step 4: The experts' weight vector is derived by using (30) as follows:
λ^T = (0.3324, 0.3318, 0.3358).
Step 5: The group overall ratings of the alternatives are computed by using the weight vector of the experts (obtained in Step 4) and (31) as follows:
(r1, α1) = (l3, 0.17), (r2, α2) = (l3, −0.06), (r3, α3) = (l4, −0.29), (r4, α4) = (l4, −0.30).
Step 6: According to the group overall rating values, the ranking order of the alternatives is X3 > X4 > X1 > X2. Hence, the most suitable location for setting up the new production unit is X3.

Some important observations on the results of the above problem, depending on the values of the parameters p and q, are presented below. In the above computation we have taken p = q = 1. If, instead, we take p = 1 and q = 3, then we obtain the group overall ratings (r1, α1) = (l3, 0.13), (r2, α2) = (l3, 0.14), (r3, α3) = (l4, −0.19), (r4, α4) = (l4, −0.23). This evaluation produces a new ranking order of the alternatives, X4 > X3 > X2 > X1, so that X4 becomes the most desirable alternative. This result differs slightly from the ranking obtained with p = q = 1: the ranking orders within the pairs (X3, X4) and (X1, X2) are reversed. Hence, the ranking result may differ for different values of p and q. In general, p and q can take any value between zero and infinity. For the above example, the changes in the alternatives' group overall ratings with respect to the values of p and q are depicted in Fig. 5.

From the above example and figures, we observe that the values obtained by LW-2TLEBM depend on the choice of the parameters p and q; it is also clear that these parameters are not robust [33]. Larger values of p and q require more computational effort, whereas, in the special case in which one of the parameters becomes zero, no interrelationship among the aggregated arguments is captured. Therefore, for practical applications, we suggest taking p = q = 1, which is not only intuitive and simple but also reflects the interrelationship among the aggregated arguments.

A. Comparison of Performance with Other Existing Aggregation Operators

To further illustrate the applicability of the proposed operator, we solve the above location selection problem by using five existing linguistic aggregation operators: the weighted 2-tuple linguistic arithmetic mean (W2TLAM) operator, the weighted 2-tuple linguistic geometric mean (W2TLGM) operator, the weighted 2-tuple linguistic harmonic mean (W2TLHM) operator, the 2-tuple linguistic weighted power arithmetic mean (2TLWPAM) and the weighted 2-tuple linguistic Bonferroni mean (W2TLBM).
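For reference, sketches of two of these comparison operators in their commonly used weighted 2-tuple forms are given below (the exact definitions used in the paper follow its cited references); note that these operators combine the attributes independently, which is why they cannot reflect the interrelationship pattern of Fig. 4.

```python
from math import prod

def delta_inv(g, a):        # 2-tuple -> numeric equivalent
    return g + a

def delta(beta):            # numeric equivalent -> 2-tuple
    i = round(beta)
    return i, beta - i

def w2tlgm(ratings, w):     # weighted 2-tuple linguistic geometric mean (commonly used form)
    return delta(prod(delta_inv(*r) ** wi for wi, r in zip(w, ratings)))

def w2tlhm(ratings, w):     # weighted 2-tuple linguistic harmonic mean (ratings above l0)
    return delta(1.0 / sum(wi / delta_inv(*r) for wi, r in zip(w, ratings)))
```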
Fig. 5. (a), (b), (c), and (d) represent the changes in the group overall ratings of the alternatives X1, X2, X3 and X4, respectively, when the parameters p and q of (26) take values from the interval (0, 20].

TABLE IX
GROUP OVERALL RATINGS OF THE ALTERNATIVES OBTAINED BY USING DIFFERENT AGGREGATION OPERATORS

Alternatives   W2TLAM       W2TLGM       W2TLHM       2TLWPAM      W2TLBM
X1             (l3, 0.27)   (l3, 0.08)   (l3, −0.1)   (l3, 0.06)   (l3, 0.23)
X2             (l3, 0)      (l3, −0.32)  (l2, 0.33)   (l3, −0.27)  (l3, −0.06)
X3             (l4, −0.1)   (l4, −0.31)  (l3, 0.43)   (l4, −0.31)  (l4, −0.13)
X4             (l4, −0.08)  (l4, −0.21)  (l4, −0.34)  (l4, −0.26)  (l4, −0.12)
TABLE X
RANKING ORDER OF THE ALTERNATIVES IN DIFFERENT CASES

Aggregation operator   Ranking order of the alternatives
W2TLAM                 X4 > X3 > X1 > X2
W2TLGM                 X4 > X3 > X1 > X2
W2TLHM                 X4 > X3 > X1 > X2
W2TLBM                 X4 > X3 > X1 > X2
2TLWPAM                X4 > X3 > X1 > X2
W2TLEBM                X3 > X4 > X1 > X2
In order to compare the performance of W2TLEBM with the aforementioned aggregation operators, we compute the alternatives' overall performances from each expert's decision matrix Dk by using these aggregation operators and then, following the same steps as in the proposed decision making process, calculate the alternatives' group overall performances. The results are summarized in Table IX, and the corresponding ranking orders of the alternatives are presented in Table X. It is clear from Tables IX and X that the ranking orders of the alternatives obtained by these existing operators are significantly different from the ranking result obtained by using the W2TLEBM operator. The W2TLEBM operator identifies X3 as the most desirable location for setting up the production plant, whereas the other aggregation operators choose X4 as the best location. The main reason behind this difference is that the W2TLEBM operator can model the exact interrelationships among the attributes, while the other aggregation techniques offer no means of modelling this kind of interrelationship among the aggregated arguments.
VIII. CONCLUSION

In this paper, we have proposed an aggregation operator which we refer to as the EBM. The proposed operator handles interrelationship patterns among the attributes more intuitively: semantically, EBM can model heterogeneous connections among the attributes, whereas BM captures only homogeneous relationships. Furthermore, we have shown that, for a given input set, the aggregated value computed by EBM varies with the interrelationship pattern of the input arguments. We have discussed a variety of special cases of the EBM operator. Moreover, the EBM operator satisfies the properties of a mean-type aggregation operator, such as idempotency, monotonicity and boundedness. Further, to deal with linguistic information, we have extended it to the 2-tuple linguistic environment and proposed the 2-tuple linguistic extended Bonferroni mean (2TLEBM) operator. The desirable properties of 2TLEBM have been studied in detail. By taking different values of the parameters p and q, we have shown that the 2-tuple linguistic arithmetic mean and geometric mean are special cases of 2TLEBM. Based on the weighted form of 2TLEBM and a linguistic similarity measure, a technique for solving MAGDM problems has been developed. Finally, with the help of a site selection problem, we have shown that W2TLEBM is capable of capturing the specific interrelationships among the attributes, while other existing linguistic aggregation operators, including BM, fail to reflect the exact interrelationships among the attributes.

The main advantages of the proposed MAGDM technique can be summarized as follows: (1) by taking the conjunction of the satisfactions of only the inter-related attributes, the proposed linguistic EBM operator not only models the heterogeneous connections among the attributes but also avoids the effect of conjunctions of unrelated attributes during the aggregation of the alternatives' performances under different attributes; (2) by interpreting the similarity between 2-tuple linguistic values using linguistic terms, the proposed similarity measure is more intuitive to the experts than a numerical representation; and (3) linguistic weights of the attributes allow the experts to express the uncertainty in the weight information more comfortably. In future research, other types of interrelationships among the attributes, and their reflection in the aggregation process, need to be explored by introducing new types of aggregation operators.
APPENDIX A
DEDUCTION OF THE 2-TUPLE LINGUISTIC POWER ARITHMETIC MEAN FROM 2TLEBM

From (15), we have
\[
\begin{aligned}
2TLEBM^{p,0}\big((g_1,\alpha_1),(g_2,\alpha_2),\ldots,(g_n,\alpha_n)\big)
&=\Delta\Bigg(\bigg(\frac{n-\mathrm{card}(I')}{n}\,\frac{1}{n-\mathrm{card}(I')}\sum_{i\notin I'}\beta_i^{p}
+\frac{\mathrm{card}(I')}{n}\,\frac{1}{\mathrm{card}(I')}\sum_{i\in I'}\beta_i^{p}\Big(\frac{1}{\mathrm{card}(I_i)}\sum_{j\in I_i}\beta_j^{0}\Big)^{\frac{p}{p+0}}\bigg)^{\frac{1}{p}}\Bigg)\\
&=\Delta\Bigg(\bigg(\frac{1}{n}\sum_{i\notin I'}\beta_i^{p}+\frac{1}{n}\sum_{i\in I'}\beta_i^{p}\bigg)^{\frac{1}{p}}\Bigg)
=\Delta\Bigg(\bigg(\frac{1}{n}\sum_{i=1}^{n}\beta_i^{p}\bigg)^{\frac{1}{p}}\Bigg).
\end{aligned}
\]
APPENDIX B
DEDUCTION OF 2TLEBM FROM W2TLEBM

From (23), we have
\[
\begin{aligned}
&W2TLEBM^{p,q}\big((g_1,\alpha_1),(g_2,\alpha_2),\ldots,(g_n,\alpha_n)\big)\\
&=\Delta\Bigg(\bigg(\Big(1-\sum_{i\in I'}\tfrac{1}{n}\Big)\frac{1}{1-\sum_{i\in I'}\tfrac{1}{n}}\sum_{i\notin I'}\tfrac{1}{n}\beta_i^{p}
+\sum_{i\in I'}\tfrac{1}{n}\beta_i^{p}\Big(\frac{1}{\sum_{j\in I_i}\tfrac{1}{n}}\sum_{j\in I_i}\tfrac{1}{n}\beta_j^{q}\Big)^{\frac{p}{p+q}}\bigg)^{\frac{1}{p}}\Bigg)\\
&=\Delta\Bigg(\bigg(\Big(1-\tfrac{\mathrm{card}(I')}{n}\Big)\frac{1}{1-\tfrac{\mathrm{card}(I')}{n}}\sum_{i\notin I'}\tfrac{1}{n}\beta_i^{p}
+\sum_{i\in I'}\tfrac{1}{n}\beta_i^{p}\Big(\frac{1}{\tfrac{\mathrm{card}(I_i)}{n}}\sum_{j\in I_i}\tfrac{1}{n}\beta_j^{q}\Big)^{\frac{p}{p+q}}\bigg)^{\frac{1}{p}}\Bigg)\\
&=\Delta\Bigg(\bigg(\frac{n-\mathrm{card}(I')}{n}\,\frac{1}{n-\mathrm{card}(I')}\sum_{i\notin I'}\beta_i^{p}
+\frac{\mathrm{card}(I')}{n}\,\frac{1}{\mathrm{card}(I')}\sum_{i\in I'}\beta_i^{p}\Big(\frac{1}{\mathrm{card}(I_i)}\sum_{j\in I_i}\beta_j^{q}\Big)^{\frac{p}{p+q}}\bigg)^{\frac{1}{p}}\Bigg).
\end{aligned}
\]
APPENDIX C
DEDUCTION OF THE WEIGHTED 2-TUPLE LINGUISTIC ARITHMETIC MEAN FROM W2TLEBM

From (23), we obtain
\[
\begin{aligned}
W2TLEBM^{1,0}\big((g_1,\alpha_1),(g_2,\alpha_2),\ldots,(g_n,\alpha_n)\big)
&=\Delta\Bigg(\Big(1-\sum_{i\in I'}w_i\Big)\frac{1}{1-\sum_{i\in I'}w_i}\sum_{i\notin I'}w_i\beta_i
+\sum_{i\in I'}w_i\beta_i\Big(\frac{1}{\sum_{j\in I_i}w_j}\sum_{j\in I_i}w_j\beta_j^{0}\Big)^{\frac{1}{1+0}}\Bigg)\\
&=\Delta\bigg(\sum_{i\notin I'}w_i\beta_i+\sum_{i\in I'}w_i\beta_i\bigg)
=\Delta\bigg(\sum_{i=1}^{n}w_i\beta_i\bigg).
\end{aligned}
\]
ACKNOWLEDGMENT

We are very grateful to the Editors and the anonymous reviewers for their insightful and constructive comments and suggestions for the improvement of the manuscript.

REFERENCES

[1] G. Bordogna, M. Fedrizzi, and G. Pasi, "A linguistic modeling of consensus in group decision making based on OWA operators," IEEE Trans. Syst. Man, Cybern. - Part A: Syst. Humans, vol. 27, no. 1, pp. 126-133, 1997.
[2] F. Herrera and L. Martínez, "An approach for combining linguistic and numerical information based on the 2-tuple fuzzy linguistic representation model in decision-making," Int. J. Uncertainty, Fuzziness Knowledge-Based Syst., vol. 8, no. 5, pp. 539-562, 2000.
[3] F. Herrera and L. Martínez, "A model based on linguistic 2-tuples for dealing with multigranular hierarchical linguistic contexts in multiexpert decision-making," IEEE Trans. Syst. Man. Cybern. B, Cybern., vol. 31, no. 2, pp. 227-234, 2001.
[4] F. Herrera, L. Martínez, and P. Sánchez, "Managing non-homogeneous information in group decision making," Eur. J. Oper. Res., vol. 166, no. 1, pp. 115-132, 2005.
[5] G. Wei, "A method for multiple attribute group decision making based on the ET-WG and ET-OWG operators with 2-tuple linguistic information," Expert Syst. Appl., vol. 37, no. 12, pp. 7895-7900, 2010.
[6] G. Wei, "Some generalized aggregating operators with linguistic information and their application to multiple attribute group decision making," Comput. Ind. Eng., vol. 61, no. 1, pp. 32-38, 2011.
[7] G. Wei and X. Zhao, "Some dependent aggregation operators with 2-tuple linguistic information and their application to multiple attribute group decision making," Expert Syst. Appl., vol. 39, no. 5, pp. 5881-5886, 2012.
[8] S.-P. Wan, "2-tuple linguistic hybrid arithmetic aggregation operators and application to multi-attribute group decision making," Knowledge-Based Syst., vol. 45, pp. 31-40, 2013.
[9] J. H. Park, J. M. Park, and Y. C. Kwun, "2-tuple linguistic harmonic operators and their applications in group decision making," Knowledge-Based Syst., vol. 44, pp. 10-19, 2013.
[10] W. Yang and Z. Chen, "New aggregation operators based on the Choquet integral and 2-tuple linguistic information," Expert Syst. Appl., vol. 39, no. 3, pp. 2662-2668, 2012.
[11] Y. Xu and H. Wang, "Approaches based on 2-tuple linguistic power aggregation operators for multiple attribute group decision making under linguistic environment," Appl. Soft Comput., vol. 11, no. 5, pp. 3988-3997, 2011.
[12] G. Wei, "Extension of TOPSIS method for 2-tuple linguistic multiple attribute group decision making with incomplete weight information," Knowl. Inf. Syst., vol. 25, no. 3, pp. 623-634, 2010.
[13] G. Wei, "Grey relational analysis method for 2-tuple linguistic multiple attribute group decision making with incomplete weight information," Expert Syst. Appl., vol. 38, no. 5, pp. 4824-4828, 2011.
[14] M. Köksalan and C. Ulu, "An interactive approach for placing alternatives in preference classes," Eur. J. Oper. Res., vol. 144, no. 2, pp. 429-439, 2003.
[15] R. Degani and G. Bortolan, "The problem of linguistic approximation in clinical decision making," Int. J. Approx. Reason., vol. 2, no. 2, pp. 143-162, 1988.
[16] M. Delgado, J. L. Verdegay, and M. A. Vila, "On aggregation operations of linguistic labels," Int. J. Intell. Syst., vol. 8, no. 3, pp. 351-370, 1993.
[17] F. Herrera and L. Martínez, "A 2-tuple fuzzy linguistic representation model for computing with words," IEEE Trans. Fuzzy Syst., vol. 8, no. 6, pp. 746-752, 2000.
[18] J.-H. Wang and J. Hao, "A new version of 2-tuple fuzzy linguistic representation model for computing with words," IEEE Trans. Fuzzy Syst., vol. 14, no. 3, pp. 435-445, 2006.
[19] Y. Dong, G. Zhang, W.-C. Hong, and S. Yu, "Linguistic computational model based on 2-tuples and intervals," IEEE Trans. Fuzzy Syst., vol. 21, no. 6, pp. 1006-1018, 2013.
[20] R. M. Rodríguez and L. Martínez, "An analysis of symbolic linguistic computing models in decision making," Int. J. Gen. Syst., vol. 42, no. 1, pp. 121-136, 2012.
[21] L. Martínez and F. Herrera, "An overview on the 2-tuple linguistic model for computing with words in decision making: Extensions, applications and challenges," Inform. Sci., vol. 207, no. 1, pp. 1-18, 2012.
[22] Y. Jiang and J. Fan, "Property analysis of the aggregation operators for 2-tuple linguistic information," Control and Decision, vol. 18, pp. 754-757, 2003.
[23] J. M. Merigó and A. M. Gil-Lafuente, "Induced 2-tuple linguistic generalized aggregation operators and their application in decision-making," Inform. Sci., vol. 236, pp. 1-16, 2013.
[24] B. Zhu, Z. Xu, and M. Xia, "Hesitant fuzzy geometric Bonferroni means," Inform. Sci., vol. 205, pp. 72-85, 2012.
[25] C. Bonferroni, "Sulle medie multiple di potenze," Bolletino Matematica Italiana, vol. 5, pp. 267-270, 1950.
[26] R. R. Yager, "On generalized Bonferroni mean operators for multi-criteria aggregation," Int. J. Approx. Reason., vol. 50, no. 8, pp. 1279-1286, 2009.
[27] G. Beliakov, S. James, J. Mordelová, T. Rückschlossová, and R. R. Yager, "Generalized Bonferroni mean operators in multi-criteria aggregation," Fuzzy Sets Syst., vol. 161, no. 17, pp. 2227-2242, 2010.
[28] M. Xia, Z. Xu, and B. Zhu, "Generalized intuitionistic fuzzy Bonferroni means," Int. J. Intell. Syst., vol. 27, no. 1, pp. 23-47, 2012.
[29] W. Zhou and J.-M. He, "Intuitionistic fuzzy normalized weighted Bonferroni mean and its application in multicriteria decision making," J. Appl. Math., vol. 2012, Article ID 136254.
[30] G. Beliakov, S. James, and R. Mesiar, "A generalization of the Bonferroni mean based on partitions," in Proc. IEEE Int. Conf. on Fuzzy Syst. (FUZZ-IEEE), 2013, article no. 6622348.
[31] G. Choquet, "Theory of capacities," Annales de l'institut Fourier, vol. 5, pp. 131-295, 1954.
[32] M. Xia, Z. Xu, and B. Zhu, "Geometric Bonferroni means with their application in multi-criteria decision making," Knowledge-Based Syst., vol. 40, pp. 88-100, 2013.
[33] Z. Xu and R. R. Yager, "Intuitionistic fuzzy Bonferroni means," IEEE Trans. Syst. Man. Cybern. B, Cybern., vol. 41, no. 2, pp. 568-578, 2011.
[34] B. Dutta and D. Guha, "Trapezoidal intuitionistic fuzzy Bonferroni means and its application in multi-attribute decision making," in Proc. IEEE Int. Conf. on Fuzzy Syst. (FUZZ-IEEE), 2013, article no. 6622367.
[35] O. Cordón, F. Herrera, and I. Zwir, "Linguistic modeling by hierarchical systems of linguistic rules," IEEE Trans. Fuzzy Syst., vol. 10, pp. 2-20, 2002.
[36] G. A. Miller, "The magical number seven, plus or minus two: Some limits on our capacity of processing information," Psychol. Rev., vol. 63, pp. 81-97, 1956.
[37] D. Guha and D. Chakraborty, "A new approach to fuzzy distance measure and similarity measure between two generalized fuzzy numbers," Appl. Soft Comput., vol. 10, no. 1, pp. 90-99, 2010.
[38] H. Bustince, E. Barrenechea, and M. Pagola, "Restricted equivalence functions," Fuzzy Sets Syst., vol. 157, no. 17, pp. 2333-2346, 2006.
[39] J. Špirková, "Weighted operators based on dissimilarity function," Inform. Sci., vol. 281, pp. 172-181, 2014.
[40] P. Bonacich and T. M. Liggett, "Asymptotics of a matrix valued Markov chain arising in sociology," Stoch. Proc. Appl., vol. 104, pp. 155-171, 2003.
[41] C. H. Chu, K. C. Hung, and P. Julian, "A complete pattern recognition approach under Atanassov's intuitionistic fuzzy sets," Knowledge-Based Syst., vol. 66, pp. 36-45, 2014.
Bapi Dutta received his M.Sc. degree in industrial mathematics and informatics from the Indian Institute of Technology Roorkee, India, in 2010, and his B.Sc. degree in mathematics from the University of Kalyani, Kalyani, India, in 2008. He is currently a Ph.D. candidate in the Department of Mathematics, Indian Institute of Technology Patna, India. His current research interests include aggregation operators, multi-attribute decision making, fuzzy optimization, and type-2 fuzzy logic.
Debashree Guha received her B.Sc. and M.Sc. degrees in mathematics from Jadavpur University, Calcutta, India, in 2003 and 2005, respectively, and her Ph.D. degree in mathematics from the Indian Institute of Technology Kharagpur, India, in 2011. Presently, she is an assistant professor in the Department of Mathematics, Indian Institute of Technology Patna, India. Her current research interests include multi-attribute decision making, fuzzy mathematical programming, aggregation operators, and fuzzy logic. Her research results have been published in Computers & Industrial Engineering and Applied Soft Computing, among others.
Radko Mesiar received the Ph.D. degree from Comenius University, Bratislava, Slovakia, in 1979, and the D.Sc. degree from the Czech Academy of Sciences, Prague, in 1996. He is a Professor of mathematics at the Slovak University of Technology, Bratislava. His major research interests are in the areas of uncertainty modeling, fuzzy logic, several types of aggregation techniques, non-additive measures, and integral theory. He is co-author of a monograph on triangular norms and author or co-author of more than 200 journal papers and chapters in edited volumes. He is an Associate Editor of four international journals and a member of the European Association for Fuzzy Logic and Technology. He has been a Fellow Researcher with UTIA AV CR, Prague, since 1995, and with IRAFM, Ostrava, since 2005.