Advanced Computing with Words Using Syllogistic Reasoning and Arithmetic Operations on Linguistic Belief Structures

Mohammad Reza Rajati
Signal and Image Processing Institute, Ming Hsieh Department of Electrical Engineering, and Department of Mathematics
University of Southern California, Los Angeles, CA 90089
Email: [email protected]

Jerry M. Mendel
Signal and Image Processing Institute, Ming Hsieh Department of Electrical Engineering
University of Southern California, Los Angeles, CA 90089-2560
Email: [email protected]
Abstract—In this paper, we present solutions to an Advanced Computing with Words problem that is equivalent to one of Zadeh's challenge problems on linguistic probabilities. We use a syllogism based on the entailment principle to interpret the problem so that it yields two linguistic belief structures. Then we perform an addition of those linguistic belief structures to obtain a belief structure on the variable about which linguistic probabilities have to be inferred. We show that pessimistic (lower) and optimistic (upper) probabilities can be inferred from such a belief structure using Linguistic Weighted Averages and pessimistic and optimistic compatibility measures. Then, we choose vocabularies for linguistic attributes (lifetimes of products) and linguistic probabilities that are involved in the problem statement. The vocabularies are modeled using interval type-2 fuzzy sets. We calculate optimistic (upper) and pessimistic (lower) probabilities, and map them into words present in the vocabulary of linguistic probabilities, so that the results can be comprehended by a human.

Index Terms—advanced computing with words, belief structures, compatibility measures, interval type-2 fuzzy sets, linguistic weighted average, lower and upper probabilities, operations on belief structures
I. INTRODUCTION

Advanced Computing with Words (ACWW) [1] is a methodology of computation in which the carriers of information can be numbers, intervals, and words. In such a methodology, assignment of attributes to variables may be implicit, and one generally deals with linguistic truth, probability, and possibility. Modeling of words in natural languages plays a pivotal role in ACWW. Mendel argues that, since words mean different things to different people, a first-order uncertainty model of a word should be an interval type-2 fuzzy set (IT2 FS) [2]. Moreover, Zadeh anticipates that type-2 fuzzy sets will play a more important role in ACWW in the future [1]. Therefore, it is plausible to implement reasoning schemes for ACWW using type-2 fuzzy sets.

Zadeh has introduced a set of challenge ACWW problems in his recent works [3], in which linguistic usuality and probability values are involved. We have already proposed methodologies for solving some of those problems employing Linguistic Weighted Averages (LWAs) and IT2 FSs [4]–[7]. In this paper, we focus on an engineering problem that is similar to one of Zadeh's challenge problems, the Robert's Problem (RP), which he has formulated as:

Usually Robert leaves his office at about 5 p.m. Usually, it takes Robert about an hour to get home from work. What is the probability that Robert is home before 6:15 p.m.?

Robert's problem involves a linguistic usuality word (Usually) and linguistic attributes (about 5 p.m., about an hour, before 6:15 p.m.). Note that the assignments of "Usually" to the probability^1 that "Robert leaves his office at about 5 p.m." and to the probability that "it takes him about an hour to get home from work" are implicit. It is therefore categorized as an ACWW problem. We feel that it is very important to demonstrate the applicability of ACWW to engineering kinds of problems; hence, in this paper, we focus on the following analogous problem, which involves engineering judgment about the lifetime of a product, and propose a Novel Weighted Average (NWA) approach for solving it:

Usually Product I lasts for about 5 years and is then replaced by a refurbished one. Usually, the refurbished Product I lasts for about 2 years. What is the probability that a new Product I is not needed until the eighth year?

The rest of this paper is organized as follows: In Section II, Zadeh's methodology of solution to the Product Lifetime Problem (PLP) is investigated. In Section III, our solution to the problem, via compatibility measures and LWAs, is proposed. In Section IV, this solution is computationally implemented. Finally, in Section V, some conclusions are drawn and a framework for future works is offered.

^1 Zadeh treats usuality words as fuzzy probabilities [3].
II. ZADEH'S METHODOLOGY OF SOLUTION TO THE PLP

In this section, we demonstrate how Zadeh solves problems like the PLP, and how his solution methodology can readily be applied to the Robert's problem. The time before a new product is needed, $Z$, the time before replacement with a refurbished one, $X$, and the lifetime of the refurbished one, $Y$, are three random variables that satisfy:

$$Z = X + Y \quad (1)$$

The probability density function of $Z$ is calculated as [8]:

$$p_Z(v) = \int_{-\infty}^{+\infty} p_X(\theta)\, p_Y(v-\theta)\, d\theta \equiv (p_X * p_Y)(v) \quad (2)$$

The probability of the fuzzy event $A \equiv About\ 5\ years$, $P_A$, is:

$$P_A = \int_{a_1}^{b_1} p_X(\theta)\mu_A(\theta)\, d\theta \quad (3)$$

where $a_1$ and $b_1$ are respectively the minimum and maximum possible lifetimes for a new product. The probability of the fuzzy event $B \equiv About\ 2\ years$, $P_B$, is:

$$P_B = \int_{a_2}^{b_2} p_Y(\theta)\mu_B(\theta)\, d\theta \quad (4)$$

where $a_2$ and $b_2$ are respectively the smallest and the largest possible lifetimes for the refurbished product. The probability that the product lasts until $t = 8$ years, $P_C$, is:

$$P_C = \int_{t_0}^{t} p_Z(\theta)\, d\theta \equiv w \quad (5)$$

where $t_0$ is the minimum possible lifetime for the product.

Zadeh solves such problems using the calculi of generalized constraints [9]. To derive the soft constraint imposed on $P_C$ by the fact that $P_A$ is constrained by "Usually" and $P_B$ is also constrained by "Usually", one needs to use the framework of the Generalized Extension Principle (GEP), which is an important tool for propagation of possibilistic constraints, and was originally introduced in [10]. Assume that $f_1(\cdot), \ldots, f_m(\cdot)$, and $g(\cdot)$ are real functions:

$$f_1, \ldots, f_m, g : U_1 \times U_2 \times \cdots \times U_n \longrightarrow V \quad (6)$$

Moreover, assume that:

$$f_j(X_1, X_2, \cdots, X_n)\ \text{is}\ A_j,\ j = 1, \ldots, m$$
$$g(X_1, X_2, \cdots, X_n)\ \text{is}\ B$$

where $A_i$, $(i = 1, 2, \ldots, m)$ and $B$ are T1 FSs. Then the $A_i$'s induce $B$ as follows:

$$\mu_B(v) = \begin{cases} \sup_{\mathbf{u} \mid v = g(\mathbf{u})} \min\left(\mu_{A_1}(f_1(\mathbf{u})), \cdots, \mu_{A_m}(f_m(\mathbf{u}))\right) & \exists\, v = g(\mathbf{u}) \\ 0 & \nexists\, v = g(\mathbf{u}) \end{cases} \quad (7)$$

where $\mathbf{u}$ is the shorthand for $(u_1, u_2, \ldots, u_n) \in U_1 \times U_2 \times \cdots \times U_n$. The GEP basically extends the relation $g\left(\cap_{i=1}^{m} f_i^{-1}(\cdot)\right) : V \rightarrow V$ to T1 FSs, where $f_i^{-1}$ is the pre-image of the function $f_i(\cdot)$.

Given that $P_A$ and $P_B$ are both constrained by "Usually", we want to find the constraint $\Psi$ on $P_C$ by the GEP. In other words:

$$\int_{a_1}^{b_1} p_X(\theta)\mu_A(\theta)\, d\theta\ \text{is}\ Usually$$
$$\int_{a_2}^{b_2} p_Y(\theta)\mu_B(\theta)\, d\theta\ \text{is}\ Usually$$
$$\int_{t_0}^{t} (p_X * p_Y)(\theta)\, d\theta \equiv w\ \text{is}\ \Psi$$

Then:

$$\mu_\Psi(w) = \sup_{\substack{w = \int_{t_0}^{t}(p_X * p_Y)(\theta)\, d\theta \\ \int_{a_1}^{b_1} p_X(\theta)\, d\theta = 1,\ \int_{a_2}^{b_2} p_Y(\theta)\, d\theta = 1}} \min\left(\mu_{Usually}\left(\int_{a_1}^{b_1} p_X(\theta)\mu_A(\theta)\, d\theta\right), \mu_{Usually}\left(\int_{a_2}^{b_2} p_Y(\theta)\mu_B(\theta)\, d\theta\right)\right) \quad (8)$$

Note that the solution to the Robert's problem is similar to (8), when $Z$, $X$, and $Y$ are interpreted as the time of arrival of Robert, the time of departure of Robert, and his travel time, respectively; also $A$ and $B$ must be interpreted as About 5 p.m. and About 1 hour, respectively, and $t$ must be interpreted as "6:15 p.m.". The above mathematical program is very complicated and must be solved over all possible distributions that are pertinent to the problem [11].^2 Our journal version of this paper will include this solution. In what follows, we offer another methodology for solving the problem, one that is also based on using the Extension Principle, but in a different way. Recall that the Extension Principle is a special case of the GEP, in which $X_i\ \text{is}\ A_i$, $i = 1, \ldots, n$ induces $g(X_1, \ldots, X_n) : U_1 \times \cdots \times U_n \rightarrow V\ \text{is}\ B$, where:^3

$$\mu_B(u) = \sup_{u = g(x_1, \cdots, x_n)} \min\left(\mu_{A_1}(x_1), \ldots, \mu_{A_n}(x_n)\right) \quad (9)$$

^2 Additional world knowledge that is needed is the knowledge of $a_1$, $b_1$, $a_2$, $b_2$, $t_0$.
^3 The Extension Principle for extending the function $g(\mathbf{u})$ to T1 FSs is a special case of (7), where $f_i(\mathbf{u}) = u_i$.
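As a concrete illustration of (2)–(5), the following sketch numerically computes the probabilities of the fuzzy events and the convolution-based probability $P_C$ for one particular choice of densities. The Gaussian-shaped densities and triangular membership functions below are purely illustrative assumptions, not models used elsewhere in this paper.

```python
# Illustrative sketch of (2)-(5): probability of a fuzzy event and P_C = Prob{Z <= t}.
# The densities and membership functions below are hypothetical choices for illustration only.
import numpy as np

theta = np.linspace(0, 20, 4001)              # common grid of lifetimes (years)
d = theta[1] - theta[0]

# Hypothetical lifetime densities for X (new product) and Y (refurbished product)
p_X = np.exp(-0.5 * (theta - 5.0) ** 2 / 1.0 ** 2); p_X /= np.trapz(p_X, theta)
p_Y = np.exp(-0.5 * (theta - 2.0) ** 2 / 0.5 ** 2); p_Y /= np.trapz(p_Y, theta)

# Triangular T1 membership functions for the fuzzy events "About 5" and "About 2"
mu_A = np.clip(1 - np.abs(theta - 5.0) / 2.0, 0, 1)
mu_B = np.clip(1 - np.abs(theta - 2.0) / 1.0, 0, 1)

# (3), (4): probabilities of the fuzzy events A and B
P_A = np.trapz(p_X * mu_A, theta)
P_B = np.trapz(p_Y * mu_B, theta)

# (2): p_Z = p_X * p_Y (convolution); the grid spacing scales the discrete sum
p_Z = np.convolve(p_X, p_Y)[: theta.size] * d

# (5): P_C = Prob{Z <= t} for t = 8 years
t = 8.0
P_C = np.trapz(p_Z[theta <= t], theta[theta <= t])
print(P_A, P_B, P_C)
```

This only evaluates (3)–(5) for fixed densities; the constraint-propagation problem (8) additionally requires optimizing over all densities consistent with the problem, which is what makes Zadeh's formulation hard to compute.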
III. OUR SOLUTION TO THE PLP

This section presents our solution to the PLP using LWAs. To begin, we need to translate the problem into a form suitable for LWAs. We argue that "Usually Product I lasts for about 5 years" implies that "Rarely Product I lasts for not about 5 years." Such an intuition can be derived formally from the following syllogism [12] obtained from the entailment principle:

The probability that $A$ is $B$ is $P$
The probability that $A$ is $B'$ is $\neg P$

in which $\neg P$ is the antonym of the fuzzy set $P$, whose membership function is given by:

$$\mu_{\neg P}(u) = \mu_P(1-u),\ u \in [0,1] \quad (10)$$

and $B'$ is the complement of the fuzzy set $B$, characterized by:

$$\mu_{B'}(u) = 1 - \mu_B(u) \quad (11)$$

Using the above syllogism, we have the following linguistic belief structures for the problem:

$$\tilde{B}_1 = \{(About\ 5,\ Usually), (Not\ About\ 5,\ Rarely)\}$$
$$\tilde{B}_2 = \{(About\ 2,\ Usually), (Not\ About\ 2,\ Rarely)\} \quad (12)$$

in which About 5 and Not About 5 are focal elements of $\tilde{B}_1$, About 2 and Not About 2 are focal elements of $\tilde{B}_2$, and Usually and Rarely are fuzzy probability mass assignments for both $\tilde{B}_1$ and $\tilde{B}_2$. Note that $\tilde{B}_1$ represents the knowledge on $X$, the time the product lasts before replacement with a refurbished one, and $\tilde{B}_2$ represents knowledge about $Y$, the lifetime of the refurbished product.

The difference between these belief structures and the belief structures that are studied in the literature [13]–[17] is that the probability mass assignments are words rather than numeric values. Belief structures with fuzzy-valued probability mass assignments were first introduced by Zadeh [18]; however, they have not been in the mainstream of research in the evidential reasoning community. In the past two decades, there has been some research on belief structures with interval-valued probability mass assignments [19]–[28]. As a natural extension, some studies formulate fuzzy-valued probability mass assignments [29], [30].

Assume that one has a belief structure $B_1$ which represents knowledge on the variable $X$. This belief structure has focal elements $\{A_1, A_2, \cdots, A_n\} \subseteq \mathbb{F}_U$, whose probability mass assignments are determined by the function $m_1 : \{A_1, A_2, \cdots, A_n\} \rightarrow [0,1]$. Assume, also, that one has a belief structure $B_2$ which represents knowledge on the variable $Y$. It has focal elements $\{B_1, B_2, \cdots, B_m\} \subseteq \mathbb{F}_U$, whose probability mass assignments are determined by the function $m_2 : \{B_1, B_2, \cdots, B_m\} \rightarrow [0,1]$ (i.e., both mass assignments $m_1$ and $m_2$ are numeric). Note that $\mathbb{F}_U$ represents the set of all Type-1 Fuzzy Sets (T1 FSs) over the universe of discourse $U$. The belief structure that represents the knowledge on $Z = X + Y$, $B_3 = B_1 \boxplus B_2$,^4 has focal elements $\{C_1, C_2, \cdots, C_r\}$ and its probability mass assignments are determined by $m_3 : \{C_1, C_2, \cdots, C_r\} \rightarrow [0,1]$, in which [31]:^5

$$m_3(C_k) = \sum_{A_i \oplus B_j = C_k} m_1(A_i)\, m_2(B_j),\quad k = 1, 2, \ldots, r \quad (13)$$

where $C = A \oplus B$ is the arithmetic sum of the fuzzy sets $A$ and $B$, and is determined as:

$$\mu_C(z) = \sup_{z = x + y} \min(\mu_A(x), \mu_B(y)) \quad (14)$$

^4 Arithmetic operations other than summation on belief structures can also be envisioned, e.g., subtraction $\boxminus$, multiplication $\boxtimes$, and inversion or division $\Box^{-1}$.
^5 This equation is analogous to the calculation of the sum of two discrete random variables: if $m_1$ and $m_2$ are respectively the probability mass functions of $X$ and $Y$, then the probability mass function of $Z = X + Y$ is $m_3(r) = \sum_{r = s + t} m_1(s) m_2(t) = \sum_{s = -\infty}^{+\infty} m_1(s) m_2(r - s)$.

The belief structures of (12) have fuzzy focal elements as well as fuzzy probability mass assignments. Next, we generalize the results stated in (13) and (14) to the case where focal elements and mass assignments are IT2 FSs, since in the solution of the PLP, we deal with Linguistic Belief Structures whose probability mass assignments and focal elements are modeled by IT2 FSs.^6

^6 Note that T1 FSs are special cases of IT2 FSs for which the lower membership function and upper membership function are the same, so this generalization applies to both T1 and IT2 FSs.

Assume that one has a belief structure $\tilde{B}_1$ which represents knowledge on the variable $X$ and has focal elements $\{\tilde{A}_1, \tilde{A}_2, \cdots, \tilde{A}_n\} \subseteq \tilde{\mathbb{F}}_U$, whose probability mass assignments are determined by the function $m_1 : \{\tilde{A}_1, \tilde{A}_2, \ldots, \tilde{A}_n\} \rightarrow \tilde{\mathbb{F}}_{[0,1]}$. Assume also that one has a belief structure $\tilde{B}_2$ which represents knowledge on the variable $Y$ and has focal elements $\{\tilde{B}_1, \tilde{B}_2, \ldots, \tilde{B}_p\} \subseteq \tilde{\mathbb{F}}_U$, whose probability mass assignments are determined by the function $m_2 : \{\tilde{B}_1, \tilde{B}_2, \ldots, \tilde{B}_p\} \rightarrow \tilde{\mathbb{F}}_{[0,1]}$. Note that $\tilde{\mathbb{F}}_U$ represents the set of all IT2 FSs over the universe of discourse $U$. The belief structure that represents the knowledge on $Z = X + Y$, $\tilde{B}_3 = \tilde{B}_1 \boxplus \tilde{B}_2$, has focal elements $\{\tilde{C}_1, \tilde{C}_2, \ldots, \tilde{C}_r\}$ and its probability mass assignments are determined by $m_3 : \{\tilde{C}_1, \tilde{C}_2, \cdots, \tilde{C}_r\} \rightarrow \tilde{\mathbb{F}}_{[0,1]}$. Let us denote $m_1(\tilde{A}_i) = \tilde{M}_{1i}$, $m_2(\tilde{B}_j) = \tilde{N}_{2j}$, and $m_3(\tilde{C}_k) = \tilde{P}_{3k}$. Then, generalization of (13) to IT2 FSs can be done by generalizing the following function to the IT2 probability mass assignments $\tilde{M}_{1i}$, $i = 1, \ldots, n$ and $\tilde{N}_{2j}$, $j = 1, \ldots, p$, by the Extension Principle, which is stated in (9):

$$\varphi_{\mathcal{K}}(x_1, \ldots, x_n, y_1, \ldots, y_p) = \sum_{(i,j) \in \mathcal{K}} x_i y_j, \quad \mathcal{K} = \{(i,j) \mid \tilde{A}_i \oplus \tilde{B}_j = \tilde{C}_k\} \quad (15)$$

where $\tilde{C} = \tilde{A} \oplus \tilde{B}$ is the arithmetic sum of the IT2 FSs $\tilde{A}$ and $\tilde{B}$, and is determined as [32], [33]:

$$\overline{\mu}_{\tilde{C}}(z) = \sup_{z = x + y} \min\left(\overline{\mu}_{\tilde{A}}(x), \overline{\mu}_{\tilde{B}}(y)\right), \quad \underline{\mu}_{\tilde{C}}(z) = \sup_{z = x + y} \min\left(\underline{\mu}_{\tilde{A}}(x), \underline{\mu}_{\tilde{B}}(y)\right) \quad (16)$$

It is a well-known fact that the numeric mass assignments in a belief structure sum up to 1; therefore, extension of the above function to IT2 FSs must be performed subject to the constraint determined by the following relation:

$$D = \left\{(x_1, \ldots, x_n, y_1, \ldots, y_p) \,\Big|\, \sum_{i=1}^{n} x_i = 1,\ \sum_{j=1}^{p} y_j = 1\right\} \quad (17)$$

It was shown in [34] that constraints of the same form as $D$ may make the optimization problem associated with the Extension Principle have no solution, especially when models of the probability words are synthesized by collecting data about words from subjects. To avoid this, one can use the Doubly Normalized Linguistic Weighted Average (DNLWA) operator, which is the extension of the following function to IT2 FSs:

$$\psi_{\mathcal{K}}(x_1, \ldots, x_n, y_1, \ldots, y_p) = \frac{\sum_{(r,s) \in \mathcal{K}} x_r y_s}{\left(\sum_{i=1}^{n} x_i\right)\left(\sum_{j=1}^{p} y_j\right)} \quad (18)$$

Therefore:

$$\tilde{P}_{3k} = \psi_{\mathcal{K}}(\tilde{M}_{11}, \ldots, \tilde{M}_{1n}, \tilde{N}_{21}, \ldots, \tilde{N}_{2p}), \quad \mathcal{K} = \{(i,j) \mid \tilde{A}_i \oplus \tilde{B}_j = \tilde{C}_k\} \quad (19)$$

In such a framework, using (19), we can calculate $\tilde{B}_3 = \tilde{B}_1 \boxplus \tilde{B}_2$ for the belief structures of (12), which represents the knowledge on $Z$, the time before Product I is changed, as:

$$\tilde{B}_3 = \left\{ \left(\tilde{2} \oplus \tilde{5}, \tfrac{\tilde{U}\bullet\tilde{U}}{(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})}\right), \left(\tilde{2}' \oplus \tilde{5}', \tfrac{\tilde{R}\bullet\tilde{R}}{(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})}\right), \left(\tilde{2}' \oplus \tilde{5}, \tfrac{\tilde{R}\bullet\tilde{U}}{(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})}\right), \left(\tilde{2} \oplus \tilde{5}', \tfrac{\tilde{U}\bullet\tilde{R}}{(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})}\right) \right\} \quad (20)$$

where $\tilde{5}$ and $\tilde{2}$ represent About 5 and About 2, $\tilde{5}'$ and $\tilde{2}'$ are their complements, and $\tilde{U}$ and $\tilde{R}$ represent Usually and Rarely. Pairs whose indices appear in $\mathcal{K}$ for determining the mass assignment of each focal element of the combined belief structure $\tilde{B}_3$ are shown in the numerator of an expressive formula with a bullet $\bullet$ between them.

To solve the PLP, one needs to calculate pessimistic and optimistic probabilities of the hypothesis $D = \{Z \ge 8\}$ induced by the belief structure $\tilde{B}_3$. Next, we demonstrate how such probabilities can be calculated. In his original article [18], Zadeh uses the following setting for calculating Expected Certainty, $EC$, and Expected Possibility, $E\Pi$:^7 Assume that one has a belief structure with focal elements $\{C_1, C_2, \cdots, C_r\} \subseteq \mathbb{F}_U$, whose probability mass assignments are $\{M_1, M_2, \cdots, M_r\} \subseteq \mathbb{F}_{[0,1]}$, in which $\mathbb{F}_U$ represents the set of all T1 FSs over the universe of discourse $U$. Then, for a hypothesis $D$,^8 the Expected Certainty and Possibility are calculated via:

$$\mu_{EC(D)}(z) = \sup_{\substack{z = p_1 x_1 + p_2 x_2 + \cdots + p_r x_r \\ p_1 + p_2 + \cdots + p_r = 1}} \min\left(\mu_{M_1}(p_1), \mu_{M_2}(p_2), \cdots, \mu_{M_r}(p_r)\right)$$
$$\mu_{E\Pi(D)}(z) = \sup_{\substack{z = p_1 y_1 + p_2 y_2 + \cdots + p_r y_r \\ p_1 + p_2 + \cdots + p_r = 1}} \min\left(\mu_{M_1}(p_1), \mu_{M_2}(p_2), \cdots, \mu_{M_r}(p_r)\right) \quad (21)$$

in which:

$$x_i = \inf_{u \in U}\left[\max\left(1 - \mu_{C_i}(u), \mu_D(u)\right)\right], \quad y_i = \sup_{u \in U}\left[\min\left(\mu_{C_i}(u), \mu_D(u)\right)\right] \quad (22)$$

which are respectively Zadeh's suggestions for a certainty and a possibility measure. They actually act as a subsethood and an overlap (intersection) measure, as noted by Yager [15], or, more generally, as a pessimistic and an optimistic compatibility measure, $\mathcal{I}$ and $\mathcal{S}$, respectively, since they satisfy $x_i \le y_i$ [15].

^7 These are more commonly called belief and plausibility in the evidential reasoning literature (see e.g. [35]), and, in a non-fuzzy setting, are called lower and upper probabilities by Dempster [36]. We use the latter expressions since they are more suitable for the present study.
^8 For the PLP, $D$ is the hypothesis that a new Product I is not needed before eight years.

Unfortunately, the above optimization problems may have no solutions [34], and instead, one can use Fuzzy Weighted Averages (FWAs) represented by the following expressive formulas [37]:^9

$$EC(D) = \frac{\sum_{i=1}^{r} M_i \times x_i}{\sum_{i=1}^{r} M_i}, \quad E\Pi(D) = \frac{\sum_{i=1}^{r} M_i \times y_i}{\sum_{i=1}^{r} M_i} \quad (23)$$

^9 This is called an expressive way to summarize the FWA rather than a computational way to summarize the FWA, because the FWA is not computed by multiplying, adding, and dividing T1 FSs; it is computed by solving optimization problems.

Similarly, if the focal elements and the probability mass assignments are IT2 FSs, one can use LWAs to guarantee that there are solutions for the problems of determining lower and upper probabilities. Assume that one has a belief structure with focal elements $\{\tilde{C}_1, \tilde{C}_2, \cdots, \tilde{C}_r\} \subseteq \tilde{\mathbb{F}}_U$, whose probability mass assignments are $\{\tilde{M}_1, \tilde{M}_2, \cdots, \tilde{M}_r\} \subseteq \tilde{\mathbb{F}}_{[0,1]}$, in which $\tilde{\mathbb{F}}_U$ represents the set of all IT2 FSs over the universe of discourse $U$. Then, for a hypothesis $\tilde{D}$, the lower and upper probabilities are calculated via the following LWAs:

$$\widetilde{\mathrm{LProb}}^-(\tilde{D}) = \frac{\sum_{i=1}^{r} \tilde{M}_i \times x_i}{\sum_{i=1}^{r} \tilde{M}_i}, \quad \widetilde{\mathrm{LProb}}^+(\tilde{D}) = \frac{\sum_{i=1}^{r} \tilde{M}_i \times y_i}{\sum_{i=1}^{r} \tilde{M}_i} \quad (24)$$

in which $\widetilde{\mathrm{LProb}}^-$ and $\widetilde{\mathrm{LProb}}^+$ are the lower (pessimistic) and the upper (optimistic) probabilities of $\tilde{D}$, $x_i = \mathcal{I}(\tilde{C}_i, \tilde{D})$ represents a pessimistic measure of compatibility of $\tilde{C}_i$ and $\tilde{D}$, and $y_i = \mathcal{S}(\tilde{C}_i, \tilde{D})$ represents an optimistic measure of compatibility of $\tilde{C}_i$ and $\tilde{D}$, which satisfy [15]:

$$\mathcal{I}(\tilde{C}, \tilde{D}) \le \mathcal{S}(\tilde{C}, \tilde{D}) \quad (25)$$

In such a framework, considering (20), the focal elements of $\tilde{B}_3$ are $\{\tilde{C}_1, \cdots, \tilde{C}_4\} = \{\tilde{5}\oplus\tilde{2},\ \tilde{5}'\oplus\tilde{2}',\ \tilde{5}'\oplus\tilde{2},\ \tilde{5}\oplus\tilde{2}'\}$ and its fuzzy probability mass assignments are $\{\tilde{M}_1, \cdots, \tilde{M}_4\} = \{(\tilde{U}\bullet\tilde{U})/[(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})],\ (\tilde{R}\bullet\tilde{R})/[(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})],\ (\tilde{R}\bullet\tilde{U})/[(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})],\ (\tilde{U}\bullet\tilde{R})/[(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})]\}$. We can calculate the LWAs of (24) to obtain the lower and upper probabilities that a new Product I is not needed before eight years, $D = \{Z \ge 8\}$. Note that $D$ is a non-fuzzy set in the PLP, but can be treated as a special case of a T1 or an IT2 FS, i.e., a fuzzy set whose membership function is the indicator function of $D$. Therefore, the lower and upper probabilities of $D$ can still be computed by (24).

Yager uses the following measures (which are exactly Zadeh's certainty and possibility measures):

$$\mathcal{I}_Y(C, D) = \min_{u \in U}\left[\max\left(1 - \mu_C(u), \mu_D(u)\right)\right] \quad (26)$$
$$\mathcal{S}_Y(C, D) = \max_{u \in U}\left[\min\left(\mu_C(u), \mu_D(u)\right)\right] \quad (27)$$

Yager notes that $\mathcal{I}_Y$ and $\mathcal{S}_Y$ in (26) and (27) are measures of inclusion and intersection for T1 FSs, and satisfy (25). He proves that they reduce to ordinary inclusion and intersection when $C$ and $D$ are non-fuzzy, and also satisfy the following property:

$$\mathcal{I}_Y(C, D) = 1 - \mathcal{S}_Y(C, D') \quad (28)$$

in which $D'$ is the complement of $D$. When the probability mass assignments are numeric, (28) results in $\mathrm{Prob}^-(D) + \mathrm{Prob}^+(D') = 1$, i.e., the lower probability of $D$ and the upper probability of its complement sum up to 1. Unfortunately, due to the presence of both min and max in the definitions of $\mathcal{I}_Y$ and $\mathcal{S}_Y$, which determine the compatibilities based on some "critical points," they are very conservative [14], [17], i.e., they are insensitive to changes of the evidence (or focal elements), although they have desirable mathematical properties; therefore, some alternative compatibility measures have been proposed in the literature [13], [16], [17], [38]–[40] to overcome this issue, by relaxing some of the mathematical properties of $\mathcal{I}_Y$ and $\mathcal{S}_Y$. Our journal version of this paper will include solving the problem with various alternative compatibility measures. Inspired by Yager's work, we suggest the following measures for IT2 FSs:

$$\mathcal{I}_Y(\tilde{C}, \tilde{D}) = \frac{1}{2}\min_{u \in U}\left[\max\left(1 - \overline{\mu}_{\tilde{C}}(u), \overline{\mu}_{\tilde{D}}(u)\right)\right] + \frac{1}{2}\min_{u \in U}\left[\max\left(1 - \underline{\mu}_{\tilde{C}}(u), \underline{\mu}_{\tilde{D}}(u)\right)\right] \quad (29)$$

$$\mathcal{S}_Y(\tilde{C}, \tilde{D}) = \frac{1}{2}\max_{u \in U}\left[\min\left(\overline{\mu}_{\tilde{C}}(u), \overline{\mu}_{\tilde{D}}(u)\right)\right] + \frac{1}{2}\max_{u \in U}\left[\min\left(\underline{\mu}_{\tilde{C}}(u), \underline{\mu}_{\tilde{D}}(u)\right)\right] \quad (30)$$

Note that $\mathcal{I}_Y(\tilde{C}, \tilde{D})$ and $\mathcal{S}_Y(\tilde{C}, \tilde{D})$ reduce to $\mathcal{I}_Y(C, D)$ and $\mathcal{S}_Y(C, D)$ when $\overline{\mu}_{\tilde{C}} = \underline{\mu}_{\tilde{C}}$ and $\overline{\mu}_{\tilde{D}} = \underline{\mu}_{\tilde{D}}$.

We summarize how to obtain the lower and upper probabilities for the PLP:
1) Determine the focal elements and probability masses of the belief structures representing knowledge on each variable using the syllogism presented in this section; the focal elements may be non-convex fuzzy sets (see (33)).
2) Using (19), perform arithmetic operations on the belief structures in the previous step to infer the belief structure which represents knowledge on the variable of interest. Call its linguistic probability masses $\tilde{M}_1 \ldots \tilde{M}_r$ and its focal elements $\tilde{C}_1 \ldots \tilde{C}_r$. Those focal elements may be non-convex fuzzy sets.
3) Choose appropriate pessimistic and optimistic compatibility measures, $\mathcal{I}$ and $\mathcal{S}$ (a numerical sketch of Yager's measures (26)–(30) follows this list).
4) To calculate the lower and upper probabilities of a hypothesis $\tilde{D}$, calculate the pessimistic and optimistic compatibility between $\tilde{D}$ and each of the focal elements, $x_i = \mathcal{I}(\tilde{C}_i, \tilde{D})$ and $y_i = \mathcal{S}(\tilde{C}_i, \tilde{D})$.
5) Use (24) to calculate lower and upper probabilities.
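To make steps 3 and 4 concrete, the following sketch evaluates Yager's T1 measures (26)–(27) and the averaged IT2 extensions in the form given above for (29)–(30) on sampled membership functions. The membership arrays used here are hypothetical placeholders, not the models used in Section IV.

```python
# Illustrative sketch of the compatibility measures (26)-(27) and of the averaged
# IT2 extensions (29)-(30), evaluated on a sampled universe of discourse.
import numpy as np

def I_Y(mu_C, mu_D):
    """Pessimistic (inclusion-like) compatibility, eq. (26)."""
    return np.min(np.maximum(1.0 - mu_C, mu_D))

def S_Y(mu_C, mu_D):
    """Optimistic (intersection-like) compatibility, eq. (27)."""
    return np.max(np.minimum(mu_C, mu_D))

def I_Y_IT2(umf_C, lmf_C, umf_D, lmf_D):
    """IT2 extension of (26), averaged over the UMFs and the LMFs, as in (29)."""
    return 0.5 * I_Y(umf_C, umf_D) + 0.5 * I_Y(lmf_C, lmf_D)

def S_Y_IT2(umf_C, lmf_C, umf_D, lmf_D):
    """IT2 extension of (27), averaged over the UMFs and the LMFs, as in (30)."""
    return 0.5 * S_Y(umf_C, umf_D) + 0.5 * S_Y(lmf_C, lmf_D)

# Hypothetical example: a focal element "about 7" vs. the crisp hypothesis D = {Z >= 8}
u = np.linspace(0, 15, 1501)
umf_C = np.clip(1 - np.abs(u - 7) / 3.0, 0, 1)   # upper MF of the focal element
lmf_C = np.clip(1 - np.abs(u - 7) / 1.5, 0, 1)   # lower MF of the focal element
mu_D = (u >= 8).astype(float)                    # indicator function of D

print(I_Y_IT2(umf_C, lmf_C, mu_D, mu_D), S_Y_IT2(umf_C, lmf_C, mu_D, mu_D))
```

In step 5, these $x_i$ and $y_i$ become the crisp "signals" of the LWAs in (24), whose fuzzy "weights" are the linguistic mass assignments $\tilde{M}_i$.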
IV. IMPLEMENTATION OF THE SOLUTION

In this section, we solve the PLP based on the theory provided in Section III. First, we establish a vocabulary of words and their IT2 FS models for the linguistic variable probability, the latter obtained by collecting data from subjects on the Amazon Mechanical Turk website [41]^10 and applying the Enhanced Interval Approach [42] to that data [43]. The words are: {Extremely improbable, Very improbable, Improbable, Somewhat improbable, Tossup, Somewhat probable, Probable, Very probable, Extremely probable}. Their FOUs are depicted in Fig. 1.

^10 The native language of the subjects was English.

[Fig. 1. Vocabulary of IT2 FSs representing linguistic probabilities (one panel per word, membership plotted over the probability interval [0, 1]).]
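For the computations reported below, each vocabulary word can be stored as a pair of lower and upper membership functions sampled on a probability grid. The following sketch shows one possible representation using trapezoidal membership functions; the trapezoid parameters are purely illustrative placeholders, not the data-driven FOUs of Fig. 1.

```python
# Minimal representation of an IT2 FS by sampled lower/upper trapezoidal MFs.
# The trapezoid parameters below are illustrative placeholders only.
import numpy as np

def trapezoid(u, a, b, c, d, height=1.0):
    """Trapezoidal membership function with support [a, d] and core [b, c]."""
    mf = np.zeros_like(u)
    mf[(u >= b) & (u <= c)] = height
    rising = (u > a) & (u < b)
    falling = (u > c) & (u < d)
    if b > a:
        mf[rising] = height * (u[rising] - a) / (b - a)
    if d > c:
        mf[falling] = height * (d - u[falling]) / (d - c)
    return mf

class IT2FS:
    """An IT2 FS stored as sampled lower and upper membership functions."""
    def __init__(self, u, lmf, umf):
        self.u, self.lmf, self.umf = u, lmf, umf

u = np.linspace(0, 1, 1001)
# Hypothetical word "Probable": UMF wider than LMF, LMF possibly subnormal
probable = IT2FS(u,
                 lmf=trapezoid(u, 0.60, 0.70, 0.75, 0.85, height=0.8),
                 umf=trapezoid(u, 0.55, 0.65, 0.80, 0.90, height=1.0))
```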
In order to verify that the vocabulary provides a suitable partitioning of the universe of discourse of numeric probabilities, we calculated pairwise Jaccard similarities between its words, to make sure that each of the distinct words is adequately dissimilar from the other words, and observed that the words are pairwise dissimilar, since all the pairwise similarities were less than 0.5. Note that the Jaccard similarity measure between IT2 FSs $\tilde{A}$ and $\tilde{B}$ is calculated as:

$$s_J(\tilde{A}, \tilde{B}) = \frac{\int_U \left(\min(\overline{\mu}_{\tilde{A}}(u), \overline{\mu}_{\tilde{B}}(u)) + \min(\underline{\mu}_{\tilde{A}}(u), \underline{\mu}_{\tilde{B}}(u))\right) du}{\int_U \left(\max(\overline{\mu}_{\tilde{A}}(u), \overline{\mu}_{\tilde{B}}(u)) + \max(\underline{\mu}_{\tilde{A}}(u), \underline{\mu}_{\tilde{B}}(u))\right) du} \quad (31)$$

where $U$ is the universe of discourse.

We needed fuzzy set models for the words "Usually" and "Rarely," and used the same fuzzy set models for them as the ones used for the words "Probable" and "Improbable."^11 We modeled the words "About 5" and "About 2" using IT2 fuzzy numbers. The complements of those fuzzy sets were calculated as:

$$\overline{\mu}_{\tilde{C}'} = 1 - \underline{\mu}_{\tilde{C}}, \quad \underline{\mu}_{\tilde{C}'} = 1 - \overline{\mu}_{\tilde{C}} \quad (32)$$

The fuzzy set models for About $5 \equiv \tilde{5}$ and About $2 \equiv \tilde{2}$ and their complements are depicted in Fig. 2. For calculating the complements, the ranges of the base variables of About $5 \equiv \tilde{5}$ and About $2 \equiv \tilde{2}$ (i.e., $X$ and $Y$) must be available. This is additional world knowledge that is needed for our solution methodology. We assumed that $X \in [0, 10]$ and $Y \in [0, 5]$.^12 Observe that the complements of $\tilde{5}$ and $\tilde{2}$ are non-convex, so, in order to perform our later computations of the focal elements of $\tilde{B}_3$ in (20), which require computing the complements of $\tilde{5}$ and $\tilde{2}$, we formulated each of them as the union of two convex^13 fuzzy sets (see Fig. 2), as:^14

$$\tilde{5}' = \tilde{5}'_a \cup \tilde{5}'_b, \quad \tilde{2}' = \tilde{2}'_a \cup \tilde{2}'_b \quad (33)$$

^11 We are currently researching modeling usuality words using the Enhanced Interval Approach.
^12 The maximum possible age for the refurbished product is assumed to be less than that of a brand new one.
^13 We call an IT2 FS convex if both of its lower and upper membership functions are convex.
^14 We need such a formulation in terms of convex fuzzy sets to be able to calculate LWAs, as will be seen in the sequel.
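The complement in (32) and a decomposition of the form (33) are easy to carry out on sampled membership functions. The sketch below shows one way to do it, assuming the sampled representation introduced after Fig. 1; splitting the complement at the core of the original fuzzy number is our own illustrative choice, and the fuzzy number parameters are hypothetical.

```python
# Sketch of the IT2 complement (32) and of a split into two convex parts as in (33).
# Membership arrays are sampled on a grid; the fuzzy number below is illustrative.
import numpy as np

def it2_complement(lmf, umf):
    """Complement of an IT2 FS: flip and swap the lower/upper MFs, per (32)."""
    return 1.0 - umf, 1.0 - lmf          # returns (lmf', umf')

def split_at_peak(lmf, umf, x, peak):
    """Split a (generally non-convex) complement into left and right pieces by
    zeroing it on the other side of the original fuzzy number's peak."""
    left = x <= peak
    part_a = (np.where(left, lmf, 0.0), np.where(left, umf, 0.0))
    part_b = (np.where(left, 0.0, lmf), np.where(left, 0.0, umf))
    return part_a, part_b

# Hypothetical IT2 fuzzy number "About 5" on X in [0, 10]
x = np.linspace(0, 10, 1001)
umf5 = np.clip(1 - np.abs(x - 5) / 2.0, 0, 1)
lmf5 = np.clip(1 - np.abs(x - 5) / 1.0, 0, 1)

lmf5c, umf5c = it2_complement(lmf5, umf5)           # "Not About 5"
five_a, five_b = split_at_peak(lmf5c, umf5c, x, 5)  # convex pieces, cf. (33)
```

Taking the pointwise maximum of the two pieces recovers the original complement, so nothing is lost by the decomposition.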
[Fig. 2. IT2 FSs representing About $5 \equiv \tilde{5}$ and About $2 \equiv \tilde{2}$, and their complements $\tilde{5}' = \tilde{5}'_a \cup \tilde{5}'_b$ and $\tilde{2}' = \tilde{2}'_a \cup \tilde{2}'_b$.]

To derive the fuzzy probability mass assignments of the belief structure in (20), we calculated $\tilde{Y}_1 = (\tilde{U}\bullet\tilde{U})/[(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})]$, $\tilde{Y}_2 = (\tilde{R}\bullet\tilde{R})/[(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})]$, and $\tilde{Y}_3 = \tilde{Y}_4 = (\tilde{R}\bullet\tilde{U})/[(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})] = (\tilde{U}\bullet\tilde{R})/[(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})]$, using $\alpha$-cut decomposition [32], [33], which translates the problem of calculating the DNLWA into the following statement: for each $\alpha$, the $\alpha$-cut of the LMF, $\underline{Y}_i(\alpha) = [\underline{y}_L(\alpha), \underline{y}_R(\alpha)]$, and the $\alpha$-cut of the UMF, $\overline{Y}_i(\alpha) = [\overline{y}_L(\alpha), \overline{y}_R(\alpha)]$, of each DNLWA $\tilde{Y}_i$ are given by:

$$\underline{y}_L(\alpha) = \min_{x_1, y_1 \in \underline{U}(\alpha),\ x_2, y_2 \in \underline{R}(\alpha)} \psi_{\mathcal{K}_i}(x_1, x_2, y_1, y_2), \quad \underline{y}_R(\alpha) = \max_{x_1, y_1 \in \underline{U}(\alpha),\ x_2, y_2 \in \underline{R}(\alpha)} \psi_{\mathcal{K}_i}(x_1, x_2, y_1, y_2)$$
$$\overline{y}_L(\alpha) = \min_{x_1, y_1 \in \overline{U}(\alpha),\ x_2, y_2 \in \overline{R}(\alpha)} \psi_{\mathcal{K}_i}(x_1, x_2, y_1, y_2), \quad \overline{y}_R(\alpha) = \max_{x_1, y_1 \in \overline{U}(\alpha),\ x_2, y_2 \in \overline{R}(\alpha)} \psi_{\mathcal{K}_i}(x_1, x_2, y_1, y_2) \quad (34)$$

Note that $\psi_{\mathcal{K}_i}$ is calculated from (18), where $\mathcal{K}_1 = \{(1,1)\}$, $\mathcal{K}_2 = \{(2,2)\}$, $\mathcal{K}_3 = \{(1,2)\}$, and $\mathcal{K}_4 = \{(2,1)\}$. We solved the optimization problems in (34) by the active set method [44], and the FOUs of the $\tilde{Y}_i$'s are depicted in Fig. 3. Recall that $\tilde{Y}_3 = \tilde{Y}_4$.

[Fig. 3. IT2 FSs representing the fuzzy probability mass assignments $\tilde{U}\bullet\tilde{U}/[(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})]$, $\tilde{R}\bullet\tilde{R}/[(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})]$, and $\tilde{R}\bullet\tilde{U}/[(\tilde{U}+\tilde{R})(\tilde{U}+\tilde{R})]$.]

Next, we calculated the focal elements of the belief structure in (20). Because $\tilde{5}'$ and $\tilde{2}'$ are non-convex, we used the following Theorems:

Theorem 1. Let $\tilde{A} \in \tilde{\mathbb{F}}_U$ and $\tilde{B} \in \tilde{\mathbb{F}}_U$ be IT2 FSs, and $f$ be any function of two variables. Also assume that $\tilde{A} = \tilde{A}_1 \cup \tilde{A}_2$. Then, $f(\tilde{A}, \tilde{B}) = f(\tilde{A}_1, \tilde{B}) \cup f(\tilde{A}_2, \tilde{B})$, provided that the union is carried out by the max t-conorm.

Proof: See [5].

Theorem 2. Let $\tilde{A} \in \tilde{\mathbb{F}}_U$ and $\tilde{B} \in \tilde{\mathbb{F}}_U$ be IT2 FSs, and $f$ be any function of two variables. Also assume that $\tilde{A} = \cup_{i=1}^{n} \tilde{A}_i$ and $\tilde{B} = \cup_{j=1}^{m} \tilde{B}_j$. Then, $f(\tilde{A}, \tilde{B}) = \cup_{i=1}^{n} \cup_{j=1}^{m} f(\tilde{A}_i, \tilde{B}_j)$, provided that the union is carried out by the max t-conorm.

Proof: It is sufficient to prove the theorem for $n = m = 2$; the rest of the proof follows from induction on $m$ and $n$, which is straightforward and is left to the reader. By Theorem 1:

$$f(\tilde{A}_1 \cup \tilde{A}_2, \tilde{B}_1 \cup \tilde{B}_2) = f(\tilde{A}_1, \tilde{B}_1 \cup \tilde{B}_2) \cup f(\tilde{A}_2, \tilde{B}_1 \cup \tilde{B}_2) \quad (35)$$

Applying Theorem 1 again to each of the two terms on the right-hand side of (35),

$$f(\tilde{A}_1 \cup \tilde{A}_2, \tilde{B}_1 \cup \tilde{B}_2) = f(\tilde{A}_1, \tilde{B}_1) \cup f(\tilde{A}_1, \tilde{B}_2) \cup f(\tilde{A}_2, \tilde{B}_1) \cup f(\tilde{A}_2, \tilde{B}_2) = \bigcup_{i=1}^{2}\bigcup_{j=1}^{2} f(\tilde{A}_i, \tilde{B}_j) \quad (36)$$

Using Theorems 1 and 2, we can compute the addition of non-convex IT2 FSs using additions of convex IT2 FSs. For example, we have:

$$\tilde{2}' \oplus \tilde{5}' = (\tilde{2}'_a \oplus \tilde{5}'_a) \cup (\tilde{2}'_a \oplus \tilde{5}'_b) \cup (\tilde{2}'_b \oplus \tilde{5}'_a) \cup (\tilde{2}'_b \oplus \tilde{5}'_b) \quad (37)$$

The FOUs of the focal elements of $\tilde{B}_3$ are shown in Fig. 4. Note that we have also used the fact that the addition of two fuzzy sets with normal trapezoidal membership functions with parameters $(a_1, a_2, a_3, a_4)$ and $(b_1, b_2, b_3, b_4)$ is a fuzzy set with a trapezoidal membership function represented by $(a_1+b_1, a_2+b_2, a_3+b_3, a_4+b_4)$. Interestingly, $\tilde{5}' \oplus \tilde{2}'$ is the whole $[0, 15]$ interval, which can be interpreted as the word Unknown [45]. We implemented our solution according to (24) using Yager's measures extended to IT2 FSs, $\mathcal{I}_Y$ and $\mathcal{S}_Y$; $\widetilde{\mathrm{LProb}}^-$ and $\widetilde{\mathrm{LProb}}^+$ are shown in Fig. 5.
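As an illustration of the trapezoid-sum fact and of the union decomposition (37) used above, the following sketch adds trapezoidal pieces parameter-wise and forms the max-union of the partial sums. The trapezoid parameters are hypothetical stand-ins for $\tilde{2}'_a$, $\tilde{2}'_b$, $\tilde{5}'_a$, and $\tilde{5}'_b$, not the models actually used in our computations.

```python
# Sketch of (37): add convex trapezoidal pieces parameter-wise, then take the
# max-union of the partial sums. Parameters below are illustrative placeholders.
import numpy as np

def trap_mf(u, params, height=1.0):
    """Sampled trapezoidal MF with parameters (a1, a2, a3, a4)."""
    a1, a2, a3, a4 = params
    mf = np.zeros_like(u)
    mf[(u >= a2) & (u <= a3)] = height
    rise = (u > a1) & (u < a2)
    fall = (u > a3) & (u < a4)
    if a2 > a1:
        mf[rise] = height * (u[rise] - a1) / (a2 - a1)
    if a4 > a3:
        mf[fall] = height * (a4 - u[fall]) / (a4 - a3)
    return mf

def trap_sum(p, q):
    """Sum of two normal trapezoidal fuzzy numbers: add parameters element-wise."""
    return tuple(pi + qi for pi, qi in zip(p, q))

# Hypothetical trapezoid parameters for the convex pieces of 2' and 5'
two_a, two_b = (0.0, 0.0, 1.0, 2.0), (2.0, 3.0, 5.0, 5.0)
five_a, five_b = (0.0, 0.0, 3.0, 5.0), (5.0, 7.0, 10.0, 10.0)

z = np.linspace(0, 15, 1501)
partial_sums = [trap_sum(p, q) for p in (two_a, two_b) for q in (five_a, five_b)]
# Max-union of the four partial sums approximates 2' + 5' on [0, 15]
mu_union = np.max([trap_mf(z, s) for s in partial_sums], axis=0)
```

In an IT2 setting the same operation would simply be applied twice, once to the lower and once to the upper trapezoids of each piece.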
[Fig. 4. FOUs of the focal elements of $\tilde{B}_3$: $\tilde{5}\oplus\tilde{2}$, $\tilde{5}'\oplus\tilde{2}'$, $\tilde{5}'\oplus\tilde{2}$, and $\tilde{5}\oplus\tilde{2}'$.]

[Fig. 5. Pessimistic (lower) probability $\widetilde{\mathrm{LProb}}^-$ and optimistic (upper) probability $\widetilde{\mathrm{LProb}}^+$ of $D$ calculated using $\mathcal{I}_Y$ and $\mathcal{S}_Y$.]

It is well known that the centroid of an IT2 FS can be used to quantify its uncertainty (e.g., [46]). The centroid can therefore be used to quantify the uncertainty about the numeric lower and upper probabilities, which are the solutions to the PLP. The term around can be used to express the uncertainty represented by the centroids; it expresses the inter-person and intra-person uncertainties about the words propagated by the LWA. This cannot be done by any solution involving T1 FSs, including Zadeh's solution, since T1 FSs do not reflect the uncertainty about the membership values. The centroid of the lower probability calculated using Yager's measures is [0.20, 0.35] and its average centroid is 0.28. The centroid of the corresponding upper probability is [0.83, 0.95], and its average centroid is 0.89. Consequently, the solution using Yager's measures is: "The probability that a new Product I is not needed before eight years is from around 28% to around 89%."

In order to express these results in a way that is more understandable to humans, we calculated the Jaccard similarity of $\widetilde{\mathrm{LProb}}^-$ and $\widetilde{\mathrm{LProb}}^+$ (resulting from Yager's measures) with the members of the vocabulary of IT2 fuzzy probabilities. The similarities of the solutions with the members of the vocabulary of linguistic probabilities are summarized in Table I. We observed that the most similar words to the lower and upper probabilities are Somewhat improbable and Very probable, respectively. We can therefore conclude that a linguistic solution to the PLP is: "The probability that a new Product I is not needed before eight years is between Somewhat improbable and Very probable."

TABLE I
SIMILARITIES BETWEEN THE LOWER AND UPPER PROBABILITIES OF $D$ CALCULATED USING $\mathcal{I}_Y$ AND $\mathcal{S}_Y$, AND MEMBERS OF THE VOCABULARY OF LINGUISTIC PROBABILITIES IN FIG. 1

Linguistic probability ($\tilde{P}_i$) | $s_J(\widetilde{\mathrm{LProb}}^-, \tilde{P}_i)$ | $s_J(\widetilde{\mathrm{LProb}}^+, \tilde{P}_i)$
Extremely improbable | 0 | 0
Very improbable | 0.0030732 | 0
Improbable | 0.17199 | 0
Somewhat improbable | 0.2767 | 0
Tossup | 0.048117 | 0
Somewhat probable | 0 | 0
Probable | 0 | 0.024919
Very probable | 0 | 0.51676
Extremely probable | 0 | 0.39984
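The final mapping from computed FOUs back to vocabulary words uses the Jaccard similarity (31). The sketch below shows that step on sampled membership functions; the word models and the "computed" lower probability used here are hypothetical triangular placeholders, standing in for the actual FOUs of Fig. 1 and Fig. 5.

```python
# Sketch of mapping a computed IT2 probability back to the most similar
# vocabulary word via the Jaccard similarity (31). All FOUs here are
# hypothetical placeholders for the actual computed/encoded ones.
import numpy as np

def jaccard_it2(lmf_a, umf_a, lmf_b, umf_b, u):
    """Jaccard similarity between two IT2 FSs sampled on grid u, per (31)."""
    num = np.trapz(np.minimum(umf_a, umf_b) + np.minimum(lmf_a, lmf_b), u)
    den = np.trapz(np.maximum(umf_a, umf_b) + np.maximum(lmf_a, lmf_b), u)
    return num / den if den > 0 else 0.0

u = np.linspace(0, 1, 1001)

def word(center, half_u, half_l):
    """Hypothetical triangular IT2 word model (UMF wider than LMF)."""
    umf = np.clip(1 - np.abs(u - center) / half_u, 0, 1)
    lmf = np.clip(1 - np.abs(u - center) / half_l, 0, 1)
    return lmf, umf

vocabulary = {                      # stand-ins for the Fig. 1 word models
    "Somewhat improbable": word(0.3, 0.2, 0.1),
    "Tossup": word(0.5, 0.2, 0.1),
    "Probable": word(0.7, 0.2, 0.1),
    "Very probable": word(0.9, 0.2, 0.1),
}

lprob_lower = word(0.28, 0.15, 0.08)  # placeholder for a computed LProb^- FOU
best = max(vocabulary, key=lambda w: jaccard_it2(*lprob_lower, *vocabulary[w], u))
print("Most similar word:", best)
```

The same routine, run against all nine encoded word models, produces the similarity columns reported in Table I.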
V. CONCLUSIONS AND FUTURE WORK

In this paper, we translated an ACWW problem into two belief structures whose focal elements and probability mass assignments are IT2 FSs, and which represent knowledge about two variables: the lifetime of a new product and the lifetime of a refurbished one. We then performed addition of those two belief structures to obtain a belief structure that carried knowledge about the variable of interest, which is the sum of the lifetime of a new product and that of a refurbished one. We used the LWA and Yager's measures to calculate the lower probabilities and the upper probabilities of the hypothesis "No new product is needed before 8 years."

Future research should be devoted to the fusion of conflicting information [27], [28], [47], [48] using belief structures with fuzzy probability mass assignments, and to calculating their expected values; the results can then be applied to more complicated ACWW problems that involve linguistic information from different sources. Also, some criteria need to be devised to investigate the correctness of the solutions. A viable measure is to examine whether the solution methodology gives the correct answer for problems whose solutions are known intuitively [11].

An interesting fact that reveals itself in this paper is that the solutions of ACWW problems are very complicated and require the use of some complicated mathematics. This should not be surprising, because this is also true when solving problems in probability theory. More specifically, ACWW problems involve both probabilistic and fuzzy uncertainties, calling for the use of mathematical tools from both fields to solve them.

ACKNOWLEDGMENT

The authors would like to thank Prof. Lotfi A. Zadeh for his enthusiasm and continuing support, and for his comments on his solutions to the ACWW challenge problems. Mohammad Reza Rajati would like to acknowledge the generous support of the Annenberg Fellowship Program and the Summer Research Institute Fellowship of the University of Southern California.
R EFERENCES [1] J. M. Mendel, L. A. Zadeh, E. Trillas, R. R. Yager, J. Lawry, H. Hagras, and S. Guadarrama, “What computing with words means to me [discussion forum],” IEEE Computational Intelligence Magazine, vol. 5, no. 1, pp. 20–26, 2010. [2] J. M. Mendel, “Computing with words: Zadeh, Turing, Popper and Occam,” IEEE Computational Intelligence Magazine, vol. 2, no. 4, pp. 10–17, 2007. [3] L. A. Zadeh, Computing with words: Principal concepts and ideas, vol. 277 of Studies in Fuzziness and Soft Computing. Springer Berlin Heidelberg New York, 2012. [4] M. R. Rajati, J. M. Mendel, and D. Wu, “Solving Zadeh’s Magnus challenge problem on linguistic probabilities via linguistic weighted averages,” in Proceedings of 2011 IEEE International Conference on Fuzzy Systems, pp. 2177–2184, IEEE, 2011. [5] M. R. Rajati, D. Wu, and J. M. Mendel, “On solving Zadeh’s tall Swedes problem,” in Proceedings of 2011 World Conference on Soft Computing, 2011. [6] M. R. Rajati and J. M. Mendel, “Solving Zadeh’s Swedes and Italians challenge problem,” in Proceedings of 2012 Annual Meeting of the North American Fuzzy Information Processing Society (NAFIPS), pp. 1–6, IEEE, 2012. [7] M. R. Rajati and J. M. Mendel, “Lower and upper probability calculations using compatibility measures for solving Zadeh’s challenge problems,” in Proceedings of 2012 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), pp. 1–8, IEEE, 2012. [8] A. Papoulis and S. Pillai, Probability, random variables and stochastic processes. McGraw-Hill Education (India) Pvt Ltd, 2002. [9] L. A. Zadeh, “Toward a generalized theory of uncertainty (GTU)—-an outline,” Information Sciences, vol. 172, no. 1, pp. 1–40, 2005. [10] L. A. Zadeh, “Fuzzy logic= computing with words,” IEEE Transactions on Fuzzy Systems, vol. 4, no. 2, pp. 103–111, 1996. [11] M. R. Rajati and J. M. Mendel, “Advanced computing with words using the generalized extension principle for type-1 fuzzy sets,” Submitted to IEEE Transactions on Fuzzy Systems, 2013. [12] L. A. Zadeh, “A note on Z-numbers,” Information Sciences, vol. 181, no. 14, pp. 2923–2932, 2011. [13] M. Ishizuka, K. S. Fu, and J. T. P. Yao, “Inference procedures under uncertainty for the problem-reduction method,” Information Sciences, vol. 28, no. 3, pp. 179–206, 1982. [14] C. Lucas and B. N. Araabi, “Generalization of the Dempster-Shafer theory: a fuzzy-valued measure,” IEEE Transactions on Fuzzy Systems, vol. 7, no. 3, pp. 255–270, 1999. [15] R. R. Yager, “Generalized probabilities of fuzzy events from fuzzy belief structures,” Information Sciences, vol. 28, no. 1, pp. 45–62, 1982. [16] M.-S. Yang, T.-C. Chen, and K.-L. Wu, “Generalized belief function, plausibility function, and Dempster’s combinational rule to fuzzy sets,” International Journal of Intelligent Systems, vol. 18, no. 8, pp. 925–937, 2003. [17] J. Yen, “Generalizing the dempster-schafer theory to fuzzy sets,” IEEE Transactions on Systems, Man and Cybernetics, vol. 20, no. 3, pp. 559– 570, 1990. [18] L. A. Zadeh, “Fuzzy sets and information granularity,” in Advances in fuzzy set theory and applications (M. Gupta, R. Ragade, and R. R. Yager, eds.), vol. 11, pp. 3–18, Amsterdam, The Netherlands: North Holland, 1979. [19] H. Nguyen, V. Kreinovich, and Q. Zuo, “Interval-valued degrees of belief: applications of interval computations to expert systems and intelligent control,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 5, no. 3, pp. 317–358, 1997. [20] T. 
Denœux, “Reasoning with imprecise belief structures,” International Journal of Approximate Reasoning, vol. 20, no. 1, pp. 79–111, 1999. [21] Z.-G. Su, P.-h. Wang, X.-j. Yu, and Z.-Z. Lv, “Maximal confidence intervals of the interval-valued belief structure and applications,” Information Sciences, vol. 181, no. 9, pp. 1700–1721, 2011. [22] Y. M. Wang, J. B. Yang, D. L. Xu, and K. S. Chin, “The evidential reasoning approach for multiple attribute decision analysis using interval belief degrees,” European Journal of Operational Research, vol. 175, no. 1, pp. 35–66, 2006. [23] Y. M. Wang, J. B. Yang, D. L. Xu, and K. S. Chin, “On the combination and normalization of interval-valued belief structures,” Information Sciences, vol. 177, no. 5, pp. 1230–1247, 2007.
[24] R. R. Yager, “Dempster–Shafer belief structures with interval valued focal weights,” International Journal of Intelligent systems, vol. 16, no. 4, pp. 497–512, 2001. [25] C. Fu and S. Yang, “The conjunctive combination of interval-valued belief structures from dependent sources,” International Journal of Approximate Reasoning, vol. 53, no. 5, pp. 769–785, 2012. [26] C. Fu and S. Yang, “The combination of dependence-based intervalvalued evidential reasoning approach with balanced scorecard for performance assessment,” Expert Systems with Applications, vol. 39, no. 3, pp. 3717–3730, 2012. [27] C. Fu and S. Yang, “Group consensus based on evidential reasoning approach using interval-valued belief structures,” Knowledge-Based Systems, vol. 35, pp. 167–176, 2012. [28] C. Fu and S. Yang, “An evidential reasoning based consensus model for multiple attribute group decision analysis problems with intervalvalued group consensus requirements,” European Journal of Operational Research, vol. 223, no. 1, pp. 167–176, 2012. [29] T. Denœux, “Modeling vague beliefs using fuzzy-valued belief structures,” Fuzzy Sets and Systems, vol. 116, no. 2, pp. 167–199, 2000. [30] I. Couso and L. S´anchez, “Upper and lower probabilities induced by a fuzzy random variable,” Fuzzy Sets and Systems, vol. 165, no. 1, pp. 1– 23, 2011. [31] R. R. Yager, “Arithmetic and other operations on Dempster-Shafer structures,” International Journal of Man-Machine Studies, vol. 25, no. 4, pp. 357–366, 1986. [32] H. Hamrawi, Type-2 Fuzzy Alpha-cuts. PhD thesis, De Montfort University, 2011. [33] H. Hamrawi and S. Coupland, “Type-2 fuzzy arithmetic using alphaplanes,” in Proceedings of 2009 IFSA-EUSFLAT Conference, pp. 606– 611, 2009. [34] M. R. Rajati and J. M. Mendel, “Novel weighted averages versus normalized sums in computing with words,” Information Sciences, vol. 235, pp. 130–149, 2013. [35] D. Dubois and H. Prade, “Random sets and fuzzy interval analysis,” Fuzzy Sets and Systems, vol. 42, no. 1, pp. 87–101, 1991. [36] A. Dempster, “Upper and lower probabilities induced by a multivalued mapping,” Annals of Mathematical Statistics, vol. 38, no. 2, pp. 325– 339, 1967. [37] F. Liu and J. M. Mendel, “Aggregation using the fuzzy weighted average as computed by the Karnik–Mendel algorithms,” IEEE Transactions on Fuzzy Systems, vol. 16, no. 1, pp. 1–12, 2008. [38] C. Hwang and M. Yang, “Generalization of belief and plausibility functions to fuzzy sets based on the Sugeno integral,” International Journal of Intelligent Systems, vol. 22, no. 11, pp. 1215–1228, 2007. [39] H. Ogawa, K. S. Fu, and J. T. P. Yao, “An inexact inference for damage assessment of existing structures,” International Journal of ManMachine Studies, vol. 22, no. 3, pp. 295–306, 1985. [40] J. Xiao, M. Tong, Q. Fan, and S. Xiao, “Generalization of belief and plausibility functions to fuzzy sets,” Applied Mathematics & Information Sciences, vol. 6, no. 3, pp. 697–703, 2012. [41] “Amazon Mechanical Turk.” https://www.mturk.com/mturk/, 2012. [Online; accessed August 2012]. [42] D. Wu, J. M. Mendel, and S. Coupland, “Enhanced interval approach for encoding words into interval type-2 fuzzy sets and its convergence analysis,” IEEE Transactions on Fuzzy Systems, vol. 20, no. 3, pp. 499– 513, 2012. [43] M. R. Rajati and J. M. Mendel, “Modeling linguistic probabilities and linguistic quantifiers using interval type-2 fuzzy sets,” in Proceedings of 2013 Joint IFSA World Congress and NAFIPS Annual Meeting, IEEE, 2013. [44] P. Gill, W. Murray, and M. 
Wright, Numerical linear algebra and optimization. McMaster University, 2007. [45] L. A. Zadeh, “The concept of a linguistic variable and its application to approximate reasoning-II,” Information Sciences, vol. 8, no. 4, pp. 301– 357, 1975. [46] J. M. Mendel and D. Wu, Perceptual computing: Aiding people in making subjective judgments. Wiley-IEEE Press, 2010. [47] T. Denœux, “Conjunctive and disjunctive combination of belief functions induced by nondistinct bodies of evidence,” Artificial Intelligence, vol. 172, no. 2, pp. 234–264, 2008. [48] D. Dubois and H. Prade, “Evidence, knowledge, and belief functions,” International Journal of Approximate Reasoning, vol. 6, no. 3, pp. 295– 319, 1992.