European Journal of Operational Research 259 (2017) 1191–1199
Innovative Applications of O.R.
Study of aggregation algorithms for aggregating imprecise software requirements' priorities

Persis Voola (Department of Computer Science and Engineering, Adikavi Nannaya University, Raja Raja Narendra Nagar, 533296 Rajamundry, East Godavari District, Andhra Pradesh, India) and Vinaya Babu A. (Department of CSE, JNTU Hyderabad, India)
Article history: Received 24 October 2013; Accepted 22 November 2016; Available online 9 December 2016.

Keywords: Requirements prioritization; Interval evidential reasoning; Uncertain assessment; Multiple attribute utility theory
Abstract

Extensive Numerical Assignment (ENA) is a novel Requirements Prioritization Technique introduced by the authors that acknowledges the uncertain and imprecise nature of human judgment. A controlled experiment is conducted during which data are collected using ENA for the requirements assessment of a university website system. The objective of this paper is to study how the imprecise data obtained from ENA can be aggregated using two aggregation algorithms, Multiple Attribute Utility Theory (MAUT) and Interval Evidential Reasoning (IER), to generate requirements' priorities in the presence of conflicting personal preferences among assessors. A simplified version of IER called Laplace Evidential Reasoning (LER) is introduced and the results are discussed. LER has the potential to emerge as a competent aggregation algorithm: it has reasonable processing requirements when compared to IER, and it produces a rich set of outputs when compared to MAUT.
1. Introduction

A Requirements Prioritization Technique (RPT) facilitates Requirements Prioritization (RP), an important and commonly practiced activity in software development. RP provides support for identifying the small set of most valuable requirements out of a large set while still producing software that satisfies its customers. An ordered set of requirements obtained through prioritization helps to plan and select the requirements to be delivered in successive releases of the software. RP further helps to focus the best efforts of developers on the features that make it possible to develop the right product at the right time. If RP is not properly planned and executed, developers are likely to end up with a product that does not satisfy customer expectations (Karlsson & Ryan, 1996). Several RPTs based on precise assessments are available in the literature, and experiments investigating the characteristics of RPTs (Karlsson, Wohlin, & Regnell, 1997; Karlsson & Ryan, 1997; Lehtola & Kauppinen, 2004; Karlsson, Berander, Regnell, & Wohlin, 2004; Ahl, 2005; Sahni, 2016) record their usefulness under various environments.

Prioritization, as a decision making problem, has to deal with the uncertain and imprecise nature of human judgment.
Uncertainty is a popular idea in the literature on decision making (Lipshitz & Strauss, 1997; Tversky & Kahneman, 1974). Assessors are humans whose judgment is based on intuition, experience, intelligence, assumptions, opinions and beliefs, and is therefore more likely to be subjective and imprecise than objective and precise. Psychologists have pointed out that an assessor psychologically prefers ranges for judgment rather than single points (Viswanathan, Sudman, & Johnson, 2004). Therefore, the subjectivity and imprecision of human judgment have to be properly acknowledged in the prioritization problem.

Requirements' priorities are uncertain guesses about the upcoming product. Imprecision and uncertainty during RP are attributed to several causes, such as the imprecise nature of human judgment, deficient knowledge of requirements and vagueness of meaning about requirements. Requirements decisions are thought to be hard because of the uncertainty and incompleteness of the information available (Ngo-The & Ruhe, 2005), and requirements uncertainty has also been characterized as an information deficit with respect to the specification of requirements (Keutel & Mellis, 2011). Several researchers have acknowledged the presence of uncertainty during Requirements Prioritization and related concepts (Voola & Vinaya Babu, 2013). Hence, a clear need has emerged to develop an RPT that can handle this imprecision and uncertainty: techniques that aid in determining the priorities of requirements must give space to the inclusion of uncertainty as a central aspect. With this drive, the authors introduced Extensive Numerical
Table 1. Interpretation of grades with ENA.

Low: A nice requirement to have, whose presence is desirable, whereas its absence does not affect the level of satisfaction.
Medium: A fundamental requirement whose presence will be a cause for greater satisfaction, whereas its absence will be a cause for greater dissatisfaction.
High: A crucial requirement that must be present, whereas its absence makes the product unacceptable.
Low–Medium: A requirement whose importance can be precisely assessed neither as Low nor as Medium but may lie in between Low and Medium.
Medium–High: A requirement whose importance can be precisely assessed neither as Medium nor as High but may lie in between Medium and High.
Low–High: A requirement whose importance the assessor is completely unsure of; it may lie anywhere between Low and High.
Assignment (ENA), a tailored version of Numerical Assignment (NA) that acknowledges uncertainty (Voola & Vinaya Babu, 2013). A controlled experiment was conducted with a closer look at three RPTs: Numerical Assignment, the Analytic Hierarchy Process (AHP) and Extensive Numerical Assignment. The aim was to understand the capability of ENA in dealing with the uncertain and attitudinal nature of human judgment by collecting objective and subjective measures such as time consumption, usability, attractiveness, reprioritizability and scalability. The experiment was conducted with students assessing the importance of requirements for our university website system; the list of requirements used in the experiment is provided in Appendix A. ENA, based on an interval scale, emerged as an efficient RPT, modelling the imprecise and uncertain nature of human judgment very closely and with reasonable effort.

ENA is modelled as a Multiple Attribute Decision Making (MADM) problem incorporating subjectivity and uncertainty, where assessments of requirements' importance are expressed using belief functions. Several conventional methods for solving an MADM problem, such as AHP (Saaty, 2008), Simple Additive Weighting (SAW) (Hwang & Yoon, 1981) and ELECTRE (Roy, 1991), are available in the literature but do not incorporate an uncertainty element. Multiple Attribute Utility Theory (MAUT), however, is a conventional method capable of solving an MADM problem with or without uncertainty (Keeney, 1972). Solving an MADM problem incorporating uncertainty became familiar with the introduction of the Evidential Reasoning (ER) approach (Yang & Xu, 2002), a nonlinear aggregation algorithm in which belief function aggregation is based on the combination rule of the Dempster–Shafer theory of evidence. ER has been extended to incorporate interval uncertainty, and the extension is named Interval Evidential Reasoning (IER); IER, however, demands complex processing. Laplace Evidential Reasoning (LER) is introduced by the authors; it uses the Laplace principle of insufficient reason to simplify the complex aggregation process of IER. MAUT, IER and LER are all compatible aggregation algorithms for inputs in the form of belief functions obtained using ENA.

The objective of this paper is to examine how the requirements' assessments obtained using ENA can be aggregated using MAUT, IER and LER to produce rigorous and reliable results in the presence of conflicting personal preferences among assessors. MAUT is a linear aggregation algorithm, whereas IER and LER are nonlinear. The algorithms are discussed in the context of the RP problem used in the experiment. Section 2 gives a brief introduction to ENA. The description, operation and results produced by the three aggregation algorithms MAUT, IER and LER are discussed in Section 3. A comparison of the results obtained with the three is presented in Section 4. Section 5 presents the conclusion and future work.

2. Description of ENA

NA is a simple, easy to understand and widely used RP technique in which a requirement's priority is assigned precisely to one
of the assessment grades Low, Medium and High. In reality, however, a requirement's priority cannot be assessed precisely, for the several reasons discussed in the Introduction. With this drive, NA is made extensive in order to incorporate the inherent imprecision of requirements' priorities; the enhanced technique is called ENA. With ENA, imprecision is expressed by means of probability, which is a measure of the degree of belief in the assessment made. ENA allows assessors to express uncertainty using a probability distribution across individual and interval assessment grades. This notion of expressing imprecision is derived from earlier studies (Hampton, Moore, & Thomas, 1973; Moisiadis, 2002; Nguyen, Kreinovich, & Zuo, 1997). The assessment grades chosen vary from application to application and must be mutually exclusive and collectively exhaustive. Following this, the grades and interpretations shown in Table 1 were chosen and communicated to the assessors of requirements' priorities. The individual and interval grades shown in Table 1, together with probabilities, facilitate expressing uncertainty, ignorance and incompleteness. A requirement can now be assessed in the form of the belief function
{(L, l%), (M, m%), (H, h%), (LM, lm%), (MH, mh%), (LH, lh%)}    (1)
where l, m and h are the degrees of belief associated with the individual grades L (Low), M (Medium) and H (High), respectively, and lm, mh and lh are the degrees of belief associated with the interval grades LM (Low–Medium), MH (Medium–High) and LH (Low–High), respectively. The degrees of belief l, m, h, lm, mh and lh are probability measures, and their sum must be 100 for each requirement. Suppose a requirement R1 is assessed to the grade High with 80% belief and the remaining 20% to the grade Medium–High; this is represented as R1: {(High, 80%), (Medium–High, 20%)}. Suppose another requirement R2 is assessed to the grade Low with 70% belief while for the remaining 30% the assessor is unsure; this is represented as R2: {(Low, 70%), (Low–High, 30%)}. If the assessor is completely ignorant of the importance of a requirement, it can be assessed to the grade Low–High with 100% belief. The assessments of all requirements carried out in this manner can be arranged in the form of a matrix, whose data can be aggregated using IER (Xu, Yang, & Wang, 2006). The ENA based assessments collected during the execution of the experiment are provided in Appendix B; these data are used as input for the algorithms.

A total of 8 students of the second and third year of the Master of Computer Applications course, in a 1:1 ratio, were selected as representative participants of the experiment. Representative participation was opted for on the basis of earlier experiments of similar context (Xu et al., 2006; Berander, 2016; Chin, Wang, Yang, & Poon, 2009) and the availability of participants. The prerequisite for the aggregation algorithms is that the relative weights sum to one, i.e., ∑_{i=1}^{N} w_i = 1, where N is the number of assessors selected for assessing the relative importance of the software requirements and each assessor is assigned a relative weight w_i > 0 (i = 1, ..., N). The second year students are assigned a relative weight of 0.4 and the third year students a relative weight of 0.6; this is done subjectively, with the assumption that third year students do better than second years, in the sense that they have some practical experience earned as part of their course curriculum in the previous year.
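To make the data format concrete, the following is a minimal sketch of how an ENA belief function of the form (1) might be represented and validated in code. The dictionary layout and the helper name are our own illustration; the paper itself prescribes only the notation of Eq. (1).

```python
# A sketch of an ENA assessment as a mapping from grade to degree of belief
# (in percent, as in Eq. (1)). GRADES and validate_assessment are our own
# illustrative names, not from the paper.

GRADES = ("L", "M", "H", "LM", "MH", "LH")  # individual + interval grades

def validate_assessment(assessment: dict) -> None:
    """Check that a belief function uses known grades and sums to 100."""
    for grade in assessment:
        if grade not in GRADES:
            raise ValueError(f"unknown grade: {grade}")
    total = sum(assessment.values())
    if abs(total - 100) > 1e-9:
        raise ValueError(f"belief degrees sum to {total}, expected 100")

# R2 from the running example: 70% Low, 30% completely unsure (Low-High).
r2 = {"L": 70, "LH": 30}
validate_assessment(r2)
```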
The participation of students in the experiment is justified, as there is no significant difference between involving industrial professionals and students (Host, Regnell, & Wohlin, 2000); it has also been noted that experimental results with students as participants agree with those of professionals in similar experiments (Tichy, 2000).

3. Study of aggregation algorithms

This section presents the operation of, and the results obtained with, the three aggregation algorithms MAUT, IER and LER in the context of the RP problem used in the experiment.

3.1. Results of aggregation using Multiple Attribute Utility Theory

MAUT is a linear aggregation algorithm that is capable of considering quantitative and qualitative attributes (Keeney, 1972). It is based on expected utility theory as stated by French (1988) and Von Winterfeldt and Edwards (1986), and it uses the additive form of a value function to compute the utility value. The collected preferences for alternatives along each criterion are aggregated to give the overall preference of each alternative, i.e., a single utility value is produced out of the distributed assessment data. A number of applications using MAUT have been surveyed to understand the range of problems addressed by the method (Keeney, 1972). In the present context, MAUT is employed to rank software requirements by aggregating the uncertain assessments of requirements' priorities obtained using ENA. The strong constraint imposed by MAUT to ensure preference independence is satisfied by the problem under study.

It is assumed that the utilities of the grades are placed equidistantly in the normalized space. The three assessment grades are then quantified as

u(L) = 0.33,  u(M) = 0.66,  u(H) = 1    (2)

where u(L), u(M) and u(H) represent the utilities of the grades Low, Medium and High, respectively. As intervals are present in the assessment grades, the minimum, maximum and average expected utilities are computed for each requirement R_i using (3)–(5); this process is repeated for all the requirements' assessments provided by all the assessors. If the uncertainty turns out to be against the assessed requirement, every interval grade collapses to its lowest constituent grade and the minimum utility value is calculated as

u_min(R_i) = u(L) × l% + u(M) × m% + u(H) × h% + u(L) × lm% + u(M) × mh% + u(L) × lh%    (3)

If the uncertainty turns out to be favorable to the assessed requirement, every interval grade collapses to its highest constituent grade and the maximum utility value is calculated as

u_max(R_i) = u(L) × l% + u(M) × m% + u(H) × h% + u(M) × lm% + u(H) × mh% + u(H) × lh%    (4)

The average of the minimum and maximum utility values is computed using Eq. (5):

u_avg(R_i) = (u_min(R_i) + u_max(R_i)) / 2    (5)

The assessment data of Appendix B is used to compute the minimum, maximum and average utilities of the requirements using (2)–(5). The computed average utility values given in Table 2 are summed separately for each group of assessors; the weighted sums are averaged, resulting in the aggregated utility value for each requirement, also given in Table 2. These values, arranged in sorted order, give the ranks of the requirements.

Table 2. Results of aggregation using MAUT. Columns A1–A4 are the average utility values (×100) of the assessors with weight 0.4; columns A5–A8 those of the assessors with weight 0.6.

Req. id   A1      A2      A3      A4      A5      A6      A7      A8      Aggregated utility value
R1       100      96.6    93.2    96.6    37.95   96.6    84.85  100      0.8655
R2       100     100     100     100      96.6    89.8    69.45  100      0.9338
R3        69.4    89.9    69.4    79.8    72.85   83      66.2   100      0.7916
R4        61.05  100     100      79.8    53      93.2    93.3    71.1    0.8068
R5        46.2    48      62.7    69.6    39.7    36.3    49.75   42.9    0.4795
R6        59.4    83      72.8    69.6    74.5    37.95   74.5    69.4    0.6693
R7        69.4    79.6    66.1   100      33      69.5    49.75   72.8    0.6527
R8        44.55  100      46.3    33      36.3    46.2    63      46.2    0.5114
R9       100      73.2    66      86.4    36.3    76.2    74.5   100      0.7561
R10      100      66     100      86.4    57.75   84.7    66.15   94.9    0.8077
R11      100      98.3   100      83.1    59.4    93.2    89.8   100      0.8950
R12       98.3    66      96.65   83      93.2    98.3    96.6    96.6    0.9210
R13       89.9    94.9   100     100      72.8    62.9    53      98.3    0.8153
R14       49.75   53      72.8    46.2    43.05   61.15   33      63      0.5221
R15       62.7    96.65   66      79.6    94.9    93.2    49.75   70.25   0.7671

MAUT is very simple in terms of computation. On the other hand, it has to be used with care, as it requires the participating attributes to satisfy the additive or preferential independence condition (Keeney & Raiffa, 1993). If X and Y are two attributes, attribute X is said to be preferentially independent of Y if preferences for specific outcomes of X do not depend on the level of attribute Y. This is not easy to check, especially when there are more than three attributes in the problem at hand. Further, during the computation of the utility value a compensation effect may appear, where a linear combination of weights may dominate an optimal alternative (Vetschera, 2006).
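To make the computation concrete, here is a minimal sketch of Eqs. (2)–(5) and of the weighted group aggregation behind Table 2, assuming the dictionary representation introduced above with belief degrees as fractions. All names are our own illustration, not from the paper.

```python
# GRADE_UTILITY encodes Eq. (2). Against the requirement, each interval
# collapses to its lowest constituent grade (Eq. (3)); in its favor, to the
# highest (Eq. (4)).

GRADE_UTILITY = {"L": 0.33, "M": 0.66, "H": 1.0}
LOWEST = {"L": "L", "M": "M", "H": "H", "LM": "L", "MH": "M", "LH": "L"}
HIGHEST = {"L": "L", "M": "M", "H": "H", "LM": "M", "MH": "H", "LH": "H"}

def maut_utilities(assessment: dict) -> tuple:
    """Return (u_min, u_max, u_avg) for one assessment, Eqs. (3)-(5)."""
    u_min = sum(GRADE_UTILITY[LOWEST[g]] * b for g, b in assessment.items())
    u_max = sum(GRADE_UTILITY[HIGHEST[g]] * b for g, b in assessment.items())
    return u_min, u_max, (u_min + u_max) / 2

def aggregate(avg_a: list, avg_b: list, w_a: float = 0.4, w_b: float = 0.6) -> float:
    """Weight the mean average utility of the two assessor groups."""
    mean = lambda xs: sum(xs) / len(xs)
    return w_a * mean(avg_a) + w_b * mean(avg_b)
```

As a check against the published numbers: for R1 the group averages from Table 2 are 0.966 (A1–A4) and 0.7985 (A5–A8), and 0.4 × 0.966 + 0.6 × 0.7985 = 0.8655, which matches the aggregated utility value of R1 in Table 2.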
3.2. Results of aggregation using Interval Evidential Reasoning

Interval Evidential Reasoning is an extension of Evidential Reasoning (ER), which is based on the Dempster–Shafer theory of evidence for belief degree aggregation (Yang, 2001; Shafer, 1976; Yang & Xu, 2002). The essential difference between IER and ER is the way ignorance is modelled: the ER algorithm allows assessments to individual grades only, and any ignorance present is assigned to the complete set of grades, called global ignorance. ER and its variants have been used in a variety of applications such as motorcycle assessment (Yang & Singh, 1994), marine system safety analysis and synthesis (Wang, Yang, & Sen, 1995), executive car assessment (Yang & Xu, 1998), environmental impact assessment (Wang, Yang, & Xu, 2006), organizational self-assessment (Yang, Dale, & Siow, 2001), best contractor selection (Sonmez, Yang, & Holt, 2001), self-assessment (Xu & Yang, 2003), best student performer (Xu et al., 2006), cargo ship selection (Wang, Yang, Xu, & Chin, 2006) and best electrical appliance
(Chin, Yang, Guo, & Lam, 2008). In all these applications, ER and its variants produced consistent and reliable results. As experience shows that assessors are more comfortable providing assessments to subsets of adjacent grades rather than to individual grades only, IER (Xu et al., 2006) was developed for the modelling and subsequent processing of such assessments; ignorance in IER is quantified as local ignorance or interval uncertainty. The working of IER is discussed at length, with an example, in Voola and Vinaya Babu (2012). A brief description of IER applied in the context of the RP problem follows.

Let the set of individual and interval grades for assessment be represented as G = {L, M, H, LM, MH, LH}, and let A_j and A_k represent the two groups of assessors. The consolidated assessments of a requirement R_i by A_j and A_k are represented as given in (6) and (7):
A_j(R_i) = {(L, l_j%), (M, m_j%), (H, h_j%), (LM, lm_j%), (MH, mh_j%), (LH, lh_j%)}    (6)

A_k(R_i) = {(L, l_k%), (M, m_k%), (H, h_k%), (LM, lm_k%), (MH, mh_k%), (LH, lh_k%)}    (7)

Let w_j and w_k be the normalized weights of A_j and A_k. Basic probability masses are computed as the product of the assessor weight and the degree of belief corresponding to each grade. These are placed along the first row and the first column of Table 3 for A_j and A_k, respectively. The last element of the first row denotes the remaining probability mass that is to be assigned depending on the relevance of the other assessors; the last element of the first column is interpreted in the same way.

Table 3. Combined probability masses using IER. The cell in row m_k• and column m_j• contains the product m_j•·m_k•, assigned to the grade shown in braces (ϕ denotes the empty set).

                     m_jl = w_j·l_j   m_jm = w_j·m_j   m_jh = w_j·h_j   m_jlm = w_j·lm_j   m_jmh = w_j·mh_j   m_jlh = w_j·lh_j   m_jG = 1−w_j
m_kl = w_k·l_k           {L}              {ϕ}              {ϕ}              {L}                {ϕ}                {L}                {L}
m_km = w_k·m_k           {ϕ}              {M}              {ϕ}              {M}                {M}                {M}                {M}
m_kh = w_k·h_k           {ϕ}              {ϕ}              {H}              {ϕ}                {H}                {H}                {H}
m_klm = w_k·lm_k         {L}              {M}              {ϕ}              {LM}               {M}                {LM}               {LM}
m_kmh = w_k·mh_k         {ϕ}              {M}              {H}              {M}                {MH}               {MH}               {MH}
m_klh = w_k·lh_k         {L}              {M}              {H}              {LM}               {MH}               {LH}               {LH}
m_kG = 1−w_k             {L}              {M}              {H}              {LM}               {MH}               {LH}               {G}

The combined probability masses for the individual and interval grades are calculated from Table 3 by summing all the probability mass elements assigned to each grade:

C_L = [1/(1−f)] [m_jl·m_kl + m_jlm·m_kl + m_jlh·m_kl + m_jG·m_kl + m_jl·m_klm + m_jl·m_klh + m_jl·m_kG]    (8)

C_M = [1/(1−f)] [m_jm·m_km + m_jlm·m_km + m_jmh·m_km + m_jlh·m_km + m_jG·m_km + m_jm·m_klm + m_jmh·m_klm + m_jm·m_kmh + m_jlm·m_kmh + m_jm·m_klh + m_jm·m_kG]    (9)

C_H = [1/(1−f)] [m_jh·m_kh + m_jmh·m_kh + m_jlh·m_kh + m_jG·m_kh + m_jh·m_kmh + m_jh·m_klh + m_jh·m_kG]    (10)

C_LM = [1/(1−f)] [m_jlm·m_klm + m_jlh·m_klm + m_jG·m_klm + m_jlm·m_klh + m_jlm·m_kG]    (11)

C_MH = [1/(1−f)] [m_jmh·m_kmh + m_jlh·m_kmh + m_jG·m_kmh + m_jmh·m_klh + m_jmh·m_kG]    (12)

C_LH = [1/(1−f)] [m_jlh·m_klh + m_jG·m_klh + m_jlh·m_kG]    (13)

The probability mass left unassigned in G is given by

C_G = [1/(1−f)] m_jG·m_kG    (14)

where f is the combined probability mass assigned to the empty set ϕ, as shown in (15):

f = m_jm·m_kl + m_jh·m_kl + m_jmh·m_kl + m_jl·m_km + m_jh·m_km + m_jl·m_kh + m_jm·m_kh + m_jlm·m_kh + m_jh·m_klm + m_jl·m_kmh    (15)

The scaling factor 1/(1−f) ensures that

C_L + C_M + C_H + C_LM + C_MH + C_LH + C_G = 1    (16)

Eqs. (8)–(13) give the final combined probability masses. The aggregated belief degrees are calculated from these values by assigning C_G back to all individual and interval grades proportionally, as given in (17):

l = C_L/(1−C_G),  m = C_M/(1−C_G),  h = C_H/(1−C_G),  lm = C_LM/(1−C_G),  mh = C_MH/(1−C_G),  lh = C_LH/(1−C_G)    (17)

The aggregated assessment of R_i by A_j and A_k together is now expressed as shown in (18) with the combined belief degrees of (17):

A(R_i) = {(L, l%), (M, m%), (H, h%), (LM, lm%), (MH, mh%), (LH, lh%)}    (18)

The above process is repeated for all the requirements; the results are shown in Table 4.

Table 4. Combined belief degrees using IER.

Req. id   L        M        H        LM       MH       LH
R1        0.0933   0.0258   0.7616   0.0799   0.0393   0.0000
R2        0.0000   0.1157   0.8071   0.0129   0.0643   0.0000
R3        0.0140   0.3794   0.3593   0.0673   0.1665   0.0133
R4        0.0911   0.2187   0.5222   0.0454   0.0952   0.0272
R5        0.5278   0.0873   0.0000   0.2290   0.0652   0.0904
R6        0.1004   0.5370   0.0408   0.0880   0.2335   0.0000
R7        0.2310   0.4221   0.1212   0.0000   0.1408   0.0846
R8        0.5463   0.0796   0.0726   0.2068   0.0945   0.0000
R9        0.1242   0.3325   0.3856   0.0310   0.0776   0.0487
R10       0.0000   0.4035   0.4441   0.0692   0.0415   0.0415
R11       0.0453   0.2862   0.3554   0.2202   0.0863   0.0062
R12       0.0000   0.0817   0.7293   0.0000   0.1833   0.0056
R13       0.1314   0.2014   0.5613   0.0125   0.0930   0.0000
R14       0.5520   0.2034   0.0165   0.0000   0.1533   0.0745
R15       0.0721   0.3959   0.3647   0.0144   0.0768   0.0759

The combined belief degrees shown in Table 4 are not sufficient to produce ranks; the data has to be transformed into utility values using (3)–(5). The minimum, maximum and average values thus obtained are shown in Table 5.
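Before turning to those results, here is a sketch of the IER combination step for two weighted assessments, following Table 3 and Eqs. (8)–(18); for more than two assessors the rule can be applied recursively. All names are our own illustration, and belief degrees are expressed as fractions rather than percentages.

```python
# SPAN maps each grade (plus the unassigned remainder G) to the individual
# grades it covers; the intersection of two spans determines the grade cell
# of Table 3, with an empty intersection contributing to the conflict mass f.

SPAN = {"L": {"L"}, "M": {"M"}, "H": {"H"},
        "LM": {"L", "M"}, "MH": {"M", "H"}, "LH": {"L", "M", "H"},
        "G": {"L", "M", "H"}}  # G carries the unassigned remainder 1 - w

LABEL = {frozenset({"L"}): "L", frozenset({"M"}): "M", frozenset({"H"}): "H",
         frozenset({"L", "M"}): "LM", frozenset({"M", "H"}): "MH",
         frozenset({"L", "M", "H"}): "LH"}

def ier_combine(a_j: dict, w_j: float, a_k: dict, w_k: float) -> dict:
    """Combine two assessments into aggregated belief degrees, Eq. (18)."""
    m_j = {g: w_j * b for g, b in a_j.items()}
    m_j["G"] = 1.0 - w_j
    m_k = {g: w_k * b for g, b in a_k.items()}
    m_k["G"] = 1.0 - w_k
    combined, f = {}, 0.0                      # f: conflict mass, Eq. (15)
    for gj, mj in m_j.items():
        for gk, mk in m_k.items():
            inter = SPAN[gj] & SPAN[gk]
            if not inter:
                f += mj * mk                   # product falls on the empty set
            elif gj == "G" and gk == "G":
                combined["G"] = combined.get("G", 0.0) + mj * mk  # Eq. (14)
            else:
                lbl = LABEL[frozenset(inter)]  # grade cell of Table 3
                combined[lbl] = combined.get(lbl, 0.0) + mj * mk
    combined = {g: v / (1.0 - f) for g, v in combined.items()}  # Eqs. (8)-(16)
    c_g = combined.pop("G", 0.0)
    return {g: v / (1.0 - c_g) for g, v in combined.items()}    # Eq. (17)

# Example: combine two assessors' views of one requirement.
# ier_combine({"H": 0.8, "MH": 0.2}, 0.4, {"L": 0.7, "LH": 0.3}, 0.6)
```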
Table 5. Minimum, maximum and aggregated utilities using IER.

Req. id   Minimum   Maximum   Aggregated utility value
R1        0.8617    0.9015    0.8816
R2        0.9302    0.9563    0.9432
R3        0.7509    0.8387    0.7948
R4        0.7835    0.8491    0.8163
R5        0.3804    0.5388    0.4596
R6        0.6116    0.7201    0.6659
R7        0.5970    0.7016    0.6493
R8        0.4362    0.5366    0.4864
R9        0.7238    0.7931    0.7584
R10       0.7745    0.8393    0.8068
R11       0.6911    0.7974    0.7443
R12       0.9061    0.9722    0.9392
R13       0.8033    0.8391    0.8212
R14       0.4589    0.5609    0.5098
R15       0.7304    0.8121    0.7713

The average values are the aggregated utility values; arranged in sorted order, they give the ranks of the requirements. The minimum and maximum utility values also assist in determining which requirement is more important than another, and to what extent (Xu et al., 2006). Suppose the minimum and maximum utilities of two requirements R1 and R2 are denoted by the intervals [a, b] and [c, d], such that a ≤ b and c ≤ d, where a, b, c and d are all positive. The extent to which [a, b] ≥ [c, d] is calculated using Eq. (19):

P(R1 > R2) = (max(0, b − c) − max(0, a − d)) / ((b − a) + (d − c))    (19)
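A small sketch of the interval preference degree of Eq. (19) follows; the function name is our own. It returns the extent to which the utility interval [a, b] of one requirement dominates the interval [c, d] of another.

```python
def preference_degree(a: float, b: float, c: float, d: float) -> float:
    """P([a, b] >= [c, d]) per Eq. (19); assumes a <= b, c <= d, b-a+d-c > 0."""
    return (max(0.0, b - c) - max(0.0, a - d)) / ((b - a) + (d - c))

# Example with the IER utility intervals of R12 and R2 from Table 5:
# preference_degree(0.9061, 0.9722, 0.9302, 0.9563) gives about 0.455, i.e.,
# R12 is preferred to R2 to degree 0.455 (and R2 to R12 to degree 0.545).
```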
IER has become prominent for the rich set of outputs, in the form of combined belief degrees, that it is capable of producing. The concern, however, is the complicated processing that its aggregation demands. Hence, to simplify the aggregation process, Laplace's principle of insufficient reason is introduced, as discussed in the following section.

3.3. Results of aggregation using Laplace Evidential Reasoning (LER)

The strength of MAUT is its simplicity of calculation. On the other hand, apart from average values it does not provide any additional information for analysis, such as the diversity of assessments, combined belief degrees, or the probability by which one requirement is more important than another. The strength of IER is its ability to provide a rich set of outputs that can be used for further analysis, but its weakness is its complex aggregation process. Hence, as a trade-off, the Laplace principle of insufficient reason (Laplace, 1902) is brought in to simplify the aggregation process of IER while still providing a rich set of outputs for analysis. The modified algorithm, LER, is a simplified version of IER.

3.3.1. Laplace principle of insufficient reason

The principle of insufficient reason was renamed the principle of indifference by Keynes (1921), with a note on its applicability. The principle can be conveniently applied in the absence of knowledge indicating unequal probabilities. Savage (1972) mentioned that a probabilistic assessment requires the assessor to have information about the probability of all events. If this is not available, the equal distribution function is popularly used, supported by Laplace's principle of insufficient reason: in a given sample space, if the probability distribution of the simple events is not known, equal probabilities can be assigned, each simple event receiving probability 1/n for n > 1 mutually exclusive and collectively exhaustive events (Sentz & Ferson, 2002). Consider, for example, three possible components
A, B and C that may have caused a system failure. An expert assigns a probability of 0.4 to component A as the source of failure but knows nothing about the other two possible sources, B and C. Applying the principle of insufficient reason, a probability of 0.3 can be assigned to each of B and C.

Applying the Laplace principle to the data obtained with ENA, the degrees of belief associated with interval grades are distributed in equal proportion to the individual grades participating in that interval. As LM and MH each span two grades, the belief degree associated with each of them is split into two equal parts and added accordingly; as LH spans three grades, it is divided into three equal parts and added accordingly. Hence, the ENA based assessment collected in the form of (1) is reduced to

{(L, Ll%), (M, Lm%), (H, Lh%)}    (20)

where Ll, Lm and Lh denote the Laplace belief degrees associated with the individual grades L, M and H, respectively. Thus all belief degrees assigned to interval grades are reduced to belief degrees on individual grades using (21)–(23):

Ll = l + lm/2 + lh/3    (21)

Lm = m + lm/2 + mh/2 + lh/3    (22)

Lh = h + mh/2 + lh/3    (23)
Now the customized assessments in the form of (20), given in Appendix C, are the inputs for aggregation using LER. The calculations are reduced by a significant number, since the combined probability masses now need to be computed only for the individual grades, as shown in Table 6. These computational requirements are even lower than those of ER (Yang & Singh, 1994), since the global ignorance modelled in ER is also eliminated. The subsequent calculations of combined belief degrees and average utilities are reduced accordingly.

Table 6. Combined probability masses using LER. The cell in row m_k• and column m_j• contains the product m_j•·m_k•, assigned to the grade shown in braces (ϕ denotes the empty set).

                  m_jl = w_j·l_j   m_jm = w_j·m_j   m_jh = w_j·h_j   m_jG = 1−w_j
m_kl = w_k·l_k        {L}              {ϕ}              {ϕ}             {L}
m_km = w_k·m_k        {ϕ}              {M}              {ϕ}             {M}
m_kh = w_k·h_k        {ϕ}              {ϕ}              {H}             {H}
m_kG = 1−w_k          {L}              {M}              {H}             {G}

The results of aggregation using LER are shown in Table 7.

Table 7. Results of aggregation using LER.

Req. id   L        M        H        Aggregated utility value
R1        0.1357   0.0820   0.7821   0.8811
R2        0.0065   0.1576   0.8357   0.9419
R3        0.0544   0.4906   0.4548   0.7966
R4        0.1244   0.2909   0.5846   0.8177
R5        0.6845   0.2495   0.0659   0.4565
R6        0.1540   0.6833   0.1625   0.6644
R7        0.2750   0.5112   0.2136   0.6418
R8        0.6535   0.2245   0.1219   0.4857
R9        0.1560   0.4012   0.4426   0.7590
R10       0.0505   0.4724   0.4770   0.8055
R11       0.0337   0.2087   0.7574   0.9063
R12       0.0020   0.1741   0.8238   0.9394
R13       0.1410   0.2537   0.6051   0.8191
R14       0.5890   0.2944   0.1164   0.5052
R15       0.1087   0.4558   0.4353   0.7721

A rational aggregation process has to satisfy the following four synthesis axioms, as stated by Yang and Xu (2002).
Axiom 1: if none of the assessments undergoing aggregation is evaluated to a particular grade, then the combined assessment should not be assessed to that grade.
Axiom 2: if all the assessments undergoing aggregation are evaluated precisely to a particular grade, then the combined assessment should be assessed precisely to the same grade.
Axiom 3: if all the assessments undergoing aggregation are completely assessed to a subset of grades, then the combined assessment should be assessed to the same subset.
Axiom 4: if the assessments undergoing aggregation are incomplete, then the combined assessment should also express incompleteness.

Axioms 1 and 2 are satisfied in the case of LER. Axioms 3 and 4 are not applicable, as subsets of grades and incomplete assessments are reduced to precise assessments on individual grades. Hence, LER is a rational aggregation process and can be used to generate reliable results.

4. Comparison of results obtained using aggregation algorithms

The aggregated utility values of Tables 2, 5 and 7 are ranked as shown in Table 8.

Table 8. Comparison of priorities obtained using MAUT, IER and LER.

Req. id   MAUT   IER   LER
R1          4     4     4
R2          1     1     1
R3          8     8     8
R4          7     6     6
R5         15    15    15
R6         11    11    11
R7         12    12    12
R8         14    14    14
R9         10    10    10
R10         6     7     7
R11         3     3     3
R12         2     2     2
R13         5     5     5
R14        13    13    13
R15         9     9     9

It is clear from Table 8 that IER and LER produce exactly the same rankings. This indicates that the use of Laplace's principle of insufficient reason is a significant improvement over the original IER: LER demands a relatively small number of computations compared with IER and still provides the same results. The ranks produced by MAUT are in line with those of IER and LER except for R4 and R10. This does not matter much, as RP is not the problem of finding the single best requirement in a set; it is, rather, the problem of finding an optimal subset of requirements out of a larger set for implementation.

The aggregated utility values of the requirements obtained using MAUT, IER and LER are plotted in Fig. 1; the three lines depicting the aggregated utility values are nearly in complete coincidence with one another. Hence, the decision of choosing among MAUT, IER and LER is influenced not by the order of the rankings, but by an understanding of the three algorithms in terms of input requirements, processing complexity and richness of outputs, as explained below.

[Fig. 1. Comparison of aggregated utility values obtained using MAUT, IER and LER: line plot of the aggregated utility values of R1–R15 from Tables 2, 5 and 7.]

Input: the basic input requirements for MAUT, IER and LER are the same, except that for LER the inputs need to be customized before use.

Processing: the processing requirements are simple for MAUT, whereas IER demands complex processing; the processing requirements of LER lie in between. The processing complexity of the three algorithms is studied with respect to the number of grades n to which the requirements are assessed. Let g_i denote the set of individual grades, G_i the set of individual and interval grades obtained from g_i, N(g_i) the number of individual grades, and N(G_i) the number of individual and interval grades. The processing complexity of MAUT is O(n), as its computations are based on simple additive weighting, which is directly proportional to n. It should be noted from Tables 3 and 6 that the computation of probability masses with IER and LER is proportional to the square of N(G_i). With IER, every addition of an assessment grade to N(g_i) results in an exponential increase of N(G_i), i.e., N(G_i) = 3 for N(g_i) = 2 and N(G_i) = N(g_i) + N(G_{i−1}) for i = 3, ..., m; hence the processing complexity of IER is O(2^n). With LER, adding an assessment grade to N(g_i) does not inflate N(G_i), since N(G_i) = N(g_i) for i = 2, ..., m, and the subsequent computations of probability masses remain directly proportional to N(G_i); thus the processing complexity of LER is O(n^2).

Output: outputs play a significant role in decision making and analysis, so they should be properly understood and utilized. MAUT outputs the aggregated utility value as a single numerical value for each requirement, as shown in Table 2. It does not show the diversity of the original assessments in the form of combined belief degrees.
On the other side, IER and LER provide a rich set of outputs in the form of aggregated distributed assessments, as shown in Tables 4 and 7, which may be useful in obtaining consensus among assessors. The aggregated distributed assessments of IER further provide preference information, i.e., the extent to which one requirement is preferred over another.

5. Conclusion and future work

MAUT, IER and LER can be practically applied by requirements engineers to aggregate ENA based requirements' assessments. All three algorithms are capable of handling complete and/or incomplete assessments in a consistent manner; hence even an inexperienced assessor can produce realistic values. MAUT is a linear aggregation algorithm, whereas IER and LER are nonlinear. IER is an extension of ER capable of accommodating the uncertainty caused by interval valued evaluations. LER is introduced to simplify the complex processing requirements of IER; its processing complexity is O(n^2), which is less than that of IER, O(2^n). All three algorithms are efficient in the sense that they provide rational and reliable results, yielding the best possible solution for all the assessors in the presence of competing personal preferences. Hence, any one of the algorithms can be adopted by the requirements engineer. However, LER has the potential to emerge as a competent aggregation algorithm compared with MAUT and IER because of its reasonable processing requirements relative to IER and its ability to produce a rich set of outputs relative to MAUT. The following two subsections present future directions for interested researchers, which demand considerable attention and further effort.

5.1. Tool for integrating ENA with LER

A cost effective web based tool has to be developed integrating ENA for the collection of assessments with LER for their aggregation. Such a tool would offer the following benefits, which facilitate its widespread adoption.

Accessibility: the tool makes it possible for geographically distributed as well as co-located stakeholders to provide assessments from any place via the web. The tool must provide a user friendly interface accommodating both quantitative and qualitative information, and it must be time efficient, scalable and reliable.

Flexibility: the tool must be flexible enough to accommodate precise and/or imprecise data and must be capable of assessing priorities over a single attribute or multiple attributes relevant to the problem under consideration. It must be able to operate on both positively and negatively oriented attributes and to provide different sets of assessment grades for different attributes. The feasibility of interval belief degrees for modelling uncertainty, in addition to the already existing interval grades, has to be explored.

Scalability: the scalability feature has to be investigated both in terms of user friendliness and computational efficiency (Kotonya & Sommerville, 1997). User friendliness here means the ability of the RPT to handle a growing number of requirements in a graceful manner. Experimental results confirm that ENA is scalable with respect to the number of requirements assessed, but scalability with respect to computational efficiency as the number of assessors increases has not yet been addressed.
The experiment in this paper involved 8 participants organized into 2 groups, keeping in view other studies and the availability of participants. How an increase in the number of participants affects the results has to be studied further.
Sensitivity analysis: sensitivity analysis explores various what-if scenarios to support better decisions. Sensitivity analysis of the aggregated data is helpful in achieving consensus among assessors. The priorities of the requirements, accompanied by the aggregated values of the distributed assessment data, the relative importance of one requirement over another, and the selected and discarded lists of requirements for implementation, are communicated to the assessors. If consensus is achieved among the assessors, the list can be carried forward to the subsequent activities of implementation. If all efforts at achieving consensus have failed, it makes sense to perform sensitivity analysis in order to settle on an acceptable solution: assessors are assigned varying weights to check how the outcomes are affected, and the process can be repeated in a systematic way until a consensus is attained. Finally, a coherent set of requirements agreed upon by all the assessors has to be identified for implementation. If an optimal set of requirements cannot be materialized even after all these efforts, redoing the assessment of the requirements should be considered. The whole process has to be understood in a practical environment.

5.2. Prioritization of quality requirements

This work has primarily focused on the prioritization of functional requirements. Non-functional requirements, also called quality requirements, are critical to the success of a project. It is important to find the right balance among competing quality requirements, as they amount to a share of 5–10% of the total number of functional requirements (Svensson et al., 2011). Several taxonomies of quality attributes give an idea of what they are (Dromey, 1995; McCall, Richards, & Walters, 1977); for example, FURPS identifies Functionality, Usability, Reliability, Performance and Supportability as quality attributes (Pressman, 2001). The terms themselves suggest that such attributes are more conveniently expressed through subjective rather than objective assessments.

Appendix A. List of requirements – University website system

R1: scroll news of latest events like results declared, conferences, seminars and any other items.
R2: information about the university, departments, courses and the admission procedure.
R3: information about syllabi, the academic calendar and class work timetables, and evaluation information like the credit system, continuous assessment and a CGPA/SGPA to marks converter.
R4: check the availability of text books in the library and get access to subscribed e-journals and books.
R5: opt for CBCS online.
R6: get result data analysis for the exams appeared so far, in both textual and graphical formats.
R7: online query/feedback to the teacher or any part of the administration.
R8: chat rooms for students.
R9: search facility with a keyword.
R10: email facility.
R11: SMS alerts to mobile.
R12: online submission of the examination application, payment of exam fees and e-hall ticket download.
R13: to stand against traffic, indexing of results in multiple sites.
R14: download application on smart phone.
R15: get marks memos through a dedicated terminal.
Appendix B. Assessment data by the participants

Assessment data by the participants with relative weight 0.4:

Req. id   A1                   A2                   A3                          A4
R1        (H,1)                (H,0.8)(MH,0.2)      (M,0.2)(H,0.8)              (M,0.1)(H,0.9)
R2        (H,1)                (H,1)                (H,1)                       (H,1)
R3        (M,0.8)(MH,0.2)      (H,0.8)(LM,0.2)      (M,0.8)(MH,0.2)             (L,0.2)(M,0.2)(H,0.6)
R4        (M,0.7)(LM,0.3)      (H,1)                (H,1)                       (H,0.6)(LM,0.4)
R5        (L,0.2)(LM,0.8)      (L,0.7)(MH,0.3)      (M,0.8)(LM,0.2)             (LM,0.4)(MH,0.6)
R6        (M,0.6)(LM,0.4)      (M,0.5)(H,0.5)       (M,0.6)(MH,0.4)             (LM,0.4)(MH,0.6)
R7        (M,0.8)(MH,0.2)      (M,0.2)(MH,0.8)      (M,0.8)(LH,0.2)             (H,1)
R8        (L,0.3)(LM,0.7)      (H,1)                (L,0.6)(LM,0.2)(MH,0.2)     (L,1)
R9        (H,1)                (H,0.2)(LH,0.8)      (M,1)                       (M,0.4)(H,0.6)
R10       (H,1)                (M,1)                (H,1)                       (M,0.4)(H,0.6)
R11       (H,1)                (H,0.9)(MH,0.1)      (H,1)                       (L,0.1)(M,0.3)(H,0.6)
R12       (H,0.9)(MH,0.1)      (M,1)                (H,0.9)(LH,0.1)             (MH,1)
R13       (H,0.8)(LM,0.2)      (H,0.7)(MH,0.3)      (H,1)                       (H,1)
R14       (L,0.5)(LH,0.5)      (L,0.6)(MH,0.4)      (M,0.6)(MH,0.4)             (L,0.6)(M,0.4)
R15       (M,0.8)(LM,0.2)      (H,0.9)(LH,0.1)      (M,1)                       (M,0.6)(H,0.4)
Assessment data by the participants with relative weight 0.6:

Req. id   A5                       A6                          A7                          A8
R1        (L,0.7)(LM,0.3)          (H,0.8)(MH,0.2)             (H,0.7)(LM,0.3)             (H,1)
R2        (H,0.8)(MH,0.2)          (M,0.3)(H,0.7)              (M,0.6)(LM,0.1)(MH,0.3)     (H,1)
R3        (M,0.7)(H,0.2)(LH,0.1)   (MH,1)                      (M,0.4)(H,0.2)(LM,0.4)      (H,1)
R4        (L,0.6)(MH,0.4)          (M,0.2)(H,0.8)              (H,0.8)(LH,0.2)             (M,0.7)(MH,0.3)
R5        (L,0.8)(LH,0.2)          (L,0.8)(LM,0.2)             (L,0.5)(LH,0.5)             (L,0.4)(LM,0.6)
R6        (M,0.5)(MH,0.5)          (L,0.7)(LM,0.3)             (M,0.5)(MH,0.5)             (M,0.8)(MH,0.2)
R7        (H,1)                    (L,0.1)(M,0.7)(H,0.2)       (L,0.5)(LH,0.5)             (M,0.6)(MH,0.4)
R8        (L,0.8)(LM,0.2)          (M,0.7)(H,0.3)              (M,0.5)(MH,0.5)             (H,1)
R9        (L,0.8)(LM,0.2)          (M,0.7)(H,0.3)              (M,0.5)(MH,0.5)             (H,1)
R10       (M,0.5)(LM,0.5)          (M,0.45)(H,0.55)            (M,0.7)(LH,0.3)             (H,0.7)(MH,0.3)
R11       (L,0.2)(M,0.8)           (M,0.2)(H,0.8)              (H,0.4)(MH,0.6)             (H,1)
R12       (H,0.6)(MH,0.4)          (M,0.05)(H,0.95)            (H,0.8)(MH,0.2)             (H,0.8)(MH,0.2)
R13       (M,0.8)(H,0.2)           (L,0.3)(M,0.5)(H,0.2)       (L,0.6)(MH,0.4)             (H,0.9)(MH,0.1)
R14       (L,0.7)(LH,0.3)          (L,0.25)(M,0.65)(H,0.1)     (L,1)                       (L,0.4)(MH,0.6)
R15       (H,0.7)(MH,0.3)          (M,0.2)(H,0.8)              (L,0.5)(LH,0.5)             (M,0.75)(MH,0.25)
Appendix C. Customized assessment data for LER

            Assessors with relative weight 0.4      Assessors with relative weight 0.6
Req. id     L        M        H                     L        M        H
R1          0.0000   0.1000   0.9000                0.2500   0.1000   0.6500
R2          0.0000   0.0000   1.0000                0.0125   0.3000   0.6875
R3          0.0750   0.5250   0.4000                0.0583   0.4583   0.4833
R4          0.0875   0.2625   0.6500                0.1667   0.3292   0.5042
R5          0.4000   0.4875   0.1125                0.7833   0.1583   0.0583
R6          0.1000   0.6500   0.2500                0.2125   0.6375   0.1500
R7          0.0166   0.5916   0.3916                0.4417   0.4167   0.1417
R8          0.5875   0.1375   0.2750                0.6250   0.3000   0.0750
R9          0.0666   0.4166   0.5166                0.2250   0.3875   0.3875
R10         0.0000   0.3500   0.6500                0.0875   0.5375   0.3750
R11         0.0250   0.0875   0.8875                0.0500   0.3250   0.6250
R12         0.0083   0.3958   0.5958                0.0000   0.1125   0.8875
R13         0.0250   0.0625   0.9125                0.2250   0.3875   0.3875
R14         0.4666   0.3917   0.1417                0.6125   0.2625   0.1250
R15         0.0333   0.6333   0.3333                0.1667   0.3479   0.4854
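A short sketch of how the Appendix C inputs can be derived from Appendix B follows: each assessment is Laplace-reduced with Eqs. (21)–(23) and the results are averaged within each assessor group. Reading Appendix C as a per-group mean is our interpretation of the derivation, but it reproduces the published rows (e.g., R5 of the weight-0.4 group: L = 0.4000, M = 0.4875, H = 0.1125). The function name is our own.

```python
def group_reduce(assessments: list) -> dict:
    """Average the Laplace-reduced beliefs of one assessor group."""
    reduced = [laplace_reduce(a) for a in assessments]  # defined in Sec. 3.3
    n = len(reduced)
    return {g: sum(r[g] for r in reduced) / n for g in ("L", "M", "H")}

# R5 as assessed by A1-A4 (Appendix B, weight 0.4 group):
r5_group = [{"L": 0.2, "LM": 0.8}, {"L": 0.7, "MH": 0.3},
            {"M": 0.8, "LM": 0.2}, {"LM": 0.4, "MH": 0.6}]
print(group_reduce(r5_group))  # {'L': 0.4, 'M': 0.4875, 'H': 0.1125}
```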
Supplementary materials

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.ejor.2016.11.040.

References

Ahl, A. (2005). An experimental comparison of five prioritization techniques investigating ease of use, accuracy, and scalability (Master Thesis). School of Engineering, Blekinge Institute of Technology.
Berander, P. (2016). Evolving prioritization for software product management (Doctoral Dissertation Series No. 2007:07). Blekinge Institute of Technology.
Chin, K. S., Wang, Y. M., Yang, J. B., & Poon, K. K. G. (2009). An evidential reasoning based approach for quality function deployment under uncertainty. Expert Systems with Applications, 36, 5684–5694.
Chin, K. S., Yang, J. B., Guo, M., & Lam, J. P. (2008). An evidential-reasoning-interval-based method for new product design assessment. IEEE Transactions on Engineering Management, 56(1). doi:10.1109/TEM.2008.2009792.
Dromey, R. G. (1995). A model for software product quality. IEEE Transactions on Software Engineering, 21(2), 146–162.
French, S. (1988). Readings in decision analysis. London: Chapman and Hall.
Hampton, J. M., Moore, P. G., & Thomas, H. (1973). Subjective probability and its measurement. Journal of the Royal Statistical Society, 136, 21–42.
Host, M., Regnell, B., & Wohlin, C. (2000). Using students as subjects – a comparative study of students and professionals in lead-time impact assessment. Empirical Software Engineering, 5(3), 201–214.
Hwang, C. L., & Yoon, K. (1981). Multiple attribute decision making methods and applications: A state of the art survey. Berlin, Heidelberg, New York: Springer-Verlag.
Karlsson, J., & Ryan, K. (1996). Supporting the selection of software requirements. In Proceedings of the 8th workshop on software specification and design (pp. 146–149).
Karlsson, J., & Ryan, K. (1997). A cost-value approach for prioritizing requirements. IEEE Software, 14(5), 67–74.
Karlsson, J., Wohlin, C., & Regnell, B. (1997). An evaluation of methods for prioritizing software requirements. Information and Software Technology, 39(14–15), 939–947.
Karlsson, L., Berander, P., Regnell, B., & Wohlin, C. (2004). Requirements prioritisation: An experiment on exhaustive pair-wise comparisons versus planning game partitioning. In Proceedings of empirical assessment in software engineering (EASE 2004).
Keeney, R. L., & Raiffa, H. (1993). Decisions with multiple objectives: Preferences and value trade-offs (2nd ed.). Cambridge, UK: Cambridge University Press.
Keeney, R. L. (1972). Utility functions for multiattributed consequences. Management Science, 18, 276–287.
Keutel, M., & Mellis, W. (2011). An in-depth interpretive case study in requirements engineering research: Experiences and recommendations. In Proceedings of the workshop on empirical research in requirements engineering: Challenges and solutions.
Keynes, J. M. (1921). The principle of indifference. In A treatise on probability (pp. 41–64). Macmillan and Co.
Kotonya, G., & Sommerville, I. (1997). Requirements engineering. Devon: John Wiley & Sons.
Laplace, P. S. (1902). A philosophical essay on probabilities (pp. 107–108). New York: J. Wiley & Sons.
Lehtola, L., & Kauppinen, M. (2004). Empirical evaluation of two requirements prioritization methods in product development projects. In Proceedings of the European software process improvement conference (pp. 161–170).
Lipshitz, R., & Strauss, O. (1997). Coping with uncertainty: A naturalistic decision making analysis. Organizational Behavior and Human Decision Processes, 69(2), 149–163.
McCall, J., Richards, P., & Walters, G. (1977). Factors in software quality. NTIS.
Moisiadis, F. (2002). The fundamentals of prioritizing requirements. In Proceedings of the systems engineering, test and evaluation conference.
Ngo-The, A., & Ruhe, G. (2005). Decision support in requirements engineering. In Engineering and managing software requirements (pp. 267–286). Berlin: Springer Verlag.
Nguyen, H. T., Kreinovich, V., & Zuo, Q. (1997). Interval valued degrees of belief: Applications of interval computations to expert systems and intelligent control. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 5(3), 317–358.
Pressman, R. S. (2001). Software engineering: A practitioner's approach (5th ed.). McGraw-Hill.
Roy, B. (1991). The outranking approach and the foundations of ELECTRE methods. Theory and Decision, 31(1), 49–73.
Saaty, T. L. (2008). Decision making with the analytic hierarchy process. International Journal of Services Sciences, 1(1), 83–98.
Sahni, D. A controlled experiment on analytical hierarchy process and cumulative voting: Investigating time, scalability, accuracy, ease of use and ease of learning (Master Thesis No. MSE-2007-17). School of Engineering, Blekinge Institute of Technology.
Savage, L. J. (1972). The foundations of statistics. New York: Dover Publications.
Sentz, K., & Ferson, S. (2002). Combination of evidence in Dempster–Shafer theory. Binghamton University Technical Report.
Shafer, G. (1976). A mathematical theory of evidence. Princeton, NJ: Princeton University Press.
Sonmez, M., Yang, J. B., & Holt, G. D. (2001). Addressing the contractor selection problem using an evidential reasoning approach. Engineering, Construction and Architectural Management, 8(3), 198–210.
Svensson, R. B., Gorschek, T., Regnell, B., Torkar, R., Shahrokni, A., Feldt, R., et al. (2011). Prioritization of quality requirements: State of practice in eleven companies. In Proceedings of the 2011 IEEE international requirements engineering conference (pp. 69–78).
Tichy, W. F. (2000). Hints for reviewing empirical work in software engineering. Empirical Software Engineering, 5(4), 309–312.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, New Series, 185(4157), 1124–1131.
Vetschera, R. (2006). Preference-based decision support in software engineering. In Value-based software engineering (pp. 67–89). Springer. doi:10.1007/3-540-29263-2_4.
Viswanathan, M., Sudman, S., & Johnson, M. D. (2004). Maximum versus meaningful discrimination in scale response: Implications for validity of measurement of consumer perceptions about products. Journal of Business Research, 57(2), 108–124. doi:10.1016/S0148-2963(01)00296-X.
Von Winterfeldt, D., & Edwards, W. (1986). Decision analysis and behavioural research. Cambridge: Cambridge University Press.
Voola, P., & Vinaya Babu, A. (2012). Requirements uncertainty prioritization approach: A novel approach for requirements prioritization. Software Engineering: An International Journal (SEIJ), 2(2), 37–49.
Voola, P., & Vinaya Babu, A. (August 2013). 4A frameworks for requirements prioritization. International Journal of Computer Applications, 76(1), 38–44.
Voola, P., & Vinaya Babu, A. (July 2013). Comparison of requirements prioritization techniques employing different scales of measurement. ACM SIGSOFT Software Engineering Notes, 38(4), 1–10. doi:10.1145/2492248.2492278.
Wang, J., Yang, J. B., & Sen, P. (1995). Safety analysis and synthesis using fuzzy sets and evidential reasoning. Reliability Engineering and System Safety, 47(2), 103–118.
Wang, Y. M., Yang, J. B., Xu, D. L., & Chin, K. S. (2006). The evidential reasoning approach for multiple attribute decision analysis using interval belief degrees. European Journal of Operational Research, 175, 35–66. doi:10.1016/j.ejor.2005.03.034.
Wang, Y. M., Yang, J. B., & Xu, D. L. (2006). Environmental impact assessment using the evidential reasoning approach. European Journal of Operational Research, 174. doi:10.1016/j.ejor.2004.09.059.
Xu, D. L., & Yang, J. B. (2003). Intelligent decision system for self assessment. Journal of Multi-Criteria Decision Analysis, 12, 43–60. doi:10.1002/mcda.343.
Xu, D. L., Yang, J. B., & Wang, Y. M. (2006). The evidential reasoning approach for multi-attribute decision analysis under interval uncertainty. European Journal of Operational Research, 174, 1914–1943.
Yang, J. B. (2001). Rule and utility based evidential reasoning approach for multi-attribute decision analysis under uncertainty. European Journal of Operational Research, 131(1), 31–61.
Yang, J. B., Dale, B. G., & Siow, C. H. R. (2001). Self assessment of excellence: An application of the evidential reasoning approach. International Journal of Production Research, 39(16), 3789–3812.
Yang, J. B., & Singh, M. G. (1994). An evidential reasoning approach for multiple attribute decision making with uncertainty. IEEE Transactions on Systems, Man and Cybernetics, 24(1), 1–18.
Yang, J. B., & Xu, D. L. (1998). Knowledge based executive car evaluation using the evidential reasoning approach. In R. W. Baines, A. Taleb-Bendiab, & Z. Zhao (Eds.), Advances in manufacturing technology XII (pp. 741–749). London, UK: Professional Engineering.
Yang, J. B., & Xu, D. L. (2002). On the evidential reasoning algorithm for multiple attribute decision analysis under uncertainty. IEEE Transactions on Systems, Man and Cybernetics, 32(3), 289–304.