Pre-prints of the 18th International Working Seminar on Production Economics, Vol. 4, pp. 225-236, Innsbruck, Austria, 25-29 February, 2014.
Modelling economic project uncertainty: Beware of best practice

Hans Schjær-Jacobsen
Center for Bachelor of Engineering Studies, Technical University of Denmark
DTU Ballerup Campus, 15 Lautrupvang, 2750 Ballerup, Denmark
[email protected]

Abstract
Quantitative risk and uncertainty modelling of economic costs and benefits incurred by execution of large projects is usually done by representing uncertain input variables by probability distributions and subsequently calculating uncertain output variables by Monte Carlo simulation. This approach is widely recommended in the literature and even by the EU as a guide to the handling of large infrastructure investment projects, so as to avoid underestimation of costs and overestimation of benefits. Thus the probabilistic approach is established as a de facto best practice and a broadly accepted tool among engineers and economists in construction companies and public authorities. At the same time there is an increasing number of reports documenting serious cost overruns and benefit shortfalls, as well as ex post results beyond the uncertainty limits predicted by ex ante analysis, thereby challenging the relevance and usefulness of the probabilistic approach to modelling of economic project uncertainty. This paper points out some of the weaknesses and pitfalls of the probabilistic modelling approach and explains why it can lead to wrong conclusions due to loss of information during information processing. Special attention is given to the loss of focus on extreme economic outcomes and to the distortion of prior knowledge of project base case information. Alternatively, it is proposed to represent uncertain variables by intervals and fuzzy numbers, and it is demonstrated that this approach has a number of advantages over the probabilistic approach. By using triangular uncertainty distributions the two approaches are compared, and a study of a railway line investment case is presented suggesting that a combination of the two approaches has some potential advantages.

Keywords: Project uncertainty, interval, fuzzy number, triangular distribution, worst case, best practice.
1. Introduction
Large projects are seldom realised within scheduled budget, time, and specifications. Most often cost budgets are overrun, benefits turn out to have been overestimated, time schedules are not met, etc. Of course these three crucial performance dimensions, originally put down in ex ante analyses of a project prior to the decision to build, are not mutually independent. Cost of building increases as milestones are not met. In order to meet milestones and the finishing date, cost is increased deliberately. The ambition level of specifications is reduced as cost accumulates. Benefits from operations depend on time of completion as well as quality of specification and functionality. More examples of possible interdependencies could be given.

Studies of discrepancies between ex ante estimation and ex post reality fall into two categories. The first category studies the discrepancy as a mere difference between the forecasted performance at the time of the decision to build (cost of building, benefits, net present value, etc.) and the realised performance after completion. Explanations of plausible causes may be offered as well as remedies for cure. Flyvbjerg and associates have done extensive research into the magnitude of discrepancies in large infrastructure projects. Basically, they offer two explanations for discrepancies, namely optimism bias and strategic misrepresentation. Much inspired by Daniel Kahneman, the 2002 Nobel Prize winner in economics, they propose to use “the outside view” (in contrast to “the inside view”) and reference class forecasting applied at the project level as a cure for optimism bias, Flyvbjerg and COWI (2004), Flyvbjerg (2006). According to this approach any ex ante estimate should be adjusted by an amount taken from a reference group of identical, or at least to a certain degree similar, projects already completed. In a later work they present a more comprehensive analysis and offer prescriptive advice for the strategic misrepresentation problem (now called deception) pertaining to different actors in project processes, Flyvbjerg et al. (2009). Recently, an
interesting alternative explanation was presented by Eliasson and Fosgerau (2013), stating that the bias may arise simply as a selection bias without there being any bias at all in the ex ante predictions.

The second category of studies of discrepancies between ex ante and ex post project performance may be considered an extension of the first category. Here not only the discrepancy of project performance is studied but also the ex ante uncertainty estimates compared to the ex post outcome. The basic idea is that the ex post performance of a project will always be different from the ex ante prediction, i.e. better or worse. Such studies are rare since uncertainty analyses are normally considered classified information not publicly available. In a recent study by Lundberg et al. (2011) it was found that the variance predicted by the Successive Calculation approach (Lichtenberg, 2000) was considerably lower than the variance of the actual outcomes.

2. The analytical framework
The analytical focus of this study is the comparison of two alternative approaches to ex ante representation and calculation of uncertainty. For the purpose of this paper we suggest an extension of the above mentioned inside and outside views in order to justify our analytical framework. The analytical framework is summarised in Table 1. Here the upper right field corresponds to the approach by Flyvbjerg and associates that produces a benchmark for the project at hand at an aggregated project level, what is called “a view from the top”. By applying an appropriate adjustment a realistic estimate of total project performance is arrived at. However, nothing is known about how lower level processes of the project should be adjusted, so no detailed guidance is created concerning revision of cost targets (or other targets) of sub-processes. The adjustment merely serves the purpose of a supplementary budget allocated to the project to meet unforeseen contingencies. For example, the ex ante investment costs of rail projects should be given an uplift of between 40% and 80%, corresponding to an accepted risk of subsequent cost overrun of between 50% and 5% (Flyvbjerg and COWI, 2004, Fig. 9). Likewise, the Successive Calculation approach corresponds to the lower left field, where the probability distributions of uncertain variables are estimated subjectively and the propagation of uncertainties to the performance variable under consideration (like total cost) is done by linear approximation of the model function or by Monte Carlo simulation. Since the uncertainty estimate of the project is obtained by aggregating uncertainty contributions from lower level sub-processes we call it a “view from the bottom”.
                        View from the inside                   View from the outside
View from the top       Subjective judgment at project level   Reference class forecasting at project level
View from the bottom    Subjective judgment at item level      Reference class forecasting at item level
Table 1. Analytical framework.

Reference class forecasting has problems of its own, Hájek (2007). Since projects have many different attributes it is far from clear which already executed projects qualify to become members of a particular reference class. In any case, the reference group will contain members that are not exactly the results of a repeated experiment with random outcomes but rather outcomes of different projects that are similar to a certain degree. Consider an upcoming railway project. A reference class is created by selecting N similar, already executed railway projects. Now we want to build railway project No. N+1 and predict the ex post
performance by an ex ante analysis based on reference class data. This is tricky considering the fact that the new project may have attributes not represented by the reference class and vice versa. One way out is to switch to the lower right field of Table 1 and create reference classes of better defined lower level items like volumes and unit prices, Schjær-Jacobsen (2010a). We shall not go deeper into these questions, since they are not relevant for this paper. It is generally agreed that the outside view is useful as a way of verifying the quality of estimates, no matter how they were made.

In this paper we take the “view from the bottom” and represent uncertain variables in the ex ante analysis by two alternative approaches, namely probability distributions on the one hand and intervals and fuzzy numbers on the other. Since the distributions used are defined by numerically identical parameters but have different interpretations, the approaches can be compared independently of whether the view is from the outside or the inside.

3. Uncertainty representation by intervals
3.1 Epistemic uncertainty. Representation of uncertain variables by intervals is especially well suited to cater for epistemic uncertainty, also known as genuine ignorance and lack of precision. Appropriate interval methods are also necessary in order to apply fuzzy numbers (see later). For a particular uncertain variable all that is known is that it will be contained within an interval [a; b], a ≤ b, where a is the lower limit and b is the upper limit. Among others, Schjær-Jacobsen (1996) proposed this approach. Once the intervals of the input variables have been established, there are some problems and challenges to be faced:

3.2 Basic interval arithmetic. Propagation of interval uncertainty through a mathematical system model is not trivial. Explicit formulas for the basic arithmetic interval operations exist, Moore (1966). For example we have for addition: [a; b] = [a1; b1] + [a2; b2] = [a1 + a2; b1 + b2]. It can be shown that the basic arithmetic interval operations are inclusion monotonic, commutative and associative. However, the distributive rule is not valid in general. Instead, the so-called sub-distributivity holds. This means that I1∙(I2 + I3) ⊆ I1∙I2 + I1∙I3, where I1, I2 and I3 are intervals. Furthermore, from a rational real valued function f(x) of a real valued variable x we can create an interval extension function simply by replacing the real variable x by an interval and the real operators by interval operators.

3.3 Results depend on sequence of operations. Based on the system model f(x) = x∙(1–x) we calculate the uncertain output variable as a function of the uncertain input variable [0; 1]. By application of the basic arithmetic interval operations we get either [–1; 1] or [0; 1] depending on the sequence of arithmetic operations. Obviously [0; 1] ⊆ [–1; 1], which demonstrates the sub-distributivity of basic interval arithmetic as mentioned earlier. But there is more to this example! In both cases the resulting interval is too wide, since the extreme values of f(x), 0 ≤ x ≤ 1, are 0 and 0.25, which is easily seen. This confirms the need to use global optimization in order to find extremes attained at interior points of the input intervals.

3.4 Non-monotonic system models. Consider a system model with the output variable f(x) = sin(x), where x is the input variable. Calculate the uncertain output variable f([0; π/4]).
Since f(x) is monotonic within the argument interval, the minimum and maximum are attained at the interval end points, so we get the result [sin(0); sin(π/4)] = [0; 0.707]. Then calculate f([0; 3π/4]). In this case the lower limit 0 is attained at the lower end point x = 0. However, due to non-monotonicity, the upper limit of 1 is not attained at the upper end point x = 3π/4 but rather at the interior point x = π/2. We get the resulting interval output [0; 1]. This
example demonstrates that in order to correctly calculate the propagation of interval uncertainty through non-monotonic system models we have to use global optimization to detect lower and upper limits attained at interior points.

3.5 Excess width due to variables appearing more than once. An urn contains one hundred 1€ coins. We know that because we counted them. A man grabs a handful of coins from the urn and quickly estimates that he holds between 10 and 15 coins in his hand. This means that the urn now contains 100 – [10; 15] = [85; 90] coins. The man then empties his hand into the urn. The urn now contains [85; 90] + [10; 15] = [95; 105] coins, which appears to be intuitively incorrect since no coins disappeared and no extra coins were introduced. Let the number of coins in his handful be denoted by the interval number n. Then we have for the interval number N of coins in the urn at this stage: N = 100 – n + n = 100 – [10; 15] + [10; 15] = [95; 105]. However, before calculating N we should reduce the equation by eliminating n so that we get N = 100, a fact that can be confirmed by recounting the coins in the urn. This example demonstrates that classical interval arithmetic may produce excessive widths if a variable appears more than once in the equation.

3.6 Indirect excess width due to interdependent intermediate variables. If an output variable depends on intermediate variables that share a common input variable, excess width may occur, Hyvönen and De Pascale (2000) and Schjær-Jacobsen (2010b). This is due to the fact that the intermediate variables are assumed to be mutually independent, which they are not. Consider an input interval variable x, two intermediate interval variables u = a∙x and v = b∙x, and an output variable z = u – v. If, for example, x = [1; 2] and a = b = 1 we get u = v = 1∙[1; 2] = [1; 2] and then z = u – v = [1; 2] – [1; 2] = [–1; 1]. However, considering the whole computational chain the only possible value of the output variable z is 0: z = u – v = a∙x – b∙x = 1∙x – x∙1 = x – x = 0.

3.7 Calculation of interval functions. The interval analysis program Interval Solver 2000, an add-in module to MS Excel, Hyvönen and De Pascale (2000), is used throughout this paper to propagate interval uncertainty. Interval Solver makes use of global optimization and eliminates the risk of excess width. Correct results can be obtained to an accuracy specified by the user. However, indirect excess width may still occur, so functions must be programmed in such a way that intermediate variables sharing a common variable are avoided.
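To make Sections 3.2-3.6 concrete, the following minimal Python sketch reproduces the three phenomena just described. The paper itself uses Interval Solver 2000 under MS Excel; the bare-bones Interval class below is only an illustration and not a substitute for a rigorous interval library.

```python
# Minimal interval-arithmetic sketch illustrating Sections 3.2-3.6.
# For illustration only; not a substitute for a verified interval library.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}; {self.hi}]"

x = Interval(0.0, 1.0)
one = Interval(1.0, 1.0)

# Section 3.3: the result depends on the sequence of operations.
print(x * (one - x))         # x*(1-x)  -> [0; 1]
print(x - x * x)             # x - x*x  -> [-1; 1], even wider

# Tight bounds require global optimization over the input interval;
# a dense grid search stands in for a rigorous global optimizer here.
grid = [i / 10000 for i in range(10001)]
vals = [t * (1 - t) for t in grid]
print(min(vals), max(vals))  # -> 0.0 and 0.25 (maximum at the interior point 0.5)

# Section 3.5: excess width when a variable occurs twice (the coin-urn example).
n = Interval(10, 15)
hundred = Interval(100, 100)
print(hundred - n + n)       # -> [95; 105], although the urn holds exactly 100
```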
4. Uncertainty representation by probability distributions
4.1 The classical approach. The classical way of representing uncertain variables is that of probability distributions. An uncertain input variable is represented by a probability distribution and so is an output variable. In the simple case it is represented by its expected value and standard deviation {µ; σ}; in the general case it is determined by its probability distribution function. The propagation of uncertainties through the system model is calculated by appropriate statistical methods such that the uncertain output variables can be determined in terms of their expected values and standard deviations or probability distributions. We are here dealing with aleatory uncertainty, which is based on knowledge of the statistical behaviour of the input variables. We do not know the actual value of the uncertain input variable in a specific realization of the physical system, but we know that the variable will attain an actual value with a certain likelihood of occurrence, or that the probability of an actual value is below (or above) a particular number. Once the probability distributions of the input parameters have been established, there are some problems and challenges to be faced:

4.2 Calculation of uncertainty propagation. How are uncertainties defined by probability distributions of the input parameters propagated through a mathematical system model? For
two uncorrelated uncertain variables X1 and X2 represented by expected values and standard deviations {µ1; σ1} and {µ2; σ2} we can perform basic arithmetic operations. For example we have for addition: {µ; σ} = {µ1; σ1} + {µ2; σ2} = {µ1 + µ2; √(σ1² + σ2²)}. In case the uncertain output parameter is a function Y of m uncorrelated uncertain variables, Y = Y(X1, X2,…, Xm), we can construct a linear approximation to Y by means of a Taylor series (ignoring second and higher order terms). Hence, the uncertainty is propagated by a linear approximation that may only be valid in a small vicinity, and at the same time we have assumed uncorrelated input variables. Further application details can be found in Schjær-Jacobsen (2004). Propagation of uncertainties represented by probability distributions is in this paper done by means of Monte Carlo simulation implemented as an add-in module to MS Excel, @RISK (2013), allowing for correlation between variables to be taken into account.

4.3 Uncertainty of output variables depends on level of analysis. If an input variable at a certain level of analysis is further subdivided, the standard deviation of the output is generally reduced. For example, at a certain level of analysis we have a sum of 10 identical probability distributions with expected values 10 and standard deviations 10%. Thus, the sum has the expected value 100 and standard deviation 3.16%. If each of the variables is subdivided into 10 variables with expected value of unity and standard deviation 10%, the new expected value of the sum is still 100 but the standard deviation is reduced to 1%. It is seen that the uncertainty result is somewhat arbitrary because it depends on the level of analysis.

4.4 The fallacy of normally distributed output variables. Due to the central limit theorem, an output variable being a function of many uncertain variables tends in general (under certain conditions) to become normally distributed irrespective of the shape of the input distributions. However, several studies of realized projects have shown that this is not the case for cost and benefit distributions in the real world, where more often project costs come out above expectations and project benefits below expectations. Thus there is a contradiction.

4.5 Low likelihood occurrences are disregarded. The propagation process of uncertainty according to best practice tends to focus attention on expected value and standard deviation. Usually the process concentrates the probability mass around the expected value, and the likelihood of outcomes at the tails of the distributions may become extremely small. In Monte Carlo analyses the likelihood of such outcomes may simply vanish completely. This leads to the risk of ignoring outcomes with low likelihood of occurrence although they are indeed as possible as outcomes with high likelihood.

4.6 Base case information is easily lost. It is common practice to work with a base case represented by a set of real valued input variables that may be established by an honest and experienced specialist. If there is a need to qualify the base case one may add uncertainty data in terms of a probability distribution containing the base case. In the statistical propagation process the base case data are easily lost because focus is switching towards expected values. As is seen in the following, the base case may not even be contained in the resulting output uncertainty.
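The addition rule of Section 4.2 and the level-of-analysis effect of Section 4.3 are easily verified numerically; the following sketch implements only the standard textbook formulas, not any particular tool.

```python
# Sketch of Sections 4.2-4.3: propagating {mu; sigma} through a sum of
# uncorrelated variables, and the level-of-analysis effect.
import math

def add_uncorrelated(terms):
    """Sum of uncorrelated variables given as (mu, sigma) pairs."""
    mu = sum(m for m, s in terms)
    sigma = math.sqrt(sum(s ** 2 for m, s in terms))
    return mu, sigma

# 10 items, each {10; 1} (sigma = 10% of mu):
mu, sigma = add_uncorrelated([(10.0, 1.0)] * 10)
print(mu, sigma / mu)   # 100.0, 0.0316...  -> std 3.16% of the sum

# Each item subdivided into 10 sub-items {1; 0.1}:
mu, sigma = add_uncorrelated([(1.0, 0.1)] * 100)
print(mu, sigma / mu)   # 100.0, 0.01      -> std reduced to 1%
```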
5. Triangular representation of uncertainty
5.1 Base case without uncertainty. For the sake of simplicity we introduce triangular uncertainty in three steps. First we consider the nominal base case without uncertainty. The system response function is conventionally calculated with the input variables attaining their
nominal values. As an example consider the function of only one variable f(x) = x(1–x). With the nominal base case value x = 0.2 we get f(0.2) = 0.16.

5.2 Uncertainty representation by interval. In the second step the uncertainty of the input variable is represented by an interval containing the base case x = 0.2, for example x = [0; 1]. This means that there is no possibility of x attaining values smaller than 0 or larger than 1. Since the probability distribution of x is unknown, it cannot be propagated. However, propagation of the interval uncertainty can be calculated: f([0; 1]) = [0; 0.25], by inspection of f(x) or by global optimization. Now the base case as well as the worst and best cases are known.

5.3 Uncertainty representation by triangular distributions. As the third step the base case value x = 0.2 and the interval uncertainty [0; 1] are combined to form a triangular representation that may be interpreted as a triangular probability distribution (left axis) or, alternatively, as a triangular fuzzy number (right axis), see Fig. 1. The probability distribution of the output variable is calculated by Monte Carlo simulation, @RISK (2013). The fuzzy output variable is calculated at a sequence of α-cuts, α = 0(0.05)1, i.e. from 0 to 1 in steps of 0.05, by Interval Solver 2000, Hyvönen and De Pascale (2000). Notice that the α-cut corresponding to α = 0 generates the true worst and best cases. The results are shown in Fig. 2, where probability is represented by bars referring to the left axis and the fuzzy number by a solid curve referring to the right axis, see Schjær-Jacobsen (2010b) for further details. Since the two alternative approaches are based on numerically identical input distributions (except for normalisation), the results can be compared.
Figure 1. Triangular distributions of uncertain variable x.
Figure 2. Alternative distributions of uncertain variable x(1-x).
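The dual reading of Fig. 1 can be reproduced without the commercial tools. In the sketch below, Python's random.triangular stands in for @RISK, and a grid search over each α-cut (the α-cut of a triangular fuzzy number (a; c; b) being [a + α(c − a); b − α(b − c)]) stands in for the global optimization of Interval Solver 2000.

```python
# Sketch of Section 5.3: one triangular input read two ways,
# propagated through f(x) = x(1-x).
import random

f = lambda x: x * (1 - x)
a, c, b = 0.0, 0.2, 1.0   # best case / base case / worst case of the triangle

# Probabilistic reading: Monte Carlo simulation.
samples = [f(random.triangular(a, b, c)) for _ in range(200_000)]
print(min(samples), max(samples), sum(samples) / len(samples))
# min slightly above 0, max near 0.25, mean near 0.1933:
# the base case f(0.2) = 0.16 is "lost" in the propagation.

# Possibilistic reading: propagate the alpha-cuts, alpha = 0, 0.05, ..., 1.
for alpha in [i * 0.05 for i in range(21)]:
    lo, hi = a + alpha * (c - a), b - alpha * (b - c)   # alpha-cut of the input
    grid = [lo + (hi - lo) * i / 2000 for i in range(2001)]
    vals = [f(t) for t in grid]
    print(f"alpha={alpha:.2f}  f-cut=[{min(vals):.4f}; {max(vals):.4f}]")
# alpha = 0 gives the true worst/best cases [0; 0.25];
# alpha = 1 recovers the base case f(0.2) = 0.16.
```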
Obviously, both distributions get the range of x(1–x) correctly to be between 0 and 0.25, although the probability distribution is not very precise on the lower limit: it actually produces a minimum of 0.0035 and a maximum of 0.2500. The probability distribution completely misses the base case of 0.16 and produces a mean of 0.1933 and mode of 0.2500.

6. Investment in a railway line
6.1 The challenge. In Section 6 the challenge is to investigate and compare the ex ante uncertainty characteristics of a large railway line investment project derived from the best practice approach using probabilistic uncertainty representation and from the alternative approach using interval and fuzzy number representation. The comparison is enabled by identical (except for normalisation) numerical triangular representation of the uncertain variables. With reference to Table 1 the view is “from the bottom”. The results obtained reveal differences that are specific to the modelling approaches and indifferent to the uncertainty data being obtained by the inside or the outside view. An earlier treatment of uncertain investment analysis was given in Schjær-Jacobsen (2002).

6.2 The base case. This case study is based on (Florio et al., 2008, Section 4.2, Option 2, Table 4.22) about investment in a railway line. The base case is established by honest and competent people as an economic analysis over a 30-year period with a discount rate of 5.5% per year. In each of the first three years the case includes four investment items. For the purpose of the present paper we focus on the net present value NPV, which in the base case without uncertainty is €1,953.3 million. (This monetary unit is used throughout Section 6.)

6.3 Uncertain investments. The 12 uncertain investment variables are now represented by triangular distributions (Florio et al., 2008, Fig. 4.4). In this way, the worst case of an individual investment item is 200% larger than the base case and the best case is 10% smaller than the base case. Subsequently, in this paper, the distributions of the uncertain input variables are interpreted as probability and possibility distributions, respectively. In the first instance, the variables represented by probability distributions are assumed to be independent and uncorrelated. The resulting NPV is shown in Fig. 3 for the probability as well as the possibility interpretation. (In Figs. 3-7 probability distributions are represented by bars referring to the left axis and membership functions by solid curves referring to the right axis.)
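Since the underlying cash-flow table of Florio et al. (2008) is not reproduced here, the sketch below uses placeholder investment items and operating benefits. It only illustrates the mechanics: NPV decreases monotonically in every cost item, so the interval (α = 0) worst and best cases are obtained simply by setting all items to their worst and best endpoints.

```python
# Sketch of Section 6.3: interval (worst/best case) NPV under triangular
# uncertainty on the 12 investment items. The real cash flows are in Florio
# et al. (2008); the numbers below are placeholders for illustration only.

RATE = 0.055    # base-case discount rate, 5.5% per year
YEARS = 30      # analysis period

def npv(flows, rate=RATE):
    # flows[t] is the net cash flow of year t+1
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(flows))

base_items = [100.0, 80.0, 60.0, 40.0] * 3   # placeholder: 4 items in years 1-3
benefit = 150.0                              # placeholder yearly operating benefit

def total_npv(items):
    flows = [-sum(items[4 * y: 4 * y + 4]) for y in range(3)]
    flows += [benefit] * (YEARS - 3)
    return npv(flows)

# Triangular items: worst = base + 200%, best = base - 10%.
# NPV decreases monotonically in every cost item, so the alpha = 0
# worst/best cases are attained at the interval endpoints:
worst = total_npv([3.0 * b for b in base_items])   # all items at +200%
best  = total_npv([0.9 * b for b in base_items])   # all items at -10%
print(f"base {total_npv(base_items):.1f}  worst {worst:.1f}  best {best:.1f}")
```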
Figure 3. Uncertain NPV with uncorrelated uncertain investments.
With the possibilistic representation the base case NPV of 1,953 is clearly reproduced, whereas the probabilistic representation produces an expected value of 1,131, which is considerably lower, and the distribution does not even contain the base case. On the other hand, the probabilistic representation produces minimum and maximum values considerably different from the possibilistic representation. The probability distribution is very closely fitted by a normal distribution. It follows from these observations that there is a risk of misinterpreting the consequences of uncertainty if only one of the alternative representations is considered. The likelihood of NPV attaining values between -644 and 239 as well as between 1,824 and 2,083 comes out equal to zero even though these values are indeed possible according to the possibilistic representation. With no correlation the probability distribution reflects a relatively low variability, which to the “probabilist” indicates that this is a low-risk project where even in the worst case the NPV is still positive. On the contrary, the “possibilist” considers this project to have a large possibility of flopping economically and even turning out with a negative NPV.

Secondly, 100% correlation between the 12 uncertain investment variables is introduced in the Monte Carlo simulation by a 12 by 12 correlation matrix. The results are shown in Fig. 4. Although the mean value is still 1,131, the full possibility space is now made visible to the “probabilist” and the base case is seen to be reflected by the mode of the probability distribution. It should be mentioned here that reliable correlation data are difficult to obtain in practice. In this paper correlation is introduced to facilitate comparison of the two alternative approaches studied.
Figure 4. Uncertain NPV with 100% correlated uncertain investments.

It seems that this example offers yet another possible explanation of the large discrepancies observed between ex ante and ex post performance of large infrastructure projects mentioned earlier. Firstly, there is a discrepancy between the base case NPV of 1,953 and the expected value of 1,131, accounting for a possible degradation in NPV of 42%. Secondly, by taking correlation into account, the likelihood of even worse outcomes is further seriously increased. In the worst case NPV is reduced to -644, corresponding to 133% below the base case, totally ruining the project economically. Furthermore, it should be remembered that we have only considered uncertainty originating from uncertain investment costs. Other uncertainties have to be added as well.
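The 100% correlation case can be mimicked without a correlation matrix: driving all 12 triangular variables with one common uniform draw through the triangular inverse CDF (comonotone sampling) produces 100% rank correlation. The sketch below reuses the placeholder model from the previous sketch; it illustrates the principle, not @RISK's actual algorithm.

```python
# Sketch of the 100% correlation case in Section 6.3, using the
# placeholder total_npv and base_items defined in the previous sketch.
import random

def tri_inv_cdf(u, a, c, b):
    """Inverse CDF of the triangular distribution with mode c on [a; b]."""
    fc = (c - a) / (b - a)
    if u < fc:
        return a + (u * (b - a) * (c - a)) ** 0.5
    return b - ((1 - u) * (b - a) * (b - c)) ** 0.5

def sample_items(comonotone):
    if comonotone:                               # one draw shared by all items
        u = [random.random()] * len(base_items)  # = 100% rank correlation
    else:
        u = [random.random() for _ in base_items]
    return [tri_inv_cdf(ui, 0.9 * bb, bb, 3.0 * bb)
            for ui, bb in zip(u, base_items)]

for label, flag in [("uncorrelated", False), ("100% correlated", True)]:
    s = [total_npv(sample_items(flag)) for _ in range(100_000)]
    print(f"{label:16s} NPV range = [{min(s):.0f}; {max(s):.0f}]")
# Comonotone sampling stretches the simulated range towards the true
# interval worst/best cases, mirroring Fig. 3 versus Fig. 4.
```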
6.4 Uncertain discount rates. The yearly discount rate used throughout the 30 years comprised by the base case study is 5.5%. The discount rate is now subjected to uncertainty. In each of the years the discount rate can assume any value between 5% and 10%. This defines 30 uncertain variables represented by triangular distributions, and it is now investigated how this kind of uncertainty influences the NPV. Firstly, we assume that the variables represented by probability distributions are uncorrelated. This gives the results shown in Fig. 5 and Table 2. The resulting probability distribution is very well fitted by a normal distribution but rather far from covering the NPV base case of 1,953. Judging from the NPV standard deviation of 82.7 and range of 643, the railway investment is rather certain and not risky, with a mean value and mode of ≈1,470. The likelihood of an NPV smaller than 1,123 or larger than 1,766 is equal to zero even if those values are indeed possible according to the possibility analysis, with its range of possible outcomes of 1,778.
Figure 5. Uncertain NPV with uncorrelated uncertain discount rates.
Figure 6. Uncertain NPV with 100% correlated uncertain discount rates.
Secondly, a 100% correlation of the 30 discount rates defined by triangular probability distributions is introduced by a 30 by 30 correlation matrix. The results are shown in Fig. 6 and Table 2. It is seen that the range obtained by the probability approach is somewhat smaller than the one obtained by the interval approach. This is due to the fact that the net cash flows of the first three years of the project are negative whereas they are positive for the remaining 27 years. This means that a low discount rate is bad for NPV in the first three years (it inflates the negative flows) but good for NPV during the rest of the project lifetime (it inflates the positive flows); the opposite is true for a high discount rate. Consequently, a Monte Carlo simulation with 100% correlation of the discount rates does not catch the extreme worst and best cases. On the contrary, the interval analysis calculates exactly the true worst and best cases.

Thirdly, ±100% correlation of the uncertain discount rates is introduced, meaning that the signs of the correlation factors are chosen in such a way that the following effect is produced: when the discount rates are low during the first three years they are high during the remaining 27 years, and vice versa. The results are shown in Fig. 7 and Table 2. It is clearly verified that, within calculation accuracy, both approaches find the true worst and best cases.
Figure 7. Uncertain NPV with ±100% correlated uncertain discount rates (True worst and best case).
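The interplay between the sign of the cash flows and the discount rates can be reproduced with placeholder numbers. The sketch below assumes that each year's flow is discounted at that year's own rate, CF_t/(1 + r_t)^t, which is one plausible reading of the model; it compares the fully correlated rate patterns with the ±100% patterns that recover the true extremes.

```python
# Sketch of Section 6.4: why fully correlated rates miss the true worst and
# best cases. Placeholder cash flows (negative for years 1-3, positive for
# years 4-30) stand in for the data of Florio et al. (2008).
flows = [-1000.0] * 3 + [250.0] * 27   # placeholder net cash flows, years 1-30

def npv_rates(rates):
    """NPV with a separate discount rate per year, assuming CF_t/(1+r_t)^t."""
    return sum(cf / (1 + r) ** t
               for t, (cf, r) in enumerate(zip(flows, rates), start=1))

LO, HI = 0.05, 0.10
patterns = {
    "all low  (100% corr.)": [LO] * 30,
    "all high (100% corr.)": [HI] * 30,
    "low early, high late":  [LO] * 3 + [HI] * 27,   # true worst case
    "high early, low late":  [HI] * 3 + [LO] * 27,   # true best case
}
for name, rates in patterns.items():
    print(f"{name:24s} NPV = {npv_rates(rates):9.1f}")
# Fully correlated rates cannot reach the extremes; opposite rate movements
# in years 1-3 versus 4-30 (the +-100% correlation pattern) recover them,
# as does interval analysis via global optimization over [5%; 10%] per year.
```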
            Uncorrelated   100% correlated   ±100% correlated   Possibility
            (Fig. 5)       (Fig. 6)          (Fig. 7)           (Figs. 5-7)
Min         1,123          639               526                520
Max         1,766          2,178             2,292              2,298
Range       643            1,539             1,766              1,778
Mean        1,479          1,479             1,479              Base case 1,953
Mode        1,469          1,948             1,796              -
Std         82.7           366.7             417.7              -
Std/Mean    5.6%           24.8%             28.2%              -

Table 2. Uncertain NPV with uncertain discount rates.
With the possibilistic representation the base case NPV of 1,953 is clearly reproduced, whereas the probabilistic representation again produces a lower expected value, here 1,479. While the uncorrelated probabilistic representation produces minimum and maximum values that are considerably different from the possibilistic representation, this is not the case when full correlation is taken into account. With no correlation the probability distribution exhibits a relatively low variability, which to the “probabilist” indicates that this is a low-risk project. By
introducing full correlation the NPV probability distribution increases its standard deviation relative to its mean value from 5.6% to 28.2%, so that the likelihood of outcomes below the mean is dramatically increased. If the result of executing this project is an ex post outcome of NPV more than 42% below the base case, the “probabilist” ignoring correlation of discount rates would be very surprised: his ex ante calculations show that such an outcome is impossible, the probability of said outcome being equal to zero (Fig. 5 and Table 2). However, as the calculations demonstrate, a major discrepancy like that is inherently present in the actual project model of uncertainty, as pointed out by the true worst case analysis delivered by the interval approach and verified by the Monte Carlo simulations with ±100% correlation. Notice that this example ignores uncertainty sources other than the discount rate.

7. Conclusions
Some specific characteristics of probability and possibility approaches to uncertainty modelling of projects have been reviewed and analysed. By a view “from the bottom” a comparison of the alternatives is done on the basis of numerically identical uncertain input distributions, allowing for a more detailed understanding of methodological features. The results on net present value analysis of a railway line investment case suggest that the best practice approach to modelling of uncertainty by probability distributions is able to produce large discrepancies between expected values and base case values. Moreover, the uncertainty predicted in terms of standard deviation and range seems to be too small unless correlation is taken into account. On the other hand, true worst and best case predictions are not easily obtained in the general case because the required specific tailoring of the correlation matrix can only be done from a detailed knowledge of the mathematical model function. On the contrary, by applying an adequate interval analysis method with global optimization, true worst and best cases are readily obtained while at the same time preserving precise information about the base case. It is conjectured that the simultaneous application of both approaches may significantly improve the potential for more precise estimation of the uncertainties involved in large projects, allowing for risk and uncertainty mitigation.

8. References
@RISK, 2013. Palisade Corporation, Version 6.1, Monte Carlo simulation add-in module for MS Excel, www.palisade.com/risk.
Eliasson, J., Fosgerau, M., 2013. Cost overruns and demand shortfalls – deception or selection? MPRA Paper No. 49744, Munich Personal RePEc Archive.
Florio, M. et al., 2008. Guide to cost-benefit analysis of investment projects. European Commission, Directorate General Regional Policy.
Flyvbjerg, B., COWI, 2004. Procedures for dealing with optimism bias in transport planning. Guidance Document, British Department for Transport.
Flyvbjerg, B., 2006. From Nobel prize to project management: Getting risks right. Project Management Journal, 37 (3), 5-15.
Flyvbjerg, B., Garbuio, M., Lovallo, D., 2009. Delusion and deception in large infrastructure projects: Two models for explaining and preventing executive disaster. California Management Review, 51 (2), 170-193.
Hájek, A., 2007. The reference class problem is your problem too. Synthese, 156, 563-585.
Hyvönen, E., De Pascale, S., 2000. Interval Solver 2000 for Microsoft Excel, User's Guide, Version 4.0, Delisoft Ltd, Helsinki, Finland.
Lichtenberg, S., 2000. Proactive management of uncertainty using the Successive Principle. Polyteknisk Press, Copenhagen.
Lundberg, M., Jenpanitsub, A., Pyddoke, R., 2011. Cost overruns in Swedish transport projects. Working Paper 2011:11, Centre for Transport Studies, KTH Royal Institute of Technology, Sweden.
Moore, R. E., 1966. Interval Analysis. Prentice-Hall, Englewood Cliffs, NJ.
Schjær-Jacobsen, H., 1996. A new method for evaluating worst- and best-case economic consequences of technological development. International Journal of Production Economics, 46-47, 241-250.
Schjær-Jacobsen, H., 2002. Representation and calculation of economic uncertainties: Intervals, fuzzy numbers and probabilities. International Journal of Production Economics, 78, 91-98.
Schjær-Jacobsen, H., 2004. Modelling of economic uncertainty. Fuzzy Economic Review, 9, 49-73.
Schjær-Jacobsen, H., 2010a. New budgeting requires new approaches to risk management. Pre-Prints of the 16th International Working Seminar on Production Economics, 1, 431-442.
Schjær-Jacobsen, H., 2010b. Numerical calculation of economic uncertainty by intervals and fuzzy numbers. Journal of Uncertain Systems, 4, 47-58.