Statistical Comparison of Two Sum-of-Disjoint-Product Algorithms for Reliability and Safety Evaluation

Klaus Heidtmann
Department of Computer Science, Hamburg University, Vogt-Kölln-Str. 30, D-22527 Hamburg, Germany
[email protected]

Abstract. The evaluation of system reliability and safety is important for the design of new systems and for the improvement or further development of existing ones. In particular, the probability that a system operates (safely), computed from the probabilities that its components operate, is a vital system characteristic, and its computation is a non-trivial task. The most frequently used method is to derive disjoint events from the description of the system structure and to sum the probabilities of these disjoint events to quantify system reliability or safety. To compute disjoint products as the logical representation of disjoint events, Abraham's algorithm inverts single variables indicating the state of a component and therefore produces a huge number of disjoint products. To avoid this disadvantage, Heidtmann developed a new method which inverts multiple variables at once and yields a much smaller number of disjoint products, as confirmed by several examples. This paper quantifies this advantage by statistical methods and statistical characteristics for both algorithms, presenting measurements of the number of disjoint products produced and of the computation time of both algorithms for a large sample of randomly generated systems. These empirical values are used to investigate the efficiency of both algorithms by statistical means, showing that the difference between the two algorithms grows exponentially with system size and that Heidtmann's method is significantly superior. The results were obtained with our Java tool for system reliability and safety computation, which is available on the WWW.

Keywords safety evaluation, reliability analysis, system reliability, reliability formula, sum of disjoint products (SDP), computational efficiency, statistical evaluation of algorithms, significant difference

1. Introduction

Fault trees and reliability networks (also known as two-, k- or multi-terminal networks) illustrate the causal relationship between system and component failures. Both kinds of graphs, representing the so-called system structure, are frequently used for reliability and safety modeling of complex technical systems. The advantage they possess over Markov chains and stochastic Petri nets is a concise representation and efficient solution algorithms. The class of algorithms known as sum of disjoint products (SDP) is the one most often used for such reliability resp. safety calculations and is therefore an important technique for reliability and safety analysis. It is applied to compute various exact or approximate measures related to the reliability and safety of technical systems, such as availability, dependability, reliability, performability, safety, and risk [Hei97]. SDP algorithms also play an important role in the calculation of degrees of support in the context of probabilistic assumption-based reasoning and the Dempster-Shafer theory of evidence [BeK95, Koh95, KoM95]. SDP methods start by generating the minpaths of a reliability graph or the mincuts of a fault tree. Given the reliabilities of the components and the minimal sets of components (called minpaths) that allow the system to operate (safely), an SDP algorithm computes the reliability resp. safety, i.e. the probability that all components in at least one of these sets are operational. The system

reliability or safety is the probability of the union of the minimal sets. This is a union-of-products problem. The idea of SDP is to convert the sum of products into a sum of disjoint products, so that the probability of the sum of products can be expressed as the sum of the probabilities of the individual disjoint products. If component failures are statistically independent, then the probability of each disjoint product is the product of the corresponding component reliabilities. If this assumption of statistical independence does not hold in practice, the result computed under it may still serve as an approximate bound. In general, when using the SDP approach, the evaluation is performed in three phases: 1) enumeration of all minimal sets (minpaths or mincuts), 2) computation of the disjoint products by an SDP algorithm, and 3) calculation of the system reliability measure by assigning component reliabilities to the reliability formula, i.e. to the (weighted) sum of the disjoint products. Results reported in the literature show that the second phase usually dominates the total running time of the evaluation. It is thus important to reduce the output of the SDP algorithm by obtaining fewer disjoint products and hence a smaller probabilistic reliability formula, because this reduces the rounding error, the storage space, and the computation time of the reliability calculation. That fewer disjoint products result in a smaller probabilistic reliability formula is extremely important in system modeling, when this formula (as a sum of products) is evaluated repeatedly for various combinations of component reliabilities. So the quality of an SDP algorithm can be measured by the number of disjoint products it generates. Abraham was one of the first to introduce an algorithm for the generation of disjoint products; it uses single-variable inversion and thereby produces a huge number of disjoint products [Abr79].
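Why disjointness is needed can be seen in a toy computation. The sketch below (Python; the minpaths {1,2}, {2,3} and the component reliabilities of 0.9 are invented for illustration, not taken from the paper) shows that summing the probabilities of overlapping products overcounts, while an SDP form of the same union adds up correctly:

```python
# Toy system with overlapping minpaths {1,2} and {2,3}, all r_i = 0.9.
# Summing the raw product probabilities overcounts the overlap;
# inclusion-exclusion or a sum of *disjoint* products is needed.
r = {1: 0.9, 2: 0.9, 3: 0.9}

p12 = r[1] * r[2]          # P(x1 x2)
p23 = r[2] * r[3]          # P(x2 x3)
p123 = r[1] * r[2] * r[3]  # P(x1 x2 x3), the overlap

naive = p12 + p23                       # overcounts: not a probability
inclusion_exclusion = p12 + p23 - p123  # exact union probability
# SDP form of the same union: x1 x2  +  (not x1) x2 x3
sdp = p12 + (1 - r[1]) * r[2] * r[3]

print(naive, inclusion_exclusion, sdp)
```

The naive sum exceeds 1, while the SDP form reproduces the inclusion-exclusion value exactly, because the two disjoint products cannot occur simultaneously.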
Further work on this technique of single-variable inversion yielded only marginal improvements. To avoid the disadvantage of producing an immense number of disjoint products, Heidtmann presented a fundamentally new approach (sometimes called KDH88) that applies inversion to multiple or grouped variables, which he called subproducts [Hei89]. Later a similar but less powerful [LuT98a,b] version was published in [VeT91], where the method of multiple-variable inversion was verified. Heidtmann [Hei95] extended his technique to noncoherent (non-monotone) systems, and this algorithm was verified in [BeM96]. The extended sum of disjoint products algorithm is applicable to all coherent and non-coherent system structures. It applies not only to reliability and safety structures that can be specified in terms of Boolean logic, but also to dynamic reliability and safety problems that can be handled by logical extensions such as temporal logic [Hei91, Hei92, Hei97]. Unfortunately, its computational cost grows exponentially with the size of the considered system. Only for specific classes of system structures, such as k-out-of-n systems [BaH84, Hei86], k-to-l-out-of-n structures [Hei81, Rus87, UpP93], and series-parallel systems, are linear-time algorithms known [Mis93, Hei95, Hei97]. Numerous comparative computations published in the literature [Hei89, VeT91, Hei95, BeM96, Hei97, LuT98a-c, CDR99] confirm that for both methods the number of produced disjoint products and the computation time grow exponentially with the size of the analyzed system. But these examples also indicate that the newer technique of Heidtmann, using multiple-variable inversion, is superior to approaches considering only single variables, like Abraham's algorithm. This was mathematically proved in [And92] in the sense that Abraham's technique is in no case superior to Heidtmann's algorithm.
In [BeM96] it is said: "Compared to the algorithm of Abraham, Heidtmann's method is considerably more efficient and will generate a much smaller number of disjoint products in most cases. A simple look at the results shows the strong superiority of Heidtmann's algorithm over the algorithm of Abraham, because it produces far fewer disjoint products in much less time: in 29 cases the improvement (reduction) in the number of disjoint products generated is more than 50%, and in 17 cases it is more than 80%. The improvement becomes larger as the number of system components (edges) increases." These results were based on a very small sample of only 35 systems. Consequently, we investigate a much larger sample of systems to answer the following quantitative questions with

much more confidence: How much better is the newer approach of Heidtmann? Is it significantly superior? Are the trends observed on small samples also valid for a much larger sample? In this paper we derive statements of statistical validity to answer these questions. This means that we investigate a large and statistically valid sample of systems to obtain results such as: Heidtmann's algorithm is better than Abraham's method by a certain factor on average, or significantly better. Reliability and safety computation for large systems imposes high storage and computation time requirements on computers. To quantify and characterize these requirements, we study the best known sum of disjoint products techniques on a huge number of systems. As results we present detailed measurements and stochastic characteristics of the output of both algorithms. In general, the presented results give some insight into the stochastic characteristics of both algorithms and can be applied to design new algorithms and to manage the computation process skillfully. First, Section 2 presents an example to illustrate both methods and especially their difference. Then details of their implementation and their use from our homepage are discussed. Section 3 presents the experimental environment and describes the performed computational experiments, followed by the first results on the number of disjoint products as the characteristic attribute of SDP algorithms. This is confirmed by the observed computation times. In Section 4, regression analysis is applied to the measured values to characterize the growth of the number of disjoint products and of the computation time by exponential expressions. We apply these expressions, for instance, to estimate the number of disjoint products and the computation time for larger reliability graphs.

2. Illustration of both Algorithms and their Implementation

In order to explain the purpose, the different approaches, and the different results of the two SDP algorithms compared in this paper, let us start with a short example. Assume that a system with 5 components operates if components 1, 2, and 3 operate. It may also operate if components 4 and 5 operate. So this system has the minsets (minpaths) {1,2,3} and {4,5}. For this example system, Abraham's method produces the following disjoint events when it makes the last minset {4,5} disjoint to the former one, {1,2,3}:

A1. Component 1 is not operational; components 4 and 5 are operational.
A2. Component 2 is not operational; components 1, 4, and 5 are operational.
A3. Component 3 is not operational; components 1, 2, 4, and 5 are operational.

In all three cases the first condition represents the single-variable inversion (negation) technique. In a formal Boolean notation where the AND operator (conjunction) is written as a product and negation as an overbar, these disjoint events are represented by the following disjoint products: x̄1x4x5, x1x̄2x4x5, x1x2x̄3x4x5. This results in the reliability formula of Equation 1 for the system reliability R, depending on the component reliabilities r1, r2, r3, r4, r5, in the case of statistically independent component failures. For this example system, the probabilistic reliability formula implied by Abraham's method consists of 4 summands (see Equation 1):

R = r1r2r3 + (1-r1)r4r5 + r1(1-r2)r4r5 + r1r2(1-r3)r4r5

(1)

Heidtmann's method results in the following single event, which is disjoint to the minpath {1,2,3}:

H1. It is not true that components 1, 2, and 3 are all operational, while components 4 and 5 are operational.

Here we invert or negate multiple variables (x1x2x3) at once, i.e. a whole group of component states (components 1, 2, and 3 are operational). In a formal Boolean notation this reads ¬(x1x2x3)x4x5 (or equivalently (x̄1 ∨ x̄2 ∨ x̄3)x4x5 after applying the Shannon inversion theorem with the OR operator ∨). This yields the following reliability formula with only two summands (see Equation 2):

R = r1r2r3 + (1-r1r2r3)r4r5

(2)
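As a quick numerical cross-check, both formulas can be evaluated for concrete component reliabilities. The sketch below (Python; the uniform value 0.9 is an illustrative assumption, not from the paper) confirms that Equations 1 and 2 agree with each other and with direct inclusion-exclusion on the two minpaths:

```python
# Numerical check that Abraham's 4-term formula (Eq. 1) and
# Heidtmann's 2-term formula (Eq. 2) give the same reliability.
# Component reliabilities are illustrative values, not from the paper.
r1 = r2 = r3 = r4 = r5 = 0.9

# Eq. (1): Abraham, single-variable inversion -> 4 disjoint products
R_abraham = (r1*r2*r3
             + (1 - r1)*r4*r5
             + r1*(1 - r2)*r4*r5
             + r1*r2*(1 - r3)*r4*r5)

# Eq. (2): Heidtmann, multiple-variable inversion -> 2 disjoint products
R_heidtmann = r1*r2*r3 + (1 - r1*r2*r3)*r4*r5

# Cross-check: inclusion-exclusion on the minpaths {1,2,3} and {4,5}
R_exact = r1*r2*r3 + r4*r5 - r1*r2*r3*r4*r5

print(R_abraham, R_heidtmann, R_exact)
```

All three expressions evaluate to the same system reliability; Heidtmann's formula simply reaches it with half the summands.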

So far we have explained both algorithms by example. Detailed textual and formal descriptions can be found in [Hei89, VeT91, Hei95, Hei97]. The last cited reference also includes programs in Pascal. Most implementations of the algorithms use the programming language C [VeT91, SoR91, LuT98a/b, TKK00, Vah98], and some of them are included in tools for reliability analysis, e.g. MOSEL [ABS99, Her00], SHARPE [TrM93, STP95, PTV97], CAREL [SoR91], and SyRelAn [Vah95, Vah98]. Another implementation uses Common Lisp [BeM96]. The experiments discussed in the following are based on our own implementation in Java. Our implementation, the experiments, and the following results refer to the class of systems called reliability graphs. All nodes of these graphs are assumed to be perfectly reliable, so that they cannot fail and need not be considered in the reliability analysis, while the edges may fail, so that they represent the components of the system which are susceptible to failure. A reliability graph is said to be operational if and only if all terminal nodes, a marked subset of all nodes, are connected via operational edges. So the reliability of a reliability graph is the probability that all of its terminal nodes are connected by paths of operational edges. An algorithm for randomly generating reliability graphs with a given number of edges, as well as Abraham's and Heidtmann's algorithms, were implemented in Java 2 and executed by the Java 2 Runtime Environment. These programs were used for the experiments discussed in the following sections. Moreover, we integrated these programs as Java applets into a Web tool named ReNeT (Reliability of Network Topologies), which can be run from our homepage http://www.informatik.uni-hamburg.de/TKRN/world/tools on your own computer.
There is a graphical user interface where you may enter your reliability graph by drawing nodes (black circles for terminal nodes (left mouse button) and white circles for others (right mouse button)) and connecting them with edges. After drawing, you can enter probabilities for component (edge) failures. The program then computes and shows both reliability formulas generated by Abraham's resp. Heidtmann's algorithm as different sums of disjoint products. Finally it shows the reliability of the system or reliability graph you entered. The following section presents our comprehensive experimental measurements, which were produced on a notebook with a Pentium II 366 processor, 64 MB RAM, 128 MB swap, and SuSE Linux 7.1, kernel 2.4.0.
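The contrast between the two inversion styles in the example of this section can be sketched in a few lines of Python. This is only a sketch of the single disjointing step for one earlier minpath (hypothetical helper names; the full algorithms handle arbitrarily many minpaths and are considerably more involved):

```python
# Sketch: how many disjoint products each inversion style emits when
# making minpath P disjoint to ONE earlier minpath Q. Mirrors only the
# two-minpath example of Section 2, not the full published algorithms.

def svi_terms(q, p):
    """Single-variable inversion (Abraham-style): one term per element
    of Q \\ P, negating it while keeping the previously treated ones."""
    extra = sorted(q - p)
    terms = []
    for i, e in enumerate(extra):
        terms.append({"negated": [e], "positive": sorted(p) + extra[:i]})
    return terms

def mvi_terms(q, p):
    """Multiple-variable inversion (Heidtmann-style): negate the whole
    subproduct Q \\ P at once -> a single term."""
    return [{"negated_group": sorted(q - p), "positive": sorted(p)}]

q, p = {1, 2, 3}, {4, 5}
print(len(svi_terms(q, p)), len(mvi_terms(q, p)))  # 3 vs 1
```

For the example minsets this reproduces the counts seen above: three disjoint products A1-A3 under single-variable inversion versus the single product H1 under multiple-variable inversion.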

3. Results of Measurement

In this section we give first computational results comparing the performance of Heidtmann's algorithm with that of Abraham's algorithm. Altogether, 3200 randomly generated reliability graphs with 5 to 16 unreliable edges were investigated, representing 3200 systems with 5 to 16 unreliable components. In more detail, for each number of edges from 5 to 12 we randomly produced and analyzed 400 reliability graphs, and for 13 to 16 edges we generated and investigated 100 graphs for each number of edges. First we show the detailed results for reliability graphs with 16 edges. In this case we randomly generated 100 graphs, computed their disjoint products by each of the two algorithms, and counted the number of resulting disjoint products. Consistent with the theoretical result that Abraham's algorithm produces at least as many disjoint products as Heidtmann's method, the number of actually computed disjoint products was smaller for Heidtmann's algorithm on every one of the investigated graphs. For Fig. 1 we sorted the resulting values by increasing number of disjoint products with regard to Heidtmann's algorithm. It can be seen that in some cases the Abraham method produces more than twice as many disjoint products as Heidtmann's algorithm. In the worst case, Abraham's algorithm produces about 8000 disjoint products, whereas only about 2000 disjoint products result from Heidtmann's method. So we observe small to large differences in the number of disjoint products between the two algorithms. The same applies to the computation time, which can be seen from Fig. 2, noting the logarithmic scale. In single cases, when the number of produced disjoint products is nearly identical for both methods, Abraham's algorithm may even be a little faster than Heidtmann's method, because Abraham's procedure for deriving disjoint products is a little simpler. All in all, the values of both attributes evaluated on the large sample show the great advantage of Heidtmann's method over Abraham's.

Fig. 1. Number of disjoint products (left) and computation time (right) for 100 sample graphs with 16 edges, produced by Abraham's and Heidtmann's algorithms

Now we present the mean value of the number of disjoint products and the mean computation time in milliseconds for every sample of 400 resp. 100 graphs with an identical number of edges. These mean values are given in Tab. 1 and illustrated in Fig. 2. They seem to grow exponentially; this is investigated in the following section.

Number of       Mean Number of            Mean Computation
Components      Disjoint Products         Time (ms)
(edges)         Abraham    Heidtmann      Abraham      Heidtmann
  5                1.8         1.6             1.0          0.7
  6                2.9         2.4             1.1          0.9
  7                9.6         7.0             7.2          3.1
  8               11.6         8.1            12.1          4.9
  9               30.4        20.2            64.1         25.5
 10               55.5        34.3           173.7         65.3
 11               82.0        47.6           613.8        248.4
 12              176.5        95.3          2179.3        718.5
 13              262.0       133.2          5415.1       1628.5
 14              467.7       234.4         18706.8       4669.6
 15             1161.8       518.6        116811.3      20804.1
 16             2016.4       804.3        535857.2      73759.3

Table 1: Empirical mean of the number of disjoint products and of the computation time in ms for 400 resp. 100 sample graphs, resulting from Abraham's and Heidtmann's algorithms

Fig. 2. Comparison of the mean number of disjoint products (left) and the mean computation time (right) for both algorithms (note the logarithmic scale!)

To show the influence of different operating systems on the computation time, Table 2 gives the empirical mean values of the computation time in milliseconds for 100 sample graphs with 10 edges and both algorithms.

Operating       Computation Time (ms)
System          Abraham    Heidtmann
Linux             173.7       65.3
Windows NT        114.7       94.0
Solaris           203.8      124.2

Table 2: Empirical mean of the computation time (ms) on computers with different operating systems for 100 sample graphs with 10 edges

4. Statistical Comparison

To show how the mean number of disjoint products and the mean computation time increase with the number of edges, we illustrate the values of Tab. 1 in Fig. 3. It is obvious that both quantities for Heidtmann's algorithm grow exponentially. The analogous figure for Abraham's method, whose values increase much more rapidly, is not presented here.

Fig. 3. Mean numbers of disjoint products (left) and computation time (right) for sample graphs with 5 to 16 edges produced by Heidtmann's algorithm

The measured results lead to the hypothesis that neither the number of disjoint products produced nor the computation time follows a normal distribution. The corresponding statistical tests (Kolmogorov-Smirnov) confirmed this hypothesis. The regression analysis of the statistical software package SPSS was used to compute an approximation of the mean number of disjoint products as a function of the number of edges n. It yields the following exponential functions: 0.106 exp(0.5594 n) for Heidtmann's algorithm, as seen in Fig. 3 (left), and 0.0869 exp(0.6276 n) for Abraham's technique. A similar approximation of the computation time yields 0.0015 exp(1.0839 n) for Heidtmann's algorithm, as seen in Fig. 3 (right), and 0.0011 exp(1.2129 n) for Abraham's method. Using these expressions we can estimate the number of disjoint products and the computation time for reliability graphs with more than 16 edges without explicitly generating and investigating such graphs. The estimated numbers of disjoint products and the estimated computation times for systems with 17 to 20 components and both algorithms are given in Table 3. Here the immensely increasing superiority of Heidtmann's technique over the method of Abraham is obvious.
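The log-linear least-squares fit behind such exponential regressions can be sketched in pure Python on the Table 1 means for Heidtmann's algorithm. This reimplements the standard fitting procedure, not SPSS itself, and the extrapolation is only illustrative:

```python
# Fit ln(y) = ln(a) + b*n by ordinary least squares on the Table 1
# means of the number of disjoint products for Heidtmann's algorithm,
# then extrapolate to 17 edges (sketch of the regression approach).
import math

n_vals = list(range(5, 17))
dp_heidtmann = [1.6, 2.4, 7.0, 8.1, 20.2, 34.3, 47.6,
                95.3, 133.2, 234.4, 518.6, 804.3]

logs = [math.log(y) for y in dp_heidtmann]
n_mean = sum(n_vals) / len(n_vals)
l_mean = sum(logs) / len(logs)
b = (sum((n - n_mean) * (l - l_mean) for n, l in zip(n_vals, logs))
     / sum((n - n_mean) ** 2 for n in n_vals))
a = math.exp(l_mean - b * n_mean)

print(a, b)                   # roughly 0.1 and 0.56
print(a * math.exp(b * 17))   # extrapolated count for 17 edges
```

The fitted coefficients come out close to the regression function for Heidtmann's algorithm quoted above, and the extrapolation to 17 edges lands near the corresponding estimate in Table 3.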

Number of       Estimated Number of       Estimated Computation
Components      Disjoint Products         Time (ms)
(edges)         Abraham    Heidtmann      Abraham        Heidtmann
 17               3,740       1,444        1,006,480       152,730
 18               7,006       2,527        3,384,970       451,530
 19              13,124       4,422       11,384,280     1,334,900
 20              24,585       7,738       38,287,390     3,946,490

Table 3: Estimates of the number of disjoint products and of the computation time for reliability graphs with more than 16 edges for both algorithms

As the number of disjoint products and the computation time do not follow normal distributions, as tested with the Kolmogorov-Smirnov test, we applied the Wilcoxon test for related samples. It is clear that, for a specific reliability graph, the number of disjoint products produced by Abraham's algorithm and the corresponding number produced by Heidtmann's algorithm are related. The same applies to the computation times of both algorithms applied to the same reliability graph. So the numbers of disjoint products and the computation times can be arranged in pairs, where each pair belongs to the same reliability graph. We applied the Wilcoxon test to the samples of 400 randomly generated reliability graphs for each number of edges from 5 to 12 and to the samples of 100 randomly generated graphs for each number of edges from 13 to 16. The results of the Wilcoxon test show that for all these 11 samples the values of the number of disjoint products and of the computation time differ with high significance. The same result is obtained when we apply the test to the whole sample of 3200 reliability graphs. In all cases the significance level is zero to the reported precision, which means high significance. So the algorithm of Heidtmann is highly significantly superior to Abraham's method as far as the number of produced disjoint products and the computation time are concerned.
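The paired setting of the Wilcoxon signed-rank test can be illustrated with a small pure-Python sketch using the usual normal approximation. The paired counts below are invented for illustration only (the paper's tests were run with SPSS on the full samples), and the sketch omits tie handling, which the toy data does not need:

```python
# Toy illustration of the Wilcoxon signed-rank test for paired samples,
# with the normal approximation for the test statistic. The disjoint
# product counts below are hypothetical, not measured values.
import math

abraham   = [10, 12, 30, 55, 82]   # counts on five graphs (invented)
heidtmann = [7,  8, 20, 34, 47]    # paired counts for the same graphs

diffs = [a - h for a, h in zip(abraham, heidtmann)]
nonzero = [d for d in diffs if d != 0]
order = sorted(range(len(nonzero)), key=lambda i: abs(nonzero[i]))
# rank 1 = smallest |difference|; no ties occur in this toy data
w_plus = sum(rank + 1 for rank, i in enumerate(order) if nonzero[i] > 0)

n = len(nonzero)
mu = n * (n + 1) / 4
sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
z = (w_plus - mu) / sigma
print(w_plus, round(z, 3))
```

Even on five pairs, all differences pointing the same way pushes the statistic past the usual one-sided 5% threshold; with hundreds of pairs per sample, the same one-sided pattern drives the significance level toward zero.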

5. Conclusion and Outlook

This paper presented comprehensive measurements and a detailed stochastic characterization of the output of two frequently used algorithms for reliability and safety computation, an important step of reliability and safety analysis. First the two sum of disjoint products algorithms of Abraham and Heidtmann were explained by an example, which also illustrated the typical difference in the number of disjoint products. This attribute, together with the computation time of the two algorithms, was measured for 3200 reliability graphs. The results of these measurements were discussed, beginning with the measured values for 100 randomly generated reliability graphs with 16 edges, which correspond to systems with 16 unreliable components. The number of disjoint products and the computation time grow exponentially with the number of components for both algorithms, as do the differences between their numbers of disjoint products and between their computation times. Heidtmann's algorithm was significantly superior with respect to both attributes. Very similar behavior was observed for the 3100 other reliability graphs with 5 to 15 edges. Obviously the single-variable inversion technique of Abraham generally produces very large numbers of disjoint products, while the multiple-variable inversion of Heidtmann's method achieves appreciably fewer. This is very important, as it results in a much shorter computation time for the set of disjoint products, additionally reduces the storage space, and produces a much smaller reliability formula. Therefore the computation time for the evaluation of the reliability formula is reduced, as is the rounding error. None of the observed values of the performed measurements were normally (Gaussian) distributed. Based on the empirical results, the number of disjoint products and the computation time for both algorithms, and their differences, were estimated for larger systems by regression analysis.
All in all the values of both attributes, i.e. number of disjoint products and

computation time, evaluated on a large sample, show the great advantage of Heidtmann's algorithm over Abraham's. Because of the exponential growth, reliability and safety measures are often approximated for large systems. If only a given subset of a system's minsets is considered for approximate computation, Heidtmann's algorithm computes the approximate value faster than Abraham's method. If particular resources such as computation time or storage are limited, then Heidtmann's technique uses these restricted resources much more efficiently than Abraham's algorithm, producing closer approximations than Abraham's method with identical resources. The quantification of these advantages, as well as the use of both algorithms for approximation purposes in general, needs further investigation and will be the subject of further studies. The derived stochastic characteristics can be applied to the development of more efficient algorithms for reliability computation, so that they produce fewer disjoint products and a smaller reliability formula. In the context of approximation, our characterization can be used straightforwardly to derive some comparative aspects. Furthermore, these results can be applied to estimate characteristic attributes of larger systems, for instance by regression analysis, or our Java code can be used to analyze your own systems. In this case there are two ways to perform your own reliability analyses. On the one hand, our tool is directly usable as a Java applet, e.g. by applying it via the WWW to your reliability problem and executing the reliability computation on your own computer. On the other hand, our Java programs can serve as a basis for local installations of both algorithms. In both cases our programs can be used for your own reliability and safety analyses. Moreover, the behavior of the algorithms can be investigated for various systems by means of measurement, or the influence of different system structures on the efficiency of the algorithms can be observed.
It may also be possible to open up new areas of application for this efficient SDP algorithm [YuT98]. Besides coherent and non-coherent system structures, as well as the solution of dynamic reliability and safety problems using temporal logic [Hei91, Hei92, Hei97], it is important to know and use the algorithm that minimizes the reliability formula (minimal sum of disjoint products), since such formulas also play a central role in probabilistic assumption-based truth maintenance systems [LaL89] and probabilistic assumption-based reasoning [Koh95, KoM95, BeK95]. In many practical applications it is possible to reduce large systems by well-known reduction methods [Mis93, Hei97] so that the reduced system can then be analyzed by SDP methods. The often-used example of a two-terminal ARPA network with 24 edges (system components) can be reduced to 8 edges by series and polygon-to-chain reductions [Hei97].
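The two simplest of these reductions can be sketched as follows (illustrative helper names; series edges multiply, parallel edges combine complementarily; the polygon-to-chain reduction is more involved and is not shown):

```python
# Sketch of elementary reliability-graph reductions, assuming
# statistically independent edge failures. Helper names are ours.

def series(p, q):
    """Two edges in series work only if both work."""
    return p * q

def parallel(p, q):
    """Two parallel edges work if at least one works."""
    return 1 - (1 - p) * (1 - q)

# Two 0.9-edges in series, then in parallel with a 0.8 edge
# (illustrative values): each step removes one edge from the graph.
r_series = series(0.9, 0.9)
r_total = parallel(r_series, 0.8)
print(r_series, r_total)
```

Applying such rules repeatedly shrinks the graph before any SDP algorithm runs, which is exactly why a reduction from 24 to 8 edges, as for the ARPA network, pays off so strongly given the exponential growth reported above.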

References

[Abr79] Abraham J.A., An improved algorithm for network reliability, IEEE Trans. Reliability, Vol. 28, No. 1, 1979, 58-61; also in [RaA90], 89-92
[ABS99] Almasi B., Bolch G., Sztrik J., Modeling Terminal Systems using MOSEL, Proc. Europ. Simulation Symp. ESS'99, Erlangen, Germany, 1999
[And92] Anders J.M., Methods for the reliability analysis of complex binary systems, PhD thesis, Dept. Mathematics, Humboldt-University, Berlin, 1992
[BaH84] Barlow R.E., Heidtmann K.D., Computing k-out-of-n system reliability, IEEE Trans. Reliability, 33, 3, 1984
[BeK95] Besnard P., Kohlas J., Evidence theory based on general consequence relations, Intern. J. Foundations of Computer Science 6, 1995, 119-135
[BeM96] Bertschy R., Monney P.A., A generalization of the algorithm of Heidtmann to non-monotone formulas, J. of Computational and Applied Mathematics, Vol. 76, No. 1-2, Dec. 1996, 55-76
[CDR99] Chatelet E., Dutuit Y., Rauzy A., Bouhoufani T., An optimized procedure to generate sums of disjoint products, Reliability Engineering and System Safety, Vol. 65, No. 3, Sept. 1999, 289-294
[Hei81] Heidtmann K.D., A class of noncoherent systems and their reliability analysis, Dig. 11th Ann. Intern. Symp. Fault-Tolerant Computing, FTCS 11, Portland, USA, 1981
[Hei86] Heidtmann K.D., Minset splitting for improved reliability computation, IEEE Trans. Reliability 35, 5, 1986
[Hei89] Heidtmann K.D., Smaller sums of disjoint products by subproduct inversion, IEEE Trans. Reliability 38, 3, 1989, 305-311
[Hei91] Heidtmann K.D., Temporal logic applied to reliability modeling of fault-tolerant systems, Proc. 2nd Intern. Symp. Formal Techniques in Real-Time and Fault-Tolerant Systems, Nijmegen, Netherlands, 1992, in: Vytopil J. (ed.), Lecture Notes in Computer Science, No. 571, Springer, Berlin, 1991
[Hei92] Heidtmann K.D., Deterministic reliability modeling of dynamic redundancy, IEEE Trans. Reliability 41, 3, 1992, 378-385
[Hei95] Heidtmann K.D., Methoden zur Zuverlässigkeitsanalyse unter besonderer Berücksichtigung von Rechnernetzen (Methods for reliability analysis with special emphasis on computer communication networks), Habilitationsschrift, Dept. Computer Science, Hamburg University, 1995 (in German)
[Hei97] Heidtmann K.D., Zuverlässigkeitsbewertung technischer Systeme (Reliability analysis of technical systems), Teubner, Stuttgart, 1997 (in German)
[Her00] Herold H., MOSEL, An Universal Language for Modeling Computer, Communication, and Manufacturing Systems, PhD Thesis, Techn. Faculty, University of Erlangen, 2000
[Koh95] Kohlas J., Mathematical foundations of evidence theory, in: Coletti G., Dubois D., Scozzafava R. (eds.), Mathematical Models for Handling Partial Knowledge in Artificial Intelligence, Plenum Press, New York, 1995, 31-64
[KoM95] Kohlas J., Monney P.A., A Mathematical Theory of Hints, An Approach to the Dempster-Shafer Theory of Evidence, Lecture Notes in Economics and Mathematical Systems, Vol. 425, Springer, Berlin, 1995
[LaL89] Laskey K.B., Lehner P.E., Assumptions, beliefs and probabilities, Artificial Intelligence 41, 1989, 65-77
[LuT98a] Luo T., Trivedi K.S., An improved multiple variable inversion algorithm for reliability calculation, 10th Intern. Conf. Tools'98, Palma de Mallorca, Spain, Sept. 1998, in: Puigjaner R., Savino N.N., Serra B. (eds.), Computer Performance Evaluation, Modelling Techniques and Tools, Springer, 1998
[LuT98b] Luo T., Trivedi K.S., An improved algorithm for coherent-system reliability, IEEE Trans. Reliability, Vol. 47, No. 1, March 1998, 73-78
[LuT98c] Luo T., Trivedi K.S., Using multiple inversion techniques to analyze fault trees with inversion gates, 28th Ann. Fault Tolerant Computing Symp., FTCS 98, Munich, 1998
[Mis93] Misra K.B., New trends in system reliability evaluation, Elsevier, 1993
[PTV97] Puliafito A., Tomarchio O., Vita L., Porting SHARPE on the Web, Proc. Tools'97, Saint Malo, June 1997
[RaA90] Rai S., Agrawal D.P., Distributed Computing Network Reliability, IEEE Computer Society Press, Washington, 1990
[Rus87] Rushdi A.M., Efficient computation of k-to-l-out-of-n systems, Reliability Engineering 17, 1987, 157-163
[RVT95] Rai S., Veeraraghavan M., Trivedi K.S., A survey of efficient reliability computation using disjoint products approach, Networks 25, 3, 1995, 147-163
[SoR91] Soh S., Rai S., CAREL: Computer aided reliability estimator for distributed computing networks, IEEE Trans. Parallel and Distributed Systems 2, 2, 1991, 199-213
[STP95] Sahner R., Trivedi K.S., Puliafito A., Performance and Reliability Analysis of Computer Systems: An Example-Based Approach Using the SHARPE Software Package, Kluwer Academic Publishers, Massachusetts, 1995
[TKK00] Tsuchiya T., Kajikawa T., Kikuno T., Parallelizing SDP (sum of disjoint products) algorithms for fast reliability analysis, IEICE Trans. Inf. & Syst., Vol. E83, No. 5, May 2000, 1183-1186
[TrM93] Trivedi K.S., Malhotra M., Reliability and performability techniques and tools: A survey, Proc. 7th ITG/GI Conf. Measurement, Modelling and Evaluation of Computer and Communication Systems, Aachen University of Technology, 1993, 27-48
[UpP93] Upadhyaya S.J., Pham H., Analysis of a class of noncoherent systems and an architecture for the computation of the system reliability, IEEE Trans. Computers 42, 4, 1993
[Vah95] Vahl A., Reliability Assessment of Complex System Structures: A Software Tool for Design Support, Proc. 9th Symp. Quality and Reliability in Electronics, Relectronic'95, Budapest, 1995, 161-166
[Vah98] Vahl A., Interaktive Zuverlässigkeitsanalyse von Flugzeug-Systemarchitekturen (Interactive reliability analysis of aircraft system architectures), PhD Thesis, Technical University Hamburg-Harburg, VDI-Verlag, Düsseldorf, 1998 (in German)
[VeT91] Veeraraghavan M., Trivedi K.S., An improved algorithm for the symbolic reliability analysis of networks, IEEE Trans. Reliability, Vol. 40, No. 3, Aug. 1991, 347-358
[YuT98] Ma Y., Trivedi K.S., An algorithm for reliability analysis of phased-mission systems, Intern. Symp. on Software Reliability Engineering, ISSRE 1998
