Bayesian Networks in Reliability: The Good, the Bad, and the Ugly
Helge Langseth
Department of Computer and Information Science, Norwegian University of Science and Technology
MMR’07
Outline

1. Introduction
2. The Good: Why Bayesian Nets are popular (Mathematical properties, Making decisions, Applications)
3. The Bad: Building complex quantitative models (The model building process, The quantitative part, Utility theory)
4. The Ugly: Continuous variables (Introduction, Approximations)
5. Concluding remarks
Introduction

A simple example: "Explosion"

Variables: E: Environment, L: Leak, G: GD failed, X: Explosion, C: Casualties.

For the node X we have pa(X) = {L, G} and nd(X) = {E, L, G}. The local Markov property therefore gives X ⊥⊥ E | {L, G}, and hence P(X | E, L, G) = P(X | L, G). Further d-separation rules: Jensen & Nielsen (2007).

The conditional probability table P(G | pa(G)):

              G = yes         G = no
E = hostile   λ_H · τ/2       1 − λ_H · τ/2
E = normal    λ_N · τ/2       1 − λ_N · τ/2

The joint distribution factorizes over the graph:

P(E, L, G, X, C) = P(E) · P(L | E) · P(G | E, L) · P(X | E, L, G) · P(C | E, L, G, X)
                 = P(E) · P(L | E) · P(G | E) · P(X | L, G) · P(C | X)

Markov properties ⇔ Factorization property.
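To make the factorization property concrete, here is a minimal Python sketch of how the joint is represented by the five local tables and how a marginal such as P(X = yes) is obtained by summing the factorized product over the remaining variables. All numerical values are hypothetical; the slides do not give the actual CPT entries.

```python
from itertools import product

# Hypothetical numbers; the slides do not give the actual CPT entries.
p_E = {True: 0.1, False: 0.9}                      # P(E = hostile)
p_L = {True: 0.05, False: 0.01}                    # P(L = yes | E)
p_G = {True: 0.20, False: 0.02}                    # P(G = failed | E)
p_X = {(True, True): 0.9, (True, False): 0.1,      # P(X = yes | L, G)
       (False, True): 0.0, (False, False): 0.0}
p_C = {True: 0.3, False: 0.0}                      # P(C = yes | X)

def bern(p_true, value):
    """Probability of a binary outcome, given P(value = True)."""
    return p_true if value else 1.0 - p_true

def joint(e, l, g, x, c):
    """P(E,L,G,X,C) from the factorization P(E)P(L|E)P(G|E)P(X|L,G)P(C|X)."""
    return (bern(p_E[True], e) * bern(p_L[e], l) * bern(p_G[e], g)
            * bern(p_X[(l, g)], x) * bern(p_C[x], c))

# Any marginal is a sum over the factorized joint, e.g. P(X = yes):
p_x_yes = sum(joint(e, l, g, True, c)
              for e, l, g, c in product([True, False], repeat=4))
print(f"P(X = yes) = {p_x_yes:.4f}")
```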
The Good: Why Bayesian Nets are popular

Mathematical properties

What the mathematical foundation has to offer:
- Intuitive representation: Almost defined as a "box-diagram with formal meaning"; a causal interpretation is natural in many cases.
- Efficient representation: The number of required parameters is reduced. If all variables are binary, the example requires 11 "local" parameters (1 for P(E), 2 for P(L|E), 2 for P(G|E), 4 for P(X|L,G) and 2 for P(C|X)), compared to the 2^5 − 1 = 31 "global" parameters of the full joint.
- Efficient calculations: Any joint distribution P(x_i, x_j) or conditional distribution P(x_k | x_ℓ, x_m) can be calculated efficiently.
- Model estimation: Parameters (for a fixed structure) can be estimated via EM; structure can be estimated by discrete optimization techniques.
Making decisions

Influence diagrams: The "Explosion" example revisited

[Figure: the "Explosion" network extended to an influence diagram, with decision nodes SSM and Test interval, a chance node F: Effectiveness, SSM, and utility nodes Cost1, Cost2 and Cost3.]
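As an illustration of how such an influence diagram is evaluated, the sketch below computes the expected utility of a single binary SSM decision and picks the best option. This is a deliberately stripped-down version of the diagram above (it ignores Test interval and F), and all probabilities and costs are invented for the example; they are not taken from the slides.

```python
# Minimal influence-diagram evaluation: choose the SSM decision with the
# highest expected utility. All numbers below are hypothetical.
p_casualties_given_ssm = {"yes": 0.002, "no": 0.010}   # P(C = yes | SSM decision)
cost_ssm = {"yes": -1.0e5, "no": 0.0}                   # cost of the modification
cost_casualties = -5.0e7                                # cost if casualties occur

def expected_utility(ssm):
    p_c = p_casualties_given_ssm[ssm]
    return cost_ssm[ssm] + p_c * cost_casualties

for d in ["yes", "no"]:
    print(f"SSM = {d:3s}: EU = {expected_utility(d):12.1f}")
print("Optimal decision:", max(["yes", "no"], key=expected_utility))
```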
Applications

An application: Troubleshooting
Underlying model

[Figure: the underlying troubleshooting model is a layered Bayesian network. A system layer contains the top event TOP, causes C1, ..., C4, components X1, ..., X5 and a node E; on top of this sit an action layer A1, ..., A5, a result layer R1, ..., R5, and a question layer (QS).]
Other applications:
- Software reliability
- Modelling organizational factors (e.g., the SAM framework)
- Explicit models of dynamics (e.g., repairable systems, phased-mission systems, monitoring systems)

Some of these can be seen at the Bayes net sessions later today.
The Bad: Building complex quantitative models

The model building process

Phases of the model building process:
- Step 0 – Decide what to model: Select the boundary for what to include in the model.
- Step 1 – Define variables: Select the important variables in the domain.
- Step 2 – The qualitative part: Define the graphical structure that connects the variables.
- Step 3 – The quantitative part: Fix parameters to specify each P(x_i | pa(x_i)). This is the 'bad' part.
- Step 4 – Verification: Verify the model.
The quantitative part

The quantitative part: Defining P(y | pa(y))

Consider a binary node Y with m binary parents Z1, ..., Zm. The CPT P(y | z1, ..., zm) contains 2^m parameters. Several strategies exist for specifying it:

Naïve approach (2^m conditional probabilities): All parameters are required if no other assumptions can be made.
Deterministic relations (parameter free): Y is considered a function of its parents, e.g., {Y = fail} ⇔ {Z1 = fail} ∨ {Z2 = fail} ∨ ... ∨ {Zm = fail}.
Noisy-OR relation (m + 1 conditional probabilities): Introduce independent inhibitors Q1, ..., Qm and assume {Q1 = fail} ∨ ... ∨ {Qm = fail} ⇒ {Y = fail}. For each Qi we have P(Qi = fail | Zi = fail) = q_i and P(Qi = fail | Zi = ¬fail) = 0, and the "leak probability" is P(Y = fail | Q1 = ... = Qm = ¬fail) = q_0. A sketch of the induced CPT is given below.
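A minimal sketch of the leaky noisy-OR CPT implied by these assumptions; the inhibitor probabilities q_i and the leak q_0 below are illustrative values, not taken from the slides.

```python
from itertools import product

def noisy_or_fail_prob(z, q, q0):
    """P(Y = fail | z) under leaky noisy-OR.

    z  : tuple of parent states (True = fail), length m
    q  : q[i] = P(Q_i = fail | Z_i = fail)
    q0 : leak probability, P(Y = fail | all Q_i = not-fail)
    """
    p_no_failure_cause = 1.0 - q0
    for zi, qi in zip(z, q):
        if zi:                      # an inhibitor can only fire if its parent failed
            p_no_failure_cause *= (1.0 - qi)
    return 1.0 - p_no_failure_cause

# Example: m = 3 parents, specified by just m + 1 = 4 numbers (hypothetical).
q, q0 = [0.9, 0.7, 0.5], 0.01
for z in product([False, True], repeat=3):
    print(z, f"P(Y = fail | z) = {noisy_or_fail_prob(z, q, q0):.3f}")
```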
Logistic regression (from m + 1 to 2^m regression parameters): Y is the dependent variable in a logistic regression with the Zi's as "covariates":

\log \frac{p_{z_1,\ldots,z_m}}{1 - p_{z_1,\ldots,z_m}} = \eta_0 + \sum_j \eta_j z_j + \sum_i \sum_j \eta_{ij} z_i z_j + \ldots

A main-effects-only model uses m + 1 parameters; including all interaction terms gives 2^m. A sketch is given below.
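A sketch of how a main-effects-only logistic regression generates the full CPT from m + 1 parameters; the coefficients are hypothetical.

```python
import math
from itertools import product

def logistic_cpt(eta0, eta):
    """Build P(Y = fail | z) for all parent configurations from a
    main-effects logistic regression (m + 1 parameters in total)."""
    m = len(eta)
    cpt = {}
    for z in product([0, 1], repeat=m):
        lin = eta0 + sum(e * zi for e, zi in zip(eta, z))
        cpt[z] = 1.0 / (1.0 + math.exp(-lin))
    return cpt

# Example with m = 3 parents and hypothetical coefficients.
cpt = logistic_cpt(eta0=-3.0, eta=[2.0, 1.0, 0.5])
for z, p in cpt.items():
    print(z, f"P(Y = fail | z) = {p:.3f}")
```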
IPF procedure (m + 1 marginal distributions, m CPRs): Find a joint probability table over Z1, ..., Zm, Y with given conditional probability ratios (CPRs). Assume m = 1 and that p_0(z, y) is initialized to fit the CPR; iterative proportional fitting then alternates between matching the given marginals:

p'_k(z, y) \leftarrow p_{k-1}(z, y) \cdot \frac{p(z)}{\sum_{y'} p_{k-1}(z, y')}, \qquad
p_k(z, y) \leftarrow p'_k(z, y) \cdot \frac{p(y)}{\sum_{z'} p'_k(z', y)}

A sketch of these updates is given below.
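A small sketch of the IPF updates for the m = 1 case on a 2 × 2 table: start from an initial table (in practice chosen to respect the given CPRs) and alternately rescale rows and columns until the target marginals p(z) and p(y) are matched. The numbers are illustrative.

```python
import numpy as np

# Initial 2x2 table p0(z, y); in practice it is initialised to respect the
# given conditional probability ratios (CPRs). Values here are illustrative.
p = np.array([[0.4, 0.1],
              [0.2, 0.3]])
p_z = np.array([0.7, 0.3])     # target marginal for Z
p_y = np.array([0.6, 0.4])     # target marginal for Y

for _ in range(100):
    p = p * (p_z / p.sum(axis=1))[:, None]   # match the Z marginal (rows)
    p = p * (p_y / p.sum(axis=0))[None, :]   # match the Y marginal (columns)

print("fitted joint:\n", np.round(p, 4))
print("row sums:", np.round(p.sum(axis=1), 4))   # ~ p_z
print("col sums:", np.round(p.sum(axis=0), 4))   # ~ p_y
```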
Special structures (from 2 to 2^m conditional probabilities): Y is defined, e.g., by rules such as "P(Y = fail | Z1 = fail, ..., Zm = fail) = p_1, but P(Y = fail | z1, ..., zm) = p_2 for all other configurations z".
Qualitative BNs (no quantitative parameters): Only qualitative effects are modelled (and later calculated), giving from m to 2^m qualitative effects ('+', '0' or '−').
Alternative solutions (no conditional probabilities): Use other frameworks (like vines), or estimate the parameters from data.
Utility theory

[Figure: a two-attribute utility plot (Utility-1 versus Utility-2) with the Pareto boundary indicated.]
The Ugly: Continuous variables

Introduction

The original calculation procedure only supports a restricted set of distributional families:
- Continuous variables must have (conditionally) Gaussian distributions.
- Discrete variables may only have discrete parents.
- Gaussian parents enter the mean of their Gaussian children linearly, as partial regression coefficients.

[Figure: a hybrid network mixing discrete nodes with continuous nodes labelled N(μ, σ²), N(μ_x, σ_x²), N(μ_x, Σ_x), Beta(α, β) and Bern(x), only some of which respect these restrictions.]

These classes of distributions are not sufficient for reliability analysis. This is the 'ugly' part.
An example model: The THERP methodology

THERP is used to model human ability to perform in certain settings (performance is measured as binary variables), driven by known environment variables, like "Level of feedback", which are always observed.

[Figure: observed environment variables Z1, Z2 connected to the tasks T1, ..., T4 through regression weights w_11, ..., w_24; each task is given a logistic regression model.]

This is simple: the probability of a subject failing to perform task T_i is

P(T_i = t_i \mid \mathbf{z}) = \left[1 + \exp\left(-\mathbf{w}_i' \mathbf{z}\right)\right]^{-1}.
We can also have latent traits, which are unobserved and vary between subjects (like "Omitting a step in a procedure"). In this case the model is a latent trait model (similar to a binary factor analyser).

[Figure: latent traits Z1, Z2, Z3 with N(μ, σ²) distributions connected to the tasks T1, ..., T4 through logistic regressions.]

In the following we focus on a situation with two latent traits and one task.
Why is this difficult?

Assume we have one observation D_1 = {1} and parameters w_1 = [1 1]^T. The likelihood is given by

P(T = 1) = \frac{1}{2\pi\sigma_1\sigma_2} \int_{\mathbb{R}^2} \frac{\exp\left[-\left(\frac{(z_1-\mu_1)^2}{2\sigma_1^2} + \frac{(z_2-\mu_2)^2}{2\sigma_2^2}\right)\right]}{1 + \exp(-z_1 - z_2)} \, d\mathbf{z},

which has no known analytic representation in general. Hence, we cannot do the required calculations in this model.

Note! This is true even if we choose not to use Bayesian networks as our modelling language.
Approximations

Attempts to find f(z1, z2 | T = 1) and P(T = 1)

Numerical approximation: brute-force numerical integration over a 1000 × 1000 grid. Result: P(T = 1) = 0.49945, CPU time ≈ 600 msec. A sketch of this computation is given below.

[Figures: the resulting approximations of f(z1, z2 | T = 1) over (Z1, Z2) ∈ [−3, 3]² and of P(T = 1 | z1 + z2).]
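A minimal sketch of such a grid approximation, assuming standard normal priors (μ_i = 0, σ_i = 1), which the slides do not state explicitly; it approximates both P(T = 1) and the posterior f(z1, z2 | T = 1) by Riemann sums.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Grid over [-3, 3]^2, as in the slide's figures; priors assumed N(0, 1).
n = 1000
z = np.linspace(-3.0, 3.0, n)
dz = z[1] - z[0]
Z1, Z2 = np.meshgrid(z, z)

prior = np.exp(-0.5 * (Z1**2 + Z2**2)) / (2.0 * np.pi)
integrand = prior * sigmoid(Z1 + Z2)            # p(z1, z2, T = 1)

p_T1 = integrand.sum() * dz * dz                # Riemann-sum estimate of P(T = 1)
posterior = integrand / p_T1                    # grid approximation of f(z | T = 1)
print(f"P(T = 1) ~ {p_T1:.5f}")
```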
Discretization: Every continuous variable is "translated" into a discrete one. The more discrete states are used, the higher both the approximation quality and the computational complexity. "Tricks" are available for choosing the number of states and where to place the split points, including dynamic discretization. A sketch is given below.
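A sketch of the simplest version: each latent Z_i ~ N(0, 1) (assumed, as before) is replaced by a k-state discrete variable over equal-width bins on [−3, 3], and P(T = 1) is computed from the resulting discrete model.

```python
import numpy as np
from itertools import product
from math import erf, sqrt

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def discretize_standard_normal(k, lo=-3.0, hi=3.0):
    """Equal-width bins; returns bin midpoints and their probabilities."""
    edges = np.linspace(lo, hi, k + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    # Bin probabilities from the normal CDF, renormalised over [lo, hi].
    cdf = np.array([0.5 * (1 + erf(e / sqrt(2))) for e in edges])
    probs = np.diff(cdf)
    return mids, probs / probs.sum()

k = 5                              # a 5 x 5 grid, as on the next slide
mids, probs = discretize_standard_normal(k)
p_T1 = sum(p1 * p2 * sigmoid(m1 + m2)
           for (m1, p1), (m2, p2) in product(zip(mids, probs), repeat=2))
print(f"P(T = 1) with {k}x{k} discretization ~ {p_T1:.5f}")
```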
Discretization result: P(T = 1) = 0.49761, CPU time ≈ 2 msec, using a 5 × 5 discretization grid.

[Figures: the discretized approximations of f(z1, z2 | T = 1) and of P(T = 1 | z1 + z2).]
Mixtures of Truncated Exponentials (MTEs): In standard discretization the continuous density is approximated by a step function; calculations remain possible if each "step" is replaced by a truncated exponential. A single-variable density is split into n intervals I_k, k = 1, ..., n, each approximated by

f^*(z) = a_0^{(k)} + \sum_{i=1}^{m} a_i^{(k)} \exp\left(b_i^{(k)} z\right) \quad \text{for } z \in I_k.

We typically see 1 ≤ n ≤ 4 and 0 ≤ m ≤ 2, and clever parameter choices are tabulated for many standard distributions. A sketch of evaluating such an approximation is given below.
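A sketch of evaluating a density in this form. The interval split and the coefficients a_i^{(k)}, b_i^{(k)} below are placeholders chosen only to show the mechanics; they are not the tabulated values from the MTE literature.

```python
import numpy as np

# One MTE "piece" per interval: f*(z) = a0 + sum_i a_i * exp(b_i * z) on I_k.
# The numbers below are placeholders, not the published MTE coefficients.
mte_pieces = [
    # (lower, upper, a0, [a_i], [b_i])
    (-3.0, 0.0, 0.02, [0.543], [ 1.2]),
    ( 0.0, 3.0, 0.02, [0.543], [-1.2]),
]

def mte_density(z):
    """Evaluate the piecewise mixture-of-truncated-exponentials density."""
    for lo, hi, a0, a, b in mte_pieces:
        if lo <= z < hi:
            return a0 + sum(ai * np.exp(bi * z) for ai, bi in zip(a, b))
    return 0.0

# A quick numerical check that the pieces integrate to (approximately) one.
zs = np.linspace(-3.0, 3.0, 10001)
dz = zs[1] - zs[0]
print("integral ~", sum(mte_density(z) for z in zs) * dz)
```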
MTE result: P(T = 1) = 0.49914, CPU time ≈ 4 msec (computed with ELVIRA; S. Acid et al.).

[Figures: the MTE approximations of f(z1, z2 | T = 1) and of P(T = 1 | z1 + z2).]
Markov Chain Monte Carlo: Works well with Bayesian networks, as the independence statements can be exploited for fast simulation. Metropolis–Hastings works directly out of the box; Gibbs sampling may sometimes require clever adaptation. A sketch is given below.
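A random-walk Metropolis–Hastings sketch for this example, again assuming Z1, Z2 ~ N(0, 1): it samples from the unnormalised posterior f(z1, z2 | T = 1) ∝ φ(z1) φ(z2) σ(z1 + z2) and, since P(T = 1) itself is a prior expectation, estimates it with plain Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def log_target(z):
    """log of the unnormalised posterior f(z1, z2 | T = 1); priors assumed N(0, 1)."""
    return -0.5 * np.dot(z, z) + np.log(sigmoid(z[0] + z[1]))

# Random-walk Metropolis-Hastings over (z1, z2).
z = np.zeros(2)
samples = []
for _ in range(50_000):
    prop = z + rng.normal(scale=0.8, size=2)
    if np.log(rng.random()) < log_target(prop) - log_target(z):
        z = prop
    samples.append(z.copy())
samples = np.array(samples[5_000:])            # drop burn-in
print("posterior mean of (Z1, Z2) | T = 1:", samples.mean(axis=0).round(3))

# P(T = 1) is just a prior expectation, so plain Monte Carlo suffices.
z_prior = rng.normal(size=(100_000, 2))
print("P(T = 1) ~", sigmoid(z_prior.sum(axis=1)).mean().round(5))
```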
MCMC result: P(T = 1) = 0.49821, CPU time ≈ 32 · 10³ msec (computed with BUGS; W. Gilks et al.).

[Figures: the MCMC approximations of f(z1, z2 | T = 1) and of P(T = 1 | z1 + z2).]
Variational approximations: Replace a tricky function h(v) by a family of simple functions indexed by ξ, h̃(v, ξ), such that h(v) = sup_ξ h̃(v, ξ).

Note that log P(T = 1 | v) = v/2 − log(exp(v/2) + exp(−v/2)), where the last term is convex in v².

[Figure: P(T = 1 | x), and −log(exp(v/2) + exp(−v/2)) plotted against v², illustrating the convexity that the bound exploits.]
Define A(z1, z2) = z1 + z2 and

\lambda(\xi) = \frac{\exp(-\xi) - 1}{4\xi\,(1 + \exp(-\xi))}.

The variational approximation of P(T = 1 | z) is

\tilde{P}(T = 1 \mid \mathbf{z}, \xi) = \frac{\exp\left[(A(\mathbf{z}) - \xi)/2 + \lambda(\xi)\left(A(\mathbf{z})^2 - \xi^2\right)\right]}{1 + \exp(-\xi)}.

It can be shown that the best choice is ξ ← \sqrt{E\left[(Z_1 + Z_2)^2 \mid T = 1\right]}, so we need to iterate. A sketch of this iteration is given below.
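A sketch of the resulting fixed-point iteration, assuming standard normal priors on Z1 and Z2 (not stated on the slides): since the bound is the exponential of a quadratic in z, the approximate posterior stays Gaussian, ξ is updated from its second moment, and P(T = 1) is then approximated by integrating the bound against the prior in closed form.

```python
import numpy as np

# Priors assumed: Z1, Z2 ~ N(0, 1) independently; A(z) = z1 + z2 = ones @ z.
mu, Sigma, ones = np.zeros(2), np.eye(2), np.ones(2)

def lam(xi):
    """lambda(xi) as defined above."""
    return (np.exp(-xi) - 1.0) / (4.0 * xi * (1.0 + np.exp(-xi)))

xi = 1.0
for _ in range(100):
    # Prior times bound is Gaussian, because the bound is exp(quadratic in z).
    prec = np.linalg.inv(Sigma) - 2.0 * lam(xi) * np.outer(ones, ones)
    S_post = np.linalg.inv(prec)
    m_post = S_post @ (np.linalg.inv(Sigma) @ mu + 0.5 * ones)
    # Best xi = sqrt(E[(Z1 + Z2)^2 | T = 1]) under the current approximation.
    xi_new = np.sqrt(ones @ S_post @ ones + (ones @ m_post) ** 2)
    if abs(xi_new - xi) < 1e-12:
        break
    xi = xi_new

# P(T = 1) ~ E_prior[ Ptilde(T = 1 | z, xi) ]; with A ~ N(mA, vA) this Gaussian
# integral has a closed form.
mA, vA = ones @ mu, ones @ Sigma @ ones
a, b = 0.5, lam(xi)                           # exponent a*A + b*A^2 in the bound
alpha, beta = 1.0 / (2.0 * vA) - b, a + mA / vA
E_exp = np.exp(beta**2 / (4.0 * alpha) - mA**2 / (2.0 * vA)) / np.sqrt(2.0 * vA * alpha)
p_T1 = E_exp * np.exp(-xi / 2.0 - lam(xi) * xi**2) / (1.0 + np.exp(-xi))
print(f"xi = {xi:.4f},  variational P(T = 1) ~ {p_T1:.5f}")
```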
Variational result: P(T = 1) = 0.49828, CPU time ≈ 17 msec (computed with VIBES; J. M. Winn).

[Figures: the variational approximations of f(z1, z2 | T = 1) and of P(T = 1 | z1 + z2).]
Other approaches: A number of other approaches are also being examined, including the Laplace approximation, transformation into Mixture-of-Gaussians models, and other frameworks, like vines.
Concluding remarks

Summary
- Bayesian networks' popularity is increasing, also in the reliability community. The main features (as seen from our community) are: an intuitive modelling "language"; a high level of modelling flexibility; efficient calculations that exploit the conditional independence structure encoded in the graph; and a cost-efficient representation.
- Building models can still be time-consuming, and problem owners lack training in using BNs: users are more confident with traditional frameworks such as fault tree modelling, and the calculations may be too complex to understand.
- The most important research focus (for this community) is to find good approximations for handling continuous variables.
Colleagues

A number of people have helped or worked with me on the topics covered in this presentation:
- Bayesian network models: Thomas D. Nielsen, Finn V. Jensen, Jiří Vomlel
- Bayesian networks in reliability: Luigi Portinale, Claus Skaanning
- Continuous variables: Antonio Salmerón, Rafael Rumí
- Reliability models: Bo Lindqvist, Tim Bedford, Roger M. Cooke, Jørn Vatn
Further reading
- Helge Langseth and Finn V. Jensen. Bayesian networks and decision graphs in reliability. In Encyclopedia of Statistics in Quality and Reliability. John Wiley & Sons, in press.
- Helge Langseth and Luigi Portinale. Bayesian networks in reliability. Reliability Engineering and System Safety, 92(1):92–108, 2007.
- Finn V. Jensen and Thomas D. Nielsen. Bayesian Networks and Decision Graphs. Springer-Verlag, Berlin, Germany, 2007.
- Barry R. Cobb, Prakash P. Shenoy, and Rafael Rumí. Approximating probability density functions in hybrid Bayesian networks with mixtures of truncated exponentials. Statistics and Computing, 16(3):293–308, 2006.