An Introduction to Bayesian Machine Learning
José Miguel Hernández-Lobato, Department of Engineering, Cambridge University
April 8, 2013
What is Machine Learning?

The design of computational systems that discover patterns in a collection of data instances in an automated manner. The ultimate goal is to use the discovered patterns to make predictions on new data instances not seen before.
[Figure: rows of handwritten digit images labelled 0, 1 and 2, followed by unlabelled instances marked "?".]

Instead of manually encoding patterns in computer programs, we make computers learn these patterns without explicitly programming them. Figure source [Hinton et al. 2006].
Model-based Machine Learning

We design a probabilistic model which explains how the data is generated. An inference algorithm combines model and data to make predictions. Probability theory is used to deal with uncertainty in the model or the data.
[Figure: pipeline diagram. Labelled data and a probabilistic model are combined by an inference algorithm, which processes new unlabelled data and outputs a prediction, e.g. 0: 0.900, 1: 0.005, 2: 0.095.]
Basics of Probability Theory

The theory of probability can be derived using just two rules:

Sum rule: $p(x) = \int p(x, y)\, dy$.

Product rule: $p(x, y) = p(y|x)\,p(x) = p(x|y)\,p(y)$.

They can be combined to obtain Bayes' rule:

$$p(y|x) = \frac{p(x|y)\,p(y)}{p(x)} = \frac{p(x|y)\,p(y)}{\int p(x, y)\, dy}.$$

Independence of X and Y: $p(x, y) = p(x)\,p(y)$. Conditional independence of X and Y given Z: $p(x, y|z) = p(x|z)\,p(y|z)$.
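As a quick check, the two rules and Bayes' rule can be verified numerically on a small discrete joint distribution. The sketch below is my own example with made-up probabilities, treating p(x, y) as a 2 x 3 table.

```python
# A minimal sketch (not from the slides): checking the sum, product and
# Bayes' rules numerically on a small discrete joint distribution p(x, y).
import numpy as np

# Hypothetical 2x3 joint distribution over X in {0, 1} and Y in {0, 1, 2}.
p_xy = np.array([[0.10, 0.25, 0.15],
                 [0.20, 0.05, 0.25]])
assert np.isclose(p_xy.sum(), 1.0)

# Sum rule: p(x) = sum_y p(x, y), and similarly for p(y).
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

# Product rule: p(x, y) = p(y | x) p(x).
p_y_given_x = p_xy / p_x[:, None]
assert np.allclose(p_y_given_x * p_x[:, None], p_xy)

# Bayes' rule: p(x | y) = p(y | x) p(x) / p(y).
p_x_given_y = p_y_given_x * p_x[:, None] / p_y[None, :]
assert np.allclose(p_x_given_y.sum(axis=0), 1.0)  # each column normalizes to 1
```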
The Bayesian Framework

The probabilistic model M with parameters θ explains how the data D is generated by specifying the likelihood function $p(D|\theta, M)$. Our initial uncertainty on θ is encoded in the prior distribution $p(\theta|M)$.

Bayes' rule allows us to update our uncertainty on θ given D:

$$p(\theta|D, M) = \frac{p(D|\theta, M)\,p(\theta|M)}{p(D|M)}.$$

We can then generate probabilistic predictions for some quantity x of new data D_new given D and M using

$$p(x|D_{\text{new}}, D, M) = \int p(x|\theta, D_{\text{new}}, M)\,p(\theta|D, M)\,d\theta.$$

These predictions will be approximated using an inference algorithm.
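A minimal sketch of this workflow, assuming a Beta-Bernoulli model in which the posterior and the predictive distribution are available in closed form (my own example, not taken from the slides):

```python
# Beta-Bernoulli sketch: prior -> posterior -> predictive, all in closed form.
import numpy as np

D = np.array([1, 0, 1, 1, 0, 1, 1, 1])     # hypothetical coin-flip data
a0, b0 = 1.0, 1.0                          # prior p(theta | M) = Beta(a0, b0)

# Posterior p(theta | D, M) = Beta(a0 + #heads, b0 + #tails).
a_n = a0 + D.sum()
b_n = b0 + len(D) - D.sum()

# Predictive probability of a new head: theta integrated over the posterior.
p_head_new = a_n / (a_n + b_n)
print(f"posterior Beta({a_n:.0f}, {b_n:.0f}), p(x_new = 1 | D) = {p_head_new:.3f}")
```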
Bayesian Model Comparison

Given a particular D, we can use the model evidence p(D|M) to reject both overly simple models and overly complex models.
Figure source [Ghahramani 2012].
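As a concrete illustration (my own example, with made-up data), the evidence of coin-flip data can be compared under a very simple model M0 that fixes θ = 0.5 and a more flexible model M1 with a Beta(1, 1) prior on θ:

```python
# Comparing two models of coin-flip sequences through their evidence p(D | M).
import numpy as np
from scipy.special import betaln

def log_evidence_fixed(D, theta=0.5):
    # M0: theta is fixed, no parameters to integrate out.
    k, n = D.sum(), len(D)
    return k * np.log(theta) + (n - k) * np.log(1.0 - theta)

def log_evidence_beta(D, a=1.0, b=1.0):
    # M1: p(D | M1) = B(a + k, b + n - k) / B(a, b), theta integrated out.
    k, n = D.sum(), len(D)
    return betaln(a + k, b + n - k) - betaln(a, b)

balanced = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # roughly fair coin
skewed   = np.array([1, 1, 1, 1, 1, 1, 1, 0])   # heavily biased coin
for name, D in [("balanced", balanced), ("skewed", skewed)]:
    print(name, log_evidence_fixed(D), log_evidence_beta(D))
# The simple M0 has higher evidence on the balanced data, the flexible M1 on the skewed data.
```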
Probabilistic Graphical Models

The Bayesian framework requires us to specify a high-dimensional distribution $p(x_1, \ldots, x_k)$ on the data, model parameters and latent variables. Working with fully flexible joint distributions is intractable! We will work with structured distributions, in which the random variables interact directly with only a few others. These distributions will have many conditional independencies.

This structure will allow us to:
- Obtain a compact representation of the distribution.
- Use computationally efficient inference algorithms.

The framework of probabilistic graphical models allows us to represent and work with such structured distributions in an efficient manner.
Some Examples of Probabilistic Graphical Models

Bayesian Network:
- Graph: Season → Flu, Season → Hayfever, Flu → Muscle-Pain, Flu → Congestion, Hayfever → Congestion.
- Independencies: $(F \perp H | S)$, $(C \perp S | F, H)$, $(M \perp H, C | F)$, $(M \perp C | F)$, ...
- Factorization: $p(S, F, H, M, C) = p(S)\,p(F|S)\,p(H|S)\,p(C|F, H)\,p(M|F)$.

Markov Network:
- Graph: the undirected cycle A - B - C - D - A.
- Independencies: $(A \perp C | B, D)$, $(B \perp D | A, C)$.
- Factorization: $p(A, B, C, D) = \frac{1}{Z}\,\phi_1(A, B)\,\phi_2(B, C)\,\phi_3(C, D)\,\phi_4(A, D)$.

Figure source [Koller et al. 2009].
Bayesian Networks

A BN G is a DAG whose nodes are random variables $X_1, \ldots, X_d$. Let $\mathrm{PA}^G_{X_i}$ be the parents of $X_i$ in G. The network is annotated with the conditional distributions $p(X_i | \mathrm{PA}^G_{X_i})$.

Conditional independencies: let $\mathrm{ND}^G_{X_i}$ be the variables in G which are non-descendants of $X_i$ in G. G encodes the conditional independencies $(X_i \perp \mathrm{ND}^G_{X_i} | \mathrm{PA}^G_{X_i})$, $i = 1, \ldots, d$.

Factorization: G encodes the factorization

$$p(x_1, \ldots, x_d) = \prod_{i=1}^d p(x_i | \mathrm{pa}^G_{x_i}).$$
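A minimal sketch of such a factorization for the Season/Flu/Hayfever/Congestion/Muscle-Pain network of the earlier slide; the CPT numbers below are made up, only the structure follows the slide:

```python
# Evaluating the joint of a small BN via p(S) p(F|S) p(H|S) p(C|F,H) p(M|F).
import numpy as np

p_S = np.array([0.5, 0.5])                       # p(S), hypothetical values
p_F_S = np.array([[0.4, 0.6], [0.9, 0.1]])       # p(F|S), rows indexed by S
p_H_S = np.array([[0.9, 0.1], [0.3, 0.7]])       # p(H|S)
p_M_F = np.array([[0.8, 0.2], [0.3, 0.7]])       # p(M|F)
p_C_FH = np.array([[[0.9, 0.1], [0.2, 0.8]],     # p(C|F,H), indexed [F][H]
                   [[0.3, 0.7], [0.1, 0.9]]])

def joint(s, f, h, c, m):
    return p_S[s] * p_F_S[s, f] * p_H_S[s, h] * p_C_FH[f, h, c] * p_M_F[f, m]

# The factorization defines a valid distribution: it sums to one.
total = sum(joint(s, f, h, c, m)
            for s in range(2) for f in range(2) for h in range(2)
            for c in range(2) for m in range(2))
print(total)   # ~1.0
```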
BN Examples: Naive Bayes

We have features $X = (X_1, \ldots, X_d) \in \mathbb{R}^d$ and a label $Y \in \{1, \ldots, C\}$. The figure shows the Naive Bayes graphical model with parameters $\theta_1$ and $\theta_2$ for a dataset $D = \{x_i, y_i\}_{i=1}^n$, using plate notation. Shaded nodes are observed; unshaded nodes are not observed.

The joint distribution for D, $\theta_1$ and $\theta_2$ is

$$p(D, \theta_1, \theta_2) = \left[ \prod_{i=1}^n \left( \prod_{j=1}^d p(x_{i,j} | y_i, \theta_1) \right) p(y_i | \theta_2) \right] p(\theta_1)\,p(\theta_2).$$

Figure source [Murphy 2012].
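A minimal sketch of Naive Bayes with binary features (my own example): the parameters θ1 and θ2 are estimated as posterior means under Beta/Dirichlet priors (Laplace smoothing), and prediction uses p(y|x) proportional to p(y) times the product over features of p(x_j|y).

```python
# Naive Bayes with binary features and Laplace-smoothed parameter estimates.
import numpy as np

X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0]])  # n x d binary features
y = np.array([0, 0, 1, 1])                                   # labels in {0, 1}
C = 2

prior = np.array([(y == c).sum() + 1.0 for c in range(C)])
prior /= prior.sum()                                          # p(y = c)
theta = np.array([(X[y == c].sum(axis=0) + 1.0) / ((y == c).sum() + 2.0)
                  for c in range(C)])                         # p(x_j = 1 | y = c)

def predict_proba(x):
    # log p(y) + sum_j log p(x_j | y), then normalize.
    log_p = np.log(prior) + (x * np.log(theta) + (1 - x) * np.log(1 - theta)).sum(axis=1)
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

print(predict_proba(np.array([1, 0, 1])))   # favours class 0 on these data
```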
BN Examples: Hidden Markov Model

We have a temporal sequence of measurements $D = \{X_1, \ldots, X_T\}$. Time dependence is explained by a hidden process $Z = \{Z_1, \ldots, Z_T\}$ that is modeled by a first-order Markov chain. The data is a noisy observation of the hidden process.

The joint distribution for D, Z, $\theta_1$ and $\theta_2$ is

$$p(D, Z, \theta_1, \theta_2) = \left[ \prod_{t=1}^T p(x_t | z_t, \theta_1) \right] \left[ \prod_{t=1}^T p(z_t | z_{t-1}, \theta_2) \right] p(\theta_1)\,p(\theta_2)\,p(z_0).$$

Figure source [Murphy 2012].
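A minimal sketch (my own example) of the log joint probability implied by this factorization, for discrete hidden states and observations with fixed parameters (the priors over θ1 and θ2 and the treatment of z_0 are simplified to an initial-state distribution):

```python
# Log joint probability of an HMM: prod_t p(x_t | z_t) p(z_t | z_{t-1}).
import numpy as np

pi0 = np.array([0.6, 0.4])                      # initial-state distribution
A = np.array([[0.7, 0.3], [0.2, 0.8]])          # p(z_t | z_{t-1}), theta2
B = np.array([[0.9, 0.1], [0.3, 0.7]])          # p(x_t | z_t), theta1

def log_joint(x, z):
    lp = np.log(pi0[z[0]]) + np.log(B[z[0], x[0]])
    for t in range(1, len(x)):
        lp += np.log(A[z[t - 1], z[t]]) + np.log(B[z[t], x[t]])
    return lp

x = [0, 0, 1, 1]   # observed sequence
z = [0, 0, 1, 1]   # one hypothetical hidden trajectory
print(log_joint(x, z))
```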
BN Examples: Matrix Factorization Model

We have observations $x_{i,j}$ from an $n \times d$ matrix X. The entries of X are generated as a function of the entries of a low-rank matrix $UV^T$, where U is $n \times k$, V is $d \times k$ and $k \ll \min(n, d)$.

$$p(D, U, V, \theta_1, \theta_2, \theta_3) = \left[ \prod_{i=1}^n \prod_{j=1}^d p(x_{i,j} | u_i, v_j, \theta_3) \right] \left[ \prod_{i=1}^n p(u_i | \theta_1) \right] \left[ \prod_{j=1}^d p(v_j | \theta_2) \right] p(\theta_1)\,p(\theta_2)\,p(\theta_3).$$
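A minimal sketch (my own example) of sampling data from this generative model, assuming Gaussian priors on the rows of U and V and Gaussian observation noise:

```python
# Sampling from a low-rank matrix-factorization model: X ~ U V^T + noise.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 40, 3
U = rng.normal(0.0, 1.0, size=(n, k))             # rows u_i ~ p(u_i | theta1)
V = rng.normal(0.0, 1.0, size=(d, k))             # rows v_j ~ p(v_j | theta2)
X = U @ V.T + rng.normal(0.0, 0.1, size=(n, d))   # x_ij ~ p(x_ij | u_i, v_j, theta3)

# The noiseless part has rank k << min(n, d).
print(np.linalg.matrix_rank(U @ V.T))             # 3
```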
D-separation

Conditional independence properties can be read directly from the graph. We say that the sets of nodes A, B and C satisfy $(A \perp B | C)$ when all of the possible paths from any node in A to any node in B are blocked.

A path is blocked if it contains a node x such that the arrows on the path meet at x either
1. head-to-tail or tail-to-tail, and x is in C, or
2. head-to-head, and neither x nor any of its descendants is in C.

[Figure: two graphs over the nodes a, b, c, e and f. In the first, (a ⊥ b | c) does not follow from the graph; in the second, (a ⊥ b | c) is implied by the graph.]

Figure source [Bishop 2006].
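Rule 2 is the "explaining away" effect, and it can be checked numerically. The sketch below (my own numbers) builds the joint of a head-to-head network a → c ← b and shows that a and b are independent marginally but not once c is observed:

```python
# Explaining away in the head-to-head structure a -> c <- b.
import numpy as np

p_a = np.array([0.3, 0.7])
p_b = np.array([0.6, 0.4])
p_c_ab = np.array([[[0.9, 0.1], [0.4, 0.6]],    # p(c | a, b), indexed [a][b]
                   [[0.5, 0.5], [0.1, 0.9]]])

# Full joint p(a, b, c) from the BN factorization p(a) p(b) p(c | a, b).
p_abc = p_a[:, None, None] * p_b[None, :, None] * p_c_ab

p_ab = p_abc.sum(axis=2)                        # marginal p(a, b)
print(np.allclose(p_ab, np.outer(p_a, p_b)))    # True: a and b independent

p_ab_given_c1 = p_abc[:, :, 1] / p_abc[:, :, 1].sum()
p_a_c1 = p_ab_given_c1.sum(axis=1)
p_b_c1 = p_ab_given_c1.sum(axis=0)
print(np.allclose(p_ab_given_c1, np.outer(p_a_c1, p_b_c1)))  # False: dependent given c
```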
Bayesian Networks as Filters

- $p(x_1, \ldots, x_d)$ must satisfy the CIs implied by d-separation.
- $p(x_1, \ldots, x_d)$ must factorize as $p(x_1, \ldots, x_d) = \prod_{i=1}^d p(x_i | \mathrm{pa}^G_{x_i})$.
- $p(x_1, \ldots, x_d)$ must satisfy the CIs $(X_i \perp \mathrm{ND}^G_{X_i} | \mathrm{PA}^G_{X_i})$, $i = 1, \ldots, d$.

The three filters are the same!

[Figure: the set of all possible distributions, with the subset of distributions that factorize according to G and satisfy its CIs.]

Figure source [Bishop 2006].
Markov Blanket

The Markov blanket of a node x is the set of nodes comprising the parents, children and co-parents of x. It is the minimal set of nodes that isolates x from the rest of the graph, that is, x is CI of any other node in the graph given its Markov blanket.

[Figure: a node x together with its Markov blanket.]

Figure source [Bishop 2006].
Markov Networks

A MN is an undirected graph G whose nodes are the random variables $X_1, \ldots, X_d$. It is annotated with the potential functions $\phi_1(D_1), \ldots, \phi_k(D_k)$, where $D_1, \ldots, D_k$ are sets of variables, each forming a maximal clique of G, and $\phi_1, \ldots, \phi_k$ are positive functions.

Conditional independencies: G encodes the conditional independencies $(A \perp B | C)$ for any sets of nodes A, B and C such that C separates A from B in G.

Factorization: G encodes the factorization $p(X_1, \ldots, X_d) = Z^{-1} \prod_{i=1}^k \phi_i(D_i)$, where Z is a normalization constant.
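A minimal sketch (with hypothetical potential values) of this factorization for the four-node cycle A - B - C - D from the examples slide, where every edge is a maximal clique and Z is computed by brute force:

```python
# Brute-force normalization of a small pairwise Markov network.
import itertools

def phi(u, v):
    # Hypothetical attractive potential: larger when neighbours agree.
    return 2.0 if u == v else 1.0

states = [0, 1]
Z = sum(phi(a, b) * phi(b, c) * phi(c, d) * phi(d, a)
        for a, b, c, d in itertools.product(states, repeat=4))

def p(a, b, c, d):
    return phi(a, b) * phi(b, c) * phi(c, d) * phi(d, a) / Z

# The normalized potentials define a valid distribution over the 16 joint states.
total = sum(p(*s) for s in itertools.product(states, repeat=4))
assert abs(total - 1.0) < 1e-12
print(Z, p(0, 0, 0, 0))
```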
Clique and Maximal Clique

Clique: a fully connected subset of nodes.
Maximal clique: a clique in which we cannot include any more nodes without it ceasing to be a clique.

In the four-node example of the figure, $(x_1, x_2)$ is a clique but not a maximal clique, and $(x_2, x_3, x_4)$ is a maximal clique. The corresponding factorization is

$$p(x_1, \ldots, x_4) = \frac{1}{Z}\, \phi_1(x_1, x_2, x_3)\, \phi_2(x_2, x_3, x_4).$$

Figure source [Bishop 2006].
MN Examples: Potts Model

Let $x_1, \ldots, x_n \in \{1, \ldots, C\}$ and

$$p(x_1, \ldots, x_n) = \frac{1}{Z} \prod_{i \sim j} \phi_{ij}(x_i, x_j), \qquad \log \phi_{ij}(x_i, x_j) = \begin{cases} \beta > 0 & \text{if } x_i = x_j, \\ 0 & \text{otherwise.} \end{cases}$$

[Figure: neighbouring variables $x_i$ and $x_j$ on a grid-structured Markov network.]

Figure sources [Bishop 2006] and Erik Sudderth.
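A minimal sketch (my own choice of β and grid size) of the unnormalized Potts probability and its brute-force partition function on a 2 x 2 grid:

```python
# Unnormalized Potts probability and partition function for a tiny grid.
import itertools
import math

beta, C = 1.0, 3
edges = [((0, 0), (0, 1)), ((1, 0), (1, 1)),      # horizontal neighbours
         ((0, 0), (1, 0)), ((0, 1), (1, 1))]      # vertical neighbours

def unnorm_p(x):
    # x maps grid site -> label in {0, ..., C-1}; agreeing neighbours add beta.
    return math.exp(beta * sum(x[i] == x[j] for i, j in edges))

sites = [(0, 0), (0, 1), (1, 0), (1, 1)]
Z = sum(unnorm_p(dict(zip(sites, labels)))
        for labels in itertools.product(range(C), repeat=len(sites)))
print(Z)   # larger beta concentrates mass on configurations with matching neighbours
```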
From Directed Graphs to Undirected Graphs: Moralization

Let $p(x_1, \ldots, x_4) = p(x_1)\,p(x_2)\,p(x_3)\,p(x_4 | x_1, x_2, x_3)$. How do we obtain the corresponding undirected model? The factor $p(x_4 | x_1, x_2, x_3)$ implies that $x_1, \ldots, x_4$ must be in a maximal clique.

General method:
1. Fully connect all the parents of any node.
2. Eliminate edge directions.

Moralization adds the fewest extra links and so retains the maximum number of CIs.

[Figure: the directed graph with $x_1$, $x_2$ and $x_3$ pointing to $x_4$, and its moralized undirected version.]

Figure source [Bishop 2006].
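The general method is easy to express as a set operation on parent lists. The sketch below applies it to the example above, where x4 has parents x1, x2 and x3:

```python
# Moralization: marry the parents of each node, then drop edge directions.
from itertools import combinations

parents = {"x1": [], "x2": [], "x3": [], "x4": ["x1", "x2", "x3"]}

undirected_edges = set()
for child, pa in parents.items():
    for p in pa:                                   # drop edge directions
        undirected_edges.add(frozenset((p, child)))
    for p, q in combinations(pa, 2):               # "marry" the parents
        undirected_edges.add(frozenset((p, q)))

print(sorted(tuple(sorted(e)) for e in undirected_edges))
# x1, x2, x3, x4 now form a single maximal clique, as required by p(x4 | x1, x2, x3).
```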
CIs in Directed and Undirected Models

If all the CIs of $p(x_1, \ldots, x_n)$ are reflected in G, and vice versa, then G is said to be a perfect map.

[Figure: a Venn diagram of the set P of all distributions, containing the subsets D and U of distributions that admit a perfect map as a directed graph and as an undirected graph, respectively, together with example graphs over A, B, C and D.]

Figure source [Bishop 2006].
Basic Distributions: Bernoulli

Distribution for $x \in \{0, 1\}$ governed by $\mu \in [0, 1]$ such that $\mu = p(x = 1)$.

$$\mathrm{Bern}(x|\mu) = x\mu + (1 - x)(1 - \mu).$$

$E(x) = \mu$. $\mathrm{Var}(x) = \mu(1 - \mu)$.

[Figure: bar plot of the probabilities of x = 0 and x = 1.]
Basic Distributions: Beta

Distribution for $\mu \in [0, 1]$, such as the probability of a binary event.

$$\mathrm{Beta}(\mu|a, b) = \frac{\Gamma(a + b)}{\Gamma(a)\Gamma(b)}\, \mu^{a-1} (1 - \mu)^{b-1}.$$

$E(\mu) = a/(a + b)$. $\mathrm{Var}(\mu) = ab/((a + b)^2 (a + b + 1))$.

[Figure: Beta pdfs over $\mu$ for $(a, b) = (0.5, 0.5), (5, 1), (1, 3), (2, 2)$ and $(2, 5)$.]
Basic Distributions: Multinomial

We extract with replacement n balls of k different categories from a bag. Let $x_i$ denote the number of balls extracted and $p_i$ the probability, both of category $i = 1, \ldots, k$.

$$p(x_1, \ldots, x_k | n, p_1, \ldots, p_k) = \begin{cases} \frac{n!}{x_1! \cdots x_k!}\, p_1^{x_1} \cdots p_k^{x_k} & \text{if } \sum_{i=1}^k x_i = n, \\ 0 & \text{otherwise.} \end{cases}$$

$E(x_i) = n p_i$. $\mathrm{Var}(x_i) = n p_i (1 - p_i)$. $\mathrm{Cov}(x_i, x_j) = -n p_i p_j$ for $i \neq j$.
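A minimal sketch checking these moment formulas by simulation (my own parameter values):

```python
# Empirical check of the multinomial mean, variance and covariance.
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, np.array([0.2, 0.3, 0.5])
samples = rng.multinomial(n, p, size=100000)

print(samples.mean(axis=0))                          # close to n * p
print(samples.var(axis=0))                           # close to n * p * (1 - p)
print(np.cov(samples[:, 0], samples[:, 1])[0, 1])    # close to -n * p_0 * p_1
```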
Basic Distributions: Dirichlet

Multivariate distribution over $\mu_1, \ldots, \mu_k \in [0, 1]$, where $\sum_{i=1}^k \mu_i = 1$. Parameterized in terms of $\alpha = (\alpha_1, \ldots, \alpha_k)$ with $\alpha_i > 0$ for $i = 1, \ldots, k$.

$$\mathrm{Dir}(\mu_1, \ldots, \mu_k | \alpha) = \frac{\Gamma\!\left(\sum_{i=1}^k \alpha_i\right)}{\Gamma(\alpha_1) \cdots \Gamma(\alpha_k)} \prod_{i=1}^k \mu_i^{\alpha_i - 1}.$$

$$E(\mu_i) = \frac{\alpha_i}{\sum_{j=1}^k \alpha_j}.$$

[Figure: Dirichlet densities on the probability simplex for different settings of $\alpha$.]

Figure source [Murphy 2012].
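A minimal sketch checking the mean formula by simulation (my own parameter values):

```python
# Empirical check of the Dirichlet mean and of the simplex constraint.
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 5.0, 3.0])
samples = rng.dirichlet(alpha, size=100000)

print(samples.mean(axis=0))                      # close to alpha / alpha.sum()
print(np.allclose(samples.sum(axis=1), 1.0))     # each draw lies on the simplex
```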
Basic Distributions: Multivariate Gaussian

$$p(x|\mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right).$$

$E(x) = \mu$. $\mathrm{Cov}(x) = \Sigma$.

Figure source [Murphy 2012].
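A minimal sketch (my own example) evaluating the density directly from the formula and comparing it with scipy.stats.multivariate_normal:

```python
# Evaluating the multivariate Gaussian density from the formula above.
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.3], [0.3, 1.0]])
x = np.array([0.5, 0.5])

d = len(mu)
diff = x - mu
log_p = (-0.5 * diff @ np.linalg.solve(Sigma, diff)
         - 0.5 * d * np.log(2.0 * np.pi)
         - 0.5 * np.log(np.linalg.det(Sigma)))

print(np.exp(log_p), multivariate_normal(mean=mu, cov=Sigma).pdf(x))  # equal
```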
Basic Distributions: Wishart

Distribution for the precision matrix $\Lambda = \Sigma^{-1}$ of a multivariate Gaussian.

$$\mathcal{W}(\Lambda|W, \nu) = B(W, \nu)\, |\Lambda|^{(\nu - D - 1)/2} \exp\!\left( -\frac{1}{2} \mathrm{Tr}(W^{-1}\Lambda) \right),$$

where

$$B(W, \nu) \equiv |W|^{-\nu/2} \left( 2^{\nu D/2}\, \pi^{D(D-1)/4} \prod_{i=1}^D \Gamma\!\left( \frac{\nu + 1 - i}{2} \right) \right)^{-1}.$$

$E(\Lambda) = \nu W$.
Summary

With ML, computers learn patterns and then use them to make predictions. With ML we avoid manually encoding patterns in computer programs.

Model-based ML separates knowledge about the data generation process (model) from reasoning and prediction (inference algorithm).

The Bayesian framework allows us to do model-based ML using probability distributions, which must be structured for tractability.

Probabilistic graphical models encode such structured distributions by specifying several CIs (factorizations) that they must satisfy.

Bayesian Networks and Markov Networks are two different types of graphical models which can express different types of CIs.
References

Bishop, C. M. Pattern Recognition and Machine Learning. Springer, 2006.

Bishop, C. M. Model-based machine learning. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 371, 2013.

Ghahramani, Z. Bayesian nonparametrics and the probabilistic approach to modelling. Philosophical Transactions of the Royal Society A, 2012.

Hinton, G. E., Osindero, S. and Teh, Y. A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527-1554, 2006.

Koller, D. and Friedman, N. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.

Murphy, K. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.