Probabilistic Graphical Models, Lecture 12: Dynamical Models - PowerPoint PPT Presentation


SLIDE 1

Probabilistic Graphical Models

Lecture 12 – Dynamical Models

CS/CNS/EE 155 Andreas Krause

SLIDE 2

Announcements

Homework 3 out tonight

Start early!!

Project milestones due today

Please email them to the TAs

SLIDE 3

Parameter learning for log-linear models

Log-linear model over undirected graph G, with feature functions φi(Ci) defined over cliques:

Feature functions φ1(C1),…,φk(Ck); the domains Ci can overlap

Joint distribution: P(x) = (1/Z(w)) exp( Σi wi φi(Ci) ). How do we get the weights wi?
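
A minimal Python sketch of such a model, under assumptions: a tiny binary MRF whose two cliques, feature functions, and weights are made-up placeholders.

```python
import itertools
import math

# Hypothetical setup: a tiny binary MRF over (X0, X1, X2) with two cliques.
# The feature functions phi_i and weights w_i below are made-up placeholders.
features = [
    lambda x: float(x[0] == x[1]),   # phi_1 over clique C_1 = {X0, X1}
    lambda x: float(x[1] == x[2]),   # phi_2 over clique C_2 = {X1, X2}
]
weights = [0.5, -0.3]

def unnormalized(x):
    """exp(sum_i w_i * phi_i(x)): the joint up to the partition function Z(w)."""
    return math.exp(sum(w * phi(x) for w, phi in zip(weights, features)))

# Partition function by brute-force enumeration (only feasible for tiny models).
Z = sum(unnormalized(x) for x in itertools.product([0, 1], repeat=3))

def prob(x):
    return unnormalized(x) / Z

print(prob((0, 0, 0)))  # P(X0=0, X1=0, X2=0)
```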

SLIDE 4

Log-linear conditional random field

Define log-linear model over outputs Y

No assumptions about inputs X

Feature functions φi(Ci, x) defined over cliques and inputs. Conditional distribution over the outputs: see below.
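
A standard reconstruction of the CRF conditional (the slide showed this as an image; the normalizer Z(x, w) sums over output configurations y′):

```latex
P(y \mid x) = \frac{1}{Z(x, w)}\exp\Big(\sum_{i=1}^{k} w_i\,\phi_i(C_i, x)\Big),
\qquad
Z(x, w) = \sum_{y'} \exp\Big(\sum_{i=1}^{k} w_i\,\phi_i(C_i', x)\Big)
```

where the cliques Ci are instantiated by y, and Ci′ by y′.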

SLIDE 5

Example: CRFs in NLP

Classify into Person, Location or Other

  • Mrs. Greene spoke today in New York. Green chairs the finance committee.

[Figure: sentence tokens X1,…,X12 with corresponding labels Y1,…,Y12]

SLIDE 6

Example: CRFs in vision

SLIDE 7

Gradient of conditional log-likelihood

Partial derivative: ∂/∂wi log P(y | x, w) = φi(y, x) − E_{y′ ~ P(y′ | x, w)}[φi(y′, x)]. Computing the expectation requires one inference per training example. Can optimize using conjugate gradient.

SLIDE 8

Exponential Family Distributions

Distributions for log-linear models are a special case. More generally, exponential family distributions take the form

P(x | w) = h(x) exp( wᵀ φ(x) − A(w) )

h(x): base measure
w: natural parameters
φ(x): sufficient statistics
A(w): log-partition function

Here x can be continuous (defined over any set)

SLIDE 9

Examples

Exponential family example: the Gaussian distribution. Other examples: Multinomial, Poisson, Exponential, Gamma, Weibull, chi-square, Dirichlet, Geometric, …

h(x): base measure; w: natural parameters; φ(x): sufficient statistics; A(w): log-partition function
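
A worked instance for the univariate Gaussian (standard derivation, reconstructing what the slide showed graphically):

```latex
\mathcal{N}(x;\mu,\sigma^2) = h(x)\,\exp\big(w^\top \phi(x) - A(w)\big),\quad
h(x)=\tfrac{1}{\sqrt{2\pi}},\;
w=\begin{pmatrix}\mu/\sigma^2\\ -\tfrac{1}{2\sigma^2}\end{pmatrix},\;
\phi(x)=\begin{pmatrix}x\\ x^2\end{pmatrix},\;
A(w)=\tfrac{\mu^2}{2\sigma^2}+\log\sigma
```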

SLIDE 10

Moments and gradients

There is a correspondence between moments and the log-partition function (just as in log-linear models): we can compute moments from derivatives of A(w), and derivatives from moments! MLE then amounts to moment matching (see the identities below).
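
The standard identities behind this correspondence (a reconstruction of what the slide derived graphically):

```latex
\nabla_w A(w) = \mathbb{E}_w[\phi(X)], \qquad
\nabla_w^2 A(w) = \operatorname{Cov}_w[\phi(X)]
```

Setting the gradient of the log-likelihood of data x^(1),…,x^(n) to zero gives the moment-matching condition

```latex
\mathbb{E}_{\hat w}[\phi(X)] = \frac{1}{n} \sum_{j=1}^{n} \phi\big(x^{(j)}\big)
```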

SLIDE 11

Conjugate priors in Exponential Family

Any exponential family likelihood has a conjugate prior
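
A standard form such a conjugate prior can take (a sketch, not from the slide text): with hyperparameters τ and ν,

```latex
p(w \mid \tau, \nu) \;\propto\; \exp\big( w^\top \tau - \nu\, A(w) \big)
```

After observing x, the posterior stays in this family with the updates τ ← τ + φ(x) and ν ← ν + 1.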

SLIDE 12

Exponential family graphical models

So far, we have only defined graphical models over discrete variables. We can define GMs over continuous distributions! For exponential family distributions, much of what we discussed (VE, JT inference, parameter learning, etc.) carries over. Important example: Gaussian networks.

SLIDE 13

Multivariate Gaussian distribution

Joint distribution over n random variables P(X1,…,Xn) = N(µ, Σ), with σjk = E[ (Xj − µj)(Xk − µk) ]. If Xj and Xk are independent, then σjk = 0 (for jointly Gaussian variables, the converse holds as well).
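
The density itself (standard form; shown as an image in the original):

```latex
\mathcal{N}(x; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2}\,|\Sigma|^{1/2}}
\exp\!\Big( -\tfrac{1}{2}\,(x-\mu)^\top \Sigma^{-1} (x-\mu) \Big)
```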

SLIDE 14

Marginalization

Suppose (X1,…,Xn) ~ N(µ, Σ). What is P(X1)? More generally: let A = {i1,…,ik} ⊆ {1,…,n} and write XA = (Xi1,…,Xik). Then XA ~ N(µA, ΣAA): just select the corresponding entries of µ and rows/columns of Σ.

SLIDE 15

Conditioning

Suppose (X1,…,Xn) ~ N(µ, Σ), decomposed as (XA, XB). What is P(XA | XB)? We have P(XA = xA | XB = xB) = N(xA; µA|B, ΣA|B), where

µA|B = µA + ΣAB ΣBB⁻¹ (xB − µB)
ΣA|B = ΣAA − ΣAB ΣBB⁻¹ ΣBA

Computable using linear algebra!
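
A minimal NumPy sketch of both operations, marginalization (previous slide) and conditioning (this slide); the index sets and parameter values are made-up toy numbers:

```python
import numpy as np

mu = np.array([1.0, 2.0, 3.0])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])

A, B = [0], [1, 2]          # X_A = (X1,), X_B = (X2, X3) in the slide's notation

# Marginalization: just select entries of mu and a submatrix of Sigma.
mu_A, Sigma_AA = mu[A], Sigma[np.ix_(A, A)]

# Conditioning on X_B = x_B via the formulas above.
x_B = np.array([1.5, 2.5])
Sigma_AB = Sigma[np.ix_(A, B)]
Sigma_BB = Sigma[np.ix_(B, B)]
K = Sigma_AB @ np.linalg.inv(Sigma_BB)     # "regression" coefficients
mu_cond = mu[A] + K @ (x_B - mu[B])        # conditional mean
Sigma_cond = Sigma_AA - K @ Sigma_AB.T     # conditional covariance
print(mu_cond, Sigma_cond)
```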

SLIDE 16

Conditional linear Gaussians
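
The slide's content was graphical; as context for the Kalman filter slides later, the standard form of a conditional linear Gaussian (CLG) is

```latex
P(Y \mid X = x) = \mathcal{N}\big(y;\; A x + b,\; \Sigma \big)
```

i.e., a Gaussian whose mean depends linearly on the conditioning variable.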

SLIDE 17

Canonical Representation

Multivariate Gaussians are in the exponential family! Standard form N(µ, Σ) vs. canonical (information) form N⁻¹(η, Λ), related by

Λ = Σ⁻¹   η = Σ⁻¹ µ

SLIDE 18

Gaussian Networks

Zeros in precision matrix indicate missing edges in log-linear model!

SLIDE 19

Inference in Gaussian Networks

Can compute marginal distributions in O(n³)! For large numbers n of variables, this is still intractable. If the Gaussian network has low treewidth, we can use variable elimination / JT inference! We just need to be able to multiply and marginalize factors!

SLIDE 20

Multiplying factors in Gaussians

SLIDE 21

Conditioning in canonical form

Joint distribution (XA, XB) ~ N⁻¹(η, Λ) in canonical form. Conditioning: P(XA | XB = xB) = N⁻¹(xA; ηA|B, ΛA|B), where

ηA|B = ηA − ΛAB xB
ΛA|B = ΛAA

SLIDE 22

Marginalizing in canonical form

Recall the conversion formulas

Λ = Σ⁻¹   η = Σ⁻¹ µ

Marginal distribution of XA in canonical form (a Schur complement):

Λ′ = ΛAA − ΛAB ΛBB⁻¹ ΛBA
η′ = ηA − ΛAB ΛBB⁻¹ ηB

SLIDE 23

Standard vs. canonical form

                 Standard form N(µ, Σ)     Canonical form N⁻¹(η, Λ)
Marginalization  easy (select entries)     hard (Schur complement)
Conditioning     hard (invert ΣBB)         easy (drop/shift entries)

In standard form, marginalization is easy; in canonical form, conditioning is easy!

SLIDE 24

Variable elimination

In Gaussian Markov networks, variable elimination = Gaussian elimination (fast for low-bandwidth, i.e., low-treewidth, matrices).

SLIDE 25

Dynamical models

SLIDE 26

HMMs / Kalman Filters

The most famous graphical models:

Naïve Bayes model, Hidden Markov model, Kalman filter

Hidden Markov models:

Speech recognition, sequence analysis in computational biology

Kalman filters (control):

Cruise control in cars, GPS navigation devices, tracking missiles, …

Very simple models, but very powerful!!

SLIDE 27

HMMs / Kalman Filters

X1,…,XT: unobserved (hidden) variables
Y1,…,YT: observations
HMMs: Xi multinomial, Yi arbitrary
Kalman filters: Xi, Yi Gaussian distributions

Non-linear KF: Xi Gaussian, Yi arbitrary

[Figure: hidden chain X1 → X2 → … → X6 with observations Y1,…,Y6]

SLIDE 28

HMMs for speech recognition

Infer spoken words from audio signals

[Figure: hidden words X1,…,X6 emitting phoneme observations Y1,…,Y6: “He ate the cookies on the couch”]

SLIDE 29

Hidden Markov Models

Inference:

In principle, can use VE, JT, etc. But new variables Xt, Yt appear at each time step, so we would need to rerun inference from scratch each time.

Bayesian Filtering:

Suppose we have already computed P(Xt | y1:t). We want to efficiently compute P(Xt+1 | y1:t+1).

[Figure: hidden chain X1 → X2 → … → X6 with observations Y1,…,Y6]

SLIDE 30

Bayesian filtering

Start with P(X1). At each time step t:

Assume we have P(Xt | y1:t−1)
Condition on yt: P(Xt | y1:t)
Prediction: P(Xt+1, Xt | y1:t)
Marginalization: P(Xt+1 | y1:t)

(A discrete-HMM sketch of these steps follows the figure below.)

[Figure: hidden chain X1 → X2 → … → X6 with observations Y1,…,Y6]
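
A minimal sketch of one filtering cycle for a discrete HMM in NumPy; the transition matrix T, emission matrix E, and the observation sequence are hypothetical toy values:

```python
import numpy as np

T = np.array([[0.9, 0.1],       # T[i, j] = P(X_{t+1}=j | X_t=i), made-up values
              [0.2, 0.8]])
E = np.array([[0.7, 0.3],       # E[i, k] = P(Y_t=k | X_t=i), made-up values
              [0.1, 0.9]])
prior = np.array([0.5, 0.5])    # P(X_1)

def filter_step(belief, y):
    """One cycle: condition on y_t, then predict and marginalize."""
    conditioned = belief * E[:, y]            # P(X_t | y_{1:t}), unnormalized
    conditioned /= conditioned.sum()
    return conditioned @ T                    # P(X_{t+1} | y_{1:t})

belief = prior                                # P(X_1)
for y in [0, 1, 1, 0]:                        # toy observed symbols y_1..y_4
    belief = filter_step(belief, y)
print(belief)                                 # P(X_5 | y_{1:4})
```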

SLIDE 31

Parameter learning in HMMs

Assume we have labels for the hidden variables, and assume stationarity:

P(Xt+1 | Xt) is the same over all time steps
P(Yt | Xt) is the same over all time steps
This violates parameter independence (the parameters are “shared” across time)

Example: compute the parameters of P(Xt+1 = x | Xt = x′) by counting transitions (see the sketch below). What if we don't have labels for the hidden variables?

Use EM (later in this course)
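
A sketch of the counting-based MLE for the shared transition model, assuming fully labeled state sequences (the data below are made-up toy values):

```python
import numpy as np

# Fully observed state sequences (labels for the hidden variables).
sequences = [[0, 0, 1, 1, 1, 0],
             [1, 1, 0, 0, 1, 1]]
n_states = 2

# Count transitions pooled over ALL time steps (parameter sharing).
counts = np.zeros((n_states, n_states))
for seq in sequences:
    for x_t, x_next in zip(seq, seq[1:]):
        counts[x_t, x_next] += 1

# MLE: normalize each row to get P(X_{t+1} = x' | X_t = x).
T_hat = counts / counts.sum(axis=1, keepdims=True)
print(T_hat)
```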

SLIDE 32

Kalman Filters (Gaussian HMMs)

X1,…,XT: location of the object being tracked
Y1,…,YT: observations
P(X1): prior belief about the location at time 1
P(Xt+1 | Xt): “motion model”

How do I expect my target to move in the environment? Represented as a CLG: Xt+1 = A Xt + εt, with εt ~ N(0, Q)

P(Yt | Xt): “Sensor model”

What do I observe if the target is at location Xt? Represented as a CLG: Yt = H Xt + νt, with νt ~ N(0, R)

[Figure: hidden chain X1 → X2 → … → X6 with observations Y1,…,Y6]

SLIDE 33

Understanding Motion model

SLIDE 34

Understanding sensor model

SLIDE 35

Bayesian Filtering for KFs

Can use Gaussian elimination to perform inference in the “unrolled” model. Start with the prior belief P(X1). At every time step, we have the belief P(Xt | y1:t−1):

Condition on the observation: P(Xt | y1:t)
Predict (multiply in the motion model): P(Xt+1, Xt | y1:t)
“Roll-up” (marginalize out the previous time step): P(Xt+1 | y1:t)

[Figure: hidden chain X1 → X2 → … → X6 with observations Y1,…,Y6]

SLIDE 36

Implementation

Current belief: P(xt | y1:t−1) = N(xt; µt, Σt). Multiply in the sensor and motion models, then marginalize. A NumPy sketch of the resulting update follows.
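
A minimal sketch of one condition/predict/roll-up cycle, written in the standard Kalman filter update form (one way to carry out these steps; it is algebraically equivalent to the Gaussian conditioning and marginalization formulas from the earlier slides). The matrices A, H, Q, R match the CLG motion and sensor models above; all numbers are made-up toy values.

```python
import numpy as np

# 1-D constant-position model; A, H, Q, R as in the CLG motion/sensor models.
A = np.array([[1.0]])        # motion model: X_{t+1} = A X_t + N(0, Q)
Q = np.array([[0.1]])
H = np.array([[1.0]])        # sensor model: Y_t = H X_t + N(0, R)
R = np.array([[0.5]])

def kf_step(mu, Sigma, y):
    """One Bayesian filtering cycle: condition on y_t, then predict/roll-up."""
    # Condition: Kalman gain, then update mean and covariance.
    S = H @ Sigma @ H.T + R                  # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)       # Kalman gain
    mu_f = mu + K @ (y - H @ mu)             # mean of P(X_t | y_{1:t})
    Sigma_f = Sigma - K @ H @ Sigma          # covariance of P(X_t | y_{1:t})
    # Predict + roll-up: push the belief through the motion model.
    return A @ mu_f, A @ Sigma_f @ A.T + Q   # P(X_{t+1} | y_{1:t})

mu, Sigma = np.array([0.0]), np.array([[1.0]])   # prior P(X_1)
for y in [0.9, 1.1, 1.0]:                        # toy observations
    mu, Sigma = kf_step(mu, Sigma, np.array([y]))
print(mu, Sigma)
```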

SLIDE 37

What if observations not “linear”?

Linear observations:

Yt = H Xt + noise

Nonlinear observations: Yt = g(Xt) + noise, for a nonlinear function g

SLIDE 38

Incorporating Non-gaussian observations

The nonlinear observation model P(Yt | Xt) is not Gaussian. First approach: approximate P(Yt | Xt) as a CLG:

Linearize P(Yt | Xt) around the current estimate E[Xt | y1:t−1]
Known as the Extended Kalman Filter (EKF)
Can perform poorly if P(Yt | Xt) is highly nonlinear

Second approach: approximate P(Yt, Xt) as Gaussian:

Takes the correlation in Xt into account
After obtaining the approximation, condition on Yt = yt (now a “linear” observation)

SLIDE 39

Finding Gaussian approximations

Need to find a Gaussian approximation of P(Xt, Yt). How?

Gaussians are in the exponential family ⇒ moment matching!!

The moments to match (taking expectations over Xt ~ P(xt | y1:t−1)):

E[Yt] = ∫ E[Yt | Xt = x] P(x | y1:t−1) dx
E[Yt²] = ∫ E[Yt² | Xt = x] P(x | y1:t−1) dx
E[Xt Yt] = ∫ x E[Yt | Xt = x] P(x | y1:t−1) dx

SLIDE 40

Linearization by integration

Need to integrate the product of a Gaussian with an arbitrary function. Can do that by numerical integration:

Approximate the integral as a weighted sum over evaluation points
Gaussian quadrature defines the locations and weights of the points
In 1 dimension: with n points, Gaussian quadrature is exact for polynomials of degree up to 2n − 1
In higher dimensions: need exponentially many points for exact evaluation of polynomials

Applying this idea to the filtering updates is known as the “Unscented” Kalman Filter (UKF). A 1-D quadrature sketch follows.
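
A sketch of the 1-D case using NumPy's Gauss-Hermite rules: approximating E[g(X)] for X ~ N(µ, σ²) and an arbitrary nonlinear g (the choice g = sin and the parameter values are made-up examples):

```python
import numpy as np

def gaussian_expectation(g, mu, sigma, n_points=10):
    """Approximate E[g(X)] for X ~ N(mu, sigma^2) by Gauss-Hermite quadrature."""
    # Probabilists' Hermite rule: integrates against the weight exp(-x^2 / 2).
    x, w = np.polynomial.hermite_e.hermegauss(n_points)
    # Change of variables to N(mu, sigma^2); divide by sqrt(2*pi), the total
    # mass of the weight function, to turn the sum into an expectation.
    return np.sum(w * g(mu + sigma * x)) / np.sqrt(2 * np.pi)

# Toy check: for X ~ N(0.5, 0.2^2), E[sin(X)] = sin(mu) * exp(-sigma^2 / 2),
# about 0.4699, which the quadrature approximation should match closely.
print(gaussian_expectation(np.sin, mu=0.5, sigma=0.2))
```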

SLIDE 41

Factored dynamical models

So far: HMMs and Kalman filters. What if we have more than one variable at each time step?

E.g., temperature at different locations, or road conditions in a road network? These are spatio-temporal models.

[Figure: hidden chain X1 → X2 → … → X6 with observations Y1,…,Y6]

SLIDE 42

Dynamic Bayesian Networks

At every time step we have a Bayesian network. The variables at each time step t are called a “slice” St. “Temporal” edges connect slice St+1 with St.

[Figure: three slices with variables A, B, C, D, E at t = 1, 2, 3]

SLIDE 43

Tasks

Read Koller & Friedman, Sections 6.2.3 and 15.1