Embedded probabilistic programming
Oleg Kiselyov, Chung-chieh Shan (PowerPoint PPT presentation)

SLIDE 1

Embedded probabilistic programming

Oleg Kiselyov
FNMOC
oleg@pobox.com

Chung-chieh Shan
Rutgers University
ccshan@cs.rutgers.edu

17 July 2009

SLIDE 2

Probabilistic inference

Model (what):  Pr(Reality),  Pr(Obs | Reality),  obs

Inference (how):  Pr(Reality | Obs = obs) = Pr(Obs = obs | Reality) Pr(Reality) / Pr(Obs = obs)


SLIDE 4

Declarative probabilistic inference

Model (what), inference (how):

◮ Toolkit (BNT, PFP): the model invokes distributions, conditionalization, . . .
◮ Language (BLOG, IBAL, Church): random choice, observation, . . . ; inference interprets the model


SLIDE 7

Declarative probabilistic inference

◮ Toolkit (BNT, PFP)
  + use existing libraries, types, debugger
  + easy to add custom inference
◮ Language (BLOG, IBAL, Church)
  + random variables are ordinary variables
  + compile models for faster inference

Today: best of both. Express models and inference as interacting programs in the same general-purpose language.

Payoff: expressive models
  + models of inference: bounded-rational theory of mind
Payoff: fast inference
  + deterministic parts of models run at full speed
  + importance sampling

SLIDE 8

Outline

◮ Expressivity
  Nested inference
Implementation
  Reifying a model into a search tree
  Importance sampling with look-ahead
Performance

SLIDE 9

Hidden Markov model

[Diagram: states 1-7 in a chain, with transition probabilities 0.7, 0.4, and 0.3]

Query: Pr(State5 | Obs4 = L)


SLIDE 14

Hidden Markov model

type state = int
type obs = L | R
let nstates = 8
let transition_prob = [| [(0.7,0); (0.3,1)]; ... |]
let evolve : state -> state = fun st ->
  dist (transition_prob.(st))
let observe : state -> obs = fun st ->
  let p = float st /. float (nstates - 1) in
  dist [(1.-.p, L); (p, R)]
let rec run = fun n obs ->
  let st = if n = 1 then uniform nstates
           else evolve (run (n - 1) obs) in
  obs st n; st

run 5 (fun st n -> if n = 4 && observe st <> L then fail ())

Models are ordinary code (in OCaml) using a library function dist.


SLIDE 18

Hidden Markov model

type state = int
type obs = L | R
let nstates = 8
let transition_prob = [| [(0.7,0); (0.3,1)]; ... |]
let evolve : state -> state = fun st ->
  dist (transition_prob.(st))
let observe : state -> obs = fun st ->
  let p = float st /. float (nstates - 1) in
  dist [(1.-.p, L); (p, R)]
let rec run = fun n obs ->
  let st = if n = 1 then uniform nstates
           else evolve (run (n - 1) obs) in
  obs st n; st

normalize (exact_reify (fun () ->
  run 5 (fun st n -> if n = 4 && observe st <> L then fail ())))

Models are ordinary code (in OCaml) using a library function dist. Inference applies to a thunk and returns a distribution. Deterministic parts of models run at full speed.
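For intuition, the dist / fail / exact_reify interface can be sketched in Python. This is an illustrative translation, not the paper's OCaml library, and the two-state chain below is invented for the demo (the paper's model has eight states):

```python
from collections import defaultdict

def exact_reify(model):
    """Enumerate every run of `model`, a function of (dist, fail).
    Returns an unnormalized map from outcome to probability; runs that
    call fail() simply drop their mass."""
    outcomes = defaultdict(float)
    pending = [[]]                      # choice-point prefixes left to explore
    class _Stop(Exception):
        pass
    while pending:
        prefix = pending.pop()
        pos = 0
        def dist(choices):
            nonlocal pos
            if pos < len(prefix):       # replay an already-made choice
                _, v = prefix[pos]
                pos += 1
                return v
            for p, v in choices:        # fresh choice point: branch
                pending.append(prefix + [(p, v)])
            raise _Stop()
        def fail():
            raise _Stop()
        try:
            result = model(dist, fail)
        except _Stop:
            continue
        weight = 1.0
        for p, _ in prefix:
            weight *= p
        outcomes[result] += weight
    return dict(outcomes)

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# Hypothetical two-state chain, three steps, state 0 observed at step 2.
def hmm(dist, fail):
    st = dist([(0.5, 0), (0.5, 1)])             # uniform start
    for step in (2, 3):
        st = dist([(0.7, st), (0.3, 1 - st)])   # stay 0.7 / move 0.3
        if step == 2 and st != 0:
            fail()                              # condition on the observation
    return st

posterior = normalize(exact_reify(hmm))          # Pr(State3 | Obs2 = 0)
```

Here exact_reify re-runs the model once per completed choice sequence; the OCaml implementation instead captures the continuation at each dist call, but both yield the same distribution, with failed runs contributing no mass.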

SLIDE 19

Models as programs in a general-purpose language

Reuse existing infrastructure!

◮ Rich libraries: lists, arrays, database access, I/O, . . .
◮ Type inference
◮ Functions as first-class values
◮ Compiler
◮ Debugger
◮ Memoization

Express Dirichlet processes, etc. (Goodman et al. 2008)
Speed up inference using lazy evaluation, bucket elimination, sampling w/ memoization (Pfeffer 2007)


SLIDE 22

Hidden Markov model

type state = int
type obs = L | R
let nstates = 8
let transition_prob = [| [(0.7,0); (0.3,1)]; ... |]
let evolve : state -> state = fun st ->
  dist (transition_prob.(st))
let observe : state -> obs = fun st ->
  let p = float st /. float (nstates - 1) in
  dist [(1.-.p, L); (p, R)]
let rec run = fun n obs ->
  let st = if n = 1 then uniform nstates
           else evolve (dist (exact_reify (fun () -> run (n - 1) obs))) in
  obs st n; st

normalize (exact_reify (fun () ->
  run 5 (fun st n -> if n = 4 && observe st <> L then fail ())))


SLIDE 24

Self-application: nested inference

Choose a coin that is either fair or completely biased for true.

let biased = flip 0.5 in
let coin = fun () -> flip 0.5 || biased in

Let p be the probability that flipping the coin yields true.
What is the probability that p is at least 0.3?

SLIDE 25

Self-application: nested inference

Choose a coin that is either fair or completely biased for true.

let biased = flip 0.5 in
let coin = fun () -> flip 0.5 || biased in

Let p be the probability that flipping the coin yields true.
What is the probability that p is at least 0.3? Answer: 1.

at_least 0.3 true (exact_reify coin)

SLIDE 26

Self-application: nested inference

Choose a coin that is either fair or completely biased for true.

let biased = flip 0.5 in
let coin = fun () -> flip 0.5 || biased in

Let p be the probability that flipping the coin yields true.
What is the probability that p is at least 0.3? Answer: 1.

exact_reify (fun () ->
  at_least 0.3 true (exact_reify coin))


SLIDE 28

Self-application: nested inference

Choose a coin that is either fair or completely biased for true.

let biased = flip 0.5 in
let coin = fun () -> flip 0.5 || biased in

Let p be the probability that flipping the coin yields true. Estimate p by flipping the coin twice. What is the probability that our estimate of p is at least 0.3? Answer: 7/8.

exact_reify (fun () ->
  at_least 0.3 true (sample 2 coin))

Returns a distribution, not just a nested query (Goodman et al. 2008). Inference procedures are OCaml code using dist, like models. Works with observation, recursion, memoization. Bounded-rational theory of mind without interpretive overhead.
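The two answers on this slide can be double-checked by brute-force enumeration. A small Python check over the four flip outcomes (plain arithmetic, not the paper's OCaml machinery):

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)

# Inner exact inference: p = Pr[coin () = true], given `biased`.
def p_true(biased):
    return Fraction(1) if biased else half      # flip 0.5 || biased

# Query 1: Pr[p >= 0.3] over the choice of biased. Expected: 1.
q1 = sum(half for biased in (True, False)
         if p_true(biased) >= Fraction(3, 10))

# Query 2: estimate p from two coin flips instead; Pr[estimate >= 0.3].
# Only a fair coin showing false twice falls below 0.3, so expected: 7/8.
q2 = Fraction(0)
for biased in (True, False):
    for f1, f2 in product((True, False), repeat=2):
        est = Fraction((f1 or biased) + (f2 or biased), 2)
        if est >= Fraction(3, 10):
            q2 += half ** 3                     # biased choice + two flips
```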

SLIDE 29

Outline

Expressivity
  Nested inference
◮ Implementation
  Reifying a model into a search tree
  Importance sampling with look-ahead
Performance

SLIDE 30

Reifying a model into a search tree

[Figure: a weighted search tree with leaves true and false and unexplored subtrees; branch probabilities include .8, .2, .3, .6, .5]

Exact inference by depth-first brute-force enumeration. Rejection sampling by top-down random traversal.
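Rejection sampling by top-down traversal can be sketched over such a reified tree. The node encoding and the example tree below are hypothetical, invented for the illustration:

```python
import random

# A node is ('leaf', value), ('fail',), or ('node', [(prob, subtree), ...]).
# Hypothetical tree: true with mass .5, failure with mass .3, and a
# subtree of mass .2 that always ends in false.
TREE = ('node', [(0.5, ('leaf', True)),
                 (0.3, ('fail',)),
                 (0.2, ('node', [(1.0, ('leaf', False))]))])

def rejection_sample(tree, rng):
    """Walk down from the root, picking each branch with its stated
    probability; return the leaf value, or None on a failure node."""
    while tree[0] == 'node':
        r = rng.random()
        for p, sub in tree[1]:
            r -= p
            if r <= 0:
                tree = sub
                break
        else:
            return None             # numeric round-off; treat as failure
    return tree[1] if tree[0] == 'leaf' else None

rng = random.Random(0)
draws = [rejection_sample(TREE, rng) for _ in range(20000)]
accepted = [d for d in draws if d is not None]
p_true = accepted.count(True) / len(accepted)   # about .5 / .7 = 0.714
```

Failed walks are rejected outright, which is exactly why rejection sampling wastes work when the failure mass is large; the importance sampler later in the talk avoids that.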


SLIDE 35

Reifying a model into a search tree

[Figure: the search tree with open and closed nodes; reify maps a thunk of type unit -> bool to the tree, reflect maps it back]

Inference procedures cannot access models' source code. Reify then reflect (materialized views):

◮ Brute-force enumeration becomes bucket elimination
◮ Sampling becomes particle filtering

SLIDE 36

Reifying a model into a search tree

[Figure: the search tree; reify maps a thunk of type unit -> bool to the tree, reflect maps it back]

Implementation:

◮ represent a probability and state monad (Giry 1982, Moggi 1990, Filinski 1994)
◮ using first-class delimited continuations (Strachey & Wadsworth 1974, Felleisen et al. 1987, Danvy & Filinski 1989)

SLIDE 37

Reifying a model into a search tree

Implementation: shallow DSL embedding

let dist ch = List.map ... ch
let literal x = unit x
let app e0 e1 = bind e0 (fun f -> bind e1 (fun x -> f x))

Continuation-passing style

let dist ch = fun k -> List.map ...k... ch
let literal x = fun k -> k x
let app e0 e1 = fun k -> e0 (fun f -> e1 (fun x -> f x k))

First-class delimited continuations

let dist ch = shift (fun k -> List.map ...k... ch)
let literal x = x
let app e0 e1 = e0 e1
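For intuition, the first (monadic) embedding is easy to render in Python, representing a computation as a list of weighted values. This is an illustrative sketch, not the paper's code; the elided List.map body is stood in for by explicit weight multiplication, one standard way to complete it:

```python
# A probabilistic computation is a list of (probability, value) pairs.

def unit(x):
    return [(1.0, x)]

def bind(m, f):
    # Thread weights through: run f on each value, multiplying probabilities.
    return [(p * q, y) for p, x in m for q, y in f(x)]

def dist(choices):
    return list(choices)

# Flip a fair coin twice and AND the results.
flip = [(0.5, True), (0.5, False)]
prog = bind(dist(flip), lambda a:
       bind(dist(flip), lambda b:
       unit(a and b)))

# Tally the weighted results into a distribution.
tally = {}
for p, v in prog:
    tally[v] = tally.get(v, 0.0) + p
```

The CPS and shift versions on the slide compute the same weighted list; shift merely lets ordinary OCaml code stand for the monadic plumbing.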

SLIDE 38

Reifying a model into a search tree

Implementation: using clonable user-level threads

◮ Model runs inside a thread.
◮ dist clones the thread.
◮ fail kills the thread.
◮ Memoization mutates thread-local storage.

Analogy: virtualize (not emulate) a CPU. Nesting works.

SLIDE 39

Importance sampling with look-ahead

[Figure: the search tree, all nodes open]

Probability mass pc = 1; reported so far: (.2, ), (.6, )

SLIDE 46

Importance sampling with look-ahead

[Figure: the search tree, now fully explored; open probability mass pc = 0]

Reported: (.2, false), (.6, true)

1. Expand one level.
2. Report shallow successes.
3. Expand one more level and tally open probability.
4. Randomly choose a branch and go back to 2.
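The four steps above can be sketched over an explicit search tree in Python. The node encoding and the example tree are hypothetical; the point is the one-level look-ahead and the importance weight pc picked up when renormalizing over the open branches:

```python
import random

# A node is ('leaf', value), ('fail',), or ('node', [(prob, subtree), ...]).
# Hypothetical tree whose exact answer is true -> .6, false -> .1.
TREE = ('node', [(0.5, ('leaf', True)),
                 (0.3, ('fail',)),
                 (0.2, ('node', [(0.5, ('node', [(1.0, ('leaf', True))])),
                                 (0.5, ('node', [(1.0, ('leaf', False))]))]))])

def one_sample(tree, rng):
    """One pass of importance sampling with look-ahead: returns a map
    value -> weight contributed by this pass."""
    tally = {}
    weight = 1.0
    while tree[0] == 'node':
        open_branches = []
        for p, sub in tree[1]:          # look ahead one level
            if sub[0] == 'leaf':        # report shallow successes now
                tally[sub[1]] = tally.get(sub[1], 0.0) + weight * p
            elif sub[0] == 'node':
                open_branches.append((p, sub))
            # 'fail' branches contribute nothing
        pc = sum(p for p, _ in open_branches)   # open probability mass
        if pc == 0.0:
            return tally
        r = rng.random() * pc           # choose an open branch, renormalized
        for p, sub in open_branches:
            r -= p
            if r <= 0:
                tree = sub
                break
        weight *= pc                    # importance weight for renormalizing
    if tree[0] == 'leaf':               # degenerate case: root is a leaf
        tally[tree[1]] = tally.get(tree[1], 0.0) + weight
    return tally

def estimate(tree, n, seed=0):
    """Average the weighted tallies of n passes."""
    rng = random.Random(seed)
    total = {}
    for _ in range(n):
        for v, w in one_sample(tree, rng).items():
            total[v] = total.get(v, 0.0) + w
    return {v: w / n for v, w in total.items()}
```

Unlike the rejection sampler, every pass contributes mass (shallow leaves are reported deterministically), so deep failure branches are pruned before any probability is wasted on them.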
SLIDE 47

Outline

Expressivity
  Nested inference
Implementation
  Reifying a model into a search tree
  Importance sampling with look-ahead
◮ Performance

SLIDE 48

Motivic development in Beethoven sonatas (Pfeffer 2007)

[Musical notation: source motif]
SLIDE 52

Motivic development in Beethoven sonatas (Pfeffer 2007)

[Musical notation: a source motif and the destination motif to infer]

SLIDE 53

Motivic development in Beethoven sonatas (Pfeffer 2007)

Implemented using lazy stochastic lists.

% correct using importance sampling:

Motif pair              1    2    3    4    5    6    7
Pfeffer 2007 (30 sec)  93  100   28   80   98  100   63
This paper   (90 sec)  98  100   29   87   94  100   77
This paper   (30 sec)  92   99   25   46   72   95   61

SLIDE 54

Motivic development in Beethoven sonatas (Pfeffer 2007)

[Histogram: frequency in 100 trials (0 to 40) of ln Pr(D = 1 | S = 1), from -19 to -13, comparing IBAL with this paper's 90-second and 30-second runs]

SLIDE 55

Noisy radar blips for aircraft tracking (Milch et al. 2007)

[Figure: radar blips present and absent at t = 1, 2, 3; inferred distribution over the number of planes (1 to 7), narrowing as observations arrive]

Particle filter. Implemented using lazy stochastic coordinates.


SLIDE 59

Declarative probabilistic inference

◮ Toolkit (BNT, PFP)
  + use existing libraries, types, debugger
  + easy to add custom inference
◮ Language (BLOG, IBAL, Church)
  + random variables are ordinary variables
  + compile models for faster inference

Today: best of both. Express models and inference as interacting programs in the same general-purpose language.

Payoff: expressive models
  + models of inference: bounded-rational theory of mind
Payoff: fast inference
  + deterministic parts of models run at full speed
  + importance sampling