SLIDE 1

Stochastic particle methods in Bayesian statistical learning

  • P. Del Moral (INRIA team ALEA)

INRIA & Bordeaux Mathematical Institute & X CMAP

INRIA Workshop on Statistical Learning, IHP Paris, Dec. 5-6, 2011

Some hyper-refs

Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications, Springer (2004).

Sequential Monte Carlo Samplers, JRSS B (2006) (joint work with Doucet & Jasra).

On the Concentration of Interacting Processes, HAL-INRIA (2011) (joint work with Hu & Wu) [+ Refs].

More references on the website http://www.math.u-bordeaux1.fr/~delmoral/index.html [+ Links]

SLIDE 2

◮ Stochastic particle sampling methods
  • Interacting jump models
  • Genetic type interacting particle models
  • Particle Feynman-Kac models
  • The 4 particle estimates
  • Island particle models (⊂ parallel computing)

◮ Bayesian statistical learning
  • Nonlinear filtering models
  • Fixed parameter estimation in HMM models
  • Particle stochastic gradient models
  • Approximate Bayesian Computation
  • Interacting Kalman filters
  • Uncertainty propagation in numerical codes

◮ Concentration inequalities
  • Current population models
  • Particle free energy
  • Genealogical tree models
  • Backward particle models

SLIDE 3

Stochastic particle sampling methods
  • Interacting jump models
  • Genetic type interacting particle models
  • Particle Feynman-Kac models
  • The 4 particle estimates
  • Island particle models (⊂ parallel computing)

Bayesian statistical learning

Concentration inequalities

SLIDES 4-5

Introduction

Stochastic particle methods = universal adaptive sampling technique. Two types of stochastic interacting particle models:

◮ Diffusive particle models with mean field drifts

  [McKean-Vlasov style] ←→ fluid mechanics

◮ Interacting jump particle models

  [Boltzmann & Feynman-Kac style] ←→ fluid mechanics & physics + biology + engineering + statistics

SLIDE 6

Lectures ⊂ interacting jump models

◮ Interacting jumps = recycling transitions
◮ Discrete time models (⇔ geometric rejection/jump times)

SLIDE 7

Genetic type interacting particle models

◮ Mutation-proposals w.r.t. the Markov transitions X_{n−1} ⤳ X_n ∈ E_n.
◮ Selection-rejection-recycling w.r.t. the potential/fitness functions G_n.
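The following is a minimal sketch of one such mutation-selection transition (not from the talk; the Gaussian random-walk kernel, the fitness function, and multinomial resampling are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def mutation(particles, sigma=1.0):
    """Mutation-proposal: move every particle with a Markov transition
    (here an assumed Gaussian random-walk kernel M_n(x, dy))."""
    return particles + sigma * rng.standard_normal(particles.shape)

def selection(particles, G):
    """Selection-rejection-recycling: resample the population with
    probabilities proportional to the fitness values G_n(xi^i)."""
    w = G(particles)
    w = w / w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# one genetic-type transition: xi_{n-1} --mutation--> xi_hat --selection--> xi_n
N = 1000
xi = rng.standard_normal(N)            # initial population
G = lambda x: np.exp(-0.5 * x**2)      # illustrative fitness function G_n
xi = selection(mutation(xi), G)
```

Iterating this transition produces the genetic-type population used throughout the next slides.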

SLIDES 8-9

Equivalent particle algorithms

  Algorithm                  Free exploration        Interaction/selection
  -------------------------  ----------------------  ----------------------
  Sequential Monte Carlo     Sampling                Resampling
  Particle filters           Prediction              Updating
  Genetic algorithms         Mutation                Selection
  Evolutionary population    Exploration             Branching-selection
  Diffusion Monte Carlo      Free evolutions         Absorption
  Quantum Monte Carlo        Walkers motions         Reconfiguration
  Sampling algorithms        Transition proposals    Accept-reject-recycle

More botanical names: bootstrapping, spawning, cloning, pruning, replenishing, multi-level splitting, enrichment, go with the winner, ...

1950 ≤ meta-heuristic style stochastic algorithms ≤ 1996

SLIDE 10

A single stochastic model

Particle interpretation of Feynman-Kac path integrals

SLIDES 11-14

Genealogical tree evolution (size, time) = (N, n) = (3, 3)

[Figure: a three-particle genealogical tree grown over three time steps; arrows (✲) are mutation transitions, equalities (=) mark selection/duplication of an ancestor]

Meta-heuristics "96' Meta-Theorem": ancestral lines ≃ i.i.d. path samples w.r.t. the Feynman-Kac measure

  Q_n := (1/Z_n) { ∏_{0≤p<n} G_p(X_p) } P_n   with   P_n := Law(X_0, …, X_n)

& inversely!

Example: Q_n = Law((X_0, …, X_n) | X_p ∈ A_p, p < n) ⟺ G_n = 1_{A_n}

SLIDES 15-18

Particle estimates

More formally, let

  (ξ^i_{0,n}, ξ^i_{1,n}, …, ξ^i_{n,n}) := ancestral line of the i-th current individual ξ^i_n.

⇓

  (1/N) ∑_{1≤i≤N} δ_{(ξ^i_{0,n}, ξ^i_{1,n}, …, ξ^i_{n,n})} →_{N→∞} Q_n

⊕ Current population models

  η^N_n := (1/N) ∑_{1≤i≤N} δ_{ξ^i_n} →_{N→∞} η_n = n-th time marginal of Q_n

⊕ Unbiased particle approximation

  Z^N_n = ∏_{0≤p<n} η^N_p(G_p) →_{N→∞} Z_n = E[ ∏_{0≤p<n} G_p(X_p) ] = ∏_{0≤p<n} η_p(G_p)

Ex.: G_n = 1_{A_n} ⇒ Z^N_n = proportion of success → P(X_p ∈ A_p, p < n)
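As an illustration of these estimates (a sketch under assumed toy dynamics, not code from the talk), take the indicator case G_p = 1_{A_p} with a Gaussian random walk and half-line sets A_p = [a, ∞): the unbiased estimate Z^N_n is then the product over time of the surviving proportions:

```python
import numpy as np

rng = np.random.default_rng(1)

def success_probability(n=10, N=10_000, a=0.0):
    """Genetic particle estimate of P(X_p >= a, 0 <= p < n) for a Gaussian
    random walk, i.e. the Feynman-Kac model with G_p = 1_[a, inf)."""
    xi = rng.standard_normal(N)          # N samples from eta_0
    Z = 1.0
    for p in range(n):
        alive = xi >= a                  # G_p(xi^i) in {0, 1}
        Z *= alive.mean()                # eta^N_p(G_p) = proportion of success
        if not alive.any():
            return 0.0
        # selection: recycle uniformly among the surviving particles
        xi = rng.choice(xi[alive], size=N, replace=True)
        # mutation: one more random-walk step
        xi = xi + rng.standard_normal(N)
    return Z                             # Z^N_n = prod_{p<n} eta^N_p(G_p)

print(success_probability())             # estimates P(X_p >= 0, 0 <= p < 10)
```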

SLIDES 19-38

Graphical illustration: η^N_n := (1/N) ∑_{1≤i≤N} δ_{ξ^i_n} ≃ η_n

[Animation: twenty frames showing the particle population ξ^i_n tracking the flow of measures η_n]

SLIDES 39-40

Complete ancestral tree when G_{n−1}(x) M_n(x, dy) = H_n(x, y) λ(dy)

Backward Markov chain model

  Q^N_n(d(x_0, …, x_n)) := η^N_n(dx_n) M_{n,η^N_{n−1}}(x_n, dx_{n−1}) ⋯ M_{1,η^N_0}(x_1, dx_0)

with the random particle matrices

  M_{n+1,η^N_n}(x_{n+1}, dx_n) ∝ η^N_n(dx_n) H_{n+1}(x_n, x_{n+1})

Example: normalized additive functionals

  f̄_n(x_0, …, x_n) = (1/(n+1)) ∑_{0≤p≤n} f_p(x_p)

⇓

  Q^N_n(f̄_n) := (1/(n+1)) ∑_{0≤p≤n} [η^N_n M_{n,η^N_{n−1}} ⋯ M_{p+1,η^N_p}](f_p)   ← matrix operations
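A sketch of these matrix operations (assuming, for illustration only, a Gaussian random-walk transition density H and post-selection populations with uniform weights stored from a forward run such as the ones above):

```python
import numpy as np

def gaussian_H(x, y, sigma=1.0):
    """Assumed transition density H(x, y) of a Gaussian random walk;
    returns the matrix H[j, i] = density of y[j] given x[i]."""
    return np.exp(-0.5 * ((y[:, None] - x[None, :]) / sigma) ** 2)

def backward_additive(populations, f, H=gaussian_H):
    """Backward Markov chain estimate of the normalized additive functional
    (1/(n+1)) sum_p f(x_p), from the stored populations xi_p of a forward
    run (taken post-selection, hence with uniform weights)."""
    n = len(populations) - 1
    N = len(populations[0])
    v = np.full(N, 1.0 / N)                        # weights at the final time n
    total = v @ f(populations[n])
    for p in range(n - 1, -1, -1):
        B = H(populations[p], populations[p + 1])  # B[j, i] ∝ H(x_p^i, x_{p+1}^j)
        B /= B.sum(axis=1, keepdims=True)          # random particle matrices
        v = v @ B                                  # smoothed weights at time p
        total += v @ f(populations[p])
    return total / (n + 1)

# usage: backward_additive(stored_populations, f=lambda x: x)
```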
SLIDE 41

Island models (⊂ parallel computing)

Reminder: the unbiased property

  E[ f_n(X_n) ∏_{0≤p<n} G_p(X_p) ] = E[ η^N_n(f_n) ∏_{0≤p<n} η^N_p(G_p) ] = E[ F_n(𝒳_n) ∏_{0≤p<n} 𝒢_p(𝒳_p) ]

with the island evolution Markov chain model

  𝒳_n := η^N_n   and   𝒢_n(𝒳_n) := η^N_n(G_n) = 𝒳_n(G_n)

⇓

Particle model with (𝒳_n, 𝒢_n(𝒳_n)) = interacting island particle model
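A schematic two-level implementation of this idea (all specifics are illustrative assumptions: the inner model is the toy rare-event example above, with plain multinomial selection between islands). Note that the inner selection step is part of the island's Markov transition, while the outer potential reuses the island's average fitness:

```python
import numpy as np

rng = np.random.default_rng(3)

def smc_step(island, a=0.0):
    """Inner N-particle SMC transition (selection w.r.t. G = 1_[a, inf),
    then random-walk mutation): the Markov transition of the island chain."""
    alive = island >= a
    if not alive.any():
        return island                     # an extinct island stays extinct
    island = rng.choice(island[alive], size=len(island), replace=True)
    return island + rng.standard_normal(len(island))

def interacting_islands(n=10, n_islands=20, N=200, a=0.0):
    """Top-level genetic algorithm whose individuals are entire islands
    X_n = eta^N_n, with potentials G_n(X_n) = eta^N_n(G_n)."""
    islands = [rng.standard_normal(N) for _ in range(n_islands)]
    Z = 1.0
    for _ in range(n):
        pots = np.array([(isl >= a).mean() for isl in islands])  # G_n(X_n)
        Z *= pots.mean()
        if pots.sum() == 0:
            return 0.0
        idx = rng.choice(n_islands, size=n_islands, p=pots / pots.sum())
        islands = [smc_step(islands[i]) for i in idx]            # island mutation
    return Z

print(interacting_islands())   # same quantity as the single-level estimate
```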

SLIDE 42

Stochastic particle sampling methods

Bayesian statistical learning
  • Nonlinear filtering models
  • Fixed parameter estimation in HMM models
  • Particle stochastic gradient models
  • Approximate Bayesian Computation
  • Interacting Kalman filters
  • Uncertainty propagation in numerical codes

Concentration inequalities

SLIDE 43

Bayesian statistical learning

SLIDES 44-45

Signal processing & filtering models

Law( Markov process X | noisy & partial observations Y )

◮ Signal X: target evolution (missile, plane, robot, vehicle, image contours), forecasting models, asset volatility, speech signals, ...

◮ Observation Y: radar/sonar/GPS sensors, financial asset prices, image processing, audio receivers, statistical data measurements, ...

⊂ Multiple-object tracking models (a considerably more complex problem):

  Law( X = ∑_{1≤i≤N^X_t} δ_{X^i_t} | Y = ∑_{1≤i≤N^Y_t} δ_{Y^i_t} )

On the Stability and the Approximation of Branching Distribution Flows, with Applications to Nonlinear Multiple Target Filtering. F. Caron, P. Del Moral, M. Pace, and B.-N. Vo (HAL-INRIA RR-7376) [50p]. Stochastic Analysis and Applications, Volume 29, Issue 6, 2011.

Comparison of Implementations of Gaussian Mixture PHD Filters. M. Pace, P. Del Moral, F. Caron. 13th International Conference on Information Fusion, EICC, Edinburgh, UK, 26-29 July (2010).

SLIDES 46-47

Filtering (prediction ⊕ smoothing): p((x_0, …, x_n) | (y_0, …, y_n)) and p(y_0, …, y_n)?

Bayes' rule:

  p((x_0, …, x_n) | (y_0, …, y_n)) ∝ p((y_0, …, y_n) | (x_0, …, x_n)) × p(x_0, …, x_n)

with the likelihood factorization

  p((y_0, …, y_n) | (x_0, …, x_n)) = ∏_{0≤k≤n} p(y_k | x_k)   ← likelihood functions G_k

⇓

Feynman-Kac models: G_n(x_n) := p(y_n | x_n) & P_n := Law(X_0, …, X_n)

  Law((X_0, …, X_n) | Y_p = y_p, p < n) = (1/Z_n) { ∏_{0≤p<n} G_p(X_p) } P_n

Note: the stochastic model is not unique!

SLIDE 48

Hidden Markov chain problems

Law( fixed parameter Θ, signal X^Θ | noisy & partial observations Y^Θ )

◮ Parameter Θ: unknown kinetic model parameters, statistical parameters (signal/sensors), hypothesis testing, ...

◮ Signal X^Θ: single or multiple targets evolution, forecasting models, financial asset volatility, speech signals, video images, ...

◮ Observation Y^Θ: radar/sonar/GPS sensors, financial asset prices, image processing, statistical data measurements, ...

SLIDES 49-50

Posterior density

  p(θ | (y_0, …, y_n)) ∝ p((y_0, …, y_n) | θ) × p(θ)

with the likelihood

  p((y_0, …, y_n) | θ) = ∏_{0≤k≤n} p(y_k | θ, (y_0, …, y_{k−1}))   ← likelihood functions

⇓ Multiplicative formulation

  Law(Θ | (y_0, …, y_n)) ∝ { ∏_{0≤p≤n} h_p(θ) } λ(dθ)

with h_n(θ) := p(y_n | θ, (y_0, …, y_{n−1})) & λ := Law(Θ)

SLIDES 51-52

First key observation:

  p((y_0, …, y_n) | θ) = ∏_{0≤p≤n} h_p(θ) = Z_n(θ)

with Z_n(θ) the normalizing constant of the conditional distribution

  p((x_0, …, x_n) | (y_0, …, y_n), θ) = [1 / p((y_0, …, y_n) | θ)] × p((y_0, …, y_n) | (x_0, …, x_n), θ) × p((x_0, …, x_n) | θ)

Second key observation: h_n(θ) and Z_n(θ) are easy to compute for linear/Gaussian models.

SLIDE 53

Third key observation: any target measure of the form

  η_n(dθ) = (1/Z_n) { ∏_{0≤p≤n} h_p(θ) } λ(dθ)

is the n-th time marginal of the Feynman-Kac measure

  Q_n := (1/Z_n) { ∏_{0≤p<n} G_p(Θ_p) } P_n

with G_n = h_{n+1} and P_n := Law(Θ_0, …, Θ_n), where Θ_{p−1} ⤳ Θ_p is an MCMC move with target measure η_p.

SLIDES 54-55

Particle auxiliary variables: θ ⤳ ξ^θ ∼ P(θ, dξ)

  η̄_n(dθ̄) ∝ { ∏_{0≤p≤n} h̄_p(θ̄) } λ̄(dθ̄)   with   λ̄(dθ̄) := λ(dθ) × P(θ, dξ)

with θ̄ = (θ, ξ) and

  h̄_n(θ̄) := (1/N) ∑_{1≤i≤N} p(y_n | ξ^{θ,i}_n) ≃_{N↑∞} p(y_n | θ, (y_0, …, y_{n−1})) = h_n(θ)

But by the unbiasedness property, the θ-marginal of η̄_n coincides with

  Law(Θ | (y_0, …, y_n)) ∝ { ∏_{0≤p≤n} h_p(θ) } λ(dθ)

Feynman-Kac formulation: Markov chain Θ̄_k = (Θ_k, ξ^{(k)}), MCMC moves with target η̄_n, and Ḡ_n = h̄_{n+1}.

SLIDES 56-57

Particle gradient models (θ ∈ R^d)

  Z_n(θ) = p((y_0, …, y_{n−1}) | θ) = E[ ∏_{0≤q<n} p(y_q | θ, X^θ_q) ]

⇒ (Fisher's identity: the score is the smoothed expectation of the complete-data score)

  ∇ log Z_n(θ)  [derivative]  =  Q^{(θ)}_n(Λ_n)  [path-integral]

with the Feynman-Kac measure Q^{(θ)}_n on path space associated with

  (X^θ_n, G^θ_n(x_n)) = (X^θ_n, p(y_n | θ, x_n))

and with the additive functional

  Λ_n(x_0, …, x_n) = ∑_{0≤q<n} ∇ log ( p(x_{q+1} | θ, x_q) p(y_q | θ, x_q) )

Particle gradient algorithm:

  Θ_n = Θ_{n−1} + τ_n Q^{(Θ_{n−1})}_n(Λ_n) ≃ Θ_{n−1} + τ_n Q^{(Θ_{n−1}),N}_n(Λ_n)
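A sketch of the crudest particle version of this gradient algorithm (a genealogical-tree estimate of Q^{(θ)}_n(Λ_n): each particle carries its running Λ-sum along its ancestral line). The drift model and step sizes are illustrative assumptions, and this estimate suffers from path degeneracy for long series, which the backward model of slides 39-40 avoids:

```python
import numpy as np

rng = np.random.default_rng(6)

def score_estimate(y, theta, N=5000, sigma_y=0.5):
    """Genealogical-tree estimate of grad log Z_n(theta) for the model
    X_q = X_{q-1} + theta + W_q, Y_q = X_q + sigma_y V_q. Each particle
    carries Lambda along its ancestral line; here the observation density
    does not depend on theta, so Lambda only has transition terms."""
    x = rng.standard_normal(N)
    s = np.zeros(N)                              # running Lambda per lineage
    for y_q in y:
        logw = -0.5 * ((y_q - x) / sigma_y) ** 2          # G_q = p(y_q | theta, x_q)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)                  # selection
        x, s = x[idx], s[idx]                             # lineages inherit Lambda
        x_new = x + theta + rng.standard_normal(N)        # mutation
        s = s + (x_new - x - theta)   # += grad_theta log p(x_new | theta, x)
        x = x_new
    return s.mean()                              # Q_n^{(theta), N}(Lambda_n)

# particle stochastic gradient: Theta_k = Theta_{k-1} + tau_k * score
y = np.cumsum(1.5 + rng.standard_normal(40)) + 0.5 * rng.standard_normal(40)
theta = 0.0
for k in range(1, 31):
    theta += (0.05 / k) * score_estimate(y, theta)
print(theta)                                     # drifts toward the true value 1.5
```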

SLIDES 58-59

Approximate Bayesian Computation

When p(y_n | x_n) is intractable or impossible to compute in reasonable time:

  { X_n = F_n(X_{n−1}, W_n),  Y_n = H_n(X_n, V_n) }

    with 𝒳_n = (X_n, Y_n)  ⟶

  { X_n = F_n(X_{n−1}, W_n),  Y^ε_n = Y_n + ε V^ε_n }

⇓

  Law(X | Y^ε = y) ≃_{ε↓0} Law(X | Y = y)

⇓ Feynman-Kac model with the Markov chain and the potentials

  𝒳_n = (X_n, Y_n)   and   G_n(𝒳_n) = p(Y^ε_n = y_n | Y_n)
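A sketch of the corresponding ABC particle filter (with an assumed Gaussian perturbation V^ε, so the potential becomes an explicit Gaussian kernel, and illustrative toy dynamics): the observation channel is only ever simulated, never evaluated:

```python
import numpy as np

rng = np.random.default_rng(7)

def abc_filter(y, N=5000, eps=0.3, rho=0.9):
    """ABC particle filter: particles carry (X_n, Y_n); the potential is
    G_n = N(y_n; Y_n, eps^2), the density of Y_n^eps = Y_n + eps V at the
    observed value. Toy dynamics X_n = rho X_{n-1} + W_n are assumed; the
    channel Y_n = H_n(X_n, V_n) only needs to be simulated, not evaluated."""
    x = rng.standard_normal(N)
    means = []
    for y_n in y:
        Y = x + 0.5 * rng.standard_normal(N)      # simulate Y_n given X_n
        logw = -0.5 * ((y_n - Y) / eps) ** 2      # ABC kernel potential
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ x)
        x = x[rng.choice(N, size=N, p=w)]         # selection
        x = rho * x + rng.standard_normal(N)      # mutation
    return np.array(means)
```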

SLIDES 60-61

Interacting Kalman filters

X_n = (X^1_n, X^2_n) with X^1_n Markov and (X^2_n, Y_n) | X^1 a linear-Gaussian model:

  X^2_n = A_n(X^1_n) X^2_{n−1} + B_n(X^1_n) W_n
  Y_n  = C_n(X^1_n) X^2_n + D_n(X^1_n) V_n

⇓

  Law( X^2_n | X^1, Y_p = y_p, p < n ) = η_{X^1,n} = Kalman Gaussian predictor

⇓

Law( (X^1, X^2) | Y ) = Feynman-Kac model with

  𝒳_n = (X^1_n, η_{X^1,n})   &   G_n(𝒳_n) = ∫ p(y_n | (x^1_n, x^2_n)) η_{X^1,n}(dx^2_n)
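A sketch of the resulting algorithm, often called a Rao-Blackwellized particle filter (a two-regime scalar model is assumed for illustration): each particle carries a regime value x^1 together with the Kalman predictor η_{x^1,n} = N(m, P), and the potential is the Gaussian predictive likelihood:

```python
import numpy as np

rng = np.random.default_rng(8)

A = np.array([0.5, 0.95])     # A_n(x1) for the two regimes x1 in {0, 1}
B = np.array([1.0, 0.5])      # B_n(x1)
C, D = 1.0, 0.5               # C_n, D_n taken regime-independent here
P_SWITCH = 0.05               # regime transition probability

def interacting_kalman(y, N=2000):
    """Each particle carries a regime x1 and the Kalman predictor
    eta_{x1,n} = N(m, P) of X^2_n; the potential is the integral of
    p(y_n | (x1, x2)) against eta_{x1,n}, i.e. N(y_n; C m, C^2 P + D^2)."""
    r = rng.integers(0, 2, size=N)               # regimes X^1_0
    m = np.zeros(N)                              # predictor means
    P = np.ones(N)                               # predictor variances
    means = []
    for y_n in y:
        S = C**2 * P + D**2
        logw = -0.5 * np.log(2 * np.pi * S) - 0.5 * (y_n - C * m) ** 2 / S
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(w @ (C * m))                # one-step predictor of Y_n
        idx = rng.choice(N, size=N, p=w)         # selection of (regime, predictor)
        r, m, P = r[idx], m[idx], P[idx]
        K = C * P / (C**2 * P + D**2)            # Kalman update with y_n ...
        m = m + K * (y_n - C * m)
        P = (1 - K * C) * P
        flip = rng.random(N) < P_SWITCH          # ... then regime mutation
        r = np.where(flip, 1 - r, r)
        m = A[r] * m                             # and Kalman prediction
        P = A[r] ** 2 * P + B[r] ** 2
    return np.array(means)
```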

SLIDES 62-64

◮ Uncertainty propagation in numerical codes

Law( inputs I | outputs O = C(I) ∈ reference or critical event )

With μ = Law(I) and A = {I : C(I) ∈ B}:

  P(I ∈ A) = μ(A)   &   Law(I | I ∈ A) = μ_A

Multi-level decomposition: h_n = 1_{A_n} with A_n ↓

  ⇒ μ_{A_n}(dx) ∝ { ∏_{0≤p≤n} h_p(x) } μ(dx)

⇒ Feynman-Kac representation: (X_{n−1} ⤳ X_n) = MCMC moves with target μ_{A_n} & G_n = 1_{A_{n+1}}

SLIDE 65

Stochastic particle sampling methods

Bayesian statistical learning

Concentration inequalities
  • Current population models
  • Particle free energy
  • Genealogical tree models
  • Backward particle models

SLIDE 66

Current population models

Constants (c_1, c_2) related to (bias, variance); c a universal constant; test functions ‖f_n‖ ≤ 1.

◮ ∀ (x ≥ 0, n ≥ 0, N ≥ 1), the probability of the event

  [η^N_n − η_n](f) ≤ (c_1/N) (1 + x + √x) + (c_2/√N) √x

is greater than 1 − e^{−x}.

◮ For y = (y^i)_{1≤i≤d}, let (−∞, y] := ∏_{1≤i≤d} (−∞, y^i] denote the cells in E_n = R^d, and set

  F_n(y) = η_n(1_{(−∞,y]})   and   F^N_n(y) = η^N_n(1_{(−∞,y]}).

∀ (x ≥ 0, n ≥ 0, N ≥ 1), the probability of the event

  √N ‖F^N_n − F_n‖ ≤ c √(d (x + 1))

is greater than 1 − e^{−x}.

SLIDE 67

Particle free energy models

Constants (c_1, c_2) related to (bias, variance); c a universal constant.

◮ ∀ (x ≥ 0, n ≥ 0, N ≥ 1, ε ∈ {+1, −1}), the probability of the event

  (ε/n) log (Z^N_n / Z_n) ≤ (c_1/N) (1 + x + √x) + (c_2/√N) √x

is greater than 1 − e^{−x}.

Note: since 0 ≤ ε ≤ 1 ⇒ (1 − e^{−ε}) ∨ (e^ε − 1) ≤ 2ε, we get

  e^{−ε} ≤ z_N/z ≤ e^ε ⇒ |z_N/z − 1| ≤ 2ε.
SLIDE 68

Genealogical tree models (:= η^N_n in path space)

Constants (c_1, c_2) related to (bias, variance); c a universal constant; test functions ‖f_n‖ ≤ 1.

◮ ∀ (x ≥ 0, n ≥ 0, N ≥ 1), the probability of the event

  [η^N_n − Q_n](f) ≤ c_1 ((n + 1)/N) (1 + x + √x) + c_2 √((n + 1)/N) √x

is greater than 1 − e^{−x}.

◮ For F_n = the indicator functions f_n of cells in E_n = (R^{d_0} × ⋯ × R^{d_n}): ∀ (x ≥ 0, n ≥ 0, N ≥ 1), the probability of the event

  sup_{f_n ∈ F_n} |η^N_n(f_n) − Q_n(f_n)| ≤ c (n + 1) √( (∑_{0≤p≤n} d_p) (x + 1) / N )

is greater than 1 − e^{−x}.

SLIDE 69

Backward particle models

Constants (c_1, c_2) related to (bias, variance); c a universal constant; f̄_n a normalized additive functional with ‖f_p‖ ≤ 1.

◮ ∀ (x ≥ 0, n ≥ 0, N ≥ 1), the probability of the event

  [Q^N_n − Q_n](f̄_n) ≤ (c_1/N) (1 + x + √x) + c_2 √( x / (N(n + 1)) )

is greater than 1 − e^{−x}.

◮ For f̄_{a,n} the normalized additive functional associated with f_p = 1_{(−∞,a]}, a ∈ R^d = E_n: ∀ (x ≥ 0, n ≥ 0, N ≥ 1), the probability of the event

  sup_{a ∈ R^d} |Q^N_n(f̄_{a,n}) − Q_n(f̄_{a,n})| ≤ c √( d (x + 1) / N )

is greater than 1 − e^{−x}.