Statistical modeling of summary values leads to accurate Approximate Bayesian Computations (PowerPoint PPT presentation)


SLIDE 1

Centre for Outbreak Analysis and Modelling

Statistical modeling of summary values leads to accurate Approximate Bayesian Computations

Oliver Ratmann (Imperial College London, UK), Anton Camacho (London School of Hygiene & Tropical Medicine, UK), Adam Meijer (National Institute of the Environment & Public Health, NL), Gé Donker (Netherlands Institute for Health Services Research, NL)

Tuesday, 7 January 14

SLIDE 2

Standard ABC

The ABC approximation to the likelihood is exact if 1) the summary statistics are sufficient, and 2) the upper and lower tolerances coincide. [Diagram labels: summary stat, tolerance]

(Beaumont 2002)

SLIDE 3

Standard ABC

(exactness conditions as on slide 2; Beaumont 2002)

In practice these conditions are not feasible, so standard ABC rests on an 'asymptotic' argument.
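The standard accept/reject step can be sketched as plain rejection ABC. A minimal, self-contained illustration (not the authors' code), using the variance toy example that appears later in the talk, with a hypothetical flat prior and the naive tolerances of slide 4:

```python
import random
import statistics

random.seed(1)

# Hypothetical observed data: n = 60 draws from N(0, 1).
n = 60
x = [random.gauss(0.0, 1.0) for _ in range(n)]
s2_x = statistics.variance(x)  # observed summary: sample variance

def abc_rejection(tau_minus, tau_plus, m, n_prior=20000):
    """Plain rejection ABC: draw sigma^2 from the prior, simulate y,
    accept when T = S^2(y)/S^2(x) falls inside [tau_minus, tau_plus]."""
    accepted = []
    for _ in range(n_prior):
        sigma2 = random.uniform(0.05, 5.0)  # flat prior (an assumption for this sketch)
        y = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(m)]
        T = statistics.variance(y) / s2_x
        if tau_minus <= T <= tau_plus:
            accepted.append(sigma2)
    return accepted

# Naive tolerances from slide 4.
posterior_draws = abc_rejection(0.35, 1.65, m=n)
print(len(posterior_draws), statistics.mean(posterior_draws))
```

The accepted draws approximate the posterior of σ²; how well depends entirely on the tolerance choice, which is the point of the slides that follow.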

SLIDE 4

Standard ABC is noisy

even with sufficient summary statistics (Fearnhead & Prangle 2012)

[Figure: n-ABC estimate of $\pi_\tau(\sigma^2 \mid x)$, n=60, naive tolerances $\tau^- = 0.35$, $\tau^+ = 1.65$; exact posterior $\pi(\sigma^2 \mid x)$ and its mode $\arg\max_{\sigma^2} \pi(\sigma^2 \mid x)$ shown for reference]

SLIDE 5

ABC*

[Figure: n-ABC estimate of $\pi_\tau(\sigma^2 \mid x)$, n=60, calibrated tolerances $\tau^- = 0.572$, $\tau^+ = 1.808$, m=97; exact posterior $\pi(\sigma^2 \mid x)$ and its mode shown for reference]

Can we construct ABC such that inference is accurate

  • wrt point estimate, eg MAP
  • wrt overall similarity in distribution, eg KL divergence
  • and maintain computational feasibility?

If yes, under which conditions? How general are these?

(Ratmann, Camacho, Meijer, Donker; arXiv 2013)

SLIDE 6

ABC* step 1

To avoid asymptotics, interpret the ABC accept/reject step as the outcome of a decision test with region

$$R = \{\, c^- \le T\big(s_{1:n}(x),\, s_{1:m}(y)\big) \le c^+ \,\}$$

T-test
  • Objective: declare $\mu(\theta)$, $\mu_x$ unequal
  • $H_0$: $\mu(\theta)$, $\mu_x$ equal; $H_1$: $\mu(\theta)$, $\mu_x$ unequal
  • rejection region: $(-\infty, c^-] \cup [c^+, \infty)$

ABC
  • Objective: declare $\mu(\theta)$, $\mu_x$ equal
  • $H_0$: $\mu(\theta)$, $\mu_x$ unequal; $H_1$: $\mu(\theta)$, $\mu_x$ equal
  • rejection region: $[c^-, c^+]$

SLIDE 7

ABC* step 1 (test setup as on slide 6)

$c^-$, $c^+$ are fully determined such that $P(R \mid H_0) \le \alpha$.

SLIDE 8

ABC* step 1 (test setup as on slide 6)

Let $\rho = \mu - \mu_x$; then the ABC approximation to the likelihood is the power function of the test, $\rho \mapsto P(R \mid \rho)$.

SLIDE 9

ABC* step 1 (test setup as on slides 6-8: the ABC approximation to the likelihood is the power function $\rho \mapsto P(R \mid \rho)$)

This holds for a specific test: the two-sided, one-sample equivalence hypothesis test.
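The swapped hypotheses make the accept step a two-one-sided-tests (TOST) equivalence decision. A minimal z-approximation sketch (an illustration only, not the paper's exact test statistic or calibration; `abc_equivalence_accept` is a hypothetical helper):

```python
import math
from statistics import NormalDist, mean, stdev

def abc_equivalence_accept(y_summaries, mu_x, tau_minus, tau_plus, alpha=0.05):
    """TOST equivalence decision for the swapped hypotheses:
    H0: rho = mu(theta) - mu_x outside [tau_minus, tau_plus]  (reject the draw),
    H1: rho inside [tau_minus, tau_plus]                      (accept the draw)."""
    m = len(y_summaries)
    se = stdev(y_summaries) / math.sqrt(m)
    rho_hat = mean(y_summaries) - mu_x
    z_crit = NormalDist().inv_cdf(1.0 - alpha)
    # Accept only if BOTH one-sided nulls (rho <= tau-, rho >= tau+) are
    # rejected at level alpha, which bounds P(accept | H0) by alpha.
    return (rho_hat - tau_minus) / se > z_crit and (tau_plus - rho_hat) / se > z_crit

# Simulated summaries centred on mu_x are accepted; discrepant ones are not.
close = [0.01 * i for i in range(-10, 11)]       # mean 0
far = [1.0 + 0.01 * i for i in range(-10, 11)]   # mean 1
print(abc_equivalence_accept(close, 0.0, -0.2, 0.2))  # True
print(abc_equivalence_accept(far, 0.0, -0.2, 0.2))    # False
```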

SLIDE 10

Example: test variance

suppose $x_{1:n} \sim N(0, \sigma_x^2)$ and $y_{1:m} \sim N(0, \sigma^2)$; then ...

SLIDE 11

Example: test variance

suppose $x_{1:n} \sim N(0, \sigma_x^2)$ and $y_{1:m} \sim N(0, \sigma^2)$, and for simplicity let the summary values equal the data. Then, with $\hat\sigma_x^2 = S^2(x_{1:n})$,

$$\rho = \sigma^2 / \hat\sigma_x^2, \qquad \rho^\star = 1$$

$$H_0:\ \rho \notin [\tau^-, \tau^+] \qquad H_1:\ \rho \in [\tau^-, \tau^+]$$

$$T = \frac{S^2(y_{1:m})}{S^2(x_{1:n})} = \rho \, \frac{1}{m-1} \sum_{i=1}^{m} \frac{(y_i - \bar y)^2}{\sigma^2} \;\sim\; \frac{\rho}{m-1} \, \chi^2_{m-1}$$

SLIDE 12

Example: test variance (formulas as on slide 11)

$\rho^\star = 1$ is the point of equality.

SLIDE 13

Example: test variance (formulas as on slide 11)

$\tau^-$, $\tau^+$ are tolerances on the population level.

SLIDE 14

Example: test variance (formulas as on slide 11)

We know the distribution of $T$, so we can work out $c^-$, $c^+$.

SLIDE 15

Example: test variance (verbatim repeat of slide 14)

SLIDE 16

Example: test variance (formulas as on slide 11)

We know the distribution of $T$, so we can work out $c^-$, $c^+$ and the power function $\rho \mapsto P(c^- \le T \le c^+ \mid \rho)$.

[Figure: power function; x-axis $\rho$ from 0.5 to 2.0, y-axis power from 0.0 to 0.8]
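Since $T$ has a scaled chi-square distribution here, the power function can be evaluated analytically. A sketch (not the authors' code): the chi-square CDF is implemented via the standard incomplete-gamma power series, which is adequate for the moderate arguments used here.

```python
import math

def reg_lower_gamma(a, x, max_terms=2000):
    """Regularized lower incomplete gamma P(a, x) via its power series:
    P(a, x) = x^a e^{-x} / Gamma(a+1) * sum_k prod_{j<=k} x/(a+j)."""
    if x <= 0.0:
        return 0.0
    term, series = 1.0, 1.0
    for k in range(1, max_terms):
        term *= x / (a + k)
        series += term
        if term < 1e-16 * series:
            break
    log_p = a * math.log(x) - x - math.lgamma(a + 1.0) + math.log(series)
    return min(1.0, math.exp(log_p))

def chi2_cdf(x, df):
    return reg_lower_gamma(df / 2.0, x / 2.0)

def power(rho, c_minus, c_plus, m):
    """rho -> P(c- <= T <= c+ | rho), with T ~ rho * chi2_{m-1} / (m-1)."""
    df = m - 1
    return chi2_cdf(df * c_plus / rho, df) - chi2_cdf(df * c_minus / rho, df)

# Naive choice c = tau, with the slide-4 tolerances and n = m = 60:
# high power near rho = 1, decaying away from it.
print(power(1.0, 0.35, 1.65, 60), power(0.3, 0.35, 1.65, 60))
```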

SLIDE 17

Example: test variance (formulas as on slide 11)

We know the distribution of $T$, so we can work out $c^-$, $c^+$ and the power function, and calibrate.

[Figure: power function; annotations: move mode, increase]

SLIDE 18

Example: test variance (formulas as on slide 11)

We know the distribution of $T$, so we can work out $c^-$, $c^+$ and the power function, and calibrate.

[Figure: power function; annotations: tighten, increase, move mode, increase]

SLIDE 19

Example: test variance (formulas as on slide 11)

then: calibrated tolerances.

[Figure: n-ABC estimate of $\pi_\tau(\sigma^2 \mid x)$, n=60; calibrated tolerances $\tau^- = 0.477$, $\tau^+ = 2.2$ vs naive tolerances $\tau^- = 0.35$, $\tau^+ = 1.65$; exact posterior $\pi(\sigma^2 \mid x)$ and its mode shown for reference]

SLIDE 20

Example: test variance (formulas as on slide 11)

then: calibrated tolerances, calibrated m.

[Figure: n-ABC estimate of $\pi_\tau(\sigma^2 \mid x)$, n=60; calibrated tolerances $\tau^- = 0.572$, $\tau^+ = 1.808$ at m=97 vs $\tau^- = 0.726$, $\tau^+ = 1.392$ at m=300; tolerances tighten as m grows; exact posterior shown for reference]
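The effect of calibrating m can be checked by Monte Carlo: with the two calibrated pairs quoted on this slide, the larger-m setting concentrates the acceptance probability more tightly around $\rho^\star = 1$. A sketch simulating $T$ directly from its scaled chi-square form (not the authors' code):

```python
import random

def mc_power(rho, tau_minus, tau_plus, m, n_sims=4000, seed=7):
    """Monte Carlo estimate of rho -> P(tau- <= T <= tau+ | rho),
    with T ~ rho * chi2_{m-1} / (m-1) as in the variance example."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(m - 1))
        if tau_minus <= rho * chi2 / (m - 1) <= tau_plus:
            hits += 1
    return hits / n_sims

# Calibrated pairs from this slide: (m=97, [0.572, 1.808]) vs (m=300, [0.726, 1.392]).
# Away from the point of equality (here rho = 1.3), the m=300 setting accepts less.
p_m97 = mc_power(1.3, 0.572, 1.808, 97)
p_m300 = mc_power(1.3, 0.726, 1.392, 300)
print(p_m97, p_m300)
```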

SLIDE 21

Example: test variance (repeat of slide 20, without the figure)

SLIDE 22

Conclusions-1

Using statistical decision theory, the ABC accept/reject step can be set up such that

  • the ABC* MAP equals the MAP of the exact posterior
  • the KL divergence of the ABC* posterior to the exact posterior is minimal

SLIDE 23

ABC* step 2

Statistical decision testing on the summary level:

  • 1. repeat data points on the summary level ("summary values") ➣ can model their distribution, eg $s_{1:n}(x) \sim N(\mu_x, \sigma_x^2)$, $s_{1:m}(y) \sim N(\mu(\theta), \sigma^2(\theta))$
  • 2. testing on the auxiliary space ➣ given $s_{1:n}(x)$, $s_{1:m}(y)$, is the underlying $\rho = \mu(\theta) - \mu_x$ small?
  • 3. indirect inference ➣ link the auxiliary space back to the original space

SLIDE 24

ABC* step 2 (setup as on slide 23)

Assumptions

Summary values can be found such that
  A1 they are sufficient for θ
  A2 their distribution can be modeled in an elementary way, so that test statistics are available and can be calibrated
plus further conditions to transport the accurate ABC* density to the original space.

SLIDE 25

ABC* step 2 (verbatim repeat of slide 24)

SLIDE 26

Summary values

suitable data points on a summary level can be found

[Figure: data]

SLIDE 27

Summary values

suitable data points on a summary level can be found; the data time series is biennial

SLIDE 28

Summary values

suitable data points on a summary level can be found: the data time series is biennial, and odd and even time series values can be modeled as iid Gaussian
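The odd/even subsetting can be sketched as simple index slicing (a hypothetical helper for illustration, not the authors' code):

```python
def odd_even_summary_values(series):
    """Split a (biennial) time series into its odd- and even-position values,
    giving two sets of roughly exchangeable summary values."""
    odd = series[0::2]   # 1st, 3rd, 5th, ... observations
    even = series[1::2]  # 2nd, 4th, 6th, ... observations
    return odd, even

# Toy biennial pattern: high years alternate with low years.
odd, even = odd_even_summary_values([3.1, 0.4, 2.8, 0.5, 3.0, 0.3])
print(odd, even)
```

Each subset can then be modeled as iid Gaussian on a suitable scale, which is what makes the decision tests of step 1 applicable.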

SLIDE 29

Modeling summary values

$s_{1:n}(x) \sim N(\mu_x, \sigma_x^2)$ (obs), $s_{1:n}(y) \sim N(\mu(\theta), \sigma^2(\theta))$ (sim), $\rho = \mu(\theta) - \mu_x$ (population error)

Link function, with D original parameters $\theta = (\theta_1, \dots, \theta_D)$ and K error parameters $\rho = (\rho_1, \dots, \rho_K)$:

$$L : \Theta \subset \mathbb{R}^D \to \Delta \subset \mathbb{R}^K, \qquad \theta \mapsto (\rho_1, \dots, \rho_K), \qquad \rho_k = \delta_k(\nu_{xk}, \nu_k(\theta))$$

Modeling summary values constructs an auxiliary probability space. Discussion wrt indirect inference (Gouriéroux 1993):
  • the difficulty in indirect inference is which auxiliary space to choose; here it is constructed empirically from the distribution of the summary values

SLIDE 30

ABC* indirect inference

Using assumptions A1, A2:

$$\pi_{\text{true posterior}}(\theta \mid x) \propto \ell(x \mid \theta)\, \pi(\theta) \propto \ell\big(s^k_{1:n_k}(x),\ k = 1, \dots, K \mid \theta\big)\, \pi(\theta) = \ell\big(s^k_{1:n_k}(x),\ k = 1, \dots, K \mid \rho\big)\, \pi(\rho)\, |\partial L(\theta)|$$

$$\pi_{abc}(\theta \mid x) \propto P_x(\text{ABC accept} \mid \rho)\, \pi(\rho)\, |\partial L(\theta)|$$

SLIDE 31

ABC* indirect inference (verbatim repeat of slide 30)

SLIDE 32

ABC* indirect inference (equations as on slide 30)

The ABC* approximation is formulated on $\rho$-space.

SLIDE 33

ABC* indirect inference (equations as on slide 30)

The ABC* approximation is formulated on $\rho$-space; match it to the true posterior through calibration of the ABC tolerances and m.

SLIDE 34

ABC* indirect inference (equations as on slide 30)

Regularity conditions on the link function:

A3 the link function is bijective and continuously differentiable
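When the link is only available as a function, the Jacobian factor $|\partial L(\theta)|$ in the posterior identity can be approximated numerically. A finite-difference sketch for a two-dimensional link (an illustration, not the authors' implementation; `jacobian_abs_det` is a hypothetical helper):

```python
def jacobian_abs_det(link, theta, eps=1e-6):
    """|det dL(theta)| of a 2-D link function, by central finite differences.
    A numerical stand-in for the |dL(theta)| factor; requires the link to be
    continuously differentiable at theta (condition A3)."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        tp, tm = list(theta), list(theta)
        tp[j] += eps
        tm[j] -= eps
        fp, fm = link(tp), link(tm)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * eps)
    return abs(J[0][0] * J[1][1] - J[0][1] * J[1][0])

# Sanity check on a linear map whose Jacobian determinant (2 * 3 = 6) is known.
det = jacobian_abs_det(lambda t: (2.0 * t[0], 3.0 * t[1]), [1.0, 1.0])
print(det)
```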

SLIDE 35

Example: moving average

no sufficient statistics other than the data; simple enough so that the link function is analytically known:

$$x_t = u_t + a u_{t-1}, \quad u_t \sim N(0, \sigma^2), \quad \theta = (a, \sigma^2)$$

$$\nu_1 = (1 + a^2)\sigma^2, \quad \nu_2 = a/(1 + a^2)$$

$$\rho_1 = (1 + a^2)\sigma^2 / \hat\nu_{x1}, \quad \rho_2 = \operatorname{atanh}\!\big(a/(1 + a^2)\big) - \operatorname{atanh}(\hat\nu_{x2})$$
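The MA(1) link function is simple enough to code directly. A sketch (function name and argument layout are my own; at θ matching the observed auxiliary estimates, the error parameters sit at the point of equality $(\rho_1, \rho_2) = (1, 0)$):

```python
import math

def ma1_link(a, sigma2, nu_x1, nu_x2):
    """Link for the MA(1) model x_t = u_t + a*u_{t-1}, u_t ~ N(0, sigma2):
    maps theta = (a, sigma2) to error parameters (rho1, rho2), given the
    observed auxiliary estimates nu_x1 (variance) and nu_x2 (lag-1 autocorrelation)."""
    nu1 = (1.0 + a * a) * sigma2   # marginal variance of x_t
    nu2 = a / (1.0 + a * a)        # lag-1 autocorrelation of x_t; injective in a for |a| < 1
    rho1 = nu1 / nu_x1
    rho2 = math.atanh(nu2) - math.atanh(nu_x2)
    return rho1, rho2

# At theta matching the observed auxiliary values: point of equality (1, 0).
a, sigma2 = 0.5, 2.0
print(ma1_link(a, sigma2, (1 + a * a) * sigma2, a / (1 + a * a)))
```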

SLIDE 36

Example: moving average (verbatim repeat of slide 35)

SLIDE 37

Example: moving average (verbatim repeat of slide 35)

SLIDE 38

Example: moving average (formulas as on slide 35)

[Figure: posterior contours over $(a, \sigma^2)$; exact posterior shown for reference]

Testing only variance: link not bijective. Testing variance and autocorrelation with even values: summary values not sufficient.

SLIDE 39

Example: moving average (verbatim repeat of slide 38)

SLIDE 40

Example: moving average (formulas as on slide 35)

[Figure: posterior contours over $(a, \sigma^2)$; exact posterior shown for reference]

With 5 tests: link bijective and summary values sufficient.

SLIDE 41

Example: flu time series data

stochastic transmission model, derived from ODEs; three parameters of interest: reproductive number R0, duration of immunity, reporting rate; 6 sets of iid summary values from 3 time series, subsetting odd and even values

SLIDE 42

Example: flu time series data (verbatim repeat of slide 41)

SLIDE 43

Example: flu time series data

Test if the link is bijective from the ABC* output.

[Figure: previous standard MCMC ABC vs MCMC ABC* with calibrated tolerances]

SLIDE 44

Example: flu time series data (verbatim repeat of slide 43)

SLIDE 45

Example: flu time series data (verbatim repeat of slide 43)

SLIDE 46

Conclusions

Using statistical decision theory in ABC,

  • we can entirely avoid previous asymptotic arguments
  • and construct accurate ABC algorithms by calibrating the decision tests appropriately

It is necessary to understand the distribution of the data on a summary level: identifying replicate structures and modeling them is key in ABC, as in any approach for which the likelihood is tractable.

SLIDE 47

Thank you

Co-workers on this project: Anton Camacho (London School of Hygiene & Tropical Medicine, UK), Adam Meijer (National Institute of the Environment & Public Health, NL), Gé Donker (Netherlands Institute for Health Services Research, NL). Acknowledgements: Ioanna Manolopoulou (University College London), Christian Robert (Paris Dauphine).
