

SLIDE 1

Towards a robust vision of geometric inference

Claire Brécheteau

Université Paris-Sud 11, Laboratoire de Mathématiques d'Orsay and Inria Select team; Inria Saclay, DataShape team. Under the supervision of Pascal Massart (Université Paris-Sud 11) and Frédéric Chazal (Inria Saclay)

September 24, 2018

Claire Brécheteau Towards a robust vision of geometric inference

SLIDE 2

Geometric Inference: Recover geometric information from a point cloud sampled around some shape.

SLIDE 3

Geometric Inference: Recover geometric information from a point cloud sampled around some shape.

Global setting:
(X, δ): a metric space; P: a probability distribution supported on X. That is, (X, δ, P) is a metric-measure (mm-)space.
Q: a probability distribution (close to P in some sense).
𝕏_n = {X_1, X_2, ..., X_n}: an n-sample from Q.

SLIDE 4

Robustness:

1. Robustness to outliers, or trimming: getting rid of a proportion 1 − η of the probability mass (resp. of the data points).


SLIDE 7

Robustness:

1. Robustness to outliers, or trimming: getting rid of a proportion 1 − η of the probability mass (resp. of the data points).

t*_η ∈ argmin_t inf_{ηP̃ ≤ P} P̃γ(t, ·)

P_η such that: inf_{ηP̃ ≤ P} P̃γ(t*_η, ·) = P_η γ(t*_η, ·)

2. Stability (e.g. with respect to a Wasserstein metric W_p): a small W_p(P, Q) means roughly the same geometric information in P and Q.

SLIDE 8

Towards an implementable robust vision of geometric inference


SLIDE 11

Main questions

How to compare two datasets?
How to make clusters from a dataset?
How to infer the distance to a compact set, with a fixed budget?

SLIDE 12

A multifunction tool: the distance-to-measure function


SLIDE 14

A definition for the DTM

δ_{P,h}(x) = inf { r > 0 | P(B(x, r)) > h }

The distance-to-measure (DTM) [Chazal, Cohen-Steiner, Mérigot 09] is defined for all x ∈ X and h ∈ [0, 1] by:

d_{P,h}(x) = (1/h) ∫₀ʰ δ_{P,l}(x) dl

SLIDE 15

A definition for the DTM

δ_{P,h}(x) = inf { r > 0 | P(B(x, r)) > h }

The distance-to-measure (DTM) [Chazal, Cohen-Steiner, Mérigot 09] is defined for all x ∈ X and h ∈ [0, 1] by:

d^(p)_{P,h}(x) = ( (1/h) ∫₀ʰ δ^p_{P,l}(x) dl )^(1/p)


SLIDE 17

The DTM for stable geometric inference

1. When h = 0, d_{P,0} = d_X.

2. ‖d^(p)_{P,h} − d^(p)_{Q,h}‖_∞ ≤ h^(−1/p) W_p(P, Q) [Chazal, Cohen-Steiner, Mérigot 09].

[Figure: three plots over [−0.5, 1.5] × [0, 0.6] — the distance to X, the distance to 𝕏_n, and the DTM with h = 0.2.]


SLIDE 19

The DTM contains information

Theorem (Brécheteau)
P_O (the uniform distribution on O) can be recovered from d_{P_O,h}, provided that h is small enough and O is regular enough.

Theorem (Brécheteau)
P can be recovered from (d_{P,h})_{h∈[0,1]}, provided that (X, δ) = (ℝ^d, ‖·‖).


SLIDE 21

The DTM, an implementable tool

1. κ = nh
2. Empirical distribution P_n = (1/n) Σᵢ₌₁ⁿ δ_{X_i}
3. X^(1), X^(2), ..., X^(κ): the κ nearest neighbours of x in 𝕏_n

d_{P_n,h}(x) = (1/κ) Σᵢ₌₁^κ δ(X^(i), x)

Easy implementation of the DTM at a point x in practice!

SLIDE 22

The DTM, an implementable tool

1. κ = nh
2. Empirical distribution P_n = (1/n) Σᵢ₌₁ⁿ δ_{X_i}
3. X^(1), X^(2), ..., X^(κ): the κ nearest neighbours of x in 𝕏_n

d^(p)_{P_n,h}(x) = ( (1/κ) Σᵢ₌₁^κ δ^p(X^(i), x) )^(1/p)

Easy implementation of the DTM at a point x in practice!
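The k-nearest-neighbour recipe above can be sketched in a few lines of Python (a minimal illustration, not the author's code; the function name `empirical_dtm` and the `ceil` rounding of κ = nh are my choices):

```python
import numpy as np

def empirical_dtm(points, x, h, p=2):
    """Empirical DTM d^(p)_{P_n,h}(x): average the p-th powers of the
    distances from x to its kappa = ceil(n*h) nearest neighbours in the
    cloud, then take the p-th root."""
    n = len(points)
    kappa = max(1, int(np.ceil(n * h)))            # kappa = n*h, rounded up
    dists = np.sort(np.linalg.norm(points - x, axis=1))
    return float(np.mean(dists[:kappa] ** p) ** (1.0 / p))

# toy cloud on a line: the two nearest neighbours of the origin are at 0 and 1
cloud = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
print(empirical_dtm(cloud, np.array([0.0, 0.0]), h=0.5))  # sqrt((0+1)/2) ≈ 0.707
```

Sorting all distances is O(n log n) per query; a k-d tree would be used for large clouds, but the formula itself is exactly the slide's average over the κ nearest neighbours.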


SLIDE 23

SLIDE 24

1) A statistical test of isomorphism between mm-spaces


SLIDE 26

A statistical test of isomorphism between mm-spaces

Two mm-spaces (X, δ, P) and (Y, δ′, P′) are isomorphic [Gromov 81] if:
∃φ: X → Y, a one-to-one isometry, such that for every Borel set A, P′(φ(A)) = P(A).

How to build a test of level α > 0 for the null hypothesis
H0: "(X, δ, P) and (Y, δ′, P′) are isomorphic"
vs
H1: "(X, δ, P) and (Y, δ′, P′) are not isomorphic"?

SLIDE 27

From the Gromov-Wasserstein distance to the DTM-signature

The Gromov-Wasserstein distance [Mémoli 10] GW is a metric such that GW((X, δ, P), (Y, δ′, P′)) = 0 iff the mm-spaces are isomorphic. Its computational cost is too high.

Definition. The DTM-signature d_{P,h}(P) is the distribution of d_{P,h}(X) when X ∼ P.

Theorem (Brécheteau)
W₁( d_{P,h}(P), d_{P′,h}(P′) ) ≤ (1/h) GW(X, Y)
slide-28
SLIDE 28

Bootstrap approximation

Definition Defined by dPN,h(Pn) with PN from (X1, X2,...,XN ) and Pn from (X1,X2,...,Xn). Statistic :

T =

  • nW1
  • dPN,h (Pn),dP′N,h
  • P′n
  • Subsampling distribution :

L ∗ (P) = L ∗ nW1

  • dPN,h
  • P∗

n

  • ,dPN,h
  • P∗

n ′

|PN

  • Under hypothesis H0, L (T ) is approximated with L ∗ = 1

2L ∗ (P)+ 1 2L ∗

P′.

Claire Brécheteau Towards a robust vision of geometric inference

SLIDE 29

Bootstrap approximation

Definition. The empirical DTM-signature is d_{P_N,h}(P_n), with P_N built from (X_1, X_2, ..., X_N) and P_n from (X_1, X_2, ..., X_n).

Statistic:
T = √n W₁( d_{P_N,h}(P_n), d_{P′_N,h}(P′_n) )

Subsampling distribution:
L*(P) = L( √n W₁( d_{P_N,h}(P*_n), d_{P_N,h}(P*′_n) ) | P_N )

Under the hypothesis H0, L(T) is approximated by L* = ½ L*(P) + ½ L*(P′).

[Figure: cdfs of L(T) and L*(P) (Bunny); x-axis: Wasserstein distance between DTM-signatures. N = 10000, n = 100, h = 0.1.]
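The signature comparison above can be sketched as follows (my own minimal version: p = 2 DTM, and the 1-D W₁ between two equal-size samples computed as the mean absolute difference of sorted values; the isometric copy checks the invariance, not the bootstrap calibration):

```python
import numpy as np

def dtm_values(cloud, queries, h):
    # empirical DTM (p = 2) of `cloud`, evaluated at each query point
    kappa = max(1, int(np.ceil(len(cloud) * h)))
    sig = []
    for x in queries:
        d = np.sort(np.linalg.norm(cloud - x, axis=1))
        sig.append(np.sqrt(np.mean(d[:kappa] ** 2)))
    return np.array(sig)

def w1_1d(a, b):
    # Wasserstein-1 between two equal-size 1-D samples
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                 # stands in for a sample from P
R = np.array([[0.0, -1.0], [1.0, 0.0]])       # a rotation: an isometry of R^2
Y = X @ R                                     # isomorphic copy of the mm-space
idx = rng.choice(500, size=50, replace=False)
sig_X = dtm_values(X, X[idx], h=0.1)
sig_Y = dtm_values(Y, Y[idx], h=0.1)
print(w1_1d(sig_X, sig_Y))  # ≈ 0: the DTM-signature is an isomorphism invariant
```

The test statistic T would then be √n times this W₁ value, with the quantile calibrated on subsampled signatures as in the slide.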


SLIDE 30

The error of type I

Test: φ_{N,n,h} = 1{T ≥ q̂_{α,N,n,h}}, with q̂_{α,N,n,h} the α-quantile of L*.

Theorem (Brécheteau)
If P is supported on a compact subset of ℝ^d; L(‖𝔾_{P,h} − 𝔾′_{P,h}‖₁) is atomless; n ∼ N^{1/ρ}; and, in the general case, ρ > max{d, 2}/2, or, in the (a,b)-standard case, ρ > 1, then
P^{(P,P)}(φ_{N,n,h}) → α, when N → ∞.

Here 𝔾_{P,h} and 𝔾′_{P,h} are independent Gaussian processes with covariance kernel κ(s, t) = F_{d_{P,h}(P)}(s) (1 − F_{d_{P,h}(P)}(t)) for s ≤ t.

SLIDE 31

The error of type II

n ∼ N^{1/ρ} with ρ > 1; X, Y: compact subsets of ℝ^d.

Theorem (Brécheteau)
There is n_{P,P′} such that for all n ≥ n_{P,P′},
P^{(P,P′)}(1 − φ_{N,n,h}) ≤ 4 exp( − n W₁²( d_{P,h}(P), d_{P′,h}(P′) ) / (3 max{Diam²(P), Diam²(P′)}) )

SLIDE 32

Experiments

N = 2000 points; α = 0.05, h = 0.05, n = 20. Comparison to the spiral with shape parameter 10 (grey).

Spiral shape parameter | 15    | 20    | 30    | 40    | 100
Type I error, DTM      | 0.050 | 0.049 | 0.051 | 0.044 | 0.051
Type II error, DTM     | 0.475 | 0.116 | 0.013 | 0.023 | 0.015
Type II error, KS      | 0.232 | 0.598 | 0.535 | 0.586 | 0.578

Type I and type II error approximations

SLIDE 33

2) Bregman trimmed clustering

SLIDE 34

The DTM, a tool for trimming

P_{x,h}: restriction of P to the ball around x of P-mass h.

d²_{P,h}(x) = P_{x,h} ‖· − x‖² = inf_{hP̃ ≤ P} P̃γ(x, ·), with γ(x, ·) = ‖· − x‖².

For the DTM: 1 − h ↔ 1 − η.
d²_{P,η}: x ↦ inf_{ηP̃ ≤ P} P̃ ‖· − x‖²

Minimizer x*: the trimmed barycenter.
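For the empirical measure, the trimmed barycenter can be approximated by a simple alternating scheme (a sketch with my own naming; this Lloyd-type fixed-point iteration is a standard heuristic and is not guaranteed to reach the global minimizer):

```python
import numpy as np

def trimmed_barycenter(points, eta, n_iter=50):
    """Approximate argmin_x inf_{eta*P~ <= P_n} P~ ||. - x||^2:
    alternately keep the ceil(eta*n) points nearest to the current
    centre, then move the centre to their mean."""
    keep = max(1, int(np.ceil(eta * len(points))))
    x = points.mean(axis=0)                       # start from the plain mean
    for _ in range(n_iter):
        idx = np.argsort(np.linalg.norm(points - x, axis=1))[:keep]
        x = points[idx].mean(axis=0)
    return x

# a far outlier is trimmed away when eta < 1
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [100.0, 100.0]])
print(trimmed_barycenter(pts, eta=0.75))  # ≈ mean of the 3 inliers, (1/3, 1/3)
```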



SLIDE 36

The DTM, a tool for trimming k-means

Codebook c = (c₁, c₂, ..., c_k); γ(c, ·) = min_{j∈1..k} ‖· − c_j‖²; B(x, r) ↔ ∪_{i∈1..k} B(c_i, r).

d²_{P,η}: c ↦ inf_{ηP̃ ≤ P} P̃ min_{j∈1..k} ‖· − c_j‖²

Minimizer c*: the optimal codebook for trimmed k-means [Cuesta-Albertos et al. 97].


SLIDE 38

An example of trimmed clustering with a Bregman divergence

Ω ⊂ ℝ^d a convex set, φ: Ω → ℝ strictly convex and C¹.
Bregman divergence: d_φ(x, y) = φ(x) − φ(y) − ⟨∇_y φ, x − y⟩ for all x, y ∈ Ω.

Poisson distribution:
−log p_θ(x) = −log( (θ^x / x!) e^{−θ} ) = d_φ(x, θ) + C(x),
for φ(x) = x log(x) − x, and d_φ(x, c) = x log(x/c) − (x − c).
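The Poisson identity above is easy to check numerically (a sketch; here C(x) = log x! − x log x + x collects the terms that do not depend on θ):

```python
import math

def d_phi(x, c):
    # Bregman divergence generated by phi(t) = t*log(t) - t
    return x * math.log(x / c) - (x - c)

def neg_log_poisson(x, theta):
    # -log p_theta(x) for the Poisson density p_theta(x) = theta^x e^{-theta} / x!
    return -(x * math.log(theta) - theta - math.lgamma(x + 1))

x, theta = 5, 3.5
C = math.lgamma(x + 1) - x * math.log(x) + x   # depends on x only
print(abs(neg_log_poisson(x, theta) - (d_phi(x, theta) + C)))  # ≈ 0
```

Since C(x) does not involve θ, minimizing the trimmed negative log-likelihood over θ is the same as minimizing the trimmed Bregman divergence.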


SLIDE 39

Bregman trimmed clustering

Definition. The Bregman h-trimmed variation given c, or Bregman-divergence-to-measure, is:
d_{φ,P,η}(c) = inf_{ηP̃ ≤ P} P̃ min_{j∈1..k} d_φ(·, c_j)

Definition. A Bregman h-trimmed k-optimal codebook c* is any minimizer c of the criterion d_{φ,P,η}(c).

Theorem (Brécheteau, Fischer and Levrard)
Assume that φ is C² and strictly convex, and that F₀ = Conv(Supp(P)) ⊂ Ω. Then the minimum c* exists.
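A Lloyd-type sketch of the trimmed criterion with the Poisson divergence (my own minimal implementation, not necessarily the authors' algorithm; the quantile initialization and iteration count are arbitrary choices, and the scheme only finds a local minimizer):

```python
import numpy as np

def poisson_div(x, c):
    # Bregman divergence for phi(t) = t*log(t) - t (requires x, c > 0)
    return x * np.log(x / c) - (x - c)

def bregman_trimmed_kmeans(data, k, eta, n_iter=30):
    """Alternate three steps: assign each point to the centre with the
    smallest divergence, trim the (1-eta) fraction with the largest
    divergence, and update each centre to the mean of its kept points
    (the Bregman centroid is always the arithmetic mean)."""
    keep = max(k, int(np.ceil(eta * len(data))))
    centers = np.quantile(data, np.linspace(0.25, 0.75, k))  # crude init
    for _ in range(n_iter):
        div = np.array([[poisson_div(x, c) for c in centers] for x in data])
        labels, best = div.argmin(axis=1), div.min(axis=1)
        kept = np.argsort(best)[:keep]                        # trimming step
        for j in range(k):
            pts = data[kept][labels[kept] == j]
            if len(pts) > 0:
                centers[j] = pts.mean()
    return np.sort(centers)

data = np.concatenate([np.full(20, 2.0), np.full(20, 40.0), [500.0]])
print(bregman_trimmed_kmeans(data, k=2, eta=0.95))  # the outlier 500 is trimmed
```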


SLIDE 40

Some theory

ĉ_n: a minimizer of d_{φ,P_n,η}.

Theorem (Brécheteau, Fischer and Levrard)
If P is continuous, P‖·‖^p < ∞ for some p > 2, φ is C² on Ω, F₀ = Conv(Supp(P)) ⊂ Ω°, and c* is the unique minimizer of d_{φ,P,η}, then:
lim_{n→+∞} dist(ĉ_n, c*) = 0 a.s. and lim_{n→+∞} d_{φ,P,η}(ĉ_n) = d_{φ,P,η}(c*) a.s.

This convergence holds at the parametric rate 1/√n:

Theorem (Brécheteau, Fischer and Levrard)
Assume that P‖·‖^p < ∞. Then, for n large enough, with probability greater than 1 − n^{−p/2} − 2e^{−x}, we have
d_{φ,P,η}(ĉ_n) − d_{φ,P,η}(c*) ≤ (C_P / (η√n)) (1 + √x).

SLIDE 41

[Figure: clusterings associated with the selected parameter η, in dimension 2, for Gaussian, Poisson, Binomial and Gamma mixtures.]

SLIDE 42

3) Distance to a compact set inference, with a quantization point of view

SLIDE 43

How to characterise a probability distribution at best with a fixed budget?

Given P, Q or 𝕏_n: find c and ω such that the k-power function
x ↦ min_{j∈1..k} ‖x − c_j‖² + ω_j²
is a good approximation of the squared distance to X,
d²_X: x ↦ min_{y∈X} ‖x − y‖².

SLIDE 44

What about using (trimmed) k-means for the quantization problem?

Trimmed k-means does not work...


SLIDE 46

The DTM, an alternative definition as a power distance when p = 2 [Chazal, Cohen-Steiner, Mérigot 09]

d²_{P,h}(x) = P_{x,h} ‖· − x‖²
            = inf_{hP̃ ≤ P} P̃ ‖· − x‖²
            = ‖m(P_{x,h}) − x‖² + v(P_{x,h})
            = inf_{hP̃ ≤ P} ‖m(P̃) − x‖² + v(P̃)

Notation: mean m(P), variance v(P).

Sublevel sets of the DTM: unions of balls. Approximation: the κ-witnessed distance [Guibas, Morozov, Mérigot 11], n balls.
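For an empirical measure, P_{x,h} is the uniform measure on the κ = nh nearest neighbours of x, and the decomposition above is exactly the bias-variance identity; a quick numerical check (names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3))
x = np.array([0.5, -0.2, 1.0])
kappa = 20                                    # kappa = n*h with h = 0.2

# P_{x,h}: uniform measure on the kappa nearest neighbours of x
nn = cloud[np.argsort(np.linalg.norm(cloud - x, axis=1))[:kappa]]

lhs = np.mean(np.sum((nn - x) ** 2, axis=1))  # P_{x,h} ||. - x||^2
m = nn.mean(axis=0)                           # m(P_{x,h})
v = np.mean(np.sum((nn - m) ** 2, axis=1))    # v(P_{x,h})
rhs = np.sum((m - x) ** 2) + v                # power-distance form
print(abs(lhs - rhs))  # ≈ 0
```

This is the identity that turns the DTM into a power distance: each point x only needs the pair (m(P_{x,h}), v(P_{x,h})).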


SLIDE 47

A measure-dependent Bregman divergence

Set φ_{P,h} the function defined on ℝ^d by
φ_{P,h}: x ↦ ‖x‖² − d²_{P,h}(x).    (1)

[Chazal, Cohen-Steiner, Mérigot 09] The map φ_{P,h} is convex.

The Bregman divergence associated with φ_{P,h} satisfies, for x, t ∈ ℝ^d:
d_{φ_{P,h}}(x, t) = ‖x − m(P_{t,h})‖² + v(P_{t,h}) − d²_{P,h}(x)

min_{j∈1..k} d_{φ_{P,h}}(·, t_j) = ( min_{j∈1..k} ‖x − m(P_{t_j,h})‖² + v(P_{t_j,h}) ) − d²_{P,h}(x)

Bregman clustering with d_{φ_{P,h}}! Remark: the theory covers 1 − η = 0 (in practice, 1 − η ∈ [0, 1)).

SLIDE 48

Bregman clustering with d_{φ_{P,h}}, or the k-PDTM

t* ∈ argmin_t P min_{j∈1..k} d_{φ_{P,h}}(·, t_j)
   = argmin_t P min_{j∈1..k} ‖· − m(P_{t_j,h})‖² + v(P_{t_j,h}) − d²_{P,h}(·)

Definition. The k-power distance-to-measure (k-PDTM) d_{P,h,k} is defined for x ∈ ℝ^d by:
d²_{P,h,k}(x) = min_{j∈1..k} ‖x − m(P_{t*_j,h})‖² + v(P_{t*_j,h})
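The optimal codebook t* can be approximated by a Lloyd-type iteration; the sketch below is my own minimal version (a heuristic, not necessarily the authors' algorithm; the toy circle and function names are illustrative, and the scheme only reaches a local minimum):

```python
import numpy as np

def local_mean_var(cloud, t, kappa):
    # m(P_{t,h}) and v(P_{t,h}) for the uniform measure on the kappa
    # nearest neighbours of t in the cloud
    nn = cloud[np.argsort(np.linalg.norm(cloud - t, axis=1))[:kappa]]
    m = nn.mean(axis=0)
    return m, float(np.mean(np.sum((nn - m) ** 2, axis=1)))

def k_pdtm(cloud, query, k, h, n_iter=20, seed=0):
    """Sketch of the k-PDTM: each centre t_j carries the weighted ball
    (m(P_{t_j,h}), v(P_{t_j,h})); points join the cell with the smallest
    power ||x - m_j||^2 + v_j; t_j is reset to its cell's mean."""
    rng = np.random.default_rng(seed)
    kappa = max(1, int(np.ceil(len(cloud) * h)))
    t = cloud[rng.choice(len(cloud), size=k, replace=False)].copy()
    for _ in range(n_iter):
        mv = [local_mean_var(cloud, tj, kappa) for tj in t]
        power = np.stack([np.sum((cloud - m) ** 2, axis=1) + v for m, v in mv])
        labels = power.argmin(axis=0)
        for j in range(k):
            if np.any(labels == j):
                t[j] = cloud[labels == j].mean(axis=0)
    mv = [local_mean_var(cloud, tj, kappa) for tj in t]
    return float(np.sqrt(min(np.sum((query - m) ** 2) + v for m, v in mv)))

angles = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(angles), np.sin(angles)])
print(k_pdtm(circle, np.array([0.0, 0.0]), k=10, h=0.05))  # ≈ 1: distance to the circle
```

At the origin the power of every cell equals the mean squared distance from the origin to κ points of the unit circle, i.e. exactly 1, which is the true distance to the support.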


SLIDE 49

Graphical representation for the k-PDTM

ω(c) = inf { ω > 0 | ∀x ∈ ℝ^d, ‖x − c‖² + ω² ≥ d²_{P,h}(x) }

Theorem (Brécheteau and Levrard)
d²_{P,h,k}(x) = min_{j∈1..k} ‖x − c*_j‖² + ω²(c*_j)
for c* ∈ argmin_c P min_{j∈1..k} ‖· − c_j‖² + ω²(c_j).

[Figure: two plots over [−0.5, 1.5] × [0, 0.6] — the k-PDTM with k = 2 centres and with k = 10 centres.]

SLIDE 50

Wasserstein stability for the k-PDTM

Proposition
If Supp(P) ⊂ B(0, K) and Q‖·‖ < ∞, then P( d²_{Q,h,k}(·) − d²_{P,h}(·) ) is bounded from above by
3 ‖d²_{Q,h} − d²_{P,h}‖_{∞,B(0,K)} + P( d²_{P,h,k}(·) − d²_{P,h}(·) ) + 4 W₁(P, Q) sup_{s∈ℝ^d} ‖m(P_{s,h})‖,
with P( d²_{P,h,k}(·) − d²_{P,h}(·) ) of order k^{−2/d′} for a "d′-dimensional distribution".


SLIDE 53

Approximation of the k-PDTM from point clouds

Supp(P) = X ⊂ B(0, K); X_i = Y_i + Z_i, with the Y_i and Z_i all independent, Y_i ∼ P, and Z_i sub-Gaussian with variance σ² ≤ K²; Q_n = (1/n) Σᵢ₌₁ⁿ δ_{X_i}.

Theorem (Brécheteau and Levrard)
For every p > 0, with probability larger than 1 − 10 n^{−p}, we have
| P( d²_{Q_n,h,k}(·) − d²_{Q,h,k}(·) ) | ≤ C √(kd) K² ((p + 1) log n)^{3/2} / (h√n) + C K σ / √h.

Optimize in k the quantity
C √k K² ((p + 1) log n)^{3/2} / (h√n) + C_{P,h} k^{−2/d′}.

Optimal choice: k ∼ n^{d′/(d′+4)}.

SLIDE 54

Geometric inference with the k-PDTM

Theorem (Brécheteau and Levrard)
Assumption: ∀x ∈ X, P(B(x, r)) ≥ C(P) r^{d′} ∧ 1. Set Δ²_P = P d²_{Q,h,k}(·). Then,
sup_{x∈ℝ^d} |d_{Q,h,k}(x) − d_X(x)| ≤ C(P)^{−1/(d′+2)} Δ_P^{2/(d′+2)} + 2Δ_P + W₂(P, Q) h^{−1/2}.

SLIDE 55

Numerical Illustrations

[Figures: the k-PDTM (η = 1) and the trimmed k-PDTM (η < 1).]


SLIDE 57

Summing up

Method                       | New tool
Isomorphism test             | DTM-signature d_{P,h}(P)
Bregman clustering           | Bregman divergence-to-measure c ↦ d_{φ,P,η}(c)
Quantization & d_X inference | k-PDTM x ↦ d_{P,h,k}(x)

Future work: non-asymptotic statistics for studying
t̂_η ∈ argmin_t inf_{ηP̃_n ≤ P_n} P̃_n γ(t, ·)

SLIDE 58

Thank you!

SLIDE 59

The DTM and the trimmed log-likelihood

F = {θ; P_θ with density p_θ}; γ(θ, ·) = −log(p_θ(·)); B(x, r) ↔ an upper-level set of p_θ.

d^F_{P,η}: θ ↦ inf_{ηP̃ ≤ P} P̃ (−log(p_θ(·)))

Minimizer θ*: the trimmed log-likelihood maximizer.

SLIDE 60

The DTM and the trimmed log-likelihood

θ = (θ₁, θ₂, ..., θ_k); γ(θ, ·) = min_{j∈1..k} −log(p_{θ_j}(·)); B(x, r) ↔ a union of upper-level sets of the p_{θ_j}'s.

d^F_{P,η}: θ ↦ inf_{ηP̃ ≤ P} P̃ min_{j∈1..k} −log(p_{θ_j}(·))

Minimizer θ*: the optimal codebook for some trimmed clustering...

SLIDE 61

Experiments

SLIDE 62

Robust heteroscedastic Gaussian clustering