

SLIDE 1

Scaling limits of non-increasing Markov chains and applications to random trees and coalescents

Bénédicte HAAS

Université Paris-Dauphine

based on joint works with Grégory MIERMONT (Orsay)

Bénédicte Haas SSP - March 2012 1 / 26

SLIDE 2

Outline

1. Introduction

2. Scaling limits of non-increasing Markov chains

3. Applications:
  • scaling limits of Markov branching trees
  • number of collisions in Λ-coalescents
  • random walks with barriers

SLIDE 3

Scaling limits: a basic example

i.i.d. sequence of centered random variables Xi ∈ {−1, 1}: −1, 1, 1, 1, 1, −1, −1, −1, 1, −1, −1, 1, 1, 1, 1, ...

Centered random walk: Sn = X1 + ... + Xn

How does Sn behave when n is large?
(1) What is the growth rate?
(2) What is the limit after rescaling?

Central limit theorem: Sn/√n → N(0, 1) in law.

Functional version (DONSKER's theorem, 1951):

(S[nt]/√n, t ∈ [0, 1]) → (B(t), t ∈ [0, 1]) in law, where B is a standard Brownian motion.
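The CLT statement is easy to check numerically. A minimal sketch (mine, not from the talk; the sample sizes are arbitrary): simulate many independent copies of Sn/√n for ±1 steps and compare the empirical mean and variance with those of N(0, 1).

```python
import random

def rescaled_endpoint(n, rng):
    """S_n / sqrt(n) for a centered +/-1 random walk of n steps."""
    s = sum(rng.choice((-1, 1)) for _ in range(n))
    return s / n ** 0.5

rng = random.Random(0)
samples = [rescaled_endpoint(1000, rng) for _ in range(1500)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples) - mean ** 2
# by the CLT, the empirical mean and variance should be close to 0 and 1
print(round(mean, 3), round(var, 3))
```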

SLIDE 4

Another example: Galton-Watson trees

Galton-Watson processes were introduced in 1873 to study the extinction of family names.

[Figure: a Galton-Watson tree; the ancestor is the root, with generations 1, 2, 3 above it]

η: offspring distribution (a probability on Z+ = {0, 1, 2, ...}) with η(1) < 1 and mean m.

Extinction probability:
  • = 1 in the subcritical (m < 1) and critical (m = 1) cases
  • ∈ [0, 1) in the supercritical case (m > 1)
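The trichotomy can be illustrated by simulation. A hedged sketch (my choice of offspring laws, not from the talk): Geometric(1/2) is critical (m = 1), while the law giving 2 children with probability 3/4 and none otherwise is supercritical with m = 3/2 and extinction probability q = 1/3, the smallest root of q = 1/4 + (3/4)q². The horizon and population cap are arbitrary cut-offs.

```python
import random

def geometric_half(rng):
    """Critical offspring law: Geometric(1/2) on {0, 1, 2, ...}, mean m = 1."""
    k = 0
    while rng.random() < 0.5:
        k += 1
    return k

def two_or_nothing(rng):
    """Supercritical offspring law: 2 children w.p. 3/4, else 0; mean m = 3/2."""
    return 2 if rng.random() < 0.75 else 0

def dies_out(offspring, rng, generations=100, cap=1000):
    """One Galton-Watson population; True if it is extinct within the horizon."""
    z = 1
    for _ in range(generations):
        z = sum(offspring(rng) for _ in range(z))
        if z == 0:
            return True
        if z > cap:          # effectively escaped to infinity
            return False
    return False

rng = random.Random(1)
crit = sum(dies_out(geometric_half, rng) for _ in range(300)) / 300
supercrit = sum(dies_out(two_or_nothing, rng) for _ in range(300)) / 300
print(crit, supercrit)       # expect roughly 1 and 1/3
```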

SLIDE 5

Large Galton-Watson trees

T^GW_n: critical GW tree conditioned to have n nodes; offspring distribution η with finite variance σ² < ∞.

◮ H^(U)_n: height (= generation) of a node chosen uniformly at random:

  H^(U)_n/√n → R/σ in law, where R has a Rayleigh distribution, P(R > x) = exp(−x²/2) (MEIR & MOON 78)

◮ Hn: height of the tree:

  Hn/√n → 2W/σ in law, where W is the maximum of a Brownian excursion with length 1 (KOLCHIN 86)

With length 1 on each edge, what does the tree look like when n → ∞?

SLIDE 6

Universal limit: the Brownian tree TBr

(picture by G. Miermont)

ALDOUS 93: T^GW_n/√n → (2/σ)·TBr in law.

TBr is a compact (random) real tree, i.e. a compact metric space with the tree property: ∀x, y ∈ TBr there is a unique path from x to y. Almost surely, it is a binary tree, self-similar, with Hausdorff dimension 2.

SLIDE 7

Topology on the set of compact rooted real trees

Gromov-Hausdorff distance: let (T, ρ), (T′, ρ′) be two compact rooted real trees,

  dGH((T, ρ), (T′, ρ′)) := inf ( dH(ϕ1(T), ϕ2(T′)) ∨ dZ(ϕ1(ρ), ϕ2(ρ′)) ),

the infimum being over all isometric embeddings ϕ1 : T ↪ Z and ϕ2 : T′ ↪ Z into a common metric space (Z, dZ).

(T, ρ) and (T′, ρ′) are equivalent if there is an isometry ϕ with T′ = ϕ(T) and ρ′ = ϕ(ρ); dGH is a distance on the set of equivalence classes.

ALDOUS 93: T^GW_n/√n → (2/σ)·TBr in law, for the Gromov-Hausdorff topology, jointly with the convergence of the uniform probability measure on the nodes of T^GW_n towards a probability measure on the leaves of TBr.

SLIDE 8

Large Galton-Watson trees: when the variance is infinite

Assume: η(k) = P(to have k children) ∼ C·k^(−β) as k → ∞, with 1 < β < 2.

Then (DUQUESNE 03): T^GW_n/n^(1−1/β) → C^(−1/β)·Tβ in law, for the Gromov-Hausdorff topology.

◮ "smaller" trees
◮ the limiting tree Tβ belongs to the family of stable Lévy trees (introduced by Duquesne, Le Gall, Le Jan):
  • each branching vertex branches into a countably infinite number of subtrees
  • it is a self-similar tree, with Hausdorff dimension β/(β − 1)

SLIDE 9

Non-increasing Markov chains

(X(k), k ≥ 0): Z+-valued, non-increasing Markov chain.
(Xn(k), k ≥ 0): the chain starting from Xn(0) = n.

[Figure: a trajectory of Xn against k, absorbed at time An]

Absorption time: An = inf{i : Xn(i) = Xn(j) ∀j ≥ i}, assumed finite.

How do Xn(·)/n and An behave when n → ∞?
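The setting can be made concrete with a toy chain (my example, not from the talk): from state x the chain jumps down by Y ∧ x, where P(Y ≥ m) = m^(−γ), so that a jump of size ≥ nε from state n has probability (nε)^(−γ) ≍ n^(−γ), the "rare macroscopic jumps" regime of the next slide. The absorption time then grows like n^γ, which the sketch checks empirically.

```python
import random

def heavy_jump(gamma, rng):
    """Y >= 1 with exactly P(Y >= m) = m**(-gamma), by inversion."""
    u = 1.0 - rng.random()               # uniform on (0, 1]
    return int(u ** (-1.0 / gamma))

def absorption_time(n, gamma, rng):
    """Number of steps until the chain started at n is absorbed at 0."""
    x, steps = n, 0
    while x > 0:
        x = max(0, x - heavy_jump(gamma, rng))
        steps += 1
    return steps

rng = random.Random(2)
gamma = 0.5
a_small = sum(absorption_time(10**3, gamma, rng) for _ in range(300)) / 300
a_large = sum(absorption_time(10**5, gamma, rng) for _ in range(300)) / 300
# A_n grows like n**gamma, so the ratio should be near (10**5 / 10**3)**0.5 = 10
print(a_large / a_small)
```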

SLIDE 10

Scaling limit

Assumption (macroscopic jumps are rare): starting from n, the probability that the first jump is ≥ nε behaves like cε/n^γ as n → ∞, for some γ > 0 (cε increases as ε decreases).

More precisely, there is a finite measure µ on [0, 1] such that, for ε ∈ (0, 1],

  P(n − Xn(1) ≥ nε) ∼ (1/n^γ) ∫_[0,1−ε] µ(dx)/(1 − x), and E[(n − Xn(1))/n] ∼ µ([0, 1])/n^γ.

Theorem (H.-MIERMONT 11). Then there exists a continuous-time Markov process X∞ such that

  (Xn([n^γ t])/n, t ≥ 0) → (X∞(t), t ≥ 0) in law,

for the Skorokhod topology on D([0, ∞), [0, ∞)).

SLIDE 11

The limit process X∞ is:

◮ self-similar: starting from X∞(0) = x, the process (c·X∞(c^(−γ) t), t ≥ 0) is distributed as X∞ starting from X∞(0) = cx, for all c > 0

◮ starting from X∞(0) = 1, X∞ writes X∞(t) = exp(−ξρ(t)), where
  • ξ is a subordinator
  • E[exp(−λξt)] = exp(−t φ(λ)), with

    φ(λ) = µ({1})·λ + ∫_(0,1) (1 − x^λ) µ(dx)/(1 − x) + µ({0}), λ ≥ 0,

  • ρ is an acceleration of time:

    ρ(t) = inf{ u ≥ 0 : ∫_0^u exp(−γξr) dr ≥ t }
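The role of the time change ρ can be seen in one line. This is a standard verification, not spelled out on the slide, assuming Lamperti's representation of the process started from x, namely X^(x)(t) = x·exp(−ξ_{ρ(x^(−γ)t)}):

```latex
% Self-similarity from the Lamperti representation: for any c > 0,
c\,X^{(x)}(c^{-\gamma}t)
  = c\,x\,\exp\!\big(-\xi_{\rho(x^{-\gamma}c^{-\gamma}t)}\big)
  = c\,x\,\exp\!\big(-\xi_{\rho((cx)^{-\gamma}t)}\big)
  = X^{(cx)}(t),
```

which is exactly the self-similarity property stated in the first bullet.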

SLIDE 12

Absorption time

Consequently, X∞ starting from X∞(0) = 1 is absorbed at 0 at time ∫_0^∞ exp(−γξr) dr (almost surely finite).

Jointly with the previous convergence, we have

  An/n^γ → ∫_0^∞ exp(−γξr) dr in law.

This is not an immediate consequence of the convergence of the whole process!

Remark: these results extend to regular variation assumptions.

SLIDE 13

Application to Markov branching trees

(Tn, n ≥ 1): Tn a random rooted tree with n nodes.

Markov branching property:

[Figure: a tree Tn with n = 17 nodes; the root R branches into sub-trees with 9, 4 and 3 nodes, distributed as T9, T4 and T3 respectively]

Conditional on "the root of Tn branches in p sub-trees with n1 ≥ ... ≥ np nodes", these sub-trees are independent, with respective distributions those of Tn1, ..., Tnp.

A similar definition holds for sequences of trees indexed by the number of leaves.

Ex.: Galton-Watson trees conditioned to have n nodes (respectively, n leaves).

SLIDE 14

Markov branching trees

(Tn, n ≥ 1) Markov branching, indexed by leaves. What does it look like when n is large?

First step: height of a leaf chosen uniformly at random.

[Figure, animated over several slides: a tree with 9 leaves and a marked leaf; the sub-trees above generations 0, 1, 2, 3, 4 containing the marked leaf have sizes 9, 5, 3, 2, 1]

Xn(k): size of the sub-tree above generation k containing the marked leaf; here Xn(0) = 9, Xn(1) = 5, Xn(2) = 3, Xn(3) = 2, Xn(4) = 1.

It is a Markov chain! Its absorption time at 1 is the height of the marked leaf.

SLIDE 20

Scaling limits of Markov branching trees

Let qn(n1, ..., np) := P(the root of Tn branches in p sub-trees with n1 ≥ ... ≥ np leaves), and

  S↓ = { s1 ≥ s2 ≥ ... ≥ 0 : Σi si = 1 }.

• Assumption: for every bounded continuous f : S↓ → R,

  n^γ · Σ_{(n1,...,np) partition of n} qn(n1, ..., np) (1 − n1/n) f(n1/n, ..., np/n, 0, ...) → ∫_S↓ (1 − s1) f(s) ν(ds) as n → ∞,

with γ > 0 and ν a non-trivial σ-finite measure on S↓ such that ∫_S↓ (1 − s1) ν(ds) < ∞.

Informally: with probability ∼ 1, the root splits into one sub-tree of size ∼ n and sub-trees of size o(n); with probability ∼ ν(ds)/n^γ, it splits into sub-trees of sizes ∼ s1·n ≥ s2·n ≥ s3·n ≥ ...

SLIDE 21

Scaling limits of Markov branching trees

Theorem (H.-MIERMONT 11). Under the previous assumption, there exists a random compact real tree Tγ,ν such that

  Tn/n^γ → Tγ,ν in law, for the Gromov-Hausdorff topology,

jointly with the convergence of the uniform probability measure on the leaves of Tn towards a probability measure on the leaves of Tγ,ν.

A similar result holds for sequences of trees indexed by the number of nodes, with γ ∈ (0, 1].

Outline of proof:
1. height of a random leaf
2. scaling limit of the tree spanned by k random leaves (finite-dimensional convergence)
3. tightness criterion

SLIDE 22

Markov branching trees: the limiting tree

• Tγ,ν is self-similar, with Hausdorff dimension max(1, 1/γ) (H.-MIERMONT 04)

  γ: index of self-similarity; ν: distribution of the relative masses of the sub-trees above a branching vertex

• we recover the Brownian tree TBr when γ = 1/2 and ν(s1 + s2 < 1) = 0, with

  ν(s1 ∈ dx) = (√2/√π) · dx/(x^(3/2) (1 − x)^(3/2)), 1/2 < x < 1

SLIDE 23

Markov branching trees: applications

◮ we recover Aldous' result on critical Galton-Watson trees conditioned by their number of nodes, T^GW_n/√n → (2/σ)·TBr in law when σ² < ∞, and its extension by Duquesne to the infinite-variance case

◮ RIZZOLO 11: T^GW,L_n critical GW tree with finite variance σ², conditioned to have n leaves:

  T^GW,L_n/√n → (2/(σ√η(0)))·TBr in law, where η(0) is the probability of having no child (see also KORTCHEMSKI 11)

SLIDE 24

Markov branching trees: applications

◮ scaling limits of combinatorial trees

• T^(lab)_n: uniform among labelled rooted trees with n nodes:

  T^(lab)_n/√n → 2·TBr in law, since T^(lab)_n ∼ T^GW_n with a Poisson(1) offspring distribution (σ = 1)

• T^(ord)_n: uniform among ordered rooted trees with n nodes:

  T^(ord)_n/√n → √2·TBr in law, since T^(ord)_n ∼ T^GW_n with a Geometric(1/2) offspring distribution (σ² = 2)

• T^(P)_n: uniform among non-ordered, non-labelled rooted trees with n nodes.

  Problem: it is not a conditioned Galton-Watson tree.

Corollary (H.-MIERMONT 11): T^(P)_n/√n → cP·TBr in law

SLIDE 25

Markov branching trees: applications

◮ scaling limits of sequences of trees built recursively by adding edges one by one

Ex.: Ford's trees: fix a ∈ [0, 1]; T^(a)_n is binary with n leaves, built by induction: each leaf-edge carries weight 1 − a and each other edge carries weight a, and the new leaf-edge of T^(a)_{n+1} is grafted onto an edge of T^(a)_n chosen with probability proportional to its weight.

[Figure: the recursive construction T1, T2, ..., Tn → Tn+1, with weights a and 1 − a on the edges]

Then, for a ∈ (0, 1): T^(a)_n/n^a → Ta,νa in law.

When a = 1/2: the limiting tree is the Brownian tree!

SLIDE 26

Application to the number of collisions in Λ-coalescents

Goal: describe the ancestral history of a sample of n individuals (or particles, genes, DNA sequences) chosen in a large population; going back in time → coalescence.

PITMAN 99 and SAGITOV 99: let Λ be a finite measure on [0, 1].

Λn-coalescent: continuous-time Markov chain on the set of partitions of {1, ..., n}: among the i present blocks, j of them coalesce into a single block (leaving a total of i − j + 1 blocks) at rate

  gi,j = (i choose j) ∫_[0,1] x^(j−2) (1 − x)^(i−j) Λ(dx), 2 ≤ j ≤ i.

When Λ({0}) = 0: choose x ∼ x^(−2)Λ(dx), then a proportion x of the blocks coalesce.

Ex.:
• Λ = δ0: Kingman's coalescent (blocks coalesce 2 by 2, at rate 1)
• Λ(dx) = x^(a−1)(1 − x)^(b−1) dx, 0 < x < 1: Beta(a, b)-coalescent, a, b > 0

SLIDE 27

Number of collisions in Λ-coalescents

Starting from the trivial partition {{1}, {2}, ..., {n}}:

• Xn(k) := number of blocks in the coalescent after k collisions, k ≥ 0
• Xn is a non-increasing Z+-valued Markov chain with transition probabilities

  pi,j = gi,j/gi = (1/gi) · (i choose j) ∫_[0,1] x^(j−2)(1 − x)^(i−j) Λ(dx),

  where gi = Σ_{j=2}^i gi,j is the total coalescence rate of i blocks

• An = inf{k : Xn(k) = 1} = total number of collisions of the process
• Hyp.: Λ({0}) = 0 and there exists γ ∈ (0, 1) such that

  ∫_u^1 x^(−2) Λ(dx) ∼ u^(−γ)/C as u → 0

  (⇒ the probability of a jump ≥ nε is ∼ cε·n^(−γ))

SLIDE 28

Number of collisions in Λ-coalescents

Let ξ be a subordinator with Laplace exponent

  φ(λ) = (1/Γ(2 − γ)) ∫_0^1 (1 − (1 − x)^λ) x^(−2) Λ(dx),

and X∞ the associated self-similar Markov process with index γ.

Corollary (H.-MIERMONT 11). There is the joint convergence in distribution

  Xn([C n^γ ·])/n → X∞ and An/(C n^γ) → ∫_0^∞ exp(−γξr) dr.

Ex.: this holds for Beta-coalescents Λ(dx) = x^(a−1)(1 − x)^(b−1) dx, 0 < x < 1, when 1 < a < 2 and b > 0. Then An grows at speed n^(2−a) (previously proved by IKSANOV, MÖHLE 08 when b = 1).
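The block-counting chain can be simulated directly from the transition probabilities pi,j of the previous slide. A sketch under the Beta(a, b) choice (my parameters and sample sizes; not the talk's code), where gi,j reduces to a binomial coefficient times a Beta function, computed in log scale with lgamma for numerical stability; the n^(2−a) growth of An can then be checked empirically.

```python
import math
import random

def beta_collisions(n, a, b, rng):
    """Total number of collisions of a Beta(a, b)-coalescent from n blocks."""
    blocks, collisions = n, 0
    while blocks > 1:
        i = blocks
        # unnormalised log-weights of "j of the i blocks merge", j = 2, ..., i:
        #   g_{i,j} is proportional to C(i, j) * Beta(a + j - 2, b + i - j)
        logw = [
            math.lgamma(i + 1) - math.lgamma(j + 1) - math.lgamma(i - j + 1)
            + math.lgamma(a + j - 2) + math.lgamma(b + i - j)
            for j in range(2, i + 1)
        ]
        top = max(logw)
        weights = [math.exp(v - top) for v in logw]
        j = 2 + rng.choices(range(i - 1), weights=weights)[0]
        blocks = i - j + 1        # the j merged blocks become one
        collisions += 1
    return collisions

rng = random.Random(3)
a, b = 1.5, 1.0                   # 1 < a < 2, so A_n grows like n**(2 - a)
small = sum(beta_collisions(50, a, b, rng) for _ in range(25)) / 25
large = sum(beta_collisions(800, a, b, rng) for _ in range(25)) / 25
print(small, large)               # ratio should be near (800 / 50)**0.5 = 4
```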

SLIDE 29

Number of collisions in Beta-coalescents

For comparison, a list of previously known results:

• if 0 < a < 1, b > 0: the limit law of (An − (1 − a)n)/n^(1/(2−a)) is a (2 − a)-stable distribution (GNEDIN, YAKUBOVICH 07; IKSANOV, MÖHLE 08 when b = 1)

• if a = b = 1 (Bolthausen-Sznitman coalescent): the limit law of (log²(n)/n)·An − log(n log(n)) is a 1-stable distribution (IKSANOV, MÖHLE 07)

• if a = 2, b > 0: there exists c > 0 such that the limit law of (An − c·log²(n))/log^(3/2)(n) is a centered normal distribution (IKSANOV, MARYNYCH, MÖHLE 09)

• if a > 2, b > 0: there exists c > 0 such that the limit law of (An − c·log(n))/√(log(n)) is a centered normal distribution (GNEDIN, IKSANOV, MÖHLE 08)

SLIDE 30

Application to random walks with barriers

(Sk, k ≥ 0) Z+-valued random walk: Sk = Y1 + ... + Yk, with Yi i.i.d. ∈ Z+ and S0 = 0.

• behavior as n → ∞ of the walk killed at n, (Sk ∧ n, k ≥ 0)?
• behavior of the walk ignoring the jumps that would make it exceed n?

  (defined recursively by S̃k+1 = S̃k + Yk+1·1{S̃k + Yk+1 ≤ n})

In both cases, Xn(k) = n − Sk ∧ n and X̃n(k) = n − S̃k define non-increasing Z+-valued Markov chains (Xn is absorbed at 0, but not necessarily X̃n).
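The two constructions can be coupled by feeding them the same jumps Yk, which also makes the comparison Xn(k) ≤ X̃n(k) evident: the barrier walk skips some jumps, so S̃k ≤ Sk ∧ n. A sketch (mine, not from the talk) with jumps P(Y ≥ m) = m^(−γ), matching the hypothesis of the next slide; n, the number of steps and γ are arbitrary choices.

```python
import random

def coupled_chains(n, steps, gamma, rng):
    """Run the killed walk and the barrier walk on the same jumps Y_k."""
    s = s_bar = 0
    x_path, xbar_path = [], []
    for _ in range(steps):
        u = 1.0 - rng.random()
        y = int(u ** (-1.0 / gamma))    # P(Y >= m) = m**(-gamma)
        s = min(n, s + y)               # walk killed at the barrier n
        if s_bar + y <= n:              # walk ignoring jumps that overshoot n
            s_bar += y
        x_path.append(n - s)            # X_n(k)  = n - min(S_k, n)
        xbar_path.append(n - s_bar)     # X~_n(k) = n - S~_k
    return x_path, xbar_path

rng = random.Random(4)
x, xbar = coupled_chains(10**4, 500, 0.5, rng)
# both chains are non-increasing, and X_n <= X~_n under this coupling
print(x[-1], xbar[-1])
```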

SLIDE 31

Random walks with barriers

Assume that P(Y1 ≥ n) ∼ 1/(C n^γ), γ ∈ (0, 1). Let:

• ξ be a subordinator with Laplace exponent

  φ(λ) = ∫_0^∞ (1 − e^(−λy)) γ e^(−y) dy/(1 − e^(−y))^(γ+1)

• e(1) ∼ Exp(1), independent of ξ
• ξ^(k)_t = ξt + ∞·1{e(1) ≤ t}, t ≥ 0 (the subordinator ξ killed at time e(1))
• X∞ and X^(k)_∞ the associated self-similar Markov processes with index γ

Corollary (H.-MIERMONT 11). Jointly in law,

  (Xn([C n^γ ·])/n, X̃n([C n^γ ·])/n) → (X^(k)_∞, X∞)

  (An/(C n^γ), Ãn/(C n^γ)) → (∫_0^e(1) exp(−γξr) dr, ∫_0^∞ exp(−γξr) dr)