Bandwidth Allocation in Large Stochastic Networks (Mathieu Feuillet) - PowerPoint PPT Presentation


slide-1
SLIDE 1

Bandwidth Allocation in Large Stochastic Networks

Mathieu Feuillet

PhD thesis defense

12/07/2012

slide-2
SLIDE 2

Introduction

slide-3
SLIDE 3

Modeling

[Diagram: Network / Traffic / Performance] Objectives:

  • Modeling
  • Design
  • Dimensioning

3

slide-4
SLIDE 4

What Are We Talking About?

  • In a distributed storage system with failures, what is the life expectancy of a file?
  • Does the Internet collapse if users are selfish and don't use congestion control?
  • Does CSMA/CA, as used in WiFi, ensure efficient use of bandwidth?

4

slide-5
SLIDE 5

Contents

Mathematical tools:

  • Modeling
  • Scaling methods
  • Stochastic averaging

Examples:

  • Unreliable File System
  • The Law of the Jungle
  • Flow-Aware CSMA

5

slide-6
SLIDE 6

Modeling

slide-7
SLIDE 7

Modeling

[Diagram: Network / Traffic / Performance] Objectives:

  • Modeling
  • Design
  • Dimensioning

Tools:
  • Markov processes
  • Queueing models
  • Scaling methods

7

slide-9
SLIDE 9

Stochastic Models

State: (X(t)), a Markov jump process in ℕ^d:

  • Number of files,
  • Number of active flows in the Internet,
  • Number of messages to be transmitted.

Markov assumptions:

  • Poisson arrivals
  • Exponentially distributed sizes/durations.

8

slide-10
SLIDE 10

Stochastic Models

State: (X(t)), a Markov jump process in ℕ^d:

  • generally non-reversible,
  • when ergodic, the invariant distribution is not known,
  • results on transient properties are rare (for d ≥ 2).

9

slide-11
SLIDE 11

Scaling Methods

slide-13
SLIDE 13

Scaling Methods

Principle: N is a scaling parameter. Analyze the evolution of the sample paths of

X_N(Ψ_N(t)) / Φ_N

as N → ∞, for some convenient (Ψ_N(t)) and (Φ_N).

The time change t → Ψ_N(t) is used as a tool to focus on a specific part of the sample paths. There may be more than one time scale of interest!

11

slide-14
SLIDE 14

Scaling Methods: Goals

Give a first-order description of (X_N(t)):

X_N(Ψ_N(t)) ≈ Φ_N · x(t)

where (x(t)) is a simpler stochastic process, or even a deterministic dynamical system:

ẋ(t) = F(x(t))

12

slide-16
SLIDE 16

Classical Example: Fluid Limit

X̄(t) = X(Nt) / N, with N = X(0).

Scaling parameter: the initial state. Time scale: t → Nt.

Fluid limit reaches 0 ⇒ the process is stable.

13

slide-17
SLIDE 17

Example: Fluid Limit of M/M/1 Queue

X̄(t) = X(Nt) / N, with N = X(0).

[Plots: sample paths of X̄(t) for X(0) = 10, 10², 10³, 10⁴, 10⁵, with λ = 0.8 and μ = 1; as X(0) grows, the scaled paths concentrate on a deterministic trajectory]

14
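The convergence shown in these plots can be reproduced with a short simulation. This is an illustrative sketch (not from the thesis), using λ = 0.8 and μ = 1 as above, for which the fluid limit is x̄(t) = max(1 − (μ − λ)t, 0):

```python
import random

def mm1_scaled(N, t_end, lam=0.8, mu=1.0, seed=42):
    """Simulate an M/M/1 queue started at X(0) = N up to time N * t_end
    and return X(N * t_end) / N, the fluid-scaled state."""
    rng = random.Random(seed)
    x, s = N, 0.0
    horizon = N * t_end
    while True:
        rate = lam + (mu if x > 0 else 0.0)
        s += rng.expovariate(rate)
        if s > horizon:
            return x / N
        # arrival with probability lam / rate, departure otherwise
        if rng.random() < lam / rate:
            x += 1
        elif x > 0:
            x -= 1

# Fluid limit here: x(t) = max(1 - (mu - lam) * t, 0) = 1 - 0.2 t.
approx = mm1_scaled(N=10_000, t_end=2.0)
```

For N = 10⁴ the scaled state at t = 2 should be close to 1 − 0.2 · 2 = 0.6, with fluctuations of order 1/√N.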

slide-22
SLIDE 22

References

Fluid limits for queueing systems: [Malyshev 93], [Rybko-Stolyar 92], [Dai 95].
Scaling methods: [Khasminskii 56], [Freidlin-Wentzell 79], [Ethier-Kurtz 86], [Robert 03].

15

slide-23
SLIDE 23

Technical Corner

Proof of the tightness of the scaled process X_N(Ψ_N(t)) / Φ_N:

  • Stochastic differential equation representation
  • f(X_N(t)) expressed with martingales
  • Standard tightness criteria

16

slide-26
SLIDE 26

Technical Corner

Proof of the tightness of the scaled process X_N(Ψ_N(t)) / Φ_N:

  • Stochastic differential equation representation
  • f(X_N(t)) expressed with martingales
  • Standard tightness criteria

Difficulties:

  • Discontinuities: Skorokhod problem techniques
  • Stochastic averaging

Each example has its specific difficulties.

16

slide-27
SLIDE 27

Stochastic Averaging

slide-31
SLIDE 31

A Deterministic Example

Deterministic sequences (x_N(t)) and (y_N(t)) with:

ẋ_N(t) = N F(x_N(t)),          (fast time scale)
ẏ_N(t) = G(x_N(t), y_N(t))     (slow time scale)

Fast time scale: d/dt x_N(t/N) = F(x_N(t/N)).

Slow time scale: if x(t) tends to a fixed point x⋆, then (y_N(t)) converges to (y(t)) with

ẏ(t) = G(x⋆, y(t))

18
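The time-scale separation can be checked numerically. This is an illustrative sketch (the specific choices F(x) = 1 − x and G(x, y) = x − y are mine, not from the slides); here x⋆ = 1, so the averaged slow dynamics are ẏ = 1 − y:

```python
def two_time_scales(N, t_end=1.0, dt=1e-4):
    """Euler-integrate the coupled system x' = N F(x), y' = G(x, y)
    with F(x) = 1 - x (fixed point x* = 1) and G(x, y) = x - y."""
    x, y = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        x += dt * N * (1.0 - x)   # fast component, relaxation time ~ 1/N
        y += dt * (x - y)         # slow component, driven by x
    return x, y

x_fast, y_slow = two_time_scales(N=100)
# Averaged dynamics: y' = 1 - y with y(0) = 0, so y(1) should be near 1 - e^{-1}.
```

For N = 100 the fast variable has already locked onto x⋆ = 1, and y(1) is within O(1/N) of the averaged value 1 − e⁻¹ ≈ 0.632.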

slide-34
SLIDE 34

A Deterministic Example

Deterministic sequences (x_N(t)) and (y_N(t)) with:

ẋ_N(t) = N F(x_N(t), y_N(t)),  (fast time scale)
ẏ_N(t) = G(x_N(t), y_N(t))     (slow time scale)

Fast time scale: when N → ∞, y_N(t/N) ≈ z and d/dt x_N(t/N) ≈ F(x_N(t/N), z).

Slow time scale: if (x_N(t)) tends to a fixed point x⋆_z, then (y_N(t)) converges to (y(t)) with

ẏ(t) = G(x⋆_{y(t)}, y(t))

19
slide-35
SLIDE 35

Stochastic vs Deterministic

Deterministic vs stochastic correspondence:

  • Fast process: ODE (x(t)), ẋ = F(x(t), y)  ↔  Markov process (X(t)) with generator Ω(y)
  • Slow process: ODE (y(t))  ↔  Markov process (Y(t))
  • Equilibrium: fixed point x⋆_y  ↔  stationary distribution π_y
  • Convergence: regularity of y → x⋆_y  ↔  regularity of y → π_y

20

slide-37
SLIDE 37

References

Statistical mechanics: [Bogolyubov 62].
Stochastic calculus: [Khasminskii 68], [Papanicolaou et al. 77], [Freidlin-Wentzell 79].
Loss networks: [Kurtz 92], [Hunt-Kurtz 94].

21

slide-38
SLIDE 38

Contributions

The Law of the Jungle:

  • Stochastic averaging
  • Scaling over the stationary distributions

Flow-Aware CSMA:

  • Suboptimality of CSMA (mono/multi-channel)
  • Optimality of Flow-Aware CSMA (mono/multi)
  • Time-scale separation

An unreliable file system:

  • Three time-scales
  • Stochastic averaging (simpler proof)

Transient properties of the Engset and Ehrenfest models:

  • Positive martingales
  • Asymptotics on hitting times

22

slide-40
SLIDE 40

Example 1: An Unreliable File System

slide-41
SLIDE 41

Model

[Animation: a distributed system storing βN files, 2 copies per file; total back-up capacity λN]

  • Each copy is lost at rate μ,
  • A file with a single remaining copy can be backed up (its second copy is restored),
  • A file with 0 copies is lost.

24

slide-54
SLIDE 54

Model

[Figure: βN files, 2 copies per file, back-up capacity λN]

What is the decay rate of the network?

24

slide-55
SLIDE 55

Model

X_i(t): number of files with i copies at time t.
(X_0(t), X_1(t), X_2(t)): a transient Markov process with X_0(t) + X_1(t) + X_2(t) = βN and a unique absorbing state (βN, 0, 0).

Transitions from state (x_0, x_1, x_2):

  • X_2 → X_1 at rate 2μx_2 (loss of one of two copies),
  • X_1 → X_0 at rate μx_1 (loss of a last copy),
  • X_1 → X_2 at rate λN·1{x_1 > 0} (back-up).

25

slide-56
SLIDE 56

Model

Since X_2(t) = βN − X_0(t) − X_1(t), the state reduces to (X_0(t), X_1(t)) and the rate 2μx_2 becomes 2μ(βN − x_0 − x_1).

25
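The chain can be simulated directly to see the decay. This is an illustrative sketch with μ = 1, λ = 4, β = 1 (so ρ = λ/μ = 4 and 2β < ρ, the underloaded regime); the parameters and the modest N are my choices, made for speed:

```python
import random

def simulate_files(N, t_end, lam=4.0, mu=1.0, beta=1.0, seed=1):
    """Simulate (X0, X1, X2) with transitions
    X2 -> X1 at rate 2*mu*x2, X1 -> X0 at rate mu*x1,
    X1 -> X2 at rate lam*N while x1 > 0, up to absolute time t_end."""
    rng = random.Random(seed)
    x0, x1, x2 = 0, 0, int(beta * N)
    t = 0.0
    while t < t_end:
        r_loss2 = 2.0 * mu * x2               # one of two copies is lost
        r_loss1 = mu * x1                     # the last copy is lost
        r_backup = lam * N if x1 > 0 else 0.0  # back-up restores a copy
        rate = r_loss2 + r_loss1 + r_backup
        if rate == 0.0:
            break  # absorbing state reached
        t += rng.expovariate(rate)
        u = rng.random() * rate
        if u < r_loss2:
            x2, x1 = x2 - 1, x1 + 1
        elif u < r_loss2 + r_loss1:
            x1, x0 = x1 - 1, x0 + 1
        else:
            x1, x2 = x1 - 1, x2 + 1
    return x0, x1, x2

N = 200
x0, x1, x2 = simulate_files(N, t_end=0.5 * N)  # time scale t -> Nt, t = 0.5
frac_lost = x0 / N
```

The total number of files is conserved, and on the time scale t → Nt a macroscopic fraction of files is already lost even in the underloaded regime.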

slide-57
SLIDE 57

Different Behaviors

Three time scales: t → t/N, t → t, t → Nt.

Three regimes:

  • Overload: 2β > ρ = λ/μ,
  • Critical load: 2β = ρ,
  • Underload: 2β < ρ.

26

slide-58
SLIDE 58

Time scale: t → t/N

On this time scale, the transitions of (X_0(t/N), X_1(t/N), X_2(t/N)) occur at rates:

  • X_1 → X_0 at rate μx_1/N,
  • X_1 → X_2 at rate λ·1{x_1 > 0},
  • X_2 → X_1 at rate (2μ/N)(βN − x_0 − x_1).

27

slide-60
SLIDE 60

Time scale: t → t/N

(L_1(t)): an M/M/1 queue with input rate 2μβ and output rate λ·1{x_1 > 0} (while X_2 ∼ Nβ):

  • ergodic if 2β < ρ,
  • transient if 2β > ρ.

No loss!

28

slide-62
SLIDE 62

Time scale: t → t

Overloaded network

If 2β > ρ, (X0(t)/N, X1(t)/N, X2(t)/N) converges to a deterministic process (x0(t), x1(t), x2(t)).

[Plot: x_2(t) (2 copies) decreasing, x_1(t) (1 copy), x_0(t) (0 copies) increasing]

About N(β − ρ/2) files are lost!

29

slide-64
SLIDE 64

Time scale: t → t

Underloaded network

If 2β < ρ, (X_0(t)/N, X_1(t)/N, X_2(t)/N) converges to (x_0(t), x_1(t), x_2(t)) = (0, 0, β).

[Plot: x_2(t) (2 copies) stays at β]

No significant loss!

30

slide-65
SLIDE 65

Time Scale t → Nt

lim_{N→+∞} X_0(Nt)/N = Ψ(t),

where Ψ(t) is the unique solution of

Ψ(t) = μ ∫_0^t 2μ(β − Ψ(s)) / (λ − 2μ(β − Ψ(s))) ds.

31

slide-66
SLIDE 66

Time Scale t → Nt

lim_{N→+∞} X_0(Nt)/N = Ψ(t),

where Ψ(t) is the unique solution in (0, β) of

(1 − Ψ(t)/β)^{ρ/2} e^{Ψ(t)+t} = 1.

[Plot: Ψ(t), the scaled number of files with 0 copies, growing towards β]

t → Nt is the "correct" time scale to describe the decay.

31

slide-67
SLIDE 67

A Stochastic Averaging Phenomenon

X_0 ∼ NΨ(t), X_2 ∼ N(β − Ψ(t)). Fast time scale: at "time" Nt, (X_1(Nt + u/N), u ≥ 0) is an M/M/1 queue (with equilibrium X_{1,t}(∞)) and transition rates:

  • +1 at rate 2μ(β − Ψ(t)),
  • −1 at rate λ.

32

slide-68
SLIDE 68

A Stochastic Averaging Phenomenon

X_0 ∼ NΨ(t), X_2 ∼ N(β − Ψ(t)). Slow time scale: (X_0(Nt)/N) "sees" only X_1 at equilibrium:

Ψ(t) "=" μ ∫_0^t E(X_{1,s}(∞)) ds = μ ∫_0^t 2μ(β − Ψ(s)) / (λ − 2μ(β − Ψ(s))) ds.

32

slide-69
SLIDE 69

Technical Corner

Step 1. Radon measures: tightness of (μ_N) with

⟨μ_N, g⟩ = (1/N) ∫_0^{Nt} g(X^N_1(s), s) ds

33

slide-72
SLIDE 72

Technical Corner

Step 1. Radon measures: tightness of (μ_N) with

⟨μ_N, g⟩ = (1/N) ∫_0^{Nt} g(X^N_1(s), s) ds

Step 2. Control of the limits of (μ_N):

lim_{N→∞} (1/N) ∫_0^{Nt} X^N_1(s) ds = Ψ(t) = ∫_0^t ⟨π_s, I⟩ ds,   ∫_0^t π_s(ℕ) ds = t

Here: proof by stochastic domination.

Step 3. Identification of π_s with martingale techniques and balance equations.

33

slide-73
SLIDE 73

Decay Rate of the Network

T_N(δ) = inf{ t ≥ 0 : X^N_0(t) ≥ δβN }

Theorem:

lim_{N→∞} T_N(δ)/N = −(ρ/2) log(1 − δ) − δβ.

[Plot: T_N(δ)/N as a function of δ]

34
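The constant in the theorem agrees with the hitting time of level δβ by Ψ. A numerical sketch with μ = 1 (so ρ = λ), λ = 4, β = 1, δ = 0.5, values chosen here purely for illustration:

```python
import math

def hitting_time(delta, lam=4.0, beta=1.0, dt=1e-5):
    """Euler-integrate dPsi/dt = 2(beta - Psi) / (lam - 2(beta - Psi))
    (mu = 1, underload) and return the time at which Psi reaches delta*beta."""
    psi, t = 0.0, 0.0
    target = delta * beta
    while psi < target:
        psi += dt * 2.0 * (beta - psi) / (lam - 2.0 * (beta - psi))
        t += dt
    return t

delta, lam, beta = 0.5, 4.0, 1.0
t_ode = hitting_time(delta)
# Theorem (with mu = 1, rho = lam): -(rho/2) log(1 - delta) - delta * beta
t_formula = -(lam / 2.0) * math.log(1.0 - delta) - delta * beta
```

Both computations give the same decay time (≈ 0.886 here), which is also what the implicit equation (1 − Ψ/β)^{ρ/2} e^{Ψ+t} = 1 yields at Ψ = δβ.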

slide-74
SLIDE 74

Conclusion

  • Three different time scales
  • A first example of stochastic averaging
  • Asymptotics for a transient property.

Extensions:

  • Number of copies d > 2 ⇒ d − 1 time scales
  • Decentralized back-up (mean-field)

Open problem:

  • Modeling a DHT: geometrical considerations

35

slide-75
SLIDE 75

Example 2: The Law of the Jungle

slide-78
SLIDE 78

Context

Congestion control:

  • Rate adjustment to limit packet loss
  • Retransmission of lost packets

No congestion control:

  • No rate adjustment
  • Sources send at their maximum rate
  • Coding to recover from packet loss

Does this bring congestion collapse?

37

slide-79
SLIDE 79

Bandwidth Sharing Networks

[Massoulié Roberts 00]

[Figure: a linear network with two links, 1 and 2]

  • A flow: a stream of packets
  • Flows are considered as a fluid
  • Users divided in classes/routes
  • Poisson arrivals/Exponential sizes
  • Resource allocation determined by the congestion policy

38

slide-80
SLIDE 80

Bandwidth Sharing Networks

[Massoulié Roberts 00]

[Figure: linear network; class i has arrival rate λi and service rate μiϕi]

  • A flow: a stream of packets
  • Flows are considered as a fluid
  • Users divided in classes/routes
  • Poisson arrivals/Exponential sizes
  • Resource allocation determined by the congestion policy

38

slide-81
SLIDE 81

Resource Allocation

Usually, α-fair policies are considered [MW00]. Here:

  • Sources send at their maximum rate (1 or a)
  • Tail dropping: at each link, output rates are proportional to input rates

39

slide-84
SLIDE 84

Resource Allocation

Usually, α-fair policies are considered [MW00]. Here:

  • Sources send at their maximum rate (1 or a)
  • Tail dropping: at each link, output rates are proportional to input rates

With xi flows of class i (class 0 crosses both links; class 2 has access rate a):

ϕ1 = x1/(x0 + x1),   α = x0/(x0 + x1),
ϕ0 = min(α, α/(x2 a + α)),   ϕ2 = min(x2 a, x2 a/(α + x2 a)).

39
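The allocation rule can be written out as a small helper; this is an illustrative sketch (the function name and the per-flow unit input rates are my reading of the slide):

```python
def tail_drop_rates(x0, x1, x2, a):
    """Per-class throughputs under tail dropping on the linear network:
    link 1 is shared by classes 0 and 1, link 2 by class 0's surviving
    rate alpha and class 2's input x2 * a; both links have capacity 1."""
    # Link 1: outputs proportional to inputs.
    alpha = x0 / (x0 + x1)   # class-0 rate surviving link 1
    phi1 = x1 / (x0 + x1)
    # Link 2: inputs alpha and x2 * a.
    in2 = x2 * a
    phi0 = min(alpha, alpha / (in2 + alpha))
    phi2 = min(in2, in2 / (alpha + in2))
    return phi0, phi1, phi2

# Example: x0 = x1 = 1, x2 = 4 flows with access rate a = 0.25.
phi0, phi1, phi2 = tail_drop_rates(x0=1, x1=1, x2=4, a=0.25)
```

When link 2 is saturated (α + x2·a > 1), the class-0 and class-2 outputs are scaled proportionally and sum to the link capacity.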
slide-86
SLIDE 86

Ergodicity Condition

[Figure: linear network with links 1 and 2]

Optimal ergodicity condition: ρ0 + ρ1 < 1, ρ0 + ρ2 < 1, where ρi = λi/μi.

We know that α-fair policies are optimal [BM02].

What about our policy?

40

slide-88
SLIDE 88

Fluid Limits

If x2 ≫ 0, class 2 uses virtually all of the second link. If (z0(t), z1(t), z2(t)) is a fluid limit with z2(0) > 0:

ż0(t) = λ0,
ż1(t) = λ1 − μ1 z1(t)/(z0(t) + z1(t)),
ż2(t) = λ2 − μ2.

If ρ2 < 1, (z2(t)) reaches 0 in finite time.

41

slide-89
SLIDE 89

Fluid Limits

When z2 = 0, classes 0 and 1 are frozen on the fast time scale: π^α_2 is the stationary distribution of class 2, and

Φ̄0(α) = E_{π^α_2}[ min(α, α/(x2 a + α)) ]

42

slide-91
SLIDE 91

Fluid Limits

When z2(t) = 0:

ż0(t) = λ0 − μ0 Φ̄0( z0(t)/(z0(t) + z1(t)) ),
ż1(t) = λ1 − μ1 z1(t)/(z0(t) + z1(t)),
ż2(t) = 0,

with Φ̄0(α) = E_{π^α_2}[ min(α, α/(x2 a + α)) ].

Stochastic averaging

43

slide-94
SLIDE 94

Ergodicity Conditions

Ergodicity conditions: ρ1 < 1, ρ2 < 1, ρ0 < Φ̄0(1 − ρ1).

Optimal conditions: ρ1 < 1, ρ2 < 1, ρ0 < min(1 − ρ1, 1 − ρ2).

But: Φ̄0(1 − ρ1) < min(1 − ρ2, 1 − ρ1).

Not optimal!

44

slide-96
SLIDE 96

Impact of Maximum Rate a

[Plot: stability region in the (ρ0, ρ1) plane: the optimal region vs the regions achieved for a = 1, 0.1, 0.01]

What happens when a → 0 ?

45

slide-97
SLIDE 97

Scaling the Maximum Rate a

We freeze α and consider the process (X^S_2(t)) with Q-matrix:

q(x2, x2 + 1) = λ2,
q(x2, x2 − 1) = μ2 min( x2 a, x2 a/(α + x2 a) )

46
slide-100
SLIDE 100

Scaling the Maximum Rate a

We freeze α and consider the process (X^S_2(t)) with Q-matrix:

q(x2, x2 + 1) = λ2,
q(x2, x2 − 1) = μ2 min( x2 a/S, (x2 a/S)/(α + x2 a/S) )

Time scale: t → St. Then (X^S_2(St)/S) ⇒ (x2(t)) with

ẋ2(t) = λ2 − μ2 min( a x2(t), a x2(t)/(α + a x2(t)) )

Fixed point:

x2 = (ρ2/a) max( 1, α/(1 − ρ2) )

46
slide-103
SLIDE 103

Scaling the Maximum Rate a

Interchange of limits:

(X^S_2(St)/S)  →(t → ∞)→  X^S_2(∞)/S
     ↓ (S → ∞)                 ↓ (S → ∞)
(x2(t))        →(t → ∞)→  x2(∞)

Convergence of processes ⇒ convergence of stationary distributions:

lim_{a→0} Φ̄0(1 − ρ1) = min(1 − ρ1, 1 − ρ2)

The policy is asymptotically optimal.

47

slide-104
SLIDE 104

Conclusion

  • Analysis of equilibrium,
  • Inversion of limits: scaling on stationary distributions

  • Impact of access rates

Extensions:

  • Linear networks with L links
  • Second order scaling: speed of convergence.
  • Upstream trees

Open problem:

  • General acyclic networks

48

slide-105
SLIDE 105

Example 3: Flow-Aware CSMA

slide-108
SLIDE 108

Model

The network is represented by a conflict graph.

[Figure: three nodes 1, 2, 3; solid edges are wireless links, dashed edges potential interference]

For each node i:

  • Xi(t) ∈ ℕ: number of flows at time t
  • Yi(t) = 1 if node is active at time t, 0 otherwise.

50

slide-109
SLIDE 109

Conflict Graph

[Figure: conflict graph on nodes 1, 2, 3; class i has arrival rate λi and service rate μi]

Schedules: ∅, {1}, {2}, {3}, {1, 3}.

51

slide-117
SLIDE 117

Conflict Graph

[Figure: conflict graph on nodes 1, 2, 3 with arrival rates λ1, λ2, λ3]

Schedules: ∅, {1}, {2}, {3}, {1, 3}. Optimal stability region: the convex hull of the schedules. In this example: {ρ1 + ρ2 ≤ 1, ρ2 + ρ3 ≤ 1} with ρi = λi/μi.

Stability region?

51
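The schedules above are exactly the independent sets of the conflict graph. A short sketch enumerating them by bitmask for the 3-node path 1 - 2 - 3, together with the stability-region test for this example:

```python
def schedules(n, conflicts):
    """All independent sets of a conflict graph on nodes 1..n,
    given as a list of conflicting pairs."""
    out = []
    for mask in range(1 << n):
        s = {i + 1 for i in range(n) if mask >> i & 1}
        if all(not (u in s and v in s) for u, v in conflicts):
            out.append(frozenset(s))
    return out

def in_stability_region(rho1, rho2, rho3):
    """Optimal stability region for the path conflict graph."""
    return rho1 + rho2 <= 1 and rho2 + rho3 <= 1

# Path conflict graph: node 2 conflicts with both 1 and 3.
scheds = schedules(3, [(1, 2), (2, 3)])
```

The enumeration returns the five schedules listed on the slide; sets such as {1, 2} are excluded because nodes 1 and 2 interfere.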

slide-120
SLIDE 120

Standard CSMA

Each node alternates: Back-off ∼ exp(α), Transmission ∼ exp(1).

Optimal?

[Plot: with ρ1 = ρ3, the region actually stabilized by standard CSMA vs the optimal region {ρ1 + ρ2 ≤ 1, ρ2 + ρ3 ≤ 1}]

52

slide-121
SLIDE 121

Flow-Aware CSMA

Proposed modification of CSMA: an exponential back-off time depending on the number of flows: Back-off ∼ exp(α x1), Transmission ∼ exp(1).

53

slide-124
SLIDE 124

Flow-Aware CSMA

Scaled version: Back-off ∼ exp(αN x1), Transmission ∼ exp(N).

The process (X_N(t), Y_N(t)) is difficult to analyze. Idea: separate the network dynamics from the flow dynamics. When N → ∞, (Y_N(t)) behaves as a classical loss network.

Stochastic averaging

53

slide-125
SLIDE 125

Optimality of Flow-Aware CSMA

Theorem: the flow-aware CSMA algorithm is optimal for any network.

Sketch of proof:

  • Asymptotically, it behaves as Max-Weight.
  • Deduce a Lyapunov function and apply Foster's criterion.

54
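The comparison in the proof is with Max-Weight, which picks the schedule maximizing the total number of flows served. A minimal sketch for the 3-node conflict graph (the schedule list is the one from the earlier slide; the function name is mine):

```python
def max_weight(x, scheds):
    """Max-Weight scheduling: choose the schedule (independent set)
    maximizing the sum of queue lengths x[i] over its nodes."""
    return max(scheds, key=lambda s: sum(x[i] for i in s))

# Schedules of the path conflict graph 1 - 2 - 3.
SCHEDS = [frozenset(), frozenset({1}), frozenset({2}),
          frozenset({3}), frozenset({1, 3})]

x = {1: 5, 2: 1, 3: 4}
chosen = max_weight(x, SCHEDS)
```

With x = (5, 1, 4) the schedule {1, 3} wins (weight 9); when node 2 dominates, serving it alone is the max-weight choice even though {1, 3} serves two nodes.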

slide-126
SLIDE 126

Conclusion

  • An optimal and fully distributed channel access mechanism

  • Limiting process: jump process
  • Simplification of the problem

Extension:

  • Multi-channel

Open problem:

  • Initial problem still open

55

slide-130
SLIDE 130

General Conclusion

Three examples:

  • Capacity of an unreliable file system
  • Law of the Jungle
  • Flow-Aware CSMA

Mathematical tools:

  • Several examples of scalings
  • A simpler proof for stochastic averaging

And:

  • Scalings: a set of powerful tools
  • Stochastic averaging: a not-so-rare phenomenon

Many interesting open questions. . .

56

slide-131
SLIDE 131

Thank you!