Decentralized Prediction of End-to-End Network Performance Classes (PowerPoint PPT Presentation)

SLIDE 1

Introduction Class-based Representation Matrix Completion Experiments and Evaluations Conclusions and Future Work

Decentralized Prediction of End-to-End Network Performance Classes

Yongjun Liao, Wei Du, Pierre Geurts, Guy Leduc

Research Unit in Networking (RUN), University of Liège, Belgium

December 08, 2011

1 / 27

SLIDE 2

End-to-End Network Performance

Metrics

round-trip time (RTT), available bandwidth (ABW), packet loss rate (PLR)

Network Performance Matters!

peer-to-peer downloading
overlay routing
content distribution networks
Internet games

Peer Selection

smallest RTT, highest ABW

SLIDE 3

Acquisition on Large-Scale Networks

Full-mesh active measurements: n nodes ⇒ O(n²) measurements; accurate but expensive.

Network performance prediction: n nodes ⇒ measurements ≪ O(n²); less accurate but cheap.

SLIDES 4-7

Network Performance Prediction

Challenges

Networks are dynamic.
◮ Churn: nodes join and leave frequently.
◮ Metric values vary over time.

Metrics differ largely.
◮ RTT: symmetric; ABW: asymmetric.
◮ RTT: the smaller the better; ABW: the larger the better.
◮ RTT and ABW are measured with different methodologies.

Decentralized processing is preferred.
◮ no landmarks or central servers
◮ no infrastructure

SLIDE 8

Related Work on Network Performance Prediction

Round-Trip Time

Euclidean Embedding
◮ GNP: Ng et al., INFOCOM 2002
◮ Vivaldi: Dabek et al., SIGCOMM 2004

Matrix Factorization
◮ IDES: Mao et al., IMC 2004
◮ DMF: Liao et al., Networking 2010

Available Bandwidth
◮ SEQUOIA: Ramasubramanian et al., SIGMETRICS 2009
◮ iPlane: Madhyastha et al., USENIX OSDI 2006

SLIDES 9-12

Our Contributions

1. Class-based Performance Representation
Represent network performance by discrete-valued classes, instead of real-valued quantities.

2. Formulation as Matrix Completion
Treat the prediction problem as a matrix completion problem.

3. Decentralized Prediction Algorithm
DMFSGD: a decentralized matrix factorization algorithm based on stochastic gradient descent.

SLIDES 13-17

Class-based Performance Representation

Binary Classification: “good” or “bad”

Class reflects the QoS experience of end users.

Class unifies different metrics.

Class information is often sufficient.
◮ Streaming applications care if ABW is high enough.
◮ P2P applications care if RTT is small enough.

Class measurements are cheap.
◮ Classes are rough.
◮ Classes are more stable.

SLIDES 18-20

Measure Performance Classes

“good” or “bad”

Thresholding

Measure whether the metric value is larger or smaller than a threshold τ.
If RTT < 100ms, performance is “good”.
If ABW > 100Mbps, performance is “good”.

Measuring classes is much cheaper!

Threshold τ

Defined according to the requirements of applications; e.g., Google TV requires 10Mbps for HD content.
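The thresholding rule above is simple enough to sketch in a few lines (a minimal illustration, not the authors' code; the function names and the default τ values are ours, taken from the examples on the slide):

```python
# Minimal sketch: map raw metric measurements to binary performance classes
# by thresholding, encoded as +1 ("good") / -1 ("bad").

def rtt_class(rtt_ms, tau=100.0):
    """RTT: the smaller the better, so 'good' means below the threshold."""
    return 1 if rtt_ms < tau else -1

def abw_class(abw_mbps, tau=100.0):
    """ABW: the larger the better, so 'good' means above the threshold."""
    return 1 if abw_mbps > tau else -1

print(rtt_class(42.0))    # 1: 42 ms < 100 ms, "good"
print(abw_class(10.0))    # -1: 10 Mbps < 100 Mbps, "bad"
```

Note that the classifier only needs the comparison against τ, not an accurate value of the metric, which is one reason class measurements are cheap.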

SLIDE 21

Our Contributions (recap)

SLIDES 22-26

Matrix Completion for Network Performance Prediction

Matrix Completion: predict the unknown entries of a matrix X from a few known entries.

Why is it possible?

Matrix entries are correlated.
The correlations induce low rank.
An n × n matrix of rank r < n has
◮ only r linearly independent columns or rows.

You don’t need all n × n entries!
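The low-rank argument can be illustrated numerically (a toy example of ours, not from the talk):

```python
# Toy illustration: an n x n matrix built from r-dimensional factors has rank r,
# so its n*n entries carry only about 2*n*r degrees of freedom.
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 2
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
X = U @ V.T                      # n x n matrix of rank r

print(np.linalg.matrix_rank(X))  # 2: only r linearly independent columns/rows
```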

SLIDES 27-35

Matrix Completion for Network Performance Prediction

[Figure sequence: an 8 × 8 performance matrix among nodes 1-8; a few entries are filled in from measurements, each entry is labeled “good” or “bad” (encoded as 1 or -1), and the remaining missing entries are to be predicted.]

SLIDE 36

Why is Matrix Completion Possible

Correlations across network performance arise from:
network topology
routing algorithms
redundancies among network paths
...

SLIDE 37

Why is Matrix Completion Possible

Low rank of performance matrices

[Plot: normalized singular values of the RTT, RTT class, ABW, and ABW class matrices, against singular value index (1 to 20).]

RTT matrix: 2255 × 2255. ABW matrix: 201 × 201. Class matrices are obtained by thresholding.

SLIDES 38-40

Low-Rank Matrix Factorization

X ≈ X̂, with Rank(X̂) = r

X̂ = U V^T

Look for (U, V), instead of X̂.
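Centrally, the best rank-r factors could be obtained with a truncated SVD; the point of the talk is to find (U, V) without ever assembling X, but this sketch (ours, not the authors') shows what the factorization computes:

```python
# Sketch: best rank-r approximation X ~= U V^T via truncated SVD (Eckart-Young).
import numpy as np

def low_rank_factors(X, r):
    """Return (U, V) such that U @ V.T is the best rank-r approximation of X."""
    left, svals, right_t = np.linalg.svd(X, full_matrices=False)
    U = left[:, :r] * svals[:r]   # fold the singular values into U
    V = right_t[:r, :].T
    return U, V

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))  # exactly rank 2
U, V = low_rank_factors(X, r=2)
print(np.allclose(X, U @ V.T))   # True: a rank-2 matrix is recovered exactly
```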

SLIDE 41

Our Contributions (recap)

SLIDES 42-43

Stochastic Gradient Descent

X ≈ X̂, Rank(X̂) = r, X̂ = U V^T

Entry-wise: x_ij ≈ x̂_ij = u_i v_j^T

SLIDES 44-55

Decentralized Matrix Factorization by Stochastic Gradient Descent

No construction of matrices

X: measurement x_ij is probed by node i.
U, V: row vectors u_i, v_i are stored at node i.

Repeated SGD updates

When x_ij is available, update so that x_ij ≈ u_i v_j^T.

[Figure sequence: node i sends probe(u_i) to node j; node j computes x_ij and replies with (v_j, x_ij); then node i updates u_i and node j updates v_j so that u_i v_j^T ≈ x_ij. In the use phase, node i estimates its performance to a node k from the stored coordinates (u_i, v_i) and (u_k, v_k).]
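The update step can be sketched as follows (assumptions: we use the hinge loss from the appendix slide and illustrative values for the learning rate eta and regularization lam; the authors' exact update rule and parameters may differ):

```python
# Sketch of one DMFSGD-style update at nodes i and j when a class measurement
# x_ij in {+1, -1} arrives: nudge u_i and v_j so u_i . v_j moves toward x_ij.
# Hinge loss l(x, xhat) = max(0, 1 - x*xhat).
import numpy as np

def dmfsgd_update(u_i, v_j, x_ij, eta=0.05, lam=0.01):
    margin = x_ij * (u_i @ v_j)
    # the hinge loss has a nonzero gradient only when the margin is violated
    grad_u = (-x_ij * v_j if margin < 1 else 0.0) + lam * u_i
    grad_v = (-x_ij * u_i if margin < 1 else 0.0) + lam * v_j
    return u_i - eta * grad_u, v_j - eta * grad_v

rng = np.random.default_rng(2)
u, v = rng.standard_normal(3), rng.standard_normal(3)   # r = 3 coordinates
for _ in range(200):                                    # repeated updates
    u, v = dmfsgd_update(u, v, x_ij=1.0)
print(np.sign(u @ v))  # 1.0: the prediction now agrees with the measurement
```

In the protocol, node i would run only the u-update and node j only the v-update, each using just the vectors exchanged in the probe/reply messages.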

SLIDE 56

Decentralized Matrix Factorization by Stochastic Gradient Descent

DMFSGD
◮ All nodes employ the same processing.
◮ Each node selects k neighbors to communicate with.
◮ Each node collaborates with one neighbor at a time.

Advantages
◮ easy to implement
◮ computationally lightweight
◮ suitable for large-scale dynamic measurements
◮ adaptable to various metrics

SLIDES 57-58

Experiments and Evaluations

Datasets

           Harvard         Meridian        HP-S3
nodes      226             2500            231
metric     RTT             RTT             ABW
dynamic    Yes             No              No
source     Ledlie et al.   Wong et al.     Ramasubramanian et al.
           NSDI 2007       SIGCOMM 2005    SIGMETRICS 2009

*The Harvard dataset is collected from Azureus (now Vuze) and contains dynamic measurements with time-stamps.

Accuracy = (# of correct predictions) / (# of data)

           Harvard    Meridian    HP-S3
Accuracy   89.4%      85.4%       87.3%
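The accuracy figure is computed exactly as the formula states (a trivial sketch of ours):

```python
# Sketch: accuracy = (# of correct class predictions) / (# of data).
def accuracy(predicted, measured):
    correct = sum(p == m for p, m in zip(predicted, measured))
    return correct / len(measured)

print(accuracy([1, -1, 1, 1], [1, -1, -1, 1]))  # 0.75
```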

SLIDES 59-63

Peer Selection

Find a “good” peer, not the “best” peer!

Some peers are predicted as “good” and some as “bad”; select a peer that is predicted as “good”.

The node is satisfied if the selected peer is truly “good”, and unsatisfied if the selected peer is actually “bad”.

SLIDES 64-68

Peer Selection

Evaluation

Count the number of unsatisfied nodes.

Methods compared

classification: class-based prediction
random peer selection
regression: value-based prediction
◮ Predict values of some metric by our DMFSGD algorithm.
◮ Select the predicted best peer for each node.
◮ Check if the selected peers are truly “good”.
classification with noise
◮ Overall 15% erroneous labels were simulated.

SLIDES 69-70

Peer Selection

[Plots for Harvard and HP-S3: percentage of unsatisfied nodes versus the number of peers (10 to 60), comparing Random, Classification, Regression, and Classification with noise.]

SLIDE 71

Conclusions and Future Work

Decentralized Prediction of Network Performance Classes
◮ Class-based Performance Representation
◮ Formulation as Matrix Completion
◮ DMFSGD: a decentralized matrix factorization algorithm by stochastic gradient descent

The approach is:
◮ accurate and scalable
◮ generic, dealing with various metrics
◮ robust against erroneous measurements
◮ usable in real Internet applications

Future Work

Multiclass classification

SLIDE 72

Acknowledgement

We wish to thank:
Dr. Ramasubramanian for providing the HP-S3 dataset;
our shepherd Augustin Chaintreau;
the anonymous reviewers;
the project FP7-Fire ECODE.

Thank you for listening. Any questions?

SLIDE 73

Low-Rank Matrix Factorization

(U, V) = arg min L(X, U, V, W, λ)
       = arg min Σ_{i,j=1..n} w_ij l(x_ij, u_i v_j^T) + λ Σ_{i=1..n} u_i u_i^T + λ Σ_{i=1..n} v_i v_i^T

(U, V) = {(u_i, v_i), i = 1, ..., n}
w_ij = 1 if x_ij is known and 0 otherwise
l: loss function that penalizes the difference between x and x̂
◮ square loss function: l(x, x̂) = (x − x̂)²
◮ hinge loss function: l(x, x̂) = max(0, 1 − x x̂)
◮ logistic loss function: l(x, x̂) = ln(1 + e^{−x x̂})
λ: regularization coefficient
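The three candidate losses can be written down directly (x is the measured class in {+1, −1} and xhat = u_i v_j^T the prediction; this is our sketch of the formulas on the slide):

```python
# The three loss functions listed on the slide, for label x and prediction xhat.
import math

def square_loss(x, xhat):
    return (x - xhat) ** 2

def hinge_loss(x, xhat):
    return max(0.0, 1.0 - x * xhat)

def logistic_loss(x, xhat):
    return math.log(1.0 + math.exp(-x * xhat))

# hinge/logistic losses do not penalize confident correct predictions, while the
# square loss penalizes over-shooting the label even when the sign is right:
print(hinge_loss(1, 2.0))   # 0.0
print(square_loss(1, 2.0))  # 1.0
```

This difference is why the margin-based losses are natural choices for the ±1 class representation.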

SLIDE 74

Parameter Sensitivity

parameter                       tested values
learning rate η                 0.001, 0.01, 0.1, 1
regularization coefficient λ    0.001, 0.01, 0.1, 1
rank r                          3, 10, 20, 100
loss function l                 hinge, logistic
neighbor number k               5, 10, 30, 50 (Harvard and HP-S3); 16, 32, 64, 128 (Meridian)
classification threshold τ      10%, 25%, 50%, 75%, 90% (portion of good-performing)

*chosen value

Insensitive because the inputs are either 1 or −1.

SLIDE 75

Robustness Against Erroneous Measurements

Sources of erroneous measurements

inaccurate measurement techniques
network anomalies

Error types

1. Flip near τ
2. Underestimation bias
3. Flip randomly
4. Good-to-Bad