Architecture of a Network-Aware P2P-TV Application: the NAPA-WINE Approach
Emilio Leonardi, Politecnico di Torino
COMET-ENVISION Workshop, Slough, 11th November 2011


SLIDE 1

COMET-ENVISION Workshop Slough 11th November 2011

Architecture of a Network-Aware P2P-TV Application: the NAPA- WINE Approach

Emilio Leonardi Politecnico di Torino

Slough, 11th November 2011

SLIDE 2

Internet Video Streaming

• Enable video distribution from anywhere, to any number of people, anywhere in the world
• Unlimited number of channels
• Everyone can be a content producer/provider

SLIDE 3

CDN vs P2P

• Content Delivery Networks
  - the resources (costs) demanded of the servers scale linearly with the number of users
  + fully controllable by the content provider and the Internet provider
• P2P systems (peer-assisted)
  + the resources (costs) demanded of the servers are potentially independent of the system scale
  - require high-bandwidth access
  - much more difficult to control
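The scaling difference above can be illustrated with a toy cost model (the numbers and function names are ours, purely for illustration):

```python
def cdn_server_load(num_users, stream_rate_mbps):
    """CDN: the servers send one copy of the stream to every user,
    so the demanded upload capacity grows linearly with the audience."""
    return num_users * stream_rate_mbps

def p2p_server_load(stream_rate_mbps, initial_copies=2):
    """Peer-assisted: the source injects only a few copies of the stream;
    peers redistribute it among themselves, so the server-side cost is
    (ideally) independent of the system scale."""
    return initial_copies * stream_rate_mbps

# 10,000 viewers of a 1 Mb/s stream:
print(cdn_server_load(10_000, 1.0))   # 10000.0 Mb/s from the servers
print(p2p_server_load(1.0))           # 2.0 Mb/s, regardless of audience size
```

The P2P figure is the idealized best case; in practice the source must compensate for peers with scarce upload bandwidth.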

SLIDE 4

Open issues in peer assisted systems

• peer authentication (access control, pricing)
• incentives to cooperation
• robustness against attacks (such as pollution)
• localization of traffic

In a nutshell: how to make peer-assisted systems secure, fully controllable, and network-friendly?

SLIDE 5

NAPA-WINE Project Objectives

• Definition and implementation of a network-aware P2P-TV application able to minimize the impact on the transport network
• Design of a distributed monitoring tool to be integrated within the application
• Design of algorithms for the control of cooperative P2P-TV systems
• Characterization of P2P-TV traffic

SLIDE 6

Tree-based P2P-TV systems

• Peers are organized in a tree structure rooted at the source
• The content is distributed along the tree

SLIDE 7

Multi-tree P2P-TV systems

• The source adopts a multi-description encoder
• Each description is distributed on a different tree

SLIDE 8

Unstructured Systems

• Peers are arranged according to a generic, highly connected network
• The stream is subdivided into portions called chunks
• Each chunk is distributed along a (possibly different) spanning tree (SP)
• SPs are selected using simple, random, fully distributed algorithms

SLIDE 9

Scheduling algorithm at nodes

• Chunks are distributed through the network using a swarm-like (epidemic) approach
• As soon as a peer obtains a new chunk c, it will offer c to its neighbors
• Chunks are not propagated perfectly in order; however, chunk timing is critical (due to the application requirements)
• Each chunk has a deadline after which it is no longer useful (this deadline is related to the play-out buffer)
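A minimal sketch of this epidemic diffusion step, in Python (the data structures, the 5-second deadline, and the fan-out of 3 are our own illustrative choices, not the NAPA-WINE implementation):

```python
import random

PLAYOUT_DEADLINE = 5.0  # seconds: chunks older than this are useless (play-out buffer)

def on_chunk_received(peer, chunk_id, created_at, now, neighbours):
    """Swarm-like (epidemic) diffusion: as soon as a peer obtains a new
    chunk, it offers it to a random subset of its neighbours.
    Chunks past their play-out deadline are silently dropped."""
    if now - created_at > PLAYOUT_DEADLINE:
        return []                          # past its deadline: no longer useful
    if chunk_id in peer["chunks"]:
        return []                          # duplicate: do not re-offer
    peer["chunks"].add(chunk_id)
    # offer to a random subset of neighbours; over many chunks this traces
    # a possibly different random spanning tree per chunk
    targets = random.sample(neighbours, min(3, len(neighbours)))
    return [(chunk_id, t) for t in targets]
```

For example, a peer with four neighbours that receives a fresh chunk emits three offers; the same chunk received again, or a stale chunk, emits none.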

SLIDE 10

Pros and Cons of Unstructured Architectures

+ fully resilient to churn
+ no need of centralized control
+ efficient in exploiting the bandwidth
- larger delays in delivering information
- very difficult to control and predict the performance

The NAPA-WINE application is unstructured.

SLIDE 11

P2P-TV Simple View

IP topology
Overlay topology
Distribution topology

Which TOPOLOGY to use? Which LINKS to use?

SLIDE 12

P2P-TV: NAPA-WINE Approach

Monitoring and Control

SLIDE 13

NAPA-WINE Second Video Conference, 22 Oct 2008

Overview of the architecture

(Architecture diagram; component labels per layer:)

• User Layer: Content Ingestion, Player, Control Interface
• Scheduler layer: Peer Selection, Trading Logic, Chunk buffer, Video Source(s), Display(s)
• Overlay Layer: Active peers' InfoBase, Neighbour set, Topology controller, REP controller (Peer-Rep, Net-Rep, Ext-Rep)
• Monitoring layer: Monitoring Controller, passive and active measurements
• Messaging Layer: IPv4 / IPv6 + UDP / TCP / SCTP / ..., NAT/FW traversal

SLIDE 14

Network Monitoring Module

• A number of measurement functions are available:
  RTT, hop count, capacity and available bandwidth, loss rate, ...
• Pluggable measurement functions can be added (extensible framework)

SLIDE 15

Monitoring Platform


SLIDE 16

Simple example of Capacity and available bandwidth measurement


(Measurement traces under 4 Mb/s UDP, 7 Mb/s UDP, and 1, 2, 3, 4, 5 TCP cross-traffic flows.)

SLIDE 17

On-line QoE estimation

• A QoE estimator has been developed:
  - it estimates online the quality of the audio/video based on loss patterns and, potentially, other parameters (trained database)
  - it is based on random neural networks (Gelenbe, Rubino)

FT, LightComm, NEC WP5

SLIDE 18

Concluding experience

• Some useful parameters can easily be measured:
  - RTT, hop counts
  - useful for topology management and peer scheduling
• Some parameters are difficult to obtain:
  - available-bandwidth estimation techniques are error-prone
  - measuring the bottleneck capacity may be intrusive
• Some parameters must be carefully measured:
  - input parameters for the neural network: losses, loss burst size, delays
  - erroneous measurements at the RNN input will mislead the QoE estimation process
  - measurement accuracy is crucial for good QoE estimation

SLIDE 19

Repository

• A repository has been developed and released:
  - it stores information published by peers
  - SQL information base
  - HTTP communication interface
• It currently implements the peer repository
• The E-REP (ALTO server) is also under development (Netvisor)

SLIDE 20

Application Layer Traffic Optimization (ALTO)

SLIDE 21

ALTO in Napa-Wine Architecture

• Integration of ALTO server + client into the Napa-Wine architecture
• The ALTO client is part of the External Repository (E-Rep)
• The E-Rep can contact the ALTO server to gain network-layer information the peers cannot measure themselves
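As an illustration of how such operator-exported information could be used, here is a sketch that ranks candidate peers by an ALTO-style cost map (the map layout loosely mimics ALTO cost-map entries keyed by network location, or PID; the function name and default cost are our assumptions, not the E-Rep API):

```python
def rank_peers_by_alto_cost(candidates, cost_map, my_pid):
    """Sort candidate peers by the operator-exported routing cost from our
    own network location (PID) to theirs. `cost_map` mimics the entries an
    ALTO server would return; peers with no entry get a high default cost,
    so nearby (low-cost) peers are preferred."""
    DEFAULT_COST = 100.0
    costs = cost_map.get(my_pid, {})
    return sorted(candidates, key=lambda peer: costs.get(peer["pid"], DEFAULT_COST))

# Example: the operator says pid1->pid1 is cheapest, pid1->pid3 is costly.
cost_map = {"pid1": {"pid1": 1, "pid2": 5, "pid3": 10}}
peers = [{"id": "a", "pid": "pid3"}, {"id": "b", "pid": "pid1"}, {"id": "c", "pid": "pid2"}]
ranked = rank_peers_by_alto_cost(peers, cost_map, "pid1")
print([p["id"] for p in ranked])   # ['b', 'c', 'a']
```

This is exactly the kind of information peers cannot measure themselves: the routing cost reflects operator policy, not observable path metrics.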

SLIDE 22

Scheduling and overlay


SLIDE 23

Scheduling and Overlay modules

FT, LightComm, NEC

NAPA-WINE Second Review Meeting, Brussels, 10th May 2010

Scheduler layer

IPv4 / IPv6 + UDP / TCP / SCTP / ... Messaging Layer + NAT/FW traversal Overlay Layer Active peers’ InfoBase Chunk buffer Video Source(s) Display(s) Peer Selection Trading Logic

  • Pas. meas.

REP controller Neighbour set Topology controller

User Layer

Content Ingestion Player Control Interface

SLIDE 24

Signalling Thread

• A peer publishes the set of chunks it possesses through an offer message.
• Peers specify the chunks they are interested in with a select message.
• Once the select message is received, the chosen chunk is transmitted (over UDP).
• An ack is sent back once the chunk is received.

(Message sequence between Peer A and Peer B: OFFER → SELECT → CHUNK → ACK.)
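The four-message exchange can be sketched as an in-memory simulation (a toy model, not the WineStreamer code; the "lowest chunk id first" selection policy and the return values are our simplifications):

```python
def signalling_exchange(sender_chunks, receiver_chunks):
    """One round of the signalling thread, simulated in memory:
    OFFER -> SELECT -> CHUNK -> ACK. Returns the message transcript
    and the receiver's updated chunk set."""
    transcript = []
    # 1. sender publishes the set of chunks it possesses (OFFER)
    offer = set(sender_chunks)
    transcript.append(("OFFER", offer))
    # 2. receiver selects a chunk it is missing (SELECT);
    #    a negative select signals that nothing useful was offered
    wanted = offer - set(receiver_chunks)
    if not wanted:
        transcript.append(("NEGATIVE_SELECT", None))
        return transcript, set(receiver_chunks)
    chosen = min(wanted)                 # simplification: lowest chunk id first
    transcript.append(("SELECT", chosen))
    # 3. sender transmits the chosen chunk (over UDP in the real application)
    transcript.append(("CHUNK", chosen))
    receiver_chunks = set(receiver_chunks) | {chosen}
    # 4. receiver acknowledges reception (ACK)
    transcript.append(("ACK", chosen))
    return transcript, receiver_chunks
```

Running one round between a peer holding chunks {1, 2, 3} and a peer holding {1} produces the OFFER/SELECT/CHUNK/ACK sequence and delivers chunk 2.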

SLIDE 25

System Dynamics

(Timing diagram for peer A: offer, select, negative select, chunk transmission and acknowledgement events over time, annotated with the round-trip time RTT_AB and the chunk transmission delay D_AB.)

SLIDE 26

Congestion Control is needed

The number of parallel signalling threads N_A must match the peer's upload capacity.

• If N_A is too small:
  - the peer's upload bandwidth is not exploited at best;
  - the transmission queue empties quickly;
  - long periods of inactivity.
• If N_A is too large:
  - the transmission queue becomes too long;
  - large delivery delays and, possibly, losses.

→ Exploit the upload bandwidth and maintain short queues to limit the delivery delay!
→ The optimal setup depends on the network scenario, which is unpredictable.

SLIDE 27

Hose Rate Control

The algorithm runs every time an ACK is received:

1. D = t_rx,ack − t_rx,sel − RTT_AB
2. W_A(n) = W_A(n−1) − α·(D − D0)
3. ΔN_A = floor(W_A(n)) − floor(W_A(n−1))
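The three steps transcribe directly into Python (the mapping of the slide's symbols to arguments, and the example values of the gain α and target delay D0, are ours):

```python
import math

def hrc_on_ack(w_prev, t_sel, t_ack, rtt_ab, alpha=0.1, d0=0.150):
    """Hose Rate Control update, run every time an ACK is received:
      1. D    = t_rx,ack - t_rx,sel - RTT_AB   (queueing-delay estimate)
      2. W(n) = W(n-1) - alpha * (D - D0)      (window update)
      3. dN   = floor(W(n)) - floor(W(n-1))    (signalling threads to add/remove)
    Returns the new window W(n) and the integer thread adjustment dN."""
    d = t_ack - t_sel - rtt_ab
    w = w_prev - alpha * (d - d0)
    delta_n = math.floor(w) - math.floor(w_prev)
    return w, delta_n
```

When the measured delay D exceeds the target D0 the window shrinks (threads are removed, draining the queue); when D is below D0 it grows, putting more upload bandwidth to work.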


SLIDE 28

HRC Performance

Queue delay (D), number of active signalling threads (N_A) and throughput evolution over time adopting HRC (ρ = 0.9, D0 = 150 ms).

(Plots for the 4 Mb/s, 4 Mb/s TCP, and 1 Mb/s scenarios.)

SLIDE 29

Logical topology

• The logical topology is a directed graph: every node chooses its K in-neighbors (parents).
• It can be built either by exploiting repository information or by gossiping mechanisms (Newscast).
• Every T seconds, peer p updates the list of in-neighbors NI(p). At every update, NI(p) is the result of two separate filtering functions:
  - one that selects the peers to drop,
  - another one selecting the parents to add.

SLIDE 30

Logical Topology (cont.)

• For these filtering functions we consider network-layer metrics:
  - peer upload bandwidth,
  - path RTT, or
  - path packet loss rate,
• and some application-layer metrics:
  - the peer offer rate,
  - the number of chunks received from a parent.

A sufficient degree of randomness must be guaranteed!

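A sketch of the periodic NI(p) update with pluggable drop/add filters and an explicit random component (the function names, the shuffle-based mixing, and the example RTT thresholds are our illustration, not the project's actual algorithm):

```python
import random

def update_in_neighbours(current, candidates, k, drop_filter, add_filter):
    """Periodic (every T seconds) update of the in-neighbour list NI(p):
    one filtering function selects parents to drop, another selects
    parents to add; shuffling the candidate pool keeps a degree of
    randomness that preserves good overlay graph properties."""
    kept = [p for p in current if not drop_filter(p)]
    pool = [p for p in candidates if p not in kept and add_filter(p)]
    random.shuffle(pool)     # randomness guards against deterministic clustering
    return (kept + pool)[:k]

# Example filters based on one network-layer metric (path RTT):
drop_slow = lambda p: p["rtt"] > 0.3    # drop parents with high RTT
add_fast = lambda p: p["rtt"] < 0.3     # add only low-RTT parents
```

With purely deterministic filters every peer would converge on the same "best" parents; the random shuffle is what keeps the overlay well mixed, echoing the warning above.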

SLIDE 31

Performance


SLIDE 32

Examples of topologies

(Topology snapshots for the RTT-RTT, RTT-Random and Random-Random filter combinations.)

SLIDE 33

Scientific Conclusions

• In most scenarios it is possible to localize the traffic without endangering the perceived QoE.
• Being too extreme in localizing traffic may cause degradation of the QoE.
• Nevertheless, there are margins within which traffic can be localized without degrading QoE.
• In several cases a smart localization strategy can even slightly improve the application performance.

SLIDE 34

Scientific Conclusions (cont.)

• Localization is more effective if the application can exploit cost metrics exported by the operators through the ALTO interface, which has been standardized within the IETF and of which NAPA-WINE is one of the principal contributors.

SLIDE 35

Scientific Conclusions (cont.)

• Continuous monitoring of the network status can greatly improve the ability to detect anomalies and to promptly react to them.
• Network monitoring can easily be achieved by embedding a distributed measurement platform within the application (as done in Winestreamer).

SLIDE 36

Scientific Conclusions (cont.)

• To achieve good performance, the distributed algorithms for the design and maintenance of the overlay topology must guarantee a sufficient degree of randomness.
• Local selection of neighboring peers according to deterministic rules can result in an overall overlay topology with bad graph properties.

SLIDE 37

Scientific Conclusions (Cnt)

 Information about the upload bandwidth of

peers can be effectively exploited to design algorithms for the design and maintenance of the overlay topology and chunk scheduling that optimize the system performance.

51

SLIDE 38

Scientific Conclusions (cont.)

• UDP is preferable to TCP as the transport protocol, since it significantly reduces the chunk transfer times.
• Peers must be supplied with a simple application-level rate control mechanism to avoid bandwidth wastage and congestion.

SLIDE 39

Winestreamer/Peerstreamer

• Available at http://peerstreamer.org

NAPA-WINE Final Review Meeting Paris, 4 July 2011

SLIDE 40

THE END

Thank you! Questions? Comments?