SLIDE 1

Performance and Fairness Evaluation of IW10 and Other Fast Startup Schemes

Michael Scharf <michael.scharf@googlemail.com>
This work was performed at the Institute of Communication Networks and Computer Engineering (IKR) at the University of Stuttgart.

March 2011

SLIDE 2

Disclaimer
  • Individual contribution
  • Report of older work from 2008 and 2009
    – Original intention was to show that a network-supported scheme like Quick-Start is indeed required
    – IW10 was considered as an alternative (called Initial-Start)
    – Quite surprisingly, IW10 outperformed all other variants

First, preliminary results:

  • M. Scharf. Quick-Start, Jump-Start, and other fast startup approaches: Implementation issues and performance. Presentation at 73rd IETF Meeting, ICCRG, Nov. 2008

Full reference for this work:

  • M. Scharf. Fast Startup Internet Congestion Control for Broadband Interactive Applications. PhD thesis, University of Stuttgart, submitted Nov. 2009


SLIDE 3

[Figure: taxonomy of flow startup approaches. End-to-end congestion control (implicit feedback): standard Slow-Start (Reno, CUBIC, ...), enhanced Slow-Start using rate pacing or bandwidth estimation (Paced Start, Hybrid SS, ...), and optimistic fast startup with a larger window and no burstiness control (Jump-Start, Mega-Start, Initial-Start, Limited SS, ...). Network-supported congestion control: network assistance with sporadic feedback (Quick-Start, Swift-Start, ...) and network control with frequent explicit feedback (eXplicit Control Protocol (XCP), Rate Control Protocol (RCP), ...).]

Fast startup congestion control
Scope of the study

  • TCP's standard Slow-Start with CUBIC (SS)
  • Initial congestion window of 10 MSS, called Initial-Start (IS)
  • Jump-Start of M. Allman et al., slightly modified to reduce aggressiveness (JS)
  • Quick-Start TCP extension according to RFC 4782 (QS)
  • Rate Control Protocol (RCP)
  • … and others


SLIDE 4

Fast startup congestion control
Evaluation methodology

Simulations
  – Simulation with Linux code using the NSC framework
  – Own Linux patches for all TCP extensions, and a self-developed tool for RCP

Considered scenarios
  – Subset of the TCP evaluation suite
  – Dumbbell topology with 450 endsystems and 9 different RTTs
  – Bottleneck typically 10 Mbit/s, 50 packets buffer, drop tail
  – Replay of measured Internet traces in a-b-t format, as recommended in the TCP evaluation suite

Implementations verified by testbed measurements



[Figure: dumbbell topology. 9 groups of endsystems, each group connected via a 1 Gbit/s access link with a group-specific delay, giving group RTTs of 4, 28, 54, 74, 98, 124, 150, 174, and 200 ms; TCP connections share a central 10 Mbit/s bottleneck with limited buffer size.]
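As a rough aid for interpreting the topology (my own back-of-envelope calculation, not from the slides), the bandwidth-delay product (BDP) per RTT group shows how the typical 50-packet bottleneck buffer compares to the pipe size:

```python
# Back-of-envelope sketch (assumption, not from the slides): per-group
# bandwidth-delay product of the 10 Mbit/s bottleneck, in full-sized packets.
RATE_BPS = 10_000_000            # bottleneck rate: 10 Mbit/s
PKT_BITS = 1500 * 8              # one full-sized 1500 B packet
RTTS_MS = [4, 28, 54, 74, 98, 124, 150, 174, 200]   # the 9 group RTTs

for rtt in RTTS_MS:
    bdp_pkts = RATE_BPS * (rtt / 1000) / PKT_BITS
    print(f"RTT {rtt:3d} ms -> BDP = {bdp_pkts:5.1f} packets")
```

For the larger RTT groups the BDP clearly exceeds a 50-packet drop-tail buffer, which is relevant for the buffer-sensitivity results on slide 8.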

SLIDE 5

Selected performance results
Possible speedup of the different variants

  • Simulation with Linux 2.6.18
  • Dumbbell topology with 10 Mbit/s bottleneck and 9 different RTTs
  • 450 clients and 450 servers
  • Default TCP configuration, except for larger buffer sizes (8 MiB)
  • Replayed traces in a-b-t format
  • Mean downlink load 35%
  • Metric: Epoch duration


  • Performance metric: Response time of a-b-t transfers (“epoch duration”)
  • Speedup of mid-sized transfers by larger initial window
  • Overall benefit is rather small: Many short transfers, many small RTTs

[Figure: example a-b-t epoch consisting of two transactions with three request/response exchanges (message sizes between 329 B and 25,821 B) and think times of 0.12 s and 3.12 s; the epoch duration spans the whole exchange.]

SLIDE 6

Selected performance results
Insight into the workload

  • Simulation with Linux 2.6.18
  • Dumbbell topology with 10 Mbit/s bottleneck and 9 different RTTs
  • 450 clients and 450 servers
  • Default TCP configuration, except for larger buffer sizes (8 MiB)
  • Replayed traces in a-b-t format
  • Mean downlink load 35%
  • Metric: Epoch duration


  • Most TCP connections are rather short in the workload traces
  • Only transfers larger than 10 KB can benefit
  • Average improvement less than 1 s even for larger transfers

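To make the size threshold concrete, here is a small sketch (my own illustration, not code from the study, assuming idealized loss-free slow start and a 1460 B MSS) of how many round trips a transfer needs with IW3 versus IW10:

```python
import math

def slowstart_rounds(segments: int, iw: int) -> int:
    """Round trips to deliver `segments` full-sized segments with initial
    window `iw`, under ideal slow start (window doubles each RTT, no
    losses, no receiver-window limit)."""
    sent, cwnd, rounds = 0, iw, 0
    while sent < segments:
        sent += cwnd
        cwnd *= 2
        rounds += 1
    return rounds

MSS = 1460
for size in (5_000, 10_000, 25_821, 100_000):   # transfer sizes in bytes
    segs = math.ceil(size / MSS)
    print(size, slowstart_rounds(segs, 3), slowstart_rounds(segs, 10))
```

Mid-sized transfers save only one or two round trips with the larger window, which matches the observation that the overall benefit is modest and concentrated on transfers above roughly 10 KB.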

SLIDE 7

Selected performance results
Trade-off between speedup and packet loss

  • IW10 increases loss probability by 0.5%
  • Other considered schemes are not faster, but have a larger loss rate
  • Result: IW10 outperforms other schemes
  • Simulation with Linux 2.6.18
  • Dumbbell topology with 10 Mbit/s bottleneck and 9 different RTTs
  • 450 clients and 450 servers
  • Default TCP configuration, except for larger buffer sizes (8 MiB)
  • Replayed traces in a-b-t format
  • Variable load up to ca. 40% (due to tool limitation to ca. 1000 stacks)


SLIDE 8

Selected performance results
Sensitivity to bottleneck buffer size

  • Obviously, small buffers (<50 packets) are a problem
  • Fast startups only moderately increase the packet loss rate if reasonably sized buffers (50-100 packets) or AQM are present
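A crude way to see why buffer size matters (my own worst-case arithmetic, not from the slides): with drop-tail queuing and unpaced initial bursts, a buffer of B packets can absorb only B/IW simultaneous flow startups if nothing drains in the meantime:

```python
def max_simultaneous_starts(buffer_pkts: int, iw: int) -> int:
    """Worst case: how many flows can start at once, each sending an
    unpaced back-to-back burst of `iw` packets, before a drop-tail
    buffer of `buffer_pkts` packets overflows (no draining assumed)."""
    return buffer_pkts // iw

print(max_simultaneous_starts(50, 10))    # 50-packet buffer
print(max_simultaneous_starts(100, 10))   # 100-packet buffer
```

In practice the bottleneck drains during the burst and startups rarely align perfectly, so this is pessimistic, but it illustrates why buffers below 50 packets are problematic for IW10.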

  • Simulation with Linux 2.6.18
  • Dumbbell topology with 10 Mbit/s

bottleneck and 9 different RTTs

  • 450 clients and 450 servers
  • Default TCP configuration, except

for larger buffer sizes (8 MiB)

  • Replayed traces in a-b-t format
  • Mean downlink load 35%


SLIDE 9

Selected performance results
Fairness to unmodified stacks

  • Scenario: 50% of stacks use fast startup, 50% unchanged (CUBIC)
  • IW10 is rather fair and hardly impacts other flows
  • Result: IW10 outperforms other schemes
  • Simulation with Linux 2.6.18
  • Dumbbell topology with 10 Mbit/s bottleneck and 9 different RTTs
  • 450 clients and 450 servers, 50% CUBIC, 50% fast startup
  • Default TCP configuration, except for larger buffer sizes (8 MiB)
  • Synthetic workload model for HTTP/1.0, response sizes from a truncated Pareto distribution with mean 14 KB, shape parameter 1.1, truncation at 10 MB
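The response-size model can be reproduced roughly as follows (a sketch under my own assumptions: the scale parameter is derived so the *untruncated* Pareto mean is 14 KB, and truncation is implemented by clipping at 10 MB, which pulls the effective mean slightly lower):

```python
import random

def sample_response_kb(mean_kb: float = 14.0, shape: float = 1.1,
                       cap_kb: float = 10 * 1024) -> float:
    """Draw one HTTP response size (in KB) from a Pareto distribution,
    truncated by clipping at `cap_kb`.  For shape > 1 the untruncated
    Pareto mean is shape * scale / (shape - 1), so the scale is chosen
    to hit the requested mean before truncation."""
    scale = mean_kb * (shape - 1) / shape
    u = random.random()
    return min(scale / (1.0 - u) ** (1.0 / shape), cap_kb)

random.seed(1)
sizes = [sample_response_kb() for _ in range(5)]
print([round(s, 1) for s in sizes])
```

With shape 1.1 the distribution is extremely heavy-tailed, so most responses are only a few KB while rare ones hit the 10 MB cap, mimicking typical web traffic.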


SLIDE 10

Conclusion

Results
  • Moderate benefit of fast startups for larger transfers
  • IW10 works rather well and is quite fair
  • More sophisticated schemes tend to be worse
  • Network support such as Quick-Start can overcome some limitations, but it has problems of its own

Recommendations for further work
  • Study more extensively the use of rate pacing, even if the results suggest that it may not be needed for 10 MSS
  • Rethink error recovery algorithms after fast startup, since there are many degrees of freedom there, too


SLIDE 11

Selected references

Evaluation results of IW10 (amongst others)
  • M. Scharf. Comparison of end-to-end and network-supported fast startup congestion control schemes. Computer Networks, 2011
  • M. Scharf. Fast Startup Internet Congestion Control for Broadband Interactive Applications. PhD thesis, University of Stuttgart, submitted Nov. 2009
  • M. Scharf. Performance evaluation of fast startup congestion control schemes. Proc. IFIP Networking 2009, LNCS 5550, Springer-Verlag, pp. 716-727, 2009
  • M. Scharf. Quick-Start, Jump-Start, and other fast startup approaches: Implementation issues and performance. Presentation at 73rd IETF Meeting, ICCRG, Nov. 2008

Studies of network-supported fast startup congestion control schemes
  • S. Hauger, M. Scharf, J. Kögel, and C. Suriyajan. Evaluation of router implementations of explicit congestion control schemes. Journal of Communications, vol. 5, no. 3, pp. 197-204, 2010
  • M. Scharf, M. Eissele, C. Mueller, and T. Ertl. Speeding up the 3D Web: A case for fast startup congestion control. Proc. PFLDNeT, 2009
  • M. Proebster, M. Scharf, and S. Hauger. Performance comparison of router assisted congestion control protocols: XCP vs. RCP. Proc. 2nd International Workshop on the Evaluation of Quality of Service through Simulation in the Future Internet, 2009
  • M. Scharf and H. Strotbek. Performance evaluation of Quick-Start TCP with a Linux kernel implementation. Proc. IFIP Networking 2008, LNCS 4982, Springer-Verlag, pp. 703-714, 2008
  • S. Hauger, M. Scharf, J. Kögel, and C. Suriyajan. Quick-Start and XCP on a network processor: Implementation issues and performance evaluation. Proc. IEEE HPSR 2008, 2008
  • M. Scharf, S. Hauger, and J. Kögel. Quick-Start TCP: From theory to practice. Proc. PFLDnet 2008, 2008
  • M. Scharf. Performance analysis of the Quick-Start TCP extension. Proc. IEEE Broadnets, 2007
