Choosing Middleware: Why performance and scalability do (and don't) matter - PowerPoint PPT Presentation




SLIDE 1

Choosing Middleware:

Why performance and scalability do (and don’t) matter

Michi Henning, Chief Scientist, ZeroC, Inc.

SLIDE 2

Middleware is (almost) forever

 Committing to middleware represents a major investment
 Product life cycle typically several years to a decade long (at least)

 Once you choose middleware, you are stuck with it
 Standardization does not help:

  • CORBA is the most standardized middleware to date
  • Despite standardization, changing ORBs was pragmatically impossible

 You had better exercise due diligence

SLIDE 3

Choose on features

[Chart: checkmark matrix of Features 1-5 against Middleware 1-4, each middleware ticking a different subset of features]

The one with the most features must be best, right?

SLIDE 4

Choose on features

 More features is not necessarily better
 Unless the middleware is very carefully designed:
  • A larger memory footprint at run time leads to fewer cache hits and reduced performance
  • More features almost always mean a steeper learning curve: you almost always pay for what you do not use
  • All the features in the world do not help if the one feature you really need is missing

SLIDE 5

Choose on price

The one that’s cheapest must be best, right?

 Up-front licensing cost is typically 2-3% of total cost over the product life cycle
 Development cost usually eclipses licensing cost by an order of magnitude
 A 5% difference in programmer productivity is likely to swamp any price difference
 A 5% difference in defect rate and MTBF is certain to swamp any price difference

SLIDE 6

Choose on performance

The one that goes fastest must be best, right?

 The biggest fallacy in the history of middleware
  • Performance only matters if you need it!
  • Many performance gains are due to selective accounting
  • Benchmarks rarely tell the whole story
  • Most non-experts are incapable of creating valid benchmarks and evaluating them correctly

SLIDE 7

Real evaluation criteria

 How steep is the learning curve?
 What is the quality of the APIs?

  • Type-safe?
  • Thread-safe?
  • Exception-safe?
  • Memory management?

 What is the quality of the documentation?

  • Well indexed?
  • Online and searchable?
  • Non-trivial examples?
  • Real-world example programs with real-world error handling?
SLIDE 8

Real evaluation criteria

 High-quality professional training?
 How reliable is the software?

  • Defect rate?
  • MTBF?

 Quality of support?

  • 24x7?
  • Guaranteed response time?
  • Who provides the support?
  • Consulting services?
  • Vibrant developer community?
SLIDE 9

Real evaluation criteria

 What operating systems and compilers are supported?
 What languages are supported?
 Source code available?
 How complete is the relevant feature set?
 What are the licensing conditions?

  • Up-front one-time buy out?
  • Royalty based?
  • GPL?
SLIDE 10

When performance doesn’t matter

 Performance no longer matters when the middleware is no longer noticeable
 Middleware is no longer noticeable if:
  • Overall time spent in the middleware is < 5%
  • Overall memory footprint is < 5%
  • Realized bandwidth is within 5% of the theoretical maximum
 For many applications, even bad middleware meets these criteria!

SLIDE 11

When performance does matter

 Everyone’s first benchmark:

interface I { void ping(); };

 Measure round-trip time over loopback to establish messaging rate compared to native sockets

Round trips/sec:

             Loopback   Network
WCF (SOAP)      2,500       570
WCF (bin)       7,500     2,200
Ice (.NET)      7,500     2,300
RMI            10,600     2,300
Ice (Java)      8,000     2,300
Ice (C++)      10,500     2,300
Sockets        18,000     2,300
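The ping test above is easy to reproduce at the socket level. The sketch below is a minimal stand-in (plain Python over TCP loopback, not any particular middleware) for measuring round-trip messaging rate; absolute numbers depend entirely on OS and hardware.

```python
# Minimal "everyone's first benchmark": round-trip time of a tiny
# message over loopback with raw TCP sockets, as a baseline rate.
import socket
import threading
import time

N = 2000  # round trips to measure

def echo_server(srv: socket.socket) -> None:
    conn, _ = srv.accept()
    with conn:
        while True:
            b = conn.recv(1)
            if not b:
                break
            conn.sendall(b)  # echo the single "ping" byte back

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket()
cli.connect(("127.0.0.1", port))
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching

start = time.perf_counter()
for _ in range(N):
    cli.sendall(b"p")  # "void ping()" stand-in: 1-byte request
    cli.recv(1)        # wait for the 1-byte reply (full round trip)
elapsed = time.perf_counter() - start
cli.close()

rate = N / elapsed
print(f"{rate:.0f} round trips/sec, {elapsed / N * 1e6:.1f} us each")
```

Because the client waits for each reply before sending the next request, the loop measures latency, not throughput; this is exactly why network numbers in the table collapse to the wire's round-trip time regardless of middleware.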

SLIDE 12

When performance does matter

 Everyone’s second benchmark:

typedef sequence<byte> ByteSeq;
interface I { void send(ByteSeq bs); };

 Measure over loopback to establish throughput compared to native sockets (Gbit/sec)

Throughput (Gbit/sec):

             Loopback   Network
WCF (SOAP)      0.15      0.14
WCF (bin)       0.52      0.43
Ice (.NET)      0.63      0.52
RMI             0.83      0.28
Ice (Java)      0.83      0.66
Ice (C++)       1.2       0.75
Sockets         1.3       0.78
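The send test can likewise be sketched with raw sockets: stream large byte sequences over loopback and compute realized Gbit/sec. Again this is a stand-in, not any middleware's actual marshalling path; sizes and counts are arbitrary choices for a quick run.

```python
# "Everyone's second benchmark": throughput of large byte sequences
# over a loopback TCP connection, reported in Gbit/sec.
import socket
import threading
import time

CHUNK = 1 << 20   # 1 MB "byte sequence" per send()
COUNT = 50        # 50 MB transferred in total

def sink(srv: socket.socket) -> None:
    conn, _ = srv.accept()
    with conn:
        got = 0
        while got < CHUNK * COUNT:
            b = conn.recv(1 << 16)
            if not b:
                break
            got += len(b)  # drain bytes until the full payload arrived

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=sink, args=(srv,), daemon=True)
t.start()

cli = socket.socket()
cli.connect(("127.0.0.1", port))
payload = bytes(CHUNK)

start = time.perf_counter()
for _ in range(COUNT):
    cli.sendall(payload)
cli.close()
t.join()  # only stop the clock once the receiver has drained it all
elapsed = time.perf_counter() - start

gbit = CHUNK * COUNT * 8 / elapsed / 1e9
print(f"{gbit:.2f} Gbit/sec over loopback")
```

Note that the clock stops only after the receiver has consumed everything; stopping after the last `sendall` would credit the sender with data still sitting in kernel buffers, one of the "selective accounting" traps mentioned earlier.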

SLIDE 13

When performance does matter

 Everyone’s forgotten benchmark:

typedef sequence<byte> ByteSeq;
interface I { ByteSeq receive(); };

 Receive is anything between 20% and 300% slower than send
 The same middleware, compiled from the same source code, shows a 20% difference on one OS and 300% on another
 The same middleware, compiled from the same source code, can be faster over the network than over loopback

SLIDE 14

When performance does matter

 Data effects
 The type of the data is important:
  • A byte sequence performs very differently from a structure sequence
  • A structure sequence with fixed-length members performs very differently from one with variable-length members

SLIDE 15

When performance does matter

struct Fixed { int i; int j; double d; };
sequence<Fixed> FixedSeq;
interface Throughput {
    void send(FixedSeq s);
    FixedSeq recv();
};

struct Var { string s; double d; };
sequence<Var> VarSeq;
interface Throughput {
    void send(VarSeq s);
    VarSeq recv();
};

Performance for these tests differs dramatically!
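The fixed-versus-variable cost is easy to see with a hand-rolled encoding. The sketch below uses Python's `struct` module as a stand-in for a binary wire protocol (it is not Ice's actual encoding): the fixed record always marshals to the same 16 bytes, while the variable record needs a length prefix and per-element work.

```python
import struct

# Fixed-length record {int i; int j; double d;}: always 16 bytes
# (4 + 4 + 8), so a whole sequence can be sized with one multiply
# and marshalled with a bulk copy.
fixed_fmt = struct.Struct("<iid")
fixed = fixed_fmt.pack(1, 2, 3.14)

# Variable-length record {string s; double d;}: the string needs a
# length prefix, every element has a different wire size, and the
# marshaller must walk the data element by element.
def pack_var(s: str, d: float) -> bytes:
    raw = s.encode("utf-8")
    return struct.pack("<I", len(raw)) + raw + struct.pack("<d", d)

print(len(fixed))                          # 16, always
print(len(pack_var("hi", 3.14)))           # 4 + 2 + 8 = 14
print(len(pack_var("hello world", 3.14)))  # 4 + 11 + 8 = 23
```

The dramatic throughput gap on the slide follows directly: fixed sequences allow one-shot buffer copies, while variable sequences force a per-element encode/decode loop.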

SLIDE 16

Data Effects

 The values in the data make a difference
 Wire size for Fixed structures with small values versus random values:

Request size    Ice for .NET     WCF (binary)     WCF (SOAP)
Small values    800,057 bytes    601,538 bytes    2,750,781 bytes
Random values   800,057 bytes    1,167,968 bytes  3,976,802 bytes

SLIDE 17

Data Effects

struct Fixed { int i; int j; double d; };
sequence<Fixed> FixedSeq;
interface Throughput {
    void send(FixedSeq s);
    FixedSeq recv();
};

struct Fixed { int int1Member; int int2Member; double doubleMember; };
sequence<Fixed> FixedSeq;
interface Throughput {
    void send(FixedSeq s);
    FixedSeq recv();
};

Request size    Ice              WCF (binary)     WCF (SOAP)
Short names     800,057 bytes    601,538 bytes    2,750,781 bytes
Long names      800,057 bytes    600,488 bytes    3,404,862 bytes
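The member-name effect can be reproduced with any self-describing encoding. The sketch below uses JSON as a stand-in for SOAP-style text encodings and positional `struct` packing as a stand-in for Ice-style binary; neither is the actual wire format, but the mechanism is the same.

```python
import json
import struct

short = {"i": 1, "j": 2, "d": 3.14}
long_ = {"int1Member": 1, "int2Member": 2, "doubleMember": 3.14}

# A positional binary encoding never sends member names: both records
# cost exactly 16 bytes regardless of how the members are named.
binary = struct.pack("<iid", 1, 2, 3.14)
print(len(binary))  # 16

# A self-describing encoding repeats every member name in every
# record, so longer identifiers mean a larger request -- the
# SOAP-column effect in the table above.
print(len(json.dumps(short, separators=(",", ":"))))
print(len(json.dumps(long_, separators=(",", ":"))))
```

Multiply the per-record difference by a sequence of 100,000 elements and the megabytes of extra request size in the table stop being surprising.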

SLIDE 18

Platform effects

 Same middleware (Ice), same source, same test, same hardware
 Different OS: Vista versus OS X

         Java (fixed seq)   Java (byte seq)   C++ (fixed seq)   C++ (byte seq)
OS X     145 Mbit/s         2,500 Mbit/s      740 Mbit/s        2,850 Mbit/s
Vista    620 Mbit/s         1,250 Mbit/s      900 Mbit/s        1,800 Mbit/s

SLIDE 19

Performance or Scalability?

 Most applications do not have high performance requirements
  • Performance for an individual client is not much of a concern
 But: many applications have high scalability requirements
  • If you have an application with thousands of clients, how many servers and how much bandwidth do you need?

SLIDE 20

Performance or Scalability?

 High scalability requires high performance
 But:
  • High performance at small scale in no way guarantees high performance at large scale
 Factors:
  • Quality of implementation
  • APIs designed for scale
  • Threading models
  • Connection management
  • Use of OS resources (file descriptors, timers, memory garbage or fragmentation)
SLIDE 21

Scalability Scenario

 Stateful interaction of few servers with many clients

 Examples:

  • Twitter
  • Online shopping cart
  • Secure communication
  • Instant messaging
  • Stock ticker
  • Remote monitoring
SLIDE 22

Scalability Test

 One server, one object, one method that does nothing
 Lots of clients connected to the server (one connection per client)
 Clients each send one message every four seconds
 A message is deemed successful if it completes within 1 second
 How many clients can the server support?
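The test setup above can be sketched in miniature. The harness below is a scaled-down stand-in (asyncio, 50 clients, intervals and deadline shortened so it runs in a few seconds); a real run would use tens of thousands of OS-level connections, which is exactly where the middleware differences appear.

```python
import asyncio
import time

INTERVAL = 0.1   # scaled down from the 4 s inter-message gap
DEADLINE = 0.05  # scaled down from the 1 s success deadline
CLIENTS = 50     # a real test uses tens of thousands
ROUNDS = 3

async def handle(reader, writer):
    # The "one method that does nothing": echo each 1-byte request.
    try:
        while data := await reader.read(1):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def client(port, results):
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    for _ in range(ROUNDS):
        t0 = time.perf_counter()
        writer.write(b"x")
        await writer.drain()
        try:
            await asyncio.wait_for(reader.read(1), timeout=DEADLINE)
            results.append(time.perf_counter() - t0 <= DEADLINE)
        except asyncio.TimeoutError:
            results.append(False)  # missed the deadline: failure
        await asyncio.sleep(INTERVAL)
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    results = []
    await asyncio.gather(*(client(port, results) for _ in range(CLIENTS)))
    server.close()
    ok = sum(results)
    print(f"{ok}/{len(results)} requests met the deadline")
    return ok, len(results)

ok, total = asyncio.run(main())
```

To reproduce the slide's results you would raise CLIENTS until the success ratio drops, while watching the server's CPU, working set, and virtual memory, since it is those resources (not per-request latency) that kill the weaker runtimes.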

SLIDE 23

Scalability Test (Java)

                 Ice for C++   Ice for Java   RMI
Requests/sec     20,000        20,000         7,500
# connections    80,000        80,000         30,000
CPU usage        15%           25%            30%
Working set      50MB          810MB          3GB
Virtual memory   2.5GB         1.4GB          37GB

 RMI fails catastrophically beyond 30,000 clients due to memory starvation
 Ice for Java continues to scale, but requests take more than 1 second due to GC passes
 Ice for C++ scales well beyond 120,000 clients

SLIDE 24

Scalability Test (.NET)

                Ice for .NET   WCF (binary)   WCF (SOAP)
Requests/sec    7,250          4,850          8,000
# connections   29,000         19,500         32,000
CPU usage       33%            90%            40%

 The WCF binary encoder consumes an inordinate amount of CPU
 The WCF SOAP encoder scales better than expected, but fails on the client side

SLIDE 25

CPU Consumption

Fixed-length seq.   Ice for .NET   WCF (binary)   WCF (SOAP)
CPU (send)          1              68             84
CPU (receive)       1              62             236

CPU consumption for Fixed throughput test

 CPU and bandwidth requirements make it unlikely that WCF will scale

SLIDE 26

Bandwidth and hardware are cheap?

 Server has a 50Mbit/sec link to the Internet
 Clients request 50kB of data
 1Mbit/sec works out to roughly 100kB/sec of usable data
 ⇒ Server can handle 100 concurrent clients
 If SOAP adds only a factor of 10 in size, the same server can handle 10 concurrent clients
 You will need a 500Mbit/sec link instead ($$$)
 A single server will probably not handle 500Mbit/sec. You will need several servers ($$$)
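The capacity arithmetic above, spelled out as code. The 1 Mbit/s ≈ 100 kB/s figure is a rule of thumb assumed here (125 kB/s raw, minus protocol overhead), as is treating "concurrent clients" as one 50 kB request per client per second.

```python
# Back-of-the-envelope server capacity under an encoding size penalty.
LINK_MBIT = 50      # server uplink to the Internet
KB_PER_MBIT = 100   # usable kB/s per Mbit/s (rule-of-thumb assumption)
REQUEST_KB = 50     # data each client pulls per second (assumption)
SOAP_BLOWUP = 10    # pessimistic wire-size factor for SOAP encoding

usable_kb_per_s = LINK_MBIT * KB_PER_MBIT            # 5,000 kB/s
clients = usable_kb_per_s // REQUEST_KB
clients_soap = usable_kb_per_s // (REQUEST_KB * SOAP_BLOWUP)
link_needed = LINK_MBIT * SOAP_BLOWUP                # restore capacity

print(clients)       # 100 concurrent clients with a compact encoding
print(clients_soap)  # 10 clients once the 10x penalty applies
print(link_needed)   # 500 Mbit/s link needed to get back to 100
```

The point is that encoding efficiency multiplies directly into link cost and server count, which is why "bandwidth is cheap" does not rescue a verbose protocol.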

SLIDE 27

Lessons learned

 Benchmarking is hard
 Creating relevant benchmarks is harder
 Interpreting the results is harder still
 Small-scale results do not indicate large-scale performance
 Chances are that what you measure will be irrelevant, because the problem that will kill your application is one you never thought of

SLIDE 28

How to choose

 Performance matters only if you need it
 Performance benchmarks matter only if they measure what the application will do as closely as possible
 If you don’t need to scale, any middleware will do
 If you need to scale in the future, SOAP or RMI definitely won’t do

SLIDE 29

How to choose

 The prime criteria (other than performance):
  • The devil you know is better than the devil you don’t know
  • Ease of use
  • Do you know the future?
 DDE, OLE, OLE Automation, COM, ActiveX, DCOM, COM+, .NET Remoting, WCF
 What next?
  • All the world is not Java, or Windows, or WS, or…
SLIDE 30

How to choose

 Infrastructure services

  • Event distribution
  • Persistence
  • Location transparency
  • Fault-tolerance
  • Federation and load balancing
  • Deployment and administration
  • Firewall traversal

 Can you afford to build all of this yourself?

SLIDE 31

Summary

 Performance is only one of many criteria
 But: performance does matter because, without it, things don’t scale

 Middleware is in a sorry state:

  • CORBA is dead
  • RMI is a toy
  • WCF is Windows only
  • SOAP, WS, and REST resource consumption is staggering

 Watch this space: the web is only one step of many along the road to distributed computing