Concurrent Counting is harder than Queuing (PowerPoint presentation)

Costas Busch, Rensselaer Polytechnic Institute
Srikanta Tirthapura, Iowa State University


Slide 1

Concurrent Counting is harder than Queuing

Costas Busch

Rensselaer Polytechnic Institute

Srikanta Tirthapura

Iowa State University

Slide 2

Arbitrary graph

Slide 3

Distributed Counting: some processors request a counter value.

Slide 4

Distributed Counting, final state: the requesting processors receive the distinct values 1, 2, 3, 4.

Slide 5

Distributed Queuing: some processors perform enqueue operations: Enq(A), Enq(B), Enq(D), Enq(C).

Slide 6

Distributed Queuing, final state: the enqueued elements A, B, C, D form a total order from head to tail, and each element learns its predecessor (the head's Previous is nil).

Slide 7

Applications

Counting:

  • parallel scientific applications
  • load balancing (counting networks)

Queuing:

  • distributed directories for mobile objects
  • distributed mutual exclusion

Slide 8

Ordered Multicast: multicast with the condition that all messages are received at all nodes in the same order. Either queuing or counting will do; which is more efficient?

Slide 9

Queuing vs Counting?

Both provide total orders.

  • Queuing = finding one's predecessor; needs local knowledge.
  • Counting = finding one's rank; needs global knowledge.

Slide 10

Problem

Is there a formal sense in which counting is a harder problem than queuing? Reductions don't seem to help.

Slide 11

Our Result

Concurrent counting is harder than concurrent queuing on a variety of graphs, including many common interconnection topologies: the complete graph, the mesh, the hypercube, and perfect binary trees.

Slide 12

Model

Synchronous system G = (V, E):

  • edges have unit delay

Congestion: each node can process only one message in a single time step.

Concurrent one-shot scenario: a set R ⊆ V of nodes issue queuing (or counting) operations at time zero; no operations are added later.

Slide 13

Cost Model

C_Q(v): the delay till v gets back its queuing result.

Cost of algorithm A on request set R: C_Q(A, R) = Σ_{v ∈ R} C_Q(v).

Queuing Complexity = min_A max_{R ⊆ V} C_Q(A, R).

Counting Complexity is defined similarly.
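As a toy illustration of this cost model (all numbers here are hypothetical, not from the talk): the cost of one run sums the per-requester delays, and the complexity takes the worst request set under the best algorithm.

```python
# Toy illustration of the cost model; all numbers here are hypothetical.

def run_cost(delays):
    """C_Q(A, R): sum of the per-node delays C_Q(v) over the requesters v in R."""
    return sum(delays.values())

# One hypothetical run: three requesters answered after 1, 2 and 4 steps.
cost = run_cost({"u": 1, "v": 2, "w": 4})   # 1 + 2 + 4 = 7

# Queuing Complexity = min over algorithms A of max over request sets R.
# costs[A][R] below are made-up values of C_Q(A, R) for two algorithms
# and two request sets.
costs = {
    "A1": {"R1": 7, "R2": 9},   # worst case 9
    "A2": {"R1": 8, "R2": 8},   # worst case 8
}
complexity = min(max(per_set.values()) for per_set in costs.values())  # 8
```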

Slide 14

Lower Bounds on Counting

For arbitrary graphs: Counting Cost = Ω(n log* n).

For graphs with diameter D: Counting Cost = Ω(D²).

Slide 15

Theorem: For graphs with diameter D, Counting Cost = Ω(D²).

Proof: Consider some arbitrary algorithm for counting.

Slide 16

Take a shortest path of length D in the graph.

Slide 17

Make these path nodes count; in the final state they receive 1, 2, 3, 4, 5, 6, 7, 8.

Slide 18

The node of count k decides after at least (k-1)/2 time steps: it needs to be aware of the k-1 other processors with smaller counts, which can lie up to distance (k-1)/2 away along the path.

Slide 19

Counting Cost ≥ Σ_{k=1}^{D+1} (k-1)/2 = Ω(D²).

End of Proof

Slide 20

Theorem: For arbitrary graphs, Counting Cost = Ω(n log* n).

Proof: Consider some arbitrary algorithm for counting.

Slide 21

It suffices to prove the bound for the complete graph with n nodes: any algorithm on any graph with n nodes can be simulated on the complete graph.

Slide 22

Initial State

The initial state affects the outcome. Red: count; blue: don't count. (The figure highlights one node v.)

Slide 23

Final state (red: count; blue: don't count): the counting nodes receive the values 1, 2, 3, 4, 5.

Slide 24

A different initial state (red: count; blue: don't count).

Slide 25

Final state (red: count; blue: don't count): the counting nodes now receive the values 1 through 8.

Slide 26

Let A(v) be the set of nodes whose input may affect the decision of v.

Slide 27

Suppose that there is an initial state for which v decides k. Then: |A(v)| ≥ k.

Slide 28

(Figure: two initial states that agree on A(v).) These two initial states give the same result for v.

Slide 29

If |A(v)| < k, then v would decide less than k. Thus, |A(v)| ≥ k.

Slide 30

Suppose that v decides at time t. We show: |A(v)| ≤ 2^2^···^2, a tower of 2s of height t.

Slide 31

Suppose that v decides at time t, and let |A(v)| = k. Since k ≤ 2^2^···^2 (t twos), we get t ≥ log* k.

Slide 32

t ≥ log* k

Cost of the node that decides count k: at least log* k.

If n nodes wish to count, the counts 1, ..., n are all assigned, so:

Counting Cost ≥ Σ_{k=1}^{n} log* k = Ω(n log* n)
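The tower bound says a node deciding count k needs at least log* k time, where log* is the iterated logarithm: the number of times log₂ must be applied before the value drops to 1 or below. A small sketch of the inverse relation between the tower function and log*:

```python
import math

def tower(t):
    """Tower of t twos: tower(0) = 1, tower(t) = 2 ** tower(t - 1)."""
    x = 1
    for _ in range(t):
        x = 2 ** x
    return x

def log_star(k):
    """Iterated logarithm: times log2 is applied until the value is <= 1."""
    count = 0
    while k > 1:
        k = math.log2(k)
        count += 1
    return count

# tower and log* are inverses: a node with |A(v)| = k that decided at time t
# satisfies k <= tower(t), hence t >= log_star(k).
for t in range(5):
    assert log_star(tower(t)) == t
```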

Slide 33

A(v, t): the set of nodes that affect v up to time t.
B(v, t): the set of nodes that v affects up to time t.

a(t) = max_x |A(x, t)|
b(t) = max_x |B(x, t)|

Slide 34

After step t + 1 the sets grow to A(v, t+1) and B(v, t+1), with maximum sizes a(t+1) and b(t+1).

Slide 35

(Figure: A(v, t) ⊆ A(v, t+1).)

Slide 36

Suppose z is eligible to send a message at time t + 1: there is an initial state in which z sends a message to v. Then A(z, t) becomes part of A(v, t+1).

Slide 37

Let s and z both be eligible to send a message to v at time t + 1. Suppose that A(s, t) ∩ A(z, t) = ∅. Then there is an initial state in which both send a message to v.

Slide 38

However, v can receive only one message at a time.

Slide 39

Therefore: A(s, t) ∩ A(z, t) ≠ ∅.

Slide 40

The number of nodes like s is at most a(t) · b(t), where a(t) = max_x |A(x, t)| and b(t) = max_x |B(x, t)|.

Slide 41

Therefore: |A(v, t+1)| ≤ |A(v, t)| + a(t) · a(t) · b(t).

Slide 42

a(t+1) ≤ a(t) (1 + a(t) b(t))

We can also show:

b(t+1) ≤ b(t) (1 + 2^{a(t)})

Thus: a(t) ≤ 2^2^···^2 (a tower of 2s of height t), which gives t ≥ log* |A(v)|.

End of Proof
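Iterating these recurrences numerically is a quick sanity check (a sketch: the starting values a(0) = b(0) = 1 are an assumption, and the constants are not tight). The sizes explode like a tower of twos, so the iterated logarithm of a(t) stays close to t, which is exactly what forces t ≥ log* |A(v)|:

```python
import math

def log_star(k):
    """Iterated logarithm: times log2 is applied until the value is <= 1."""
    count = 0
    while k > 1:
        k = math.log2(k)
        count += 1
    return count

a, b = 1, 1                      # assumed starting values a(0) = b(0) = 1
for t in range(1, 5):
    # a(t+1) <= a(t)(1 + a(t) b(t)),  b(t+1) <= b(t)(1 + 2^{a(t)});
    # iterate the recurrences with equality to see the growth rate.
    a, b = a * (1 + a * b), b * (1 + 2 ** a)
    # a(t) grows like a tower of twos, so its iterated log stays close to t.
    assert log_star(a) <= t + 2
```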

Slide 43

Upper Bound on Queuing

For graphs with spanning trees of constant degree: Queuing Cost = O(n log n).

For graphs whose spanning trees are lists or perfect binary trees: Queuing Cost = O(n).

Slide 44

An arbitrary graph

Slide 45

Spanning tree

Slide 46

Spanning tree

Slide 47

Distributed Queue: element A is enqueued; A is both Head and Tail, with Previous = nil.

Slide 48

enq(B): B issues an enqueue request (Previous = ? for B); A is head and tail with Previous = nil. The request is routed toward the Tail.

Slides 49 to 52

(Animation frames: the enq(B) request travels hop by hop toward the tail A; each frame repeats the previous slide's text.)

Slide 53

The request reaches A, and A informs B: Previous(B) = A. A keeps Previous = nil and remains the head; the Tail now points to B.

Slide 54

Concurrent Enqueue Requests: B and C issue enq(B) and enq(C) concurrently (Previous = ? for both); A is head and tail with Previous = nil.

Slides 55 and 56

(Animation frames: the two requests race toward the tail A.)

Slide 57

C's request reaches the tail A first: Previous(C) = A. B's request (enq(B)) is still in flight with Previous = ?; A keeps Previous = nil.

Slide 58

(Animation frame: B's request now chases the new tail C.)

Slide 59

B's request reaches C, and C informs B: Previous(B) = C. Now Previous(A) = nil, Previous(C) = A, and the Tail points to B.

Slide 60

Final state: the queue is A, C, B, with Previous(A) = nil, Previous(C) = A, Previous(B) = C.

Slide 61

Paths of enqueue requests: each request traced a path through the graph toward the tail current at the time, producing the queue A, C, B.
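The behaviour walked through above (each enqueue request chases the current Tail, and the old tail informs the new element of its predecessor) can be sketched with a small centralized simulation. The tree routing and message delays are deliberately abstracted away, so this only illustrates the predecessor bookkeeping, not the distributed costs:

```python
# Minimal sketch of the queuing behaviour shown in the slides: each enqueue
# request reaches the current tail, which "informs" the new element of its
# predecessor.  Network routing and timing are omitted.

class DistributedQueue:
    def __init__(self):
        self.tail = None          # current tail element (None = empty queue)
        self.previous = {}        # element -> its predecessor (None = head)

    def enq(self, x):
        # The request reaches the current tail, which informs x.
        self.previous[x] = self.tail
        self.tail = x

    def order(self):
        """Reconstruct head-to-tail order by following predecessor links."""
        chain, cur = [], self.tail
        while cur is not None:
            chain.append(cur)
            cur = self.previous[cur]
        return list(reversed(chain))

q = DistributedQueue()
for item in ["A", "B", "C"]:      # e.g. enq(A), then enq(B), then enq(C)
    q.enq(item)
# B's predecessor is A, C's predecessor is B, and A is the head.
```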

Slide 62

Nearest-Neighbor TSP tour on the spanning tree

Nodes A, B, C, D, E, F; the Origin is the first element in the queue.

Slide 63

Visit the closest unused node in the tree.

Slide 64

Visit the closest unused node in the tree (next step).

Slide 65

The resulting Nearest-Neighbor TSP tour over A, B, C, D, E, F.

Slide 66

Queuing Cost ≤ 2 × Nearest-Neighbor TSP length

For a spanning tree of constant degree.

[Herlihy, Tirthapura, Wattenhofer PODC'01]

Slide 67

If a weighted graph satisfies the triangle inequality:

Nearest-Neighbor TSP length ≤ Optimal TSP length × log n

[Rosenkrantz, Stearns, Lewis SICOMP 1977]

Slide 68

(Figure: the weighted graph of distances among A, B, C, D, E, F, with edge weights between 1 and 4.)

Slide 69

The distance graph satisfies the triangle inequality: for any three edges e1, e2, e3 forming a triangle, w(e1) ≤ w(e2) + w(e3). (Example in the figure: the triangle A, C, D with weights 2, 1, 1.)

Slide 70

Nearest-Neighbor TSP tour: A, C, E, B, F, D. Length = 8.

Slide 71

Optimal TSP tour: Length = 6 (versus 8 for the nearest-neighbor tour A, C, E, B, F, D).
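This gap can be reproduced in a few lines. The distance matrix below is made up (points on a line, which trivially satisfies the triangle inequality) and is not the one in the slides; it just shows nearest-neighbor returning a strictly longer tour than the brute-force optimum from the same origin:

```python
from itertools import permutations

# Hypothetical points on a line; distance = absolute difference
# (this metric trivially satisfies the triangle inequality).
pos = {"A": 0, "B": 1, "C": -2, "D": 5}
dist = lambda x, y: abs(pos[x] - pos[y])

def nn_path(start):
    """Nearest-neighbor path: repeatedly visit the closest unvisited node."""
    tour, rest = [start], set(pos) - {start}
    while rest:
        nxt = min(rest, key=lambda y: dist(tour[-1], y))
        tour.append(nxt)
        rest.remove(nxt)
    return tour

def length(path):
    return sum(dist(a, b) for a, b in zip(path, path[1:]))

def optimal_len(start):
    """Brute-force shortest path from `start` visiting all other nodes."""
    others = set(pos) - {start}
    return min(length([start] + list(p)) for p in permutations(others))

nn = length(nn_path("A"))        # nearest-neighbor from the origin A: 11
opt = optimal_len("A")           # brute-force optimum from A: 9
# nn >= opt, and here nearest-neighbor is strictly worse than optimal.
```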

Slide 72

It can be shown that: Optimal TSP length ≤ 2n (n = nodes in the graph), since a tour that walks around the spanning tree visits every edge twice.

Slide 73

Therefore, for a constant-degree spanning tree:

Queuing Cost = O(Nearest-Neighbor TSP) = O(Optimal TSP × log n) = O(n log n)

Slide 74

For special cases we can do better: if the spanning tree is a list or a balanced binary tree, Queuing Cost = O(n).

Slide 75

Graphs with a Hamiltonian path have spanning trees which are lists: complete graph, mesh, hypercube.

For these, Queuing Cost = O(n) while Counting Cost = Ω(n log* n).

Slide 76

Theorem: If the spanning tree is a list, then Queuing Cost = O(n).

Slide 77

Proof: Queuing Cost = O(Nearest-Neighbor TSP), so it suffices to bound the length of the nearest-neighbor tour on the list.

Slide 78

(Figure: comparing the lengths of sides of the nearest-neighbor walk on the list: each reversal must jump over already-visited nodes, so the side lengths in each direction grow geometrically.)

Slide 79

Even sides: their lengths double, so the total length of the even sides is ≤ 2n (n nodes).

Slide 80

Odd sides: their lengths double as well, so the total length of the odd sides is ≤ 2n (n nodes).

Slide 81

Even + odd sides: total length ≤ 2n + 2n = 4n (n nodes).

End of proof
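The 4n bound can be checked empirically. A hedged sketch (positions, random subsets, and the tie-breaking rule are assumptions of this simulation, not from the talk): the nearest-neighbor walk over any subset of a list of n nodes never exceeds total length 4n.

```python
import random

def nn_tour_length(points, start):
    """Length of a nearest-neighbor walk over `points` on a line,
    starting at `start`; distance is the absolute position difference."""
    cur, rest, total = start, set(points) - {start}, 0
    while rest:
        # Deterministic tie-break: smaller position wins on equal distance.
        nxt = min(rest, key=lambda p: (abs(p - cur), p))
        total += abs(nxt - cur)
        cur = nxt
        rest.remove(nxt)
    return total

n = 64                           # list nodes occupy positions 0 .. n-1
random.seed(0)
for _ in range(200):
    pts = random.sample(range(n), random.randint(1, n))
    start = random.choice(pts)
    assert nn_tour_length(pts, start) <= 4 * n   # total length <= 4n
```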