Distributed motion coordination of robotic networks, or how to get global behavior out of local interactions – PowerPoint PPT Presentation


Distributed motion coordination of robotic networks, or how to get global behavior out of local interactions. Jorge Cortés, Applied Mathematics and Statistics, Baskin School of Engineering, University of California at Santa Cruz


SLIDE 1

Distributed motion coordination of robotic networks
– or how to get global behavior out of local interactions

Jorge Cortés
Applied Mathematics and Statistics, Baskin School of Engineering
University of California at Santa Cruz
http://www.ams.ucsc.edu/˜jcortes

Summer School on Geometry, Mechanics and Control
Centro Internacional de Encuentros Matemáticos, Castro Urdiales, June 25–29, 2007

SLIDE 2

Outline

1. General introduction to the course
2. A primer on graph theory
3. Distributed linear iterations
   – Agreement algorithms
   – Convergence analysis
4. Distributed algorithms on synchronous networks

SLIDE 3

Cooperative multi-agent systems

What kind of systems? Groups of agents with control, sensing, communication, and computing capabilities.

Individual agents:
– sense their immediate environment
– communicate with others
– process the information gathered
– take local actions in response

SLIDE 4

Self-organized behaviors in biological groups

SLIDE 5

Decision making in animals

Animal groups are capable of:
– deploying over a given region
– assuming a specified pattern
– rendezvousing at a common point
– jointly initiating motion or changing direction in a synchronized way

Species achieve synchronized behavior with limited sensing/communication between individuals and without apparently following a group leader.

(Couzin et al., Nature 05; Conradt et al., Nature 03)

SLIDE 6

Engineered multi-agent systems

Embedded robotic systems and sensor networks for:
– high-stress, rapid deployment: e.g., disaster recovery networks
– distributed environmental monitoring: e.g., portable chemical and biological sensor arrays detecting toxic pollutants
– autonomous sampling for biological applications: e.g., monitoring of species at risk, validation of climate and oceanographic models
– science imaging: e.g., multi-spacecraft distributed interferometers flying in formation to enable imaging at microarcsecond resolution

Sandia National Labs, MBARI AOSN, NASA Terrestrial Planet Finder

SLIDE 7

Research challenges

What useful engineering tasks can be performed with limited-sensing/communication agents?
– Feedback, rather than open-loop computation for a known/static setup
– Information flow: who knows what, when, why, and how; dynamically changing
– Reliability/performance: robust, efficient, predictable behavior

How do we coordinate individual agents into a coherent whole?

Objective: systematic methodologies to design and analyze cooperative strategies to control multi-agent systems, integrating control, communication, sensing, and computing.

SLIDE 8

Research program: what are we after?

– Design of provably correct coordination algorithms for basic tasks
– A formal model to rigorously formalize, analyze, and compare coordination algorithms
– Mathematical tools to study convergence, stability, and robustness of coordination algorithms

Coordination tasks: exploration, map building, search and rescue, surveillance, odor localization, monitoring, distributed sensing

SLIDE 9

Technical approach

– Optimization Methods: resource allocation, geometric optimization, deterministic annealing
– Geometry & Analysis: computational structures, differential geometry, nonsmooth analysis
– Control & Robotics: algorithm design, cooperative control, stability theory
– Distributed Algorithms: ad hoc networks, decentralized vs. centralized, emerging behaviors

SLIDE 10

What is the course about?

A little bit of all of the following:
– Cooperative robotic networks
– Distributed motion coordination algorithms
– Local agent interactions giving rise to global behavior
– Limited information, no omniscient leader
– Verifiably correct, rigorous assessment of properties

SLIDE 11

What will we cover?

Models: robotic network, coordination algorithm, and task; complexity notions that help quantify the performance and cost of execution of coordination algorithms.

Analysis: tools that can be used to analyze the correctness, robustness, and optimality of coordination algorithms.

Design: algorithm design for rendezvous, deployment, and agreement.

SLIDE 12

Three sample tasks

Consider the rendezvous/deployment/agreement scenarios:
– Rendezvous = get together at a certain location
– Deployment = deploy over a given region
– Agreement = reach consensus upon the value of some variable

From the agent viewpoint:
– What should I process/compute/sense?
– What do I transmit? To whom?
– How do I take into account the information that I acquire?
– Where do I move?
– Overall, what do I do?

SLIDE 13

What will we not cover?

Plenty of things, because of time constraints!
– formation control
– connectivity preservation
– quantization, asynchronism, delays
– distributed estimation, data fusion, and tracking
– ...

The literature is full of very interesting recent works in cooperative control.

SLIDE 14

A skinny bibliography on cooperative control

• I. Suzuki and M. Yamashita. Distributed anonymous mobile robots: Formation of geometric patterns. SIAM Journal on Computing, 28(4):1347–1363, 1999
• E. W. Justh and P. S. Krishnaprasad. Equilibria and steering laws for planar formations. Systems & Control Letters, 52(1):25–38, 2004
• A. Jadbabaie, J. Lin, and A. S. Morse. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Transactions on Automatic Control, 48(6):988–1001, 2003
• R. Olfati-Saber. Flocking for multi-agent dynamic systems: Algorithms and theory. IEEE Transactions on Automatic Control, 51(3):401–420, 2006
• V. Gazi and K. M. Passino. Stability analysis of swarms. IEEE Transactions on Automatic Control, 48(4):692–697, 2003
• W. Ren, R. W. Beard, and E. M. Atkins. Information consensus in multivehicle cooperative control: Collective group behavior through local interaction. IEEE Control Systems Magazine, 27(2):71–82, 2007
• H. Ando, Y. Oasa, I. Suzuki, and M. Yamashita. Distributed memoryless point convergence algorithm for mobile robots with limited visibility. IEEE Transactions on Robotics and Automation,

SLIDE 15

A skinny bibliography on cooperative control – cont

• Z. Lin, M. Broucke, and B. Francis. Local control strategies for groups of mobile autonomous agents. IEEE Transactions on Automatic Control, 49(4):622–629, 2004
• J. A. Marshall, M. E. Broucke, and B. A. Francis. Formations of vehicles in cyclic pursuit. IEEE Transactions on Automatic Control, 49(11):1963–1974, 2004
• R. Olfati-Saber and R. M. Murray. Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 49(9):1520–1533, 2004
• L. Moreau. Stability of multiagent systems with time-dependent communication links. IEEE Transactions on Automatic Control, 50(2):169–182, 2005
• P. Ögren, E. Fiorelli, and N. E. Leonard. Cooperative control of mobile sensor networks: Adaptive gradient climbing in a distributed environment. IEEE Transactions on Automatic Control, 49(8):1292–1302, 2004
• M. Mesbahi. On state-dependent dynamic graphs and their controllability properties. IEEE Transactions on Automatic Control, 50(3):387–392, 2005

SLIDE 16

What is the general plan? – roadmap

– Lecture 1: Introduction, examples, and preliminary notions
– Lecture 2: Models for cooperative robotic networks
– Lecture 3: Rendezvous
– Lecture 4: Deployment
– Lecture 5: Agreement

SLIDE 17

Objective for the end of the course

Ideal scenario: I show you this slide, and you can relate to everything on it.

Network modeling, algorithm design, and validation:
– Network modeling: network, control+communication algorithm, task, complexity
– Coordination algorithms: rendezvous, deployment, consensus

Systematic algorithm design:
1. geometric structures
2. aggregate objective functions
3. a class of (gradient) algorithms: local, distributed
4. invariance principles and stability

SLIDE 18

Outline

1. General introduction to the course
2. A primer on graph theory
3. Distributed linear iterations
   – Agreement algorithms
   – Convergence analysis
4. Distributed algorithms on synchronous networks

SLIDE 19

Basic graph notions

A directed graph, or digraph, of order n is G = (V, E), where:
– V is a set with n elements, called vertices
– E is a set of ordered pairs of vertices, called edges

A digraph is complete if E = V × V. (u, v) denotes an edge from u to v.

An undirected graph consists of a vertex set V and a set E of unordered pairs of vertices. {u, v} denotes an unordered edge.

A digraph (V′, E′) is:
– undirected if (v, u) ∈ E′ anytime (u, v) ∈ E′
– a subgraph of a digraph (V, E) if V′ ⊆ V and E′ ⊆ E
– a spanning subgraph if it is a subgraph and V′ = V
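These definitions translate directly into code. A minimal Python sketch (the names and the 3-vertex example are ours, not from the lectures):

```python
# Encoding of the digraph definitions above; the 3-vertex example is illustrative.
V = {1, 2, 3}
E = {(1, 2), (2, 3), (3, 1)}            # ordered pairs: (u, v) is an edge from u to v

def is_complete(V, E):
    """A digraph is complete if E = V x V."""
    return E == {(u, v) for u in V for v in V}

def undirected_edges(E):
    """The unordered pairs {u, v} induced by the directed edges."""
    return {frozenset((u, v)) for (u, v) in E}
```

The 3-cycle above is not complete: it has only 3 of the 9 possible ordered pairs.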

SLIDE 20

Graph neighbors

In a digraph G with an edge (u, v) ∈ E, u is an in-neighbor of v, and v is an out-neighbor of u.

N_G^in(v): set of in-neighbors of v; its cardinality is the in-degree.
N_G^out(v): set of out-neighbors of v; its cardinality is the out-degree.

A digraph is topologically balanced if each vertex has the same in- and out-degree, i.e., the same number of incoming and outgoing edges.

Likewise, u and v are neighbors in an undirected graph G if {u, v} is an undirected edge. N_G(v): set of neighbors of v in the undirected graph G; its cardinality is the degree.
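The neighbor and degree notions above are easy to compute from an edge set; the following sketch (on a digraph of our own choosing) also checks topological balance:

```python
# In/out-neighbor sets computed from an edge set; the example digraph is illustrative.
V = {1, 2, 3}
E = {(1, 2), (1, 3), (2, 3), (3, 1)}

def in_neighbors(v, E):
    """Vertices u with an edge (u, v) into v."""
    return {u for (u, w) in E if w == v}

def out_neighbors(v, E):
    """Vertices w with an edge (v, w) out of v."""
    return {w for (u, w) in E if u == v}

def is_topologically_balanced(V, E):
    """Every vertex has in-degree equal to out-degree."""
    return all(len(in_neighbors(v, E)) == len(out_neighbors(v, E)) for v in V)
```

Here vertex 1 has out-degree 2 but in-degree 1, so the digraph is not topologically balanced; a directed cycle is.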

SLIDE 21

Connectivity notions

A directed path in a digraph is an ordered sequence of vertices such that any two consecutive vertices form a directed edge of the digraph.

A cycle is a non-trivial directed path that starts and ends at the same vertex. A digraph is acyclic if it contains no cycles.

SLIDE 22

Connectivity notions

A directed path in a digraph is an ordered sequence of vertices such that any two consecutive vertices form a directed edge of the digraph.

A cycle is a non-trivial directed path that starts and ends at the same vertex. A digraph is acyclic if it contains no cycles.

A vertex of a digraph is globally reachable if it can be reached from any other vertex by traversing a directed path. A digraph is strongly connected if every vertex is globally reachable.

SLIDE 23

Connectivity notions

A directed path in a digraph is an ordered sequence of vertices such that any two consecutive vertices form a directed edge of the digraph.

A cycle is a non-trivial directed path that starts and ends at the same vertex. A digraph is acyclic if it contains no cycles.

A vertex of a digraph is globally reachable if it can be reached from any other vertex by traversing a directed path. A digraph is strongly connected if every vertex is globally reachable.

A directed tree is an acyclic digraph with the following property: there exists a vertex, called the root, such that any other vertex of the digraph can be reached by one and only one path starting at the root. Every in-neighbor is a parent and every out-neighbor is a child.

Directed spanning tree = spanning subgraph + directed tree
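Global reachability and strong connectivity can be checked with a breadth-first traversal. A small sketch, under an assumed edge-set representation of our own:

```python
from collections import deque

def reachable_from(s, E):
    """Vertices reachable from s by traversing directed paths."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for (a, b) in E:
            if a == u and b not in seen:
                seen.add(b)
                queue.append(b)
    return seen

def is_globally_reachable(v, V, E):
    """v can be reached from any other vertex."""
    return all(v in reachable_from(u, E) for u in V)

def is_strongly_connected(V, E):
    """Every vertex is globally reachable."""
    return all(is_globally_reachable(v, V, E) for v in V)

V = {1, 2, 3}
ring = {(1, 2), (2, 3), (3, 1)}        # a 3-cycle: strongly connected
chain = {(1, 2), (2, 3)}               # vertex 3 is globally reachable; vertex 1 is not
```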

SLIDE 24

Connectivity in topologically balanced digraphs

Lemma. Let G be a digraph. The following statements hold:
1. if G is strongly connected, then it contains a globally reachable vertex and a spanning tree; and
2. if G is topologically balanced and contains either a globally reachable vertex or a spanning tree, then G is strongly connected.

Analogous definitions can be given for the case of undirected graphs. If a vertex of a graph is globally reachable, then every vertex is, the graph contains a spanning tree, and we call the graph connected.

SLIDE 25

Weighted digraphs

A weighted digraph is a triplet G = (V, E, A), where (V, E) is a digraph and A is an n × n weighted adjacency matrix such that aij > 0 if (vi, vj) is an edge of G, and aij = 0 otherwise.

The scalars aij are the weights of the edges of G. A weighted digraph is undirected if aij = aji for all i, j ∈ {1, . . . , n}.

SLIDE 26

Weighted digraphs

A weighted digraph is a triplet G = (V, E, A), where (V, E) is a digraph and A is an n × n weighted adjacency matrix such that aij > 0 if (vi, vj) is an edge of G, and aij = 0 otherwise.

The scalars aij are the weights of the edges of G. A weighted digraph is undirected if aij = aji for all i, j ∈ {1, . . . , n}.

Weighted out-degree and in-degree:
dout(i) = ∑_{j=1}^{n} aij and din(i) = ∑_{j=1}^{n} aji

G is weight-balanced if each vertex has equal in- and out-degree.

Weighted out-degree diagonal matrix Dout(G): (Dout(G))ii = dout(vi)
Weighted in-degree diagonal matrix Din(G): (Din(G))ii = din(vi)
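With the adjacency matrix in hand, the weighted degrees and the diagonal degree matrices are one-liners in NumPy; the 3-vertex weights below are an illustrative choice of ours:

```python
import numpy as np

# Weighted adjacency matrix of a 3-vertex digraph (a_ij > 0 iff (v_i, v_j) is an edge)
A = np.array([[0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0],
              [3.0, 0.0, 0.0]])

d_out = A.sum(axis=1)      # d_out(i) = sum_j a_ij  (row sums)
d_in  = A.sum(axis=0)      # d_in(i)  = sum_j a_ji  (column sums)
D_out = np.diag(d_out)
D_in  = np.diag(d_in)

# weight-balanced: equal weighted in- and out-degree at every vertex
weight_balanced = np.allclose(d_out, d_in)
```

This example has d_out = (3, 3, 3) but d_in = (3, 2, 4), so it is not weight-balanced.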

SLIDE 27

Laplacian matrix – connecting algebra and graph theory

The graph Laplacian of the weighted digraph G is L(G) = Dout(G) − A(G).

Lemma (Properties of the Laplacian matrix). The following statements hold:
1. L(G) 1_n = 0_n
2. G is undirected if and only if L(G) is symmetric
3. if G is undirected, then L(G) is positive semidefinite
4. G contains a globally reachable vertex if and only if rank L(G) = n − 1
5. G is weight-balanced if and only if 1_n^T L(G) = 0_n^T, if and only if Sym(L(G)) = (1/2)(L(G) + L(G)^T) is positive semidefinite
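Properties 1, 4, and 5 of the lemma can be verified numerically on a small example; the weighted 3-cycle below is our illustrative choice (it is weight-balanced and strongly connected):

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])         # unit-weight 3-cycle
D_out = np.diag(A.sum(axis=1))
L = D_out - A                           # L(G) = Dout(G) - A(G)

ones = np.ones(3)
row_sums_vanish = np.allclose(L @ ones, 0)   # Property 1: L(G) 1_n = 0_n
col_sums_vanish = np.allclose(ones @ L, 0)   # Property 5: weight-balanced, 1_n^T L(G) = 0_n^T
rank = np.linalg.matrix_rank(L)              # Property 4: globally reachable vertex, rank n - 1
```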

SLIDE 28

Disagreement function

Disagreement function:
ΦG(x) = (1/2) ∑_{i,j=1}^{n} aij (xj − xi)²

If G is weight-balanced, ΦG(x) = x^T L(G) x.

If G is weight-balanced and weakly connected,
λn(Sym(L)) ‖x − Ave(x)1_n‖₂² ≥ ΦG(x) ≥ λ2(Sym(L)) ‖x − Ave(x)1_n‖₂²
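The identity ΦG(x) = x^T L(G) x for weight-balanced G can be checked directly; the graph and state vector below are illustrative choices of ours:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])         # weight-balanced digraph (unit 3-cycle)
L = np.diag(A.sum(axis=1)) - A

def disagreement(A, x):
    """Phi_G(x) = (1/2) sum_{i,j} a_ij (x_j - x_i)^2, straight from the definition."""
    n = len(x)
    return 0.5 * sum(A[i, j] * (x[j] - x[i]) ** 2
                     for i in range(n) for j in range(n))

x = np.array([1.0, 2.0, 4.0])
phi = disagreement(A, x)
quad = float(x @ L @ x)                 # equals Phi_G(x) since G is weight-balanced
```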

SLIDE 29

Outline

1. General introduction to the course
2. A primer on graph theory
3. Distributed linear iterations
   – Agreement algorithms
   – Convergence analysis
4. Distributed algorithms on synchronous networks

SLIDE 30

Distributed linear iterations

Data exchange and fusion is a basic task for any network.

Given a graph G = (I, E), a matrix F = (fij) ∈ R^{n×n} is compatible if fij ≠ 0 only when (j, i) ∈ E or i = j.

Given a compatible F, the linear fusion algorithm, starting from w(0) ∈ R^n, is
w(ℓ + 1) = F · w(ℓ), ℓ ∈ N_0

In coordinates,
wi(ℓ + 1) = fii wi(ℓ) + ∑_{j ∈ N^in(i)} fij wj(ℓ)
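A linear fusion iteration is a few lines of NumPy. The compatible matrix F below is a hypothetical example of ours on a 3-node graph with edges (1, 0), (2, 1), (0, 2); it happens to be doubly stochastic, so the iteration converges to the average of the initial values:

```python
import numpy as np

# Compatible F: f_ij may be non-zero only on the diagonal or when (j, i) is an edge.
F = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

w = np.array([0.0, 3.0, 6.0])           # w(0)
for _ in range(50):                     # w(l + 1) = F w(l)
    w = F @ w
```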

SLIDE 31

Time-dependent linear iterations

Discrete-time linear dynamical systems represent an important class of iterative algorithms, with applications in:
– optimization
– systems of equations
– distributed decision making

The linear fusion procedure can be extended to a sequence of time-dependent state-transition functions associated with {F(ℓ) | ℓ ∈ N_0} ⊂ R^{n×n}:
w(ℓ + 1) = F(ℓ) · w(ℓ), ℓ ∈ N_0, with w(0) ∈ R^n

Examples include Kalman filters and agreement algorithms.

SLIDE 32

An agreement example: flocking

Consider a group of agents in the plane moving with unit speed and adjusting their headings as follows: at integer instants of time, each agent senses the headings of its neighbors (other agents within some specified distance r) and resets its heading to the average of its own heading and its neighbors' headings.

SLIDE 33

An agreement example: flocking

Consider a group of agents in the plane moving with unit speed and adjusting their headings as follows: at integer instants of time, each agent senses the headings of its neighbors (other agents within some specified distance r) and resets its heading to the average of its own heading and its neighbors' headings.

Mathematically, if (xi, yi) is the position of agent i,
ẋi = vi cos θi, ẏi = vi sin θi, with vi = 1
and
θi(ℓ + 1) = (1 / (1 + |Ni|)) ( θi(ℓ) + ∑_{j ∈ Ni} θj(ℓ) )

The topology might change from one time instant to the next.
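The heading-update rule can be simulated directly. This sketch of ours fixes the neighbor graph for brevity, whereas in the actual flocking model the r-disk topology changes as the agents move:

```python
import math

# Fixed 3-agent path graph: 0 -- 1 -- 2 (a simplifying assumption, not the r-disk rule)
neighbors = {0: [1], 1: [0, 2], 2: [1]}
theta = [0.0, math.pi / 2, math.pi]     # initial headings

for _ in range(200):
    # synchronous update: theta_i <- (theta_i + sum of neighbor headings) / (1 + |N_i|)
    theta = [(theta[i] + sum(theta[j] for j in neighbors[i])) / (1 + len(neighbors[i]))
             for i in range(3)]
```

On this symmetric example the headings agree in the limit on the middle value, pi/2.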

SLIDE 34

Agreement algorithms

A (distributed) agreement algorithm over S = (I, E) is the linear algorithm corresponding to a compatible matrix F ∈ R^{n×n} that is (row) stochastic:
∑_{j=1}^{n} fij = 1 and fij ≥ 0 for all i, j ∈ I

Note that F · 1_n = 1_n. The vector subspace generated by 1_n is the diagonal set diag R^n of R^n. Points in diag R^n are agreement configurations.

An algorithm achieves agreement if it steers the network state towards the set of agreement configurations.

SLIDE 35

Laplacian- or adjacency-based agreement

Let G = (I, E, A) be a weighted digraph.

Laplacian-based: a first form of agreement algorithm over S is
w(ℓ + 1) = (I − εL(G)) · w(ℓ), ℓ ∈ N_0
In order for I − εL(G) to be stochastic, ε must satisfy 0 < ε ≤ min_{i∈I} 1/dout(i).

SLIDE 36

Laplacian- or adjacency-based agreement

Let G = (I, E, A) be a weighted digraph.

Laplacian-based: a first form of agreement algorithm over S is
w(ℓ + 1) = (I − εL(G)) · w(ℓ), ℓ ∈ N_0
In order for I − εL(G) to be stochastic, ε must satisfy 0 < ε ≤ min_{i∈I} 1/dout(i).

Adjacency-based: a second form of agreement algorithm over S is
w(ℓ + 1) = (I + Dout(G))⁻¹ (I + A(G)) · w(ℓ), ℓ ∈ N_0
The resulting stochastic matrix always has non-zero diagonal entries.

Any agreement algorithm is Laplacian- or adjacency-based. The flocking example is adjacency-based.
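Both forms can be instantiated on a small strongly connected digraph. On the unit-weight 3-cycle below (our illustrative choice, with ε = 1/2) the two stochastic matrices happen to coincide, and both executions reach agreement:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])          # strongly connected, so S contains a spanning tree
n = A.shape[0]
D_out = np.diag(A.sum(axis=1))
L = D_out - A

eps = 0.5                                # satisfies 0 < eps <= min_i 1/d_out(i) = 1
F_lap = np.eye(n) - eps * L              # Laplacian-based
F_adj = np.linalg.inv(np.eye(n) + D_out) @ (np.eye(n) + A)   # adjacency-based

w_lap = w_adj = np.array([1.0, 5.0, 9.0])
for _ in range(300):
    w_lap = F_lap @ w_lap
    w_adj = F_adj @ w_adj
```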

SLIDE 37

Stability of agreement configurations

Forthcoming results invoke a non-degeneracy property of a sequence of matrices: {F(ℓ) | ℓ ∈ N_0} ⊂ R^{n×n} is non-degenerate if there exists α ∈ R_{>0} such that, for all ℓ ∈ N_0, fii(ℓ) ≥ α for all i ∈ I, and fij(ℓ) ∈ {0} ∪ [α, 1] for all i ≠ j ∈ I.

SLIDE 38

Stability – directed case

Theorem. Let {F(ℓ) | ℓ ∈ N_0} ⊂ R^{n×n} be a non-degenerate sequence of stochastic matrices. The following are equivalent:
1. the time-dependent linear algorithm is uniformly globally attractive with respect to diag R^n;
2. there exists T > 0 such that, for all ℓ ∈ N_0, the digraph ∪_{s∈[ℓ,ℓ+T]} (I, E(F(s))) contains a spanning tree.

In other words, the linear algorithm converges uniformly and asymptotically to the vector subspace generated by 1_n.

SLIDE 39

Stability – undirected case

Theorem. Let {F(ℓ) | ℓ ∈ N_0} ⊂ R^{n×n} be a non-degenerate sequence of stochastic, symmetric matrices. The following are equivalent:
1. the time-dependent linear algorithm is uniformly globally attractive with respect to diag R^n;
2. for all ℓ ∈ N_0, the digraph ∪_{s≥ℓ} (I, E(F(s))) contains a spanning tree.

In both results, each individual evolution converges to a specific point of diag R^n, rather than converging to the whole set.

The non-degeneracy requirement in both results cannot be removed while still guaranteeing agreement.

SLIDE 40

Laplacian- and adjacency-based agreement

Convergence

The following statements are equivalent:
1. the Laplacian-based agreement algorithm is globally attractive with respect to diag R^n
2. the adjacency-based agreement algorithm is globally attractive with respect to diag R^n
3. S contains a spanning tree

SLIDE 41

What is the agreement value?

The specific value upon which all wi, i ∈ I, agree is a priori unknown: it is a complex function of the initial condition and the specific sequence of matrices.

SLIDE 42

What is the agreement value?

The specific value upon which all wi, i ∈ I, agree is a priori unknown: it is a complex function of the initial condition and the specific sequence of matrices.

Given a time-dependent sequence {F(ℓ) | ℓ ∈ N_0} ⊂ R^{n×n} satisfying either
1. the assumptions of the directed result and, for all ℓ ∈ Z_{≥0}, 1_n is a left eigenvector of F(ℓ) with associated eigenvalue 1, or
2. the assumptions of the undirected result,
then
∑_{i=1}^{n} wi(ℓ + 1) = 1_n^T w(ℓ + 1) = 1_n^T F(ℓ) w(ℓ) = 1_n^T w(ℓ) = ∑_{i=1}^{n} wi(ℓ)

Since in the limit all entries of w must coincide, we obtain average-consensus:
lim_{ℓ→+∞} wj(ℓ) = (1/n) ∑_{i=1}^{n} wi(0), j ∈ I

We’ll see more of this in Lecture 5
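The conservation argument can be observed numerically with a symmetric stochastic matrix (our illustrative choice, so the undirected result applies): the sum 1_n^T w(ℓ), and hence the mean, is invariant along the iteration, and the state converges to the average of the initial values.

```python
import numpy as np

# Symmetric stochastic F: 1_n is both a left and a right eigenvector with eigenvalue 1
F = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

w = np.array([2.0, 7.0, 12.0])
avg0 = w.mean()                          # conserved quantity: (1/n) 1_n^T w(l)
for _ in range(100):
    w = F @ w
```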

SLIDE 43

Outline

1. General introduction to the course
2. A primer on graph theory
3. Distributed linear iterations
   – Agreement algorithms
   – Convergence analysis
4. Distributed algorithms on synchronous networks

SLIDE 44

Synchronous networks

The previous examples of distributed linear iterations are a particular class of algorithms that can be run in parallel by a network of computers. The theory of parallel computing and distributed algorithms studies general classes of algorithms that can be implemented on static networks (neighboring relationships do not change).

SLIDE 45

Synchronous networks

The previous examples of distributed linear iterations are a particular class of algorithms that can be run in parallel by a network of computers. The theory of parallel computing and distributed algorithms studies general classes of algorithms that can be implemented on static networks (neighboring relationships do not change).

A synchronous network is a group of processors with the ability to exchange messages and perform local computations. Mathematically, it is a digraph (I, E), where
1. I = {1, . . . , n} is the set of unique identifiers (UIDs), and
2. E is a set of directed edges over the vertices {1, . . . , n}, called the communication links.

SLIDE 46

Ring network

Each agent has 1 neighbor to its right and 1 neighbor to its left

SLIDE 47

Distributed algorithm

A distributed algorithm DA for a network S consists of the sets
1. L, a set containing the null element, called the communication alphabet; elements of L are called messages;
2. W^[i], i ∈ I, called the processor state sets;
3. W_0^[i] ⊆ W^[i], i ∈ I, sets of allowable initial values;

and of the maps
1. msg^[i] : W^[i] × I → L, i ∈ I, called message-generation functions;
2. stf^[i] : W^[i] × L^n → W^[i], i ∈ I, called state-transition functions.

If W^[i] = W, msg^[i] = msg, and stf^[i] = stf for all i ∈ I, then DA is said to be uniform and is described by a tuple (L, W, {W_0^[i]}_{i∈I}, msg, stf).

SLIDE 48

Network evolution

Execution: discrete-time communication and computation.

[Figure: in each round, processors send messages, receive messages, and update their processor states]

Formally, the evolution of (S, DA) from initial conditions w_0^[i] ∈ W_0^[i], i ∈ I, is the collection of trajectories w^[i] : T → W^[i], i ∈ I, satisfying
w^[i](ℓ) = stf^[i](w^[i](ℓ − 1), y^[i](ℓ))
where w^[i](−1) = w_0^[i], i ∈ I, and where the trajectory y^[i] : T → L^n (describing the messages received by processor i) has components y_j^[i](ℓ), for j ∈ I, given by
y_j^[i](ℓ) = msg^[j](w^[j](ℓ − 1), i) if (j, i) ∈ E, and null otherwise.
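The evolution equations suggest a direct simulation loop. The sketch below is our own generic rendering of a uniform algorithm's execution; the msg, stf, and toy max-computation instance are assumptions for illustration, not course material:

```python
# Generic synchronous-network execution loop for a uniform distributed algorithm.
def execute(E, w, msg, stf, rounds):
    """Run w[i](l) = stf(w[i](l-1), y[i](l)) for the given number of rounds."""
    n = len(w)
    for _ in range(rounds):
        # y[i]_j = msg(w[j], i) if (j, i) in E, and null (None) otherwise
        y = [[msg(w[j], i) if (j, i) in E else None for j in range(n)]
             for i in range(n)]
        w = [stf(w[i], y[i]) for i in range(n)]
    return w

# Toy instance: each processor forwards its state and keeps the maximum it hears,
# a flooding-style max computation on a directed 3-ring.
E = {(0, 1), (1, 2), (2, 0)}
w = execute(E, [3, 1, 2],
            msg=lambda state, dest: state,
            stf=lambda state, y: max([state] + [v for v in y if v is not None]),
            rounds=3)
```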

SLIDE 49

Complexity notions

How good is a distributed algorithm? How costly is it to execute? Complexity notions characterize the performance of distributed algorithms.

Algorithm completion: an algorithm terminates when only null messages are transmitted and all processor states become constant.

SLIDE 50

Complexity notions

How good is a distributed algorithm? How costly is it to execute? Complexity notions characterize the performance of distributed algorithms.

Algorithm completion: an algorithm terminates when only null messages are transmitted and all processor states become constant.

Time complexity: TC(DA, S) is the maximum number of rounds required by an execution of DA on S, among all allowable initial states.

Space complexity: SC(DA, S) is the maximum number of basic memory units required by a processor executing DA on S, among all processors and all allowable initial states.

Communication complexity: CC(DA, S) is the maximum number of basic messages transmitted over the entire network during an execution of DA until termination, among all allowable initial states. (A basic memory unit or message contains log(n) bits.)

SLIDE 51

Leader election by comparison

Problem. Assume that all processors of a network have a state variable, say leader, initially set to unknown. A leader is elected when one and only one processor has the state variable set to true and all others have it set to false. Task: elect a leader.

SLIDE 52

Leader election by comparison

Problem. Assume that all processors of a network have a state variable, say leader, initially set to unknown. A leader is elected when one and only one processor has the state variable set to true and all others have it set to false. Task: elect a leader.

The Le Lann-Chang-Roberts (LCR) algorithm solves leader election in rings with complexities:
1. time complexity: n
2. space complexity: 2
3. communication complexity: Θ(n²)

SLIDE 53

The LCR algorithm

Network: ring network
Alphabet: L = I ∪ {null}
Processor state: w = (u, max-uid, leader, transmit), where
– u ∈ I, initially u[i] = i for all i
– max-uid ∈ I, initially max-uid[i] = i for all i
– leader ∈ {true, unknown}, initially leader[i] = unknown for all i
– transmit ∈ {true, false}, initially transmit[i] = true for all i

function msg(w, i)
1: if transmit = true then
2:   return max-uid
3: else
4:   return null

SLIDE 54

The LCR algorithm

function stf(w, y)
1: if (y contains only null messages) OR (largest identifier in y < u) then
2:   new-uid := max-uid
3:   new-leader := leader
4:   new-transmit := false
5: if (largest identifier in y = u) then
6:   new-uid := max-uid
7:   new-leader := true
8:   new-transmit := false
9: if (largest identifier in y > u) then
10:  new-uid := largest identifier in y
11:  new-leader := leader
12:  new-transmit := true
13: return (u, new-uid, new-leader, new-transmit)
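The msg/stf pseudocode can be exercised on an actual ring. The following Python sketch is our own executable rendering of LCR (using None to stand in for both null messages and the unknown leader value), not code from the course:

```python
def lcr(uids):
    """Simulate LCR on a directed ring where processor i receives from i-1."""
    n = len(uids)
    max_uid = list(uids)
    leader = [None] * n                  # None plays the role of "unknown"
    transmit = [True] * n
    rounds = 0
    while any(transmit):
        rounds += 1
        # synchronous round: incoming messages computed from the previous state
        incoming = [max_uid[i - 1] if transmit[i - 1] else None for i in range(n)]
        for i in range(n):
            m = incoming[i]
            if m is None or m < uids[i]:
                transmit[i] = False      # only null, or a smaller identifier: drop it
            elif m == uids[i]:
                leader[i] = True         # own UID travelled around the ring: elected
                transmit[i] = False
            else:                        # larger identifier: adopt and forward it
                max_uid[i] = m
                transmit[i] = True
    return leader, rounds
```

On a ring of n processors the execution finishes in n rounds, matching the time complexity stated above.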

SLIDE 55

Quantifying time, space, and communication complexity

Asymptotic “order of magnitude” measures. E.g., an algorithm has time complexity of order:
1. Ω(f(n)) if, for all n, there exist a network of order n and initial processor values such that TC is greater than a constant factor times f(n)
2. O(f(n)) if, for all n, for all networks of order n, and for all initial processor values, TC is lower than a constant factor times f(n)
3. Θ(f(n)) if TC is of order Ω(f(n)) and O(f(n)) at the same time

Similar conventions hold for space and communication complexity.

SLIDE 56

Quantifying time, space, and communication complexity

Asymptotic “order of magnitude” measures. E.g., an algorithm has time complexity of order:
1. Ω(f(n)) if, for all n, there exist a network of order n and initial processor values such that TC is greater than a constant factor times f(n)
2. O(f(n)) if, for all n, for all networks of order n, and for all initial processor values, TC is lower than a constant factor times f(n)
3. Θ(f(n)) if TC is of order Ω(f(n)) and O(f(n)) at the same time

Similar conventions hold for space and communication complexity.

Numerous variations of the complexity definitions are possible:
1. “global” rather than “existential” lower bounds
2. expected or average complexity notions
3. complexity notions for problems, rather than for algorithms

SLIDE 57

Summary and conclusions

General introduction to the course themes.

A primer on graph theory:
1. basic graph-theoretic and connectivity notions
2. adjacency and Laplacian matrices

Distributed linear iterations:
1. discrete-time linear dynamical systems
2. agreement algorithms and convergence results

Introduction to distributed algorithms:
1. model
2. complexity notions
3. leader election

SLIDE 58

References

Graph theory:
• R. Diestel. Graph Theory, volume 173 of Graduate Texts in Mathematics. Springer Verlag, New York, 2nd edition, 2000

Distributed linear iterations and agreement algorithms:
• D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Athena Scientific, Belmont, MA, 1997
• A. Jadbabaie, J. Lin, and A. S. Morse. Coordination of groups of mobile autonomous agents using nearest neighbor rules. IEEE Transactions on Automatic Control, 48(6):988–1001, 2003
• L. Moreau. Stability of multiagent systems with time-dependent communication links. IEEE Transactions on Automatic Control, 50(2):169–182, 2005

Distributed algorithms:
• N. A. Lynch. Distributed Algorithms. Morgan Kaufmann Publishers, San Mateo, CA, 1997
• D. Peleg. Distributed Computing: A Locality-Sensitive Approach. Monographs on Discrete Mathematics and Applications. SIAM, Philadelphia, PA, 2000