From control of networks to networked control. Wing Shing Wong. PowerPoint presentation.



SLIDE 1

From control of networks to networked control

Wing Shing Wong Department of Information Engineering The Chinese University of Hong Kong

SLIDE 2

Objective: Explore impacts of recent Internet developments on network control and networked control problems

Partially based on joint works with: LO Yuan-Hsun, ZHANG Yijin, CHEN Yi, LIU Zhongchang, LIU Fang, LUO Jingjing, TAN Cheng and others.


SLIDE 3

Talk outline

  • 1. Recent Internet developments
  • 2. Routing and scheduling on the fat-tree topology
  • 3. Network control on fat-tree networks
  • 4. Implications on networked control
SLIDE 4

Networked control over an open network

[Figure: physical network, network of decision-makers (agents), and open communication network]

SLIDE 5

Two recent trends in the Internet

  • SDN – Software Defined Networks
  • Growth of mega-sized data center networks
SLIDE 6

Good ideas don’t die: Clos Networks

  • C. Clos, "A study of non-blocking switching networks," The Bell System Technical Journal, 1953.
  • Three-stage network, intended for crossbar switches

[Photo: Western Electric 100-point six-wire Type B crossbar switch, from C. Clos, "A study of non-blocking switching networks," BSTJ, 1953.]

SLIDE 7

A typical three-layer Clos network

[Figure: three-stage Clos network, with ingress switches of size n × s, middle-stage switches, and egress switches of size s × n; n² input links and n² output links]

Rearrangeably non-blocking condition: s ≥ n

SLIDE 8
Fat-tree, a popular architecture for data center networks

  • The fat-tree is a folded version of a Clos network: an example involving 4 PODs

[Figure: core, aggregation, and edge layers with hosts, organized into POD 1–POD 4]

Based on Mohammad Al-Fares et al., "A Scalable, Commodity Data Center Network Architecture," SIGCOMM '08, August 17–22, 2008, Seattle, Washington, USA.

SLIDE 9
Fat-tree network architecture

  • U_n: n-ary fat-tree network – each switch/router has 2n ports and there are 2n PODs.
  • c_{j,k}: the k-th core switch in the j-th core group; |C_n| = n².
  • a_{j,k}: the k-th aggregation switch in the j-th POD; |A_n| = 2n².
  • e_{j,k}: the k-th edge switch in the j-th POD; |E_n| = 2n².
  • h_{u,j,k}: the k-th host attached to edge switch e_{u,j}; |H_n| = 2n³.
  • With 256 ports per switch (n = 128), the system can support 2n³ = 4,194,304 hosts.
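As a quick sanity check on the counts above, a minimal sketch (the function name `fat_tree_counts` is ours, not from the slides):

```python
# Sketch: switch/host counts for an n-ary fat-tree with 2n ports per switch,
# following the slide's counts |C_n| = n^2, |A_n| = |E_n| = 2n^2, |H_n| = 2n^3.
def fat_tree_counts(n):
    return {
        "ports_per_switch": 2 * n,
        "pods": 2 * n,
        "core_switches": n ** 2,
        "aggregation_switches": 2 * n ** 2,
        "edge_switches": 2 * n ** 2,
        "hosts": 2 * n ** 3,
    }

# 256-port switches correspond to n = 128 and 4,194,304 hosts, as on the slide.
print(fat_tree_counts(128)["hosts"])  # 4194304
```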

SLIDE 10

Difference in application scenarios

Then (switch fabric)        | Now (interconnect)
----------------------------|------------------------------------------
Circuit switch              | Circuit/packet switch
No inside buffering         | Buffering inside network
Blocking                    | Blocking/queueing delay
Single rate                 | Single and multi-rate
Undivided connection        | Undivided and divided
Static, centralized control | Dynamic, centralized/distributed control
No priority class           | Priority class possible

SLIDE 11

Basic routing and scheduling issues on fat-tree networks


SLIDE 12

Global packing number of a network

  • A basic issue is to understand the "bandwidth" requirement for a given traffic pattern.
  • If two or more connections use the same link at the same time, there will be blocking or queueing unless the link can be shared by techniques such as multiple wavelengths (under WDMA) or multiple time slots (under TDMA).
  • Roughly, the minimal number of wavelengths or time slots required to satisfy a given traffic demand without blocking or queueing is the global packing number (GPN).

SLIDE 13
A toy example

  • Host set H = {1, 2, 3, 4}; the global packing number is 3.
  • Assume uniform traffic.
  • The global packing number can be understood as the minimum:
    – number of wavelengths required in an optical network
    – number of time slots to ensure zero queueing delay



SLIDE 16
GPN for two types of traffic

  • Theorem (Lo, Zhang, Chen, Wong, Fu, preprint): For any integer n > 1, the global packing number for uniform traffic is 2n³ − 1.
  • The construction makes use of Latin squares.
  • Consider a general integer-valued traffic matrix B ∈ ℤ^(2n³ × 2n³), whose (j, k) entry gives the traffic demand from host j to host k (the slide shows a numerical example).

SLIDE 17

Result for a general traffic matrix

  • Define an induced bipartite multigraph G(B) = (X ∪ Y, E), where |X| = |Y| = 2n³ and the multiplicity of the edge from node X_j to node Y_k is B_{j,k}.
  • Nodes represent the hosts and edges represent the traffic between host pairs.
  • Use different colors of the edges to represent the aggregation switches used in a POD.


SLIDE 19

Result for a general traffic matrix

  • Theorem (Chen, Wong, Lo, Zhang, preprint): The global packing number for traffic B satisfies

    ϱ(U_n, H_n, B) = ψ(G(B)),

    where ψ is the chromatic index. In particular,

    ϱ(U_n, H_n, B) = max{ max_j Σ_{k=1}^{2n³} B_{j,k}, max_k Σ_{j=1}^{2n³} B_{j,k} }.
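The max-row-sum/max-column-sum formula above is easy to evaluate directly; a minimal sketch (the function name `global_packing_number` is ours):

```python
# Sketch: global packing number of a traffic matrix B via the slide's formula
# GPN = max(max row sum, max column sum), i.e. the chromatic index of the
# induced bipartite multigraph (by Konig's theorem, it equals the max degree).
def global_packing_number(B):
    row_sums = [sum(row) for row in B]
    col_sums = [sum(col) for col in zip(*B)]
    return max(max(row_sums), max(col_sums))

# Uniform all-to-all traffic among 4 hosts (one unit to every other host):
B = [[0 if j == k else 1 for k in range(4)] for j in range(4)]
print(global_packing_number(B))  # 3, matching the toy example's value
```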

SLIDE 20

Network control on fat-tree networks: Stability and delay analysis


SLIDE 21

Analytical results for switches

  • Extensively studied for switches: input-queued switches and output-queued switches
  • Throughput is limited by the head-of-line (HOL) problem, which can be addressed by virtual output queueing (VOQ)
  • Scheduling is mapped to a bipartite graph matching problem, by maximum size matching (O(N^{5/2}) complexity) or by maximum weight matching
  • Only maximum weight matching is throughput optimal (Tassiulas, Ephremides, Kumar, Meyn, McKeown, Anantharam, Walrand, and others)

SLIDE 22

Analytic model and throughput optimality

  • At each time slot, either 0 or 1 packet arrives at each input.
  • Stationary, ergodic arrivals, with traffic rate μ_{j,k} from input node j to output node k.
  • Each node or link can route at most one packet per slot.
  • The arrivals are admissible if Σ_k μ_{j,k} < 1 for all j and Σ_j μ_{j,k} < 1 for all k.
  • A schedule is throughput optimal if it stabilizes all admissible rates.

[Figure: input nodes with virtual output queues connected to output nodes, rate μ_{j,k}]
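The admissibility condition above can be sketched in a few lines (the function name `is_admissible` is ours):

```python
# Sketch: admissibility test for an arrival-rate matrix mu, per the slide:
# every row sum and every column sum must be strictly less than 1.
def is_admissible(mu):
    rows_ok = all(sum(row) < 1 for row in mu)
    cols_ok = all(sum(col) < 1 for col in zip(*mu))
    return rows_ok and cols_ok

mu = [[0.3, 0.4],
      [0.5, 0.2]]
print(is_admissible(mu))  # True: all row and column sums are below 1
```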

SLIDE 23

A short digression: Classification of network control models using the choice-based control perspective


SLIDE 24

A motivating example

Google's Project Loon ("Loon for All")
Image from http://www.google.com/loon/

[Figure: locations requiring attention]

SLIDE 25
Satellite positioning problem

  • Assume a simple linear dynamic model:

    dx(t)/dt = A x(t) + Σ_{i=1}^{L} B_i u_i(t),   x(t₀) ∈ ℝ⁴

  • Agent i can select a point to preferentially serve from a given set: {Q_{i,1}, Q_{i,2}, …, Q_{i,n_i}}
  • Assuming there is a pre-agreed target location for each combination of selected choices, is it possible to find distributed controls that steer the satellite according to the agents' choices without a central coordinator?

SLIDE 26

Choice-based action system

  • Consider a distributed agent system with L agents: l ∈ ℒ = {1, …, L}
  • Agent l has n_l choices: i_l ∈ C_l = {1, …, n_l}
  • The choice combination of all agents:

    i = (i_1, i_2, …, i_L) ∈ C_1 × C_2 × ⋯ × C_L ≡ C

  • For each choice combination there is a target state to be reached: H_i ∈ ℝ^n
  • The number of targets: n_1 × n_2 × ⋯ × n_L
  • Represent all the targets in a tensor H = [H_i] of dimensions n_1 × n_2 × ⋯ × n_L

SLIDE 27

The basic questions

  • Assume agents select their choices with a known (uniform) distribution and the choices remain unchanged.
  • Can any target be reached under joint control of the agents without explicit communication (a central coordinator or agent-to-agent communication)?
  • If not, how much information is needed among the agents?
  • Two extreme cases: no communication versus full communication
    – Is it possible to realize a target from a set of multiple choices with no communication between agents?
    – If full communication is provided, the problem is equivalent to a collection of single-target problems.

SLIDE 28

Systems with linear dynamics

  • Deterministic linear system with L agents:

    dx_i(t)/dt = A x_i(t) + Σ_{l=1}^{L} B_l u_l(t, x_i, i_l),   x_i(0) ∈ ℝ^n    (∗)

    to minimize the cost function

    J = β Σ_{i ∈ C} e_i(t_f)ᵀ e_i(t_f) + γ Σ_{l=1}^{L} ∫₀^{t_f} u_l(t)ᵀ u_l(t) dt,

    where e_i(t_f) = x_i(t_f) − H_i is the target error for the choice i and β, γ are positive normalization factors.
  • Definition: A target matrix is reachable if, for γ = 0, there exist controls such that J = 0.

SLIDE 29

Result for linear systems

  • Theorem (Guo, Liu, Wong): If (∗) is individually controllable, then there is an explicit solution for the above problem. Any target is reachable with open-loop control if and only if, for any agents l and m, any choices i_l, i_l′ in the choice set of agent l, and any choices i_m, i_m′ in the choice set of agent m:

    H_{i_1 ⋯ i_l ⋯ i_m ⋯ i_L} − H_{i_1 ⋯ i_l ⋯ i_m′ ⋯ i_L} = H_{i_1 ⋯ i_l′ ⋯ i_m ⋯ i_L} − H_{i_1 ⋯ i_l′ ⋯ i_m′ ⋯ i_L}

  • For the two-agent case, for any choice set (j, k, l, m):

    H_{j,m} − H_{j,l} = H_{k,m} − H_{k,l}
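The two-agent condition can be checked mechanically; a minimal sketch (the function name `is_compatible` is ours, and "compatible" here means the reachability condition holds):

```python
# Sketch: check the two-agent reachability condition from the slide:
# H[j][m] - H[j][l] must equal H[k][m] - H[k][l] for all rows j, k and
# columns l, m. Equivalently, H decomposes additively as H[j][k] = r[j] + c[k].
from itertools import combinations

def is_compatible(H):
    rows, cols = len(H), len(H[0])
    for j, k in combinations(range(rows), 2):
        for l, m in combinations(range(cols), 2):
            if H[j][m] - H[j][l] != H[k][m] - H[k][l]:
                return False
    return True

# Additive targets are compatible; a generic target matrix is not.
print(is_compatible([[0, 1], [2, 3]]))  # True  (rows differ by a constant)
print(is_compatible([[0, 1], [2, 4]]))  # False
```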

SLIDE 30

[Figure: agents made choices at t₀, t₁ and t₂; arithmetic-mean targets]

SLIDE 31

Incompatible target matrix case (non-reachable without communication): center of the minimum covering circle

[Figure: the circle center is a nonlinear function of the chosen points]

SLIDE 32

Target reaching control with signaling

  • For an incompatible H, signaling is necessary to ensure the targets can be reached.
  • Two-round solution:
    – First round: target signaling
    – Second round: target control
  • Round 1: …; Round 2: …
  • Definition: A code-tensor encodes a target matrix if:
    1. it is a tensor with indices in … and entries in …,
    2. it is compatible,
    3. for any two distinct indices, … whenever ….

SLIDE 33

Planned trajectories of the sensor's position

[Figure: planned trajectories; marked points show code states, ∗ marks target states]

SLIDE 34

Scheduling problems from the choice-based perspective

  • Traffic demands of the hosts are regarded as choices
  • Switches also hold partial information
  • Scheduling algorithm design is based on information exchange arrangement assumptions:
    – Centralized set-up
      • Maximum weight matching
      • Modified Q-CSMA
    – Partially distributed
    – Distributed

Centralized ⟷ Distributed

SLIDE 35

Virtual output queueing model for fat-tree

  • Each host-to-host source-destination pair (j, k) has a virtual queue at the source link, with queue size R_{j,k}(t) at the t-th iteration (O(n⁶) such pairs).
  • Let 𝒬 be the set of paths for all host-to-host communication in a fat-tree network (O(n⁸) paths).
  • Let R(t) be the 2n³ × 2n³ queue length matrix at time t.

[Figure: VOQs 1–4 at Hosts 1–4; non-conflicting routing for the fat-tree]

SLIDE 36

Adopting McKeown et al.'s Longest Queue First (LQF) for fat-tree

  • At each time slot t, solve the problem

    max tr(Rᵀ(t) Q),

    where R(t) is the queue state matrix at time t and Q is a permutation matrix.
  • A routing solution always exists for a permutation traffic.
    → Zero queueing inside the network
    → The solution is throughput optimal
  • The solution requires a centralized controller with access to all queue length information. The selected schedule has to be conveyed to all hosts.
  • Not scalable

Centralized ⟷ Distributed
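The maximization max tr(Rᵀ(t) Q) over permutation matrices is an assignment problem; here is a brute-force sketch for a toy queue matrix (the function name `lqf_schedule` is ours; a real implementation would use a polynomial-time matching algorithm such as the Hungarian method rather than enumerating permutations):

```python
# Sketch: the LQF schedule max tr(R^T Q) over permutation matrices Q, found by
# brute force for a small queue matrix. perm[j] is the output matched to
# input j; the weight is the total queue length served by the schedule.
from itertools import permutations

def lqf_schedule(R):
    n = len(R)
    best_perm, best_weight = None, -1
    for perm in permutations(range(n)):
        weight = sum(R[j][perm[j]] for j in range(n))
        if weight > best_weight:
            best_perm, best_weight = perm, weight
    return best_perm, best_weight

R = [[3, 0, 1],
     [0, 5, 2],
     [4, 1, 0]]
print(lqf_schedule(R))  # ((2, 1, 0), 10): serve the longest queues jointly
```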

SLIDE 37

Applying Q-CSMA to fat-tree

  • Originally developed by Ni, Tan, and Srikant for wireless network Medium Access Control (MAC)
  • Motivated by the Glauber dynamics in physics, where multiple links (in our case paths) can update their states in each slot
  • Aim: obtain non-conflicting schedules (in our case zero-queueing-delay schedules) and guarantee throughput optimality
  • A non-conflicting schedule is a path set whose elements do not have overlapping links. Let ℳ₀ denote a collection of non-conflicting schedules satisfying

    ⋃_{M ∈ ℳ₀} M = 𝒬.

[Figure: one or multiple paths for each host pair; the red paths are selected to transmit packets]

SLIDE 38

Path scheduling algorithm

[Diagram: time slots t, t+1, …; each slot has a control phase and a data phase. Schedule in t + candidate schedule M in t+1 → schedule in t+1]

Operations in a control phase:

  • A time-reversible discrete-time Markov chain (DTMC) is defined that operates in the control phase to find the routing schedule.
  • The DTMC transition probability is defined by randomly selecting a non-conflicting candidate schedule M ∈ ℳ₀ and merging it with the schedule defined by the current Markov state.

SLIDE 39

Throughput optimality

  • The merge algorithm guarantees that at equilibrium the Markov state Y is selected with probability proportional to

    ∏_{(j,k) ∈ Y} α^{R_{j,k}(t)},

    where the product is over all (j, k) node pairs in Y.
  • It can be argued, as in the original Q-CSMA model, that the scheduling algorithm is throughput optimal.
  • However, the complexity is still high:
    – A central controller has to select the candidate schedule
    – Merging operations depend on queue length information (O(n³))

Centralized ⟷ Distributed
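The product-form equilibrium distribution can be made concrete on a toy conflict graph; a minimal sketch (the path names, conflicts, and queue lengths are invented for illustration):

```python
# Sketch: the product-form equilibrium above, on a toy instance. States are
# non-conflicting path sets; each is weighted by prod alpha^R[p] over its
# paths and then normalized. Schedules serving longer queues get more
# probability, which underlies the throughput-optimality argument.
from itertools import combinations

paths = ["a", "b", "c"]
conflicts = {("a", "b")}          # paths a and b share a link
R = {"a": 3, "b": 1, "c": 2}      # queue lengths
alpha = 2.0

def non_conflicting(subset):
    return all((p, q) not in conflicts and (q, p) not in conflicts
               for p, q in combinations(subset, 2))

states = [s for r in range(len(paths) + 1)
          for s in combinations(paths, r) if non_conflicting(s)]
weights = {s: alpha ** sum(R[p] for p in s) for s in states}
Z = sum(weights.values())
pi = {s: w / Z for s, w in weights.items()}
best = max(pi, key=pi.get)
print(best)  # ('a', 'c'): the heaviest non-conflicting schedule dominates
```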

SLIDE 40

Using buffers in the network

  • Buffers are available

[Figure: 6-pod fat-tree with three core groups; core, aggregation, and edge switches and hosts labeled]
SLIDE 41

Aiming for a more scalable solution

[Figure: 6-pod fat-tree with three core groups]

SLIDE 42

Aiming for a more scalable solution

[Figure: 6-pod fat-tree with three core groups]

SLIDE 43

Aiming for a more scalable solution

Assume each POD has an individual controller which has access to all queue information in the POD.

[Figure: 6-pod fat-tree with three core groups]

SLIDE 44

Allowing buffering inside the network

Focusing on each POD: by applying the LQF algorithm on the uplink, we can ensure the uplink edge queues are stable. There is no need for queueing at the uplink aggregate queues.

[Figure: 6-pod fat-tree highlighting uplink edge queues and uplink aggregate queues]

SLIDE 45

Contention at the downlink of the core switches may occur and packets may be queued. If the queue sizes at these downlink queues are announced to all PODs, it is possible to design throughput-optimal scheduling for the network by means of the Tassiulas-Ephremides Lyapunov argument. The amount of message exchange is of order O(n²).

[Figure: fat-tree downlink with core-switch queues]

Centralized ⟷ Distributed

SLIDE 46

Distributed load-balancing via packing and BIBD

  • Basic idea: Core switches (servers) periodically announce the queue size information to aggregation switches (users), which use pre-assigned sequences to obtain load-balancing effects.
  • Assume v servers and b users, each user having k jobs to transmit at each time slot with probability p.
  • Set a threshold T: if a queue size is strictly greater than T, the server will not accept new jobs until the queue size is below the threshold. (It changes from an available server to a non-available server.)
  • The set of available servers at each time slot is known to all users via broadcast.

[Figure: core group as servers, aggregation switches as users]

SLIDE 47

Job scheduling sequences

  • Pre-assign job scheduling sequences to users, one sequence per user for each possible number of available servers.
  • Definition [1]: A balanced incomplete block design (BIBD) is a pair (X, ℬ), where |X| = v and ℬ is a collection of b k-subsets of X (blocks) such that each element of X is contained in exactly r blocks and any 2-subset of X is contained in exactly λ blocks. That is:
    (1) vr = bk
    (2) r(k − 1) = λ(v − 1)
  • Definition [1]: A t-(v, k, λ) packing is a pair (X, ℬ), where |X| = v and ℬ is a collection of k-subsets of X (blocks), such that every t-subset of X is a subset of at most λ blocks.

[1] C. Colbourn and J. Dinitz, Handbook of Combinatorial Designs, Chapman and Hall, 2007.

Centralized ⟷ Distributed
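The definitions above can be checked on the classic v = 7 example (the Fano plane, which matches the 2-(7, 3, 1) BIBD on the next slide); a minimal sketch, where the block construction and function name are ours:

```python
# Sketch: verify that the Fano plane is a 2-(7, 3, 1) BIBD, i.e. every pair of
# points lies in exactly lambda = 1 block. The blocks come from the standard
# difference-set construction {0, 1, 3} shifted mod 7.
from itertools import combinations

v, k, lam = 7, 3, 1
blocks = [{(i + s) % 7 for s in (0, 1, 3)} for i in range(7)]

def is_2_design(points, blocks, lam):
    return all(sum(pair <= b for b in blocks) == lam
               for pair in (set(c) for c in combinations(points, 2)))

print(is_2_design(range(7), blocks, lam))  # True
# Sanity check of the counting identities: vr = bk and r(k-1) = lam(v-1).
r = sum(0 in b for b in blocks)  # replication number, here 3
assert v * r == len(blocks) * k and r * (k - 1) == lam * (v - 1)
```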

SLIDE 48
Examples of packing and BIBD

Example: Given 7 users, 7 servers, and every user transmitting 3 jobs per time slot, we have the following schemes for each number v of available servers:

  • v = 7: 2-(7, 3, 1) BIBD design
  • v = 6
  • v = 5
  • v = 4
  • v = 3: 2-(3, 3, 1) BIBD design
SLIDE 49

Numerical study 1: 7 servers, 7 users, 3 jobs each time slot

User-prob | Service rate | Threshold | Waiting-avg (Random* / BIBD) | Waiting-max (Random / BIBD)
----------|--------------|-----------|------------------------------|----------------------------
100%      | 2            | 2         | 1.500 / 0.500                | 3 / 1
100%      | 2            | 3         | 1.999 / 0.999                | 4 / 2
100%      | 2            | 10        | 5.499 / 4.499                | 7 / 5
100%      | 3            | 3         | – / 0.984                    | – / 2
100%      | 3            | 10        | – / 2.740                    | – / 5
80%       | 2            | 2         | 0.998 / 0.910                | 3 / 3
80%       | 2            | 3         | 1.480 / 1.398                | 4 / 4
80%       | 2            | 10        | 4.972 / 4.893                | 7 / 7
80%       | 3            | 3         | – / 0.203                    | – / 2
80%       | 3            | 10        | – / 0.300                    | – / 5
60%       | 2            | 2         | 0.467 / 0.347                | 3 / 3
60%       | 2            | 3         | 0.647 / 0.470                | 4 / 4
60%       | 2            | 10        | 1.305 / 0.698                | 7 / 7
60%       | 3            | 3         | – / 0.075                    | – / 2
60%       | 3            | 10        | – / 0.081                    | – / 4

*Random means a random 3-subset is selected for each user at each round.

SLIDE 50

Numerical study 2: 15 servers, 35 users, 3 jobs each time slot

User-prob | Service rate | Threshold | Waiting-avg (Random / BIBD) | Waiting-max (Random / BIBD)
----------|--------------|-----------|-----------------------------|----------------------------
100%      | 5            | 5         | 3.413 / 0.600               | 7 / 2
100%      | 5            | 7         | 3.827 / 0.999               | 8 / 2
100%      | 5            | 10        | 4.453 / 1.600               | 8 / 3
100%      | 7            | 7         | – / 1.073                   | – / 5
100%      | 7            | 10        | – / 1.371                   | – / 6
80%       | 5            | 5         | 1.906 / 1.597               | 7 / 7
80%       | 5            | 7         | 2.305 / 1.995               | 8 / 8
80%       | 5            | 10        | 2.904 / 2.595               | 8 / 8
80%       | 7            | 7         | – / 0.123                   | – / 3
80%       | 7            | 10        | – / 0.134                   | – / 3
60%       | 5            | 5         | 0.229 / 0.096               | 3 / 2
60%       | 5            | 7         | 0.274 / 0.099               | 4 / 2
60%       | 5            | 10        | 0.314 / 0.101               | 4 / 3
60%       | 7            | 7         | – / 0.024                   | – / 2
60%       | 7            | 10        | – / 0.024                   | – / 2

SLIDE 51

Implications on networked control


SLIDE 52

General trends

  • Better control of network delays enables more sophisticated, massively parallel, time-critical applications over open networks
    – Remote surgery, interactive virtual reality games, remote control of UAVs/UGVs, etc.
  • More integrated models for control and network communication
  • E.g., application to time-sampled systems with network delay and packet loss

SLIDE 53

Integrated communication and control system

  • In such a system, network control is part of the system control consideration.

[Figure: time-sampled system in a networked control loop]

SLIDE 54

Prior result on sampled systems

  • Assumptions:

– (H1) 𝐵 is unstable and nonsingular, 𝐶 has full-column rank. – (H2) 𝐵 , 𝐶 is a stabilizable pair.

  • Theorem: [Tan and Zhang] Under the above assumptions and that the

packet dropout rate is fixed then the system is stabilizable if and only if the packet dropout out rate is strictly less than 𝑞𝑛𝑏𝑦 where 𝑞𝑛𝑏𝑦 is obtained by solving :

For scalar systems,

𝑦𝑙+1 = 𝐵 𝑦𝑙 + 𝛿𝑙𝐶 𝑣𝑙−𝑒

𝑞𝑛𝑏𝑦 = sup

𝑇>0,𝑍

𝑞 𝑇 > 0, 0 < 𝑞 < 1, −𝑇 ∗ ∗ 𝐵 𝑇 + 1 − 𝑞 𝐶 𝑍 −𝑇 ∗ 𝑞(1 − 𝑞)𝐵 𝑒𝐶 𝑍 −𝑇 < 0. 𝑞𝑛𝑏𝑦 = 1 𝐵 2𝑒+2 − 𝐵 2𝑒 + 1

SLIDE 55

Multiple paths to improve delay performance

[Figure: time-sampled system with multiple network paths]

SLIDE 56

Control under SDN (CuSDN)

Sampling period: h; A_h = e^{Ah}
System state: x_k = x(kh) ∈ ℝ^m
Control delay dh; control u_{k−d} = u(x_{k−d})
Delay for the k-th sample: d_k = d_k^{td} + d_k^{db}
Assume that delays are i.i.d. with distribution F(t) = Prob(d ≤ t)
Packet loss probability: Prob(δ_k = 0) = 1 − F(dh)
For the multiple-path case, assuming path independence with m paths:

Prob(δ_k = 0) = (1 − F(dh))^m
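The loss model above is easy to sketch for an exponential delay distribution (the function name `loss_probability` and the numeric parameters are ours):

```python
# Sketch: packet-loss probability under the CuSDN model with exponential delay
# F(t) = 1 - exp(-lambda*t): a packet is lost if it misses the deadline d*h,
# and with m independent paths every copy must miss it.
import math

def loss_probability(lam, d, h, m=1):
    F = 1.0 - math.exp(-lam * d * h)   # Prob(delay <= d*h) on one path
    return (1.0 - F) ** m              # all m paths arrive late

print(round(loss_probability(0.5, 2, 1.0, m=1), 4))  # 0.3679
print(round(loss_probability(0.5, 2, 1.0, m=2), 4))  # 0.1353
```

Adding a second independent path squares the loss probability, which is the mechanism the next theorem exploits.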

SLIDE 57
  • Theorem (Wong and Tan): Consider a scalar CuSDN system satisfying assumptions H1–H2, with m independent paths and delay distribution F(t) = 1 − e^{−λt} for some λ > 0.
    (1) If λm > 2A, the system is stabilizable for any sampling period h.
    (2) If λm = 2A, the system is stabilizable if and only if 0 < h < h_max, where h_max = ln 2 / (2A).
    (3) If λm < 2A, the system is stabilizable if and only if 0 < h < h_max, where

    h_max = t* ⌈ 2A t* / ln(e^{(λm−2A)t*} − e^{−2At*} + 1) ⌉^{−1},
    t* = (ln(2A) − ln(2A − λm)) / (λm).
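Case (3) can be computed directly; a minimal sketch (the function name `h_max` is ours) that reproduces the numbers on the next slide for A = 0.25, λ = 0.2, m = 2:

```python
# Sketch: h_max for case (3) of the theorem, with t* as defined above.
import math

def h_max(A, lam, m):
    assert lam * m < 2 * A, "formula applies to case (3) only"
    t_star = (math.log(2 * A) - math.log(2 * A - lam * m)) / (lam * m)
    ratio = 2 * A * t_star / math.log(
        math.exp((lam * m - 2 * A) * t_star) - math.exp(-2 * A * t_star) + 1)
    return t_star / math.ceil(ratio), t_star, ratio

h, t_star, ratio = h_max(0.25, 0.2, 2)
print(round(t_star, 4), round(ratio, 4), round(h, 4))  # 4.0236 4.6947 0.8047
```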

SLIDE 58

Consider A = 0.25, B = 1, d = 2, m = 2, x₀ = 2, u₋₂ = u₋₁ = 0.

If λ₁ = 0.5, then λ₁m > 2A, and the system is stabilizable for any sampling period h > 0.

If λ₂ = 0.2, then λ₂m < 2A. Then

t* = (ln(2A) − ln(2A − mλ₂)) / (mλ₂) = 4.0236,
2A t* / ln(e^{(mλ₂−2A)t*} − e^{−2At*} + 1) = 4.6947, so the ceiling is 5, and
h_max = 0.8047.

[Figure: simulation with h = 1]

SLIDE 59

Future directions for CuSDN

  • Extension to more complex sampled systems:
    – High-dimensional with stochastic noise
  • Dealing with quantized packets
  • Control signals with target activation time
    – Controller designs controls with targeted application times.
    – Plant implements control with simple logic.
    – Target time can adapt to network congestion.

SLIDE 60

THANK YOU