SLIDE 1

A bit of background

  • When the Urban Challenge got rolling, a software framework had to be chosen
    – ModUtils from the Grand Challenge/NavLab
    – CMU IPC from … everything
    – A NIST package used over at NREC
    – Third-party packages

SLIDE 2

ModUtils

  • Upside: It did everything
    – The Grand Challenge and NavLab provide examples of what it is capable of
  • Downside: It did everything
    – It was written in the wild west
    – Internal implementations of marshalling, comms, log file format, a mini-STL, etc.
    – Tracking down bugs is intimidating
  • Minimal documentation
    – Developed a reputation for a steep learning curve

SLIDE 3

CMU IPC

  • Develop programs around IPC
    – Ad-hoc, simple infrastructure for relatively small systems
    – Support packages like MicroRaptor (a process management system)
  • Maybe a bit spartan
SLIDE 4

NIST package

  • Stodgy, bureaucratic mess
    – Lots of arcane text files to configure minor details
  • Not particularly development friendly
  • No mandate, don't bother
SLIDE 5

Third Party Systems

  • Basic/unsuitable
    – No evidence of high-performance uses
    – Design choices that are obviously not suitable performance-wise
    – Limited capabilities
  • Incomplete
    – First have to understand the system, then extend it
  • Incompatible
    – Not shopping for a new model, just an implementation

SLIDE 6

Framework of Reuse

  • Take the ModUtils model, re-implement with a priority on simplicity and reuse
    – Config files: Ruby
    – Marshalling: boost::serialization
    – Comms: CMU IPC
    – Log file format: Berkeley DB
    – UI: Qt integrated with the interface mechanism
    – Task library to glue it all together

SLIDE 7

task library

  • A software jig
    – Performs common functions all tasks require
  • Uses Ruby to evaluate a script that results in configuration values
    – The Ruby can either contain static values, or dynamically generate values based on external stimuli
  • Instantiates and configures interfaces
    – C++ virtual classes, implementations dynamically loaded at run time depending on configuration (see the sketch below)
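
A minimal sketch of what that run-time loading could look like; the Interface base class, the library path, and the createInterface factory symbol are hypothetical stand-ins, not the actual task library API:

    #include <dlfcn.h>
    #include <stdexcept>
    #include <string>

    struct Interface {                          // stand-in for the C++ virtual base class
        virtual void configure(const std::string& cfg) = 0;
        virtual ~Interface() {}
    };

    // Load the implementation named by the (Ruby-evaluated) configuration.
    Interface* loadInterface(const std::string& libPath) {
        void* handle = dlopen(libPath.c_str(), RTLD_NOW);
        if (!handle)
            throw std::runtime_error(dlerror());

        // "createInterface" is a hypothetical factory symbol each plugin exports.
        typedef Interface* (*Factory)();
        Factory create = (Factory)dlsym(handle, "createInterface");
        if (!create)
            throw std::runtime_error(dlerror());
        return create();
    }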

SLIDE 8

Interfaces

  • At the heart of the interface base class is a Channel
    – Not every interface uses the channel, as on the perimeter of the system they actually interact with hardware, etc.
    – But the bulk of interfaces are conduits to remote interfaces or other tasks
  • TypedChannel<T> reads/writes instances of type T
    – It internally marshals or unmarshals T to a byte stream with boost::serialization
  • Channel just acts on std::string's (see the sketch below)
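
A sketch of how TypedChannel<T> might sit on top of a string-based Channel; the Channel interface shown and the use of binary archives are assumptions for illustration, not the actual classes:

    #include <boost/archive/binary_iarchive.hpp>
    #include <boost/archive/binary_oarchive.hpp>
    #include <sstream>
    #include <string>

    struct Channel {                       // assumed string-based API
        virtual void write(const std::string& bytes) = 0;
        virtual std::string read() = 0;
        virtual ~Channel() {}
    };

    template <class T>
    class TypedChannel {
    public:
        explicit TypedChannel(Channel& c) : chan_(c) {}

        void write(const T& value) {       // marshal T into a byte stream
            std::ostringstream os;
            boost::archive::binary_oarchive oa(os);
            oa << value;
            chan_.write(os.str());
        }

        T read() {                         // unmarshal T from a byte stream
            std::istringstream is(chan_.read());
            boost::archive::binary_iarchive ia(is);
            T value;
            ia >> value;
            return value;
        }

    private:
        Channel& chan_;
    };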
SLIDE 9

Why develop SimpleComms?

  • Started with IPC
    – Readily available
      • Original authors are local
      • Team members have had prior experience
    – Simple to use
      • Launch central on one host
      • Set the environment variable CENTRALHOST to that host on each machine
      • IPC_connect, IPC_subscribe, IPC_publish, IPC_handleMessage (see the sketch below)
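
A minimal sketch of that usage pattern built from the calls named above; the message name and payload are hypothetical, and the exact signatures should be checked against the CMU IPC headers:

    /* Run central on one host and set CENTRALHOST before starting. */
    #include <ipc.h>

    static void onStatus(MSG_INSTANCE msg, BYTE_ARRAY data, void* clientData) {
        /* consume the raw bytes, then release them */
        IPC_freeByteArray(data);
    }

    int main(void) {
        IPC_connect("exampleTask");                  /* register with central */
        IPC_defineMsg("status", IPC_VARIABLE_LENGTH, NULL);
        IPC_subscribe("status", onStatus, NULL);

        char payload[] = "hello";
        IPC_publish("status", sizeof payload, payload);

        for (;;)
            IPC_handleMessage(IPC_WAIT_FOREVER);     /* pump incoming messages */
    }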

SLIDE 10
Why not stick with IPC?

  • All communications routed through one daemon in central mode
    – Multiple trips across the network
    – Serializes everything through one bottleneck

[Diagram: central mode. Tasks on Machines One, Two, and Three all route through a single Central daemon.]

SLIDE 11

Limits of IPC

  • There is a direct mode
    – It *cough* works *cough*
    – Still necessitates multiple network trips per publish

[Diagram: direct mode. Tasks on the three machines connect to each other directly, with Central still present.]

SLIDE 12

Limits of IPC

  • If the intermediate buffers are nearly full, it's possible to enter a deadlock where both parties are writing: each side blocks in its write while neither is reading, so the buffers never drain.

[Diagram: deadlock. A task and central are both blocked writing to each other.]

SLIDE 13

SimpleComms Topology

  • Local routing daemons on each machine
    – Frees tasks from network comms/multiple deliveries
    – One delivery per host

[Diagram: SimpleComms. Each of the three machines runs a local SCS daemon; tasks talk to their local SCS, and the SCS daemons interconnect.]

SLIDE 14

SCS Internal Message Format

  • Externally, SCS accepts/conveys std::string's of any length
  • Internally, messages are segmented to a bit less than 64k (see the sketch below)
    – Trade-off between throughput and connectivity
    – Compatible with UDP
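
A sketch of the kind of segmentation this implies; the FragmentHeader layout (message id, fragment index, fragment count) is an assumption for illustration, not the actual SCS wire format:

    #include <cstdint>
    #include <string>
    #include <vector>

    struct FragmentHeader {              // hypothetical header layout
        uint32_t messageId;
        uint16_t index;
        uint16_t count;
    };

    // Split an arbitrary-length message into fragments that fit in a UDP datagram.
    std::vector<std::string> segment(const std::string& msg, uint32_t id) {
        const size_t kMax = 65000 - sizeof(FragmentHeader);   // a bit less than 64k
        const size_t count = msg.empty() ? 1 : (msg.size() + kMax - 1) / kMax;

        std::vector<std::string> frags;
        for (size_t i = 0; i < count; ++i) {
            FragmentHeader h = { id, (uint16_t)i, (uint16_t)count };
            std::string f((const char*)&h, sizeof h);
            f.append(msg, i * kMax, kMax);    // append() clamps at the end of msg
            frags.push_back(f);
        }
        return frags;
    }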

SLIDE 15

SimpleComms Protocols

  • Local communication is done with Unix domain sockets in datagram mode
    – Originally selected for the simplicity of having structured connectivity
    – Each task and SCS binds to a socket in Linux's abstract namespace to receive messages (see the sketch below)
    – The downside with datagrams is a discrepancy in the criteria used by select and write (number of outstanding messages versus bytes)
    – Blocking I/O is used to avoid spinning between select/write
    – The workaround was sufficient; otherwise SCS would have been transitioned to stream mode
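
A minimal sketch of binding a datagram Unix domain socket in Linux's abstract namespace, as each task and SCS does; the socket name here is hypothetical:

    #include <cstddef>
    #include <cstring>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int bindAbstract(const char* name) {      // e.g. "scs.task42" (hypothetical)
        int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return -1; }

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof addr);
        addr.sun_family = AF_UNIX;
        addr.sun_path[0] = '\0';              // leading NUL selects the abstract namespace
        strncpy(addr.sun_path + 1, name, sizeof(addr.sun_path) - 2);

        socklen_t len = offsetof(struct sockaddr_un, sun_path) + 1 + strlen(name);
        if (bind(fd, (struct sockaddr*)&addr, len) < 0) {
            perror("bind");
            close(fd);
            return -1;
        }
        return fd;                            // ready for recv()/sendto() datagrams
    }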

SLIDE 16

SimpleComms Protocols

  • Broadcast UDP is used for discovery (see the sketch below)
    – “Zero conf”: the SCS daemons discover each other and automatically connect
    – The default port and the Ethernet interface used for broadcast are the points of configurability
      • Changing the port allows running independent networks, e.g. for simulations
      • Binding to a particular interface keeps SCS instances on different robots from mixing
        – eth0 is masked /16, the facility network
        – eth0:1 is masked /24, limited to the robot
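
A sketch of the announce side of such discovery; the port and payload are hypothetical, and binding to a specific interface (as discussed above) is left out:

    #include <arpa/inet.h>
    #include <cstdint>
    #include <cstring>
    #include <sys/socket.h>
    #include <unistd.h>

    // Announce this daemon's presence on a configurable broadcast port.
    void announce(uint16_t port) {            // changing the port isolates networks
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int on = 1;
        setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &on, sizeof on);

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port = htons(port);
        dst.sin_addr.s_addr = htonl(INADDR_BROADCAST);

        const char msg[] = "SCS_HELLO";       // hypothetical announce payload
        sendto(fd, msg, sizeof msg, 0, (struct sockaddr*)&dst, sizeof dst);
        close(fd);
    }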

SLIDE 17

SimpleComms Protocols

  • Developed both UDP and TCP modules for inter-host connectivity
    – Concerns about TCP delays from dropped packets and retransmits were not really an issue on a robot with high-quality Ethernet hardware, completely integrated into the chassis
    – Losing messages to UDP, or developing retransmit logic, never became a priority
    – A pair of TCP connections between each machine, used as one-way conduits
      • A side effect of concurrent discovery (each daemon opens its own connection when it discovers the other)
SLIDE 18

SimpleComms Multithreading

  • SCS has two primary threads (see the sketch below)
    – A reading thread polls for input and distributes messages to subscribed queues
    – A writing thread empties the queues when sockets flag available

[Diagram: dedicated threads servicing buffers. A task's Unix-socket client connects to the SCS read and write threads.]
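
A rough sketch of the queue sitting between those two threads; the class and its API are illustrative, not the SCS internals:

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <string>

    class OutQueue {                       // one per subscribed client
    public:
        void push(std::string msg) {       // reading thread: distribute a message
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(msg));
            cv_.notify_one();
        }

        std::string pop() {                // writing thread: wait for work, then send
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !q_.empty(); });
            std::string msg = std::move(q_.front());
            q_.pop();
            return msg;
        }

    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<std::string> q_;
    };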

SLIDE 19

SimpleComms Multithreading

  • The client library contains a receiving thread
    – Services incoming messages even when the task is busy
    – Messages are sent immediately

[Diagram: as above, dedicated threads servicing the buffers between the task and SCS.]

SLIDE 20

Vectored I/O

  • Minimize in-memory copies
    – Use vectored I/O to deliver directly to the destination (see the sketch below)

[Diagram: with vectored I/O, the header is peeked and recvmsg lands the fragment at its destination in one copy; with a plain recv, the header and fragment arrive in a temporary buffer (copy #1) and the fragment is then moved into place (copy #2).]
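
A sketch of that receive path, reusing the hypothetical FragmentHeader from the segmentation sketch above; the reassembly policy shown is illustrative:

    #include <cstring>
    #include <string>
    #include <sys/socket.h>
    #include <sys/uio.h>

    // Receive one fragment, landing the payload directly in the message buffer.
    bool recvFragment(int fd, std::string& message, size_t maxFrag) {
        FragmentHeader hdr;                  // hypothetical header from earlier
        // Peek just the header to learn where this fragment belongs.
        if (recv(fd, &hdr, sizeof hdr, MSG_PEEK) != (ssize_t)sizeof hdr)
            return false;

        size_t offset = (size_t)hdr.index * maxFrag;
        if (message.size() < offset + maxFrag)
            message.resize(offset + maxFrag);  // trimming the final tail not shown

        // Vectored read: header into scratch, payload straight into place.
        struct iovec iov[2];
        iov[0].iov_base = &hdr;
        iov[0].iov_len  = sizeof hdr;
        iov[1].iov_base = &message[offset];
        iov[1].iov_len  = maxFrag;

        struct msghdr msg;
        memset(&msg, 0, sizeof msg);
        msg.msg_iov = iov;
        msg.msg_iovlen = 2;
        return recvmsg(fd, &msg, 0) >= (ssize_t)sizeof hdr;
    }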