Acoustic Monitoring using Wireless Sensor Networks
Presented by: Farhan Imtiaz
Seminar in Distributed Computing, 3/15/2010
Wireless Sensor Networks
A wireless sensor network (WSN) is a wireless network consisting of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions.
Acoustic source localization: given a set of acoustic sensors at known positions and an acoustic source whose position is unknown, estimate its location. Estimate the distance or angle to the acoustic source at distributed points (an array), then calculate the intersection of the distance circles or the crossing of the bearings.
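As a hedged illustration (not from the talk), the distance-intersection step can be sketched as 2-D multilateration: subtracting one range equation from the others linearizes the circle intersections, which a tiny solver can handle. Sensor positions and ranges below are invented for the example.

```python
# Hedged sketch: locate a source from known sensor positions and
# measured distances (2-D multilateration; positions/ranges invented).
import math

def locate(sensors, ranges):
    """Intersect three range circles via the linearized equations."""
    (x0, y0), (x1, y1), (x2, y2) = sensors
    r0, r1, r2 = ranges
    # Subtracting circle 0 from circles 1 and 2 gives two linear equations:
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1            # solve the 2x2 system (Cramer's rule)
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_src = (3.0, 4.0)
ranges = [math.dist(s, true_src) for s in sensors]
print(locate(sensors, ranges))  # ≈ (3.0, 4.0)
```

In practice the ranges are noisy, so with more than three sensors one would do a least-squares fit over the same linearized equations instead of an exact solve.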
Deployment architecture:
- In-field PDA and control console
- Gateway
- Mesh network of deployed nodes (on-line operation)
- Compute server and storage server (off-line operation and storage)
- Linked via the Internet or sneakernet
Programming workflow:
- Write the program (node-side part and sink-side part)
- Optimizing compiler
- Disseminate to nodes
- Run the program
// acquire data from source, assign to four streams
(ch1, ch2, ch3, ch4) = VoxNetAudio(44100)
// calculate energy
freq = fft(hanning(rewindow(ch1, 32)))
bpfiltered = bandpass(freq, 2500, 4500)
energy = calcEnergy(bpfiltered)
Development cycle happens in-field, interactively
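For illustration, the same energy-detector pipeline can be sketched in plain Python (the original is WaveScript; the windowing, band-pass, and energy steps are modeled here with a naive DFT over 32-sample frames, which is an assumption for the sketch, not the real runtime):

```python
# Hedged Python sketch of the energy-detector pipeline above:
# Hann window -> DFT -> keep 2500-4500 Hz bins -> sum energy.
import cmath, math

def hann(n):
    return [0.5 * (1 - math.cos(2 * math.pi * i / (n - 1))) for i in range(n)]

def band_energy(frame, fs=44100, lo=2500, hi=4500):
    """Energy of one windowed frame inside the [lo, hi] Hz band."""
    n = len(frame)
    x = [s * w for s, w in zip(frame, hann(n))]
    energy = 0.0
    for k in range(n // 2 + 1):                  # naive DFT, positive bins
        if lo <= k * fs / n <= hi:
            X = sum(x[i] * cmath.exp(-2j * math.pi * k * i / n)
                    for i in range(n))
            energy += abs(X) ** 2
    return energy

# A 4 kHz tone (inside the band) yields far more band energy than a
# 500 Hz tone (outside it):
tone = lambda f, n=32, fs=44100: [math.sin(2 * math.pi * f * i / fs)
                                  for i in range(n)]
print(band_energy(tone(4000)) > band_energy(tone(500)))
```

A real node would of course use an FFT rather than this O(n²) DFT; the point is only the shape of the per-frame computation.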
The compiled code uses about 30% less CPU. The extra resources mean that data can be archived to disk as well as processed ('spill to disk', where the local stream is pushed to the storage co-processor). [Diagram: data acquisition → event detector, with a 'spill to disk' path.]
[Chart: CPU usage, C vs. WS (WaveScript).]
Network latency becomes a problem if the scientist wants results in under 5 seconds (otherwise the animal might change position).
Data processing time vs. network latency: sending the raw data to be processed at the sink is compared with processing it locally and sending only 800 B. As the number of hops from the sink increases, the benefit of processing the data locally is clearly seen.
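The trade-off can be sketched with a toy latency model (all numbers below are illustrative assumptions, not measurements from the talk; only the 800 B figure comes from the slide):

```python
# Hedged sketch: ship raw samples vs. process locally and send 800 B.
# per_hop_bw and payload sizes are invented for illustration.

def total_latency(payload_bytes, hops, per_hop_bw=2500.0, proc_time=0.0):
    """Seconds to process a payload and forward it over `hops` hops,
    assuming a fixed effective bandwidth (bytes/s) per hop."""
    return proc_time + hops * (payload_bytes / per_hop_bw)

RAW = 32000    # raw audio chunk, assumed size
SMALL = 800    # processed detection record, as in the slide

for hops in (1, 3, 6):
    ship_raw = total_latency(RAW, hops)                    # process at sink
    local = total_latency(SMALL, hops, proc_time=1.0)      # process on node
    print(hops, round(ship_raw, 2), round(local, 2))
```

Even with a full second charged for on-node processing, the 800 B payload wins as soon as the path is more than a hop or two long, which matches the slide's qualitative claim.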
Intra-path interference: successive transmissions along a single path (nodes A, B, C) interfere with each other.
Inter-path interference: transmissions on one path (A → B) interfere with those on a neighbouring path (C → D).
End-to-end reliability: the sink requests the data, and the source sends the data in reply. The sink then sends a selective negative acknowledgement if some packets were not received correctly, and the source retransmits only those packets.
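The request/reply/selective-NACK exchange above can be sketched as follows (a simplified in-memory model; the real protocol runs the same logic over a lossy radio link):

```python
# Hedged sketch of the selective-NACK transfer described above.

def transfer(packets, lost_on_first_try):
    """Deliver all packets: send each once, then retransmit only the
    ones the receiver NACKed."""
    received = {i: p for i, p in enumerate(packets)
                if i not in lost_on_first_try}            # first pass
    nack = sorted(set(range(len(packets))) - received.keys())
    for i in nack:                                        # selective repair
        received[i] = packets[i]
    return [received[i] for i in range(len(packets))], nack

data, nacked = transfer(list("abcdef"), lost_on_first_try={1, 4})
print(nacked)  # → [1, 4]
```

The point of the selective NACK is that the repair round carries only the missing sequence numbers, so a mostly-successful first pass costs almost nothing extra.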
Rate derivation (a chain of nodes forwarding packets to the base station):
- N = 1: rate = 1 (Node 1 can transmit to the base station continuously).
- N = 2: rate = 1/2 (Node 2 must wait while Node 1 forwards its packet).
- N ≥ 3 with interference range I = 1: rate = 1/3 (a node must also stay silent while its packet is forwarded beyond its interference range).
In general, the maximum sustainable sending rate with N nodes and interference range I is r(N, I) = 1 / min(N, 2 + I).
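The formula can be written down directly and checked against the three cases worked out on the preceding slides:

```python
# The rate formula from the slide: r(N, I) = 1 / min(N, 2 + I).

def rate(n_nodes, interference):
    """Max sustainable per-node sending rate on an N-hop path with
    interference range I."""
    return 1.0 / min(n_nodes, 2 + interference)

# The three slide cases (I = 1): N = 1, N = 2, and N >= 3.
print(rate(1, 1), rate(2, 1), rate(5, 1))  # → 1.0 0.5 0.333...
```

Note that for N ≥ 2 + I the path length stops mattering: the bottleneck is purely the interference window around each transmission.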
Rule 1: a node should only transmit when its successor is free from interference.
Rule 2: a node's sending rate cannot exceed the sending rate of its successor.
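Rule 2 can be sketched as a single pass from the sink outward: each node's effective rate is capped by the effective rate of its successor (the node one hop closer to the sink). The list ordering and sample rates below are assumptions for the illustration.

```python
# Hedged sketch of Rule 2: rates propagate from the sink outward,
# so no node sends faster than its successor can forward.

def effective_rates(local_rates):
    """local_rates[0] is the node next to the sink; each later entry is
    one hop further out. Cap every node by its successor's rate."""
    rates, cap = [], float("inf")
    for r in local_rates:
        cap = min(cap, r)      # successor chain caps this node
        rates.append(cap)
    return rates

print(effective_rates([0.5, 1.0, 0.25, 1.0]))  # → [0.5, 0.5, 0.25, 0.25]
```

The effect is that a slow node throttles everything behind it, which prevents queues from building up mid-path.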
In the measured results, part of the delay is due to queueing and part to the protocol itself. Packet delivery is better than with a fixed sending rate, but the protocol introduces some additional overhead.
[Chart: packet delivery at the 6th hop — 62%, 77%, 95%, 99.5%, and 47% across the compared schemes.]
Transfer-phase byte throughput: the results take into account the extra 3-byte rate-control header. Flush achieves a good fraction of the throughput of "ASAP", with a 65% lower loss rate.
Transfer phase packet throughput. Flush provides comparable throughput with a lower loss rate.