Reliable transport and Congestion Control in Wireless


Multihop Over-the-Air Programming
Thanos Stathopoulos
LECS Lab, UCLA
Introduction

Nature of sensor networks
 Expected to operate for long periods of time
 Human intervention impractical or detrimental to sensing process

Nevertheless, code needs to be updated
 Add new functionality
  Incomplete knowledge of environment
  Predicting right set of actions is not always feasible
 Fix bugs
 Maintenance
Example: ESS at James Reserve

It’s there!
 20
motes deployed
 Target: 100 at first

But what about bug fixes, or new
functionality that’s not there yet?
Well…
It would be better if…
Reprogramming Approaches

Use a VM and transfer capsules
 Advantage
  Low energy cost
 Disadvantages
  Not as flexible as full binary update
  VM required

Transfer the entire binary to the motes
 Advantage
  Maximum flexibility
 Disadvantage
  High energy cost due to large volume of data

Reliability is required regardless of approach
Previous work


Crossbow Network Programming (XNP)
 Single hop
  One sender, N receivers
 Sender sends the entire file, in ‘code capsules’ (segments)
  Each capsule carries an application-layer sequence number (segment number), so it can be stored in the correct address
 Receivers store each capsule in EEPROM
 After transmission is complete, receivers read through EEPROM to find gaps (see the sketch below)
  Retransmission requests sent if gaps found
 Sender polls each receiver to find out whether they have the entire image
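To make the XNP-style approach concrete, here is a minimal C sketch of storing fixed-size code capsules at the EEPROM address implied by their segment number and then scanning for gaps afterwards. All names (capsule_store, find_gaps, the segment size, and the "erased EEPROM reads as 0xFF" assumption) are illustrative, not Crossbow's actual code; the EEPROM is simulated with a RAM buffer so the sketch runs on a PC.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define SEGMENT_SIZE 16          /* bytes per code capsule (illustrative) */
    #define MAX_SEGMENTS 64
    #define EMPTY_BYTE   0xFF        /* assumed erased-EEPROM marker */

    /* Simulated EEPROM image; on a mote these would be EEPROM read/write calls. */
    static uint8_t eeprom[MAX_SEGMENTS * SEGMENT_SIZE];

    /* Store a capsule at the address implied by its segment number. */
    void capsule_store(uint16_t seg_no, const uint8_t *data)
    {
        memcpy(&eeprom[(uint32_t)seg_no * SEGMENT_SIZE], data, SEGMENT_SIZE);
    }

    /* After the transfer, scan EEPROM and report missing segments. */
    int find_gaps(uint16_t total_segments, uint16_t *missing, int max_missing)
    {
        int count = 0;
        for (uint16_t s = 0; s < total_segments && count < max_missing; s++) {
            const uint8_t *p = &eeprom[(uint32_t)s * SEGMENT_SIZE];
            int empty = 1;
            for (int i = 0; i < SEGMENT_SIZE; i++)
                if (p[i] != EMPTY_BYTE) { empty = 0; break; }
            if (empty)
                missing[count++] = s;   /* candidate for a retransmission request */
        }
        return count;
    }

    int main(void)
    {
        uint8_t capsule[SEGMENT_SIZE] = {0};
        uint16_t missing[MAX_SEGMENTS];
        memset(eeprom, EMPTY_BYTE, sizeof(eeprom));
        capsule_store(0, capsule);
        capsule_store(2, capsule);      /* segment 1 is "lost" */
        int n = find_gaps(3, missing, MAX_SEGMENTS);
        printf("%d missing segment(s), first is %u\n", n, (unsigned)(n ? missing[0] : 0));
        return 0;
    }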
MOAP: Overview
Code distribution mechanism specifically targeted for Mica2 motes
 Full binary updates
 Multi-hop operation achieved through recursive single-hop broadcasts
 Energy and memory efficient

Requirements and Properties of Multihop Code Distribution

 The complete image must reach all nodes
  Reliability mechanism required
 If the image doesn’t fit in a single packet, it must be placed in stable storage until transfer is complete
 Network lifetime shouldn’t be significantly reduced by the update operation
 Memory and storage requirements should be moderate
Resource Prioritization

Energy: Most important resource
 Radio operations are expensive
  TX: 12 mA
  RX: 4 mA
 Stable storage (EEPROM)
  Optimized for Read() operations
  Write()s are expensive
  But, everything must be stored

Goals
 Minimize transmissions
 Minimize Write()s
Resource Prioritization

Memory usage
 Static RAM
  Only 4K available on current generation of motes
  Code update mechanism should leave ample space for the real application
 Program memory
  MOAP must transfer itself as well as the new code
   Otherwise, code updates will only happen once
  Large image size means more packets transmitted!
   Not an issue if one can send differences (‘diffs’)

Goals
 Minimize RAM consumption
 Use diffs
Resource Prioritization
Something’s got to give…
 Latency
  Updates don’t respond to real-time phenomena
  Update rate is infrequent
 Can be traded off for reduced energy usage
  Unfortunately, not for RAM usage
Design Choices

Dissemination protocol: How is data propagated?
 Concurrently
  Traditional IP multicast mechanisms
   Tree construction either at the source(s) or at rendezvous points
   In MOAP, all nodes must be reached
    Tree must span the entire network
    Expensive maintenance
    State requirements too high for sensor nets
  Diffusion
   Soft state reduces memory requirements
   Currently, TinyDiffusion is not optimized for many-to-all dissemination
  Flooding
   Minimal state requirements
   Low energy efficiency
 In steps
  Ripple (neighborhood-by-neighborhood)
   Low state requirements
   Slow
Design Choices

Reliability mechanism: How are repairs handled?
 Repair scope: local vs global
  Answer depends on dissemination protocol
 Loss detection responsibility
  ACKs vs NACKs
Design Choices

Segment management
 A segment is MOAP’s unit of data, used for transfer and storage
  Currently aligned to an EEPROM line
 MOAP needs to store all segments
  Out-of-order delivery and losses likely
 Indexing segments and gap detection
  Memory hierarchy
  Sliding window
Ripple Dissemination

Transfer data neighborhood-by-neighborhood
 Neighborhood: nodes in the same broadcast domain
 Single-hop
 Recursively extended to multi-hop

Goal: very few sources at each neighborhood
 Preferably, only one
 Receivers attempt to become sources when they have the entire image
 Publish-subscribe interface prevents nodes from becoming sources if another source is present

Leverage the broadcast medium
 If data transmission is in progress, a source will always be one hop away!
  Allows local repairs

Increased latency
 Ripple: O(h*D)
 Flooding: O(D)
Reliability Mechanism

Loss responsibility lies on receiver
 Only one node to keep track of (sender)
NACK-based
 In line with IP multicast and WSN reliability schemes
Local scope
 No need to route NACKs
  Energy and complexity savings
 Affordable, since all nodes will eventually have the same image
Retransmission Policies

Broadcast RREQ, no suppression
 Simple
 High probability of successful reception
 Highly inefficient
 Zero latency

Broadcast RREQ, suppression based on randomized timers (see the sketch below)
 Quite efficient
 Complex
 Latency and successful reception based on randomization interval
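The randomized-timer suppression policy can be illustrated with a short C sketch: when a gap is detected, the node schedules its repair request (RREQ) with a random delay; if it overhears a neighbor's request for the same segment before the timer fires, it stays quiet. The structure, function names, and the 200 ms window are assumptions for illustration, not MOAP's actual parameters.

    #include <stdint.h>
    #include <stdlib.h>
    #include <stdio.h>

    #define SUPPRESS_WINDOW_MS 200   /* illustrative randomization interval */

    struct pending_rreq {
        uint16_t segment;     /* segment we are missing */
        uint32_t fire_at_ms;  /* when our own repair request would go out */
        int      suppressed;  /* set if we overhear the same request first */
    };

    /* Schedule a repair request with a random delay inside the window. */
    void rreq_schedule(struct pending_rreq *r, uint16_t segment, uint32_t now_ms)
    {
        r->segment    = segment;
        r->fire_at_ms = now_ms + (uint32_t)(rand() % SUPPRESS_WINDOW_MS);
        r->suppressed = 0;
    }

    /* Called when a repair request from a neighbor is overheard. */
    void rreq_overheard(struct pending_rreq *r, uint16_t segment)
    {
        if (!r->suppressed && segment == r->segment)
            r->suppressed = 1;    /* someone else asked already; stay quiet */
    }

    /* Called on a timer tick; returns 1 if we should broadcast our own request now. */
    int rreq_should_send(const struct pending_rreq *r, uint32_t now_ms)
    {
        return !r->suppressed && now_ms >= r->fire_at_ms;
    }

    int main(void)
    {
        struct pending_rreq r;
        rreq_schedule(&r, 7, 0);
        rreq_overheard(&r, 7);    /* neighbor's RREQ for segment 7 arrives first */
        printf("send our own RREQ at t=%d ms? %d\n",
               SUPPRESS_WINDOW_MS, rreq_should_send(&r, SUPPRESS_WINDOW_MS));
        return 0;
    }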
Retransmission Policies (cont’d)

Broadcast RREQ, fixed reply probability
 Simple
 Good probability of successful reception
 Latency depends on probability of reply
 Average efficiency

Broadcast RREQ, adaptive reply probability
 More complex than the static case
 Similar latency/reception behavior

Unicast RREQ, single reply
 Highest efficiency
 Smallest probability of successful reception
 Simple
  Complexity increases if source fails
 Zero latency
  High latency if source fails
Retransmission Policies: Comparison
Segment Management:
Discovering if a segment is present

No indexing
 Nothing kept in RAM
 Need to read from EEPROM to find if segment i is missing

Full indexing (see the sketch below)
 Entire segment (bit)map is kept in RAM
 Look at entry i (in RAM) to find if segment i is missing
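A minimal C sketch of the full-indexing option: one bit of RAM per segment, so answering "is segment i missing?" is a single RAM lookup with no EEPROM read. The segment count and names are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_SEGMENTS 256                      /* illustrative image size in segments */

    static uint8_t seg_bitmap[MAX_SEGMENTS / 8];  /* 32 bytes of RAM, 1 bit per segment */

    void segment_mark_received(uint16_t i)
    {
        seg_bitmap[i >> 3] |= (uint8_t)(1u << (i & 7));
    }

    /* Pure RAM lookup: no EEPROM read needed to answer "is segment i missing?". */
    int segment_is_missing(uint16_t i)
    {
        return (seg_bitmap[i >> 3] & (1u << (i & 7))) == 0;
    }

    int main(void)
    {
        segment_mark_received(5);
        printf("segment 5 missing? %d, segment 6 missing? %d\n",
               segment_is_missing(5), segment_is_missing(6));
        return 0;
    }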
Segment Management (cont’d)

Partial indexing
 Map kept in RAM
 Each entry represents k consecutive segments
 Combination of RAM and EEPROM lookup needed to find if segment i is missing
Segment Management (cont’d)

Hierarchical full indexing
 First-level map kept in RAM
 Each entry points to a second-level map stored in EEPROM
 Combination of RAM and EEPROM lookup needed to find if segment i is missing
Segment Management (cont’d)

Sliding window (see the sketch below)
 Bitmap of up to w segments kept in RAM
 Starting point (base): last segment received in order
 RAM lookup
 Limited out-of-order tolerance!
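Here is a minimal C sketch of the sliding-window bookkeeping: a base (next segment expected in order) plus a w-bit bitmap of offsets. Segments more than w ahead of the base are rejected, which is exactly the limited out-of-order tolerance the slide mentions. The window size and names are assumptions, not MOAP's actual implementation.

    #include <stdint.h>
    #include <stdio.h>

    #define WINDOW 8   /* w segments tracked in RAM (illustrative) */

    struct sliding_window {
        uint16_t base;   /* next segment expected in order */
        uint8_t  bits;   /* bit k set => segment base+k already received */
    };

    /* Returns 1 if accepted, 0 if the segment is a duplicate or outside the window. */
    int window_accept(struct sliding_window *w, uint16_t seg)
    {
        if (seg < w->base)
            return 0;                      /* already consumed */
        uint16_t off = seg - w->base;
        if (off >= WINDOW)
            return 0;                      /* too far ahead: limited tolerance */
        if (w->bits & (1u << off))
            return 0;                      /* duplicate inside the window */
        w->bits |= (uint8_t)(1u << off);
        /* Slide the base forward over every in-order segment we now have. */
        while (w->bits & 1u) {
            w->bits >>= 1;
            w->base++;
        }
        return 1;
    }

    int segment_is_missing(const struct sliding_window *w, uint16_t seg)
    {
        if (seg < w->base) return 0;       /* before base: already stored */
        uint16_t off = seg - w->base;
        if (off >= WINDOW) return 1;       /* beyond the window: not seen yet */
        return (w->bits & (1u << off)) == 0;
    }

    int main(void)
    {
        struct sliding_window w = { .base = 0, .bits = 0 };
        window_accept(&w, 1);              /* out of order but inside the window */
        window_accept(&w, 0);              /* fills the gap; base slides to 2 */
        printf("base=%u, segment 2 missing? %d\n",
               (unsigned)w.base, segment_is_missing(&w, 2));
        return 0;
    }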
Segment Management: Comparison
Results: Energy efficiency

Significant reduction in traffic when using Ripple
 Up to 90% for dense networks
Full Indexing performs 5-15% better than Sliding Window
 Reason: better out-of-order tolerance
 Differences diminish as network density grows
Results: Latency


Flooding is ~5 times faster than Ripple
Full indexing is 20-30% faster than Sliding window
 Again, reason is out-of-order tolerance
Results: Retransmission Policies

Order-of-magnitude reduction when using unicasts
Current Mote implementation


Using Ripple + sliding window with unicast retransmission policy (flow sketched below)
User builds code on the PC
 Packetizer creates segments out of binary
Mote attached to PC becomes original source and sends PUBLISH message
 Receivers 1 hop away will subscribe, if version number is greater than their own
When a receiver gets the full image, it will send a PUBLISH message
 If it doesn’t receive any subscriptions for some time, it will COMMIT the new code and invoke the bootloader
 If a subscription is received, node becomes a source
  Eventually, sources will also commit
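The PUBLISH / SUBSCRIBE / COMMIT flow above can be captured as a small state machine. This is a hedged C sketch with hypothetical state and event names; the real implementation is nesC on the mote and differs in detail.

    #include <stdio.h>

    /* Hypothetical MOAP node roles and events (illustrative, not the real nesC code). */
    typedef enum { IDLE, SUBSCRIBED, PUBLISHING, COMMITTED } moap_state_t;

    struct moap_node {
        moap_state_t state;
        int version;          /* version of the image currently held */
    };

    /* A PUBLISH for a newer version turns an idle node into a subscriber. */
    void on_publish(struct moap_node *n, int advertised_version)
    {
        if (n->state == IDLE && advertised_version > n->version)
            n->state = SUBSCRIBED;      /* send SUBSCRIBE, start receiving segments */
    }

    /* Once the full image is stored, the node advertises it in turn. */
    void on_image_complete(struct moap_node *n, int new_version)
    {
        n->version = new_version;
        n->state   = PUBLISHING;        /* broadcast PUBLISH, wait for subscriptions */
    }

    /* Publish timer expired: commit unless someone subscribed in the meantime. */
    void on_publish_timeout(struct moap_node *n, int got_subscription)
    {
        if (n->state != PUBLISHING)
            return;
        if (got_subscription)
            return;                     /* stay a source; commit later */
        n->state = COMMITTED;           /* commit new code, invoke the bootloader */
    }

    int main(void)
    {
        struct moap_node n = { IDLE, 1 };
        on_publish(&n, 2);              /* hear a PUBLISH for version 2 */
        on_image_complete(&n, 2);       /* all segments stored in EEPROM */
        on_publish_timeout(&n, 0);      /* nobody subscribed: commit */
        printf("state=%d version=%d\n", (int)n.state, n.version);
        return 0;
    }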
Current Mote Implementation (cont’d)

Retransmissions have higher priority than data packets
 Solves NACK ‘last packet’ problem
Nodes keep track of their sources’ activity with a keepalive timer (see the sketch below)
 If the source dies, the keepalive expiration will trigger a broadcast repair request
Late joiner mechanism allows motes that have just recovered from failure to participate in code transfer
 Requires all nodes to periodically advertise their version
 Duplicate requests are suppressed
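A very small C sketch of the keepalive logic mentioned above: every packet heard from the source refreshes a timestamp, and if nothing is heard for longer than the keepalive interval the node falls back to a broadcast repair request. The interval and names are illustrative assumptions.

    #include <stdint.h>
    #include <stdio.h>

    #define KEEPALIVE_MS 5000   /* illustrative keepalive interval */

    struct source_watch {
        uint32_t last_heard_ms;   /* refreshed on every packet from our source */
    };

    void source_heard(struct source_watch *s, uint32_t now_ms)
    {
        s->last_heard_ms = now_ms;
    }

    /* Returns 1 if the source looks dead and a broadcast repair request should go out. */
    int source_keepalive_expired(const struct source_watch *s, uint32_t now_ms)
    {
        return (now_ms - s->last_heard_ms) > KEEPALIVE_MS;
    }

    int main(void)
    {
        struct source_watch s = { 0 };
        source_heard(&s, 1000);
        printf("expired at t=3000? %d, at t=7000? %d\n",
               source_keepalive_expired(&s, 3000),
               source_keepalive_expired(&s, 7000));
        return 0;
    }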
Footprint


1000 bytes RAM (with 100-byte packets)
4.5K bytes ROM
One critical piece: the bootloader

(Slightly) modified version of the Crossbow bootloader
 Resides at the very end of program memory
 Very small RAM+ROM footprint

Purpose (sketched below)
 Transfer the entire image from EEPROM into program memory
 Reboot the mote
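A hedged C sketch of the bootloader's job: copy the stored image from EEPROM into program memory page by page, then reboot. The page size, eeprom_read_page, and flash_write_page are illustrative placeholders backed by RAM buffers so the sketch runs on a PC; the real bootloader uses the AVR's EEPROM and self-programming flash writes.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define PAGE_SIZE   128       /* illustrative flash page size */
    #define IMAGE_PAGES 8

    /* Simulated devices; a real bootloader reads EEPROM and writes flash pages. */
    static uint8_t eeprom[IMAGE_PAGES * PAGE_SIZE];
    static uint8_t program_flash[IMAGE_PAGES * PAGE_SIZE];

    static void eeprom_read_page(uint16_t page, uint8_t *buf)
    {
        memcpy(buf, &eeprom[(uint32_t)page * PAGE_SIZE], PAGE_SIZE);
    }

    static void flash_write_page(uint16_t page, const uint8_t *buf)
    {
        memcpy(&program_flash[(uint32_t)page * PAGE_SIZE], buf, PAGE_SIZE);
    }

    /* Copy the stored image into program memory one page at a time, then "reboot". */
    void bootloader_install(uint16_t pages)
    {
        uint8_t buf[PAGE_SIZE];
        for (uint16_t p = 0; p < pages; p++) {
            eeprom_read_page(p, buf);
            flash_write_page(p, buf);
        }
        printf("image of %u pages installed, rebooting...\n", (unsigned)pages);
    }

    int main(void)
    {
        memset(eeprom, 0xAB, sizeof(eeprom));  /* pretend a new image is stored */
        bootloader_install(IMAGE_PAGES);
        return program_flash[0] == 0xAB ? 0 : 1;
    }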
Improving MOAP: Diffing

Sending the entire image is not the most efficient thing
 Bug fixes are usually small
Solution: use “diffs”
 Send only what’s new
Work done by Rahul Kapur, Tom Yeh and Ujjwal Lahoti
How diff works




Original source sends out a diff from the previous version
Nodes store the diff in EEPROM
When the transfer is complete, nodes use the diff to construct the new image in EEPROM (see the sketch below)
Bootloader is then called and the mote reboots
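The slides do not give the actual diff script format, so the following C sketch assumes a hypothetical two-opcode script (COPY a range from the old image, ADD literal bytes) just to illustrate the reconstruction step; on the mote the old image, the script, and the new image would all live in EEPROM rather than RAM buffers.

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Hypothetical two-opcode diff script: COPY from the old image, ADD new bytes. */
    enum { OP_COPY = 1, OP_ADD = 2 };

    /* Apply a diff script to the old image, producing the new image. */
    size_t diff_apply(const uint8_t *old_img, const uint8_t *script, size_t script_len,
                      uint8_t *new_img)
    {
        size_t out = 0, i = 0;
        while (i < script_len) {
            uint8_t  op  = script[i++];
            uint16_t len = (uint16_t)(script[i] | (script[i + 1] << 8));
            i += 2;
            if (op == OP_COPY) {
                uint16_t off = (uint16_t)(script[i] | (script[i + 1] << 8));
                i += 2;
                memcpy(&new_img[out], &old_img[off], len);   /* reuse old bytes */
            } else {                                         /* OP_ADD: literals follow */
                memcpy(&new_img[out], &script[i], len);
                i += len;
            }
            out += len;
        }
        return out;                /* size of the reconstructed image */
    }

    int main(void)
    {
        const uint8_t old_img[] = "hello world";
        /* COPY 5 bytes from offset 0, then ADD the 6 bytes ", moap" (hypothetical). */
        const uint8_t script[]  = { OP_COPY, 5, 0, 0, 0,
                                    OP_ADD,  6, 0, ',', ' ', 'm', 'o', 'a', 'p' };
        uint8_t new_img[32] = {0};
        size_t n = diff_apply(old_img, script, sizeof(script), new_img);
        printf("%zu bytes: %.11s\n", n, new_img);
        return 0;
    }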
Diff results
Test description              | src              | dest             | Dest file size | Script byte cost | % of file size | Direct diff cost | Direct diff % | Complexity (# of inst) | Address patch reduction
Code and data shift           | Rfm              | Rfm2             | 9220B          | 379B             | 4%             | 379B             | 4%            | 305                    | 186B (49%)
Removing function (medium)    | Cnttoledsrfm.new | Cnttoledsrfm.old | 9398B          | 1491B            | 15%            | 1491B            | 15%           | 1277                   | n/a
Different App (large)         | Tinyviz          | Ident            | 13338B         | 6779B            | 50%            | 14331B           | 107%          | 1895                   | n/a
Different App (small > large) | Blink            | Ident            | 13338B         | 12685B           | 95%            | 13926B           | 104%          | 333                    | n/a
Improving MOAP: Status reports

MOAP will eventually reprogram all motes
 But how do you know when it’s done?
 How do you know which state a mote is in?

Status reporting is needed
 Absolutely necessary for diffs

Requirement: a stable tree
 Keep information about the publisher after reboot
 Maintenance issues

Ability to ask simple questions (matching sketched below)
 <node, version> tuple
  Which node has which version: <*,*>
  Which node has version Y: <*,Y>
  What version does node X have: <X,*>
  Does node X have version Y: <X,Y>
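The four query forms above reduce to wildcard matching on a <node, version> tuple. Here is a minimal C sketch; the WILDCARD sentinel and the structure names are assumptions for illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define WILDCARD 0xFFFF    /* stands for '*' in a <node, version> query */

    struct status_report { uint16_t node; uint16_t version; };
    struct status_query  { uint16_t node; uint16_t version; };

    /* Returns 1 if a report answers the query (wildcards match anything). */
    int query_matches(const struct status_query *q, const struct status_report *r)
    {
        return (q->node    == WILDCARD || q->node    == r->node) &&
               (q->version == WILDCARD || q->version == r->version);
    }

    int main(void)
    {
        struct status_report reports[] = { {1, 3}, {2, 3}, {3, 2} };
        struct status_query  q = { WILDCARD, 3 };   /* "which nodes have version 3?" */
        for (unsigned i = 0; i < sizeof(reports) / sizeof(reports[0]); i++)
            if (query_matches(&q, &reports[i]))
                printf("node %u has version %u\n",
                       (unsigned)reports[i].node, (unsigned)reports[i].version);
        return 0;
    }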
Improving MOAP: Adding control and selective updates

Need to control a node’s behavior
 “Don’t use the new version”
Need for selective updates
 “Only nodes X-Z should be updated”
Requirement: ability to route packets from the original sender to receivers
 Hard problem in the general case
 Much easier when N is small (~20 or so)

Considered solution (sketched below)
 Discover paths to all nodes using flooding
 Store this information at the original source
  Microservers have infinite memory (compared to motes)
 Use source routing or path-vector to send packets
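To illustrate the considered solution, here is a hedged C sketch of source routing: the microserver stores a path per node (discovered via flooding) and places the whole hop list in the packet header, so each hop just reads the next address. The header layout and names are illustrative, not a specified MOAP format.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_HOPS 8

    /* Source-routed header: the full hop list travels with the packet. */
    struct src_route_hdr {
        uint8_t  hop_count;        /* number of addresses in path[] */
        uint8_t  next_hop_index;   /* index of the hop that should forward next */
        uint16_t path[MAX_HOPS];   /* discovered via flooding, stored at the microserver */
    };

    /* Called at each hop: returns the address to forward to, or 0 at the destination. */
    uint16_t route_next_hop(struct src_route_hdr *h)
    {
        if (h->next_hop_index >= h->hop_count)
            return 0;                          /* reached the final receiver */
        return h->path[h->next_hop_index++];
    }

    int main(void)
    {
        /* Path from the original source (microserver) to node 42 via nodes 7 and 13. */
        struct src_route_hdr h = { 3, 0, { 7, 13, 42 } };
        uint16_t hop;
        while ((hop = route_next_hop(&h)) != 0)
            printf("forward to node %u\n", (unsigned)hop);
        return 0;
    }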
MOAP: Conclusion





Full binary updates over multiple hops
Ripple dissemination reduces energy consumption significantly
Sliding window method and unicast retransmission policy also reduce energy consumption and complexity
Successful updates of images up to 30K in size
Next steps



Sending DIFFS instead of full image
Status reports
Control and selective updates
 Routing from source to receivers