SLALoM: A Scalable Location Management Scheme for Large Ad Hoc Networks


Scalable Location Management for
Large Mobile Ad hoc Networks
Sumesh J. Philip
Contents
Wireless Ad hoc networks
Issue of Scalability
Geographic Routing
Scalable Location Update based Routing
SLALoM - Scalable Location Management
Grid Location Service
Hierarchical Grid Location Management
Numerical study
Conclusion
Wireless Ad hoc networks
Infrastructure-less networks that can be easily
deployed
Each wireless host acts as an independent router
for relaying packets
Network topology changes frequently and
unpredictably
Key challenge lies in routing packets
Quite a lot of protocols proposed in literature
(table driven/reactive/hybrid)
Dynamic source Routing (DSR) works well for
small networks
Issue of Scalability
Increasing density increases average node degree and decreases average path length
Routing cost is lower
Any reasonable scheme might work!
To test scalability, area (playground size) must
increase with nodes
Average node degree constant
Will present a mobility model that captures the above relationship
Traditional Protocols
Table driven
incur large overheads due to routing table
maintenance
Delayed topology updates can cause loops
On-demand
flood the entire network with discovery packets
long latency for discovery
Path maintenance means additional state
No separation between data and control
Ultimately, data suffers!!
Any contenders?
Not many invariants to play with (IP address, local
connectivity)
Nodes physically located closer likely to be
connected by a small number of radio hops
Geolocation techniques can be used to identify a
node’s physical position
Geographic forwarding
Packet header contains the destination’s location
Intermediate nodes switch packets based on
location
Geographic Forwarding
C’s radio range
A
D
C
B
F
G
E
A addresses a packet to G’s latitude, longitude
C only needs to know its immediate neighbors to forward
packets towards G.
Geographic forwarding needs location management!
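The forwarding rule above can be sketched as greedy geographic forwarding: each node picks the neighbor closest to the destination's coordinates, giving up at a local maximum where no neighbor makes progress. A minimal Python sketch (function and node names are hypothetical, not from the slides):

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def next_hop(current, neighbors, dest):
    """Greedy geographic forwarding: choose the neighbor closest to the
    destination, but only if it is closer than the current node.
    neighbors is a list of (node_id, (x, y)) pairs known locally.
    Returns None at a local maximum (no neighbor makes progress)."""
    best = min(neighbors, key=lambda nb: dist(nb[1], dest), default=None)
    if best is None or dist(best[1], dest) >= dist(current, dest):
        return None
    return best[0]
```

Note that, as in the slide, the forwarding decision uses only the local neighbor table and the destination's coordinates in the packet header.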
Desirable Properties of
Location Management
Spread load evenly over all nodes
Degrade gracefully as nodes fail
Queries for nearby nodes stay local
Per-node storage and communication costs grow
slowly as the network size grows
Scalable Location based
Routing Protocol (SLURP)
Hybrid protocol with a deterministic way of discovering the destination
Topography divided into square grids
Each node (ID) selects a home region using f(ID), and periodically registers with the HR
Nodes that wish to communicate with a node find its HR by applying the same f(ID)
Use geographic forwarding to send data once the location is known (e.g. MFR)
Example
f(ID) = ID mod RT
DST = 22; RT = 12; HR = 22 mod 12 = 10
Figure: grid of regions with region [10] as the home region; legend: home region, update/query, data, location database
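The mapping in the example is a plain modular hash, so sender and querier always agree on the home region. A one-line sketch:

```python
def home_region(node_id: int, num_regions: int) -> int:
    """SLURP-style mapping of a node ID to its home region index:
    f(ID) = ID mod RT, where RT is the total number of regions."""
    return node_id % num_regions

# The slide's numbers: ID = 22 with RT = 12 regions maps to region 10.
assert home_region(22, 12) == 10
```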
Cost of Location Management
Location Registration
Periodic
Triggered
Location Maintenance
Operations for database consistency
Location Discovery
Query/response
Data Transfer
Mobility Model
Each node moves independently and randomly
Direction chosen in [0, 2π], velocity in [v − c, v + c] at time t
New direction and velocity chosen at each destination
Node degree = π r_t² N / A
To keep the degree constant, A must grow linearly with N
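Taking the degree formula π r_t² N / A, the playground area needed to keep the average degree constant can be sketched as follows (parameter values are illustrative only):

```python
import math

def playground_area(n_nodes: int, radio_range: float, target_degree: float) -> float:
    """Area A so that the average node degree pi * r_t^2 * N / A equals
    target_degree; A grows linearly with N."""
    return math.pi * radio_range**2 * n_nodes / target_degree

def node_degree(n_nodes: int, radio_range: float, area: float) -> float:
    """Average node degree for N nodes with range r_t in area A."""
    return math.pi * radio_range**2 * n_nodes / area
```

Doubling N doubles the required area, leaving the degree unchanged, which is exactly the scaling condition the slide states.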
Location update Overhead
λ = rate of region crossing
b = broadcast cost
u = number of hops
v = velocity of node
r_t = transmission range
2R = side of region
a = area of region

λ = v / ( (1/2π) ∫₀^2π 2R|cos θ| dθ ) = πv / (4R)

Location update cost c_u = λ(b + u) per sec, where b = 1 + a/(π r_t²)
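Assuming the region-crossing rate λ = πv/(4R) and update cost c_u = λ(b + u) as reconstructed from the slide, a small numeric sketch:

```python
import math

def crossing_rate(v: float, R: float) -> float:
    """Rate of region crossing lambda = pi * v / (4 * R) for a node
    moving at speed v in a square region of side 2R (per the slide's
    derivation, reconstructed)."""
    return math.pi * v / (4 * R)

def update_cost(v: float, R: float, b: float, u: float) -> float:
    """Location update cost c_u = lambda * (b + u) per second."""
    return crossing_rate(v, R) * (b + u)
```

As expected, the update overhead is linear in node velocity: doubling v doubles both the crossing rate and c_u.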
Location update Overhead
z = most forward progress
n = average node degree
ℓ = mean inter-node distance
A_z = area of the excluded region
ρ = average number of nodes in a region
G = N/ρ = number of regions
d ≈ R√G = average distance to the home region

f_z(z) = 2n√(r_t² − z²) e^(−n·A_z/(π r_t²)) / ( π r_t² (1 − e^(−n)) )
z̄ = ∫₀^(r_t) z f_z(z) dz
u = d / z̄

c_u = λ(b + u) = λ(b + d/z̄) = O(v√N)
Home Region Maintenance
On region crossing
Inform previous region of departure
Inform new region of arrival
Update from any node in new region

c_m = λ(2b + β), 1 ≤ β ≤ ρ
    = (πv/(4R)) (2(1 + a/(π r_t²)) + β)
Maintenance overhead = O(v)

N = total number of nodes
ρ = average number of nodes per region
Total Overhead
Cost of locating: send a location query to the home region
c_l = 2u = 2d/z̄ = O(√N)
Total overhead = sum of all overheads for all N nodes
c = N(c_u + c_m + c_l)
  = O(Nv√N) + O(Nv) + O(N√N)
  = O(vN√N) / sec
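The O(vN√N) total can be illustrated with a toy scaling function (the constant factor k is hypothetical; only the growth rate matters here):

```python
import math

def total_overhead(n_nodes: int, velocity: float, k: float = 1.0) -> float:
    """Dominant term of SLURP's total overhead, O(v * N * sqrt(N)) per
    second; k is an unspecified constant factor."""
    return k * velocity * n_nodes * math.sqrt(n_nodes)
```

Quadrupling the network size multiplies this term by 8 (4 × √4), which is why per-node overhead still grows as O(v√N).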
ScaLAble Location
Management (SLALoM)
Define a hierarchy of regions : Order(3), Order(2),
Order(1)
Each Order(2) region consists of K² Order(1) regions
Each node assigned a HR in an Order(2) region
To reduce location update overhead, define far and
near HRs; near regions updated frequently
Nodes that wish to communicate with another node
query its HR in current Order(2) grid
Queries from far HRs find way to near ones for exact
location of destination
Grid Ordering in SLALoM
Terrain divided into Order-1 regions
K² Order-1 regions combined to form Order-2 regions
Function f maps ID to home region in Order-2 region
Order-1
Home
region
Order-2, K = 4
Near and Far Home Regions
9 home regions around U’s current O-2 are near
Rest are far home regions
Near Home
region
Far Home
region
Location Update
If movement within O-2, update near home regions
Otherwise update all home regions via multicast
Near home regions know exact location of U
Far home regions know approximate location (O-2)
Movement
Update
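The near/far update rule can be sketched as follows, assuming Order-2 grids are addressed by (x, y) indices and "near" means the 3×3 block of Order-2 grids around the node's current one (the 9 near regions of the earlier slide):

```python
def is_near(home_o2, current_o2) -> bool:
    """A home region is 'near' if its Order-2 grid lies in the 3x3 block
    around the node's current Order-2 grid (Chebyshev distance <= 1);
    all other home regions are 'far'."""
    dx = abs(home_o2[0] - current_o2[0])
    dy = abs(home_o2[1] - current_o2[1])
    return max(dx, dy) <= 1

def regions_to_update(home_regions, old_o2, new_o2):
    """SLALoM update rule: movement within the same Order-2 grid updates
    only near home regions; crossing an Order-2 boundary updates all
    home regions (via multicast in the slides)."""
    if old_o2 == new_o2:
        return [r for r in home_regions if is_near(r, new_o2)]
    return list(home_regions)
```

This is what keeps update overhead low: only the exact location held by near regions is refreshed frequently, while far regions settle for the approximate (Order-2) location.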
Location Maintenance
On entry into a grid, a node broadcasts its presence
A server node replies with location information that the
newly arrived node has to store
Use of timers to avoid a broadcast storm
Figure: a newly arrived mobile node asks what to store; a server replies with the location database contents, e.g. A (A_loc), B (B_loc), …
Location Query
If U and V in same O-1, V knows U’s location
Otherwise, send a query to U’s closest home region
If far home region, route to nearest “near” home region
Figure: V's query routed (via node W) toward U's home region
Grid Location Service (GLS)
Figure: sibling level-0, level-1, and level-2 squares around node n, with a location server s in each sibling square.
s is n’s successor in that square.
(Successor is the node with “least ID greater than” n )
GLS Updates
Invariant (for all levels):
For node n in a square, n's successor in each sibling square "knows" about n.
(Successor is the node with "least ID greater than" n)
Figure: example grid of node IDs showing location table contents and location updates.
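The successor rule ("least ID greater than n", wrapping around the circular ID space when no larger ID exists) can be sketched as:

```python
def successor(n: int, ids) -> int:
    """GLS successor of n among the node IDs in a square: the least ID
    strictly greater than n, wrapping around to the smallest ID when
    none is greater (circular ID space)."""
    greater = [i for i in ids if i > n]
    return min(greater) if greater else min(ids)
```

Because every node applies the same deterministic rule, a querier can find n's location server in a sibling square without any coordination.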
GLS Query
Figure: example grid of node IDs showing location table contents and the path of a query from 23 for 1.
Using Multilevel Hierarchies
Random node movement and communication assumptions
Not realistic for all applications in large networks
Localized node movement; network traversals rare
Update cost proportional to mobility
Frequent data connections may occur in a locality
Multiple server regions redundant
Local queries stay local
Ideal for a hierarchical set up of node locations
Unfortunately, formation and maintenance of
hierarchy is cumbersome
Hierarchical Grid Ordering
(HGRID)
Grid hierarchy built from unit grids recursively
At each level, one of the four lower level leaders
selected as the leader for the next level
Grid ordering arbitrary; alternate orderings possible
Level III
Level II
Level I
Level 0
Location Update
Nodes update servers as they cross grid boundaries
Number of updates, and distance traversed by the
updates depends upon boundary hierarchy
Localized movement results in low overhead
Broadcast
Update
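How many servers an update must reach depends on the highest-level boundary a move crosses. Assuming quadtree-style aggregation (each level groups a 2×2 block of the level below, matching the four lower-level leaders in the slides) and (x, y) unit-grid indices, that level can be computed as:

```python
def boundary_level(old_cell, new_cell) -> int:
    """Highest hierarchy level whose boundary is crossed when moving
    from old_cell to new_cell (non-negative (x, y) unit-grid indices).
    Level k aggregates 2^k x 2^k unit grids; returns 0 if the move
    stays inside the same unit grid."""
    level = 0
    while (old_cell[0] >> level, old_cell[1] >> level) != \
          (new_cell[0] >> level, new_cell[1] >> level):
        level += 1
    return level
```

Localized movement mostly crosses low-level boundaries, so updates travel only a short distance up the hierarchy, which is the source of HGRID's low overhead.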
Location Discovery & Data
Transfer
Source sends query to its leader
Query visits leaders until approximate location of
destination is found; sends response
Data forwarded to more accurate locations until it
reaches the destination
Query
Response
Data
V
U
Performance Study
Glomosim: packet-level simulator
Simulator setup (protocol stack):
Mobility: Random Waypoint
Application: CBR
Transport: UDP
Network: IP, with location management and geographic routing
LL/MAC: IEEE 802.11
Radio: no noise
PHY: free space
Scalability with Mobility
(High load)
Throughput
Discovery Delay
HGRID performs best, with throughput above 90%
Surprisingly, SLALoMK2 performs better than the others
Explained by its lower location discovery delay and less packet buffering
SLURP performs worst
Scalability with Mobility
Data Delay
Control Overhead
HGRID performs best overall due to low signaling
overhead
SLALoM performs worst due to congestion caused by
network wide updates
Interestingly, overhead in bytes is higher for HGRID than SLURP
Scalability with Network Size
Packets delivered
Data Delay
Tradeoff between signaling overhead and throughput/delay
HGRID performs best overall
Scalability with Network Size
Control Overhead
Database Size
Overhead (in bytes) is highest for SLALoM; maintaining large databases increases HGRID's overall overhead
Storage cost grows only slightly with network size for HGRID
Summary
Issue of scalability in mobile ad hoc routing
Topology updates congest the network
Discovery, maintenance cause unnecessary flood
Geographic routing is a potential candidate
Localized and guaranteed
Need scalable location management schemes
Grid based protocols (Flat vs. Hierarchical)
SLURP, SLALoM, GLS, HGRID
Relative scalability of LM protocols dependent on
location update, maintenance and discovery
Performance studies show HGRID scales well with
network size, mobility