
Brocade
Landmark Routing on P2P Networks
Gisik Kwon
April 9, 2002
Motivation
Problems with existing P2P systems
Existing systems: CAN, Chord, Tapestry, Pastry
Constrained by the theoretical approach adopted: nodes are treated uniformly
Routing algorithms are decoupled from the underlying topology and node capability
Result: sub-optimal performance
Due to 1) the asymmetry of nodes in reality: less powerful nodes can be overloaded
Due to 2) the lack of structure: no advantage is taken of the aggregated knowledge of the network
P2P does not need to operate in a pure P2P style
A secondary overlay network on top of an existing system (e.g., Tapestry)
Brocade
A philosophy: a system is more efficient when it is organized
Respect the differences and take advantage of the nodes that are more powerful – supernodes!
Supernodes are fast, well connected, and situated near network access points
Supernodes have better knowledge of the underlying network characteristics and benefit from aggregation
Network in reality
Transit-stub topology, disparate resources per node
Result: inefficient inter-domain routing
[Figure: P2P overlay network over a transit-stub topology with domains AS-1, AS-2, and AS-3; sender S and receiver R]
Landmark Routing
Goals
Eliminate unnecessary wide-area hops for inter-domain messages
Eliminate traffic going through high-latency, congested stub links
Reduce wide-area bandwidth utilization
Brocade Architecture
[Figure: Brocade architecture. The Brocade layer of supernodes sits above the P2P network; the original route and the Brocade route from sender S to receiver R are contrasted across AS-1, AS-2, and AS-3]
Intuitive mechanisms
Intuition: route quickly to the destination domain
Organize the group of supernodes into a secondary overlay
Sender (S) sends a message to its local supernode SN1
SN1 finds and routes the message to supernode SN2 near receiver R
SN1 uses Tapestry object location to find SN2
SN2 sends the message to R via normal routing (see the sketch below)
• Overlay nodes are grouped by their supernodes
• Supernodes treat their overlay nodes as objects that they possess
• Routing on Brocade => object location. Use your favorite mechanism: Tapestry, CAN, Chord, Pastry
• Message filtering: only send inter-domain messages through Brocade.
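
A self-contained toy sketch of this sender → SN1 → SN2 → receiver path, in Python; the Supernode and Brocade classes, the send helper, and the hop labels are illustrative stand-ins for whatever overlay (Tapestry, CAN, Chord, Pastry) actually carries the messages, not the paper's API.

# Toy simulation of Brocade forwarding (all names and structures are illustrative).

class Supernode:
    def __init__(self, domain):
        self.domain = domain
        self.cover_set = set()            # overlay nodes "owned" by this supernode

class Brocade:
    def __init__(self):
        self.supernodes = {}              # domain -> Supernode (secondary overlay)

    def add_supernode(self, sn):
        self.supernodes[sn.domain] = sn

    def locate_supernode_for(self, dest_id):
        # Stand-in for Tapestry object location on the Brocade layer:
        # find the supernode that published dest_id as a locally stored object.
        for sn in self.supernodes.values():
            if dest_id in sn.cover_set:
                return sn
        raise KeyError(dest_id)

def send(brocade, local_sn, dest_id):
    """Return the hop list for a message sent by a node under local_sn."""
    hops = ["sender@" + local_sn.domain]
    if dest_id in local_sn.cover_set:
        hops.append("intra-domain route to " + dest_id)     # Brocade not used
    else:
        hops.append("SN1@" + local_sn.domain)                # enter Brocade
        sn2 = brocade.locate_supernode_for(dest_id)          # one wide-area Brocade hop
        hops.append("SN2@" + sn2.domain)
        hops.append("intra-domain route to " + dest_id)      # normal routing to R
    return hops

sn1, sn2 = Supernode("AS-1"), Supernode("AS-2")
sn1.cover_set.update({"S", "A1"}); sn2.cover_set.update({"R", "B7"})
overlay = Brocade(); overlay.add_supernode(sn1); overlay.add_supernode(sn2)
print(send(overlay, sn1, "R"))   # ['sender@AS-1', 'SN1@AS-1', 'SN2@AS-2', 'intra-domain route to R']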
Case Study - Brocade On Tapestry
Tapestry: a novel wide-area fault-tolerant location and routing infrastructure
Construction
Supernode selection (see the sketch below)
Significant processing power
Minimum number of IP hops to the wide-area network
High-bandwidth outgoing link
In practice: gateway routers or machines close to them
Existing connections among supernodes serve as Brocade links
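
As a rough illustration of these selection criteria, the sketch below filters candidates by hypothetical CPU and bandwidth thresholds and then prefers the fewest IP hops to the wide-area network; the field names and threshold values are assumptions, not taken from the paper.

# Illustrative supernode selection for one domain (all thresholds are made up).
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    cpu_score: float            # relative processing power
    hops_to_wan: int            # IP hops to the wide-area access point
    out_bandwidth_mbps: float

def pick_supernode(candidates, min_cpu=1.0, min_bw=100.0):
    # Keep only well-provisioned nodes, then prefer the node nearest the WAN link,
    # breaking ties by higher outgoing bandwidth.
    eligible = [c for c in candidates
                if c.cpu_score >= min_cpu and c.out_bandwidth_mbps >= min_bw]
    if not eligible:
        return None
    return min(eligible, key=lambda c: (c.hops_to_wan, -c.out_bandwidth_mbps))

chosen = pick_supernode([
    Candidate("gw-router", 2.0, 0, 1000.0),
    Candidate("lab-pc", 1.5, 4, 100.0),
    Candidate("old-box", 0.5, 1, 10.0),
])
print(chosen.name)   # "gw-router": closest to the WAN with ample resources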
Classifying Traffic
Brocade is not useful for intra-domain messages
The P2P layer should already exploit some locality (Tapestry)
Routing intra-domain messages through Brocade adds undesirable processing overhead
Classifying traffic by destination (see the sketch below):
Proximity caches:
- Every overlay node keeps a list of nodes it knows to be local
- Need not be optimal
Cover set:
- The supernode keeps a list of all nodes in its domain
- Acts as the authority on local vs. distant traffic
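
A minimal sketch of the two structures, assuming simple set-based bookkeeping (class and method names are illustrative): an ordinary node checks its proximity cache first and falls back to asking its supernode, whose cover set is authoritative.

# Illustrative traffic classification: proximity cache vs. authoritative cover set.

class CoverSetSupernode:
    """Supernode: authoritative list of every overlay node in its domain."""
    def __init__(self, members):
        self.cover_set = set(members)

    def is_local(self, node_id):
        return node_id in self.cover_set

class OverlayNode:
    """Ordinary node: keeps a (possibly incomplete) cache of known-local nodes."""
    def __init__(self, supernode):
        self.supernode = supernode
        self.proximity_cache = set()      # need not be complete or optimal

    def classify(self, dest_id):
        if dest_id in self.proximity_cache:
            return "intra-domain"         # cheap local answer, no supernode involved
        if self.supernode.is_local(dest_id):
            self.proximity_cache.add(dest_id)   # learn for next time
            return "intra-domain"
        return "inter-domain"             # hand the message to Brocade

sn = CoverSetSupernode({"A1", "A2", "A3"})
node = OverlayNode(sn)
print(node.classify("A2"))   # intra-domain (learned into the cache)
print(node.classify("Z9"))   # inter-domain -> goes through Brocade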
Entering the Brocade
Route: Sender → Supernode
Naïve Brocade: Tapestry routing unchanged; a message gets onto the Brocade overlay only if a supernode happens to lie on its route.
Advantage: simple, no modification to ordinary nodes.
Disadvantage: the probability of hitting a supernode during Tapestry routing is small.
IP Snooping Brocade: supernodes snoop IP packets to intercept Tapestry messages.
Advantage:
- No modification to ordinary nodes.
- High probability of encountering supernodes, because supernodes are situated near the edge of local networks.
Disadvantage: difficult to implement.
Entering the Brocade
Directed Brocade: each overlay node keeps information about its supernode and decides on its own whether to send a message to the supernode directly.
Feasible: only local information is required
Decision Engine (see the sketch below):
• A small cache storing the most frequently used nodes in its cover set will do the trick.
• Query locality will make the hit rate high.
• Consequences of mistakes aren't expensive.
[Decision flow: is the destination in my cover set? Yes → ordinary Tapestry routing; No → send to the supernode]
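
A minimal sketch of the Directed Brocade decision, assuming a small bounded cache of destinations known to be local (cache size, names, and the LRU policy are made up); on a miss the node simply hands the message to its supernode, and, as the slide notes, a wrong guess is not expensive.

# Directed Brocade send decision (illustrative; cache policy and sizes assumed).
from collections import OrderedDict

class DirectedBrocadeNode:
    def __init__(self, supernode_addr, cache_size=128):
        self.supernode_addr = supernode_addr          # only local info needed
        self.local_cache = OrderedDict()              # small LRU of known-local IDs
        self.cache_size = cache_size

    def note_local(self, node_id):
        """Record a destination known to be in this node's cover set."""
        self.local_cache[node_id] = True
        self.local_cache.move_to_end(node_id)
        if len(self.local_cache) > self.cache_size:
            self.local_cache.popitem(last=False)      # evict least recently used

    def next_hop(self, dest_id):
        if dest_id in self.local_cache:
            self.local_cache.move_to_end(dest_id)
            return ("tapestry", dest_id)              # ordinary Tapestry routing
        return ("supernode", self.supernode_addr)     # hand off to Brocade

node = DirectedBrocadeNode(supernode_addr="SN-AS1")
node.note_local("A7F2")
print(node.next_hop("A7F2"))   # ('tapestry', 'A7F2')
print(node.next_hop("9B31"))   # ('supernode', 'SN-AS1')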
Inter-supernode Routing
Route: Supernode (sender) → Supernode (receiver)
Locate the receiver's supernode given the destination nodeID
Use Tapestry object location or a Bloom filter
Tapestry
Routing mesh with built-in proximity metrics
Location exploits locality (finds closer objects faster)
Finding supernodes (see the sketch below)
Supernode "publishes" its cover set on the Brocade layer as locally stored objects
To route to node N, locate the supernode on the Brocade layer storing N
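
A toy sketch of that publish-and-locate step, with a plain dictionary standing in for Tapestry object location on the Brocade layer (the registry class and its methods are illustrative): each supernode publishes every nodeID in its cover set, and inter-supernode routing reduces to looking up who published the destination.

# Illustrative "cover set as published objects" lookup on the Brocade layer.
# A dict stands in for Tapestry object location among supernodes.

class BrocadeRegistry:
    def __init__(self):
        self.owner = {}                   # nodeID -> supernode that published it

    def publish_cover_set(self, supernode_id, cover_set):
        for node_id in cover_set:
            self.owner[node_id] = supernode_id

    def locate_supernode(self, dest_id):
        # "To route to node N, locate the supernode storing N."
        return self.owner.get(dest_id)

registry = BrocadeRegistry()
registry.publish_cover_set("SN-AS1", {"A1F0", "A2C9"})
registry.publish_cover_set("SN-AS2", {"B777", "B042"})
print(registry.locate_supernode("B042"))   # 'SN-AS2': the next Brocade hop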
Brocade Summary
P2P systems assume uniformity
Extraneous hops through the backbone to domains
Routing across congested stub links
Constrain inter-domain routing
Remove unnecessary routing through stubs
Reduce the expected number of inter-domain hops
Result: lower latency, less bandwidth utilization
Appendix
Tapestry
Bloom filter
Brief Tapestry
Suffix matching (similar to Plaxton)
Incrementally routing digit by digit (see the sketch below)
[Figure: a message to node 4598 is routed by suffix matching over example nodes B4F8, B437, 9098, 6789, and 7598]
Maximum hops: log_b(N)
Routing: neighbor maps
• A table with b * log_b(N) entries
• Neighbors at level i share the last (i-1) digits (suffix) with the local node
• Entry (i, j): a pointer to a neighbor whose ID ends with digit "j" followed by that shared (i-1)-digit suffix
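
The digit-by-digit idea can be illustrated with a short sketch; this is not Tapestry's actual code, just a greedy next-hop choice over 4-digit hex IDs in which each hop fixes one more trailing digit of the destination.

# Illustrative suffix routing with 4-digit hexadecimal IDs (not real Tapestry code).

def shared_suffix_len(a, b):
    """Number of trailing digits two IDs have in common."""
    n = 0
    while n < len(a) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def route(nodes, start, dest):
    """Greedy digit-by-digit routing: each hop matches one more trailing digit."""
    path, current = [start], start
    while current != dest:
        need = shared_suffix_len(current, dest) + 1      # next suffix length to fix
        # Stand-in for the neighbor-map entry (level=need, digit=dest[-need]):
        # any known node whose last `need` digits (or more) equal the destination's.
        candidates = [n for n in nodes if shared_suffix_len(n, dest) >= need]
        if not candidates:
            raise RuntimeError("no match; Tapestry falls back to a surrogate root")
        # Prefer an entry that extends the suffix by exactly one digit,
        # mimicking one neighbor-map lookup per hop.
        exact = [n for n in candidates if shared_suffix_len(n, dest) == need]
        current = exact[0] if exact else candidates[0]
        path.append(current)
    return path

nodes = ["B4F8", "B437", "9098", "6789", "7598", "4598"]
print(route(nodes, "B437", "4598"))   # ['B437', 'B4F8', '9098', '7598', '4598']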
Locating: basic procedure
Locating in four phases:
1. Map the object ID to a "virtual" node ID
2. Route the request to that node
3. Arrive at the surrogate, the "root" for the object
4. Direct the request to the server
[Figure: locating object 1234. The client's request is routed by suffix matching via nodes 8724, F734, and B234 to surrogate root 6234, which stores <O:1234, S:B4F8> and directs the request to server B4F8]
Publishing: basic procedure
Similar to locating (see the sketch below)
1. The server sends a message, pretending to locate the object
2. The surrogate node is found as the "root" for the object
3. The related info, such as <O, S>, is saved there
[Figure: server B4F8 publishes object 1234. The publish message is routed by suffix matching via nodes 8724, F734, and B234 to surrogate root 6234, which stores <O:1234, S:B4F8>]
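
Publishing and locating can be mimicked together in a short sketch that routes the object ID to a surrogate root, stores the <O, S> mapping there when publishing, and reads it back when locating; the surrogate choice (longest shared suffix, ties broken arbitrarily) and the data structures are simplifications, not Tapestry's real mechanism.

# Toy publish/locate via a surrogate root (illustrative, not real Tapestry).

def shared_suffix_len(a, b):
    n = 0
    while n < len(a) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def surrogate_root(nodes, object_id):
    """Pick the existing node 'closest' to the object ID by longest shared suffix
    (ties broken arbitrarily in this toy version)."""
    return max(nodes, key=lambda n: shared_suffix_len(n, object_id))

def publish(pointers, nodes, object_id, server_id):
    root = surrogate_root(nodes, object_id)                   # e.g. 6234 for object 1234
    pointers.setdefault(root, {})[object_id] = server_id      # store <O, S> at the root

def locate(pointers, nodes, object_id):
    root = surrogate_root(nodes, object_id)                   # same deterministic root
    return pointers.get(root, {}).get(object_id)              # -> server to contact

nodes = ["6234", "B234", "F734", "8724", "B4F8"]
pointers = {}                                                 # per-node <O, S> mappings
publish(pointers, nodes, "1234", server_id="B4F8")
print(locate(pointers, nodes, "1234"))                        # 'B4F8'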
Insert a new node: basic procedure
1. Get a node ID
2. Begin with a "gateway node" G
3. Pretend to route to the new node's own ID
4. Establish a nearly optimal neighbor map during this "pseudo routing" by copying each hop's map and choosing the nearest entries (see the sketch below)
5. Go back and notify neighbors
[Figure: new node 1234 joins via gateway node B4F8. The pseudo route toward 1234 passes nodes 8724, F734, and B234 and ends at surrogate 6234]
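
A very rough sketch of step 4, assuming each hop on the pseudo route exposes its neighbor map as a (level, digit) table of candidate nodes and that a network distance function is available; the new node copies each level's row from the corresponding hop and keeps the nearest candidate per entry. All names, the table shape, and the distance function are assumptions, not Tapestry's actual join protocol.

# Illustrative neighbor-map construction during a node's "pseudo routing" join.
# route_hops: hop i supplies candidates for level i+1 of the new node's map.

def build_neighbor_map(route_hops, distance, base=16, digits=4):
    """route_hops: list of dicts mapping (level, digit) -> [candidate node IDs]."""
    neighbor_map = {}
    for level in range(1, digits + 1):
        hop_table = route_hops[min(level - 1, len(route_hops) - 1)]
        for digit in range(base):
            candidates = hop_table.get((level, digit), [])
            if candidates:
                # Keep the nearest candidate for this (level, digit) entry.
                neighbor_map[(level, digit)] = min(candidates, key=distance)
    return neighbor_map

# Usage with two fake hops and a made-up latency table:
latency = {"B4F8": 5, "8724": 9, "B234": 3}
hops = [
    {(1, 0x4): ["B4F8", "8724"]},       # gateway's level-1 row for digit 4
    {(2, 0x3): ["B234"]},               # next hop's level-2 row for digit 3
]
nm = build_neighbor_map(hops, distance=lambda n: latency[n], digits=2)
print(nm)    # {(1, 4): 'B4F8', (2, 3): 'B234'}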
Bloom filter(1)
• Set S
• Query: is x in S?
• If the filter says no, x is not in S.
• If the filter says yes, x is probably in S.
• The set is represented as a bitmap: 010110
• Sequence of hash functions f(x, i)
– Independent for each x and i
– f(x, i) = 0 or 1
• If f(x, i) is 1 for the following i values (bit positions counted from the right):
– {0, 2, 3}: not present
– {2, 4}: probably present
Bloom filter(2)
Procedure BloomFilter(set A, hash_functions, integer m) returns filter
    filter = allocate m bits initialized to 0
    foreach ai in A:
        foreach hash function hj:
            filter[hj(ai)] = 1
        end foreach
    end foreach
    return filter

Procedure MembershipTest(elm, filter, hash_functions) returns yes/no
    foreach hash function hj:
        if filter[hj(elm)] != 1 return No
    end foreach
    return Yes
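
To connect this back to the inter-supernode routing slide, here is a small runnable Python version of the two procedures above, used the way Brocade might use them: a supernode summarizes its cover set in a filter so peers can cheaply test whether node N is probably in that domain. The hash construction (SHA-256 with a per-index salt) and the sizes m and k are illustrative choices.

# Runnable Bloom filter matching the pseudocode above, applied to a cover set.
import hashlib

def _hashes(item, m, k):
    """k index-producing hash functions derived from SHA-256 (illustrative)."""
    for i in range(k):
        digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
        yield int(digest, 16) % m

def bloom_filter(items, m=256, k=4):
    bits = [0] * m
    for item in items:
        for idx in _hashes(item, m, k):
            bits[idx] = 1
    return bits

def membership_test(item, bits, k=4):
    return all(bits[idx] == 1 for idx in _hashes(item, len(bits), k))

# A supernode summarizes its cover set; peers test destinations against it.
cover_filter = bloom_filter({"A1F0", "A2C9", "A7B3"})
print(membership_test("A2C9", cover_filter))   # True
print(membership_test("9B31", cover_filter))   # False (or, rarely, a false positive)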