Transcript Chord
Chord: A Scalable Peer-to-peer Lookup
Service for Internet Applications
Ion Stoica, Robert Morris, David
Karger, M. Frans Kaashoek and
Hari Balakrishnan
MIT Laboratory for Computer Science
SIGCOMM Proceedings, 2001
http://pdos.lcs.mit.edu/chord/
Chord Contribution
Chord is a scalable protocol for
lookup in a dynamic peer-to-peer
system with frequent node arrivals
and departures.
Peer-to-peer Application
Characteristics
Direct connection, without a central
point of management
Large scale
Concurrent node joins
Concurrent node departures
The Lookup Problem
(Figure: a publisher node holds key “Ma Yo-yo” with value Sonata…; a client elsewhere on the Internet, among nodes N1-N6, issues Lookup(“Ma Yo-yo”).)
Some Approaches
Napster (DNS) - centralized
Gnutella - flooded
Freenet, Chord - decentralized
Centralized lookup (Napster)
Simple
Minimal information maintained at each node
A single point of failure
(Figure: among nodes N1-N7, N7 holds key “Ma Yo-yo” with value Sonata…. 1. N7 registers “Ma Yo-yo” with the central DB server; 2. the client sends Lookup(“Ma Yo-yo”) to the server; 3. the server replies Here(“Ma Yo-yo”, N7); 4. the client downloads from N7.)
Flooded queries (Gnutella)
Robust
Neighbor state maintained in each node
Queries flooded across the network
(Figure: the client floods Lookup(“Ma Yo-yo”) to its neighbors; the query spreads across nodes N1-N8 until it reaches N7, which holds the key (value Sonata…) and replies Here(“Ma Yo-yo”, N7).)
Routed queries (Freenet, Chord)
Decentralized
Routing state maintained in each node
Scalable
(Figure: the client's Lookup(“Ma Yo-yo”) is routed hop by hop across nodes N1-N7, each node forwarding it using its routing state, until N7, which holds the key (value Sonata…), replies Here(“Ma Yo-yo”, N7).)
Chord
Basic protocol
Node Joins
Stabilization
Failures and Replication
Chord Properties
Assumption: no malicious participants
Efficiency: O(log(N)) messages per lookup
N is the total number of servers
Scalability: O(log(N)) state per node
Robustness: survives massive failures
Load balanced: spreading keys evenly
Basic Protocol
Main operation
Given a key, maps the key onto a node
Consistent hashing (sketched below)
Key identifier = SHA-1(title)
Node identifier = SHA-1(IP)
find_successor(key)
maps key IDs to node IDs
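As a rough illustration (not the paper's code), the identifiers could be computed like this in Python; the 7-bit ring size and the example title and IP string are assumptions chosen to match the slides' small examples:

import hashlib

M = 7  # assumed m = 7, i.e. a 2^7 = 128-slot identifier circle as in the slides

def chord_id(text, m=M):
    # Hash a key title or a node's IP string onto the m-bit identifier circle.
    digest = hashlib.sha1(text.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

key_id = chord_id("Ma Yo-yo")    # key identifier = SHA-1(title), truncated to m bits
node_id = chord_id("18.26.4.9")  # node identifier = SHA-1(IP); hypothetical address
print(key_id, node_id)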
Consistent Hashing [Karger 97]
Target: web page caching
Like normal hashing, assigns items to
buckets so that each bucket receives
roughly the same number of items
Unlike normal hashing, a small change in
the bucket set does not induce a total
remapping of items to buckets
Consistent Hashing
(Figure: a circular 7-bit ID space with 2^7 = 128 identifiers, holding nodes N32, N90, N105 and keys K5, K20, K80.)
A key is stored at its successor: the node with the next-higher ID (so K5 and K20 are stored at N32, and K80 at N90).
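A minimal sketch of the successor rule, assuming we are given the sorted list of live node IDs; the node and key IDs are the ones on the slide's 7-bit ring:

from bisect import bisect_left

def successor(key_id, node_ids):
    # The node responsible for a key is the first node ID >= key ID,
    # wrapping around to the smallest ID on the circle.
    ring = sorted(node_ids)
    i = bisect_left(ring, key_id)
    return ring[i % len(ring)]

nodes = [32, 90, 105]        # N32, N90, N105
print(successor(5, nodes))   # K5  -> 32
print(successor(20, nodes))  # K20 -> 32
print(successor(80, nodes))  # K80 -> 90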
Basic Lookup
Inefficient, O(N)
(Figure: on a ring of nodes N10, N32, N60, N90, N105, N120, node N10 asks “Where is key 80?”; the query is passed from successor to successor until it reaches N90, which stores K80 and answers “N90 has K80”.)
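A minimal sketch of this basic lookup, assuming each node knows only its successor; the ring uses the node IDs from the slide, and the query simply walks the circle one hop at a time:

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.successor = None  # linked up when the ring is built below

    def find_successor(self, key_id):
        # If the key falls between this node and its successor, the
        # successor stores it; otherwise forward the query one hop.
        if in_interval(key_id, self.id, self.successor.id):
            return self.successor
        return self.successor.find_successor(key_id)

def in_interval(x, a, b):
    # True if x lies in the half-open circular interval (a, b].
    if a < b:
        return a < x <= b
    return x > a or x <= b  # the interval wraps past zero

# Ring N10 -> N32 -> N60 -> N90 -> N105 -> N120 -> N10
nodes = [Node(i) for i in [10, 32, 60, 90, 105, 120]]
for n, s in zip(nodes, nodes[1:] + nodes[:1]):
    n.successor = s

print(nodes[0].find_successor(80).id)  # "Where is key 80?" asked at N10 -> 90

In the worst case the query visits every node, hence O(N) hops.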
Finger Table (1)
Speeds up the lookup
A routing table with m entries, called fingers
At node n, the i-th finger points to successor((n + 2^(i-1)) mod 2^m), for 1 ≤ i ≤ m
Allows O(log N)-time lookups
Example finger starts for n = 1, m = 3 (see the sketch below):
(1 + 2^(1-1)) mod 2^3 = 2
(1 + 2^(2-1)) mod 2^3 = 3
(1 + 2^(3-1)) mod 2^3 = 5
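A minimal sketch of the finger computation, reusing a simple successor() helper over a sorted ID list; the three-node ring {0, 1, 3} is an assumed example population, not taken from the slides:

from bisect import bisect_left

def successor(key_id, node_ids):
    # First node ID >= key_id, wrapping around the circle.
    ring = sorted(node_ids)
    i = bisect_left(ring, key_id)
    return ring[i % len(ring)]

def finger_starts(n, m):
    # The slide's computation: (n + 2^(i-1)) mod 2^m for i = 1..m.
    return [(n + 2 ** (i - 1)) % 2 ** m for i in range(1, m + 1)]

def finger_table(n, m, node_ids):
    # Each finger points to the successor of its start.
    return [successor(s, node_ids) for s in finger_starts(n, m)]

print(finger_starts(1, 3))            # [2, 3, 5], matching the slide
print(finger_table(1, 3, [0, 1, 3]))  # [3, 3, 0] on the assumed ring {0, 1, 3}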
Lookups take O(log(N)) hops
(Figure: nodes N5, N10, N20, N32, N60, N80, N99, N110 on the ring; N32 issues Lookup(K19), which is routed through the finger pointers to N20, the node that stores K19.)
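A minimal sketch of the logarithmic lookup, assuming correct finger tables: each node forwards the query to the closest preceding finger, roughly halving the remaining ID-space distance per hop. The node IDs match the slide's ring; the function names follow the paper's pseudocode, but this is only an illustration.

M = 7  # 7-bit identifier circle, IDs 0..127

def between(x, a, b, inclusive_right=False):
    # Circular interval test: x in (a, b), or (a, b] when inclusive_right.
    if inclusive_right and x == b:
        return True
    if a < b:
        return a < x < b
    return x > a or x < b

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.fingers = []  # fingers[k] = successor((id + 2^k) mod 2^M)

    @property
    def successor(self):
        return self.fingers[0]

    def closest_preceding_finger(self, key_id):
        # Highest finger that falls strictly between this node and the key.
        for f in reversed(self.fingers):
            if between(f.id, self.id, key_id):
                return f
        return self

    def find_successor(self, key_id):
        if between(key_id, self.id, self.successor.id, inclusive_right=True):
            return self.successor
        return self.closest_preceding_finger(key_id).find_successor(key_id)

def build_ring(ids):
    nodes = {i: Node(i) for i in ids}
    ring = sorted(ids)
    def succ(x):
        return next((nodes[i] for i in ring if i >= x), nodes[ring[0]])
    for n in nodes.values():
        n.fingers = [succ((n.id + 2 ** k) % 2 ** M) for k in range(M)]
    return nodes

nodes = build_ring([5, 10, 20, 32, 60, 80, 99, 110])
print(nodes[32].find_successor(19).id)  # Lookup(K19) starting at N32 -> 20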
Node Join (1)
(Figure: N36 is joining a ring in which N40 stores K30 and K38. Step 1: N36 issues Lookup(36) through an existing node to find its place between N25 and N40.)
Node Join (2)
(Figure: Step 2: N36 sets its own successor pointer to N40.)
Node Join (3)
(Figure: Step 3: the keys in the range 26..36 are copied from N40 to N36, so K30 moves to N36 while K38 stays at N40.)
Node Join (4)
(Figure: Step 4: N25's successor pointer is set to N36, completing the ring N25 → N36 → N40.)
Update finger pointers in the background
Correct successors produce correct lookups
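A minimal sketch of the four join steps above, assuming the joining node can reach an existing node (here its eventual predecessor is passed in directly; in real Chord step 1 is a lookup and step 4 happens lazily through stabilization):

def between(x, a, b):
    # x in the half-open circular interval (a, b].
    if a < b:
        return a < x <= b
    return x > a or x <= b

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.successor = self  # a lone node is its own successor
        self.keys = {}         # key_id -> value stored at this node

def join(new, predecessor):
    succ = predecessor.successor                 # 1. Lookup(new.id) finds the successor
    new.successor = succ                         # 2. new node sets its successor pointer
    moved = {k: v for k, v in succ.keys.items()  # 3. copy keys in (predecessor, new]
             if between(k, predecessor.id, new.id)}
    new.keys.update(moved)
    for k in moved:
        del succ.keys[k]
    predecessor.successor = new                  # 4. predecessor now points at the new node

# The slides' example: N36 joins between N25 and N40, taking K30 but not K38.
n25, n40 = Node(25), Node(40)
n25.successor, n40.successor = n40, n25
n40.keys = {30: "value30", 38: "value38"}
n36 = Node(36)
join(n36, n25)
print(sorted(n36.keys), sorted(n40.keys))  # [30] [38]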
Stabilization
The stabilization algorithm periodically verifies and refreshes each node's knowledge of:
Successor pointers
Predecessor pointers
Finger tables
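A minimal sketch of the periodic successor/predecessor refresh, following the stabilize/notify structure of the paper's pseudocode; finger refresh is omitted and the node IDs continue the join example:

def between(x, a, b):
    # x in the open circular interval (a, b).
    if a < b:
        return a < x < b
    return x > a or x < b

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.successor = self
        self.predecessor = None

    def stabilize(self):
        # Ask the successor for its predecessor; adopt it if it sits between us.
        x = self.successor.predecessor
        if x is not None and between(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

    def notify(self, candidate):
        # Accept the candidate as predecessor if it is closer than the current one.
        if self.predecessor is None or between(candidate.id, self.predecessor.id, self.id):
            self.predecessor = candidate

# N36 has just joined with successor N40; periodic rounds repair N25's pointer.
n25, n36, n40 = Node(25), Node(36), Node(40)
n25.successor, n36.successor, n40.successor = n40, n40, n25
for _ in range(2):
    for n in (n25, n36, n40):
        n.stabilize()
print(n25.successor.id, n36.successor.id, n40.successor.id)  # 36 40 25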
Failures and Replication
Maintain correct successor pointers
(Figure: nodes N80, N85, N102, N113, N120, N10 on the ring; N80 issues Lookup(90), but the nodes just ahead of it have failed.)
N80 does not know its correct successor, so the lookup fails
Solution: successor lists
Each node knows its r immediate successors
After a failure, a node still knows its first live successor
Correct successors guarantee
correct lookups
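A minimal sketch of the successor-list fallback; r = 3 and the particular failure pattern are assumptions for illustration, while the node IDs are from the slide:

R = 3  # number of immediate successors each node remembers (assumed)

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.alive = True
        self.successor_list = []  # r nearest successors, closest first

    def first_live_successor(self):
        # After failures, fall through to the first successor still alive.
        for s in self.successor_list:
            if s.alive:
                return s
        raise RuntimeError("all r successors failed")

# Ring segment from the slide: N80 -> N85 -> N102 -> N113 -> N120 -> N10
nodes = [Node(i) for i in [80, 85, 102, 113, 120, 10]]
for i, n in enumerate(nodes):
    n.successor_list = [nodes[(i + k) % len(nodes)] for k in range(1, R + 1)]

n80 = nodes[0]
nodes[1].alive = False  # N85 fails
nodes[2].alive = False  # N102 fails as well
print(n80.first_live_successor().id)  # Lookup(90) can still be forwarded via N113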
Path length as a function of network size (figure)
Failed lookups versus node fail/join rate (figure)
Chord Summary
Chord provides peer-to-peer hash
lookup
Efficient: O(log(N)) messages per
lookup
Robust as nodes fail and join
Does not handle malicious users
http://pdos.lcs.mit.edu/chord/