Transcript Document

Peer-to-peer
Courtesy of
Luciano Bononi, University of Bologna
Michael Nelson, Old Dominion University
Anthony Joseph, UC Berkeley
What is Peer-to-Peer?
• P2P is a communications model in which each
party has the same capabilities and either party
can initiate a communication session.
Whatis.com
• A type of network in which each workstation
has equivalent capabilities and responsibilities.
Webopedia.com
• A P2P computer network refers to any network
that does not have fixed clients and servers, but
a number of peer nodes that function as both
clients and servers to other nodes on the
network.
Wikipedia.org
What is P2P?
• “Peer-to-peer is a class of
applications that takes advantage of
resources -- storage, cycles,
content, human presence -- available at the edges of the Internet.”
– Clay Shirky
What is P2P?
• “Because accessing these
decentralized resources means
operating in an environment of
unstable connectivity and
unpredictable IP addresses, peer-to-peer nodes must operate outside
DNS and have significant or total
autonomy from central servers”
– Clay Shirky
Is Peer-to-peer new?
• Certainly doesn’t seem like it
– What about Usenet? Newsgroups were the first truly
decentralized system
– DNS? Handles huge number of clients
– Basic IP? Vastly decentralized, many equivalent
routers
• One view: P2P is a return to the old Internet
– Once upon a time, all members of the Internet were
trusted.
– Every machine had an IP address.
– Every machine was a client and server.
– Many machines were routers.
P2P now
• What is new?
– Scale: people are envisioning much larger scale
– Security: Systems must deal with privacy and
integrity
– Anonymity/deniability: Protect identity and
prevent censorship
– (In)Stability: Deal with unstable components at
the edges
• But, can systems designed this way be more stable?
Why the hype???
• File Sharing: Napster, Gnutella, KaZaa, etc
– Is this peer-to-peer? Hard to say.
– Suddenly people could contribute to active global
network
– Served a high-demand niche: online jukebox
• Anonymity/Privacy/Anarchy: Freenet, Publius,
etc.
– Libertarian dream of freedom from the man
– Extremely valid concern of Censorship/Privacy
– In search of copyright violators, RIAA challenging
rights to privacy
RIAA: Recording Industry Association of America
Hype? (cont’d)
• Computing: The Grid
– Scavenge the numerous free cycles of the
world to do work
– Seti@Home: most visible version of this
• Management: Businesses
– Suddenly Businesses have discovered
extremely distributed computing
• <-> IDC
– Does P2P mean “self-configuring” from
equivalent resources?
P2P Litmus Test
1. Does it allow for variable connectivity and
temporary network addresses?
2. Does it give the nodes at the edges of
the network significant autonomy?
Client-Server
(diagram: every client sends its query, e.g. query=DJ Shadow, query=Thievery Corporation, query=Veloce, query=Radiohead, to one central server)
Hybrid: C-S & P2P
(diagram: queries such as query=Yo La Tengo and query=Spinal Tap mix client-server and peer-to-peer paths)
Full P2P
(diagram: the query query=Cut Chemist is forwarded from peer to peer, appearing at each hop)
Better? Or Just Different?
• What scenarios would you prefer C-S?
• What scenarios would you prefer P2P?
• Hybrids?
P2P - the cure for what ails you?
• “decentralization is a tool, not a goal”
• if P2P doesn’t make sense for a project,
don’t use it…
• the “disruptive technology” will be
assimilated
A disruptive technology is a new, lower-performance, but less
expensive product;
a sustaining technology provides improved performance and will
almost always be incorporated into the incumbent's product
“The P in P2P is People”
• Understanding how people operate is
critical to knowing how P2P systems
operate
– P2P is, after all, a reflection of people’s
desire to communicate with each other
– social networks…
It’s a Small World
• Stanley Milgram’s “small world” experiment
– Stanley Milgram: The Small World Problem, Psychology Today, 1(1), 60-67 (1967)
– range = 2-10, median = 5
• http://backspaces.net/PLaw/
• Oracle of Bacon
– http://www.cs.virginia.edu/oracle/
• Cf. The Erdös Number Project
– http://www.oakland.edu/enp/
Graphs of Social Networks
• A node has K neighbors (edges)
• N is the number of edges among the neighbors of the node
• C = N / (K*(K-1)/2) is the node's clustering coefficient
• L is the average distance between nodes
• http://backspaces.net/PLaw/
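To make these definitions concrete, here is a minimal Python sketch (the example graph and node names are made up, not from the slides) that computes K, N, and C for a single node of an undirected graph:

```python
# Illustrative sketch: clustering coefficient of one node.
# C = N / (K*(K-1)/2), where K = number of neighbors and
# N = number of edges that exist among those neighbors.

def clustering_coefficient(graph, node):
    neighbors = graph[node]
    k = len(neighbors)
    if k < 2:
        return 0.0
    # Count edges among the node's neighbors.
    n = sum(1 for a in neighbors for b in neighbors
            if a < b and b in graph[a])
    return n / (k * (k - 1) / 2)

# Tiny example graph as an adjacency dict (undirected).
graph = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}
print(clustering_coefficient(graph, "A"))  # 1 edge among 3 neighbors -> 1/3
```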
Zipf’s Law
• P_k ~ C k^(-a), where
1. P_k = frequency of the kth ranked item;
2. C = a corpus-wide constant;
3. a ~ 1
– Zipf was a linguist who sought to describe
the frequency of words in language
• http://linkage.rockefeller.edu/wli/zipf/
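As an illustrative aside (the file name is a placeholder, not from the slides), the Python sketch below ranks word frequencies in a plain-text file; under Zipf's law with a ~ 1, rank × frequency stays roughly constant:

```python
# Illustrative sketch: check Zipf's law (P_k ~ C * k^-a, a ~ 1)
# on the word frequencies of any plain-text file (filename is hypothetical).
from collections import Counter
import re

with open("sample.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

counts = Counter(words).most_common()
# Under Zipf's law, rank * frequency should be roughly constant (= C).
for rank, (word, freq) in enumerate(counts[:10], start=1):
    print(f"{rank:2d}  {word:12s} freq={freq:6d}  rank*freq={rank * freq}")
```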
Zipf’s Law
http://www.sewanee.edu/Phy_Students/123_Spring01/schnejm0/PROJECT.html
Power Law (Pareto’s Law)
• a formalization of the “80/20 Rule”
• A power-law implies that small occurrences are
extremely common, whereas large instances are
extremely rare
• P(x) ~ x^(-k)
• http://ginger.hpl.hp.com/shl/papers/ranking/ranking.html
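A small hedged sketch of the "80/20" flavor of a power law (the Pareto shape parameter 1.16 and the sample size are illustrative choices, not from the slides):

```python
# Illustrative sketch: a power-law (Pareto) sample concentrates most of the
# total mass in a small fraction of the items.
import random

random.seed(1)
sizes = sorted((random.paretovariate(1.16) for _ in range(10_000)), reverse=True)
top20 = sum(sizes[: len(sizes) // 5])          # total held by the largest 20%
print(f"top 20% of items hold {100 * top20 / sum(sizes):.0f}% of the total")
```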
Why Do We Care?
• This shows up in web page linkage:
• And it shows up in P2P usage:
– Adar & Huberman, “Free Riding on Gnutella”
• http://www.firstmonday.dk/issues/issue5_10/adar/
• Summary: if people are involved, “small
world” and “power law” effects will be
observed…
– design your systems accordingly
P2P can solve this?
Napster
• program for sharing files over the Internet
Napster: how does it work?
Application-level, client-server protocol over
point-to-point TCP
Four steps:
• Connect to Napster server
• Upload your list of files (push) to server.
• Give server keywords to search the full list with.
• Select the “best” of the correct answers (using pings).
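A minimal sketch of the central-index idea behind these steps (class and method names are my own, not the real Napster protocol):

```python
# Minimal sketch of a Napster-style central index (illustrative only;
# not the real Napster wire protocol).

class IndexServer:
    def __init__(self):
        self.files = {}          # filename -> set of peer addresses

    def upload_list(self, peer, filenames):
        """Step 2: a peer pushes its list of shared files to the server."""
        for name in filenames:
            self.files.setdefault(name.lower(), set()).add(peer)

    def search(self, keyword):
        """Step 3: return peers whose filenames contain the keyword."""
        keyword = keyword.lower()
        return {name: peers for name, peers in self.files.items()
                if keyword in name}

server = IndexServer()
server.upload_list(("10.0.0.5", 6699), ["DJ Shadow - Midnight.mp3"])
server.upload_list(("10.0.0.9", 6699), ["Radiohead - Airbag.mp3"])
print(server.search("radiohead"))
```

Steps 1 and 4 (connecting, pinging the candidates, and fetching the file directly from the chosen peer) happen outside the index server; only the directory is centralized.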
Napster (diagram sequence)
1. File list is uploaded by users to napster.com.
2. User requests a search at the server; request and results pass between the user and napster.com.
3. User pings the hosts that apparently have the data, looking for the best transfer rate.
4. User retrieves the file from the chosen host.
Napster: architecture notes
• centralized server:
– single logical point of failure
– can load balance among servers using DNS
rotation
– potential for congestion
– Napster “in control” (freedom is an illusion)
• no security:
– passwords in plain text
– no authentication
– no anonymity
Case studies from now on
• What we care about:
– How much traffic does one query
generate?
– How many hosts can it support at once?
– What is the latency associated with
querying?
– Is there a bottleneck?
Gnutella
• peer-to-peer networking: applications
connect to peer applications
• focus: decentralized method of searching
for files
• servent == server and client
• each application instance serves to:
– store selected files
– route queries (file searches) from and to its
neighboring peers
– respond to queries (serve file) if file stored
locally
Gnutella: how it works
Searching by flooding:
• If you don’t have the file you want, query 7
of your partners.
• If they don’t have it, they contact 7 of their
partners, for a maximum hop count of 10.
• Requests are flooded, but there is no tree
structure.
• No looping but packets may be received
twice.
• Reverse path forwarding: query hits travel back along the path the query took.
Flooding in Gnutella: loop prevention
(diagram: each node keeps a “seen already” list of message IDs, e.g. “A”, and drops duplicates)
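The following Python sketch (my own simplification, not Gnutella wire code) illustrates TTL-scoped flooding with a “seen already” set, so a query is processed at most once per servent even though the overlay contains loops:

```python
# Illustrative sketch of TTL-scoped flooding with duplicate suppression.
# 'network' maps each servent to its neighbor list; all values are made up.

network = {
    "A": ["B", "C"], "B": ["A", "C", "D"],
    "C": ["A", "B", "D"], "D": ["B", "C"],
}
shared = {"D": {"cut_chemist.mp3"}}        # which node holds which files

def flood_query(origin, filename, ttl=10):
    hits, seen = [], set()
    frontier = [(origin, ttl)]
    while frontier:
        node, ttl_left = frontier.pop()
        if node in seen:
            continue                       # loop prevention: drop duplicates
        seen.add(node)
        if filename in shared.get(node, set()):
            hits.append(node)              # a query hit returns on the reverse path
        if ttl_left > 0:
            frontier.extend((n, ttl_left - 1) for n in network[node])
    return hits

print(flood_query("A", "cut_chemist.mp3"))   # -> ['D']
```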
Gnutella message format
• Message ID: 16 bytes
• FunctionID: 1 byte indicating
– 00 ping: used to probe the gnutella network for hosts
– 01 pong: used to reply to a ping; returns # of files shared
– 80 query: search string and desired minimum bandwidth
– 81 query hit: indicates matches to an 80 query; includes my IP
address/port and available bandwidth
• RemainingTTL: decremented at each peer to keep the
flooding TTL-scoped
• HopsTaken: number of peers visited so far
by this message
• DataLength: length of data field
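A sketch of packing and parsing a header with these fields (the slide gives sizes only for MessageID and FunctionID; the single-byte TTL/Hops fields, the 4-byte little-endian DataLength, and the query payload layout are assumptions based on the conventional Gnutella 0.4 format):

```python
# Sketch of a Gnutella-style message header; byte widths beyond those
# stated on the slide are assumptions, not taken from the slide.
import struct, uuid

HEADER = "<16sBBBI"   # MessageID, FunctionID, RemainingTTL, HopsTaken, DataLength

PING, PONG, QUERY, QUERY_HIT = 0x00, 0x01, 0x80, 0x81

def make_query(search_string, ttl=7, min_speed_kbps=0):
    # Assumed query payload: 2-byte minimum speed + null-terminated string.
    payload = struct.pack("<H", min_speed_kbps) + search_string.encode() + b"\x00"
    header = struct.pack(HEADER, uuid.uuid4().bytes, QUERY, ttl, 0, len(payload))
    return header + payload

def parse_header(data):
    msg_id, func, ttl, hops, length = struct.unpack(HEADER, data[:23])
    return {"id": msg_id, "function": func, "ttl": ttl,
            "hops": hops, "data_length": length}

msg = make_query("cut chemist")
print(parse_header(msg))
```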
Gnutella: initial problems and fixes
• Freeloading: WWW sites offering
search/retrieval from Gnutella network without
providing file sharing or query routing.
– Block file-serving to browser-based non-file-sharing
users
• Prematurely terminated downloads:
– long download times over modems
– modem users run the gnutella peer only briefly (a Napster
problem too!), or a user becomes overloaded
– fix: peer can reply “I have it, but I am busy. Try again
later”
A More Centralized Gnutella?
• Reflectors
– maintains an index of its neighbors
– does not re-transmit the query, but answers
from its own index
• “mini-napster”
• prelude to “super-nodes”
• Host caches
– bootstrapping your connection
– more convenient for users, but it doesn’t
produce a nice random graph
• everyone ends up in the tightly connected cell
Gnutella - Power Law?
(Figures 5 & 6 from Ripeanu, Iamnitchi & Foster, IEEE Internet Computing, 6(1), 2002)
Chord: A Scalable Peer-to-peer Lookup
Service for Internet Applications
Ion Stoica, Robert Morris, David Karger,
M. Frans Kaashoek, Hari Balakrishnan
MIT and Berkeley
Motivation
How do we find data in a distributed file-sharing system?
(diagram: a Publisher inserts Key=“LetItBe”, Value=MP3 data at one of the nodes N1..N5 spread across the Internet; a Client asks Lookup(“LetItBe”))
Lookup is the key problem
Centralized Solution
• Central server (Napster)
(diagram: the Publisher's Key=“LetItBe”, Value=MP3 data is registered in a central DB; the Client's Lookup(“LetItBe”) goes to that server)
• Requires O(M) state
• Single point of failure
Distributed Solution (1)
• Flooding (Gnutella, Morpheus, etc.)
(diagram: the Client's Lookup(“LetItBe”) is flooded across N1..N5 until it reaches the node holding the Publisher's data)
• Worst case O(N) messages per lookup
Distributed Solution (2)
• Routed messages (Freenet, Tapestry, Chord, CAN, etc.)
(diagram: the Client's Lookup(“LetItBe”) is routed hop by hop toward the node responsible for the key)
• Only exact matches
Routing Challenges
• Define a useful key nearness metric
• Keep the hop count small
• Keep the routing tables the “right size”
• Stay robust despite rapid changes in membership
Chord Overview
• Provides a peer-to-peer hash lookup service:
– Lookup(key) → IP address
– Chord does not store the data
• How does Chord locate a node?
• How does Chord maintain routing tables?
• How does Chord cope with changes in membership?
Chord IDs
• m-bit identifier space for both keys and nodes
• Key identifier = SHA-1(key)
– Key=“LetItBe” → SHA-1 → ID=60
• Node identifier = SHA-1(IP address)
– IP=“198.10.10.1” → SHA-1 → ID=123
• Both are uniformly distributed
• How to map key IDs to node IDs?
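A minimal sketch of hashing keys and node addresses into the same m-bit space (the helper name and the 7-bit ring are illustrative; the example IDs 60 and 123 on the slide will not match real SHA-1 output):

```python
# Sketch: map keys and node IPs into the same m-bit ID space with SHA-1.
import hashlib

M = 7                      # illustrative ring of 2^7 IDs, as in the figures

def chord_id(value, m=M):
    digest = hashlib.sha1(value.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

print(chord_id("LetItBe"))        # key identifier
print(chord_id("198.10.10.1"))    # node identifier
```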
Consistent Hashing [Karger 97]
(diagram: a circular 7-bit ID space with nodes N32, N90, N123 and keys K5, K20, K60, K101; Key=“LetItBe” hashes to K60, IP=“198.10.10.1” to N123)
• A key is stored at its successor: the node with the next higher ID
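A sketch of the successor rule under the assumption that node IDs are kept in a sorted list: a key is assigned to the first node ID greater than or equal to it, wrapping around past the top of the circular space:

```python
# Sketch of consistent hashing's successor rule on a circular ID space.
import bisect

node_ids = sorted([32, 90, 123])           # node IDs on the 7-bit ring (from the figure)

def successor(key_id, nodes=node_ids):
    """Return the node responsible for key_id: first node ID >= key_id, wrapping."""
    i = bisect.bisect_left(nodes, key_id)
    return nodes[i % len(nodes)]           # wrap around past the highest ID

for key in (5, 20, 60, 101):
    print(f"K{key} -> N{successor(key)}")
# K5 -> N32, K20 -> N32, K60 -> N90, K101 -> N123
```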
Consistent Hashing
• Every node knows of every other node
– requires global information
• Routing tables are large: O(N)
• Lookups are fast: O(1)
(diagram: “Where is ‘LetItBe’?” Hash(“LetItBe”) = K60; any node can answer “N90 has K60” directly)
Chord: Basic Lookup
• Every node knows its successor in the ring
(diagram: “Where is ‘LetItBe’?” Hash(“LetItBe”) = K60; the query is passed from successor to successor around the ring N10, N32, N55, N90, N123 until “N90 has K60”)
• Requires O(N) time
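A sketch of this basic lookup (the successor map follows the figure; the helper names are mine): each node knows only its successor, so the query walks the ring until the key falls between a node and its successor:

```python
# Sketch of Chord's basic lookup: walk the ring successor by successor.
successor_of = {10: 32, 32: 55, 55: 90, 90: 123, 123: 10}   # ring from the figure

def in_interval(x, a, b):
    """True if x lies in the half-open ring interval (a, b]."""
    return (a < x <= b) if a < b else (x > a or x <= b)

def basic_lookup(start, key_id):
    node = start
    while not in_interval(key_id, node, successor_of[node]):
        node = successor_of[node]          # O(N) hops in the worst case
    return successor_of[node]

print(basic_lookup(10, 60))   # -> 90, i.e. "N90 has K60"
```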
“Finger Tables”
• Every node knows m other nodes in the ring
• Increase distance exponentially
• Finger i points to the successor of n + 2^i
(diagram: node N80's fingers at 80+2^0, 80+2^1, ..., 80+2^6 point to nodes N96, N112, N120, and N16)
Lookups are Faster
• Lookups take O(log N) hops
(diagram: Lookup(K19) issued at N32 follows fingers around a ring of N5, N10, N20, N32, N60, N80, N99, N110 and ends at N20, which stores K19)
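A sketch of the finger-based lookup (node IDs follow the figure; the routines are my own simplification of the Chord paper's find_successor with a closest-preceding-finger step): each query is forwarded to the closest finger that precedes the key, so every hop roughly halves the remaining distance:

```python
# Sketch: O(log N) Chord lookup using finger tables (illustrative data only).
M, RING = 7, 2 ** 7
node_ids = sorted([5, 10, 20, 32, 60, 80, 99, 110])      # nodes from the figure

def in_interval(x, a, b):                  # half-open ring interval (a, b]
    return (a < x <= b) if a < b else (x > a or x <= b)

def successor(ident):
    return min((n for n in node_ids if n >= ident), default=node_ids[0])

# finger[n][i] = successor(n + 2^i), as defined on the "Finger Tables" slide
finger = {n: [successor((n + 2 ** i) % RING) for i in range(M)] for n in node_ids}

def find_successor(node, key_id, hops=0):
    if in_interval(key_id, node, finger[node][0]):        # finger 0 is the successor
        return finger[node][0], hops + 1
    for f in reversed(finger[node]):                      # closest preceding finger
        if in_interval(f, node, key_id) and f != key_id:
            return find_successor(f, key_id, hops + 1)
    return finger[node][0], hops + 1

print(find_successor(32, 19))   # -> (20, hop count): N20 stores K19
```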
Chord properties
• Efficient: O(log N) messages per lookup
– N is the total number of servers
• Scalable: O(log N) state per node
• Robust: survives massive changes in membership
• Proofs are in the paper / tech report
– Assuming no malicious participants
Joining the Ring
• Three-step process:
– Initialize all fingers of the new node
– Update fingers of existing nodes
– Transfer keys from the successor to the new node
• Less aggressive mechanism (lazy finger update):
– Initialize only the finger to the successor node
– Periodically verify immediate successor and predecessor
– Periodically refresh finger table entries
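A sketch of the lazy mechanism (stabilize, notify, and fix_fingers follow the terminology of the Chord paper; the Node class and its simplified lookup are my own): each node periodically verifies its successor and refreshes fingers instead of aggressively updating everyone at join time:

```python
# Sketch of Chord's lazy join maintenance: periodic stabilize() / fix_fingers().
import random

M, RING = 7, 2 ** 7

def between(x, a, b):
    """True if x lies strictly between a and b going clockwise on the ring."""
    return (a < x < b) if a < b else (x > a or x < b)

class Node:
    def __init__(self, ident):
        self.id = ident
        self.successor, self.predecessor = self, None
        self.finger = [self] * M

    def find_successor(self, key_id):
        """Simplified O(N) successor walk; a real node would use its fingers."""
        node = self
        while not (between(key_id, node.id, node.successor.id)
                   or key_id == node.successor.id):
            node = node.successor
        return node.successor

    def join(self, known):
        """Lazy join: initialize only the finger to the successor node."""
        self.successor = known.find_successor(self.id)

    def stabilize(self):
        """Periodically verify the immediate successor and tell it about us."""
        x = self.successor.predecessor
        if x is not None and between(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

    def notify(self, candidate):
        """candidate believes it may be our predecessor."""
        if self.predecessor is None or between(candidate.id,
                                               self.predecessor.id, self.id):
            self.predecessor = candidate

    def fix_fingers(self):
        """Periodically refresh one finger table entry."""
        i = random.randrange(M)
        self.finger[i] = self.find_successor((self.id + 2 ** i) % RING)

# Usage: N36 joins a two-node ring via N5; stabilize() repairs the pointers.
n5, n40 = Node(5), Node(40)
n5.successor = n40; n40.successor = n5
n5.predecessor = n40; n40.predecessor = n5
n36 = Node(36)
n36.join(n5)
for node in (n36, n5, n40):
    node.stabilize()
print(n5.successor.id, n36.successor.id)   # -> 36 40
```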
Joining the Ring - Step 1
• Initialize the new node's finger table
– Locate any node p in the ring
– Ask node p to look up the fingers of new node N36
– Return the results to the new node
(diagram: 1. Lookup(37,38,40,…,100,164) issued on a ring of N5, N20, N40, N60, N80, N99)
Joining the Ring - Step 2
• Updating fingers of existing nodes
– the new node calls an update function on existing nodes
– existing nodes can recursively update fingers of other nodes
(diagram: ring with N5, N20, N36, N40, N60, N80, N99)
Joining the Ring - Step 3
• Transfer keys from the successor node to the new node
– only keys in the range are transferred
(diagram: copy keys 21..36 from N40 to N36, so K30 moves to N36 while K38 stays at N40)
Handling Failures
• Failure of nodes might cause incorrect lookups
(diagram: ring with N10, N80, N85, N102, N113, N120; Lookup(90) is issued at N80 after some nodes have failed)
• N80 doesn't know its correct successor, so the lookup fails
• Successor fingers are enough for correctness
Handling Failures
• Use a successor list
– Each node knows its r immediate successors
– After a failure, it will know the first live successor
– Correct successors guarantee correct lookups
• The guarantee is probabilistic
– r can be chosen to make the probability of lookup failure arbitrarily small
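A minimal sketch of the successor-list idea (the ring, r, and the set of failed nodes are illustrative, with node IDs borrowed from the failure example above): each node remembers its r immediate successors and skips to the first live one when its successor fails:

```python
# Sketch: successor lists let lookups survive node failures.
R = 3
ring = [10, 80, 85, 102, 113, 120]                 # node IDs in ring order

def successor_list(node, r=R):
    i = ring.index(node)
    return [ring[(i + k) % len(ring)] for k in range(1, r + 1)]

alive = set(ring) - {85, 102}                      # suppose N85 and N102 failed

def next_live_successor(node):
    for s in successor_list(node):
        if s in alive:
            return s
    return None                                    # all r successors failed

print(successor_list(80))          # [85, 102, 113]
print(next_live_successor(80))     # 113: the lookup can continue past the failures
```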
Evaluation Overview
• Quick lookups in large systems
• Low variation in lookup costs
• Robust despite massive failure
• Experiments confirm theoretical results
Cost of lookup
• Cost is O(log N), as predicted by theory
– the constant is 1/2
(plot: average messages per lookup vs. number of nodes)
Robustness
• Simulation results: static scenario
• A failed lookup means the original node holding the key failed (no replicas of keys)
• Result implies good balance of keys among nodes!
Robustness
• Simulation results: dynamic scenario
• A failed lookup means the finger path contains a failed node
• 500 nodes initially
• average stabilize() call every 30 s
• 1 lookup per second (Poisson)
• x joins/failures per second (Poisson)
Current implementation
• Chord library: 3,000 lines of C++
• Deployed in a small Internet testbed
• Includes:
– Correct concurrent join/fail
– Proximity-based routing for low delay (?)
– Load control for heterogeneous nodes (?)
– Resistance to spoofed node IDs (?)
Strengths
• Based on theoretical work (consistent hashing)
• Proven performance in many different aspects
– “with high probability” proofs
• Robust (Is it?)
Weakness
• NOT that simple (compared to CAN)
• Member joining is complicated
– the aggressive mechanism requires too many messages and updates
– no analysis of convergence for the lazy finger mechanism
• Key management mechanism is mixed between layers
– the upper layer does insertion and handles node failures
– Chord transfers keys when a node joins (no leave mechanism!)
• Routing table grows with the number of members in the group
• Worst case lookup can be slow
Discussions
• Network proximity (consider latency?)
• Protocol security
– Malicious data insertion
– Malicious Chord table information
• Keyword search and indexing
• ...