Lecture 4 Transcript

Networks and Distributed Systems
CSE 490h

This presentation contains content licensed under the Creative Commons Attribution 2.5 License.
Outline

- Networking
- Remote Procedure Calls (RPC)
- Transaction Processing Systems
- Failure & Reliability

Fundamentals of Networking

Sockets: The Internet = tubes?

- A socket is the basic network interface
- Provides a two-way “pipe” abstraction between two applications
- The client creates a socket and connects to the server, which receives a socket representing the other side of the connection

Ports

- Within an IP address, a port is a sub-address identifying a listening program
- Ports allow multiple clients to connect to a server at once

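As a concrete illustration, here is a minimal Java sketch of the client side (the host name and request are illustrative, not from the lecture). Opening a Socket connects to the server's port; the OS picks the client's own port automatically.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class ClientSketch {
        public static void main(String[] args) throws Exception {
            // Connect to the server's well-known port; the local port is
            // a random unused number chosen by the OS.
            try (Socket sock = new Socket("example.com", 80)) {
                System.out.println("OS-chosen local port: " + sock.getLocalPort());
                PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(sock.getInputStream()));
                out.println("GET / HTTP/1.0");
                out.println();                      // blank line ends the request
                System.out.println(in.readLine());  // e.g., "HTTP/1.0 200 OK"
            }
        }
    }
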
Example: Web Server (1/3)

1) Server creates a socket attached to port 80

The server creates a listener socket attached to a specific port.
80 is the agreed-upon port number for web traffic.

Example: Web Server (2/3)

2) Client creates a socket and connects to the host

[Diagram: client’s socket (anon port) — Connect: 66.102.7.99 : 80 — server’s listener on port 80]

The client-side socket is still bound to a port, but the OS chooses a random unused port number.
When the client requests a URL (e.g., “www.google.com”), its OS uses a system called DNS to find the server’s IP address.

Example: Web Server (3/3)

3) Server accepts the connection, getting a new socket for this client
4) Data flows across the connected socket as a “stream”, just like a file

The server uses a new socket for this particular client; the OS distinguishes connections by the client’s address and port, so the listener keeps port 80.
The listener is ready for more incoming connections, while we process the current connection in parallel.

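A minimal Java sketch of the listener pattern just described (handle() is a hypothetical stub). accept() hands back a fresh socket for each client while the listener keeps listening.

    import java.net.ServerSocket;
    import java.net.Socket;

    public class WebServerSketch {
        public static void main(String[] args) throws Exception {
            // Bind the listener to port 80 (may require privileges).
            try (ServerSocket listener = new ServerSocket(80)) {
                while (true) {
                    Socket client = listener.accept();         // new per-client socket
                    new Thread(() -> handle(client)).start();  // process in parallel
                }
            }
        }

        private static void handle(Socket client) {
            // Hypothetical: read the request from the client's input stream,
            // write the response to its output stream, then close it.
        }
    }
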
What makes this work?

- Underneath the socket layer are several more protocols
- Most important are TCP and IP (which are used hand-in-hand so often, they’re often spoken of as one protocol: TCP/IP)

[Packet layout: IP header | TCP header | Your data]

Even more low-level protocols handle how data is sent over Ethernet wires, or how bits are sent through the air using 802.11 wireless…

IP: The Internet Protocol

- Defines the addressing scheme for computers
- Encapsulates internal data in a “packet”
- Does not provide reliability
- Just includes enough information to tell routers where to send the data

TCP: Transmission Control Protocol

- Built on top of IP
- Introduces the concept of a “connection”
- Provides reliability and ordering

[Packet layout: IP header | TCP header | Your data]

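To make the TCP/IP distinction concrete, here is a short Java sketch (host and ports are illustrative). A TCP Socket gives a reliable, ordered stream; a UDP DatagramSocket sends individual packets with semantics much closer to raw IP.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.Socket;

    public class TcpVsUdp {
        public static void main(String[] args) throws Exception {
            // TCP: a connection; bytes arrive reliably and in order,
            // or the connection reports an error.
            try (Socket tcp = new Socket("example.com", 80)) {
                tcp.getOutputStream().write("hello".getBytes());
            }

            // UDP: one datagram, no connection; it may be dropped,
            // duplicated, or reordered in transit.
            try (DatagramSocket udp = new DatagramSocket()) {
                byte[] data = "hello".getBytes();
                udp.send(new DatagramPacket(data, data.length,
                        InetAddress.getByName("example.com"), 9999));
            }
        }
    }
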
Why is This Necessary?

- The Internet is not actually tube-like “underneath the hood”
- Unlike the phone system (circuit-switched), the packet-switched Internet uses many routes at once

[Diagram: multiple routes between you and www.google.com]

Networking Issues

- If a party to a socket disconnects, how much data did they receive?
  - … Did they crash? Or did a machine in the middle?
- Can someone in the middle intercept/modify our data?
- Traffic congestion makes switch/router topology important for efficient throughput

Remote Procedure Calls (RPC)

How RPC Doesn’t Work

- Regular client-server protocols involve sending data back and forth according to a shared state

Client: GET /index.html HTTP/1.0
Server: 200 OK
        Length: 2400
        (file data)
Client: GET /hello.gif HTTP/1.0
Server: 200 OK
        Length: 81494
…

Remote Procedure Call

- RPC servers will call arbitrary functions in a dll or exe, with arguments passed over the network, and return values sent back over the network

Client: foo.dll,bar(4, 10, “hello”)
Server: “returned_string”
Client: foo.dll,baz(42)
Server: err: no such function
…

Possible Interfaces

- RPC can be used with two basic interfaces: synchronous and asynchronous
- Synchronous RPC is a “remote function call” – the client blocks and waits for the return value
- Asynchronous RPC is a “remote thread spawn”

Synchronous RPC

[Diagram: client and server timelines, linked through the RPC dispatcher]

Client:
    s = RPC(server_name, “foo.dll”, get_hello, arg, arg, arg…)   // blocks here
    print(s);
    ...

Server (foo.dll):
    String get_hello(a, b, c)
    {
      …
      return “some hello str!”;
    }

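As one hedged, concrete illustration in Java, this is roughly what a synchronous RPC looks like with Java RMI; the interface name, method, and URL are invented for the example. The client blocks at the call site, just as in the diagram.

    import java.rmi.Naming;
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical remote interface, standing in for "foo.dll".
    interface HelloService extends Remote {
        String getHello(int a, int b, String c) throws RemoteException;
    }

    public class SyncRpcClient {
        public static void main(String[] args) throws Exception {
            // The lookup returns a client-side stub for the remote object.
            HelloService svc = (HelloService) Naming.lookup("rmi://server_name/hello");
            String s = svc.getHello(1, 2, "arg");  // blocks until the server returns
            System.out.println(s);
        }
    }
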
Asynchronous RPC

[Diagram: client and server timelines, linked through the RPC dispatcher]

Client:
    h = Spawn(server_name, “foo.dll”, long_runner, x, y…)
    ...
    (More code keeps running…)
    GiantObject myObj = Sync(h);

Server (foo.dll):
    GiantObject long_runner(x, y)
    {
      …
      return new GiantObject();
    }

Asynchronous RPC 2: Callbacks

[Diagram: client and server timelines, linked through the RPC dispatcher]

Client:
    h = Spawn(server_name, “foo.dll”, callback, long_runner, x, y…)
    ...
    (More code runs…)

    // Thread spawns when the result arrives:
    void callback(o)
    {
      // uses Result
    }

Server (foo.dll):
    Result long_runner(x, y)
    {
      …
      return new Result();
    }

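A local Java analogue of both asynchronous patterns, using CompletableFuture; longRunner, GiantObject, and useResult are invented placeholders for the remote pieces. supplyAsync plays the role of Spawn, join plays the role of Sync, and thenAccept registers a callback.

    import java.util.concurrent.CompletableFuture;

    public class AsyncRpcSketch {
        static class GiantObject { }                 // placeholder result type

        static GiantObject longRunner(int x, int y) {
            return new GiantObject();                // stands in for the remote work
        }

        static void useResult(GiantObject o) { /* callback body */ }

        public static void main(String[] args) {
            // "Spawn": start the long-running work without blocking.
            CompletableFuture<GiantObject> h =
                    CompletableFuture.supplyAsync(() -> longRunner(4, 10));

            // (More client code keeps running here...)

            // Callback style: run useResult whenever the result arrives.
            h.thenAccept(AsyncRpcSketch::useResult);

            // "Sync(h)": block until the result is available.
            GiantObject myObj = h.join();
        }
    }
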
Wrapper Functions

- Writing rpc_call(foo.dll, bar, arg0, arg1…) is poor form
  - Confusing code
  - Breaks abstraction
- A wrapper function makes the code cleaner:

    bar(arg0, arg1); // just write this; calls the “stub”

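A sketch of such a stub in Java; rpcCall is a hypothetical helper standing in for the marshalling and transport layer, not a real API. Callers just write FooStub.bar(4, 10) and never see the plumbing.

    // Client-side "stub" wrapping the RPC machinery behind a normal call.
    public class FooStub {
        public static String bar(int arg0, int arg1) {
            // Marshal the arguments, send them, and unmarshal the reply.
            return (String) rpcCall("foo.dll", "bar", arg0, arg1);
        }

        // Hypothetical transport helper; a real system generates this layer.
        private static Object rpcCall(String module, String func, Object... args) {
            throw new UnsupportedOperationException("transport not shown");
        }
    }
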
More Design Considerations

- Who can call RPC functions? Anybody?
- How do you handle multiple versions of a function?
- Need to marshal objects
- How do you handle error conditions?
- Numerous protocols: DCOM, CORBA, JRMI…

(break)
Transaction Processing Systems
(We’re using the blue cover sheets on the TPS reports now…)

TPS: Definition

- A system that handles transactions coming from several sources concurrently
- Transactions are “events that generate and modify data stored in an information system for later retrieval”*

* http://en.wikipedia.org/wiki/Transaction_Processing_System

Reliability Demands

- Support partial failure
  - The total system must support graceful decline in application performance rather than a full halt
- Data Recoverability
  - If components fail, their workload must be picked up by still-functioning units
- Individual Recoverability
  - Nodes that fail and restart must be able to rejoin the group activity without a full group restart
- Consistency
  - Concurrent operations or partial internal failures should not cause externally visible nondeterminism
- Scalability
  - Adding increased load to a system should not cause outright failure, but a graceful decline
  - Increasing resources should support a proportional increase in load capacity
- Security
  - The entire system should be impervious to unauthorized access
  - This requires considering many more attack vectors than single-machine systems

Ken Arnold, CORBA designer:
“Failure is the defining difference between distributed and local programming”

Component Failure

- Individual nodes simply stop

Data Failure

- Packets omitted by an overtaxed router
- Or dropped by a full receive buffer in the kernel
- Corrupt data retrieved from disk or net

Network Failure

- External & internal links can die
  - Some can be routed around in ring or mesh topology
  - Star topology may cause individual nodes to appear to halt
  - Tree topology may cause a “split”
- Messages may be sent multiple times, not at all, or in corrupted form…

Timing Failure

- Temporal properties may be violated
  - Lack of a “heartbeat” message may be interpreted as a component halt
  - Clock skew between nodes may confuse version-aware data readers

Byzantine Failure

- Difficult-to-reason-about circumstances arise
  - Commands sent to a foreign node are not confirmed: what can we conclude about the state of the system?

Malicious Failure

- A malicious (or maybe naïve) operator injects invalid or harmful commands into the system

Preparing for Failure

- Distributed systems must be robust to these failure conditions
- But there are lots of pitfalls…

The Eight Design Fallacies

1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn't change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.

-- Peter Deutsch and James Gosling, Sun Microsystems

Dealing With Component Failure

- Use heartbeats to monitor component availability (see the sketch after this list)
- A “buddy” or “parent” node is aware of the desired computation and can restart it elsewhere if needed
- Individual storage nodes should not be the sole owner of data
  - Pitfall: How do you keep replicas consistent?

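A minimal heartbeat-monitor sketch in Java, as promised above; the timeout value and the restartElsewhere() recovery hook are assumptions for illustration, not lecture specifics.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class HeartbeatMonitor {
        private static final long TIMEOUT_MS = 5_000;   // illustrative threshold
        private volatile long lastBeat = System.currentTimeMillis();

        // Called whenever a heartbeat message arrives from the component.
        public void onHeartbeat() { lastBeat = System.currentTimeMillis(); }

        public void start() {
            ScheduledExecutorService sched = Executors.newSingleThreadScheduledExecutor();
            sched.scheduleAtFixedRate(() -> {
                if (System.currentTimeMillis() - lastBeat > TIMEOUT_MS) {
                    restartElsewhere();   // buddy/parent restarts the computation
                }
            }, TIMEOUT_MS, TIMEOUT_MS, TimeUnit.MILLISECONDS);
        }

        private void restartElsewhere() { /* hypothetical recovery hook */ }
    }
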
Dealing With Data Failure

- Data should be check-summed and verified at several points (see the sketch below)
  - Never trust another machine to do your data validation!
- Sequence identifiers can be used to ensure commands and packets are not lost

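A small Java sketch of receiver-side verification using java.util.zip.CRC32: the receiver recomputes the checksum itself instead of trusting the sender's validation. The caller and re-request policy are left out.

    import java.util.zip.CRC32;

    public class ChecksumCheck {
        // True only if the payload matches the checksum shipped alongside
        // it; on false, the caller should discard and re-request the data.
        static boolean verify(byte[] payload, long expectedCrc) {
            CRC32 crc = new CRC32();
            crc.update(payload);
            return crc.getValue() == expectedCrc;
        }
    }
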
Dealing With Network Failure

- Have a well-defined split policy
  - Networks should routinely self-discover topology
  - Well-defined arbitration/leader-election protocols determine authoritative components
  - Inactive components should gracefully clean up and wait for network rejoin

Dealing With Other Failures

- Individual application-specific problems can be difficult to envision
- Make as few assumptions about foreign machines as possible
- Design for security at each step

Key Features of TPS: ACID

“ACID” is the acronym for the features a TPS must support:

- Atomicity – A set of changes must all succeed or all fail
- Consistency – Changes to data must leave the data in a valid state when the full change set is applied
- Isolation – The effects of a transaction must not be visible until the entire transaction is complete
- Durability – After a transaction has been committed successfully, the state change must be permanent

Atomicity & Durability

What happens if we write half of a transaction to disk and the power goes out?

Logging: The Undo Buffer

1. The database writes to the log the current values of all cells it is going to overwrite
2. The database overwrites the cells with new values
3. The database marks the log entry as committed

If the db crashes during (2), we use the log to roll back the tables to their prior state.

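A sketch of the three steps in Java, assuming hypothetical Table and UndoLog abstractions; the essential ordering is that old values are forced to the log before any cell is overwritten.

    import java.util.Map;

    // Invented interfaces standing in for the real storage engine.
    interface Table { int read(String cell); void write(String cell, int value); }
    interface UndoLog { void append(String cell, int oldValue); void flush(); void markCommitted(); }

    public class UndoLogging {
        private final Table table;
        private final UndoLog log;

        UndoLogging(Table table, UndoLog log) { this.table = table; this.log = log; }

        void applyTransaction(Map<String, Integer> updates) {
            // 1) Log the current (old) value of every cell we will overwrite.
            for (String cell : updates.keySet())
                log.append(cell, table.read(cell));
            log.flush();   // old values are safely on disk before any overwrite

            // 2) Overwrite the cells with their new values.
            for (Map.Entry<String, Integer> e : updates.entrySet())
                table.write(e.getKey(), e.getValue());

            // 3) Commit point: a crash during (2) is rolled back from the
            //    log; a crash after this line keeps the new values.
            log.markCommitted();
        }
    }
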
Consistency: Data Types

- Data entered in databases have rigorous data types associated with them, and explicit ranges
- This does not protect against all errors (entering a date in the past is still a valid date, etc.), but it eliminates tedious programmer concerns

Consistency: Foreign Keys

[Schema: Purchases(Purchase_id, Purchaser_name, Item_purchased FOREIGN) → Items(Item_id, Item_name, Cost)]

- Database designers declare that certain fields are indices into the keys of another table
- The database ensures that the target key exists before allowing a value in the source field

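In code terms, the invariant the database enforces looks roughly like this Java sketch, with the two tables modeled as maps (the types and field names are invented for illustration):

    import java.util.HashMap;
    import java.util.Map;

    public class ForeignKeyCheck {
        record Item(String name, int cost) { }
        record Purchase(String purchaserName, int itemPurchased) { }

        private final Map<Integer, Item> items = new HashMap<>();
        private final Map<Integer, Purchase> purchases = new HashMap<>();

        void insertPurchase(int purchaseId, Purchase p) {
            // The foreign-key check: the referenced item must already exist.
            if (!items.containsKey(p.itemPurchased()))
                throw new IllegalArgumentException("no such Item_id: " + p.itemPurchased());
            purchases.put(purchaseId, p);
        }
    }
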
Isolation

- Using mutual-exclusion locks, we can prevent other processes from reading data we are in the process of writing
- When a database is prepared to commit a set of changes, it locks any records it is going to update before making the changes

Faulty Locking

Transaction 1:                Transaction 2:
    Lock (A)
    Write to table A
    Unlock (A)
                                  Lock (A)
                                  Read from A
                                  Unlock (A)
    Lock (B)
    Write to table B
    Unlock (B)

- Locking alone does not ensure isolation!
- Changes to table A are visible before changes to table B – this is not an isolated transaction

Two-Phase Locking

Transaction 1:                Transaction 2:
    Lock (A)
    Lock (B)
    Write to table A
    Write to table B
    Unlock (A)
    Unlock (B)
                                  Lock (A)
                                  Read from A
                                  Unlock (A)

- After a transaction has released any locks, it may not acquire any new locks
- Effect: the lock set owned by a transaction has a “growing” phase and a “shrinking” phase

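A Java sketch of the writer's transaction using java.util.concurrent locks; lockA and lockB are assumed to guard tables A and B. Every lock is acquired before any is released, so the lock set only grows, then only shrinks.

    import java.util.concurrent.locks.ReentrantLock;

    public class TwoPhaseLocking {
        private final ReentrantLock lockA = new ReentrantLock();
        private final ReentrantLock lockB = new ReentrantLock();

        void writeBothTables() {
            lockA.lock();          // growing phase: acquire...
            lockB.lock();          // ...every lock we will need
            try {
                writeTableA();
                writeTableB();
            } finally {
                lockB.unlock();    // shrinking phase: release only;
                lockA.unlock();    // no new locks may be acquired now
            }
        }

        private void writeTableA() { /* hypothetical table update */ }
        private void writeTableB() { /* hypothetical table update */ }
    }
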
Relationship to Distributed Computing

- At the heart of a TPS is usually a large database server
- Several distributed clients may connect to this server at various points in time
- The database may be spread across multiple servers, but must still maintain ACID

Conclusions

- We’ve seen three layers that make up a distributed system
- Designing a large distributed system involves engineering tradeoffs at each of these levels
- Appreciating subtle concerns at each level requires diving past the abstractions, but abstractions are still useful in general