Packet Switching


CS 381: Introduction to Computer Networks
Chapter 1 - Lecture 2
2/5/2015
The network edge:
•End systems (hosts):
• All Internet applications are implemented at the end systems.
• HTTP, FTP, SSH, SCP, DNS, SMTP
• Reasons for this?
The Network Core
• Mesh of interconnected routers
• The fundamental question:
• how is data transferred through the network?
• Compare telephone network and
Internet
• Telephone network employs “circuit switching”
• resources necessary to make call are reserved for
duration of communication
Packet Switched Networks
• Packet switches generally use a technique called store-and-forward:
• All of the bits of a packet must have arrived at input link before
switch will put first bit of packet onto output link.
• In other words, no part of a data packet can be on different links at the same time.
• Introduces store-and-forward delay.
• Time needed for the packet to move from the link into the packet switch
• Time needed for the switch to process the packet and select the outgoing link
• Time needed to push the packet onto the outgoing link
Packet Switched Networks
• Switches have multiple input links and output links
• Input/Output queues associated with each link in the packet
switch (Nodal Delay)
• Output links can hold packets that arrived when switch was
transmitting another packet.
• Input links hold bits of a packet until all bits have arrived.
Packet Switched Networks
• Queuing Delay:
• The length of time a packet spends in the input/output queue of a router/switch
• Time in queue determined by multiple factors
• Highly variable
• Two delays in packet switched networks introduced:
• Store-and-forward delays
• Queuing delays
• More delays to come!
• Easier now to visualize the idea that the Internet is a “best effort” service.
Packet Switch Delays and Loss
• If a packet switch is receiving more packets than it can handle,
buffers will begin to overflow
• Dropped packets.
• Buffers on the routers/switches are just cache memory, which has finite storage capacity.
• The application has no knowledge of this and is not informed.
• It learns of the dropped packet(s) only after they have not arrived.
• Even then, it doesn’t know if it was dropped or is being delayed in the network.
• Is this a problem?
• Of course!
• Solutions?
• Flow Control
• Congestion Control
Packet Switching: Statistical Multiplexing
(figure: hosts A-E share a 100 Mb/s Ethernet link into a switch whose 1.5 Mb/s output link has a queue of packets waiting for the output link)
• Sequence of A & B packets does not have fixed pattern, bandwidth shared on demand
• statistical multiplexing.
• TDM: each host gets same slot in revolving TDM frame.
Packet-switching: store-and-forward
(figure: a packet of L bits crossing links of rate R between source and destination)
• store and forward:
• entire packet must arrive at router before it can be transmitted on next
link
• Let R be the transmission rate in bits per second
• Let L be the number of bits in a packet
• Transmission delay = L/R seconds
Packet-switching: store-and-forward
• Transmission delay of link is L/R seconds.
• Takes L/R seconds to transmit (push out) packet of L bits on to link at R bps
• Assuming all links have the same transmission rate, and ignoring propagation delay
• total transmission delay = NumberOfLinks*L/R
• Example:
• L = 7.5 Mb
• R = 1.5 Mbps
• Total transmission delay = 3(7.5) / 1.5 = 15 seconds
• 5 seconds per link, 3 links
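A quick check of the arithmetic above in Python (a minimal sketch; the function name is illustrative):

def store_and_forward_delay(L_bits, R_bps, num_links):
    # Total transmission delay for one packet crossing num_links
    # store-and-forward links of equal rate R (propagation and
    # queuing delays ignored, as on the slide).
    return num_links * L_bits / R_bps

print(store_and_forward_delay(7.5e6, 1.5e6, 3))  # 15.0 seconds: 5 s per link, 3 links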
Packet-switching: store-and-forward
• Another Example:
• L = 8 Mb
• R = 2 Mbps
• Total transmission delay = 4(8) / 2 = 16 seconds
• 4 seconds per link, 4 links
Compare Packet Switching and Circuit
Switching
• Problems with circuit switching:
• Likely to be times when circuit is not being used but is still
dedicated to the communication
• No one else can use resource even though it is idle.
• More expensive and more difficult to implement than packet
switching
• Think about how dedicated bandwidth services would be implemented.
• Each communication stream would require link setup
Packet Switching (can be) More Efficient
• Assume 10 circuits on one megabit/second TDM link
• 10 slots per second (.1 seconds per slot)
• Each circuit can transmit at 1 megabit/second for 1/10th of a second for
effective transmission rate of 100 kbps.
• One user generates 1000 packets of 1000 bits = 1 megabit
• How long to transmit data?
Packet Switching (can be) More Efficient
• How long to transmit data?
• 10 seconds: the user transmits only during its time slot (an effective rate of 100 kb/s), then waits for the next opportunity to send …
• Takes 10 seconds even if no other communication is currently active
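The 10-second answer follows from the effective per-circuit rate; a minimal sketch (variable names are illustrative):

link_rate_bps = 1e6          # 1 Mb/s TDM link
num_circuits = 10            # each circuit gets 1 slot in 10
effective_rate_bps = link_rate_bps / num_circuits   # 100 kb/s per circuit

data_bits = 1000 * 1000      # 1000 packets of 1000 bits = 1 Mb
print(data_bits / effective_rate_bps)  # 10.0 seconds, even if all other circuits are idle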
Packet Switching
• Can allow more users to use network.
• Consider 1 Mb/s link
• Assume each user:
• 100 kb/s when “active”
• active 10% of time
(figure: N users sharing a 1 Mbps link)
• circuit-switching:
• 10 users
• packet switching:
• with 35 users, the probability that more than 10 are active at the same time is quite low (about 0.0004)
• i.e., 10 or fewer users are active about 99.96% of the time
• Since this is the case, a larger number of users can connect to the
network
• Increasing efficiency without increasing bandwidth.
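The probability claim can be checked with a binomial computation; a minimal sketch:

from math import comb

n, p = 35, 0.1   # 35 users, each independently active 10% of the time
# Probability that more than 10 users are active simultaneously:
p_more_than_10 = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(11))
print(p_more_than_10)  # roughly 0.0004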
Packet switching versus circuit switching
• Is packet switching a “slam dunk winner?”
• Packet switching provides
• resource sharing
• and is simpler, no call setup
• Great for so-called “bursty” data (i.e., not a continuous stream but transmitting data from time to time)
• Assumes low probability of users being active at exactly the same time.
• However there are problems with packet switching networks
• Varying delays
• Difficult to provide constant path resources
• congestion: packet delay and loss
• protocols needed for reliable data transfer, congestion control
Packet switching versus circuit switching
• Q: How to provide circuit-like behavior?
• bandwidth guarantees needed for audio/video apps
• still an unsolved problem (chapter 7)
Internet structure: network of networks
• Roughly hierarchical
• At center: “tier-1” ISPs (e.g., Verizon, Sprint, AT&T, Cable and Wireless),
national/international coverage
• treat each other as equals
(figure: three Tier-1 ISPs; Tier-1 providers interconnect (peer) privately)
Internet structure: network of networks
• Tier 1 ISP
• Network that can reach every other network on the Internet without purchasing IP
transit
• Tier 2 ISP
• Network that peers with one or more tier 1 ISPs, possibly other tier 2 ISPs
• Regional ISP
• Tier 3 ISP
• A network that solely purchases transit from other networks to participate in the
Internet
• Local ISP
Internet structure: network of networks
• A packet passes through many networks!
(figure: a packet's path climbs from a local ISP through Tier-3 and Tier-2 ISPs, crosses the Tier-1 ISPs at the core, and descends through Tier-2 and local ISPs to the destination)
Chapter 1: roadmap
1.1 What is the Internet?
1.2 Network edge
• end systems, access networks, links
1.3 Network core
• circuit switching, packet switching, network structure
1.4 Delay, loss and throughput in packet-switched networks
1.5 Protocol layers, service models
1.6 Networks under attack: security
1.7 History
How do loss and delay occur?
packets queue in router buffers
• packet arrival rate to link exceeds output link capacity
• packets queue, wait for turn
(figure: packets from A and B queue in a router buffer; one packet being transmitted, others queueing (delay); arriving packets are dropped (loss) if no free buffers)
Four sources of packet delay
1. nodal processing:
• check bit errors
• determine output link
2. queueing:
• time waiting to start transmission
• depends on congestion level of router
(figure: a packet travels from A to B; nodal processing and queueing occur at the router, followed by transmission and propagation)
Delay in packet-switched networks
3. Transmission delay:
• R = link bandwidth (bps)
• L = packet length (bits)
• time to send bits into link = L/R
4. Propagation delay:
• d = length of physical link
• s = propagation speed in medium (~2×10^8 m/sec)
• propagation delay = d/s
(figure: same A-to-B picture, with nodal processing, queueing, transmission, and propagation delays labeled)
Nodal delay
dnodal = dproc + dqueue + dtrans + dprop
• Nodal Delay:
• Delays experienced at each network switch/router along a path
• dproc = processing delay
• Packet header examination time
• typically a few microsecs or less
• dqueue = queuing delay
• Input/output queues
• Input Queue delay: time waiting at node before processing
• Output Queue delay: time waiting at node before transmission onto link
• depends on congestion, node CPU
• dtrans = transmission delay
• Time needed to “push” packet onto link
• = L/R, significant for low-speed links
• dprop = propagation delay
• Time needed to traverse a link (wired or wireless)
• A few microsecs to hundreds of msecs
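Putting the four components together, per the formula above; a minimal sketch (the numbers in the example are hypothetical, not measurements):

def nodal_delay(d_proc, d_queue, L_bits, R_bps, d_meters, s_mps=2e8):
    # dnodal = dproc + dqueue + dtrans + dprop,
    # with dtrans = L/R and dprop = d/s.
    d_trans = L_bits / R_bps
    d_prop = d_meters / s_mps
    return d_proc + d_queue + d_trans + d_prop

# e.g. a 1500-byte packet on a 10 Mb/s link spanning 100 km,
# with 20 microseconds of processing and 1 ms of queueing:
print(nodal_delay(20e-6, 1e-3, 1500 * 8, 10e6, 100e3))  # ~0.00272 s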
Queuing delay (revisited)
• Queuing delay is most highly variable aspect of delay
• Important to get a feel for how queuing delay happens
• R = link bandwidth (bps)
• L = packet length (bits)
• a = average packet arrival rate (number of packets per second)
• Assume all packets are of length L and that packets arrive at a steady rate.
• Define:
• Traffic Intensity (I) = La/R
• La = (mean) number of bits per second arriving at the router
• R is the link transmission rate
• What does it mean when I > 1?
Queuing delay (revisited)
• What does it mean when Traffic Intensity > 1?
• Router is receiving bits at a faster rate than it can process
them
• Queuing delay increases as router cache begins to store packets for processing.
• Can eventually lead to packet loss.
• Queue full, nowhere for incoming packets, dropped
Queuing delay (revisited)
• What does it mean when Traffic Intensity is close to 0?
• The number of bits arriving at the router is much lower than the rate at which they
can be transmitted on the link.
• Would not expect much queuing at all
• Router can handle all of the incoming traffic
Queuing delay (revisited)
• What does it mean when Traffic Intensity is close to 1?
• The number of bits arriving at the router is equal to the number of bits
that the router can handle.
• If traffic is at all bursty (multiple packets arriving at same time) then
there are times when router is overwhelmed.
• Queuing may develop and increase over time.
Queueing delay (revisited)
• R=link bandwidth (bps)
• L=packet length (bits)
• a=average packet arrival rate
traffic intensity = La/R
• La/R ~ 0: average queuing delay small
• La/R -> 1: delays become large
• La/R > 1: more “work” arriving than can be serviced, average delay infinite!
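The three regimes above, expressed as a small Python sketch (the 0.8 cutoff for "close to 1" is an illustrative choice, not part of the model):

def traffic_intensity(L_bits, a_pkts_per_sec, R_bps):
    # I = La/R: mean bit arrival rate over the link transmission rate.
    return L_bits * a_pkts_per_sec / R_bps

I = traffic_intensity(L_bits=1000, a_pkts_per_sec=1200, R_bps=1e6)
if I > 1:
    print(I, "-> more work arriving than can be serviced; queue grows without bound")
elif I > 0.8:   # illustrative threshold for "close to 1"
    print(I, "-> delays become large, especially with bursty arrivals")
else:
    print(I, "-> little queuing expected")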
“Real” Internet delays and routes
• What does “real” Internet delay & loss look like?
• Traceroute program:
• Sends UDP packet to each router on the path from source to
destination.
• Each router on the path sends back a “special” message to source
• Source tracks the time from when it sent the UDP packet and when it
receives the message from the router.
• Actually sends three UDP packets to each router to provide three different
timings.
How Traceroute Works
• Recall that TCP and UDP are the two transport mechanisms in the
Internet
• When a packet is put onto the network by either protocol, the packet
has a header
• Provides information such as the source/destination addresses of the packet.
How Traceroute Works
• One field of the header is Time To Live (TTL) which specifies
the maximum number of routers that the packet can traverse
• TTL is decremented by each router it goes through
• When TTL reaches 0, the router sends an Internet Control Message
Protocol (ICMP) message to the source informing it that the TTL has
expired.
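A minimal sketch of the TTL-probing idea in Python (opening a raw ICMP socket typically requires administrator privileges; the destination address, port, and timeout below are arbitrary example choices, and real traceroute is considerably more careful):

import socket, time

def probe(dest_ip, ttl, port=33434, timeout=2.0):
    # Send one empty UDP datagram with the given TTL and wait for the
    # ICMP "TTL expired" message from the router at that hop.
    recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
    recv.settimeout(timeout)
    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    start = time.time()
    send.sendto(b"", (dest_ip, port))
    try:
        _, (router_ip, _) = recv.recvfrom(512)
        return router_ip, (time.time() - start) * 1000  # round-trip time in ms
    except socket.timeout:
        return None, None
    finally:
        send.close()
        recv.close()

# One probe per TTL; real traceroute sends three per hop for three timings.
for ttl in range(1, 11):
    print(ttl, *probe("8.8.8.8", ttl))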
Packet loss
• queue (aka buffer) preceding link has finite capacity
• packet arriving to full queue dropped (aka lost)
• lost packet may be retransmitted by previous node, by source end system, or not at all
(figure: packets from A and B arrive at a buffer (waiting area) before the link; one packet being transmitted; a packet arriving to a full buffer is lost)
Throughput
• throughput:
• rate (bits/time unit) at which bits transferred between sender/receiver
• instantaneous:
• rate at given point in time
• average:
• rate over longer period of time
• Consider transferring a file from host A to host B
• Assume the file is F bits and it takes T seconds to complete transfer.
What is the average throughput?
• F/T
Throughput
• throughput: rate (bits/time unit) at which bits transferred between sender/receiver
• instantaneous: rate at given point in time
• average: rate over longer period of time
• What is average throughput?
• Minimum of Rs and Rc
(figure: fluid analogy; a server sends a file of F bits (fluid) into a pipe that can carry Rs bits/sec, which feeds a pipe that can carry Rc bits/sec to the client)
Throughput (more)
• Rs < Rc: What is average end-end throughput? Rs bits/sec
• Rs > Rc: What is average end-end throughput? Rc bits/sec
(figure: two links in series, Rs bits/sec then Rc bits/sec, in each case)
• bottleneck link: the link on the end-end path that constrains end-end throughput
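The bottleneck rule and the per-connection formula from the next slide, as one small sketch (all rates and the file size are example values):

def end_to_end_throughput(*link_rates_bps):
    # The bottleneck link sets average end-end throughput:
    # the minimum rate over all links on the path.
    return min(link_rates_bps)

Rs, Rc = 2e6, 1e6
print(end_to_end_throughput(Rs, Rc))   # 1 Mb/s: Rc is the bottleneck

R = 5e6   # backbone link shared (fairly) by 10 connections
print(min(Rc, Rs, R / 10))             # 0.5 Mb/s: the shared backbone limits

F, T = 32e6, 40.0                      # F bits transferred in T seconds
print(F / T)                           # average throughput = F/T = 0.8 Mb/s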
Throughput: Internet scenario
• Per-connection end-end
throughput:
• min(Rc,Rs,R/10)
• In practice: Rc or Rs is often the bottleneck
(figure: 10 connections, each with a server access link Rs and a client access link Rc, (fairly) share a backbone bottleneck link of R bits/sec)