Flow-level models - Stanford Computer Forum
Modern Queueing Theory,
and Internet Address Allocation
Balaji Prabhakar
Clean Slate Workshop
March 21, 2007
Overview
• I will talk about two projects
– Scalable queueing theory for Internet flows
• Current theoretical models are limited:
(i) they are packet-centric, (ii) they do not model the “feedback loop” of
TCP dynamically, and (iii) they use strong assumptions which allow a
detailed analysis but also make the models fragile (assumption-dependent).
– IPv6 address allocation
• Addresses currently allocated without taking into account a user’s
potential growth rate. This leads to address fragmentation and
complicates the address lookup problem.
– I will describe new approaches to both problems.
Overview of flow models project
• Themes
– Flows in a packet-switched Internet
• View a flow as a minimum data unit
• Algorithms for enabling the cheap recognition of flows
• Processing flows, not packets, at high speed
– Flow-level models
• Current theoretical models have two drawbacks:
(i) they are packet-centric, and (ii) they do not model the “feedback loop” of
TCP dynamically
• We study models which address the above drawbacks
• There are some interesting and serious theoretical challenges
– Scalable Network (Queueing) Theory
• Classical queueing theory makes strong assumptions which allow
detailed analysis
• However, it is too fragile (assumption-dependent) and scales poorly
• We propose a new approach
Overview of Project
• People
– PIs: Amin Saberi and Balaji Prabhakar
– Students: Mohsen Bayati, Abdul Kabbani, Yi Lu
– Visitors/collaborators: Ashvin Lakshmikantha (UIUC), Maury
Bramson (Minnesota), Devavrat Shah (MIT), John Tsitsiklis (MIT)
– Industry collaborators (flow detection, processing)
• Flavio Bonomi and Rong Pan, Cisco Systems
What is a flow-level model?
• First, what is a packet-level model?
– Minimum data unit: packet; i.e. decisions made per packet
– Classical (20th century) queueing theory is cast in this setting
• M/M/1 queue, Jackson Networks, Kelly Networks, etc
– Used extensively from the late 80s onwards to model packet radio
networks and switches
• E.g. Tassiulas and Ephremides; McKeown, Anantharam and Walrand
– Very successful for determining maximum-throughput algorithms and
obtaining delay/backlog bounds
– More recently used by Gupta and Kumar for determining how
throughput scales with network size in ad hoc wireless networks
Switch scheduling
[Figure: the switch (a 3×3 crossbar) and its queueing model, with inputs 1–3 and outputs 1–3]
• Crossbar constraints
– each input can connect to at most one output
– each output can connect to at most one input
Scheduling algorithms
[Figure: an example of queue occupancies and the matchings chosen by each algorithm]
• Maximal Matchings: practical, but not stable
• Max Size Matching: not stable (McKeown-Anantharam-Walrand 96)
• Max Wt Matching: stable (Tassiulas-Ephremides 92, McKeown et al. 96, Dai-Prabhakar 00)
(a sketch of the max-weight matching computation follows below)
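As a concrete illustration, here is a minimal sketch (not from the talk) of the max-weight matching computation; the occupancy matrix Q, its values, and the use of SciPy's assignment solver are assumptions for illustration.

# Minimal sketch of max-weight matching scheduling for an input-queued switch:
# in each time slot, choose the input/output matching (subject to the crossbar
# constraints above) whose total queue length is largest.
# Q[i][j] = occupancy of the queue at input i holding packets for output j (assumed).
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_weight_matching(Q):
    """Return the (input, output) pairs of the heaviest matching."""
    # linear_sum_assignment minimizes total cost, so negate to maximize weight.
    rows, cols = linear_sum_assignment(-np.asarray(Q))
    return list(zip(rows.tolist(), cols.tolist()))

# Illustrative 3x3 occupancies (numbers are made up).
Q = np.array([[3, 0, 5],
              [7, 2, 0],
              [1, 9, 4]])
match = max_weight_matching(Q)
print(match, "weight =", sum(int(Q[i, j]) for i, j in match))

A maximal matching, by contrast, just adds compatible input/output pairs greedily, which is cheap in hardware but can leave weight on the table, hence the stability gap noted above.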
Limitations of packet-level models
• Open loop: Fails to incorporate end-to-end behavior
– Scheduling and drop decisions on current packets affect future
arrivals
• Sees all packets as equal
– Is unable to model short/long flows, short/long RTTs, etc
• Fails to capture the flow-level bandwidth sharing that results
from TCP/AQM interactions
Flow-level models
• This is more realistic: flows of differing sizes arrive at random times and
are transferred through the network by the congestion management
algorithms and transport protocols
– Flow completion (transfer) time is the main quantity of interest: what is its
mean? its variance? how does it depend on flow sizes? on network topology?
on round trip time, etc.? (a simulation sketch follows below)
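To make the flow-level picture concrete, here is a minimal simulation sketch (mine, not from the talk) of the simplest such model: a single bottleneck link whose capacity is shared equally among the flows in progress (processor sharing), with Poisson flow arrivals and heavy-tailed flow sizes. The parameter values are assumptions for illustration.

# Flow-level model of one bottleneck link: flows arrive at random times,
# the link capacity is split equally among active flows (processor sharing),
# and we record flow completion times.
import random

def simulate_ps_link(lam=0.5, capacity=1.0, n_flows=20_000, seed=1):
    rng = random.Random(seed)
    t = 0.0
    active, start, fct = {}, {}, []        # remaining size, arrival time, completion times
    next_arrival = rng.expovariate(lam)
    arrivals = 0
    while arrivals < n_flows or active:
        rate = capacity / len(active) if active else 0.0
        t_finish = min(active.values()) / rate if active else float("inf")
        t_arrive = next_arrival - t if arrivals < n_flows else float("inf")
        dt = min(t_finish, t_arrive)
        t += dt
        for f in active:                   # every active flow drains at the shared rate
            active[f] -= rate * dt
        for f in [f for f, rem in active.items() if rem <= 1e-9]:
            fct.append(t - start[f])
            del active[f]
        if t_arrive <= t_finish and arrivals < n_flows:
            size = 0.5 * rng.paretovariate(1.5)   # heavy-tailed sizes, mean 1.5 (assumed)
            active[arrivals], start[arrivals] = size, t
            arrivals += 1
            next_arrival = t + rng.expovariate(lam)
    return sum(fct) / len(fct)

# Load = lam * mean size = 0.75; for M/G/1-PS the mean flow completion time is
# E[S]/(1 - load) = 6 regardless of the size distribution (the simulated value
# converges slowly for heavy tails).
print("mean flow completion time:", round(simulate_ps_link(), 2))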
A flow-level model of a 2 link network
[Figure: a two-link network, links L1 and L2]
• This type of model was introduced in the Internet context by
– Bonald and Massoulie
• Active research area…
– Bramson, Gromoll, Kelly, Shah, Williams, etc
Bandwidth sharing: Link capacity = 1
• Max-min fair sharing
– Route 1 flows: 1/5
– Route 2 flow: 2/5
– Route 3 flows: 1/5
• Proportional fair sharing
– Route 1 flows: 1/7
– Route 2 flow: 4/7
– Route 3 flows: 2/7
(a sketch of max-min fair sharing by progressive filling follows below)
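The max-min fair rates above can be computed by progressive filling; a minimal sketch follows (my own code, on an illustrative topology rather than the one in the figure). The proportional fair rates would instead be obtained by maximizing the sum of the logarithms of the rates subject to the same link-capacity constraints.

# Max-min fair sharing by progressive filling: grow all rates together until a
# link fills up; flows crossing a full link keep their rate, the rest keep growing.
def max_min_fair(routes, capacity):
    """routes: one set of link names per flow; capacity: link name -> capacity."""
    rates = [0.0] * len(routes)
    frozen = [False] * len(routes)
    residual = dict(capacity)
    while not all(frozen):
        # smallest equal increment that exactly fills some link carrying active flows
        candidates = []
        for link, cap in residual.items():
            n_active = sum(1 for i, route in enumerate(routes)
                           if link in route and not frozen[i])
            if n_active:
                candidates.append(cap / n_active)
        if not candidates:
            break
        inc = min(candidates)
        for i, route in enumerate(routes):
            if not frozen[i]:
                rates[i] += inc
        for link in residual:
            n_active = sum(1 for i, route in enumerate(routes)
                           if link in route and not frozen[i])
            residual[link] -= inc * n_active
        for link, cap in residual.items():   # freeze flows on any link that is now full
            if cap <= 1e-12:
                for i, route in enumerate(routes):
                    if link in route:
                        frozen[i] = True
    return rates

# Toy topology (illustrative, not the one in the figure): one flow crosses both
# links, two flows use a single link each; both links have capacity 1.
routes = [{"L1", "L2"}, {"L1"}, {"L2"}]
print(max_min_fair(routes, {"L1": 1.0, "L2": 1.0}))   # -> [0.5, 0.5, 0.5]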
More formally…
• The general picture
– Routes on the left, links on the right
– The adjacency graph captures the
network topology
– Bandwidth sharing arises from the
combination of TCP/AQM
• The queueing model
– Flows arrive according to Poisson processes on routes
– Flow sizes are arbitrarily distributed (heavy-tailed)
• Questions
– Throughput: How many flows can be processed per unit time?
– Delays: What is the flow delay due to a particular scheme?
– What effect does network topology have?
– What happens if we give more bandwidth to short flows?
– What is the effect of scaling network size and/or link capacities?
Need new theory
• Classical QT: As in the books of Kleinrock and Kelly
– Consider a network of queues, and assume
• Poisson arrivals, exponential services, random routing
– These assumptions give networks with good properties
• Reversibility
• Decomposability (independence or “product-form”)
– Hence obtain
• Throughputs, delays, backlogs, etc
• Almost all classical results are some variation of the above (a standard
single-queue example appears at the end of this slide)
• Our situation: none of the above holds!
– Main reason: a job (flow) requires service from multiple servers
(links) simultaneously; this leads to loss of independence
– This type of network was studied briefly in the earlier literature as “Whittle
Networks”
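For concreteness, a textbook single-queue instance of what those assumptions deliver (standard results, not on the slide): with arrival rate λ, service rate μ, and load ρ = λ/μ < 1, the M/M/1 queue has

\[
  P(N = k) \;=\; (1-\rho)\,\rho^{k}, \qquad
  \mathbb{E}[N] \;=\; \frac{\rho}{1-\rho}, \qquad
  \mathbb{E}[T] \;=\; \frac{1}{\mu-\lambda},
\]

and for a Jackson network with per-queue loads ρ_k the joint queue-length distribution factorizes ("product form"), which is exactly the independence referred to above:

\[
  \pi(n_1, \dots, n_K) \;=\; \prod_{k=1}^{K} (1-\rho_k)\,\rho_k^{\,n_k}.
\]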
The struggle for independence
• Our approach: don’t consider small-sized networks; let network sizes go to
infinity
– Then, amazingly, you get all the good properties, especially independence
– The key feature needed: correlation decay
– In these arguments, an “infinite-sized network” is one which has about
100 or so nodes; i.e., infinity is not too far away
• We have applied this approach to the so-called “supermarket model” used to
study load balancing (a simulation sketch follows below)
[Figure: the supermarket model, in which an allocator dispatches each arriving job to one of many parallel queues]
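A minimal simulation sketch of the supermarket model (my own, for illustration): the allocator samples d queues uniformly at random and sends the arriving job to the least-loaded of them. Here load is measured as remaining work rather than queue length, and all parameter values are assumptions.

# Supermarket model: n parallel unit-rate servers; each arriving job is sent to
# the least-loaded of d randomly sampled queues. d = 1 is random assignment;
# d = 2 already shortens queues dramatically ("power of two choices").
import random

def supermarket(n_queues=100, d=2, load=0.9, n_jobs=200_000, seed=0):
    rng = random.Random(seed)
    work = [0.0] * n_queues                        # remaining work at each queue
    waits = []
    for _ in range(n_jobs):
        dt = rng.expovariate(load * n_queues)      # Poisson arrivals, total rate = load * n
        work = [max(0.0, w - dt) for w in work]    # each server drains at unit rate
        choices = rng.sample(range(n_queues), d)   # the allocator samples d queues
        best = min(choices, key=lambda q: work[q])
        waits.append(work[best])                   # delay before this job starts service
        work[best] += rng.expovariate(1.0)         # exponential job size, mean 1
    return sum(waits) / len(waits)

for d in (1, 2):
    print("d =", d, " mean wait =", round(supermarket(d=d), 2))

With d = 1 the mean wait matches the M/M/1 value at load 0.9 (about 9 service times); with d = 2 it is several times smaller, which is the kind of sharp prediction that becomes tractable in the large-system limit.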
Further work
• The main idea is: consider the thermodynamic limit of large networks
• The problem simplifies and becomes meaningful because correlations are
local and die in the limit
• Now, what type of a large network should we consider?
1. Internet: power-law graphs
2. Ad hoc networks: geometric random graphs
3. Other random graphs, e.g. the Bernoulli random graph
(sample generators for these graph families are sketched below)
• In this context we can answer the questions we previously asked about how
bandwidth sharing algorithms, topology, flow size, RTT, etc, affect performance
• There is a huge simulation component to ensure fidelity of the models,
understand accuracy with scaling, etc
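A small sketch (illustrative only) of the three graph families above, using standard networkx generators as stand-ins; the sizes and parameters are arbitrary.

# Candidate "large network" topologies for the thermodynamic-limit analysis.
import networkx as nx

n = 1000
graphs = {
    "power law (Internet-like)": nx.barabasi_albert_graph(n, m=3),
    "geometric (ad hoc wireless)": nx.random_geometric_graph(n, radius=0.06),
    "Bernoulli (Erdos-Renyi)": nx.erdos_renyi_graph(n, p=6.0 / n),
}
for name, g in graphs.items():
    print(f"{name}: {g.number_of_nodes()} nodes, {g.number_of_edges()} edges")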
IPv6 Address Allocation
Address Allocation
• Address allocation sets the foundation of the network hierarchy
– Therefore, it is important to have the right address allocation scheme
• The goal of this work is to answer the question:
– How should addresses be allocated for future networks with a clean
slate design?
• Collaborators: Mei Wang, Ashish Goel
– This is essentially the work of Mei Wang
– She has been working with (at) Cisco and with CNNIC to apply her
growth-based allocation scheme; supporting software is being developed
– See her poster this afternoon for more details
How are Addresses Allocated?
• Allocation hierarchy (top down):
– IANA: Internet Assigned Numbers Authority
– RIR: Regional Internet Registries (ARIN, RIPE, APNIC, AfriNIC, LACNIC)
– LIR/ISP: Local Internet Registries (the ISPs)
– EU: End Users
Definition – Address Collision
[Figure: an address collision, in which allocation A grows until it runs into the neighboring allocation B]
Definitions – Address Fragmentation
– Not fragmented: A A B B (each customer’s blocks are contiguous)
– Fragmented: A B A B (a customer’s blocks are interleaved with others’)
Asia Pacific (APNIC) IPv4 Allocation Data
[Chart: distribution of customer requests in the APNIC data, i.e. number of requests (0–50) per customer ID (1–21)]
Statistics:
• Number of customers is 21
• Address space is 2^25
• Number of requests is 197
Current Practice
• Addresses allocated as prefixes; two methods
– Sequential: Allocate in any (first) open range of suitable size
– Bisection: the same idea, but more systematic about where blocks are
placed; a candidate for IPv6 allocation (a toy sketch of both follows below)
[Figure: blocks placed in the address space under (a) sequential and (b) bisection allocation]
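A toy sketch of the two placement rules above (my own reading of the figure; the real IPv6 bisection procedure may differ in details such as where the first block is placed).

# Place requested blocks in a one-dimensional address space [0, space).
def sequential(requests, space):
    """First-fit packing: each block goes right after the previous one."""
    pos, placements = 0, []
    for size in requests:
        placements.append((pos, pos + size))       # [start, end) of this block
        pos += size
    return placements

def bisection(requests, space):
    """Place each block so that it bisects the largest remaining free gap,
    leaving headroom on both sides for the neighbors to grow."""
    placements = []
    for size in requests:
        blocks = sorted(placements)
        bounds = [0] + [x for blk in blocks for x in blk] + [space]
        gaps = [(bounds[i], bounds[i + 1]) for i in range(0, len(bounds), 2)]
        lo, hi = max(gaps, key=lambda g: g[1] - g[0])
        start = min((lo + hi) // 2, hi - size)     # midpoint of the gap, clamped to fit
        placements.append((start, start + size))
    return placements

reqs = [4, 4, 4, 4]
print("sequential:", sequential(reqs, 64))
print("bisection: ", bisection(reqs, 64))

Under sequential placement a customer that outgrows its block immediately collides with its neighbor; bisection leaves headroom on both sides, and GAP (next slide) goes further by sizing that headroom according to each customer's growth rate.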
GAP:
Growth-based Address Partitioning
Take growth-rates into account, maximize time to collision.
[Figure: existing allocations i = 1, …, n (allocation A_i, growth rate R_i) and a new
request n+1 (allocation A_{n+1}, growth rate R_{n+1}); L_i and S_i denote the free space
available to allocation i and to the new allocation, respectively]
Choose the placement of the new allocation to solve
max min { t(L_i, R_i), t(S_i, R_{n+1}) },  i = 1, …, n
where t(L, R) is the time for a block growing at rate R to fill free space L
(a toy sketch of this objective follows below).
“A Growth-based Address Allocation Scheme for IPv6”, Mei Wang, Networking, 2005.
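A toy sketch of the max-min objective above (an assumed model, not Mei Wang's implementation): each allocation grows upward from its start address at its growth rate and collides when it reaches the next allocation's start address; we scan candidate positions for the new request and keep the one whose earliest collision happens latest.

def time_to_first_collision(allocs, space):
    """allocs: (start, current_size, growth_rate) tuples, sorted by start address."""
    next_starts = [a[0] for a in allocs[1:]] + [space]
    times = []
    for (start, size, rate), nxt in zip(allocs, next_starts):
        room = nxt - (start + size)                 # free space ahead of this allocation
        times.append(room / rate if rate > 0 else float("inf"))
    return min(times)

def gap_place(allocs, new_size, new_rate, space, step=1):
    """Scan candidate start addresses; return (best start, resulting min collision time)."""
    best = (None, -1.0)
    for start in range(0, space - new_size + 1, step):
        trial = sorted(allocs + [(start, new_size, new_rate)])
        # reject placements that overlap an existing allocation
        if any(s + sz > nxt for (s, sz, _), (nxt, _, _) in zip(trial, trial[1:])):
            continue
        t = time_to_first_collision(trial, space)
        if t > best[1]:
            best = (start, t)
    return best

# Illustrative numbers only: two existing allocations in a 64-address space.
existing = [(0, 8, 2.0), (32, 4, 1.0)]              # (start, size, growth rate)
print(gap_place(existing, new_size=4, new_rate=4.0, space=64))

GAP itself allocates prefixes (aligned power-of-two blocks); the brute-force scan over arbitrary start addresses here is only to make the objective concrete.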
Performance Results
• Theoretical proofs:
– Sequential allocation is optimal when block sizes are static
– GAP is optimal among online allocation schemes
• Simulations
• Experiments with real data
– Using IPv4 allocation data: what if addresses had been allocated with GAP?
– Growth rates were derived empirically; no additional input was needed.
– Larger gains are possible if customers provide growth-rate estimates.
Comparison:
Asia Pacific (APNIC) Allocation Data
Comparison:
China (CNNIC) Allocation Data
Software for Address Allocation
• Being jointly developed with Cisco and CNNIC
• Provides visualization of address allocation
• Simulates real allocations with dynamic requests
• A platform for analysis of different algorithms
Current Status & Future Work
• Increasing interest from parties in various layers:
– Regional registries, country registries, ISPs, large corporations
• Economic incentive model for accurate estimation of
growth-rates
• Non-prefix based address allocation (clean slate)
– Prefix assignment is equivalent to assigning subtrees of the address tree
– The simplest non-prefix assignment would assign a pair of
subtrees, or equivalently, an edge on the hypercube
• Provider Independent (PI) address allocation