Metro & CPE Flow Router

Transcript Metro & CPE Flow Router

Internet Creation and Future
Dr. Lawrence Roberts
CEO, Founder, Anagran
Packet Switching History
[Figure: Packet Switching History – timeline of people, topics, and publications, 1959–1973]
– Paul Baran (Rand): redundancy, routing, economics – Rand Report, IEEE paper
– Donald Davies (NPL): ARPANET program input – IFIP paper, ACM paper
– Len Kleinrock (MIT): topology, queuing – RLE Report, book "Communication Nets"
– Roberts & Marill (MIT): protocol; one-node TX-2–SDC experiment; two-node experiment – FJCC paper
– Davies & Scantlebury (NPL), Larry Roberts (ARPA)
– J.C.R. Licklider – Intergalactic Network
– ARPANET: 3 nodes, then 13, 20, and 38 – SJCC paper, IEEE papers, ACM paper; "INTERNET"
Packet Switching – 1969 Cost Crossover
[Figure: cost curves, 1960–1980, showing the 1969 cost crossover]
From: "Data by the Packet," IEEE Spectrum, Lawrence Roberts, Vol. 11, No. 2, February 1974, pp. 46-51.
Original Internet Design
It was designed for data
– File transfer and email were the main activities
Constrained by the high cost of memory
– Only packet destination examined
– No source checks
– No QoS
– No security
– Best effort only
– Voice considered
– Video not feasible
ARPANET 1971
Not much change since then
The Beginning of the Internet
ARPANET became the Internet
• 1965 – MIT – two-computer experiment
  – Roberts designs the packet structure
  – Len Kleinrock – queuing theory
• 1967 – Roberts moves to ARPA
  – Designs the ARPANET
• 1968 – RFP for the packet switch – won by BBN
• 1969 – Student team designs the protocol
  – Crocker, Cerf, and others – NCP
• 1969 – First 4 nodes installed:
  – UCLA, SRI, UCSB, U. Utah
• 1971 – ICCC show – proved it to the world
  – Network at 21 nodes and productive
  – Email created – soon the main traffic
• 1972 – Network spawns sub-networks; satellite network to UK added
  – Aloha packet radio added – pre-WiFi; Ethernet developed & connected
  – Bob Kahn joins me at ARPA – takes on the network program
• 1973 – Roberts leaves – starts Telenet, the first commercial packet carrier in the world
• 1974 – TCP design paper published by Kahn & Cerf
• 1975 – Vint Cerf joins ARPA – continues work on the new protocol, TCP/IP
• 1983 – TCP/IP installed on the ARPANET & required for DoD
• 1993 – Internet opened to commercial use
Internet Early History
[Figure: hosts and traffic (bps/10) on a log scale from 1 to 100,000, 1969–1987; Roberts, Kahn, and Cerf terms at ARPA marked]
Milestones: ICCC demo; NCP; FTP; EMAIL; Ethernet; TCP/IP design; "Internet" name first used (RFC 675); SATNET – satellite to UK; Aloha packet radio; PacketRadioNET spans the US; X.25 – virtual circuit standard; DNS; TCP/IP cutover
ARPANET Logical Structure
Internet Growth
ARPANET July 1977
NAE Draper Award Laureates
Feb. 20, 2001, for creating the Internet
Roberts
Kahn
Kleinrock
Cerf
Major Internet Contributions
1959-1964 – Kleinrock develops packet network theory, proving that message segments (packets) could be safely queued with modest buffers at network nodes; later proves the theory by measurement
1965 – Roberts tests a two-node packet network and proves the telephone network inadequate for data; a packet network is needed
1967-1973 – Roberts at ARPA designs the ARPANET and contracts out its parts (routers, transmission lines, protocol, application software), growing the network to 38 nodes and 50 computers
1973-1985 – Kahn at ARPA manages the ARPANET, converting it to TCP/IP and standardizing DoD (and then the world) on TCP/IP
1975-1983 – Cerf at ARPA designs TCP/IP and helps grow the network
1990-1993 – Berners-Lee designs the hypertext browser (WWW)
Internet Traffic: Growth = 1 Trillion in 39 years
[Figure: Internet Traffic Growth – world total Gbps on a log scale (10^-8 to 10^5), 1970–2010, doubling per year; milestones: ARPANET, TCP/IP, NSFNET, WWW, commercial traffic]
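The headline growth factor can be sanity-checked: traffic that doubles every year compounds to roughly a trillion-fold over about four decades (plain arithmetic, no assumptions beyond the doubling rate):

```python
# Doubling every year for ~40 years yields the ~10^12 growth factor
# cited for 1970-2010.
growth_39yr = 2 ** 39       # ~5.5e11, the "1 trillion in 39 years" headline
growth_40yr = 2 ** 40       # ~1.1e12
```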
TCP - Network Stability
Has Allowed the Network to Scale
TCP and network equipment keep a balance
This balance keeps the network stable
– TCP speeds up until a packet is lost, then slows down
– The network drops packets if overloaded
Result:
– TCP grows to fill the network
– The network then loses random packets
– All traffic is impacted by packet losses and random rate changes
– However, the system is basically stable
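The speed-up/slow-down balance described above is TCP's additive-increase/multiplicative-decrease behavior. A minimal sketch (a toy model, not real TCP; all constants are illustrative):

```python
# Minimal AIMD sketch: each flow adds 1 unit/RTT until the shared link
# overflows; a random flow then sees a drop and halves its rate -- the
# "speed up until a packet is lost, then slow down" behavior above.
import random

def simulate(capacity=100.0, n_flows=4, rtts=200, seed=1):
    rng = random.Random(seed)
    rates = [1.0] * n_flows
    for _ in range(rtts):
        for i in range(n_flows):
            rates[i] += 1.0                  # additive increase
        while sum(rates) > capacity:         # link overloaded:
            victim = rng.randrange(n_flows)  # a random flow loses a packet
            rates[victim] /= 2.0             # multiplicative decrease
    return rates

rates = simulate()
# Individual flows oscillate, but the total stays near link capacity:
# the system is "basically stable" even though losses are random.
```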
A New Alternative - Flow Management in the Network
TCP or the network needs to change
Network equipment has always dropped random packets
– IPTV cannot be controlled – it just gets banged around
Flow management provides a new control alternative
– Control the rate of each TCP flow individually
– Measure the rate of each group of flows, including IPTV
– Smoothly adjust the TCP rates to fill the available capacity
Replacing random drops with rate control:
– Network stability is maintained
– All traffic moves smoothly without random loss
– Video flows cleanly with no loss or delay jitter
Changing Use of Internet
Major changes in network use:
Voice – totally moving to packets
– Low loss, low delay required
Video – totally moving to packets
– Low loss, low delay jitter required
Emergency Services
– No preference priority today
Security
– Cyberwar is now a real threat
TCP unfairness – multiple flows (P2P, clouds, …)
– Congests the network – 5% of users take 80% of capacity
Changing Structure of Internet
Was: low-speed edge, high-speed core
– No way to overload the core
– Unlimited use was OK
Now: broadband edge, core limited economically
– Edge speed is for burst speed, not continuous use
– Unlimited use is not a reasonable option
– Edge traffic must be controlled
[Diagram: edge – core – edge, before and after]
Internet Traffic
Grown 10^12 since 1970
[Figure: World Internet Traffic – petabytes per month on a log scale (10^-9 to 10^5), 1970–2010; TCP, WWW, P2P traffic, and normal traffic marked]
In 1999 P2P applications discovered that using multiple flows could give them more capacity, and their traffic moved up to 70% of the network capacity
Where will the Internet be in the next decade?
                                2009      2019
% World Population On-Line      30%       99%
Total Traffic PB/month          14,600    300,000
Traffic per User GB/month       6         40
GB/mo/user, Developed areas     9         250
GB/mo/user, Less Dev. areas     0.3       3

People in less developed areas will have more capacity than is available in developed areas today!
Users in developed areas could see 5-10 hours of video per day (HD or SD)
Requires a 60-times increase in capacity (a Moore's-Law increase)
Network Change Required
Fairness
– Multi-flow applications (P2P) overload access networks
Network Security
– Need user authentication and source checking
Emergency Services
– Need secure preference priorities
Cost & Power
– Growth constrained to Moore's Law & developed areas
Quality & Speed
– Video & voice require lower jitter and loss, and consistent speed
– TCP stalls slow interactive applications like the web
Technology Improvement – Flow Management
Historically, congestion has been managed by queues and discards
– Creates delay, jitter, and random losses
– TCP flow rates vary widely and often stall
– UDP can overload; when it does, all flows are hurt
Alternatively, flows can be rate controlled to fill the link
– Keep a table of all flows, measure output, assign a rate to each flow
– Rate control TCP flows to avoid congestion while maintaining utilization
– Limit total fixed-rate flow utilization by rejecting excessive requests
– Assign rate priorities to flows to ensure fairness and quality
Flow management requires less power, size, & cost
– There are 14 times as many packets as flows
– Flows have predictable rates and user significance
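The "keep a table of all flows, measure output, assign rates" step can be sketched as follows (illustrative names and weights, not Anagran's actual algorithm):

```python
# Hedged sketch of the flow-table idea: keep per-flow state, measure
# offered rates, and give each flow a weighted share of the link so the
# total fills a target utilization instead of overflowing a queue.

TARGET_UTILIZATION = 0.9   # fill the output link to ~90% (from the slides)

def assign_rates(flows, link_capacity):
    """flows: {flow_id: (offered_rate, priority_weight)} -> {flow_id: rate}"""
    budget = TARGET_UTILIZATION * link_capacity
    total_weight = sum(w for _, w in flows.values())
    assigned = {}
    for fid, (offered, weight) in flows.items():
        fair_share = budget * weight / total_weight
        # A flow never gets more than it offers; unclaimed capacity could
        # be redistributed in a further pass (omitted for brevity).
        assigned[fid] = min(offered, fair_share)
    return assigned

rates = assign_rates({"web": (5.0, 1.0), "video": (4.0, 2.0), "p2p": (50.0, 1.0)},
                     link_capacity=10.0)
# The high-priority video flow gets all it asks for; the 50-unit P2P
# offer is clipped to its weighted share.
```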
Flow Management Architecture
[Diagram: input → switch & processors → output, with a flow state memory assigning rate, QoS, output port, & class; a discard path; the rate of each flow controlled at input]
Traffic measured on both the output port and in up to 4,000 classes
Flows measured and policed at input
Unique TCP rate control – fair and precise rate per flow
Rates controlled based on the utilization of both the output port and the class
All traffic controlled to fill the output at 90%+
No output queue – minimal delay
Voice and video protected to ensure quality
Flow Rate Control with Intelligent Flow Delivery (IFD)
[Diagram: flow rate oscillating around the Fair Rate; one packet discarded each time the rate exceeds it]
Instead of random discards in an output queue:
– Anagran controls each flow's rate at the input
– IFD never discards if the flow stays below the Fair Rate
– If the flow rate exceeds the threshold, one packet is discarded
– The rate is then watched until the next cycle, and the process repeats
– This assures that the flow averages the Fair Rate
– The flow then has low rate variance (σ = 0.33) and does not stall
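The single-packet-discard rule can be sketched in Python (a simplification; the real IFD thresholds and cycle timing are not specified here):

```python
# Hedged sketch of the IFD rule: drop exactly one packet when a flow's
# measured rate crosses its Fair Rate, then hold off until the flow
# backs below the Fair Rate -- unlike a queue that drops bursts at random.

def ifd_decide(measured_rate, fair_rate, in_holdoff):
    """Return (drop_one_packet, in_holdoff_after)."""
    if in_holdoff or measured_rate <= fair_rate:
        return False, in_holdoff        # below Fair Rate: never discard
    return True, True                   # above: discard a single packet

# One flow oscillating around a Fair Rate of 10:
drops = 0
holdoff = False
for rate in [8, 9, 12, 11, 9, 13]:
    drop, holdoff = ifd_decide(rate, 10, holdoff)
    drops += int(drop)
    if rate <= 10:
        holdoff = False                 # flow backed off: cycle complete
# Only the two upward crossings cost a packet each; the flow averages
# near the Fair Rate without ever stalling.
```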
IFD Eliminates TCP Stalls, Equalizes Rates
Normal network:
– Rates often stall
– Peak utilization is high
– Response time is slow
– The jumble hurts video & voice
With flow management:
– No stalled flows
– Lower peak utilization
– 3 times faster response times
– Video and voice protected
The above graphs are actual data captures
Impact of Flow Management at Network Edge
Web access three times faster
TCP stalls eliminated – all requests complete
Voice quality protected – no packet loss, low delay
Video quality protected – no freeze frames, no artifacts
Critical apps can be assigned rate priority
When traffic exceeds peak trunk capacity:
– Eliminates the many impacts of congestion
– Smooth slowdown of less critical traffic
– Voice and video quality maintained
Fairness - In the beginning
A flow was a file transfer or a voice call
The voice network had 1 flow per user
– All flows were equal (except for 911)
Early networking was mainly terminal to computer
– Again we had 1 flow (each way) per user
– No long-term analysis was done on fairness
It was obvious that under congestion, users are equal; thus equal capacity per flow was the default design
Fairness - Where is the Internet now?
The Internet still gives equal capacity per flow under congestion
Computers, not users, generate the flows today
– Any process can use any number of flows
– P2P takes advantage of this, using 10-1000 flows
[Diagram: P2P with many flows vs. FTP with one flow]
Congestion typically occurs at the Internet edge
– Here, many users share a common capacity pool
– TCP generally expands until congestion occurs
– This forces equal capacity per flow
– The number of flows then determines each user's capacity
The result is therefore unfair to users who paid the same
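The per-flow fairness problem is easy to quantify (illustrative numbers):

```python
# Worked example of the claim above: under equal capacity per flow, a
# user's share is proportional to how many flows they open.

def user_shares(flows_per_user, capacity=100.0):
    total_flows = sum(flows_per_user.values())
    return {u: capacity * n / total_flows for u, n in flows_per_user.items()}

# 10 users: nine with 1 flow each, one P2P user with 91 flows.
users = {f"user{i}": 1 for i in range(9)}
users["p2p_user"] = 91
shares = user_shares(users)
# The single P2P user gets 91% of the link; each normal user gets 1%,
# even though all ten paid for the same service.
```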
Typical Home Network Access
[Diagram: 1,000 users, each with a 10 Mbps peak rate, sharing a 100 Mbps trunk to the Internet – 100 Kbps average per user]
Internet service providers provision for average use
Average use today is about 100 Kbps per subscriber
Without P2P, all users would usually get the peak TCP rate
With >0.5% P2P users, average users see much lower rates
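The diagram's numbers work out as follows (plain arithmetic from the slide's figures):

```python
# Provisioning arithmetic behind the slide: the ISP provisions for
# average use, not peak.
users = 1_000
peak_mbps_per_user = 10.0
avg_kbps_per_user = 100.0

trunk_mbps = users * avg_kbps_per_user / 1_000          # provision for average
oversubscription = users * peak_mbps_per_user / trunk_mbps

# 1,000 users averaging 100 Kbps fit a 100 Mbps trunk -- a 100:1
# oversubscription that only works while use stays bursty; continuous
# multi-flow traffic breaks the assumption.
```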
Internet Traffic Recently
Since 2004, total traffic has increased 60% per year
– P2P has increased 70% per year – consuming most of the capacity growth
– Normal traffic has increased only 45% per year – a significant slowdown from the past
Multi-flow traffic (mainly P2P) slows other traffic, so users can't do as much
This may account for the normal traffic growth being slow
[Figure: World Internet Traffic – Impact of Multi-Flow Traffic; PB/month, 0-12,000, 2000–2009; multi-flow traffic vs. normal traffic]
Deep Packet Inspection (DPI) Fails to Stop P2P
DPI is currently the main defense – but it recently has problems with encrypted P2P
– Studies show it detects <75% of P2P – reducing P2P users from 5% to 1.3%
– As P2P adds encryption, DPI detection already misses 25%, and encryption is growing
– The remaining P2P simply adds more flows, again filling capacity to congestion
[Figure: Upstream capacity usage, asymmetric DSL ISP – Mbps (0-25) used by P2P users, average users, and wasted, under No Regulation, DPI Filtering, and Equalization]
Result – even ½% P2P users still overload the upstream channel
– This slows the average users' acknowledgements, which limits their downstream usage
User equalization based on flow rate management solves the problem
A New Fairness Rule
Inequity in TCP/IP – currently equal capacity per flow
– P2P has taken advantage of this, using 10-1000 flows
– This gives the 5% P2P users 80-95% of the capacity
– P2P does not know when to stop until it sees congestion
Instead, we should give equal capacity for equal pay
– This is simply a revised equality rule – similar users get equal capacity
– This tracks with what we pay
– If the network assures that all similar users get equal service, file sharing will find the best equitable method – perhaps slack time and local hosts
This is a major worldwide problem
– P2P is not bad; it can be quite effective
– But without revised fairness, multi-flow applications can take capacity away from other users, dramatically slowing their network use
– It then becomes an arms race – who can use the most flows
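The revised rule can be sketched as a per-user weighted allocation (an illustration of the principle, not a specific product's algorithm):

```python
# Sketch of "equal capacity for equal pay": divide the congested link
# among *users* in proportion to their service tier, then let each
# user's flows split that user's share. Tier weights are illustrative.

def per_user_allocation(user_tiers, user_flows, capacity=100.0):
    total_weight = sum(user_tiers.values())
    rates = {}
    for user, weight in user_tiers.items():
        user_share = capacity * weight / total_weight
        per_flow = user_share / max(user_flows[user], 1)
        rates[user] = (user_share, per_flow)
    return rates

tiers = {"alice": 1.0, "bob": 1.0}          # same tier, same pay
flows = {"alice": 1, "bob": 100}            # bob runs a 100-flow P2P app
rates = per_user_allocation(tiers, flows)
# Both users get 50 units; bob's 100 flows each get 0.5 -- opening more
# flows no longer buys more capacity, removing the arms-race incentive.
```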
P2P Control with Flow Management
Normal & P2P Traffic – Before & After Anagran Control
[Figure: traffic % (0-100%) measured from a university wireless area, 5:48-6:17 AM, showing P2P and normal traffic with control off, then on]
These are actual measurements showing the effect of controlling P2P traffic as a class
In this case, all P2P was limited to a fixed capacity, then equalized for fairness
P2P was reduced from 67% to 1.6%
Normal traffic then increased by 4:1
Why is it Important to Change the Fairness Rule?
P2P is attractive and growing rapidly
It cannot determine its fair share by itself
The network must provide the fair boundary
Without fairness, normal users will slow down and stall
Multi-flow applications will be misled on economics
– Today most P2P users believe their peak capacity is theirs
– They do not realize they may be slowing down other users
– The economics of file transfer are thus badly misjudged
– This leads to globally uneconomic product decisions
User equality will lead to economic use of communications
Network Security
Today the network is open and unchecked
All security is based on "flawless" computer systems
This needs to change – the network must help
Finding bots is best done by watching network traffic
Knowing who is trying to connect can help stop penetration
Allocating high-priority capacity requires authentication
– Emergency services, critical services, paid services
High-value services need authentication, not passwords
– On-line banking, credit transactions, etc.
Authentication Security Program
A new DARPA project will allow users to be authenticated
The network can ensure the source IP address is not faked
The network can assign user-based priorities
– Emergency services need priority
– Corporations have priority applications
The recipient can know who is trying to connect
– Filter out requests from unauthenticated sources
– Control application access to specific users
Today security is based on fixing all computer holes
Network assistance greatly reduces the threat
DARPA Secure Authentication Program
[Diagram: sender → network controllers (NC) → receiver, with an AAA server; NC = Network Controller]
– SH = Secure Hash (identifies the user when hashed with the Key)
– User log-in: the NC identifies itself to AAA and gets the SH & Key
– Each flow start: the SH is sent to the NC and checked by the NC using the Key
– First packet: the NC checks the user via the SH with AAA, getting the Key & priority
– Each flow start: the user can also be checked with AAA using the SH
• The network finds the user's priority & QoS info from the AAA server
• The receiver can check the user's ID if allowed & reject the flow if desired
• Intermediate NCs can also check the user's priority & QoS
• Result: the user's ID securely controls network access & priority
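The SH/Key exchange above resembles a keyed-hash check. The sketch below uses HMAC-SHA-256 as a stand-in (an assumption; the slide does not specify the hash construction):

```python
# Hedged sketch of the flow-start check: the sender tags each flow
# start with a keyed hash over its identity, and the network controller
# verifies it with the shared key (obtained from AAA) before admitting
# the flow -- so a faked source identity is rejected at the network.
import hmac, hashlib

def flow_start_tag(user_id: str, key: bytes) -> bytes:
    # SH: identifies the user when hashed with the Key
    return hmac.new(key, user_id.encode(), hashlib.sha256).digest()

def nc_check(user_id: str, tag: bytes, key: bytes) -> bool:
    # NC recomputes the tag and compares in constant time
    return hmac.compare_digest(tag, flow_start_tag(user_id, key))

key = b"shared-key-from-AAA"                # assumed delivered via AAA
tag = flow_start_tag("alice", key)
assert nc_check("alice", tag, key)          # genuine flow start accepted
assert not nc_check("mallory", tag, key)    # faked source identity rejected
```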
The New Network Edge – Flow Management
Flow management at the ISP edge can:
– Ensure fairness – equal capacity for equal pay
– Eliminate overload problems (TCP stalls and video artifacts)
– Ensure voice works over wireless & WiFi
– Add authentication security to the network
– Support rate-controlled service levels per subscriber
All these benefits at much lower cost & power vs. DPI
40 Gbps capacity in 1 RU with Anagran
Summary
Today's IP networks need improvement
Fairness is poor – 5% of users take 80% of capacity
– The cause is the old rule of equal capacity per flow
– This needs to change to equal capacity for equal pay
Response time and QoS suffer from random discards
– Web access suffers from unequal flow rates and TCP stalls
– Video suffers from packet loss and TCP stalls
– Voice suffers from packet loss and excessive delay
Security could be improved if the network did authentication
– Avoid unknown users penetrating computers
– Permit priority for emergency workers and critical apps
Flow management allows these improvements at lower cost