10 Gbps - Internet2


Abilene Update Session
Steve Corbató
Director, Backbone Network Infrastructure
HENP Working Group
Washington DC
8 May 2002
Agenda
Network status & events of interest
10-Gbps λ upgrade plans
Optical networking on the regional &
national scale
Abilene – May, 2002
IP-over-SONET backbone (OC-48c, 2.5 Gbps)
53 direct connections
• 4 OC-48c connections
• 1 Gigabit Ethernet trial
• 23 will connect via at least OC-12c (622 Mbps) by 1Q02
• Number of ATM connections decreasing
215 participants – research universities & labs
• All 50 states, District of Columbia, & Puerto Rico
• 15 regional GigaPoPs support ~70% of participants
Expanded access
• 50 sponsored participants
– New: Smithsonian Institution, Arecibo Radio Telescope
• 23 state education networks (SEGPs)
Abilene international connectivity
Transoceanic R&E bandwidths growing!
• GÉANT – 5 Gbps between Europe and New York City now
Key international exchange points facilitated
by Internet2 membership and the U.S.
scientific community
• STARTAP & STAR LIGHT – Chicago (GigE)
• AMPATH – Miami (OC-3c → OC-12c)
• Pacific Wave – Seattle (GigE)
• MAN LAN - New York City (GigE/10GigE EP soon)
• CA*net3/4: Seattle, Chicago, and New York
• CUDI: CENIC and Univ. of Texas at El Paso
International transit service
• Collaboration with CA*NET3 and STARTAP
[Map: Abilene International Peering, 9 March 2002. STAR TAP/Star Light (Chicago): APAN/TransPAC, CA*net3, CERN, CERnet, FASTnet, GEMnet, IUCC, KOREN/KREONET2, NORDUnet, RNP2, SURFnet, SingAREN, TAnet2. Pacific Wave (Seattle): AARNET, APAN/TransPAC, CA*net3, TANET2. NYCM (New York): BELNET, CA*net3, GEANT*, HEANET, JANET, NORDUnet. SNVA (Sunnyvale): GEMnet, SINET, SingAREN, WIDE. LOSA (Los Angeles): UNINET. San Diego (CALREN2) and El Paso (UACJ-UT El Paso): CUDI. AMPATH (Miami, OC3-OC12): REUNA, RNP2, RETINA, ANSP, (CRNet).
* GEANT peers: ARNES, CARNET, CESnet, DFN, GRNET, RENATER, RESTENA, SWITCH, HUNGARNET, GARR-B, POL-34, RCST, RedIRIS]
Packetized raw High Definition
Television (HDTV)
Raw HDTV/IP – single UDP flow of 1.5 Gbps
• Project of USC/ISIe, Tektronix, & U. of Wash (DARPA)
• 6 Jan 2002: Seattle to Washington DC via Abilene
– Single flow utilized 60% of backbone bandwidth
• 18 hours: no packets lost, 15 resequencing episodes
• End-to-end network performance (includes P/NW & MAX
GigaPoPs)
– Loss: <0.8 ppb (90% c.l.)
– Reordering: 5 ppb
• Transcontinental 1-Gbps TCP requires loss of
– <30 ppb (1.5 KB frames)
– <1 ppm (9 KB jumbo)
(see the loss-bound sketch below)
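To show where those loss bounds come from, here is a minimal sketch based on the Mathis et al. steady-state TCP model, rate ≈ (MSS/RTT)·sqrt(3/2)/sqrt(p); the 80 ms round-trip time is an assumed transcontinental value, not a number from the talk:

# Rough loss-rate bound for a transcontinental TCP path, from the Mathis model:
#   rate ~ (MSS/RTT) * sqrt(3/2) / sqrt(p)   =>   p_max = 1.5 * (MSS / (RTT * rate))^2
# The 80 ms RTT is an assumption for a coast-to-coast path, not a slide figure.

def max_loss_for_rate(target_bps, mss_bytes, rtt_s):
    """Largest random-loss probability that still allows target_bps (Mathis model)."""
    mss_bits = mss_bytes * 8
    return 1.5 * (mss_bits / (rtt_s * target_bps)) ** 2

RTT = 0.080      # assumed transcontinental round-trip time (s)
TARGET = 1e9     # 1 Gbps goal

for label, mss in [("1.5 KB frames", 1460), ("9 KB jumbo", 8960)]:
    p = max_loss_for_rate(TARGET, mss, RTT)
    print(f"{label}: loss must stay below ~{p * 1e9:.0f} per billion packets")
# Prints roughly 30 per billion for standard frames and ~1 per million for
# jumbo frames, in line with the bounds quoted above.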
End-to-End Performance:
‘High bandwidth is not enough’
Bulk TCP flows (transfers > 10 Mbytes)
• Current median flow rate over Abilene: 1.9 Mbps
– 95th percentile: 7.0 Mbps (see the window-size sketch below)
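Much of that gap is host tuning rather than backbone capacity: an un-tuned TCP connection is limited to window/RTT regardless of link speed. A back-of-the-envelope sketch (the 64 KB default window and 70 ms RTT are illustrative assumptions, not Abilene measurements):

# Window-limited TCP throughput: rate <= window / RTT, independent of link speed.

def max_rate_mbps(window_bytes, rtt_s):
    return window_bytes * 8 / rtt_s / 1e6

DEFAULT_WINDOW = 64 * 1024   # typical un-tuned OS default of the era (assumption)
RTT = 0.070                  # assumed coast-to-coast round-trip time (s)

print(f"window-limited ceiling: {max_rate_mbps(DEFAULT_WINDOW, RTT):.1f} Mbps")
print(f"window needed for 1 Gbps: {1e9 * RTT / 8 / 1024:.0f} KB")
# A 64 KB window caps a 70 ms path at ~7.5 Mbps, close to the observed
# 95th-percentile flow rate; sustaining 1 Gbps needs a window of several MB.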
Netflow information sources
Weekly summaries
• http://netflow.internet2.edu/weekly/
Raw data manipulation
• http://www.itec.oar.net/abilene-netflow/
Jumbo frames are supported here
Default Abilene MTU: 4.5 kB
We now also support 9 kB MTUs on a per-connector basis (see the probe sketch below)
Motivation: support for high-performance computing
Interested connectors?
• Contact the NOC
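For connectors who want to check whether a 9 kB path really works end to end, here is a minimal probe sketch. It assumes a Linux host and a cooperating receiver (the host name is a placeholder), checks only the locally cached path MTU, and is not an official NOC tool:

import socket

# Linux socket option values, defined by hand because the socket module does
# not export them all (assumption: Linux kernel constants).
IP_MTU_DISCOVER = 10   # control per-socket path-MTU discovery
IP_PMTUDISC_DO  = 2    # always set the Don't Fragment bit
IP_MTU          = 14   # read the cached path MTU of a connected socket

def probe(host, payload_bytes):
    """Send one DF-marked UDP datagram and report whether it fit the path MTU."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
    s.connect((host, 33434))           # arbitrary UDP port, traceroute-style
    try:
        s.send(b"\x00" * payload_bytes)
        print(f"{payload_bytes}-byte payload accepted locally (no fragmentation needed)")
    except OSError as err:
        mtu = s.getsockopt(socket.IPPROTO_IP, IP_MTU)
        print(f"{payload_bytes}-byte payload rejected ({err}); cached path MTU is {mtu}")
    finally:
        s.close()

# ~8950 bytes of payload plus UDP/IP headers approaches a 9000-byte MTU;
# 1472 bytes fits the standard 1500-byte Ethernet MTU.
probe("receiver.example.edu", 8950)    # placeholder receiver host
probe("receiver.example.edu", 1472)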
Future of Abilene
Original UCAID/Qwest agreement
amended on October 1, 2001
Extension of MoU for another 5 years –
until October, 2006
• Originally expired March, 2003
Upgrade of Abilene backbone to optical transport capability - λ's (unprotected)
• x4 increase in the core backbone bandwidth
– OC-48c SONET (2.5 Gbps) to 10-Gbps DWDM
Key aspects of next generation
Abilene backbone - I
Native IPv6
• Motivations
– Resolving IPv4 address exhaustion issues
– Preservation of the original End-to-End Architecture model
• p2p collaboration tools, reversing the trend toward CO-centrism
– International collaboration
– Router and host OS capabilities
• Run natively - concurrent with IPv4 (see the dual-stack sketch below)
• Replicate multicast deployment strategy
• Close collaboration with Internet2 IPv6 Working Group on
regional and campus v6 rollout
– Addressing architecture
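From an application's point of view, running natively and concurrent with IPv4 simply means both address families are usable on the same host. A small illustrative sketch (the host name in the example is a placeholder; this is not an Abilene-specific tool):

import socket

def connect_prefer_v6(host, port):
    """Resolve both families, try IPv6 addresses first, fall back to IPv4."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # getaddrinfo ordering follows system policy; put v6 candidates first here.
    infos.sort(key=lambda ai: 0 if ai[0] == socket.AF_INET6 else 1)
    last_err = None
    for family, socktype, proto, _canon, addr in infos:
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            print("connected over", "IPv6" if family == socket.AF_INET6 else "IPv4", "to", addr[0])
            return s
        except OSError as err:
            last_err = err
    raise last_err

# Example (placeholder target): connect_prefer_v6("www.internet2.edu", 80)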
Key aspects of next generation
Abilene backbone - II
Network resiliency
• Abilene λ's will not be ring protected like SONET
• Increasing use of videoconferencing/VoIP imposes tighter restoration requirements (<100 ms)
• Options:
– MPLS/TE fast reroute (initially)
– IP-based IGP fast convergence (preferable)
(see the outage-impact sketch below)
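To make the <100 ms target concrete, here is a back-of-the-envelope look at how many media frames are discarded while the backbone reroutes; the 20 ms packetization (50 packets/s per call) is a generic codec assumption, not an Abilene measurement:

# Frames lost during a reroute = restoration time x packet rate of the flow.

PACKETS_PER_SECOND = 50   # one voice frame every 20 ms (assumed codec behavior)

def frames_lost(restoration_ms):
    return restoration_ms / 1000 * PACKETS_PER_SECOND

for label, ms in [("SONET ring protection", 50),
                  ("next-gen Abilene target", 100),
                  ("slow IGP convergence", 10_000)]:
    print(f"{label:>24}: ~{frames_lost(ms):.0f} voice frames lost per call")
# A sub-100 ms reroute loses a handful of frames, which codecs can conceal;
# multi-second convergence loses hundreds and produces an audible gap.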
Key aspects of next generation
Abilene backbone - III
New & differentiated measurement
capabilities
• Significant factor in NGA rack design
– 4 dedicated servers at each node
– Additional provisions for future servers
– Local data collection to capture data at times of network instability
• Enhance active probing
– Now: latency & jitter, loss, reachability (Surveyor)
– Regular TCP/UDP throughput tests – ~1 Gbps (see the throughput sketch below)
• Separate server for E2E performance beacon
• Enhance passive measurement
– Now: SNMP (NOC) & traffic matrix/type (Netflow)
– Routing (BGP & IGP)
– Optical splitter taps on backbone links at select location(s)
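The regular throughput tests could look roughly like the sketch below: one measurement server listens, another pushes a fixed amount of data and reports the achieved rate. This stands in for whatever tool is actually deployed (iperf-style); the port and 100 MB transfer size are arbitrary illustrative choices:

import socket, sys, time

PORT = 5201                      # arbitrary measurement port (assumption)
NBYTES = 100 * 1024 * 1024       # 100 MB memory-to-memory transfer

def serve():
    """Accept one connection and drain it, counting bytes received."""
    with socket.create_server(("", PORT)) as srv:
        conn, peer = srv.accept()
        with conn:
            total = 0
            while chunk := conn.recv(1 << 16):
                total += len(chunk)
            print(f"received {total} bytes from {peer[0]}")

def push(host):
    """Send NBYTES to the server and report the achieved TCP rate."""
    buf = b"\x00" * (1 << 16)
    with socket.create_connection((host, PORT)) as s:
        start, sent = time.time(), 0
        while sent < NBYTES:
            s.sendall(buf)
            sent += len(buf)
        rate_mbps = sent * 8 / (time.time() - start) / 1e6
    print(f"pushed {sent} bytes at ~{rate_mbps:.0f} Mbps")

if __name__ == "__main__":
    serve() if sys.argv[1:] == ["server"] else push(sys.argv[1])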
Abilene Observatories
Currently a program outline for better support of
computer science research
• Influenced by discussions with NRLC members
1) Improved & accessible data archive
• Need coherent database design
• Unify & correlate 4 separate data types
– SNMP, active measurement data, routing, Netflow (see the schema sketch below)
2) Provision for direct network measurement and
experimentation
• Resources reserved for two additional servers
– Power (DC), rack space (2RU), router uplink ports (GigE)
• Need process for identifying meritorious projects
• Need ‘rules of engagement’ (technical & policy)
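As one sketch of what a coherent database design might look like, the snippet below keeps one table per data type, all keyed by timestamp and router so the four sources can be joined over a time window. Table and column names are invented for illustration; this is not the Observatory schema:

import sqlite3

ddl = """
CREATE TABLE snmp_counters   (ts INTEGER, router TEXT, ifname TEXT,
                              in_octets INTEGER, out_octets INTEGER);
CREATE TABLE active_probes   (ts INTEGER, src TEXT, dst TEXT,
                              rtt_ms REAL, loss_pct REAL);
CREATE TABLE routing_events  (ts INTEGER, router TEXT, protocol TEXT, event TEXT);
CREATE TABLE netflow_summary (ts INTEGER, router TEXT, src_as INTEGER,
                              dst_as INTEGER, bytes INTEGER);
"""

db = sqlite3.connect(":memory:")
db.executescript(ddl)

# Example correlation: interface counters joined with nearby active-probe loss
# for one (hypothetical) router node over one hour.
query = """
SELECT s.ts, s.ifname, s.out_octets, a.loss_pct
FROM snmp_counters s
JOIN active_probes a ON a.ts BETWEEN s.ts - 150 AND s.ts + 150
WHERE s.router = ? AND s.ts BETWEEN ? AND ?
"""
rows = db.execute(query, ("ipls", 0, 3600)).fetchall()
print(f"{len(rows)} correlated rows (none yet; no data loaded)")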
Next generation router selection
Extensive router specification and test plan
developed
• Team effort: UCAID staff, NOC, NC and Ohio ITECs
– Chris Heerman, Matt Davy, Lee Graham, John Moore, Paul
Schopis, Matt Zekauskas
• Discussions with four router vendors
Tests focused on next gen advanced services
• High performance TCP/IP throughput
• High performance multicast
• IPv6 functionality & throughput
• Classification for QoS and measurement
3 routers tested & comm. ISPs referenced
→ New Juniper T640 platform selected
Two leading national initiatives in
the U.S.
Next Generation Abilene
• Advanced Internet backbone
– connects entire campus networks of the research universities
• 10 Gbps nationally
TeraGrid
• Virtual machine room for distributed computing (Grid)
• Connecting 4 HPC centers initially
– Illinois: NCSA, Argonne
– California: SDSC, Caltech
• 4x10 Gbps: Chicago – Los Angeles
Ongoing collaboration between both projects
Deployment timing
Ongoing – Backbone router procurement
Detailed deployment planning
July – Rack assembly (Indiana Univ.)
Aug/Sep – New rack deployment at all 11 nodes
Fall – First wave of λ's commissioned
Fall meeting demonstration events
• iGRID 2002 (Amsterdam) – late Sep.
• Internet2 Fall Member Meeting (Los Angeles) – late Oct.
• SC2002 (Baltimore) – mid Nov.
Remaining λ's commissioned in 2003
Please let us know now of 2002 upgrade plans
Abilene cost recovery model
Connection (per connection)        Annual fee
OC-3 (155 Mbps)*                   $110,000
OC-12 (622 Mbps)                   $270,000
Gigabit Ethernet (1 Gbps)**        $325,000
OC-48 (2.5 Gbps)                   $430,000
OC-192/10 GigE** (10 Gbps)         $490,000
Participation (per university)     $20,000
Abilene program changes
10-Gbps (OC-192c POS) connections
• λ backhaul available wherever needed & possible
– Only required now for 1 of 4 OC-48c connections
• 3-year connectivity commitment required
Gigabit and 10-Gigabit Ethernet
• Available when connector has dark fiber access into
Abilene router node
• Backhaul not available
ATM connection & peer support
• TAC recommended ending ATM support by fall 2003
• Two major ATM-based GigaPoPs have migrated
• 2 of 3 NGIXes still are ATM-based
– NGIX-Chicago @ STAR LIGHT is now GigE
• Urging phased migration for connectors & peers
Conclusions – Abilene future
Backbone upgrade project underway
• Partnership with Qwest extended thru 2006
• Juniper T640 routers selected for backbone
• 10-Gbps backbone λ deployment starts this fall
Advanced service foci
• Native, high-performance IPv6
• Enhanced, differentiated measurement
• Network resiliency
Incremental, non-disruptive transition
Complementary to and collaborative with NSF’s
TeraGrid
For more information
Web:
www.internet2.edu/abilene
E-mail: [email protected]
Again, please let us know now of your 2002 connection upgrade plans
Optical network project
differentiation
Scale                        Distance scale (km)   Examples                               Equipment
Metro                        < 60                  UW(SEA), USC/ISI(LA)                   Dark fiber & end terminals
State/Regional               < 500                 I-WIRE (IL), CENIC ONI, I-LIGHT (IN)   Add OO amplifiers
Extended Regional/National   > 500 (ULH: <2500)    PLR, TeraGrid, Abilene                 Add OEO regenerators & O&M $'s
Regional optical networking
Regional (state-based) optical networking
projects are critical for next generation
architecture:
• Three-level hierarchy:
– National backbones, GigaPoPs, Campuses
• Leading examples of state-based initiatives
– CENIC ONI (California), I-WIRE (Illinois), I-LIGHT (Indiana), NC
Close collaboration with the Quilt Project
• Regional Optical Networking effort
U.S. carrier DWDM access is not yet nearly as widespread as SONET access
• 30-60 cities for DWDM vs. ~120 cities for SONET (ca. 1998)
Pacific Light Rail
(Source: Greg Scott, CENIC/UCSC)
National optical networking
options
1 – Provision incremental wavelengths
• Obtain 10-Gbps λ's as with SONET
• Exploit smaller incremental cost of additional λ's
– 1st λ cost is ~10x that of subsequent λ's
2 – Build dim fiber facility
• Partner with a facilities-based provider
– Acquire 2 fiber pairs on a national scale
– Outsource operation of transmission equipment
• Needs lower-cost optical transmission equipment
– Find ELH/ULH optical kit partner
The classic ‘buy vs. build’ decision in Information Technology
• Option 1 selected for TeraGrid and Next Gen Abilene (see the cost sketch below)
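A toy version of that buy-vs.-build comparison is sketched below. Every cost figure is a hypothetical placeholder except the ~10x ratio between the first and subsequent wavelengths quoted above; the point is the shape of the comparison, not the numbers:

# Relative-cost comparison of 'provision wavelengths' vs. 'build dim fiber'.
# All numbers here are made-up placeholders except the ~10x first-lambda ratio.

FIRST_LAMBDA = 10.0       # first provisioned 10-Gbps wavelength (relative units)
EXTRA_LAMBDA = 1.0        # each additional wavelength (~1/10th of the first)
FIBER_BUILD = 30.0        # hypothetical fixed cost of acquiring & lighting fiber
FIBER_PER_LAMBDA = 0.3    # hypothetical per-wavelength cost on owned fiber

def provision_cost(n):    # option 1: buy wavelengths from a carrier
    return FIRST_LAMBDA + EXTRA_LAMBDA * max(n - 1, 0)

def build_cost(n):        # option 2: build and operate a dim-fiber facility
    return FIBER_BUILD + FIBER_PER_LAMBDA * n

for n in (1, 4, 16, 32, 64):
    print(f"{n:>3} lambdas: provision {provision_cost(n):5.1f}  vs  build {build_cost(n):5.1f}")
# With these placeholder numbers, owning fiber only wins once the wavelength
# count grows well beyond what a single backbone needs at the start, which is
# consistent with option 1 being chosen for TeraGrid and Next Generation Abilene.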
National Light Rail
Project objectives
• form lightweight, but highly coordinated, collaboration to
provision, acquire, and/or operate optical networking assets
and services
• leverage collective buying power and experience of the
consortium (ANL, CENIC, P/NW, UCAID) from the metropolitan
to the national scales
• serve as optical infrastructure substrate for e-science projects
proposing to a diverse array of funding agencies
• facilitate advanced network measurement and academic
research
Initial collaboration
• TeraGrid (Argonne), UCAID, CENIC and P/NW GigaPoPs
• UCSD, UIC
National Light Rail – an evolving
view
Key Functions
• λ brokerage service using established relationships with multiple facilities-based carriers
• Ongoing evaluation of potential acquisition and
operation of national fiber optical network facility in
partnership with the corporate sector
www.internet2.edu