ESnet Update
ESnet/Internet2 Joint Techs
Madison, Wisconsin
July 17, 2007
Joe Burrescia
ESnet General Manager
Lawrence Berkeley National Laboratory
Outline
• ESnet’s Role in DOE’s Office of Science
• ESnet’s Continuing Evolutionary Dimensions
  o Capacity
  o Reach
  o Reliability
  o Guaranteed Services
DOE Office of Science and ESnet
• “The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, … providing more than 40 percent of total funding … for the Nation’s research programs in high-energy physics, nuclear physics, and fusion energy sciences.” (http://www.science.doe.gov)
• The large-scale science that is the mission of the Office of Science is dependent on networks for
  o Sharing of massive amounts of data
  o Supporting thousands of collaborators world-wide
  o Distributed data processing
  o Distributed simulation, visualization, and computational steering
  o Distributed data management
• ESnet’s mission is to enable those aspects of science that depend on networking and on certain types of large-scale collaboration
The Office of Science U.S. Community
[Map of the Office of Science U.S. community: SC-supported institutions, major user facilities, DOE specific-mission laboratories, DOE program-dedicated laboratories, DOE multiprogram laboratories, and SC program sites, including Pacific Northwest National Laboratory, Idaho National Laboratory, Ames Laboratory, Argonne National Laboratory, Fermi National Accelerator Laboratory, Lawrence Berkeley National Laboratory, Brookhaven National Laboratory, Stanford Linear Accelerator Center, Lawrence Livermore and Sandia National Laboratories, Princeton Plasma Physics Laboratory, Thomas Jefferson National Accelerator Facility, General Atomics, Oak Ridge National Laboratory, Los Alamos National Laboratory, and the National Renewable Energy Laboratory]
Footprint of SC Collaborators - Top 100 Traffic Generators
Universities and research institutes that are the top 100 ESnet users
• The top 100 data flows generate 30% of all ESnet traffic (ESnet handles about 3×10⁹ flows/mo.)
• 91 of the top 100 flows are from the Labs to other institutions (shown) (CY2005 data)
Changing Science Environment ⇒ New Demands on the Network
• Increased capacity
  o Needed to accommodate a large and steadily increasing amount of data that must traverse the network
• High-speed, highly reliable connectivity between Labs and US and international R&E institutions
  o To support the inherently collaborative, global nature of large-scale science
• High network reliability
  o For interconnecting components of distributed large-scale science
• New network services to provide bandwidth guarantees
  o For data transfer deadlines for remote data analysis, real-time interaction with instruments, coupled computational simulations, etc.
Network Utilization
[Chart: ESnet accepted traffic (TBytes/month), January 1990 to June 2006; annotations: 1.04 Petabytes/month in April 2006 and 1.20 Petabytes/month in June 2006]
ESnet is currently transporting over 1.2 Petabytes/month, and this volume is increasing exponentially.
ESnet traffic has increased by 10X every 47 months, on average, since 1990
[Log plot of ESnet monthly accepted traffic (Terabytes/month), January 1990 to June 2006; trend fit R² = 0.9898. Successive 10x milestones: 100 MBy/mo in Aug. 1990, 1 TBy/mo in Oct. 1993 (38 months later), 10 TBy/mo in Jul. 1998 (57 months), 100 TBy/mo in Nov. 2001 (40 months), and 1 PBy/mo in Apr. 2006 (53 months)]
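As a quick sanity check on the 10X-every-47-months figure, the short Python sketch below uses only the milestone dates and volumes from the plot above, averages the intervals between successive 10x points, and converts the result into an equivalent annual growth factor of roughly 1.8x per year.

    # Quick check of the growth rate using the milestones from the plot above.
    from datetime import date

    milestones = [                  # (date, accepted traffic in TBytes/month)
        (date(1990, 8, 1), 0.1),    # 100 MBy/mo
        (date(1993, 10, 1), 1.0),   # 1 TBy/mo
        (date(1998, 7, 1), 10.0),   # 10 TBy/mo
        (date(2001, 11, 1), 100.0), # 100 TBy/mo
        (date(2006, 4, 1), 1000.0), # 1 PBy/mo
    ]

    # Months between successive 10x points: 38, 57, 40, 53
    gaps = [(b[0].year - a[0].year) * 12 + (b[0].month - a[0].month)
            for a, b in zip(milestones, milestones[1:])]
    avg_months_per_10x = sum(gaps) / len(gaps)       # 47.0

    # Equivalent annual growth factor: 10 ** (12 / 47) is about 1.8x per year
    annual_factor = 10 ** (12 / avg_months_per_10x)
    print(gaps, avg_months_per_10x, round(annual_factor, 2))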
High Volume Science Traffic Continues to Grow
• Top 100 flows are increasing as a percentage of total traffic volume
• 99% to 100% of top 100 flows are science data (100% starting mid-2005)
• A small number of large-scale science users account for a significant and growing fraction of total traffic volume
[Charts: top flows in 1/05, 6/05, 1/06, and 7/06, each plotted on a 2 TB/month scale]
Who Generates ESnet Traffic?
ESnet Inter-Sector Traffic Summary for June 2006
[Diagram: traffic flows between DOE sites, R&E networks (mostly universities), commercial peers, and international peers (almost entirely R&E sites) across the ESnet peering points; labels include DOE collaborator traffic, incl. data. Percentages are of total ingress or egress traffic; traffic coming into ESnet is shown in green, traffic leaving ESnet in blue, and traffic between ESnet sites separately]
• DOE is a net supplier of data because DOE facilities are used by universities and commercial entities, as well as by DOE researchers
Traffic notes
• more than 90% of all traffic is Office of Science
• less than 10% is inter-Lab
ESnet’s Domestic, Commercial, and International Connectivity
[Map of ESnet's peering connectivity. International peers: Japan (SINet), Australia (AARNet), Canada (CA*net4), Taiwan (TANet2), Singaren, GLORIAD (Russia, China), Korea (Kreonet2), GÉANT (France, Germany, Italy, UK, etc.), Russia (BINP), CERN (USLHCnet, CERN+DOE funded), and AMPATH (S. America). Domestic exchange and peering points: PNWGPoP/PacificWave, StarTap, MREN, UltraLight, Equinix, PAIX-PA, MAE-West, MAE-E, NGIX-E, NGIX-W, UNM, and USN. Legend: high-speed international connections, commercial and R&E peering points, ESnet core hubs, and high-speed peering points with Abilene.]
ESnet provides:
• High-speed peerings with Abilene, CERN, and the international R&E networks
• Management of the full complement of global Internet routes (about 180,000 unique IPv4 routes) in order to provide DOE scientists rich connectivity to all Internet sites
ESnet’s Physical Connectivity (Summer 2006)
[Map of the ESnet IP core (Packet over SONET optical ring and hubs), MAN rings, international peers, commercial and R&E peering points (MAE-E, Equinix, PAIX-PA, etc.), high-speed peering points with Internet2/Abilene, and 42 end-user sites: Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (4), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6). Labeled sites include LBNL, NERSC, SLAC, JGI, LLNL, SNLL, LVK, PNNL, LIGO, MIT, USN, FNAL, ANL, AMES, BNL, PPPL, JLAB, ORNL, ORAU, OSTI, SRS, LANL, SNLA, GA, NREL, NOAA, ARM, KCP, IARC, Yucca Mountain, NASA Ames, OSC, GTN NNSA, and the ORAU, LBNL, and LLNL/LANL DC offices. Link legend: NLR-supplied circuits; lab-supplied circuits; international (high speed); 10 Gb/s SDN core; 10 Gb/s IP core; 2.5 Gb/s IP core; MAN rings (≥ 10 Gb/s); OC12/GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less.]
ESnet LIMAN with SDN Connections
LIMAN and BNL
• ATLAS (A Toroidal LHC ApparatuS) is one of four detectors at the Large Hadron Collider (LHC) at CERN
• BNL is the largest of the ATLAS Tier 1 centers and the only one in the U.S., and so is responsible for archiving and processing approximately 20 percent of the ATLAS raw data
• During a recent multi-week exercise, BNL was able to sustain an average transfer rate from CERN to their disk arrays of 191 MB/s (~1.5 Gb/s), compared to a target rate of 200 MB/s (see the unit check below)
  o This was in addition to “normal” BNL site traffic
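A minimal unit check for the numbers above: 191 MB/s converted to Gb/s, plus the volume that rate would imply if sustained for a full week; the one-week figure is purely illustrative arithmetic, not a number from the slide.

    # Unit check for the sustained CERN -> BNL rate quoted above.
    rate_MBps = 191                                  # MB/s, from the slide
    rate_Gbps = rate_MBps * 8 / 1000                 # bytes -> bits; about 1.53 Gb/s
    TB_per_week = rate_MBps * 7 * 24 * 3600 / 1e6    # MB -> TB; about 116 TB

    print(f"{rate_Gbps:.2f} Gb/s, ~{TB_per_week:.0f} TB if sustained for a week")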
Chicago Area MAN with SDN Connections
CHIMAN: FNAL and ANL
• Fermi National Accelerator Laboratory is the only US Tier 1 center for the Compact Muon Solenoid (CMS) experiment at the LHC
• Argonne National Laboratory will house a 5-teraflop IBM Blue Gene computer as part of the National Leadership Computing Facility
• Together with ESnet, FNAL and ANL will build the Chicago MAN (CHIMAN) to accommodate the vast amounts of data these facilities will generate and receive
  o Five 10GE circuits will go into FNAL
  o Three 10GE circuits will go into ANL
  o Ring connectivity to StarLight and to the Chicago ESnet POP
Jefferson Laboratory Connectivity
[Diagram: JLab site connectivity. Labeled elements include the ESnet core (NYC, Atlanta), an ESnet T320 router at Eastern LITE (E-LITE), the Bute St. CO, Lovitt, the DC MAX GigaPOP and MATP, Old Dominion University, ODU, VMASC, W&M, NASA, JTASC, Virginia Tech, and the JLAB site switch. Link types: 10GE, OC192, OC48.]
ESnet Target Architecture:
IP Core+Science Data Network Core+Metro Area Rings
[Diagram: an IP core and a Science Data Network (SDN) core interconnected by metropolitan area rings that loop off the backbone and connect the primary DOE Labs. IP core hubs, SDN hubs, and possible hubs are shown at Sunnyvale, LA, San Diego, Albuquerque, Denver, New York, and Washington, DC; international connections reach Asia-Pacific, Australia, Canada (CANARIE), Europe (GEANT), CERN, and South America (AMPATH). Legend: 10-50 Gb/s circuits; production IP core; Science Data Network core; metropolitan area networks; international connections.]
Reliability
[Chart: ESnet site availability grouped into “5 nines”, “4 nines”, and “3 nines” bands, with dually connected sites indicated]
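For reference, the “nines” labels in the chart translate into allowed downtime per year as follows; this is standard availability arithmetic, not ESnet-specific data.

    # Convert "N nines" availability into downtime per year.
    HOURS_PER_YEAR = 365 * 24

    for nines in (3, 4, 5):
        availability = 1 - 10 ** (-nines)              # e.g. 5 nines = 99.999%
        downtime_min = HOURS_PER_YEAR * (1 - availability) * 60
        print(f"{nines} nines: {availability:.3%} up, "
              f"~{downtime_min:.0f} minutes of downtime per year")
    # 3 nines ~ 526 min (~8.8 h), 4 nines ~ 53 min, 5 nines ~ 5 min per year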
Guaranteed Services Using Virtual Circuits
• Traffic isolation and traffic engineering
  – Provides for high-performance, non-standard transport mechanisms that cannot co-exist with commodity TCP-based transport
  – Enables the engineering of explicit paths to meet specific requirements
    • e.g., bypass congested links by using lower-bandwidth, lower-latency paths
• Guaranteed bandwidth [Quality of Service (QoS)]
  – Addresses deadline scheduling (see the sketch after this list)
    • Where fixed amounts of data have to reach sites on a fixed schedule, so that the processing does not fall so far behind that it could never catch up – very important for experiment data analysis
• Reduces cost of handling high-bandwidth data flows
  – Highly capable routers are not necessary when every packet goes to the same place
  – Lower-cost switches (roughly a factor of 5x cheaper) can do the relatively simple routing
• End-to-end connections are required between Labs and collaborator institutions
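A minimal sketch of the deadline-scheduling arithmetic referenced above: given a dataset size and a delivery deadline, it computes the bandwidth a circuit would have to guarantee. The function name and the 50 TB / 24 hour example are illustrative assumptions, not parameters of any ESnet service.

    # Bandwidth needed to move a fixed amount of data before a fixed deadline.
    def required_gbps(data_terabytes: float, hours_until_deadline: float) -> float:
        bits = data_terabytes * 8e12                 # TB -> bits (decimal units)
        seconds = hours_until_deadline * 3600
        return bits / seconds / 1e9                  # Gb/s

    # Example: 50 TB of experiment data due at a remote analysis site in 24 hours
    print(f"need ~{required_gbps(50, 24):.2f} Gb/s")   # ~4.63 Gb/s
    # If the guaranteed rate falls below this, processing at the remote site
    # falls behind the deadline regardless of how lightly loaded the path is.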
OSCARS: Guaranteed Bandwidth VC Service For SC Science
• ESnet On-demand Secured Circuits and Advanced Reservation System (OSCARS)
• To ensure compatibility, the design and implementation is done in collaboration with the other major science R&E networks and end sites
  o Internet2: Bandwidth Reservation for User Work (BRUW)
    - Development of common code base
  o GEANT: Bandwidth on Demand (GN2-JRA3), Performance and Allocated Capacity for End-users (SA3-PACE) and Advance Multi-domain Provisioning System (AMPS)
    - Extends to NRENs
  o BNL: TeraPaths - A QoS Enabled Collaborative Data Sharing Infrastructure for Petascale Computing Research
  o GA: Network Quality of Service for Magnetic Fusion Research
  o SLAC: Internet End-to-end Performance Monitoring (IEPM)
  o USN: Experimental Ultra-Scale Network Testbed for Large-Scale Science
• In its current phase this effort is being funded as a research project by the Office of Science, Mathematical, Information, and Computational Sciences (MICS) Network R&D Program
• A prototype service has been deployed as a proof of concept (a sketch of the kind of request such a service handles follows below)
  o To date more than 20 accounts have been created for beta users, collaborators, and developers
  o More than 100 reservation requests have been processed
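To make the reservation idea concrete, here is a hypothetical sketch of the kind of request such a service processes, with a simple admission check against a circuit's capacity. The field names and the check are assumptions made for this illustration; they are not the actual OSCARS interface.

    # Hypothetical bandwidth-reservation record and admission check (illustration only).
    from dataclasses import dataclass
    from datetime import datetime
    from typing import List

    @dataclass
    class Reservation:
        src: str          # e.g. "bnl.gov" (hypothetical endpoint names)
        dst: str          # e.g. "cern.ch"
        gbps: float       # requested guaranteed bandwidth
        start: datetime
        end: datetime

    def admissible(new: Reservation, existing: List[Reservation], link_gbps: float) -> bool:
        """Accept only if the link can carry all time-overlapping reservations plus the new one."""
        overlapping = [r for r in existing if r.start < new.end and new.start < r.end]
        return sum(r.gbps for r in overlapping) + new.gbps <= link_gbps

    # Example: a 2 Gb/s request on a 10 Gb/s circuit that already carries a 7 Gb/s reservation
    current = [Reservation("fnal.gov", "cern.ch", 7.0,
                           datetime(2007, 7, 17, 0, 0), datetime(2007, 7, 18, 0, 0))]
    request = Reservation("bnl.gov", "cern.ch", 2.0,
                          datetime(2007, 7, 17, 6, 0), datetime(2007, 7, 17, 18, 0))
    print(admissible(request, current, link_gbps=10.0))   # True: 7 + 2 <= 10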
OSCARS - BRUW Interdomain Interoperability Demonstration
[Diagram: a label-switched path (LSP) set up by BRUW across Abilene (Indianapolis, IN to Chicago, IL) joined to an LSP set up by OSCARS across ESnet (Chicago, IL to Sunnyvale, CA), connecting a traffic source to a sink in four numbered steps]
• The first interdomain, automatically configured, virtual circuit between ESnet and Abilene was created on April 6, 2005
A Few URLs
• ESnet Home Page
  o http://www.es.net
• National Labs and User Facilities
  o http://www.sc.doe.gov/sub/organization/organization.htm
• ESnet Availability Reports
  o http://calendar.es.net/
• OSCARS Documentation
  o http://www.es.net/oscars/index.html