Transcript: ESnet
Joint Techs, Feb. 2005
William E. Johnston, ESnet Dept. Head and Senior Scientist
R. P. Singh, Federal Project Manager
Michael S. Collins, Stan Kluz,
Joseph Burrescia, and James V. Gagliardi, ESnet Leads
Gizella Kapus, Resource Manager
and the ESnet Team
Lawrence Berkeley National Laboratory
ESnet's Mission
• Support the large-scale, collaborative science of DOE's Office of Science
• Provide high-reliability networking to support the operational traffic of the DOE Labs
  o Provide network services to other DOE facilities
• Provide leading-edge network and Grid services to support collaboration
• ESnet is a component of the Office of Science infrastructure critical to the success of its research programs (program funded through the Office of Advanced Scientific Computing Research / MICS; managed and operated by ESnet staff at LBNL)
ESnet Physical Network – mid 2005
High-Speed Interconnection of DOE Facilities and Major Science Collaborators
[Network map: the ESnet IP core (packet-over-SONET optical ring and hubs) and the ESnet Science Data Network (SDN) core interconnect 42 end user sites – Office of Science sponsored (22), NNSA sponsored (12), joint sponsored (3), laboratory sponsored (6), and other sponsored (NSF LIGO, NOAA) – including BNL, FNAL, ORNL, ANL, PPPL, JLab, LBNL, NERSC, SLAC, LLNL, SNLL, LANL, SNLA, PNNL, Ames, GA, SRS, OSTI, ORAU, and others. Peering points and high-speed peering points (e.g., MAE-E, PAIX-PA, Equinix, StarTap, MREN) and international links reach Australia, CA*net4, CERN (DOE link), GEANT (Germany, France, Italy, UK, etc.), GLORIAD, Japan (SInet), Japan–Russia (BINP), Kreonet2, the Netherlands, Singaren, and Taiwan (TANet2, ASCC). Link legend: 10 Gb/s SDN core, 10 Gb/s IP core, 2.5 Gb/s IP core, MAN rings (> 10 Gb/s), OC12 ATM (622 Mb/s), OC12 / GigEthernet, OC3 (155 Mb/s), 45 Mb/s and less, and high-speed international links.]
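The legend mixes SONET OC-n designations with raw bit rates. As a quick aid (not from the slide itself), OC-n line rates follow n × 51.84 Mb/s, which reproduces the OC3 and OC12 figures above; a minimal sketch:

```python
# Check the SONET OC-n line rates used in the legend (OC-1 = 51.84 Mb/s).
OC1_MBPS = 51.84

def oc_rate_mbps(n: int) -> float:
    """Return the OC-n line rate in Mb/s."""
    return n * OC1_MBPS

for n in (3, 12, 48, 192):
    print(f"OC-{n}: {oc_rate_mbps(n):.2f} Mb/s")
# OC-3   -> 155.52 Mb/s   (legend: "OC3 (155 Mb/s)")
# OC-12  -> 622.08 Mb/s   (legend: "OC12 ATM (622 Mb/s)")
# OC-48  -> 2488.32 Mb/s  (the ~2.5 Gb/s IP core segments)
# OC-192 -> 9953.28 Mb/s  (the ~10 Gb/s core links)
```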
ESnet Logical Network: Peering and Routing Infrastructure
[Diagram: ESnet peering points (connections to other networks) distributed across the hubs – e.g., the SEA hub, PNW-GPOP, PAIX-W, CalREN2/CENIC, SDSC, MAX GPOP, MAE-E, EQX-ASH, the NYC hubs, the ATL hub, and the distributed 6TAP – with peer counts ranging from 1-2 peers at individual sites to 28-36 peers at the large exchange points. Peers include Abilene, university and commercial networks, and the international R&E networks: CA*net4, GEANT (Germany, France, Italy, UK, etc.), SInet (Japan) / KEK, Japan–Russia (BINP), CERN, GLORIAD, Kreonet2, MREN, StarTap, TANet2, Taiwan (ASCC), Singaren, Australia, France, and the Netherlands.]
ESnet supports collaboration by providing full Internet access:
• manages the full complement of Global Internet routes (about 150,000 IPv4 routes from 180 peers) at 40 general/commercial peering points (a small illustrative sketch follows)
• high-speed peerings with Abilene and the international R&E networks
This is a lot of work and is very visible, but it provides full Internet access for DOE.
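As a rough illustration of what carrying the full route table involves, the sketch below counts unique IPv4 prefixes and next-hop peers from a plain-text routing-table dump. The file name and "prefix next-hop AS-path" line format are assumptions for illustration only, not ESnet's actual tooling.

```python
# Hypothetical sketch: summarize a text dump of a BGP table
# (one "prefix next_hop as_path" entry per line; format assumed).
from collections import defaultdict

def summarize_rib(path: str):
    prefixes = set()
    routes_per_peer = defaultdict(int)
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue
            prefix, next_hop = parts[0], parts[1]
            prefixes.add(prefix)
            routes_per_peer[next_hop] += 1
    return len(prefixes), routes_per_peer

if __name__ == "__main__":
    n_prefixes, per_peer = summarize_rib("rib-dump.txt")  # hypothetical input file
    print(f"{n_prefixes} unique IPv4 prefixes learned from {len(per_peer)} peers")
    # The scale described on the slide: ~150,000 IPv4 routes from ~180 peers.
```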
Drivers for the Evolution of ESnet
August 2002 workshop organized by the Office of Science
  Mary Anne Scott (Chair), Dave Bader, Steve Eckstrand, Marvin Frazier, Dale Koelling, Vicky White
  Workshop panel chairs: Ray Bair, Deb Agarwal, Bill Johnston, Mike Wilde, Rick Stevens, Ian Foster, Dennis Gannon, Linda Winkler, Brian Tierney, Sandy Merola, and Charlie Catlett
• The network and middleware requirements to support DOE science were developed by the OSC science community representing major DOE science disciplines:
  o Climate simulation
  o Spallation Neutron Source facility
  o Macromolecular Crystallography
  o High Energy Physics experiments
  o Magnetic Fusion Energy Sciences
  o Chemical Sciences
  o Bioinformatics
  o (Nuclear Physics)
• The network is essential for:
  o long term (final stage) data analysis
  o "control loop" data analysis (influence an experiment in progress)
  o distributed, multidisciplinary simulation
Available at www.es.net/#research
Evolving Quantitative Science Requirements for Networks

Science Areas                 | Today End2End Throughput          | 5 years End2End Throughput          | 5-10 Years End2End Throughput        | Remarks
High Energy Physics           | 0.5 Gb/s                          | 100 Gb/s                            | 1000 Gb/s                            | high bulk throughput
Climate (Data & Computation)  | 0.5 Gb/s                          | 160-200 Gb/s                        | N x 1000 Gb/s                        | high bulk throughput
SNS NanoScience               | Not yet started                   | 1 Gb/s                              | 1000 Gb/s + QoS for control channel  | remote control and time critical throughput
Fusion Energy                 | 0.066 Gb/s (500 MB/s burst)       | 0.198 Gb/s (500 MB / 20 sec. burst) | N x 1000 Gb/s                        | time critical throughput
Astrophysics                  | 0.013 Gb/s (1 TBy/week)           | N*N multicast                       | 1000 Gb/s                            | computational steering and collaborations
Genomics Data & Computation   | 0.091 Gb/s (1 TBy/day)            | 100s of users                       | 1000 Gb/s + QoS for control channel  | high throughput and steering
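The "Today" column follows directly from the stated data volumes. A small sketch (assuming 1 TBy = 10^12 bytes and simple averaging over the stated interval; minor rounding differences from the table are expected):

```python
# Convert the stated data volumes to average rates in Gb/s.
def gbps(bytes_total: float, seconds: float) -> float:
    return bytes_total * 8 / seconds / 1e9

DAY, WEEK = 86_400, 7 * 86_400

print(f"Astrophysics, 1 TBy/week : {gbps(1e12, WEEK):.3f} Gb/s")   # table lists 0.013
print(f"Genomics,     1 TBy/day  : {gbps(1e12, DAY):.3f} Gb/s")    # table lists 0.091
print(f"Fusion burst, 500 MB/20 s: {gbps(500e6, 20):.3f} Gb/s")    # table lists 0.198
```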
ESnet is Currently Transporting About 350 TBytes/Month
[Chart: ESnet monthly accepted traffic (TBytes/month), Jan. 1990 – Dec. 2004. Annual growth in the past five years has been about 2.0x per year.]
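A 2.0x annual growth rate compounds quickly. The sketch below simply projects the ~350 TBytes/month figure forward under that rate; it is an extrapolation of the stated trend for illustration, not an ESnet forecast.

```python
# Compound the ~2.0x/year growth in monthly accepted traffic.
# Starting point (Dec. 2004): ~350 TBytes/month.
current_tb_per_month = 350.0
growth_per_year = 2.0

for years in range(6):
    projected = current_tb_per_month * growth_per_year ** years
    print(f"+{years} yr: {projected:8.0f} TBytes/month "
          f"(~{projected / 1000:.1f} PBytes/month)")
```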
A Small Number of Science Users Account for a Significant Fraction of all ESnet Traffic
[Chart: Top Flows – ESnet host-to-host, 2 months, 30-day averaged, in TBytes/month. Total ESnet traffic (Dec. 2004) = 330 TBy; the top 100 host-to-host flows = 99 TBy. Flows are categorized as DOE Lab–International R&E, Lab–U.S. R&E, domestic, Lab–Lab, and international. The largest flows include SLAC (US)–IN2P3 (FR), Fermilab (US)–IN2P3 (FR), SLAC (US)–RAL (UK), SLAC (US)–INFN CNAF (IT), Fermilab (US)–WestGrid (CA), FNAL–Karlsruhe (DE), FNAL–MIT, FNAL–SDSC, FNAL–Johns Hopkins, BNL (US)–IN2P3 (FR), BNL–LLNL, NERSC–LBNL, NERSC–NASA Ames, LBNL–U. Wisc., LLNL–NCAR, and LIGO–Caltech.]
Note that this data does not include intra-Lab traffic. ESnet ends at the Lab border routers, so science traffic on the Lab LANs is invisible to ESnet.
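Host-to-host totals behind a plot like this come from aggregating flow records by endpoint pair over the measurement window. The sketch below shows one way to do that; the record format and host names are assumptions for illustration, not the ESnet flow-collection pipeline.

```python
# Hypothetical sketch: aggregate (src, dst, bytes) flow records into
# host-pair totals and report the largest flows.
from collections import Counter

def top_host_pairs(records, n=100):
    """records: iterable of (src_host, dst_host, bytes_transferred) tuples."""
    totals = Counter()
    for src, dst, nbytes in records:
        totals[(src, dst)] += nbytes
    return totals.most_common(n)

# Made-up sample records for illustration only.
sample = [("slac-host", "in2p3-host", 4.0e12),
          ("fnal-host", "karlsruhe-host", 2.5e12),
          ("nersc-host", "lbnl-host", 1.2e12)]

for (src, dst), nbytes in top_host_pairs(sample, n=3):
    print(f"{src} -> {dst}: {nbytes / 1e12:.1f} TBy")
```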
ESnet Traffic
• Since BaBar (the SLAC high energy physics experiment) production started, the top 100 ESnet flows have consistently accounted for 30% - 50% of ESnet's monthly total traffic
• As LHC (the CERN high energy physics accelerator) data starts to move, this will increase a lot (200-2000 times)
  o Both LHC tier 1 centers (the primary U.S. experiment data centers) are at DOE Labs – Fermilab and Brookhaven
  o U.S. tier 2 (experiment data analysis) centers will be at universities – when they start pulling data from the tier 1 centers, the traffic distribution will change a lot
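Using the figures from the preceding slide, the 30% lower bound checks out directly (99 TBy of 330 TBy). The LHC lines below are purely illustrative: they multiply the stated 200-2000x range against the top-100 aggregate as an assumed baseline, not a measured or forecast figure.

```python
# Top-100 flow share, using the Dec. 2004 numbers from the preceding slide.
top100_tby, total_tby = 99.0, 330.0
print(f"Top 100 flows: {top100_tby / total_tby:.0%} of monthly traffic")  # ~30%

# Illustrative only: scale an assumed HEP-dominated baseline (the top-100
# aggregate) by the slide's 200-2000x range.
for factor in (200, 2000):
    print(f"x{factor}: ~{top100_tby * factor / 1000:.0f} PBytes/month")
```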
Monitoring DOE Lab ↔ University Connectivity
• Current monitor infrastructure (red & green) and target infrastructure
• Uniform distribution around ESnet and around Abilene
[Map: monitors at DOE Labs (e.g., LBNL, FNAL, BNL, ORNL) and at universities (e.g., OSU, SDSC), initial site monitors, the ESnet and Abilene network hubs (SEA, SNV, DEN, CHI, KC, IND, NYC, DC, ATL, ALB, ELP, HOU, LA, SDG), the high-speed cross connects between ESnet and Internet2/Abilene, and international connections to AsiaPac, CERN, Europe, and Japan.]
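A hedged sketch of what a minimal site monitor in such a mesh might do: periodically measure round-trip latency to partner monitors and log the result. The host names are placeholders and the method is deliberately simple; the actual ESnet/Abilene measurement infrastructure uses dedicated hosts and tools, not this script.

```python
# Minimal latency monitor between Lab and university measurement hosts.
# Targets are hypothetical; real deployments use dedicated measurement tools.
import subprocess
import time
from typing import Optional

TARGETS = ["monitor.example-lab.gov", "monitor.example-univ.edu"]  # placeholders

def ping_rtt_ms(host: str) -> Optional[float]:
    """Return the average RTT in ms from three pings, or None on failure."""
    try:
        out = subprocess.run(["ping", "-c", "3", host],
                             capture_output=True, text=True, timeout=30)
        for line in out.stdout.splitlines():
            if "min/avg/max" in line:
                return float(line.split("=")[1].split("/")[1])
    except (subprocess.TimeoutExpired, OSError):
        pass
    return None

while True:
    for host in TARGETS:
        print(time.strftime("%Y-%m-%d %H:%M:%S"), host, ping_rtt_ms(host))
    time.sleep(300)  # one measurement cycle every five minutes
```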
ESnet Evolution
• With the current architecture ESnet cannot address
  o the increasing reliability requirements
    - Labs and science experiments are insisting on network redundancy
  o the long-term bandwidth needs
    - LHC will need dedicated 10/20/30/40 Gb/s into and out of FNAL and BNL
    - Specific planning drivers include HEP, climate, SNS, ITER, and SNAP, et al.
• The current core ring cannot handle the anticipated large science data flows at affordable cost
• The current point-to-point tail circuits are neither reliable nor scalable to the required bandwidth
[Diagram: the existing ESnet core ring – New York (AOA), Washington, DC (DC), Atlanta (ATL), El Paso (ELP), and Sunnyvale (SNV) – with the DOE sites attached to the core.]
ESnet Strategy – A New Architecture
• Goals derived from science needs:
  o Fully redundant connectivity for every site (a toy redundancy-check sketch follows this list)
  o High-speed access to the core for every site (at least 20 Gb/s)
  o 100 Gbps national bandwidth by 2008
• Three part strategy:
  1) Metropolitan Area Network (MAN) rings to provide dual site connectivity and much higher site-to-core bandwidth
  2) A Science Data Network core for
     - large, high-speed science data flows
     - multiply connecting MAN rings for protection against hub failure
     - a platform for provisioned, guaranteed bandwidth circuits
     - an alternate path for production IP traffic
  3) A high-reliability IP core (e.g. the current ESnet core) to address Lab operational requirements
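One way to make the "fully redundant connectivity for every site" goal concrete is to model a planned topology as a graph and verify that every site has at least two edge-disjoint paths to a core hub. The sketch below does this with networkx over a made-up toy topology; the node names are illustrative, not the actual ESnet design.

```python
# Sketch: check site redundancy on a toy topology via edge connectivity.
# Nodes and edges are illustrative only, not the actual ESnet design.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("hub-SNV", "hub-CHI"), ("hub-CHI", "hub-NYC"), ("hub-NYC", "hub-SNV"),  # core ring
    ("site-A", "hub-SNV"), ("site-A", "hub-CHI"),   # dual-homed site (MAN ring style)
    ("site-B", "hub-NYC"),                          # single-homed site (tail circuit)
])

HUBS = ("hub-SNV", "hub-CHI", "hub-NYC")
for site in ("site-A", "site-B"):
    # Edge connectivity = number of edge-disjoint paths to that hub.
    k = max(nx.edge_connectivity(G, site, hub) for hub in HUBS)
    status = "redundant" if k >= 2 else "NOT redundant"
    print(f"{site}: {k} edge-disjoint path(s) to the best-connected hub -> {status}")
```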
ESnet MAN Architecture
[Diagram: a metropolitan ring of switches managing multiple lambdas, carrying 2-4 x 10 Gbps channels and connecting Lab sites (site gateway router, site LAN, site equipment, and monitor) to two ESnet core routers (T320s) – one on the ESnet production IP core and one on the ESnet SDN core – which also carry the R&E and international peerings. The MAN provides ESnet production IP service, ESnet-managed λ / circuit services (tunneled through the IP backbone where needed), and ESnet management and monitoring.]
New ESnet Strategy: Science Data Network + IP Core + MANs
[Map: the ESnet IP core plus a second core, the ESnet Science Data Network (SDN), joined by core loops and metropolitan area rings. Existing IP core hubs, SDN hubs, new hubs, and possible new hubs appear at Seattle (SEA), Sunnyvale (SNV), Albuquerque (ALB), El Paso (ELP), Atlanta (ATL), Washington, DC (DC), and New York (AOA), along with the primary DOE Labs and international connections to CERN, GEANT (Europe), and AsiaPacific.]
Tactics for Meeting Science Requirements – 2007/2008
• 10 Gbps enterprise IP traffic
• 40-60 Gbps circuit-based transport
[Map: the ESnet IP core (>10 Gbps) and the ESnet Science Data Network (2nd core, 30-50 Gbps, on National Lambda Rail), with metropolitan area rings at the major DOE Office of Science sites, ESnet hubs (SEA, SNV, DEN, CHI, NYC, DC, ALB, SDG, ELP, ATL), high-speed cross connects with Internet2/Abilene, and major international links to CERN, Europe, Japan, Australia, and AsiaPac. Legend: production IP ESnet core (10 Gb/s), high-impact science core (30 Gb/s, future phases 40 Gb/s), 2.5 Gb/s segments, Lab-supplied links, and major international links (10 Gb/s).]
ESnet Services Supporting Science Collaboration
• In addition to the high-bandwidth network connectivity for DOE Labs, ESnet provides several other services critical for collaboration
• That is, ESnet provides several "science services" – services that support the practice of science:
  o Access to collaborators ("peering")
  o Federated trust
    - identity authentication
    - PKI certificates (an illustrative certificate-request sketch follows this list)
    - crypto tokens
  o Human collaboration – video, audio, and data conferencing
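As a purely illustrative example of the certificate side of federated trust, the sketch below generates a key pair and an X.509 certificate signing request of the kind a user or service would submit to a CA such as DOEGrids. It uses the Python cryptography package and placeholder subject names; it is not ESnet's enrollment tooling.

```python
# Illustrative sketch: generate an RSA key and a certificate signing request
# (CSR) like one submitted to a CA. Subject values are placeholders; this is
# not the actual DOEGrids enrollment process.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Lab"),  # placeholder
        x509.NameAttribute(NameOID.COMMON_NAME, "Jane Researcher"),    # placeholder
    ]))
    .sign(key, hashes.SHA256())
)

# PEM-encoded CSR, ready to submit to the CA for vetting and issuance.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```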
DOEGrids CA Usage Statistics
[Chart: cumulative DOEGrids CA certificates and requests per month, Jan. 2003 – Jan. 2005, showing user certificates, service certificates, expired (+ revoked) certificates, total certificates issued, and total certificate requests.]
• Production service began in June 2003
• User certificates: 1386; service certificates: 2168; host/other certificates: 15; total certificates issued: 3569; total certificate requests: 4776
• Internal PKI SSL server certificates: 36
* FusionGRID CA certificates not included here. Report as of Jan. 11, 2005.
DOEGrids CA Usage – Virtual Organization Breakdown
[Pie chart: DOEGrids CA certificates by virtual organization (total certs 3569): *Others 38.9%, *iVDGL 17.9%, PPDG 13.4%, FNAL 8.6%, FusionGRID 7.4%, ANL 4.3%, NERSC 4.0%, LBNL 1.8%, ESG 1.0%, ORNL 0.7%, ESnet 0.6%, PNNL 0.6%, DOESG 0.5%, LCG 0.3%, NCC-EPA 0.1%. (* DOE-NSF collaboration)]
ESnet Collaboration Services: Production Services
[Diagram: the ESnet collaboration infrastructure – ISDN audio/data access (6 T1s and 1 PRI) and H.323 over IP, with production RADVISION ECS-500 gatekeepers, a production RADVISION ViaIP MCU, a production Codian MCU, a production Latitude web server and Latitude M3 AudioBridge, and a RADVISION gateway behind an H.323 router.]
• Web-based registration and audio/data bridge scheduling
• Over 1000 registered users worldwide
• Ad-hoc H.323 and H.320 videoconferencing
• Streaming on the Codian MCU using QuickTime or REAL
• "Guest" access to the Codian MCU via the worldwide Global Dialing System (GDS)
ESnet Collaboration Services: H.323 Video Conferencing
• Radvision and Codian MCUs
  o 70 ports on the Radvision available at 384 kbps
  o 40 ports on the Codian at 2 Mbps, plus streaming
  o Usage has leveled off, but an increase is expected in early 2005 (new groups joining ESnet Collaboration)
  o Radvision capacity to increase to 200 ports at 384 kbps by mid-2005
[Chart: H.323 MCU port hours per month, Sep. 2004 – Jan. 2005.]
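Port hours on an MCU are simply ports in use integrated over meeting time. A small sketch of that accounting, with made-up meeting records for illustration:

```python
# Sketch: monthly H.323 MCU "port hours" = sum over meetings of
# (connected ports x meeting duration in hours). Records are made up.
meetings = [
    {"ports": 12, "hours": 1.5},   # e.g., a 12-site conference lasting 90 minutes
    {"ports": 6,  "hours": 2.0},
    {"ports": 25, "hours": 1.0},
]

port_hours = sum(m["ports"] * m["hours"] for m in meetings)
print(f"Total: {port_hours:.0f} port hours this month")
```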
Conclusions
• ESnet is an infrastructure that is critical to DOE's science mission and that serves all of DOE
• ESnet is working to provide for the DOE mission science networking requirements with several new initiatives and a new architecture
• ESnet is very different today, in both planning and business approach and in goals, than it was in the past