
Networks for HENP and ICFA SCIC
Harvey B. Newman
California Institute of Technology
APAN High Energy Physics Workshop
January 21, 2003
Next Generation Networks for
Experiments: Goals and Needs
Large data samples explored and analyzed by thousands of
globally dispersed scientists, in hundreds of teams
 Providing rapid access to event samples, subsets
and analyzed physics results from massive data stores
 From Petabytes by 2002, ~100 Petabytes by 2007,
to ~1 Exabyte by ~2012.
 Providing analyzed results with rapid turnaround, by
coordinating and managing the large but LIMITED computing,
data handling and NETWORK resources effectively
 Enabling rapid access to the data and the collaboration
 Across an ensemble of networks of varying capability
 Advanced integrated applications, such as Data Grids,
rely on seamless operation of our LANs and WANs
 With reliable, monitored, quantifiable high performance
Four LHC Experiments: The
Petabyte to Exabyte Challenge
ATLAS, CMS, ALICE, LHCb
Higgs + New particles; Quark-Gluon Plasma; CP Violation
Data stored: ~40 Petabytes/Year and UP
CPU: 0.30 Petaflops and UP
0.1 Exabyte (2007) to ~1 Exabyte (~2012 ?) for the LHC Experiments
(1 EB = 10^18 Bytes)
LHC Data Grid Hierarchy
[Diagram: the tiered LHC Data Grid, from the online system at CERN down to physicist workstations]
CERN/Outside Resource Ratio ~1:2; Tier0/(Σ Tier1)/(Σ Tier2) ~1:1:1
Experiment → Online System: ~PByte/sec
Online System → Tier 0 +1 (CERN: 700k SI95, ~1 PB Disk, Tape Robot): ~100-400 MBytes/sec
Tier 0 +1 → Tier 1 Centers (IN2P3 Center, INFN Center, RAL Center, FNAL): ~2.5 Gbps each
Tier 1 → Tier 2 Centers: ~2.5 Gbps
Tier 2 → Tier 3 (Institutes, ~0.25 TIPS each, with a physics data cache)
Tier 3 → Tier 4 (Workstations); links at the lower tiers: 0.1 to 10 Gbps
Tens of Petabytes by 2007-8.
An Exabyte within ~5 Years later.
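The link speeds quoted in this hierarchy set the scale for moving data between tiers. As a rough illustration, the Python sketch below uses the nominal link rates from the diagram; the 10 TB sample size and the 50% usable-throughput factor are assumptions made here for the example, not figures from the talk.

```python
# Illustrative only: nominal link rates from the hierarchy above; the
# 10 TB sample size and 50% usable-throughput factor are assumptions.

LINKS_GBPS = {
    "Online System -> Tier 0+1": 0.400 * 8,  # ~400 MBytes/sec upper figure
    "Tier 0+1 -> Tier 1":        2.5,        # ~2.5 Gbps
    "Tier 1 -> Tier 2":          2.5,        # ~2.5 Gbps
    "Tier 2 -> Institute":       1.0,        # within the 0.1-10 Gbps range
}

def transfer_hours(size_tb, link_gbps, efficiency=0.5):
    """Hours to move size_tb terabytes over a link used at `efficiency`."""
    bits = size_tb * 1e12 * 8
    return bits / (link_gbps * 1e9 * efficiency) / 3600.0

for name, gbps in LINKS_GBPS.items():
    print(f"{name:26s} 10 TB in ~{transfer_hours(10, gbps):5.1f} h "
          f"at 50% of {gbps:.1f} Gbps")
```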
ICFA and Global Networks for HENP
National and International Networks, with sufficient
(rapidly increasing) capacity and capability, are
essential for
The daily conduct of collaborative work in both
experiment and theory
Detector development & construction on a global scale;
Data analysis involving physicists from all world regions
The formation of worldwide collaborations
The conception, design and implementation of
next generation facilities as “global networks”
“Collaborations on this scale would never have
been attempted, if they could not rely on excellent
networks”
ICFA and International Networking
ICFA Statement on Communications in Int’l HEP
Collaborations of October 17, 1996
See http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
“ICFA urges that all countries and institutions wishing
to participate even more effectively and fully in
international HEP Collaborations should:
 Review their operating methods to ensure they
are fully adapted to remote participation
 Strive to provide the necessary communications
facilities and adequate international bandwidth”
ICFA Network Task Force (NTF): 1998
Bandwidth Requirements Projection (Mbps)

                                       1998          2000          2005
BW Utilized Per Physicist         0.05 - 0.25       0.2 - 2      0.8 - 10
  (and Peak BW Used)                (0.5 - 2)      (2 - 10)    (10 - 100)
BW Utilized by a University Group   0.25 - 10      1.5 - 45      34 - 622
BW to a Home Laboratory Or           1.5 - 45      34 - 155    622 - 5000
  Regional Center
BW to a Central Laboratory Housing   34 - 155     155 - 622  2500 - 10000
  One or More Major Experiments
BW on a Transoceanic Link            1.5 - 20      34 - 155    622 - 5000

100-1000 X Bandwidth Increase Foreseen for 1998-2005
See the ICFA-NTF Requirements Report:
http://l3www.cern.ch/~newman/icfareq98.html
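One way to read the 100-1000 X figure above is as an annual growth rate over the seven-year span. The short Python sketch below is pure arithmetic on the quoted factors; nothing beyond the numbers on this slide is assumed.

```python
# Annualize the "100-1000 X Bandwidth Increase Foreseen for 1998-2005".
years = 2005 - 1998   # 7-year span

for total_factor in (100, 1000):
    annual = total_factor ** (1.0 / years)
    print(f"{total_factor:5d}x over {years} years -> "
          f"~{(annual - 1) * 100:.0f}% growth per year")
```

So the projected increase corresponds to roughly doubling to nearly tripling the required bandwidth every year.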
ICFA Standing Committee on
Interregional Connectivity (SCIC)
 Created by ICFA in July 1998 in Vancouver; Following the ICFA-NTF
 CHARGE:
Make recommendations to ICFA concerning the connectivity between
the Americas, Asia and Europe (and network requirements of HENP)
As part of the process of developing these
recommendations, the committee should
 Monitor traffic
 Keep track of technology developments
 Periodically review forecasts of future
bandwidth needs, and
 Provide early warning of potential problems
 Create subcommittees when necessary to meet the charge
 The chair of the committee should report to ICFA once per
year, at its joint meeting with laboratory directors (Feb. 2003)
 Representatives: Major labs, ECFA, ACFA, NA Users, S. America
ICFA-SCIC Core Membership
 Representatives from major HEP laboratories:
W. von Rueden (CERN)
Volker Guelzow (DESY)
Vicky White (FNAL)
Yukio Karita (KEK)
Richard Mount (SLAC)
 User Representatives:
Richard Hughes-Jones (UK)
Harvey Newman (USA)
Dean Karlen (Canada)
 For Russia:
Slava Ilyin (MSU)
 ECFA representatives:
Denis Linglin (IN2P3, Lyon)
Frederico Ruggieri (INFN Frascati)
 ACFA representatives:
Rongsheng Xu (IHEP Beijing)
H. Park, D. Son (Kyungpook Nat’l University)
 For South America:
Sergio F. Novaes (University of Sao Paulo)
SCIC Sub-Committees
Web Page: http://cern.ch/ICFA-SCIC/
Monitoring: Les Cottrell
(http://www.slac.stanford.edu/xorg/icfa/scic-netmon)
With Richard Hughes-Jones (Manchester), Sergio Novaes
(Sao Paulo); Sergei Berezhnev (RUHEP), Fukuko Yuasa (KEK),
Daniel Davids (CERN), Sylvain Ravot (Caltech),
Shawn McKee (Michigan)
Advanced Technologies: Richard Hughes-Jones,
With Vladimir Korenkov (JINR, Dubna), Olivier Martin (CERN),
Harvey Newman
The Digital Divide: Alberto Santoro (Rio, Brazil)
 With Slava Ilyin, Yukio Karita, David O. Williams
 Also Dongchul Son (Korea), Hafeez Hoorani (Pakistan),
Sunanda Banerjee (India), Vicky White (FNAL)
Key Requirements: Harvey Newman
 Also Charlie Young (SLAC)
Transatlantic Net WG (HN, L. Price)
Bandwidth Requirements [*] (Mbps)

             2001     2002     2003     2004     2005     2006
CMS           100      200      300      600      800     2500
ATLAS          50      100      300      600      800     2500
BaBar         300      600     1100     1600     2300     3000
CDF           100      300      400     2000     3000     6000
D0            400     1600     2400     3200     6400     8000
BTeV           20       40      100      200      300      500
DESY          100      180      210      240      270      300
CERN BW   155-310      622     2500     5000    10000    20000
[*] BW Requirements Increasing Faster Than Moore’s Law
See http://gate.hep.anl.gov/lprice/TAN
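To make the footnote concrete, the Python sketch below computes each experiment's 2001 to 2006 growth factor from the table and compares it with a Moore's-law reference; the values are copied from the table, while the 18-month doubling period used for the reference is an assumption made here.

```python
# Bandwidth requirements (Mbps) in 2001 and 2006, copied from the table above.
BW_MBPS_2001_2006 = {
    "CMS":   (100, 2500),
    "ATLAS": (50, 2500),
    "BaBar": (300, 3000),
    "CDF":   (100, 6000),
    "D0":    (400, 8000),
    "BTeV":  (20, 500),
    "DESY":  (100, 300),
}

# Reference: one doubling every 18 months over the same five years (~10x).
moore_ref = 2 ** (5 * 12 / 18.0)
print(f"Moore's-law reference over 2001-2006: ~{moore_ref:.0f}x")

for expt, (bw_2001, bw_2006) in BW_MBPS_2001_2006.items():
    print(f"{expt:6s} {bw_2001:5d} -> {bw_2006:5d} Mbps  ({bw_2006 / bw_2001:.0f}x)")
```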
History – One large Research Site
Much of the Traffic:
SLAC ↔ IN2P3/RAL/INFN;
via ESnet + France;
Abilene + CERN
Current Traffic ~400 Mbps;
ESnet Limitation
Projections: 0.5 to 24 Tbps by ~2012
Tier0-Tier1 Link Requirements
Estimate: for Hoffmann Report 2001
1) Tier1 ↔ Tier0 Data Flow for Analysis              0.5 - 1.0 Gbps
2) Tier2 ↔ Tier0 Data Flow for Analysis              0.2 - 0.5 Gbps
3) Interactive Collaborative Sessions (30 Peak)       0.1 - 0.3 Gbps
4) Remote Interactive Sessions (30 Flows Peak)        0.1 - 0.2 Gbps
5) Individual (Tier3 or Tier4) data transfers         0.8       Gbps
   (Limit to 10 Flows of 5 Mbytes/sec each)
 TOTAL Per Tier0 - Tier1 Link                        1.7 - 2.8 Gbps
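As a check, the quoted total is just the sum of the five components. A minimal Python sketch of that arithmetic, using the ranges exactly as listed above:

```python
# The five component ranges (Gbps), exactly as listed above.
COMPONENTS_GBPS = [
    ("Tier1 <-> Tier0 analysis flow",      0.5, 1.0),
    ("Tier2 <-> Tier0 analysis flow",      0.2, 0.5),
    ("Interactive collaborative sessions", 0.1, 0.3),
    ("Remote interactive sessions",        0.1, 0.2),
    ("Individual Tier3/Tier4 transfers",   0.8, 0.8),
]

low  = sum(lo for _, lo, _ in COMPONENTS_GBPS)
high = sum(hi for _, _, hi in COMPONENTS_GBPS)
print(f"TOTAL per Tier0-Tier1 link: {low:.1f} - {high:.1f} Gbps")   # 1.7 - 2.8 Gbps
```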
NOTE:
 Adopted by the LHC Experiments; given in the upcoming
Hoffmann Steering Committee Report: “1.5 - 3 Gbps per
experiment”
 Corresponds to ~10 Gbps Baseline BW Installed on US-CERN Link
 Hoffmann Panel also discussed the effects of higher bandwidths
 For example all-optical 10 Gbps Ethernet across WANs
Tier0-Tier1 BW Requirements
Estimate: for Hoffmann Report 2001
 Does Not Include the more recent ATLAS Data Estimates
 270 Hz at 10^33 Instead of 100 Hz
 400 Hz at 10^34 Instead of 100 Hz
 2 MB/Event Instead of 1 MB/Event
 Does Not Allow Fast Download to Tier3+4
of “Small” Object Collections
 Example: Download 10^7 Events of AODs (10^4 Bytes each) → 100 GBytes;
At 5 MBytes/sec per person (above) that’s ~6 Hours!
(See the arithmetic sketch after this list)
 This is still a rough, bottom-up, static, and
hence Conservative Model.
 A Dynamic distributed DB or “Grid” system with Caching,
Co-scheduling, and Pre-Emptive data movement
may well require greater bandwidth
 Does Not Include “Virtual Data” operations:
Derived Data Copies; Data-description overheads
 Further MONARC Computing Model Studies are Needed
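The arithmetic sketch referenced above checks the AOD download example. The event count, event size, and 5 MBytes/sec per-person rate are the figures quoted on this slide; the Python below just works out the time.

```python
# Check of the AOD download example: 10**7 events of 10**4 bytes each,
# pulled by a single user at the 5 MBytes/sec per-person limit.

n_events     = 10**7
event_size   = 10**4      # bytes per AOD event
rate_bytes_s = 5e6        # 5 MBytes/sec

total_bytes = n_events * event_size          # 1e11 bytes = 100 GBytes
hours = total_bytes / rate_bytes_s / 3600.0
print(f"{total_bytes / 1e9:.0f} GBytes at 5 MBytes/sec -> "
      f"~{hours:.1f} hours")                 # ~5.6, i.e. ~6 hours
```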
ICFA SCIC Meetings[*] and Topics
 Focus on the Digital Divide This Year
 Identification of problem areas; work on ways to improve
 Network Status and Upgrade Plans in Each Country
 Performance (Throughput) Evolution in Each Country,
and Transatlantic
 Performance Monitoring World-Overview
(Les Cottrell, IEPM Project)
 Specific Technical Topics (Examples):
 Bulk transfer, New Protocols; Collaborative Systems, VOIP
 Preparation of Reports to ICFA (Lab Directors’ Meetings)
 Last Report: World Network Status and Outlook - Feb. 2002
 Next Report: Digital Divide, + Monitoring, Advanced
Technologies; Requirements Evolution – Feb. 2003
[*] Seven Meetings in 2002; the Most Recent at KEK on December 13.
Network Progress in 2002 and
Issues for Major Experiments
 Backbones & major links advancing rapidly to 10 Gbps range
 “Gbps” end-to-end throughput data flows have been
tested; will be in production soon (in 12 to 18 Months)
 Transition to Multi-wavelengths 1-3 yrs. in the
“most favored” regions
 Network advances are changing the view of the net’s roles
 Likely to have a profound impact on the experiments’
Computing Models, and bandwidth requirements
 More dynamic view: GByte to TByte data transactions;
dynamic path provisioning
 Net R&D Driven by Advanced integrated applications, such
as Data Grids, that rely on seamless LAN and WAN operation
 With reliable, quantifiable (monitored), high performance
 All of the above will further open the Digital Divide chasm.
We need to take action
ICFA SCIC: R&E Backbone and
International Link Progress
GEANT Pan-European Backbone (http://www.dante.net/geant)
 Now interconnects >31 countries; many trunks 2.5 and 10 Gbps
UK: SuperJANET Core at 10 Gbps
 2.5 Gbps NY-London, with 622 Mbps to ESnet and Abilene
France (IN2P3): 2.5 Gbps RENATER backbone from October 2002
 Lyon-CERN Link Upgraded to 1 Gbps Ethernet
 Proposal for dark fiber to CERN by end 2003
SuperSINET (Japan): 10 Gbps IP and 10 Gbps Wavelength Core
 Tokyo to NY Links: 2 X 2.5 Gbps started; Peer with ESNet by Feb.
CA*net4 (Canada): Interconnect customer-owned dark fiber
nets across Canada at 10 Gbps, started July 2002
 “Lambda-Grids” by ~2004-5
G-WiN (Germany): 2.5 Gbps Core; Connect to US at 2 X 2.5 Gbps;
Support for SILK Project: Satellite links to FSU Republics
Russia: 155 Mbps Links to Moscow (Typ. 30-45 Mbps for Science)
 Moscow-Starlight Link to 155 Mbps (US NSF + Russia Support)
 Moscow-GEANT and Moscow-Stockholm Links 155 Mbps
R&E Backbone and Int’l Link Progress
Abilene (Internet2) Upgrade from 2.5 to 10 Gbps in 2002
 Encourage high throughput use for targeted applications; FAST
ESNET: Upgrade: to 10 Gbps “As Soon as Possible”
US-CERN
 to 622 Mbps in August; Move to STARLIGHT
 2.5G Research Triangle from 8/02; STARLIGHT-CERN-NL;
to 10G in 2003. [10 Gbps SNV-Starlight Link Loan from Level(3)]
SLAC + IN2P3 (BaBar)
 Typically ~400 Mbps throughput on US-CERN, Renater links
 600 Mbps Throughput is BaBar Target for Early 2003
(with ESnet and Upgrade)
FNAL: ESnet Link Upgraded to 622 Mbps
 Plans for dark fiber to STARLIGHT, proceeding
NY-Amsterdam Donation from Tyco, September 2002:
Arranged by IEEAF: 622 Mbps + 10 Gbps Research Wavelength
US National Light Rail Proceeding; Startup Expected this Year
2.5 → 10 Gbps Backbone
> 200 Primary Participants
All 50 States, D.C. and Puerto Rico
75 Partner Corporations and Non-Profits
23 State Research and Education Nets
15 “GigaPoPs” Support 70% of Members
2003: OC192 and OC48 Links Coming Into Service;
Need to Consider Links to US HENP Labs
National R&E Network Example
Germany: DFN Transatlantic Connectivity 2002
 2 X OC48: NY-Hamburg
and NY-Frankfurt
 Direct Peering to Abilene (US) and
Canarie (Canada)
UCAID said to be adding another 2
OC48’s; in a Proposed Global Terabit
Research Network (GTRN)
Virtual SILK Highway Project (from 11/01):
NATO ($ 2.5 M) and Partners[*] ($ 1.1 M)
 Satellite Links to South Caucasus
and Central Asia (8 Countries)
In 2001-2 (pre-SILK) BW 64-512 kbps
Proposed VSAT to get 10-50 X BW
for same cost
See www.silkproject.org
[*] Partners: CISCO, DESY, GEANT,
UNDP, US State Dept., World Bank,
UC London, Univ. Groningen
National Research Networks
in Japan
SuperSINET
 Started operation January 4, 2002
 Support for 5 important areas:
HEP, Genetics, Nano-Technology,
Space/Astronomy, GRIDs
 Provides 10 λ’s:
 10 Gbps IP connection
 Direct intersite GbE links
 9 Universities Connected
January 2003: Two TransPacific
2.5 Gbps Wavelengths (to NY);
Japan-US-CERN Grid Testbed Soon
[Map: 10 Gbps IP and WDM paths, with IP routers and OXCs linking KEK, U Tokyo,
NII (Hitotsubashi and Chiba), Tohoku U, Nagoya U, NIFS, NIG, Kyoto U, ICR Kyoto-U,
Osaka U, ISAS, NAO, IMS, and the Internet]
SuperSINET Updated Map: October 2002
APAN Links in Southeast Asia
January 15, 2003