Networks and Grids for HENP as Global e-Science
Harvey B. Newman
California Institute of Technology
CHEP2004, Interlaken
September 30, 2004
ICFA and Global Networks
for Collaborative Science
 Given the worldwide spread and data-intensive challenges
of our field
 National and International Networks, with sufficient (rapidly
increasing) capacity and seamless end-to-end capability, are
essential for
 The daily conduct of collaborative work in both
experiment and theory
 Experiment development & construction
on a global scale
 Grid systems supporting analysis involving
physicists in all world regions
 The conception, design and implementation of
next generation facilities as “global networks”
 “Collaborations on this scale would never have been
attempted, if they could not rely on excellent networks”
History of Bandwidth Usage – One Large
Network; One Large Research Site
 ESnet Accepted Traffic 1990 – 2004: Exponential Growth Since ’92;
Annual Rate Increased from 1.7 to 2.0X Per Year in the Last 5 Years [W. Johnston]
[Chart: ESnet Monthly Accepted Traffic 1/90 – 1/04, in TByte/Month (0 to 300); “Progress in Steps”]
 SLAC Traffic ~400 Mbps; Growth in Steps (ESnet Limit): ~10X/4 Years.
Projected: ~2 Terabits/s by ~2014 [L. Cottrell]
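A quick compounding check (illustrative arithmetic, not from the slides) of how the quoted annual growth factors relate to the "~10X/4 years" step pattern here and the "~1000X per decade" trend cited later:

    # Hedged sketch: compound the quoted annual growth factors
    for annual in (1.7, 2.0):
        print(f"{annual}x/year -> ~{annual ** 10:.0f}x/decade")
    # 1.7x/year -> ~202x/decade; 2.0x/year -> ~1024x/decade
    print(f"10x per 4 years = {10 ** (1 / 4):.2f}x/year")  # ~1.78, between the two quoted rates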
Int’l Networks BW on Major Links
for HENP: US-CERN Example
 Rate of Progress >> Moore’s Law (US-CERN Example)
 9.6 kbps Analog (1985)
 64-256 kbps Digital (1989 - 1994) [X 7 – 27]
 1.5 Mbps Shared (1990-3; IBM) [X 160]
 2-4 Mbps (1996-1998) [X 200-400]
 12-20 Mbps (1999-2000) [X 1.2k-2k]
 155-310 Mbps (2001-2) [X 16k – 32k]
 622 Mbps (2002-3) [X 65k]
 2.5 Gbps λ (2003-4) [X 250k]
 10 Gbps λ (2005) [X 1M]
 4x10 Gbps or 40 Gbps (2007-8) [X 4M]
 A factor of ~1M bandwidth increase since 1985 (~4M by 2007-8); a factor of ~5k since 1995
 HENP has become a leading applications driver, and also a co-developer, of global networks
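The multipliers above are easy to verify against the 9.6 kbps baseline (a sketch, illustrative arithmetic only):

    # Hedged sketch: recompute the US-CERN bandwidth multipliers quoted above
    base_bps = 9.6e3  # 9.6 kbps analog link, 1985
    links = {"1.5 Mbps (1990-3)": 1.5e6, "622 Mbps (2002-3)": 622e6,
             "10 Gbps (2005)": 10e9, "40 Gbps (2007-8)": 40e9}
    for name, bps in links.items():
        print(f"{name}: x{bps / base_bps:,.0f}")
    # x156 (~160), x64,792 (~65k), x1,041,667 (~1M), x4,166,667 (~4M)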
LHC Data Grid Hierarchy:
Developed at Caltech
CERN/Outside Resource Ratio ~1:2; Tier0/(Σ Tier1)/(Σ Tier2) ~1:1:1
[Diagram: Online System (Experiment, ~PByte/sec) → Tier 0 +1: CERN Center (PBs of Disk; Tape Robot) at ~100-1500 MBytes/sec → Tier 1: IN2P3, INFN, RAL, FNAL Centers over 10 - 40 Gbps → Tier 2 Centers over ~10 Gbps → Tier 3: Institutes (with physics data caches) over ~1-10 Gbps → Tier 4: Workstations at 1 to 10 Gbps]
Tens of Petabytes by 2007-8; an Exabyte ~5-7 years later.
Emerging Vision: A Richly Structured, Global Dynamic System
Challenges of Next Generation
Science in the Information Age
Petabytes of complex data explored and analyzed by
1000s of globally dispersed scientists, in hundreds of teams
 Flagship Applications
 High Energy & Nuclear Physics, AstroPhysics Sky Surveys:
TByte to PByte “block” transfers at 1-10+ Gbps
 Fusion Energy: Time Critical Burst-Data Distribution;
Distributed Plasma Simulations, Visualization, Analysis
 eVLBI: Many real time data streams at 1-10 Gbps
 BioInformatics, Clinical Imaging: GByte images on demand
 Advanced integrated Grid applications rely on reliable,
high performance operation of our LANs and WANs
 Analysis Challenge: Provide results with rapid turnaround,
over networks of varying capability in different world regions
Internet 2 Land Speed Records (LSR):
See Talk by S. Ravot
[Chart: LSR History – IPv4 single stream, in Petabit-meters/sec (10^15 bit*m/s): 0.4 Gbps x 12,272 km (Apr 02); 0.9 Gbps x 10,978 km (Nov 02); 2.5 Gbps x 10,037 km (Feb 03); 5.4 Gbps x 7,067 km (Oct 03); 5.6 Gbps x 10,949 km (Nov 03); 4.2 Gbps x 16,343 km (Apr 04); 6.6 Gbps x 16,500 km (Jun 04). Inset: monitoring of the Abilene traffic in LA during the June 2004 record.]
 Judged on the product of transfer speed and distance end-to-end, using standard (TCP/IP) protocols, across production networks: e.g. Abilene
 IPv6: 4.0 Gbps Geneva-Phoenix (SC2003)
 IPv4 with Windows & Linux: 6.6 Gbps Caltech-CERN (15.7 kkm; “Grand Tour of Abilene”), June 2004: exceeded 100 Petabit-m/sec
 7.48 Gbps X 16 kkm (Linux, 1 stream) achieved in July
 11 Gbps (802.3ad) over LAN in Sept.
 Concentrate now on reliable Terabyte-scale file transfers
 Note system issues: CPU, PCI-X bus, NIC, I/O controllers, drivers
Next: SC04 100 Gbps Challenge – Redefining the Role and Limits of TCP
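The LSR metric itself is simple; a minimal sketch of the speed x distance product behind the records above:

    # LSR metric: end-to-end speed x distance, in Petabit-meters/sec
    def petabit_meters_per_s(gbps: float, km: float) -> float:
        return gbps * 1e9 * km * 1e3 / 1e15

    print(petabit_meters_per_s(6.6, 16500))   # ~108.9 -> "exceeded 100 Petabit-m/sec"
    print(petabit_meters_per_s(7.48, 16000))  # ~119.7 (July 2004, single stream)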
HENP Bandwidth Roadmap
for Major Links (in Gbps)
Year   Production              Experimental             Remarks
2001   0.155                   0.622-2.5                SONET/SDH
2002   0.622                   2.5                      SONET/SDH; DWDM; GigE Integ.
2003   2.5                     10                       DWDM; 1 + 10 GigE Integration
2005   10                      2-4 X 10                 λ Switch; λ Provisioning
2007   2-4 X 10                ~10 X 10; 40 Gbps        1st Gen. λ Grids
2009   ~10 X 10 or 1-2 X 40    ~5 X 40 or ~20-50 X 10   40 Gbps λ Switching
2011   ~5 X 40 or ~20 X 10     ~25 X 40 or ~100 X 10    2nd Gen λ Grids; Terabit Networks
2013   ~Terabit                ~MultiTbps               ~Fill One Fiber
Continuing Trend: ~1000 Times Bandwidth Growth Per Decade;
Compatible with Other Major Plans (ESNet, NLR; GN2, GLIF)
Evolving Quantitative Science Requirements for
Networks (DOE High Perf. Network Workshop)
Science Areas     Today End2End       5 Years End2End         5-10 Years End2End     Remarks
                  Throughput          Throughput              Throughput
High Energy       0.5 Gb/s            100 Gb/s                1000 Gb/s              High bulk throughput
Physics
Climate (Data &   0.5 Gb/s            160-200 Gb/s            N x 1000 Gb/s          High bulk throughput
Computation)
SNS NanoScience   Not yet started     1 Gb/s                  1000 Gb/s + QoS        Remote control and
                                                              for Control Channel    time critical throughput
Fusion Energy     0.066 Gb/s          0.198 Gb/s              N x 1000 Gb/s          Time critical throughput
                  (500 MB/s burst)    (500 MB/20 sec. burst)
Astrophysics      0.013 Gb/s          N*N multicast           1000 Gb/s              Computat’l steering
                  (1 TByte/week)                                                     and collaborations
Genomics Data &   0.091 Gb/s          100s of users           1000 Gb/s + QoS        High throughput
Computation       (1 TBy/day)                                 for Control Channel    and steering
See http://www.doecollaboratory.org/meetings/hpnpw/
Transition beginning now to optical, multi-wavelength, community-owned or leased
“dark fiber” (10 GbE) networks for R&E
National Lambda Rail (NLR): www.nlr.net
 NLR coming up now: initially 4 10G wavelengths; Northern Route LA-JAX by 4Q04; to 40 10G waves in future
 Internet2 HOPI Initiative (w/HEP)
 Initiatives in: nl, ca, pl, cz, uk, ko, jp + 18 US States (CA, IL, FL, IN, …)
ESnet Beyond FY07 (W. Johnston)
[Map: ESnet hubs (SEA, CHI, SNV, NYC, DEN, DC, ALB, ATL, SDG, ELP) and MANs, with international links to AsiaPac (Japan) and Europe (CERN). Legend: Qwest – ESnet hubs; NLR – ESnet hubs; high-speed cross connects with Internet2/Abilene; Major DOE Office of Science Sites; Production IP ESnet core; High-impact science core; Lab supplied; Major international; 2.5 Gb/s; 10 Gb/s; 30 Gb/s; 40 Gb/s; future phases]
GLIF: Global Lambda Integrated Facility: www.glif.is
“GLIF is a World Scale Lambda based Lab for Application and Middleware development, where Grid applications ride on dynamically configured networks based on optical wavelengths ... coexisting with more traditional packet-switched network traffic.”
 4th GLIF Workshop: Nottingham UK, Sept. 2004
 Also JGN2 (Japan) and KREONet (Korea)
10 Gbps wavelengths for R&E network development are proliferating, across continents and oceans
NLR/SC04 Waves (v11)
[Map: SC04 waves on NLR, all lines 10GE, among SEA, CHI (Starlight), SVL, LA (CalTech), SD, PSC, WDC and JAX (SC04 show floor); waves NLR-PITT-LOSA-10GE-14/-15 and NLR-SEAT-SAND-10GE-7; assigned to CalTech/Newman, FL/Avery, SLAC, Optiputer, Ed Seidel, HOPI, UW/Rsrch Chnl]
UltraLight Collaboration:
http://ultralight.caltech.edu
 Caltech, UF, UMich,
SLAC,FNAL, CERN,
MIT, FIU, NLR, CENIC,
UCAID, Translight,
UKLight, Netherlight,
UvA, UCLondon, KEK,
Taiwan, KNU (Korea),
UERJ (Rio), São Paulo
 Cisco, Level(3)
 Integrated hybrid experimental network, leveraging Transatlantic
R&D network partnerships; packet-switched + dynamic optical paths
 10 GbE across US and the Atlantic: NLR, DataTAG, TransLight,
NetherLight, UKLight, etc.; Extensions to Japan, Taiwan, Korea, Brazil
 End-to-end monitoring; Realtime tracking and optimization;
Dynamic bandwidth provisioning
 Agent-based services spanning all layers of the system, from the
optical cross-connects to the applications.
ICFA SCIC (Since 1998)
http://cern.ch/icfa-scic
Three 2004 Reports; Presented to ICFA in February
 Main Report: “Networking for HENP” [H. Newman et al.]
 Includes Updates on Monitoring, the Digital Divide
and Advanced Technologies [*]
 A World Network Overview (with 27 Appendices):
Status and Plans for the Next Few Years of National &
Regional Networks, and Optical Network Initiatives
 Monitoring Working Group Report
[L. Cottrell]
 Digital Divide in Russia
[V. Ilyin]
August 2004 Update Reports at the SCIC Web Site:
See http://icfa-scic.web.cern.ch/ICFA-SCIC/documents.htm
 Asia Pacific, Latin America, GLORIAD (US-Ru-Ko-China);
Brazil, Korea, ESNet, etc.
ICFA Report Update (8/2004): Main
Trends Continue, Some Accelerate
 Current generation of 2.5-10 Gbps network backbones and major int’l links arrived in 2-3 years [US+Europe+Japan; now Korea and China]
 Capability grew 4 to 100s of times; much faster than Moore’s Law
 Proliferation of 10G links across the Atlantic now: a direct result of falling network prices: $0.5 – 1M per year for 10G
 Ability to fully use long 10G paths with TCP continues to advance: 7.5 Gbps X 16 kkm (August 2004)
 Technological progress driving equipment costs in end-systems lower
 “Commoditization” of Gbit Ethernet (GbE) ~complete ($20-50 per port); 10 GbE commoditization underway: < $2K academic
 Move to owned or leased optical nets (us, ca, nl, sk, po, ko, jp) well underway in several areas of the world
 Emergence of the “Hybrid” Network Model: GNEW2004; UltraLight, GLIF
 While there is progress in some less-advantaged regions, the gap between the technologically “rich” and “poor” is widening
SCIC Main Conclusion for 2003-4
 The disparity among regions in HENP could increase even more
sharply, as we learn to use advanced networks effectively, and we
develop dynamic Grid systems in the “most favored” regions
 We must therefore take action, and work to Close the Digital Divide
 To make Physicists from All World Regions Full Partners in Their
Experiments; and in the Process of Discovery
 This is essential for the health of our global experimental
collaborations, our plans for future projects, and our field.
 Critical Path Items (for All Regions)
 A coherent approach to End-to-end monitoring that allows
physicists throughout the world to extract clear information
 Upgrading campus infrastructures.
To support Gbps data transfers in most HEP centers.
 Removing local, last mile, and nat’l and int’l bottlenecks end-to-end, whether technical or political in origin. Bandwidth across borders, in the countryside, or in the city is often much lower than on the major backbones. [This is true in many countries: from China to Brazil to the NE US]
ICFA SCIC Monitoring WG (L. Cottrell)
See www.slac.stanford.edu/grp/scs/net/talk03/icfa-aug04.ppt
 Central Asia, Russia, SE Europe, L. America, Middle East, China: 4-5 yrs behind
 India, Africa: 7-8 yrs behind, and falling farther behind
 50% improvement/year: ~ a factor of 10 in < 6 years
[Chart: PingER World View from SLAC – derived TCP throughput in KBytes/sec, measured from N. America to world regions, Jan-95 to Dec-04, log scale: Edu (141), Europe (150), Canada (27), Latin America (37), Mid East (16), S.E. Europe (21), C. Asia (8), Caucasus (8), Russia (17), China (13), India (7), Africa (30). Important for policy makers. From the PingER project, Aug 2004.]
The view from CERN confirms this picture.
PROGRESS in SE Europe (Sk, Pl, Cz, Hu, …)
Slovak Academic Network (2004): 1660 km of dark fiber CWDM links, with spans up to 112 km, at 1 to 4 Gbps (GbE)
 August 2002: first NREN in Europe to establish an int’l GbE dark fiber link, to Austria; April 2003: to the Czech Republic
 Planning 10 Gbps backbone; dark fiber link to Poland
[Map: Slovakia – VRVS Team]
The Advantage of Dark Fiber
CESNET Case Study (Czech Republic)
CESNET: 2513 km of leased fibers (since 1999). Prices in EUR/month for ~150 km (e.g. Ústí n.L. – Liberec) and ~300 km (e.g. Praha – Brno) routes:

1 x 2.5G:  leased wavelength 7,000 / 8,000;   leased fibre with own equipment 5,000* / 7,000**
           * 2 x booster 18dBm   ** 2 x booster 27dBm + 2 x preamp + 6 x DCF
1 x 10G:   leased wavelength 14,000 / 16,000; leased fibre with own equipment 5,000* / 8,000**
           * 2 x booster 21dBm, 2 x DCF   ** 2 x (booster 21dBm + in-line + preamplifier) + 6 x DCF
4 x 2.5G:  leased wavelength 14,000 / 23,000; leased fibre with own equipment 8,000* / 11,000**
           * 2 x booster 24dBm, DWDM 2.5G   ** 2 x (booster + in-line + preamp), 6 x DCF, DWDM 2.5G
4 x 10G:   leased wavelength 29,000 / 47,000; leased fibre with own equipment 12,000* / 14,000**
           * 2 x booster 24dBm, 2 x DCF, DWDM 10G   ** 2 x (booster + in-line + preamp), 6 x DCF, DWDM 10G

Case study result, wavelength service vs. fiber lease: cost savings of 50-70% over 4 years for few-hundred-km 2.5G-10G links
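The comparison is easy to tabulate; a sketch of the monthly savings, assuming the wavelength/own-equipment pairings reconstructed in the table above:

    # Hedged sketch: own equipment on leased fibre vs leased wavelength (EUR/month)
    leased = {"1x2.5G": (7000, 8000),   "1x10G": (14000, 16000),
              "4x2.5G": (14000, 23000), "4x10G": (29000, 47000)}
    own    = {"1x2.5G": (5000, 7000),   "1x10G": (5000, 8000),
              "4x2.5G": (8000, 11000),  "4x10G": (12000, 14000)}
    for cfg in leased:  # (150 km, 300 km) routes
        print(cfg, [f"{1 - o / l:.0%}" for o, l in zip(own[cfg], leased[cfg])])
    # The 10G cases fall in the 50-70% range quoted on the slide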
Asia Pacific Academic Network Connectivity
APAN Status July 2004
 Connectivity to US from JP, KO, AU is advancing rapidly (> 30G); progress in the region, and to Europe, is much slower
[Map: APAN access and exchange points (current status and 2004 plans), linking RU, CN, KR, JP, TW, HK, TH, VN, PH, MY, SG, ID, LK, IN and AU to the US and Europe, with link bandwidths from 1.5 Mbps to ~21 Gbps]
Moves to Better North/South Linkages within Asia
JP-SG link: 155Mbps in 2005 is proposed to NSF by CIREN
JP-TH link: 2 Mbps → 45 Mbps in 2004 is being studied.
Concept of an AP Ring with GLORIAD + IEEAF
Research Networking in Latin
America: Just Taking Off in 2004
 AmPath provided connectivity for some Latin American countries: Argentina, Brazil, Chile, Mexico, Venezuela
 New CHEPREO São Paulo-Miami link at 622 Mbps starting this month
New: CLARA (funded by EU)
 Regional network connecting 19 countries: Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Dominican Republic, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Uruguay, Venezuela
 155 Mbps backbone with 10-45 Mbps spurs; 4 Mbps satellite to Cuba; 622 Mbps to Europe
[Map: AmPath]
Also WHREN NSF proposal: 2.5G to US
Brazilian HEPGrid: Rio, São Paulo etc.
Global Ring Network for Advanced Applications Development
www.gloriad.org: US-RUSSIA-CHINA + KOREA Global Optical Ring
 OC3 circuits Moscow-Chicago-Beijing since January 2004
 OC3 circuit Moscow-Beijing July 2004 (completes the ring)
 Rapid traffic growth, with heaviest US use from DOE (FermiLab), NASA, NOAA, NIH and 260+ universities (UMD, IU, UCB, UNC, UMN… many others)
 > 5 TBytes now transferred monthly via GLORIAD to US, Russia, China
 Aug. 8 2004: P.K. Young, Korean IST Advisor to the President, announces Korea joining GLORIAD as a full partner
 TEIN gradually to 10G, connected to GLORIAD
 Plans for Central Asian extension, with the Kyrgyz Gov’t
GLORIAD 5-year proposal (with US NSF) for expansion to 2.5G-10G Moscow-Amsterdam-Chicago-Pacific-Hong Kong-Pusan-Beijing early 2005; 10G ring around the northern hemisphere 2007; multi-wavelength hybrid service from ~2008-9
Internet in China
(J.P.Wu APAN July 2004)
Internet users in China: from 68 million to 78 million within 6 months
 Wireline: 23.4M; Dial-up: 45.0M; ISDN: 4.9M; Broadband: 9.8M
 Backbone: 2.5-10G DWDM+Router
 International links: 20G
 Exchange Points: > 30G (BJ, SH, GZ)
 Last Miles: Ethernet, WLAN, ADSL, CTV, CDMA, ISDN, GPRS, Dial-up
 IP Addresses: 32M (1 A + 233 B + 146 C); need IPv6
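The class-block arithmetic behind the 32M figure checks out (a quick sketch, class sizes only):

    # 1 class A + 233 class B + 146 class C address blocks
    print((1 * 2**24 + 233 * 2**16 + 146 * 2**8) / 1e6)  # ~32.1 million addresses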
AFRICA: NectarNet Initiative
W. Matthews
Georgia Tech
www.nectarnet.org
Growing need to connect academic researchers, medical researchers & practitioners to many sites in Africa. Examples:
 CDC & NIH: Global AIDS Project, Dept. of Parasitic Diseases, Nat’l Library of Medicine (Ghana, Nigeria)
 Gates $50M HIV/AIDS Center in Botswana
 Monsoon Project, Dakar [cf. East US hurricanes trace back to Africa]
 US Geological Survey: Global Spatial Data Infrastructure
 Distance Learning: Emory Univ.-Ibadan (Nigeria); Research Channel
But Africa is hard: 11M sq. miles, 600M people, 54 countries
 Little telecommunications infrastructure
Approach: use the SAT-3/WASC cable (S. Africa to Portugal); GEANT across Europe; AMS-NY link across the Atlantic; peer with Abilene in NYC
 Cable landings in 8 West African countries and South Africa
 Pragmatic approach to reach end points: VSAT, ADSL, microwave, etc.
Note: World Conference on Physics and Sustainable Development, 10/31 – 11/2/05 in Durban, South Africa; part of the World Year of Physics 2005. Sponsors: UNESCO, ICTP, IUPAP, APS, SAIP
HEP Active in the World Summit on
the Information Society 2003-2005
 GOAL: To Create an “Information Society”.
Common Definition Adopted (Tokyo Declaration, January 2003):
“… One in which highly developed ICT networks, equitable and
ubiquitous access to information, appropriate content in accessible
formats, and effective communication can help people achieve their
potential”
Kofi Annan challenged the scientific community to help (3/03)
 WSIS I (Geneva 12/03): SIS Forum, CERN/Caltech Online Stand; CERN RSIS Event
 Visitors at WSIS I: Kofi Annan, UN Sec’y General; John H. Marburger, Science Adviser to the US President; Ion Iliescu, President of Romania; …
Planning Now Underway for
HEPGRID and Digital Divide Workshop
UERJ, Rio de Janeiro, Feb. 16-20 2004
Theme: Global Collaborations, Grids and
Their Relationship to the Digital Divide
For the past three years the SCIC has focused on understanding and seeking the means of reducing or eliminating the Digital Divide. It proposed to ICFA that these issues, as they affect our field of High Energy Physics, be brought to our community for discussion. This led to ICFA’s approval, in July 2003, of the First Digital Divide and HEP Grid Workshop:
 Review of R&E networks; major Grid projects
 Perspectives on Digital Divide issues by major HEP experiments, regional representatives
 Focus on Digital Divide issues in Latin America; relate to problems in other regions
Tutorials: C++; Grid Technologies; Grid-Enabled Analysis; Networks; Collaborative Systems
Sessions & tutorials available (w/video) on the web
More info: http://www.lishep.uerj.br
SPONSORS: CLAF, CNPQ, FAPERJ, UERJ
[Workshop site sidebar: NEWS; Bulletins One and Two; Welcome Bulletin; General Information; Registration; Travel Information; Hotel Registration; Participant List; How to Get to UERJ/Hotel; Computer Accounts; Useful Phone Numbers; Program; Contact us: Secretariat, Chairmen]
International ICFA Workshop on HEP
Networking, Grids and Digital Divide
Issues for Global e-Science
Dates: May 23-27, 2005
Venue: Daegu, Korea
Dongchul Son
Center for High Energy Physics
Kyungpook National University
ICFA, Beijing, China
Aug. 2004
Approved by ICFA
August 20, 2004
International ICFA Workshop on HEP Networking, Grids
and Digital Divide Issues for Global e-Science
 Workshop Goals
 Review the current status, progress and barriers to effective
use of major national, continental and transoceanic networks
used by HEP
 Review progress, strengthen opportunities for collaboration,
and explore the means to deal with key issues in Grid
computing and Grid-enabled data analysis, for high energy
physics and other fields of data intensive science, now and in
the future
 Exchange information and ideas, and formulate plans to
develop solutions to specific problems related to the Digital
Divide in various regions, with a focus on Asia Pacific, as well
as Latin America, Russia and Africa
 Continue to advance a broad program of work on reducing or
eliminating the Digital Divide, and ensuring global
collaboration, as related to all of the above aspects.
고에너지물리연구센터
CENTER FOR HIGH ENERGY PHYSICS
Networks and Grids for HENP and
Global Science
 Network backbones and major links used by HENP and other fields
are advancing rapidly
 To the 10 G range in < 3 years; much faster than Moore’s Law
 New HENP and DOE Roadmaps: a factor ~1000 BW Growth per decade
 We are learning to use long distance 10 Gbps networks effectively
 2004 Developments: to 7.5 Gbps flows with TCP over 16 kkm
 Transition to community-operated optical R&E networks (us, ca, nl, pl, cz,
sk, kr, jp …); Emergence of a new generation of “hybrid” optical networks
 We Must Work to Close the Digital Divide
 To Allow Scientists in All World Regions to Take Part in Discoveries
 Removing Regional, Last Mile, Local Bottlenecks and
Compromises in Network Quality are now On the Critical Path
 Important Examples on the Road to Progress in Closing the Digital Divide
 CLARA, CHEPREO, and the Brazil HEPGrid in Latin America
 Optical Networking in Central and Southeast Europe
 APAN Links in the Asia Pacific: GLORIAD and TEIN
 Leadership and Outreach: HEP Groups in Europe, US, Japan, & Korea
Extra Slides
Follow
Internet Growth in the World At Large
Amsterdam Internet Exchange Point Example
[Chart, 11.08.04: AMS-IX traffic, 5-minute max and average curves, in the 20-30 Gbps range; some annual growth spurts, typically in summer-fall]
The rate of HENP network usage growth (~100% per year) is similar to the world at large
HENP Lambda Grids:
Fibers for Physics
 Problem: Extract “Small” Data Subsets of 1 to 100 Terabytes
from 1 to 1000 Petabyte Data Stores
 Survivability of the HENP Global Grid System, with
hundreds of such transactions per day (circa 2007)
requires that each transaction be completed in a
relatively short time.
 Example: Take 800 secs to complete the transaction. Then:
   Transaction Size (TB)   Net Throughput (Gbps)
   1                       10
   10                      100
   100                     1000 (Capacity of Fiber Today)
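The table follows from simple arithmetic (a sketch, assuming 8 x 10^12 bits per TByte and the 800 s target above):

    for size_tb in (1, 10, 100):
        gbps = size_tb * 8e12 / 800 / 1e9  # bits / seconds -> Gbps
        print(f"{size_tb:>3} TB -> {gbps:>4.0f} Gbps")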
 Summary: Providing Switching of 10 Gbps wavelengths
within ~2-4 years; and Terabit Switching within 5-8 years
would enable “Petascale Grids with Terabyte transactions”,
to fully realize the discovery potential of major HENP programs,
as well as other data-intensive research.
JGN2: Japan Gigabit Network (4/04 – 3/08)
20 Gbps Backbone, 6 Optical Cross-Connects
[Legend: 20 Gbps / 10 Gbps / 1 Gbps links; optical testbeds; access points; core network nodes]
Core network nodes: Sapporo, Sendai, Kanazawa, Nagano, Otemachi (Tokyo), Nagoya, Osaka, Okayama, Kochi, Fukuoka, Okinawa; NICT Koganei Headquarters, NICT Tsukuba Research Center, NICT Keihanna Human Info-Communications Research Center, NICT Kita Kyushu IT Open Laboratory; link to USA
Access points:
<10G> Ishikawa Hi-tech Exchange Center (Tatsunokuchi-machi, Ishikawa Prefecture); Kyushu University (Fukuoka); Kyoto University (Kyoto); Osaka University (Ibaraki); Tokyo University (Bunkyo Ward, Tokyo); NICT Kashima Space Research Center (Kashima, Ibaraki Prefecture)
<1G> Teleport Okayama (Okayama); Hiroshima University (Higashi Hiroshima); NICT Kansai Advanced Research Center (Kobe); Tohoku University (Sendai); NICT Iwate IT Open Laboratory (Takizawa-mura, Iwate Prefecture); Yokosuka Telecom Research Park (Yokosuka, Kanagawa Prefecture)
<100M> Toyama Institute of Information Systems (Toyama); Fukui Prefecture Data Super Highway AP* (Fukui); Tottori University of Environmental Studies (Tottori); Techno Ark Shimane (Matsue); New Media Plaza Yamaguchi (Yamaguchi); NetCom Saga (Saga); Nagasaki University (Nagasaki); Kumamoto Prefectural Office (Kumamoto); Toyonokuni Hyper Network AP* (Oita); Miyazaki University (Miyazaki); Kagoshima University (Kagoshima); Lake Biwa Data Highway AP* (Ohtsu); Nara Prefectural Institute of Industrial Technology (Nara); Wakayama University (Wakayama); Hyogo Prefecture Nishiharima Technopolis (Kamigori-cho, Hyogo Prefecture); Hokkaido Regional Network Association AP* (Sapporo); Hachinohe Institute of Technology (Hachinohe, Aomori Prefecture); Akita Regional IX* (Akita); Keio University Tsuruoka Campus (Tsuruoka, Yamagata Prefecture); Aizu University (Aizu Wakamatsu); Niigata University (Niigata); Matsumoto Information Creation Center (Matsumoto, Nagano Prefecture); Kagawa Prefecture Industry Promotion Center (Takamatsu); Tokushima University (Tokushima); Ehime University (Matsuyama); Kochi University of Technology (Tosayamada-cho, Kochi Prefecture); Nagoya University (Nagoya); University of Shizuoka (Shizuoka); Softopia Japan (Ogaki, Gifu Prefecture); Mie Prefectural College of Nursing (Tsu); Utsunomiya University (Utsunomiya); Gunma Industrial Technology Center (Maebashi); Reitaku University (Kashiwa, Chiba Prefecture); NICT Honjo Information and Communications Open Laboratory (Honjo, Saitama Prefecture); Yamanashi Prefecture Open R&D Center (Nakakoma-gun, Yamanashi Prefecture)
*IX: Internet eXchange; AP: Access Point
ICFA Standing Committee on
Interregional Connectivity (SCIC)
 Created by ICFA in July 1998 in Vancouver
 CHARGE:
Make recommendations to ICFA concerning the connectivity
between the Americas, Asia and Europe
 As part of the process of developing these
recommendations, the committee should
 Monitor traffic
 Keep track of technology developments
 Periodically review forecasts of future bandwidth needs
 Provide early warning of potential problems
 Representatives: Major labs, ECFA, ACFA; North American
and Latin American Physics Communities
 Monitoring, Advanced Technologies, and Digital Divide
Working Groups Formed in 2002
APAN Link Information (1 Of 2)
Countries      Network               Bandwidth (Mbps)               AUP/Remark
AU-US          AARNet                310 (to 2 x 10 Gbps soon)      R&E + Commodity
AU-US (PAIX)   AARNet                622                            R&E + Commodity
CN-HK          CERNET                622                            R&E + Commodity
CN-HK          CSTNET                155                            R&E
CN-JP          CERNET                155                            R&E
CN-JP          CERNET                45                             Native IPv6
CN-US          CERNET                155                            R&E + Commodity
CN-US          CSTNET                155                            R&E
HK-US          HARNET                45                             R&E
HK-TW          HARNET/TANET/ASNET    100                            R&E
IN-US/UK       ERNET                 16                             R&E
JP-ASIA        UDL                   9                              R&E
JP-ID          AI3 (ITB)             0.5/1.5                        R&E
JP-KR          APII                  2 Gbps                         R&E
JP-LA          AI3 (NUOL)            0.128/0.128                    R&E
JP-MY          AI3 (USM)             1.5/0.5                        R&E
JP-PH          AI3 (ASTI)            1.5/0.5                        R&E
JP-PH          MAFFIN                6                              Research
JP-SG          AI3 (TP)              1.5/0.5                        R&E
JP-TH          AI3 (AIT)             (service interrupted)         R&E
JP-TH          SINET (ThaiSarn)      2                              R&E
JP-US          TransPac              5 Gbps (to 2 x 10 Gbps soon)   R&E
2004.7.7 [email protected]
SCIC Focus on the Digital Divide:
Several Perspectives
 Work on Policies and/or Pricing: pk, in, br, SE Europe, …
 Find ways to work with vendors, NRENs, and/or gov’ts
 Point to model cases: e.g. Poland, Slovakia, Czech Republic
 Share pricing and technology-cost information
 Inter-Regional Projects
 GLORIAD, Russia-China-US optical ring
 Latin America: CHEPREO (US-Brazil); EU CLARA Project
 Workshops and Tutorials/Training Sessions
 For example: Digital Divide and HEPGrid Workshop, UERJ Rio, Feb. 2004; next DD Workshop in Daegu, May 2005
 Help with Modernizing the Infrastructure
 Raise technology awareness; help commission, develop
 Provide tools for effective use: monitoring, collaboration
 Participate in Standards Development; Open Tools
 Advanced TCP stacks; Grid systems
Bandwidth prices in Africa vary dramatically, and are in general many times what they could be if universities purchased in volume.
Sample Bandwidth Costs for African Universities ($/kbps/month):
 Nigeria $20.00; Average $11.03; Uganda $9.84; Ghana $6.77; IBAUD Target $3.00; USA $0.27
 Avg. unit cost is 40X the US avg.; costs are several hundred times those in leading countries
 Sample size of 26 universities; average cost for VSAT service (quality, CIR, Rx, Tx not distinguished)
Roy Steiner, Internet2 Workshop
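The "40X" headline follows directly from the chart figures (a quick check, illustrative only):

    print(11.03 / 0.27)  # ~40.9: average African unit cost vs the US average, $/kbps/month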
Managing Global Systems: Dynamic
Scalable Services Architecture
MonALISA: http://monalisa.cacr.caltech.edu
24 X 7 Operations
Multiple Orgs.
 Grid2003
 US CMS
 CMS-DC04
 ALICE
 STAR
 VRVS
 ABILENE
 GEANT
 + GLORIAD
 “Station Server” service engines at sites host many “Dynamic Services”
 Scales to thousands of service instances
 Servers auto-discover and interconnect dynamically to form a robust fabric
 Autonomous agents
+ CLARENS: Web Services Fabric and Portal Architecture
GEANT and CERNlink
• GEANT plays a role in Europe similar to Abilene and ESnet in the US – it interconnects the European National Research and Education networks, to which the European R&E sites connect
• GEANT currently carries essentially all ESnet international traffic (LHC use of CERNlink to DOE labs is still ramping up)
• GN2 is the second phase of the GEANT project
  o The architecture of GN2 is remarkably similar to the new ESnet Science Data Network + IP core network model
• CERNlink will be the main CERN-to-US LHC data path
  o Both US LHC tier 1 centers are on ESnet (FNAL and BNL)
  o ESnet directly connects at 10 Gb/s to the CERNlink
  o The new ESnet architecture (Science Data Network) will accommodate the anticipated 40 Gb/s from LHC to US
ESnet New Architecture Goal FY05:
Science Data Network Phase 1 and SF Bay Area MAN
[Map: existing ESnet core plus new core; hubs at SEA, CHI, SNV, NYC, DEN, DC, ALB, ATL, SDG, ELP; links to AsiaPac (Japan), Europe, and CERN (2x10 Gb/s). Legend: MANs; current ESnet hubs; new ESnet hubs; high-speed cross connects with Internet2/Abilene; Major DOE Office of Science Sites; Qwest ESnet core; NLR ESnet core; Lab supplied; Major international; UltraSciNet; 2.5 Gb/s; 10 Gb/s; future phases]
ESnet New Architecture Goal FY06:
Science Data Network Phase 2 and Chicago MAN
[Map: as above, with CERN at 3x10 Gb/s. Legend: MANs; current ESnet hubs; new ESnet hubs; high-speed cross connects with Internet2/Abilene; Major DOE Office of Science Sites; ESnet IP core (Qwest); ESnet SDN core; Lab supplied; Major international; UltraSciNet; 2.5 Gb/s; 10 Gb/s; future phases]
Abilene Map During LSR Trial
[MonALISA view of the Caltech-CERN path across Abilene]
TCP variants performance
Tests between CERN and Caltech
Capacity = OC-192, 9.5 Gbps; 264 ms round trip latency; 1 flow
Sending station: Tyan S2882 motherboard, 2x Opteron 2.4 GHz, 2 GB DDR
Receiving station (CERN OpenLab): HP rx4640, 4x 1.5 GHz Itanium-2, zx1 chipset, 8 GB memory
Network adapter: S2io 10 GbE
Results: Linux TCP 3.0 Gbps; Linux Westwood+ 4.1 Gbps; Linux BIC TCP 5.0 Gbps; FAST 7.3 Gbps
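What makes a single flow at this distance hard is the bandwidth-delay product; a minimal sketch using the figures above:

    # One full-rate flow must keep a window of bandwidth x RTT bytes in flight
    link_gbps, rtt_s = 9.5, 0.264
    bdp_bytes = link_gbps * 1e9 * rtt_s / 8
    print(f"{bdp_bytes / 1e6:.0f} MB")  # ~314 MB; any loss at this window is costly for standard TCP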
High Throughput Disk to Disk
Transfers: From 0.1 to 1GByte/sec
Server hardware (rather than network) bottlenecks:
 Write/read and transmit tasks share the same limited resources: CPU, PCI-X bus, memory, IO chipset
 PCI-X bus bandwidth: 8.5 Gbps [133 MHz x 64 bit]
 Link aggregation (802.3ad): logical interface with two physical interfaces on two independent PCI-X buses
 LAN test: 11.1 Gbps (memory to memory)
Performance in this range (from 100 MByte/sec
up to 1 GByte/sec) is required to build a
responsive Grid-based Processing and
Analysis System for LHC
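The quoted bus figure is just clock times width (a quick check, illustrative arithmetic only):

    print(133e6 * 64 / 1e9)  # 8.512 -> the ~8.5 Gbps PCI-X ceiling, below a 10 GbE line rate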
UltraLight Optical Exchange Point
Photonic switch
L1, L2 and L3 services
Interfaces
 1GE and 10GE
 10GE WAN-PHY (SONET friendly)
Hybrid packet- and circuit-switched PoP
 Interface between packet- and circuit-switched networks
Control plane is L3
SC2004: HEP network layout
 Joint Caltech, FNAL,
CERN, SLAC, UF….
 11 10 Gbps waves to
HEP’s show floor
 Bandwidth challenge:
aggregate throughput of
100 Gbps
 FAST TCP