Networks and Grids for Global Science
Harvey B. Newman
California Institute of Technology
3rd International Data Grid Workshop
Daegu, Korea August 26, 2004
Challenges for Global HENP Experiments
LHC Example- 2007
5000+ Physicists
250+ Institutes
60+ Countries


BaBar/D0 Example - 2004
500+ Physicists
100+ Institutes
35+ Countries
Major Challenges (Shared with Other Fields)
 Worldwide Communication and Collaboration
 Managing Globally Distributed Computing & Data Resources,
for Cooperative Software Development and Analysis
Large Hadron Collider (LHC)
CERN, Geneva: 2007 Start
 pp s =14 TeV L=1034 cm-2 s-1
 27 km Tunnel in Switzerland & France
CMS
TOTEM
5000+ Physicists
250+ Institutes
60+ Countries
First Beams:
Summer 2007
Physics Runs:
from Fall 2007
ALICE: HI
ATLAS
LHCb: B-physics
Higgs, SUSY, QG Plasma, CP Violation, … the Unexpected
LHC Data Grid Hierarchy:
Developed at Caltech
CERN/Outside Resource Ratio ~1:2; Tier0:(Σ Tier1):(Σ Tier2) ~1:1:1
[diagram: tiered data flow]
 Online System (Experiment, ~PByte/sec) → Tier 0+1 (CERN Center: PBs of Disk; Tape Robot) at ~100-1500 MBytes/sec
 Tier 0+1 → Tier 1 centers (IN2P3, INFN, RAL, FNAL) at 10-40 Gbps
 Tier 1 → Tier 2 centers at ~10 Gbps
 Tier 2 → Tier 3 (institutes, physics data caches) at ~1-10 Gbps
 Tier 3 → Tier 4 (workstations) at 1 to 10 Gbps
Tens of Petabytes by 2007-8. An Exabyte ~5-7 Years later.
Emerging Vision: A Richly Structured, Global Dynamic System
ICFA and Global Networks
for Collaborative Science
 National and International Networks, with sufficient
(rapidly increasing) capacity and seamless end-to-end
capability, are essential for
 The daily conduct of collaborative work in both
experiment and theory
 Experiment development & construction
on a global scale
 Grid systems supporting analysis involving
physicists in all world regions
 The conception, design and implementation of
next generation facilities as “global networks”
 “Collaborations on this scale would never have been
attempted, if they could not rely on excellent networks”
Challenges of Next Generation
Science in the Information Age
Petabytes of complex data explored and analyzed by
1000s of globally dispersed scientists, in hundreds of teams
 Flagship Applications
 High Energy & Nuclear Physics, AstroPhysics Sky Surveys:
TByte to PByte “block” transfers at 1-10+ Gbps
 Fusion Energy: Time Critical Burst-Data Distribution;
Distributed Plasma Simulations, Visualization, Analysis
 eVLBI: Many real time data streams at 1-10 Gbps
 BioInformatics, Clinical Imaging: GByte images on demand
 Provide results with rapid turnaround, over networks
of varying capability in different world regions
 Advanced integrated applications, such as Data Grids,
rely on seamless operation of our LANs and WANs
 With reliable, quantifiable high performance
Int’l Networks BW on Major Links
for HENP: US-CERN Example
 Rate of Progress >> Moore’s Law (US-CERN Example):
 9.6 kbps Analog (1985)
 64-256 kbps Digital (1989-1994) [X 7 - 27]
 1.5 Mbps Shared (1990-3; IBM) [X 160]
 2-4 Mbps (1996-1998) [X 200-400]
 12-20 Mbps (1999-2000) [X 1.2k-2k]
 155-310 Mbps (2001-2) [X 16k - 32k]
 622 Mbps (2002-3) [X 65k]
 2.5 Gbps (2003-4) [X 250k]
 10 Gbps (2005) [X 1M]
 4x10 Gbps or 40 Gbps (2007-8) [X 4M]
 A factor of ~1M Bandwidth Improvement over
1985-2005 (a factor of ~5k during 1995-2005)
 A prime enabler of major HENP programs
 HENP has become a leading applications driver,
and also a co-developer of global networks
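The bracketed multipliers are simply each milestone divided by the 9.6 kbps analog baseline; a minimal Python cross-check (values and labels taken from the list above):

# Cross-check of the bandwidth-growth factors above, relative to
# the 9.6 kbps analog link of 1985 (all values in bits per second).
BASELINE_BPS = 9.6e3

milestones = [
    ("64-256 kbps Digital (1989-1994)", 64e3, 256e3),
    ("155-310 Mbps (2001-2)", 155e6, 310e6),
    ("10 Gbps (2005)", 10e9, 10e9),
    ("4x10 Gbps (2007-8)", 40e9, 40e9),
]

for label, low, high in milestones:
    lo_f, hi_f = low / BASELINE_BPS, high / BASELINE_BPS
    print(f"{label}: X {lo_f:,.0f} - {hi_f:,.0f}")

# 10 Gbps / 9.6 kbps ~ 1.04 million: the "factor of ~1M" over 1985-2005.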
History of Bandwidth Usage – One Large
Network; One Large Research Site
 ESnet Monthly Accepted Traffic, 1/90 – 1/04
[chart: TByte/Month, 0-300, Jan 1990 - Jul 2003]
Exponential Growth Since ’92; Annual Growth
Rate Increased from 1.7X to 2.0X Per Year In the Last 5 Years
 SLAC Traffic ~400 Mbps; Growth in
Steps (ESnet Limit): ~10X/4 Years.
Projected: ~2 Terabits/s by ~2014
Internet Growth in the World At Large
Amsterdam Internet Exchange Point Example
[chart, 11.08.04: 5-minute max ~30 Gbps; average ~20 Gbps]
Some Annual Growth Spurts;
Typically In Summer-Fall
The Rate of HENP Network Usage Growth
(~100% Per Year) is Similar to the World at Large
Internet2 Land Speed Record (LSR)
http://www.guinnessworldrecords.com/
LSR History – IPv4 single stream
[chart: record speed-distance products (Petabit-meters), Apr 2002 - Jun 2004;
milestones from 0.4 Gbps to 6.6 Gbps over 7,067-16,500 km paths;
monitoring of the Abilene traffic in LA]
 Judged on product of transfer
speed and distance end-to-end,
using standard (TCP/IP) protocols.
 Across Production Net: Abilene
 IPv6 record: 4.0 Gbps between
Geneva and Phoenix (SC2003)
 IPv4 Multi-stream record with
Windows & Linux: 6.6 Gbps
between Caltech and CERN (16 kkm;
“Grand Tour d’Abilene”) June 2004
 Exceeded 100 Petabit-m/sec
 Single Stream 7.5 Gbps X 16 kkm
with Linux Achieved in July
 Concentrate now on reliable
Terabyte-scale file transfers
 Note System Issues: CPU, PCI-X
Bus, NIC, I/O Controllers, Drivers
June 2004 Record: Network
Petabitmeter (10^15 bit*meter)
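The LSR metric is the end-to-end speed-distance product; a one-line check, using the June 2004 figures above, that the record exceeded 100 Petabit-meters per second:

# Speed x distance product for the June 2004 record:
# 6.6 Gbps between Caltech and CERN over a ~16,500 km path.
speed_bps = 6.6e9        # 6.6 Gbps
distance_m = 16500e3     # 16,500 km, in meters

product = speed_bps * distance_m    # bit-meters per second
print(f"{product / 1e15:.1f} Petabit-m/sec")   # ~108.9, i.e. > 100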
HENP Bandwidth Roadmap
for Major Links (in Gbps)

Year | Production | Experimental | Remarks
2001 | 0.155 | 0.622-2.5 | SONET/SDH
2002 | 0.622 | 2.5 | SONET/SDH; DWDM; GigE Integ.
2003 | 2.5 | 10 | DWDM; 1 + 10 GigE Integration
2005 | 10 | 2-4 X 10 | λ Switch; λ Provisioning
2007 | 2-4 X 10 | ~10 X 10; 40 Gbps | 1st Gen. λ Grids
2009 | ~10 X 10 or 1-2 X 40 | ~5 X 40 or ~20-50 X 10 | 40 Gbps λ Switching
2011 | ~5 X 40 or ~20 X 10 | ~25 X 40 or ~100 X 10 | 2nd Gen. λ Grids; Terabit Networks
2013 | ~Terabit | ~MultiTbps | ~Fill One Fiber

Continuing Trend: ~1000 Times Bandwidth Growth Per Decade;
Keeping Pace with Network BW Usage (ESnet, SURFnet etc.)
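The ~1000X-per-decade trend corresponds to sustained annual doubling; a short sketch of the arithmetic, assuming a constant annual growth factor:

import math

# Years needed for 1000X growth at a constant annual growth factor;
# ESnet's recent rates (1.7X-2.0X per year) bracket the
# ~1000X-per-decade trend quoted above.
for annual_factor in (1.7, 2.0):
    years = math.log(1000) / math.log(annual_factor)
    print(f"{annual_factor}X/year -> 1000X in {years:.1f} years")
# 2.0X/year -> 10.0 years; 1.7X/year -> 13.0 years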
Evolving Quantitative Science Requirements for
Networks (DOE High Perf. Network Workshop)

Science Areas | Today End2End Throughput | 5 Years End2End Throughput | 5-10 Years End2End Throughput | Remarks
High Energy Physics | 0.5 Gb/s | 100 Gb/s | 1000 Gb/s | High bulk throughput
Climate (Data & Computation) | 0.5 Gb/s | 160-200 Gb/s | N x 1000 Gb/s | High bulk throughput
SNS NanoScience | Not yet started | 1 Gb/s | 1000 Gb/s + QoS for Control Channel | Remote control and time critical throughput
Fusion Energy | 0.066 Gb/s (500 MB/s burst) | 0.198 Gb/s (500 MB/20 sec. burst) | N x 1000 Gb/s | Time critical throughput
Astrophysics | 0.013 Gb/s (1 TByte/week) | N*N multicast | 1000 Gb/s | Computational steering and collaborations
Genomics Data & Computation | 0.091 Gb/s (1 TBy/day) | 100s of users | 1000 Gb/s + QoS for Control Channel | High throughput and steering
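The "Today" figures quoted in parentheses follow from converting each data volume per period into an average rate; a sketch (decimal units assumed, so results match the table to within rounding):

# Convert quoted data volumes per period into average Gb/s,
# approximately reproducing the "Today" column above.
def average_gbps(nbytes, seconds):
    return nbytes * 8 / seconds / 1e9

DAY = 86400
print(f"Astrophysics, 1 TByte/week: {average_gbps(1e12, 7 * DAY):.3f} Gb/s")  # ~0.013
print(f"Genomics, 1 TByte/day:      {average_gbps(1e12, DAY):.3f} Gb/s")      # ~0.093
print(f"Fusion, 500 MB per 20 sec:  {average_gbps(500e6, 20):.3f} Gb/s")      # ~0.200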
Transition beginning now to optical, multiwavelength, community-owned or leased
“dark fiber” networks for R&E
National Lambda Rail (NLR)
[map: NLR nodes - SEA, POR, SAC, SVL, FRE, LAX, SDG, PHO, OGD, DEN, KAN,
DAL, CHI, STR, NAS, ATL, JAC, WAL, OLG, CLE, PIT, RAL, WDC, NYC, BOS;
15808 Terminal, Regen or OADM sites; fiber routes; northern route up now,
remainder coming]
 Initially 4 10G Wavelengths; to 40 10G Waves in Future
 Northern Route Up Now; LA-JAX by 4Q04
 Internet2 HOPI Initiative (w/HEP)
 Dark fiber nets also in nl, ca, pl, cz, uk, ko, jp and 18 US States
JGN2: Japan Gigabit Network (4/04 – 3/08)
20 Gbps Backbone, 6 Optical Cross-Connects
[map legend: 20 Gbps / 10 Gbps / 1 Gbps links; optical testbeds; access points; core network nodes - Sapporo, Sendai, Kanazawa, Nagano, Otemachi (Tokyo), Nagoya, Osaka, Okayama, Kochi, Fukuoka, Okinawa; NICT Koganei Headquarters; NICT Kita Kyushu IT Open Laboratory; NICT Keihanna Human Info-Communications Research Center; NICT Tsukuba Research Center; link to USA]
Access points:
<10G> Ishikawa Hi-tech Exchange Center (Tatsunokuchi-machi, Ishikawa Prefecture); Kyushu University (Fukuoka); Kyoto University (Kyoto); Osaka University (Ibaraki); Tokyo University (Bunkyo Ward, Tokyo); NICT Kashima Space Research Center (Kashima, Ibaraki Prefecture)
<1G> Teleport Okayama (Okayama); Hiroshima University (Higashi Hiroshima); NICT Kansai Advanced Research Center (Kobe); Tohoku University (Sendai); NICT Iwate IT Open Laboratory (Takizawa-mura, Iwate Prefecture); Yokosuka Telecom Research Park (Yokosuka, Kanagawa Prefecture)
<100M> Toyama Institute of Information Systems (Toyama); Fukui Prefecture Data Super Highway AP* (Fukui); Tottori University of Environmental Studies (Tottori); Techno Ark Shimane (Matsue); New Media Plaza Yamaguchi (Yamaguchi); NetCom Saga (Saga); Nagasaki University (Nagasaki); Kumamoto Prefectural Office (Kumamoto); Toyonokuni Hyper Network AP* (Oita); Miyazaki University (Miyazaki); Kagoshima University (Kagoshima); Lake Biwa Data Highway AP* (Ohtsu); Nara Prefectural Institute of Industrial Technology (Nara); Wakayama University (Wakayama); Hyogo Prefecture Nishiharima Technopolis (Kamigori-cho, Hyogo Prefecture); Hokkaido Regional Network Association AP* (Sapporo); Hachinohe Institute of Technology (Hachinohe, Aomori Prefecture); Akita Regional IX* (Akita); Keio University Tsuruoka Campus (Tsuruoka, Yamagata Prefecture); Aizu University (Aizu Wakamatsu); Niigata University (Niigata); Matsumoto Information Creation Center (Matsumoto, Nagano Prefecture); Kagawa Prefecture Industry Promotion Center (Takamatsu); Tokushima University (Tokushima); Ehime University (Matsuyama); Kochi University of Technology (Tosayamada-cho, Kochi Prefecture); Nagoya University (Nagoya); University of Shizuoka (Shizuoka); Softopia Japan (Ogaki, Gifu Prefecture); Mie Prefectural College of Nursing (Tsu); Utsunomiya University (Utsunomiya); Gunma Industrial Technology Center (Maebashi); Reitaku University (Kashiwa, Chiba Prefecture); NICT Honjo Information and Communications Open Laboratory (Honjo, Saitama Prefecture); Yamanashi Prefecture Open R&D Center (Nakakoma-gun, Yamanashi Prefecture)
*IX: Internet eXchange  AP: Access Point
GLIF: Global Lambda Integrated Facility
“GLIF is a World Scale Lambda based Lab for Application and Middleware
development, where Grid applications ride on dynamically configured
networks based on optical wavelengths ... coexisting with more
traditional packet-switched network traffic.”
4th GLIF Workshop: Nottingham UK, Sept. 2004
10 Gbps Wavelengths For R&E Network
Development Are Proliferating,
Across Continents and Oceans
ICFA Standing Committee on
Interregional Connectivity (SCIC)
 Created by ICFA in July 1998 in Vancouver
 CHARGE:
Make recommendations to ICFA concerning the connectivity
between the Americas, Asia and Europe
 As part of the process of developing these
recommendations, the committee should
 Monitor traffic
 Keep track of technology developments
 Periodically review forecasts of future bandwidth needs
 Provide early warning of potential problems
 Representatives: Major labs, ECFA, ACFA; North American
and Latin American Physics Communities
 Monitoring, Advanced Technologies, and Digital Divide
Working Groups Formed in 2002
SCIC in 2003-2004
http://cern.ch/icfa-scic
Three 2004 Reports; Presented to ICFA in February
 Main Report: “Networking for HENP” [H. Newman et al.]
 Includes Brief Updates on Monitoring, the Digital Divide
and Advanced Technologies [*]
 A World Network Overview (with 27 Appendices):
Status and Plans for the Next Few Years of National &
Regional Networks, and Optical Network Initiatives
 Monitoring Working Group Report
[L. Cottrell]
 Digital Divide in Russia
[V. Ilyin]
August 2004 Update Reports at the SCIC Web Site:
See http://icfa-scic.web.cern.ch/ICFA-SCIC/documents.htm
 Asia Pacific, Latin America, GLORIAD (US-Ru-Ko-China);
Brazil, Korea, ESNet, etc.
SCIC Main Conclusion for 2003
Setting the Tone for 2004
 The disparity among regions in HENP could increase
even more sharply, as we learn to use advanced networks
effectively, and we develop dynamic Grid systems in the
“most favored” regions
 We must take action, and work to Close the Digital Divide
 To make Physicists from All World Regions Full
Partners in Their Experiments; and in the Process
of Discovery
 This is essential for the health of our global
experimental collaborations, our plans for future
projects, and our field.
SCIC Focus on the Digital Divide (Cont’d):
Several Perspectives
 Work on Policies and/or Pricing: pk, in, br, SE Europe, …
 Find Ways to work with vendors, NRENs, and/or Gov’ts
 Point to Model Cases: e.g. Poland, Slovakia, Czech Republic
 Inter-Regional Projects
 GLORIAD, Russia-China-US Optical Ring
 Latin America: CHEPREO (US-Brazil); EU CLARA Project
 Virtual SILK Highway Project (DESY): FSU satellite links
 Workshops and Tutorials/Training Sessions
 For Example: Digital Divide and HEPGrid Workshop,
UERJ Rio, Feb. 2004; Next DD Workshop in Daegu May 2005
 Help with Modernizing the Infrastructure
 Raise Technology Awareness; Help Commission, Develop
 Provide Tools for Effective Use: Monitoring, Collaboration
 Participate in Standards Development; Open Tools
 Advanced TCP stacks; Grid systems
ICFA Report: Networks for HENP
General Conclusions (2)
 Reliable high End-to-end Performance of networked applications such as
large file transfers and Data Grids is required. Achieving this requires:
 A coherent approach to End-to-end monitoring extending to all regions
that allows physicists throughout the world to extract clear information
 Upgrading campus infrastructures,
to support Gbps data transfers in most HEP centers. One reason for
under-utilization of national and Int’l backbones is the lack of bandwidth
to end-user groups on campus
 Removing local, last mile, and nat’l and int’l bottlenecks
end-to-end, whether technical or political in origin.
While National and International backbones have reached 2.5 to
10 Gbps speeds in many countries, the bandwidths across borders,
the countryside or the city may be much less.
This problem is very widespread in our community, with
examples stretching from the Asia Pacific to Latin America
to the Northeastern U.S. Root causes for this vary, from lack
of local infrastructure to unfavorable pricing policies.
ICFA Report Update (8/2004): Main
Trends Continue, Some Accelerate
 Current generation of 2.5-10 Gbps network backbones and major Int’l
links arrived in 2-3 Years [US+Europe+Japan; Now Korea and China]
 Capability Grew 4 to 100s of Times; Much Faster than Moore’s Law
 Proliferation of 10G links across the Atlantic Now
 Direct result of Falling Network Prices: $0.5 - 1M Per Year for 10G
 Ability to fully use long 10G paths with TCP continues to advance:
7.5 Gbps X 16 kkm (August 2004)
 Technological progress driving equipment costs in end-systems lower
 “Commoditization” of Gbit Ethernet (GbE) ~complete
($20-50 per port); 10 GbE commoditization underway
 Move to owned or leased optical nets (us, ca, nl, sk, po, ko, jp)
well underway in several areas of the world
 Emergence of “Hybrid” Network Model: GNEW2004; UltraLight, GLIF
 While there is progress in some less-advantaged regions, the
gap between the technologically “rich” and “poor” is widening
ICFA SCIC Monitoring WG (L. Cottrell)
See www.slac.stanford.edu/grp/scs/net/talk03/icfa-aug04.ppt
 Now monitoring 650 sites in 115 countries
 In last 9 months:
 Several sites in Russia (GLORIAD)
 Many hosts in Africa (27 of 54 Countries)
 Monitoring sites in Pakistan, Brazil
 C. Asia, Russia, SE Europe, L. America, M. East, China: 4-5 yrs behind
 India, Africa: 7 yrs behind
 Important for policy makers
[chart: PingER World View Seen from SLAC - derived TCP throughput in
KBytes/sec, measured from N. America to World Regions, Jan 1995 - Dec 2004
(log scale, ~1-10000); regions: Edu (141), Europe (150), Canada (27),
Mid East (16), SE Europe (21), Caucasus (8), Latin America (37),
C. Asia (8), Russia (17), China (13), India (7), Africa (30);
trend: 50% Improvement/year ~ factor of 10 in < 6 years;
view from CERN confirms this view. From the PingER project, Aug 2004]
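The chart's rule of thumb - 50% improvement per year giving a factor of 10 in under 6 years - follows directly from compound growth:

import math

# At 50% throughput improvement per year, time to gain a factor of 10:
years = math.log(10) / math.log(1.5)
print(f"factor of 10 in {years:.2f} years")   # ~5.68, i.e. < 6 years

# The 4-5 and 7 year lags quoted above then correspond to roughly
# 5-7X and ~17X lower derived throughput (1.5**4.5 ~ 6.2, 1.5**7 ~ 17).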
Research Networking in Latin
America: August 2004
 The only Countries with research
network connectivity now in
Latin America:
Argentina, Brazil, Chile,
Mexico, Venezuela
 AmPath Provided connectivity for
some South American countries
 New CHEPREO Sao Paolo-Miami Link
at 622 Mbps Starting This Month
[map: AmPath]
New: CLARA (Funded by EU)
 Regional Network Connecting 19 Countries:
Argentina, Bolivia, Brasil, Chile, Colombia, Costa Rica, Cuba,
Dominican Republic, Ecuador, El Salvador, Guatemala, Honduras,
Mexico, Nicaragua, Panama, Paraguay, Peru, Uruguay, Venezuela
 155 Mbps Backbone with 10-45 Mbps Spurs;
4 Mbps Satellite to Cuba; 622 Mbps to Europe
 Also NSF Proposals To Connect at 2.5G to US
HEPGRID (CMS) in Brazil (Santoro et al.)
HEPGRID-CMS/BRAZIL is a project to build a Grid that
 At Regional Level will include CBPF, UFRJ, UFRGS, UFBA, UERJ & UNESP
 At International Level will be integrated with the CMS Grid based at CERN;
focal points include iVDGL/Grid3 and bilateral projects with the Caltech Group
[diagram: Brazilian HEPGRID - On-line systems → T0+T1 at CERN at 2.5-10 Gbps;
T1 centers in France, Germany, Italy, USA; 622 Mbps to BRAZIL;
UERJ Regional Tier2 Ctr (T2 → T1, 100 → 500 Nodes; plus T2s to 100 Nodes);
UNESP/USP (SPRACE-Working), T3/T2 UFRGS; Gigabit to UERJ, CBPF, UFBA, UFRJ;
T4: Individual Machines]
PROGRESS in SE Europe (Sk, Pl, Cz, Hu, …)
 1660 km of Dark Fiber CWDM Links, up to 112 km, at 1 to 4 Gbps (GbE)
 August 2002: First NREN in Europe to establish an Int’l GbE Dark Fiber
Link, to Austria
 April 2003: to Czech Republic
 Planning 10 Gbps Backbone; dark fiber link to Poland this year
Dark Fiber in Eastern Europe
Poland: PIONIER Network
 2650 km Fiber Connects 16 MANs;
to 5200 km and 21 MANs by 2005
 Support:
 Computational Grids; Domain-Specific Grids
 Digital Libraries
 Interactive TV
 Add’l Fibers for e-Regional Initiatives
[map: installed fiber and PIONIER nodes, plus fibers and nodes planned in
2004 - Gdańsk, Koszalin, Olsztyn, Szczecin, Bydgoszcz, Białystok, Toruń,
Poznań, Warszawa, Gubin, Zielona Góra, Siedlce, Łódź, Puławy, Wrocław,
Radom, Częstochowa, Kielce, Opole, Gliwice, Katowice, Kraków, Cieszyn,
Bielsko-Biała, Rzeszów, Lublin]
The Advantage of Dark Fiber
CESNET Case Study (Czech Republic)
2513 km Leased Fibers (Since 1999)
Case Study Result, Wavelength Service vs. Fiber Lease (EUR/Month;
* about 150 km, e.g. Ústí n.L. - Liberec; ** about 300 km, e.g. Praha - Brno):

Capacity | Leased wavelength * | ** | Leased fibre with own equipment * | **
1 x 2,5G | 7,000 | 8,000 | 5,000 | 7,000
4 x 2,5G | 14,000 | 23,000 | 8,000 | 11,000
1 x 10G | 14,000 | 16,000 | 5,000 | 8,000
4 x 10G | 29,000 | 47,000 | 12,000 | 14,000

Own-equipment configurations:
1 x 2,5G: * 2 x booster 18dBm; ** 2 x booster 27dBm + 2 x preamp + 6 x DCF
4 x 2,5G: * 2 x booster 24dBm, DWDM 2,5G; ** 2 x (booster + in-line + preamp), 6 x DCF, DWDM 2,5G
1 x 10G: * 2 x booster 21dBm, 2 x DCF; ** 2 x (booster 21dBm + in-line + preamplifier) + 6 x DCF
4 x 10G: * 2 x booster 24dBm, 2 x DCF, DWDM 10G; ** 2 x (booster + in-line + preamp), 6 x DCF, DWDM 10G

Cost Savings of 50-70% Over 4 Years for Long 2.5G or 10G Links
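With the monthly figures as reconstructed above, the savings from leasing fiber and running one's own equipment follow directly; a sketch for the ~300 km (Praha - Brno) case (figures indicative, per the reconstruction):

# Monthly cost (EUR) of leased wavelength service vs leased dark fiber
# with own equipment, ~300 km case, per the reconstructed table above.
cases = {
    "1 x 2,5G": (8000, 7000),
    "4 x 2,5G": (23000, 11000),
    "1 x 10G": (16000, 8000),
    "4 x 10G": (47000, 14000),
}
for label, (leased, own) in cases.items():
    print(f"{label}: {1 - own / leased:.0%} saving")
# ~12%, 52%, 50%, 70%: the quoted 50-70% for the long 2.5G/10G links.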
Asia Pacific Academic Network Connectivity
APAN Status July 2004
[map: access points and exchange points, current status vs 2004 plan -
RU-Europe 200M/34M; CN 2G; KR 155M (to 1.2G); 310M; TW 722M; HK; TH; LK;
45M; 90M; 155M; 932M (to 21G); ID 2.5M; PH 1.5M/45M/7.5M; VN 155M/1.5M;
MY 2M/12M; SG 2M; IN 16M; AU; JP-US 9.1G/20.9G; 622M; 777M]
 Connectivity to US from JP, KO,
AU is Advancing Rapidly.
Progress in the Region, and to
Europe is Much Slower
 Better North/South Linkages within Asia:
 JP-SG link: 155 Mbps in 2005 is proposed to NSF by CIREN
 JP-TH link: 2 Mbps → 45 Mbps in 2004 is being studied
 CIREN is studying an extension to India
APAN Link Information (1 Of 2)

Countries | Network | Bandwidth (Mbps) | AUP/Remark
AU-US | AARNet | 310 to 2 x 10 Gbps soon | R&E + Commodity
AU-US (PAIX) | AARNet | 622 | R&E + Commodity
CN-HK | CERNET | 622 | R&E + Commodity
CN-HK | CSTNET | 155 | R&E
CN-JP | CERNET | 155 | R&E
CN-JP | CERNET | 45 | Native IPv6
CN-US | CERNET | 155 | R&E + Commodity
CN-US | CSTNET | 155 | R&E
HK-US | HARNET | 45 | R&E
HK-TW | HARNET/TANET/ASNET | 100 | R&E
IN-US/UK | ERNET | 16 | R&E
JP-ASIA | UDL | 9 | R&E
JP-ID | AI3(ITB) | 0.5/1.5 | R&E
JP-KR | APII | 2 Gbps | R&E
JP-LA | AI3(NUOL) | 0.128/0.128 | R&E
JP-MY | AI3(USM) | 1.5/0.5 | R&E
JP-PH | AI3(ASTI) | 1.5/0.5 | R&E
JP-PH | MAFFIN | 6 | Research
JP-SG | AI3(TP) | 1.5/0.5 | R&E
JP-TH | AI3(AIT) | (service interrupted) | R&E
JP-TH | SINET(ThaiSarn) | 2 | R&E
JP-US | TransPac | 5 Gbps to 2x10 Gbps soon | R&E

2004.7.7 [email protected]
Internet in China
(J.P. Wu, APAN July 2004)
 Internet users in China:
from 6.8 Million to 78 Million within 6 months
 Access: Wireline 23.4M; Dial Up 45.0M; ISDN 4.9M; Broadband 9.8M
 Backbone: 2.5-10G DWDM+Router
 International links: 20G
 Exchange Points: >30G (BJ, SH, GZ)
 Last Miles: Ethernet, WLAN, ADSL, CTV, CDMA, ISDN, GPRS, Dial-up
 IP Addresses: 32M (1A+233B+146C); Need IPv6
China: CERNET Update
1995, 64K Nationwide backbone connecting
8 cities, 100 Universities
1998, 2M Nationwide backbone connecting
20 cities, 300 Universities
2000, Own dark fiber crossing 30+ major
cities and 30,000 kilometers
2001, CERNET DWDM/SDH network finished
2001, 2.5G/155M Backbone connecting 36
cities, 800 universities
2003, 1300+ universities and institutes, over
15 Million Users
CERNET2 and Key Technologies
CERNET2: Next Generation Education
and Research Network in China
 CERNET2 Backbone connecting 15-20
GigaPOPs at 2.5G-10Gbps (I2-like Model)
 Connecting 200 Universities and 100+
Research Institutes at 1Gbps-10Gbps
 Native IPv6 and Lambda Networking
 Support/Deployment of:
 E2E performance monitoring
 Middleware and Advanced Applications
 Multicast
APAN-KR : KREONET/KREONet2 II
KREONET
 11 Regions, 12 POP Centers
 Optical 2.5-10G Backbone;
SONET/SDH, POS, ATM
 National IX Connection
SuperSIREN (7 Res. Institutes)
 Optical 10-40G Backbone
 High Speed Wireless: 1.25 G
 Collaborative Environment
Support
KREONET2
 Support for Next Gen. Apps:
 IPv6, QoS, Multicast;
Bandwidth Alloc. Services
 StarLight/Abilene Connection
Int’l Links
 US: 2 X 622 Mbps via CA*Net;
GbE Link via TransPAC;
155 (to 10G) GLORIAD Link
 Japan: 2 Gbps
 TEIN to GEANT: 34 Mbps
 Singapore (SingAREN): 8 Mbps
KR-US/CA Transpacific connection
 Participation in Global-scale Lambda Networking
 Two STM-4 circuits (1.2G): KR-CA-US
 Global lambda networking: North America, Europe,
Asia Pacific, etc.
[diagram: KREONET/SuperSIReN - CA*Net4 (STM-4 x 2) - StarLight (Chicago);
APII-testbed/KREONet2 - PacWave (Seattle); Global Lambda Networking]
New Record: 916 Mbps from CHEP/KNU
to Caltech (UDP KOREN-TransPAC-Caltech, 22/06/’04)
Date: Tue, 22 Jun 2004 13:47:25 +0900
From: Kihwan Kwon
To: Dongchul Son <[email protected]>

[root@sul Iperf]# ./iperf -c socrates.cacr.caltech.edu -u -b 1000m
------------------------------------------------------------
Client connecting to socrates.cacr.caltech.edu, UDP port 5001
Sending 1470 byte datagrams; UDP buffer size: 64.0 KByte
------------------------------------------------------------
[  5] local 155.230.20.20 port 33036 connected with 131.215.144.227
[ ID] Interval        Transfer     Bandwidth
[  5] 0.0-2595.2 sec  277 GBytes   916 Mbits/sec

[path: KNU/Korea - KOREN - G/H-Japan - TransPAC - USA; Max. 947.3 Mbps]
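The reported bandwidth follows from the transfer and interval; a quick check (iperf's "GBytes" are binary, 2^30 bytes):

# Reproduce iperf's reported bandwidth from transfer and interval.
transferred_bits = 277 * 2**30 * 8    # 277 GBytes (binary), in bits
interval_s = 2595.2
mbps = transferred_bits / interval_s / 1e6
print(f"{mbps:.0f} Mbits/sec")   # ~917, matching 916 up to rounding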
Global Ring Network for Advanced Applications Development
 OC3 circuits Moscow-Chicago-
Beijing since January 2004
 OC3 circuit Moscow-Beijing July
2004 (completes the ring)
 Korea (KISTI) joining US, Russia,
China as full partner in GLORIAD
 Plans for Central Asian extension,
with Kyrgyz Gov’t
 Rapid traffic growth with heaviest
US use from DOE (FermiLab),
NASA, NOAA, NIH and 260+ Univ.
(UMD, IU, UCB, UNC, UMN…
Many Others)
Aug. 8 2004: P.K. Young,
Korean IST Advisor to
President Announces
 Korea Joining GLORIAD
 TEIN gradually to 10G,
connected to GLORIAD
 Asia Pacific Info. InfraStructure (1G) will be
backup net to GLORIAD
> 5 TBytes now transferred monthly
via GLORIAD to US, Russia, China
GLORIAD 5-year Proposal Pending (with US NSF) for expansion: 2.5G Moscow-
Amsterdam-Chicago-Seattle-Hong Kong-Pusan-Beijing circuits early 2005; 10G
ring around northern hemisphere 2007; multiple wavelength service 2009 -
providing hybrid circuit-switched (primarily Ethernet) and routed services
AFRICA: NectarNet Initiative
W. Matthews
Georgia Tech
Growing Need to connect academic researchers, medical
researchers & practitioners to many sites in Africa
Examples:
 CDC & NIH: Global AIDS Project, Dept. of Parasitic Diseases,
Nat’l Library of Medicine (Ghana, Nigeria)
 Gates $50M HIV/AIDS Center in Botswana; Project Coord at Harvard
 Africa Monsoon AMMA Project, Dakar Site [cf. East US Hurricanes]
 US Geological Survey: Global Spatial Data Infrastructure
 Distance Learning: Emory-Ibadan (Nigeria); Research Channel
But Africa is Hard: 11M Sq. Miles, 600 M People, 54 Countries
 Little Telecommunications Infrastructure
Approach: Use SAT-3/WASC Cable (to Portugal), GEANT Across Europe,
AMS-NY Link Across Atlantic, Peer with Abilene in NYC
 Cable Landings in 8 West African Countries and South Africa
 Pragmatic approach to reach end points: VSAT,ADSL,microwave, etc.
Note: World Conference on Physics and Sustainable Development,
10/31 – 11/2/05 in Durban South Africa; Part of World Year of Physics 2005.
Sponsors: UNESCO, ICTP, IUPAP, APS, SAIP
Bandwidth prices in Africa vary dramatically; they are in general
many times what they could be if universities purchased in volume
Sample Bandwidth Costs for African Universities
[chart, $/kbps/month: Nigeria $20.00; Average $11.03; Uganda $9.84;
Ghana $6.77; IBAUD Target $3.00; USA $0.27]
 Avg. Unit Cost is 40X US Avg.;
Cost is Several Hundred Times,
Compared to Leading Countries
 Sample size of 26 universities
 Average Cost for VSAT service: Quality, CIR,
Rx, Tx not distinguished
Roy Steiner, Internet2 Workshop
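The 40X figure is the ratio of the chart's unit costs; for completeness:

# Ratio of average African university bandwidth cost to the US average,
# using the $/kbps/month figures from the chart above.
cost_per_kbps_month = {
    "Nigeria": 20.00, "Average": 11.03, "Uganda": 9.84,
    "Ghana": 6.77, "IBAUD Target": 3.00, "USA": 0.27,
}
ratio = cost_per_kbps_month["Average"] / cost_per_kbps_month["USA"]
print(f"Africa average / US average = {ratio:.0f}X")   # ~41X, the ~40X quoted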
HEP Active in the World Summit on
the Information Society 2003-2005
 GOAL: To Create an “Information Society”.
Common Definition Adopted (Tokyo Declaration, January 2003):
“… One in which highly developed ICT networks, equitable and
ubiquitous access to information, appropriate content in accessible
formats and effective communication can help people achieve their
potential”
Kofi Annan Challenged the Scientific Community to Help (3/03)
 WSIS I (Geneva 12/03): CERN RSIS
Event, SIS Forum, CERN/Caltech
Online Stand
 Visitors at WSIS I:
 Kofi Annan, UN Sec’y General
 John H. Marburger, Science
Adviser to US President
 Ion Iliescu, President of
Romania, …
 Planning Now Underway for Role of Sciences in Information
Society. Palexpo, Geneva 12/2003
 Demos at the CERN/Caltech RSIS Online Stand:
 Advanced network and Grid-enabled analysis
 Monitoring very large scale Grid farms with MonALISA
 World Scale multisite multi-protocol videoconference with VRVS
(Europe-US-Asia-South America)
 Distance diagnosis and surgery using Robots with “haptic” feedback
(Geneva-Canada)
 Music Grid: live performances with bands at St. John’s, Canada
and the Music Conservatory of Geneva on stage
VRVS: 37k hosts in 106 Countries; 2-3X Growth/Year
Grid3: An Operational Production
Grid, Since October 2003
 29 sites (U.S., Korea)
 to ~3000 CPUs
 to ~1200 Concurrent Jobs
 www.ivdgl.org/grid2003
Trillium: PPDG + GriPhyN + iVDGL; Also LCG and EGEE Project; Korea
Prelude to Open Science Grid: www.opensciencegrid.org
HENP Data Grids, and Now
Services-Oriented Grids
 The original Computational and Data Grid concepts are
based on largely stateless, open systems
 Analogous to the Web
 The classical Grid architecture had a number of implicit
assumptions
 The ability to locate and schedule suitable resources,
within a tolerably short time (i.e. resource richness)
 Short transactions with relatively simple failure modes
 HENP Grids are Data Intensive & Resource-Constrained
 Resource usage governed by local and global policies
 Long transactions; some long queues
 Grid-Enabled Analysis: 1000s of users compete for resources
at dozens of sites: Complex scheduling; management
 HENP Stateful, End-to-end Monitored and Tracked Paradigm
 Adopted in OGSA [Now WS Resource Framework]
Managing Global Systems: Dynamic
Scalable Services Architecture
MonALISA: http://monalisa.cacr.caltech.edu
24 X 7 Operations; Multiple Orgs.:
 Grid2003, US CMS, CMS-DC04, ALICE, STAR,
VRVS, ABILENE, GEANT, + GLORIAD
 “Station Server” services-engines at sites host many “Dynamic Services”
 Scales to thousands of service-instances
 Servers auto-discover and interconnect dynamically to form a robust fabric
 Autonomous agents
+ CLARENS: Web Services Fabric and Portal Architecture
Grid-Enabled Analysis Environment
CLARENS: Web Services Architecture
[diagram: Analysis Clients - via HTTP, SOAP, XML/RPC - to the Grid Services
Web Server: Scheduler; Catalogs (Fully-Abstract Planner, Metadata,
Partially-Abstract Planner, Fully-Concrete Planner); Data Management;
Virtual Data; Monitoring; Replica; Execution Priority Manager;
Grid-Wide Execution Service; Applications]
 Analysis Clients talk standard protocols to
the CLARENS “Grid Services Web Server”,
with a simple Web service API
 The secure Clarens portal hides the
complexity of the Grid Services from the client
 Key features: Global Scheduler, Catalogs,
Monitoring, and Grid-wide Execution service;
Clarens servers form a Global Peer-to-peer Network
Caltech GAE Team
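Since the clients speak standard protocols such as XML/RPC over HTTP, the flavor of such a client can be sketched in a few lines; the endpoint URL and method names below are hypothetical illustrations, not the actual CLARENS API:

# Minimal sketch of a web-services analysis client in the CLARENS style.
# The server URL and method names are hypothetical, for illustration only.
import xmlrpc.client

# Connect to a (hypothetical) Grid services web server endpoint.
server = xmlrpc.client.ServerProxy("https://gae.example.org:8443/clarens")

# A client calls simple web-service methods, e.g. to query a catalog
# and submit an analysis job; the Grid machinery behind the portal
# (scheduling, replicas, monitoring) stays hidden from the client.
datasets = server.catalog.query("owner = 'cms' AND type = 'AOD'")
job_id = server.scheduler.submit({"dataset": datasets[0],
                                  "task": "analyze.py"})
print("submitted:", job_id)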
UltraLight Collaboration:
http://ultralight.caltech.edu
 Caltech, UF, FIU,
UMich, SLAC,FNAL,
CERN, UERJ(Rio),
NLR, CENIC, UCAID,
Translight, UKLight,
Netherlight, UvA,
UCLondon, KEK,
Taiwan
 Cisco, Level(3)
 Integrated hybrid experimental network, leveraging Transatlantic
R&D network partnerships; packet-switched + dynamic optical paths
 10 GbE across US and the Atlantic: NLR, DataTAG, TransLight,
NetherLight, UKLight, etc.; Extensions to Japan, Taiwan, Brazil
 End-to-end monitoring; Realtime tracking and optimization;
Dynamic bandwidth provisioning
 Agent-based services spanning all layers of the system, from the
optical cross-connects to the applications.
HEPGRID and Digital Divide Workshop
UERJ, Rio de Janeiro, Feb. 16-20 2004
Theme: Global Collaborations, Grids and
Their Relationship to the Digital Divide
[sidebar: NEWS Bulletins ONE, TWO; Welcome Bulletin; General Information;
Registration; Travel Information; Hotel Registration; Participant List;
How to Get to UERJ/Hotel; Accounts; Phone Numbers; Program; Contact us:
Secretariat, Chairmen; Tutorials: C++, Computer Technologies, Grid
Technologies, Grid-Enabled Analysis, Networks, Collaborative Systems]
For the past three years the SCIC has focused on
understanding and seeking the means of reducing or
eliminating the Digital Divide. It proposed to ICFA
that these issues, as they affect our field of High
Energy Physics, be brought to our community for
discussion. This led to ICFA’s approval, in July 2003,
of the Digital Divide and HEP Grid Workshop.
 Review of R&E Networks; Major Grid Projects
 Perspectives on Digital Divide Issues by Major
HEP Experiments, Regional Representatives
 Focus on Digital Divide Issues in Latin America;
Relate to Problems in Other Regions
More Info: http://www.lishep.uerj.br
SPONSORS: CLAF, CNPQ, FAPERJ, UERJ
Sessions & Tutorials Available (w/Video) on the Web
International ICFA Workshop on HEP
Networking, Grids and Digital Divide
Issues for Global e-Science
Dates: May 23-27, 2005
Venue: Daegu, Korea
Dongchul Son
Center for High Energy Physics
Kyungpook National University
ICFA, Beijing, China
Aug. 2004
Approved by ICFA
August 20, 2004
International ICFA Workshop on HEP Networking, Grids
and Digital Divide Issues for Global e-Science
 Themes
 Networking, Grids, and Their Relationship to the Digital Divide for
HEP as Global e-Science
 Focus on Key Issues of Inter-regional Connectivity
 Workshop Goals
 Review the current status, progress and barriers to the effective use
of the major national, continental and transoceanic networks used
by HEP
 Review progress, strengthen opportunities for collaboration, and
explore the means to deal with key issues in Grid computing and
Grid-enabled data analysis, for high energy physics and other fields
of data intensive science, now and in the future
 Exchange information and ideas, and formulate plans to develop
solutions to specific problems related to the Digital Divide in various
regions, with a focus on Asia Pacific, as well as Latin America,
Russia and Africa
 Continue to advance a broad program of work on reducing or
eliminating the Digital Divide, and ensuring global collaboration,
as related to all of the above aspects.
고에너지물리연구센터
CENTER FOR HIGH ENERGY PHYSICS
Networks and Grids for HENP and
Global Science
 Network backbones and major links used by major experiments
in HENP and other fields are advancing rapidly
 To the 10 G range in < 3 years; much faster than Moore’s Law
 New HENP and DOE Roadmaps: a factor ~1000 improvement per decade
 Important advances in Asia-Pacific, notably Korea, Japan, China
 We are learning to use long distance 10 Gbps networks effectively
 2004 Developments: to 7.5 Gbps flows with TCP over 16 kkm
 A transition to community-owned and operated R&E networks (us, ca, nl, pl,
cz, sk, co, jp …); a new generation of “hybrid” optical networks is emerging
 We Must Work to Close the Digital Divide
 To Allow Scientists and Students from All World Regions
to Take Part in Discoveries at the Frontiers of Science
 Removing Regional, Last Mile, Local Bottlenecks and
Compromises in Network Quality are now On the Critical Path
 Important Examples on the Road to Progress in Closing the Digital Divide
 CLARA, CHEPREO, and Brazil HEP Grid in Latin America
 Optical Networking in Central and Southeast Europe
 APAN Links in the Asia Pacific
 Leadership and Outreach: HEP Groups in Korea, Japan, US and Europe
Extra Slides
Follow
LHC Global Collaborations
 ATLAS
 CMS: 1980 Physicists and Engineers;
36 Countries, 161 Institutions
SC2004: HEP Network Layout
Preview of Future Grid Systems
[diagram: SLAC, FNAL, Brazil, UK, Australia, Japan; StarLight 2*10 Gbps;
NLR 10 Gbps; NLR 2 Metro 10 Gbps Waves LA-Caltech; Caltech CACR 3*10 Gbps;
TeraGrid 10 Gbps; Abilene; LA; LHCNet 10 Gbps to CERN Geneva]
 Joint Caltech, CERN, SLAC,
FNAL, UKlight, HP, Cisco… Demo
 6 to 8 10 Gbps waves to HEP
setup on the show floor
 Bandwidth challenge: aggregate
throughput goal of 40 to 60 Gbps
18 State Dark Fiber
Initiatives
In the U.S. (As of 3/04)
California (CALREN),
Colorado (FRGP/BRAN)
Connecticut Educ. Network,
Florida Lambda Rail,
Indiana (I-LIGHT),
Illinois (I-WIRE),
Md./DC/No. Virginia (MAX),
Michigan,
Minnesota,
NY + New England (NEREN),
N. Carolina (NC LambdaRail),
Ohio (Third Frontier Net)
Oregon,
Rhode Island (OSHEAN),
SURA Crossroads (SE U.S.),
Texas,
Utah,
Wisconsin
The Move to Dark Fiber
is Spreading
FiberCO
Grid and Network Workshop
at CERN March 15-16, 2004
WORKSHOP GOALS
 Share and challenge the lessons learned by nat’l and
international projects in the past three years;
 Share the current state of network engineering and
infrastructure and its likely evolution in the near future;
 Examine our understanding of the networking needs of
Grid applications (e.g., see the ICFA-SCIC reports);
 Develop a vision of how network engineering and
infrastructure will (or should) support Grid computing
needs in the next three years.
CONCLUDING STATEMENT
"Following the 1st International Grid Networking Workshop
(GNEW2004) that was held at CERN and co-organized by
CERN/DataTAG, DANTE, ESnet, Internet2 & TERENA, there is
a wide consensus that hybrid network services capable of
offering both packet- and circuit/lambda-switching as well
as highly advanced performance measurements and a new
generation of distributed system software, will be required in
order to support emerging data intensive Grid applications,
such as High Energy Physics, Astrophysics, Climate and
Supernova modeling, Genomics and Proteomics, requiring
10-100 Gbps and up over wide areas."
HENP Lambda Grids:
Fibers for Physics
 Problem: Extract “Small” Data Subsets of 1 to 100 Terabytes
from 1 to 1000 Petabyte Data Stores
 Survivability of the HENP Global Grid System, with
hundreds of such transactions per day (circa 2007),
requires that each transaction be completed in a
relatively short time.
 Example: Take 800 secs to complete the transaction. Then:

Transaction Size (TB) | Net Throughput (Gbps)
1 | 10
10 | 100
100 | 1000 (Capacity of Fiber Today)

 Summary: Providing Switching of 10 Gbps wavelengths
within ~2-4 years, and Terabit Switching within 5-8 years,
would enable “Petascale Grids with Terabyte transactions”,
to fully realize the discovery potential of major HENP programs,
as well as other data-intensive research.
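The table rows follow from requiring each transaction to complete within the 800-second window; a minimal check:

# Net throughput needed to move a data subset in a fixed 800 s window.
TRANSACTION_TIME_S = 800

for size_tb in (1, 10, 100):
    gbps = size_tb * 8e12 / TRANSACTION_TIME_S / 1e9
    print(f"{size_tb:>3} TB in {TRANSACTION_TIME_S} s -> {gbps:>5,.0f} Gbps")
# 1 TB -> 10 Gbps; 10 TB -> 100 Gbps; 100 TB -> 1000 Gbps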
SCIC in 2003-2004
http://cern.ch/icfa-scic
 Strong Focus on the Digital Divide Continues
 A Striking Picture Continues to Emerge: Remarkable
Progress in Some Regions, and a Deepening Digital
Divide Among Nations
 Intensive Work in the Field: > 60 Meetings and Workshops:
 E.g., Internet2, TERENA, AMPATH, APAN, CHEP2003, SC2003,
Trieste, Telecom World 2003, SC2003, WSIS/RSIS, GLORIAD
Launch, Digital Divide and HEPGrid Workshop (Feb. 16-20 in
Rio), GNEW2004, GridNets2004, NASA ONT Workshop, … etc.
 3rd Int’l Grid Workshop in Daegu (August 26-28, 2004); Plan for
2nd ICFA Digital Divide and Grid Workshop in Daegu (May 2005)
 HENP increasingly visible to governments; heads of state:
 Through Network advances (records), Grid developments,
Work on the Digital Divide and issues of Global Collaboration
 Also through the World Summit on the Information Society
Process. Next Step is WSIS II in TUNIS November 2005
Coverage
 Now monitoring 650 sites in 115 countries
 In last 9 months added:
 Several sites in Russia (thanks GLORIAD)
 Many hosts in Africa (5 → 36 now; in 27 out of 54 countries)
 Monitoring sites in Pakistan and Brazil (Sao Paolo and Rio)
 Working to install monitoring host in Bangalore, India
[map: monitoring sites and remote sites]
Achieving throughput
 User can’t achieve throughput available (Wizard gap)
 TCP Stack, End-System and/or Local, Regional,
Nat’l Network Issues
 Big step just to know what is achievable
(e.g. 7.5 Gbps over 16 kkm Caltech-CERN)
Latin America: CLARA Network
(2004-2006 EU Project)
 Significant contribution from the
European Commission and Dante
through the ALICE project
 NRENs in 18 LA countries forming
a regional network for
collaboration traffic
 Initial backbone ring bandwidth of
155 Mbps
 Spur links at 10 to 45 Mbps
(Cuba at 4 Mbps by satellite)
 Initial connection to Europe at 622
Mbps from Brazil
 Tijuana (Mexico) PoP soon to be
connected to US through dark fibre
link (CUDI-CENIC)
 Access to US, Canada and Asia-
Pacific Rim
NSF IRNC 2004: Two Proposals to
Connect CLARA to the US (and Europe)
 1st Proposal: FIU and CENIC
 2nd Proposal: Indiana and Internet2
[map: proposed links to the US West Coast and East Coast]
Note: CHEPREO (FIU, UF, FSU, Caltech, UERJ, USP, RNP)
622 Mbps Sao Paolo - Miami Link Started in August
GIGA Project: Experimental Gbps
Network: Sites in Rio and Sao Paolo
Universities: IME, PUC-Rio, UERJ, UFF, UFRJ, Unesp, Unicamp, USP
R&D Centres: CBPF (physics), CPqD (telecom), CPTEC (meteorology),
CTA (aerospace), Fiocruz (health), IMPA (mathematics),
INPE (space sciences), LNCC (HPC), LNLS (physics)
Slide from M. Stanton
[map: about 600 km extension, not to scale - LNCC, CTA, INPE, CPqD, LNLS,
Unicamp, CPTEC, Fapesp, telcos, Unesp, USP-Incor, USP-C.Univ., CBPF,
Fiocruz, IME, IMPA, RNP, PUC-Rio, UERJ, UFRJ, UFF]
Extension of the GIGA Project in Rio and Sao Paolo using 3000 km
of dark fiber, to João Pessoa and Maceió.
“A good and real Advancement for Science in Brazil” - A. Santoro.
“This is wonderful NEWS! Our colleagues from Salvador-Bahia
will be able to start to work with us on CMS.”
Latin America Science Areas Interested
in Improving Connectivity (by Country)
[table: interest matrix - Subjects: Astrophysics, e-VLBI, High Energy
Physics, Geosciences, Marine sciences, Health and Biomedical applications,
Environmental studies; Countries: Argentina, Brazil, Chile, Colombia,
Costa Rica, Ecuador, Mexico]
Networks and Grids: The Potential to Spark
a New Era of Science in Latin America
Trans-Eurasia Information Network
TEIN (2004-2007)
 Circuit between KOREN (Korea) and RENATER (France)
 AP Beneficiaries: China, Indonesia, Malaysia, Philippines,
Thailand, Vietnam (Non-beneficiaries: Brunei, Japan, Korea, Singapore)
 EU partners: NRENs of France, Netherlands, UK
 The scope expanded recently to South-East Asia and China
 Upgraded to 34 Mbps in 11/2003; upgrade to 155 Mbps planned
 12M Euro EU Funds; Coordinating Partner: DANTE
 Direct EU-AP Link; Other Links go
Across the US
AFRICA: Key Trends
M. Jensen and P. Hamilton Infrastructure Report, March 2004
 Growth in traffic and lack of infrastructure: predominance of satellite;
but these satellites are heavily subscribed
 Int’l Links: Only ~1% of traffic on links is for Internet connections;
most Internet traffic (for ~80% of countries) goes via satellite
 Flourishing grey market for Internet & VOIP traffic using VSAT dishes
 Many regional fiber projects in “planning phase” (some languished in
the past); only links from South Africa to Namibia, Botswana done so far
 Int’l fiber project: SAT-3/WASC/SAFE Cable from South Africa to Portugal
along the West Coast of Africa
 Supplied by Alcatel to a worldwide consortium of 35 carriers
 40 Gbps by Mid-2003; heavily subscribed. Ultimate capacity 120 Gbps
 Extension to the interior mostly by satellite: <1 Mbps to ~100 Mbps typical
APAN Recommendations
(July 2004 Meeting in CAIRNS, Au)
Central Issues for APAN this decade
 Stronger linkages between applications and infrastructure
(neither can exist independently); also among APAN members.
 Continuing focus on APAN as an organization that represents
infrastructure interests in Asia
 Closer connection between APAN the infrastructure &
applications organization, and regional political organizations
(e.g. APEC, ASEAN)
New issues demand attention
 Application measurement, particularly end-to-end
network performance measurement (for deterministic
networking)
 Security now a consideration for every application
and every network.
APAN Link Information (2 Of 2)

Countries | Network | Bandwidth (Mbps) | AUP/Remark
(JP)-US-EU | SINET | 155 | R&E / No Transit
JP-US | SINET | 5 Gbps | R&E / No Transit
JP-US | IEEAF | 10 Gbps | R&E wave service
JP-US | IEEAF | 622 | R&E
JP-US | Japan-Hawaii | 155 | R&E
JP-VN | AI3(IOIT) | 1.5/0.5 | R&E
KR-FR | KOREN/RENATER | 34 | Research (TEIN)
KR-SG | APII | 8 | R&E
KR-US | KOREN/KREONet2 | 1.2 Gbps | R&E
LK-JP | LEARN | 2.5 | R&E
MY-SG | NRG/SICU | 2 | Experiment (Down)
SG-US | SingaREN | 90 | R&E
TH-US | Uninet | 155 | R&E
TW-HK | ASNET/TANET/TWAREN | 622 | R&E
TW-JP | ASNET/TANET | 622 | R&E
TW-SG | ASNET/SingAREN | 155 | R&E
TW-US | ASNET/TANET/TWAREN | 6.6 Gbps | R&E
(TW)-US-NL | ASNET/TANET/TWAREN | 2.5 Gbps | R&E

2004.7.7 [email protected]
APAN China Consortium
Established in 1999. The China Education and Research
Network (CERNET), the Natural Science Foundation of China
Network (NSFCNET) and the China Science and Technology
Network (CSTNET) are the three main advanced networks.
[diagram: CERNet - NSFCnet, 2.5 Gbps. Tsinghua = Tsinghua University;
PKU = Peking University; NSFC = Natural Science Foundation of China;
CAS = China Academy of Sciences; BUPT = Beijing Univ. of Posts and
Telecom.; BUAA = Beijing Univ. of Aero- and Astronautics]
GLORIAD and HENP Example:
Network Needs of IHEP Beijing
 ICFA SCIC Report: Appendix 18, on Network Needs
for HEP in China (See http://cern.ch/icfa-scic)
 “IHEP is working with the Computer Network Information Center
(CNIC) and other universities and institutes to build Grid
applications for the experiments. The computing resources and
storage management systems are being built or upgraded in the
Institute. IHEP has a 100 Mbps link to CNIC, so it is quite easy to
connect to GLORIAD and the link could be upgraded as needed.”
 Prospective Network Needs for IHEP Beijing:

Experiment | Year 2004-2005 | Year 2006 and on
LHC/LCG | 622 Mbps | 2.5 Gbps
BES | 100 Mbps | 155 Mbps
YICRO | 100 Mbps | 100 Mbps
AMS | 100 Mbps | 100 Mbps
Others | 100 Mbps | 100 Mbps
Total (Sharing) | 1 Gbps | 2.5 Gbps
World Summit on the Information Society
(WSIS): Geneva 12/2003 and Tunis in 2005
 The UN General Assembly adopted in 2001 a resolution
endorsing the organization of the World Summit on the
Information Society (WSIS), under UN Secretary-General,
Kofi Annan, with the ITU and host governments taking
the lead role in its preparation.
 GOAL: To Create an Information Society:
A Common Definition was adopted
in the “Tokyo Declaration” of January 2003:
“… One in which highly developed ICT networks, equitable
and ubiquitous access to information, appropriate content
in accessible formats and effective communication can
help people achieve their potential”
 Kofi Annan Challenged the Scientific Community to Help (3/03)
 CERN and ICFA SCIC have been quite active in the WSIS in
Geneva (12/2003)
The Open Science Grid
http://www.opensciencegrid.org
The Open Science Grid will
 Build on the experience of Grid2003, as a persistent,
production-quality Grid of national and international scope
 Ensure that the U.S. plays a leading role in defining and
operating the global grid infrastructure needed for large-
scale collaborative and international scientific research.
 Combine computing resources at several DOE labs and at
dozens of universities to effectively become a single national
computing infrastructure for science, the Open Science Grid.
 Provide opportunities for educators and students to
participate in building and exploiting this grid infrastructure,
and opportunities for developing and training a scientific and
technical workforce. This has the potential to transform the
integration of education and research at all levels.
The Move to OGSA and then
Managed Integrated Systems
[diagram: evolution over time, toward increased functionality and
standardization - Custom solutions (X.509, LDAP, FTP, …) →
Globus Toolkit (defacto standards; GGF: GridFTP, GSI) →
Open Grid Services Arch / Web Services Resrc Framwk
(Web services + …; stateful, managed; GGF: OGSI, …; + OASIS, W3C;
multiple implementations, including Globus Toolkit) →
App-specific Services / ~Integrated Systems]