Networks, Grids and the Digital Divide in
HEP and Global e-Science
Harvey B. Newman
ICFA Workshop on HEP Networking, Grids,
and Digital Divide Issues for Global e-Science
Daegu, May 23 2005
Large Hadron Collider
CERN, Geneva: 2007 Start
 pp √s = 14 TeV, L = 10^34 cm^-2 s^-1
 27 km Tunnel in Switzerland & France
 Experiments: ATLAS and CMS (+TOTEM): pp, general purpose; HI;
ALICE: HI; LHCb: B-physics
 5000+ Physicists, 250+ Institutes, 60+ Countries
 Higgs, SUSY, Extra Dimensions, CP Violation, QG Plasma, … the Unexpected
LHC Data Grid Hierarchy: Developed at Caltech
CERN/Outside Resource Ratio ~1:2; Tier0/(Σ Tier1)/(Σ Tier2) ~1:1:1
[Diagram: Online System at the Experiment → ~PByte/sec raw → CERN Center
(Tier 0 +1: PBs of disk; tape robot) recording ~150-1500 MBytes/sec;
Tier 0 +1 → Tier 1 centers (IN2P3, INFN, RAL, FNAL) at 10-40 Gbps;
Tier 1 → Tier 2 centers at ~10 Gbps;
Tier 2 → Tier 3 (institutes, with physics data caches) at ~1-10 Gbps;
Tier 3 → Tier 4 (workstations) at 1 to 10 Gbps]
Tens of Petabytes by 2007-8, at ~100 Sites.
An Exabyte ~5-7 Years later.
Emerging Vision: A Richly Structured, Global Dynamic System
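As a rough cross-check of the volumes quoted above (our own back-of-envelope sketch; the ~10^7 live seconds per year and the replica factor are illustrative assumptions, not figures from the slide):

```python
# Rough data-volume estimate for the LHC Grid hierarchy (illustrative
# assumptions: ~1e7 live seconds/year, ~3 full replicas across the tiers).
RECORD_RATE_MB_S = 1500          # upper end of the 150-1500 MBytes/sec to Tier 0
LIVE_SECONDS_PER_YEAR = 1.0e7    # assumed accelerator live time per year

raw_pb_per_year = RECORD_RATE_MB_S * LIVE_SECONDS_PER_YEAR / 1e9  # MB -> PB
print(f"Tier 0 archive: ~{raw_pb_per_year:.0f} PB/year")          # ~15 PB/year

replicas = 3                     # assumed copies spread over Tier 1/2 sites
total = raw_pb_per_year * replicas
print(f"With replicas: ~{total:.0f} PB/year -> tens of PB by 2007-8")
```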
ICFA and Global Networks
for Collaborative Science
 National and International Networks, with rapidly
increasing capacity and end-to-end capability
are essential, for
 The daily conduct of collaborative work in both
experiment and theory
 Experiment development & construction
on a global scale
 Grid systems supporting analysis involving
physicists in all world regions
 The conception, design and implementation of
next generation facilities as “global networks”
 “Collaborations on this scale would never have been
attempted, if they could not rely on excellent networks”
Challenges of Next Generation
Science in the Information Age
Petabytes of complex data explored and analyzed by 100s-1000s
of globally dispersed scientists, in 10s-100s of teams
 Flagship Applications
 High Energy & Nuclear Physics, AstroPhysics Sky Surveys:
TByte to PByte “block” transfers at 1-10+ Gbps
 Fusion Energy: Time Critical Burst-Data Distribution;
Distributed Plasma Simulations, Visualization, Analysis
 eVLBI: Many (quasi-) real time data streams at 1-10 Gbps
 BioInformatics, Clinical Imaging: GByte images on demand
 Advanced integrated Grid applications rely on reliable,
high performance operation of our LANs and WANs
 Analysis Challenge: Provide results with rapid turnaround,
over networks of varying capability in different world regions
Huygens Space Probe Lands on Titan: Monitored by 17 Telescopes in Au, Jp, CN, US
 In October 1997, the Cassini spacecraft left Earth to
travel to Saturn
 On Christmas day 2004, the Huygens probe separated
from Cassini
 On 14 January 2005 it started its descent through the
dense (methane, nitrogen) atmosphere of Titan
(speculated to be similar to that of Earth
billions of years ago)

 The signals sent back from Huygens to Cassini were
monitored by 17 telescopes in Australia, China, Japan
and the US to accurately position the probe to within a
kilometre (Titan is ~1.5 billion kilometres from Earth)
Courtesy G. McLaughlin
Australian eVLBI data sent over high
speed links to the Netherlands
 The data from two of the Australian telescopes were
transferred to the Netherlands over the SXTransport and
IEEAF links, and CA*net4 using UCLP, and were the first to
be received by JIVE (Joint Institute for VLBI in Europe), the
correlator site
 The data were transferred at an average rate of 400 Mbps
(note: 1 Gbps was available)
 The data from these two telescopes were reformatted and
correlated within hours of the end of the landing
 This early correlation allowed calibration of the data
processor at JIVE, ready for the data from other telescopes
to be added
 Significant int’l collaborative effort: 9 Organizations
G. McLaughlin, D. Riley
ICFA Standing Committee on
Interregional Connectivity (SCIC)
 Created in July 1998 in Vancouver, following the ICFA-NTF
CHARGE:
 Make recommendations to ICFA concerning the connectivity
between the Americas, Asia and Europe
 As part of the process of developing these
recommendations, the committee should
 Monitor traffic on the world’s networks
 Keep track of technology developments
 Periodically review forecasts of future
bandwidth needs, and
 Provide early warning of potential problems
 Create subcommittees as needed to meet the charge
 Representatives: Major labs, ECFA, ACFA, North and
South American Users
 Chair of the committee reports to ICFA twice per year
SCIC in 2004-2005
http://cern.ch/icfa-scic
Three 2005 Reports, Presented to ICFA Today
 Main Report: “Networking for HENP” [H. Newman et al.]
Includes Updates on the Digital Divide, World
Network Status; Brief updates on Monitoring and
Advanced Technologies [*]
18 Appendices: A World Network Overview
Status and Plans for the Next Few Years of Nat’l &
Regional Networks, and Optical Network Initiatives
 Monitoring Working Group Report
[L. Cottrell]
Also See:
 SCIC Digital Divide Report
[A. Santoro et al.]
 SCIC 2004 Digital Divide in Russia Report
[V. Ilyin]
 TERENA (www.terena.nl) 2004 Compendium
SCIC Main Conclusion for 2002-5
 The disparity among regions in HENP could increase
even more sharply, as we learn to use advanced
networks effectively, and we develop dynamic Grid
systems in the “most favored” regions
We must therefore take action, and
work to Close the Digital Divide
 To make Scientists in All World Regions Full
Partners in the Process of Frontier Discoveries
 This is essential for the health of our global
experimental collaborations, for our field,
and for international collaboration in many
fields of science.
HEPGRID and Digital Divide Workshop
UERJ, Rio de Janeiro, Feb. 16-20 2004
NEWS: Bulletins ONE, TWO; WELCOME BULLETIN
[Workshop web site: General Information; Registration; Travel Information;
Hotel Registration; Participant List; How to Get to UERJ/Hotel; Computer
Accounts; Phone Numbers; Program; Useful Technologies; Contact us:
Secretariat; Chairmen]
Tutorials: C++; Grid Technologies; Grid-Enabled Analysis; Networks;
Collaborative Systems
Theme: Global Collaborations, Grids and Their
Relationship to the Digital Divide
For the past three years the SCIC has focused on
understanding and seeking the means of reducing or
eliminating the Digital Divide. It proposed to ICFA
that these issues, as they affect our field, be brought
to our community for discussion. This led to ICFA’s
approval, in July 2003, of the Digital Divide and HEP
Grid Workshop.
 Review of R&E Networks; Major Grid Projects
 Perspectives on Digital Divide Issues by Major
HEP Experiments, Regional Representatives
 Focus on Digital Divide Issues in Latin
America; Relate to Problems in Other Regions
See http://www.lishep.uerj.br
SPONSORS: CLAF, CNPQ, FAPERJ, UERJ
A. Santoro
International ICFA Workshop on
HEP Networking, Grids, and Digital
Divide Issues for Global e-Science
May 23-27, 2005
Daegu, Korea
Dongchul Son
Center for High Energy Physics
Harvey Newman
California Institute of Technology
 Focus on Asia-Pacific
 Also Latin America, Middle East,
Africa
Approved by ICFA
August 2004
International ICFA Workshop on HEP Networking, Grids
and Digital Divide Issues for Global e-Science
 Workshop Goals
Review the current status, progress and barriers to
effective use of major national, continental and
transoceanic networks
Review progress, strengthen opportunities for collaboration,
and explore the means to deal with key issues in Grid
computing and Grid-enabled data analysis, for high energy
physics and other fields of data intensive science
Exchange information and ideas, and formulate plans to
develop solutions to specific problems related to the Digital
Divide, with a focus on the Asia Pacific region, as well as
Latin America, Russia and Africa
Continue to advance a broad program of work on reducing
or eliminating the Digital Divide, and ensuring global
collaboration, as related to all of the above aspects.
고에너지물리연구센터
CENTER FOR HIGH ENERGY PHYSICS
PingER: World View from SLAC, CERN
 C. Asia, Russia, SE Europe, L. America, M. East, China: 4-7 yrs behind
 India, Africa: 7-8 yrs behind
 S.E. Europe, Russia: Catching up
 Latin Am., China: Keeping up
 India, Mid-East, Africa: Falling Behind
R. Cottrell
Connectivity to Africa
 Internet Access: More than an order of magnitude lower than the
corresponding percentages in Europe (33%) and N. America (70%).

INTERNET USERS AND POPULATION STATISTICS FOR AFRICA
                   Population     Pct. of   Internet   Usage Growth   Penetration:    Users: Pct.
                   (2004 Est.)    World     Users      (2000-4)       Pct. of Pop.    of World
TOTAL for AFRICA   893 M          14.0 %    12.9 M     186.6 %        1.4 %           1.6 %
REST of WORLD      5,497 M        86.0 %    800 M      124.4 %        14.6 %          98.4 %

 Digital Divide: Lack of infrastructure, especially in the interior; high prices
(e.g. $4-10/kbps/mo.); “gray” satellite bandwidth market
 Initiatives: EUMEDCONNECT (EU-North Africa); GEANT: 155 Mbps to
S. Africa; Nectarnet (Ga. Tech); IEEAF/I2 NSF-Sponsored Initiative
Bandwidth prices in Africa vary dramatically, and are in general
many times what they could be if universities purchased in volume.

Sample Bandwidth Costs for African Universities ($/kbps/month):
  Nigeria        $20.00
  Average        $11.03
  Uganda         $ 9.84
  Ghana          $ 6.77
  IBAUD Target   $ 3.00
  USA            $ 0.27

Avg. Unit Cost is 40X the US Avg.; costs are several hundred times
those in the leading countries.
Sample size: 26 universities. Average cost is for VSAT service;
quality, CIR, Rx, Tx not distinguished.
Roy Steiner, Internet2 2004 Workshop
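A quick check (our own arithmetic, not from the slide) of the "40X" figure above, and of what the gap means for a modest institutional link:

```python
# Check of the cost ratio and what it implies for an assumed 1 Mbps link.
AFRICA_AVG = 11.03   # $/kbps/month (26-university sample above)
US_AVG     = 0.27    # $/kbps/month

print(f"Ratio: {AFRICA_AVG / US_AVG:.0f}x the US average")   # ~41x

link_kbps = 1000     # an assumed 1 Mbps institutional link
annual_africa = AFRICA_AVG * link_kbps * 12
annual_us     = US_AVG     * link_kbps * 12
print(f"1 Mbps for a year: ${annual_africa:,.0f} vs ${annual_us:,.0f} in the US")
```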
Asia Pacific Academic Network Connectivity
APAN Status 7/2004
[Map: international R&E link capacities, current status and 2004 plan, with
access and exchange points, among JP, KR, CN, TW, HK, IN, TH, LK, PH, VN, MY,
SG, ID, AU, RU, Europe and the US; speeds range from ~1.5-2 Mbps (e.g. PH, MY,
SG) through 45-932 Mbps regional links to multi-Gbps trunks (CN 2G; JP-US 9.1G
and 20.9G)]
Connectivity to US from JP, KO, AU is Advancing Rapidly.
Progress in the Region, and to Europe, is Much Slower.
D. Y. Kim
Better North/South Linkages within Asia Needed
JP-TH link: 2 Mbps → 45 Mbps in 2004.
Some APAN Links
Countries   Network             Bandwidth (Mbps)       AUP/Remark
ID-ID       AI3 (ITB-UNIBRAW)   0.128/0.128            R&E + Commodity
IN-US/UK    ERNET               70                     R&E
JP-CN       NICT-CERNET         1 Gbps                 R&E
JP-CN       NICT-CSTNET         1 Gbps                 R&E
JP-ID       AI3 (ITB)           0.5/1.5 (From/To JP)   R&E + Commodity
JP-KR       APII                2 Gbps                 R&E
JP-LA       AI3 (NUOL)          0.128                  R&E + Commodity
JP-LK       AI3 (IOIT)          0.5/0.5 (From/To JP)   R&E + Commodity
JP-MY       AI3 (USM)           0.5/0.5 (From/To JP)   R&E + Commodity
JP-PH       AI3 (ASTI)          0.5/0.5 (From/To JP)   R&E + Commodity
JP-PH       MAFFIN (ASTI)       6                      R&E
JP-SG       AI3 (TP)            0.5/0.5 (From/To JP)   R&E + Commodity
JP-TH       AI3 (AIT)           0.5/1.5 (From/To JP)   R&E + Commodity
JP-TH       SINET (ThaiSarn)    45                     R&E
JP-US       IEEAF               10 Gbps + 622          R&E
JP-US       JGN2                10 Gbps                R&E
JP-US       SINET               10 Gbps                R&E
G. McLaughlin
Digital Divide Illustrated by Network
Infrastructures: TERENA NREN Core Capacity
[Chart: current core capacity and expected increase within two years
(0-20 Gbps scale) for ~30 European and Mediterranean NRENs, from the United
Kingdom, Sweden, Poland, the Netherlands, Germany, Belgium, Spain, Norway,
Italy, Hungary, Greece, France, Finland, the Czech Republic, Portugal,
Switzerland, Slovakia, Ireland, Iceland, Denmark, Austria, Lithuania,
Slovenia, Malta, Latvia, Estonia and Cyprus down to Serbia/Montenegro,
Turkey, Romania, Croatia, Algeria, Bulgaria, Iran, Morocco, Jordan,
Azerbaijan, Moldova and Ukraine]
Core capacity goes up in Large Steps: 10 to 20 Gbps;
2.5 to 10 Gbps; 0.6-1 to 2.5 Gbps
SE Europe, Medit., FSU, Middle East: Less Progress, Based on Older
Technologies (Below 0.15 or 1.0 Gbps): Digital Divide Will Not Be Closed
Source: TERENA
Long Term Trends in Network
Traffic Volumes: 300-1000X/10Yrs
ESnet Accepted Traffic 1990 – 2004:
Exponential Growth Since ’92; Annual Rate Increased
from 1.7X to 2.0X Per Year In the Last 5 Years
[Chart: ESnet monthly accepted traffic, Jan. 1994 through Nov. 2004, rising
in steps from near zero to ~350-400 TByte/month, against 10 Gbit/s capacity]
W. Johnston
 FNAL: 10 to 20 (+40) Gbps by Fall 2005
 SLAC: Traffic ~400 Mbps; Growth in Steps (ESnet Limit):
~10X/4 Years (L. Cottrell)
 Projected: ~2 Terabits/s by ~2014
 July 2005: 2x10 Gbps links: one for production and one for research
CANARIE (Canada) Utilization Trends and UCLPv2
[Chart: traffic June 1998 – June 2004, 0-30 Gbps scale: lightpaths, IP peak
and IP average, approaching the network capacity limit]
W. St. Arnaud:
 “Demand for Customer Empowered Nets (CENs) is exceeding our wildest
expectations.
 New version of UCLP will allow easier integration of CENs into E2E nets
for specific communities and/or disciplines.
 UCLPv2 will be based on SOA, web services and workflow, to allow easy
integration into cyber-infrastructure projects.
 The Network is no longer a static fixed facility – but can be ‘orchestrated’
with different topologies, routing etc. to meet specific needs of end users.”
Transition beginning now to optical, multi-wavelength, community-owned or
leased “dark fiber” networks for R&E
National Lambda Rail (NLR): www.nlr.net
 NLR: Initially 4-8 10G Wavelengths; To 40 10G Waves in Future
 UltraLight, Internet2 HOPI, Cisco Research & UltraScience Net
Initiatives w/HEP
 Atlantic & Pacific Wave
Initiatives in: nl, ca, jp, uk, kr; pl, cz, sk, pt, ei, gr, sb/mn
… + 30 US States (Ca, Il, Fl, In, …)
GEANT2 Hybrid Architecture
Global Connectivity:
 10 Gbps + 3x2.5 Gbps to North America
 2.5 Gbps to Japan
 622 Mbps to South America
 45 Mbps to Mediterranean countries
 155 Mbps to South Africa
Cooperation of 26 NRENs
Implementation on dark fiber (IRU asset); transmission & switching
equipment will be improved
 Layer 1 & 2 switching, “the Light Path” in GEANT2
 Point to Point (E2E) Wavelength services
 LHC in Europe: N X 10G T0-T1 Overlay Net
H. Doebbling
SXTransport: Au-US 2 X 10G
AARNet has dual 10Gbps circuits to the US
via Hawaii, dual 622Mbps commodity links
G. McLaughlin
JGN2: Japan Gigabit Network (4/04 – 3/08)
20 Gbps Backbone, 6 Optical Cross-Connects
[Map: 20 Gbps / 10 Gbps / 1 Gbps backbone links among core network nodes
(Sapporo, Sendai, NICT Koganei Headquarters, Otemachi, Nagano, Kanazawa,
Osaka, Okayama, Kochi, Fukuoka, Okinawa) and a link to the USA; <10G>, <1G>
and <100M> access points at universities, prefectural centers and NICT
research facilities across Japan, e.g. Kyushu University, Kyoto University,
Osaka University, Tokyo University, Tohoku University, Nagoya University,
Hiroshima University, NICT Tsukuba Research Center and the NICT IT Open
Laboratories; optical testbeds at the optical level (1 GbE and 10 GbE),
e.g. for GMPLS interoperability tests.
IX: Internet eXchange; AP: Access Point]
Y. Karita
APAN-KR : KREONET/KREONet2 II
KREONET
 11 Regions, 12 POP Centers
 Optical 2.5-10G Backbone;
SONET/SDH, POS, ATM
 National IX Connection
D. Son
SuperSIREN (7 Res. Institutes)
 Optical 10-40G Backbone
 Collaborative Environment
Support
 High Speed Wireless: 1.25 G
KREONET2
 Support for Next Gen. Apps:
 IPv6, QoS, Multicast;
Bandwidth Alloc. Services
 StarLight/Abilene Connection
International Links
 GLORIAD Link to Seattle: to 10G
from Aug. 1 (MOST)
 US: 2 X 622 Mbps via
CA*Net; GbE via TransPAC
 Japan: 2 Gbps
 TEIN to GEANT: 155 Mbps
The Global Lambda Integrated Facility for
Research and Education (GLIF)
 Architecting an international LambdaGrid infrastructure
 Virtual organization supports persistent data-intensive scientific
research and middleware development on “LambdaGrids”
Many 2.5 - 10G Links Across the Atlantic and Pacific
Peerings: Pacific & Atlantic Wave; Seattle, LA, Chicago, NYC, HK
Internet2 Land Speed Record (LSR)
 LSR: Product of transfer speed and distance using standard
Internet (TCP/IP) protocols
[Chart: Internet2 LSR marks in petabit-m/sec, Apr. 2002 – Nov. 2004
(single streams of 0.4, 0.9, 2.5, 4.2, 5.4, 5.6 and 6.6 Gbps over
7,067-16,500 km paths); Blue = HEP]
 Nov. 2004 Record, Single IPv4 TCP stream: 7.21 Gbps over 20,675 km
(7.2 Gbps X 20.7 kkm)
 Single Stream 7.5 Gbps X 16 kkm with Linux: July 2004
 IPv4 Multi-stream record with FAST TCP: 6.86 Gbps X 27 kkm: Nov. 2004
 IPv6 record: 5.11 Gbps between Geneva and Starlight: Jan. 2005
 Concentrate now on reliable Terabyte-scale file transfers
 Disk-to-disk Marks: 536 MBytes/sec (Windows); 500 MBytes/sec (Linux)
Note System Issues: PCI-X Bus, Network Interface, Disk I/O Controllers,
CPU, Drivers
NB: Computing Manufacturers’ Roadmaps for 2006: One Server Pair to One 10G Link
S. Ravot
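The LSR metric is straightforward to verify; the sketch below also computes the bandwidth-delay product, which is what makes single-stream TCP at these speeds hard. The ~0.2 s round-trip time assumed for a 20,675 km path is our own illustration, not a figure from the slide:

```python
# Internet2 LSR metric: throughput x distance, plus the TCP window such
# a path needs (bandwidth-delay product).
C = 3.0e8                      # speed of light in vacuum, m/s

rate_bps = 7.21e9              # Nov. 2004 single-stream IPv4 record
dist_m   = 20675e3             # 20,675 km path

lsr = rate_bps * dist_m        # bit-meters/second
print(f"LSR: {lsr / 1e15:.0f} petabit-m/s")          # ~149

# Assumed RTT: light in fiber travels at roughly 2/3 c, so a round trip
# is about 3 * dist / C ~ 0.2 s for this path.
rtt = 3 * dist_m / C
bdp_bytes = rate_bps * rtt / 8
print(f"TCP window needed: ~{bdp_bytes / 1e6:.0f} MBytes")   # ~190 MB
```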
SC2004 Bandwidth Record by HEP: High
Speed TeraByte Transfers for Physics
 Caltech, CERN, SLAC, FNAL, UFl, FIU, ESnet, UK, Brazil, Korea;
NLR, Abilene, LHCNet, TeraGrid; DOE, NSF, EU, …; Cisco, Neterion,
HP, NewiSys, …
 Ten 10G Waves, 80 10GbE Ports, 50 10GbE NICs
 Aggregate Rate of 101 Gbps
 1.6 Gbps to/from Korea
 2.93 Gbps to/from Brazil (UERJ, USP)
 Monitoring: NLR, Abilene, LHCNet, SCInet, UERJ, USP, Int’l R&E Nets
and 9000+ Grid Nodes Simultaneously
I. Legrand
SC2004 KNU Traffic: 1.6 Gbps to/From
Pittsburgh Via Transpac (LA) and NLR
Monitoring in Daegu
Courtesy K. Kwon
SC2004: 2.93 (1.95 + 0.98) Gbps
Sao Paulo – Miami – Pittsburgh (Via Abilene)
Brazilian T2+T3 HEPGrid: Rio + Sao Paulo
Also 500 Mbps Via RedCLARA, GEANT (Madrid) and SURFNet
J. Ibarra
HENP Bandwidth Roadmap for Major Links (in Gbps)
Year   Production             Experimental            Remarks
2001   0.155                  0.622-2.5               SONET/SDH
2002   0.622                  2.5                     SONET/SDH; DWDM; GigE Integ.
2003   2.5                    10                      DWDM; 1 + 10 GigE Integration
2005   10                     2-4 X 10                λ Switch; λ Provisioning
2007   2-4 X 10               ~10 X 10; 40 Gbps       1st Gen. λ Grids
2009   ~10 X 10 or 1-2 X 40   ~5 X 40 or ~20-50 X 10  40 Gbps λ Switching
2011   ~5 X 40 or ~20 X 10    ~25 X 40 or ~100 X 10   2nd Gen. λ Grids; Terabit Networks
2013   ~Terabit               ~MultiTbps              ~Fill One Fiber
Continuing Trend: ~1000 Times Bandwidth Growth Per Decade;
HEP: Co-Developer as well as Application Driver of Global Nets
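A numerical aside (our own arithmetic, not from the slide): a factor of 1000 per decade is roughly a doubling every year, which can be compared against the production column above:

```python
# 1000x per decade expressed as an annual growth factor.
annual = 1000 ** (1 / 10)
print(f"Annual growth factor: {annual:.2f}x")   # ~2.0x per year

# Compare with the roadmap's production column: 0.155 Gbps (2001) -> 10 Gbps (2005),
# i.e. faster than the decade trend in the early years.
print(f"2001-2005 factor: {10 / 0.155:.0f}x in 4 years "
      f"(vs {annual**4:.0f}x at the decade-average trend)")
```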
Evolving Quantitative Science Requirements for
Networks (DOE High Perf. Network Workshop)
Science Areas      Today End2End       5 years End2End      5-10 Years End2End    Remarks
                   Throughput          Throughput           Throughput
High Energy        0.5 Gb/s            100 Gb/s             1000 Gb/s             High bulk
Physics                                                                           throughput
Climate (Data &    0.5 Gb/s            160-200 Gb/s         N x 1000 Gb/s         High bulk
Computation)                                                                      throughput
SNS NanoScience    Not yet started     1 Gb/s               1000 Gb/s + QoS       Remote control and
                                                            for Control Channel   time critical throughput
Fusion Energy      0.066 Gb/s          0.198 Gb/s           N x 1000 Gb/s         Time critical
                   (500 MB/s burst)    (500 MB/20 s burst)                        throughput
Astrophysics       0.013 Gb/s          N*N multicast        1000 Gb/s             Computat’l steering
                   (1 TByte/week)                                                 and collaborations
Genomics Data &    0.091 Gb/s          100s of users        1000 Gb/s + QoS       High throughput
Computation        (1 TBy/day)                              for Control Channel   and steering
See http://www.doecollaboratory.org/meetings/hpnpw/
W. Johnston
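The table mixes volume, burst and rate units; a small conversion sketch (our own check, not from the slide) reproduces the derived Gb/s entries:

```python
# Convert the table's burst/volume figures to average Gb/s.
def gbps(bytes_, seconds):
    return bytes_ * 8 / seconds / 1e9

week, day = 7 * 86400, 86400
print(f"1 TByte/week  = {gbps(1e12, week):.3f} Gb/s")   # ~0.013 (Astrophysics)
print(f"1 TByte/day   = {gbps(1e12, day):.3f} Gb/s")    # ~0.093 (Genomics: table lists 0.091)
print(f"500 MB/20 sec = {gbps(500e6, 20):.3f} Gb/s")    # ~0.200 (Fusion, 5-year column)
```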
LHCNet, ESnet Plan 2007/2008:
40 Gbps US-CERN, ESnet MANs, IRNC
[Map: ESnet hubs and new hubs (SEA, SNV, CHI, NYC, DEN, DC, SDG, ALB, ATL,
ELP), metropolitan area rings serving BNL and FNAL, the ESnet IP Core
(≥10 Gbps) plus a 2nd Science Data Network core at 30-50 Gbps, and links to
AsiaPac, Japan, Australia and Europe (GEANT2, SURFNet, IN2P3, CERN)]
LHCNet US-CERN: 9/05: 10G CHI + 10G NYC; 2007: 20G + 20G; 2009: ~40G + 40G
Legend: ESnet hubs; new ESnet hubs; Metropolitan Area Rings; major DOE Office
of Science sites; high-speed cross connects with Internet2/Abilene;
production IP ESnet core, 10 Gbps enterprise IP traffic; Science Data Network
core, 40-60 Gbps circuit transport; lab-supplied and major international
links; LHCNet Data Network (4 x 10 Gbps US-CERN); NSF/IRNC circuit; GVA-AMS
connection via SURFNet or GEANT2
S. Ravot
We Need to Work on the Digital Divide
from Several Perspectives
 Workshops and Tutorials/Training Sessions
 For Example: ICFA DD Workshops, Rio 2/04; HONET (Pakistan) 12/04;
Daegu May 2005
 Share Information: Monitoring, BW Progress; Dark Fiber Projects;
Prices in different markets
 Use Model Cases: Poland, Slovakia, Czech Rep., China, Brazil, …
 Encourage, and Work on, Inter-Regional Projects
 GLORIAD, Russia-China-Korea-US Optical Ring
 Latin America: CHEPREO/WHREN (US-Brazil); RedCLARA
 Help with Modernizing the Infrastructure:
Design, Commissioning, Development
 Provide Tools for Effective Use: Monitoring, Collaboration
Systems; Advanced TCP Stacks, Grid System Software
 Work on Policies and/or Pricing: pk, br, cn, SE Europe, in, …
 Encourage Access to Dark Fiber
 Raise World Awareness of Problems, and of Opportunities for Solutions
UERJ T2 HEPGRID Inauguration:
Dec. 2004: The Team (Santoro et al.)
100 Dual
Nodes;
Upgrades
Planned
Also Tier3 in
Sao Paulo
(Novaes)
UERJ Tier2 Now On Grid3 and
Open Science Grid (5/13)
Grid3, the Open Science Grid
and DISUN
Grid3: A National
Grid Infrastructure
 35 sites, 3500
CPUs: Univ. +
4 Nat’l labs
 Part of LHC Grid
 Running since
October 2003
 HEP, LIGO,
SDSS, Biology,
Computer Sci.
+Brazil
(UERJ,
USP)
P. Avery
Transition to Open Science Grid (www.opensciencegrid.org)
7 US CMS Tier2s; Caltech, Florida, UCSD, UWisc Form DISUN
Science-Driven: HEPGRID (CMS) in Brazil
HEPGRID-CMS/BRAZIL is a project to build a Grid that:
 At the Regional Level will include CBPF, UFRJ, UFRGS, UFBA, UERJ & UNESP
 At the International Level will be integrated with the CMS Grid based at
CERN; focal points include Grid3/OSG and bilateral projects with the
Caltech Group
[Diagram: Brazilian HEPGRID. CERN (T0+T1, with on-line systems) linked at
2.5-10 Gbps to T1s in France, Germany, Italy and the USA; 622 Mbps to BRAZIL;
UERJ Regional Tier2 Center (T2→T1, 100→500 nodes), plus T2s to 100 nodes:
UNESP/USP (SPRACE-Working) and T3/T2 at UFRGS; gigabit links to UERJ, CBPF,
UFBA, UFRJ; T4: individual machines]
ICFA DD Workshop 2/04; T2 Inauguration + GIGA/RNP Agreement 12/04
Rio Tier2 – SPRACE (Sao Paulo): AMPATH Direct Link at 1 Gbps
[Diagram: T2 Rio (UERJ) and SPRACE (CC-USP) connected by GIGA fiber at
1 Gbps via a Giga router, Jump CC and the NAP of Brazil (Terremark), with
Eletropaulo fiber leased to ANSP, ANSP routers and Caltech/Cisco routers,
to AMPATH]
L. Lopez
Highest Bandwidth Link in NREN’s Infrastructure,
EU & EFTA Countries, & Dark Fiber
[Chart: highest link speed per NREN, 10 Mbps – 10 Gbps log scale:
Austria (ACOnet), Belgium (BELNET), Cyprus (CYNET), Czech Republic (CESNET),
Denmark (UNI.C), Estonia (EENET), Finland (FUNET), France (RENATER),
Germany (DFN), Greece (GRNET), Hungary (HUNGARNET), Iceland (RHnet),
Ireland (HEAnet), Italy (GARR), Latvia (LANET, LANET-2), Lithuania (LITNET),
Luxembourg (RESTENA), Netherlands (SURFnet), Norway (UNINETT),
Poland (PIONIER), Portugal (FCCN), Slovenia (ARNES), Slovakia (SANET),
Spain (RedIRIS), Sweden (SUNET), Switzerland (SWITCH).
Legend: has dark fiber / will have dark fiber / no dark fiber]
 Owning (or leasing) dark fiber is an interesting option
for an NREN; Depends on the national situation.
 NRENs that own dark fiber can decide for themselves
which technology and what speeds to use on it
Source:
TERENA
Europe: Core Network Bandwidth Increase
for Years 2001-2004 and 2004-2006
[Chart: increase factor (log scale, 1-1000) for 2001-2004 and expected
2004-2006 per NREN: Czech Republic (CESNET), Latvia (LATNET),
Finland (FUNET), France (RENATER), Norway (UNINETT), Denmark (UNI.C),
Slovenia (ARNES), Switzerland (SWITCH), Estonia (EENET),
Netherlands (SURFnet), United Kingdom (UKERNA), Greece (GRNET),
Austria (ACOnet), Ireland (HEAnet), Portugal (FCCN), Hungary (HUNGARNET),
Spain (RedIRIS), Germany (DFN), Sweden (SUNET), Lithuania (LITNET),
Poland (PIONIER), Slovakia (SANET)]
 Countries With No Increase Already Had 1-2.5G Backbone in 2001
 These are all going to 10G backbones by 2006-7
 Countries Showing the Largest Increase Are:
 PIONIER (Poland) from 155 Mbps to 10 Gbps capacity (64X)
 SANET (Slovakia) from 4 Mbps to 1 Gbps (250X).
Source:
TERENA
Dark Fiber in Eastern Europe: SANET (Slovakia)
 1660 km of Dark Fiber CWDM Links, 1 to 4 Gbps (GbE)
 August 2002: Dark Fiber Link to Austria
 April 2003: Dark Fiber Link to Czech Republic
 2004: Dark Fiber Link to Poland
 120 km CBDF (cross-border dark fiber): Cost ~4 k€ Per Month;
1 GE Now, 10G Planned
 > 250X Capacity Growth: 2002-2005; Planning 10 Gbps Backbone
T. Weis
Poland: PIONIER (10-20G) Network
 2763 km Lit Fiber, Connects 22 MANs; +1286 km (9/05), +1159 km (4Q/06)
[Map: 10 Gb/s (2 lambdas), 10 Gb/s and 1 Gb/s links among Metropolitan Area
Networks: GDAŃSK, KOSZALIN, SZCZECIN, OLSZTYN, BIAŁYSTOK, BYDGOSZCZ, TORUŃ,
POZNAŃ, ZIELONA GÓRA, WARSZAWA, ŁÓDŹ, WROCŁAW, CZĘSTOCHOWA, OPOLE, RADOM,
KIELCE, PUŁAWY, LUBLIN, KATOWICE, KRAKÓW, RZESZÓW, BIELSKO-BIAŁA; 34 Mb/s to
BASNET; single GÉANT PoP in Poznan; CESNET and SANET links planned for 4Q05;
PIONIER cross-border dark fiber plan locations]
Vision: Support –
 Add’l Fibers for e-Regional Initiatives
 Computational Grids; Domain-Specific Grids
 Digital Libraries
 Interactive TV
Courtesy M. Przybylski
CESNET2 (Czech Republic)
Network Topology, Dec. 2004
2500+ km
Leased Fibers
(Since 1999)
2005: 10GE Link Praha-Brno (300km) in Service;
Plan to go to 4 X 10G and higher as needed;
More 10GE links planned
J. Gruntorad
APAN China Consortium
Established in 1999. The China Education and Research
Network (CERNET) and the China Science and Technology
Network (CSTNET) are the main advanced networks.
CERNET
 2000: Own dark fiber crossing 30+ major cities and 30,000 kilometers
 2003: 1300+ universities and institutes, over 15 million users
CERNET 2: Next Generation R&E Net
 Backbone connects 15-20 GigaPOPs at 2.5G-10 Gbps (I2-like)
 Connects to 200 Universities and 100+ Research Institutes
at 1 Gbps-10 Gbps
 Native IPv6 and Lambda Networking
 CSTNET: 2.5 Gbps
From 6 to 78 Million Internet Users
in China from Jan. – July 2004
J. P. Wu, H. Chen
Brazil (RNP2): Rapid Backbone Progress
and the GIGA Project
 RNP connects the regional networks in all 26 states of Brazil
 Backbone on major links at 155 Mbps; 622 Mbps Rio – Sao Paulo
 2.5G to 10G Core in 2005 (300X Improvement in 2 Years)
 The GIGA Project – Dark Fiber experimental network:
 700 km of fiber, 7 cities and 20 institutions
in Sao Paulo and Rio
 GbE to Rio Tier-2, Sao Paulo Tier-3
 RNP & GIGA: Extend GIGA to the Northeast, with 4000 km
of dark fiber by 2008
L. Lopez
DFN (Germany): X-WiN Fiber Network (13.04.2005)
[Map: X-WiN core sites (KIE, ROS, HAM, BRE, HAN, DES, BIE, MUE, TUB, POT, BRA,
ZIB, HUB, MAG, DUI, FZJ, AAC, LEI, DRE, CHE, JEN, WEI, ILM, BIR, FRA, GSI, ESF,
HEI, ADH, ERL, BAY, REG, FZK, STU, GAR); legend (German in original): fiber
from KPN, GL, GC, and existing fiber]
 Several fibre and wavelength providers
 Fibre is relatively cheap – in most cases more economic than (one) wavelength
 Most of the X-WiN core will be a fibre network (see map); the rest will be
provided by wavelengths
 X-WiN creates many new options, besides being cheaper than the current
G-WiN core
K. Schauerhammer
Romania: Inter-City Links were 2 to 6 Mbps in 2002;
Improved to 155 Mbps in 2003-2004;
GEANT-Bucharest Link: 155 to 622 Mbps
RoEduNet, January 2005 (N. Tapus)
Plan: 3-4 Centers at 2.5 Gbps;
Dark Fiber InterCity Backbone
Compare Pakistan: 56 universities share
155 Mbps internationally (T. Ul Haq)
ICFA Report: Networks for HENP
General Conclusions
 Reliable, high end-to-end performance of networked applications
such as Data Grids is required. Achieving this requires:
 A coherent approach to end-to-end monitoring in all regions,
that allows scientists throughout the world to extract clear
information
 Upgrading campus infrastructures,
to support Gbps flows to HEP centers
 Removing local, last mile, and nat’l and int’l bottlenecks
end-to-end, whether technical or political in origin.
Bandwidth across borders, the countryside or the city is
often much less than on national backbones and int’l links.
This problem is very widespread in our community, with
examples stretching from the Asia Pacific to Latin America
to the Northeastern US. Root causes vary, from lack of
local infrastructure to unfavorable policies and pricing.
Situation of Local Access
in Belém in Brazil in 2004
Institution        Summary of local network connections                     Annual Cost (US$)
CEFET              Access to provider at 512 kbps                           22,200
CESUPA (4 campi)   Internal + access to provider at 6 Mbps                  57,800
IEC/MS (2 campi)   Internal at 512 kbps + access to provider at 512 kbps    13,300
MPEG (2 campi)     Internal at 256 kbps; access at 34 Mbps (radio link)     16,700
UEPA (5 campi)     Internal at 128 kbps; access at 512 kbps                 16,000
UFPA (4 campi)     Internal at 128 kbps; provider PoP                       88,900
UFRA               Access to provider at 1 Mbps                             7,600
UNAMA (4 campi)    Internal wireless links, access at 6 Mbps                18,500
Annual telco charges for POOR local access = US$ 241,000
Belém: a Possible Topology
(30 km ring)
Alternative Approach in Brazil – Do It Yourself
(DIY) Networking (M. Stanton, RNP)
1. Form a consortium for joint network provision
2. Build your own optical fibre network to reach ALL the campi
of ALL consortium members
3. Light it up and go!
Costs involved:
 Building out the fibre: using utility poles of electric company
 US$ 7,000 per km
 Monthly rental of US $1 per pole (~25 poles per km)
 Equipment costs: mostly use cheap 2 port GbE switches
 Operation and maintenance
In Belém for 11 institutions using All GigE connections:
 Capital costs around US $ 500,000
 Running costs around US $ 40,000 p.a.
 Compare with current US $ 240,000 p.a. for traditional telco
solution [for 0.128 to 6 Mbps: ~1000X less bandwidth]
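A sketch of the cost model above, applied to the 30 km Belém ring from the previous slide (treating switches and other equipment as the remainder of the quoted ~US$ 500,000 capital cost is our own assumption):

```python
# DIY metro-fiber cost model for Belém (illustrative; 30 km ring from the
# "Possible Topology" slide, 11 institutions).
KM = 30
BUILD_PER_KM = 7_000            # US$/km, building on electric-utility poles
POLES_PER_KM = 25
POLE_RENT_PER_YEAR = 1 * 12     # US$1/pole/month

fibre_capex = KM * BUILD_PER_KM                        # ~US$ 210,000
pole_rent = KM * POLES_PER_KM * POLE_RENT_PER_YEAR     # ~US$ 9,000/year
print(f"Fibre build-out: ${fibre_capex:,}; pole rental: ${pole_rent:,}/yr")

# GbE switches etc. make up the rest of the quoted ~US$ 500,000 capital cost;
# running costs ~US$ 40,000/yr vs US$ 241,000/yr for the telco solution.
print(f"Payback vs telco: ~{500_000 / (241_000 - 40_000):.1f} years")
```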
Brazil: RNP Nat’l Plan for Optical
Metro Nets in 2005-6
 In December 2004, RNP signed contracts with
“Finep” (the agency of the Ministry of Science and
Technology) to build optical metro networks in all 27
capital cities in Brazil
 Total value of more than US$ 15 million
 Most of this money will be spent in 2005
M. Stanton
GLORIAD Topology – Current, and Plans for Years 1-5
[Ring: Seattle – Chicago – NYC – Amsterdam – Moscow – Novosibirsk –
Khabarovsk – Beijing – Hong Kong – Pusan, back to Seattle]
Segment                  Current                      Year 1                      Year 2
1 - TransAsia            155 Mbps                     2.5 Gbps (US-China);        2 x 10 Gbps US-China;
                                                      10 Gbps (US-Korea-China)    US-Korea-China
2 - TransChina           2.5 Gbps (155 Mbps,          2.5 Gbps                    1 x 10 Gbps
                         Beijing-Khabarovsk)
3 - TransRussia          155 Mbps                     622 Mbps                    2.5 Gbps
4 - TransEurope          622 Mbps                     622 Mbps                    622 Mbps
5 - TransAtlantic        622 Mbps                     1 Gbps                      1 x 10 Gbps
6 - TransNorth America   2.5G (Asia-Chicago); GbE     10 Gbps,                    10 Gbps,
                         NYC-Chicago (via CANARIE)    Seattle-Chicago-NYC         Seattle-Chicago-NYC
Years 3-5: the segments step up through 1-2 x 10 Gbps (Year 3)
to N x 10 Gbps (Years 4-5).
G. Cole
Closing the Digital Divide: R&E
Networks in/to Latin America
 AmPath: 45 Mbps Links to US (2002-5)
 CHEPREO (NSF, from 2004): 622 Mbps Sao Paulo – Miami
 WHREN/LILA (NSF, from 2005):
 0.6 to 2.5 G Ring by 2006
 Connections to Pacific & Atlantic Wave
 RedCLARA (EU): Connects 18 Latin Am. NRENs, Cuba;
622 Mbps Link to Europe (GEANT)
 RNP2 and the GIGA Project (ANSP)
Role of Science in the Information
Society; WSIS 2003-2005
HEP Active in WSIS I, Geneva
Theme: “Creating a Sustainable Process of Innovation”
 CERN RSIS Event
 SIS Forum & CERN/Caltech Online Stand at WSIS I (12/03):
> 50 Demos: Advanced Nets & Grids, Global VideoConf.,
Telesurgery, “Music Grids”, …
 Visitors at WSIS I:
 Kofi Annan, UN Sec’y Gen’l
 John H. Marburger, Science Adviser to US President
 Ion Iliescu, President of Romania; and Dan Nica, Minister of ICT
 …
WSIS II: TUNIS, 11/16-11/18/2005; www.itu.int/wsis
World Conference on Physics and Sustainable Development
Durban, South Africa, 10/31-11/2/05
GOAL: An Information Society:
“… One in which highly developed networks, equitable and ubiquitous
access to information, appropriate content in accessible formats,
and effective communication can help people achieve their potential”
“The World Conference will serve as the first global forum to focus
the physics community toward development goals and to create new
mechanisms of cooperation toward their achievement.”
www.saip.org.za/physics2005/WCPSD2005.html
Networks and Grids for HEP and
Data Intensive Global Science
 Networks used by HEP and other fields of DIS are advancing rapidly
 To the 10 G range and now N X 10G; much faster than Moore’s Law
 New HENP and DOE Roadmaps: Factor ~1000 BW Growth/Decade
 HEP & CS are learning to use long range 10 Gbps networks effectively
 2004-5: 7+ Gbps TCP flows over 20+ kkm; 101 Gbps Record
 Transition to community-operated/owned optical R&E networks
(us, ca, nl, jp, kr; pl, cz, br, sk, pt, ei, gr, … ) is underway
 A new era of “hybrid” optical networks & Grid systems is emerging
 We Must Work to Close the Digital Divide, from Several Perspectives
 To Allow Scientists in All World Regions to Take Part in Discoveries
 Removing Regional, Last Mile, Local Bottlenecks and
Compromises in Network Quality are On the Critical Path
 Important Examples on the Road to Closing the Digital Divide
 GLORIAD (US-Russia-China-Korea) Global Optical Ring
 IEEAF “Global Quilt”; NSF: IRNC and New Initiative on Africa
 CHEPREO, WHREN and the Brazil HEPGrid in Latin America
 Leadership & Outreach: HEP Groups in US, EU, Japan, Korea,
Latin America
Acknowledgements
R. Aiken, A. Ali, S. Altmanova, P. Avery, J. Boroumand, J. Bakken,
L. Bauerdick, J. Bunn, R. Cavanaugh, H. S. Chen, K. Cho, G. Cole,
L. Cottrell, D. Davids, H. Doebbling, J. Dolgonas, E. Fantegrossi,
D. Foster, I. Foster, P. Galvez, J. Gruntorad, J. Ibarra, V. Ilyin,
W. Johnston, Y. Karita, D. Y. Kim, Y. Kim, K. Kwon, I. Legrand, M. Livny,
S. Low, O. Martin, R. Mount, S. McKee, G. McLaughlin, D. Nae, K. Neggers,
S. Novaes, J. Pool, D. Petravick, S. Ravot, D. Reese, D. Riley, A. Santoro,
K. Schauerhammer, C. Smith, D. Son, M. Stanton, C. Steenberg, X. Su,
R. Summerhill, M. Thomas, F. Van Lingen, E. Velikhov, D. Walsten, T. Weis,
T. West, D. Williams, V. White, J. P. Wu, F. Wuerthwein, Y. Xia
US DOE, US NSF, ESnet, European Commission, CERN, SLAC, Fermilab, CACR,
NLR, CENIC, Internet2/HOPI, FLR, UltraLight, Starlight, KISTI, KAIST,
RNP, ANSP, Cisco, Neterion
Some Extra
Slides Follow
Int’l Networks BW on Major Links
for HENP: US-CERN Example
 Rate of Progress >> Moore’s Law (US-CERN Example)
 9.6 kbps Analog                  (1985)
 64-256 kbps Digital              (1989-1994)   [X 7 – 27]
 1.5 Mbps Shared                  (1990-3; IBM) [X 160]
 2-4 Mbps                         (1996-1998)   [X 200-400]
 12-20 Mbps                       (1999-2000)   [X 1.2k-2k]
 155-310 Mbps                     (2001-2)      [X 16k – 32k]
 622 Mbps                         (2002-3)      [X 65k]
 2.5 Gbps                         (2003-4)      [X 250k]
 10 Gbps                          (2004-5)      [X 1M]
 2 x 10 Gbps                      (2005-6)      [X 2M]
 4 x 10 Gbps                      (~2007-8)     [X 4M]
 8 x 10 Gbps or 2 x 40 Gbps       (~2009-10)    [X 8M]
 A factor of ~1M Bandwidth Improvement over
1985-2005; a factor of ~5k during 1995-2005
 HENP has become a leading applications driver,
and also a co-developer of global networks
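A small check (our own) that the bracketed improvement factors are each rate divided by the 9.6 kbps analog baseline of 1985:

```python
# Verify the US-CERN bandwidth improvement factors against the 9.6 kbps baseline.
BASE = 9.6e3  # bps, 1985 analog link
steps = {"64-256 kbps": 256e3, "622 Mbps": 622e6,
         "10 Gbps": 10e9, "8 x 10 Gbps": 80e9}
for label, bps in steps.items():
    print(f"{label:>12}: x{bps / BASE:,.0f}")
# -> x27; x64,792 (~65k); x1,041,667 (~1M); x8,333,333 (~8M)
```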
Amsterdam Internet Exchange Point Example
[Chart: 5-minute max and average traffic against 20/40/60 Gbps gridlines,
peaking at ~66 Gbps]
Some Annual Growth Spurts; Typically In Summer-Fall;
“Acceleration” Last Summer
The Rate of HENP Network Usage Growth (> 100% Per Year) is Not Unlike
Internet Growth in the World At Large
Brazil: National backbone in 2003
RNP2 – May 2003 (≤ 30 Mbps)
M. Stanton
BRAZIL: Nat’l Backbone May 2005
RNP – May 2005 (≤ 622 Mbps)
BRAZIL: RNPng – 10G & 2.5G
Core Network (3Q2005)
[Map: 10 Gbps and 2.5 Gbps core links among Fortaleza, Recife, Salvador,
Brasília, Belo Horizonte, Curitiba, Rio de Janeiro, São Paulo,
Florianópolis and Porto Alegre]
Core Network Tender Completed May 12th-13th
HENP Data Grids, and Now
Services-Oriented Grids
 The classical Grid architecture had a number of implicit
assumptions
 The ability to locate and schedule suitable resources,
within a tolerably short time (i.e. resource richness)
 Short transactions with relatively simple failure modes
 HENP Grids are Data Intensive & Resource-Constrained
 Resource usage governed by local and global policies
 Long transactions; some long queues
 Grid Analysis: 1000s of users compete for resources
at dozens of sites: Complex scheduling; management
 HENP Stateful, End-to-end Monitored and Tracked Paradigm
 Adopted in OGSA, Now WS Resource Framework
Grid Analysis: A Real Time SOA
Enabling Global Communities
[Diagram: multiple Analysis Clients speak HTTP, SOAP and XML-RPC to a Grid
Services Web Server (the Clarens portal), behind which sit the Scheduler;
Catalogs (Metadata, Virtual Data, Replica); Fully-Abstract,
Partially-Abstract and Fully-Concrete Planners; Data Management; Monitoring;
Execution Priority Manager; Grid-Wide Execution Service; and Applications]
 Analysis Clients talk standard protocols to the Clarens data/services
portal, which hides the complexity.
 A simple Web service API allows diverse Analysis Clients
(simple or complex).
 Clarens Servers autodiscover and autoconnect to form a “Services Fabric”.
 Key features: Global Scheduler, Catalogs, Monitoring, and Strategic
Grid-wide Execution service.
F. Van Lingen, M. Thomas
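Because the portal speaks standard protocols (HTTP, SOAP, XML-RPC), a thin scripting client suffices. Below is a minimal sketch of such a client; the endpoint URL and the method names are hypothetical placeholders for illustration, not the actual Clarens API:

```python
# Minimal XML-RPC client of the kind a Clarens-style portal could serve.
# The URL and method names below are hypothetical placeholders.
import xmlrpc.client

portal = xmlrpc.client.ServerProxy("https://clarens.example.edu:8443/xmlrpc")

# A hypothetical catalog lookup and job submission, illustrating how an
# analysis client stays thin: scheduling and planning happen server-side.
files = portal.catalog.query("dataset=higgs_candidates_2005")
job_id = portal.scheduler.submit({"executable": "analysis.C", "inputs": files})
print("submitted:", job_id)
```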
UltraLight: http://ultralight.caltech.edu
 Caltech, UF, UMich, FNAL, SLAC, CERN;
UERJ (Rio), USP (Sao Paulo), FIU, KNU (Korea), KEK
 NLR, CENIC, UCAID, Translight, UKLight, Netherlight, UvA;
UCLondon, Taiwan; CHEPREO; Cisco
[Map: 10 Gbps waves interconnecting the partner sites, including KNU,
UERJ and USP]
 Next generation Information System, with the network as an integrated,
actively managed subsystem in a global Grid
 Hybrid network infrastructure: packet-switched + dynamic optical paths
 End-to-end monitoring; Realtime tracking and optimization;
Dynamic bandwidth provisioning; Agent-based services spanning all layers
S. McKee; G. Karmous-Edwards
LISA- Localhost Information Service
Agent End To End Monitoring Tool
Complete, lightweight monitoring of end user systems &
network connectivity. Uses MonALISA framework
to optimize client applications.
 Easy to deploy & install with any browser;
user friendly GUI
 Detects system architecture, OS
 For all versions of Windows, Linux, Mac.
 Complete system monitoring of the host
 CPU, memory, IO, disk, …
 Hardware detection including Audio, Video
equipment; drivers installed in the system
 Provides embedded clients for IPERF
(or other net monitoring tools, e.g. Web100)
LISA and ApMon2: A basis for strategic,
end-to-end managed Grids
Try it: http://monalisa.caltech.edu/lisa/lisa.jnlp
I. Legrand
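As a rough illustration of what an embedded network-test client does (a generic sketch using the standalone iperf tool, not LISA's actual internals; the host name is a placeholder):

```python
# Generic end-to-end throughput probe of the kind LISA embeds (illustrative;
# assumes the standalone `iperf` client is installed and a server is
# listening at the hypothetical host below).
import subprocess

result = subprocess.run(
    ["iperf", "-c", "iperf.example.edu", "-t", "10", "-f", "m"],  # 10 s test, Mbits/s
    capture_output=True, text=True, timeout=30,
)
print(result.stdout)  # a MonALISA-style agent would parse and publish this
```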
ApMon Grid Analysis Demo: Distributed
Processing and Monitoring at SC2004
 Demonstrated how CMS analysis jobs can be submitted to multiple sites,
and monitored from anywhere else on the grid
 Three sites: Caltech, UERJ and USP
 Using the “BOSS” job submission and tracking tool,
running as a “Clarens” Grid service
 Job status is monitored by MonALISA, and visualized using
(netlogger-style) “lifelines” in real time
M. Thomas