International Networks and the US-CERN Link
HENP Networks and Grids for
Global VOs: from Vision to Reality
Harvey B. Newman
California Institute of Technology
GNEW2004
March 15, 2004
ICFA and Global Networks
for HENP
National and International Networks, with sufficient
(rapidly increasing) capacity and seamless end-to-end
capability, are essential for
The daily conduct of collaborative work in both
experiment and theory
Detector development & construction
on a global scale
Grid systems supporting analysis involving
physicists in all world regions
The conception, design and implementation of
next generation facilities as “global networks”
“Collaborations on this scale would never have been
attempted, if they could not rely on excellent networks”
The Challenges of Next Generation
Science in the Information Age
Petabytes of complex data explored and analyzed by
1000s of globally dispersed scientists, in hundreds of teams
Flagship Applications
High Energy & Nuclear Physics, AstroPhysics Sky Surveys:
TByte to PByte “block” transfers at 1-10+ Gbps
eVLBI: Many real time data streams at 1-10 Gbps
BioInformatics, Clinical Imaging: GByte images on demand
HEP Data Example:
From Petabytes in 2004, ~100 Petabytes by 2008,
to ~1 Exabyte by ~2013-5.
Provide results with rapid turnaround, coordinating
large but limited computing and data handling resources,
over networks of varying capability in different world regions
Advanced integrated applications, such as Data Grids,
rely on seamless operation of our LANs and WANs
With reliable, quantifiable high performance
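The data-volume growth quoted above implies steep but easily calculable annual factors. A quick sketch, using only the round numbers from this slide (~1 PB in 2004, ~100 PB by 2008, ~1 EB by ~2014):

```python
# Implied annual growth factors for the HEP data volumes quoted above.
# All inputs are the slide's round numbers, not measurements.
def annual_factor(v0, v1, years):
    """Compound annual growth factor taking v0 to v1 over `years` years."""
    return (v1 / v0) ** (1 / years)

f1 = annual_factor(1, 100, 4)      # 2004 -> 2008: 100x in 4 years
f2 = annual_factor(100, 1000, 6)   # 2008 -> ~2014: 10x in ~6 years
print(f"~{f1:.1f}x/year, then ~{f2:.1f}x/year")
```

So the early LHC era implies roughly tripling the data volume every year, relaxing to ~1.5x per year thereafter.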
Four LHC Experiments: The
Petabyte to Exabyte Challenge
ATLAS, CMS, ALICE, LHCB
Higgs + New particles; Quark-Gluon Plasma; CP Violation
6000+ Physicists &
Engineers; 60+ Countries;
250 Institutions
Tens of PB 2008; To 1 EB by ~2015
Hundreds of TFlops To PetaFlops
LHC Data Grid Hierarchy
CERN/Outside Resource Ratio ~1:2; Tier0/(Sum of Tier1s)/(Sum of Tier2s) ~1:1:1
[Diagram: Online System (Experiment) feeds the Tier 0+1 CERN Center
(PBs of disk; tape robot) at ~PByte/sec inside the detector and
~100-1500 MBytes/sec to storage; Tier 1 centers (IN2P3, INFN, RAL,
FNAL) connect at ~10 Gbps; Tier 2 centers at 2.5-10 Gbps; Tier 3
institutes at ~2.5-10 Gbps; Tier 4 physics data caches and
workstations at 0.1 to 10 Gbps]
Tens of Petabytes by 2007-8. An Exabyte ~5-7 Years later.
Emerging Vision: A Richly Structured, Global Dynamic System
ICFA Standing Committee on
Interregional Connectivity (SCIC)
Created by ICFA in July 1998 in Vancouver
CHARGE:
Make recommendations to ICFA concerning the connectivity
between the Americas, Asia and Europe
As part of the process of developing these
recommendations, the committee should
Monitor traffic
Keep track of technology developments
Periodically review forecasts of future
bandwidth needs, and
Provide early warning of potential problems
Representatives: Major labs, ECFA, ACFA, North
and South American Physics Community
SCIC in 2003-2004
http://cern.ch/icfa-scic
Strong Focus on the Digital Divide Since 2002
Three 2004 Reports; Presented to ICFA Feb. 13
Main Report: “Networking for HENP”
[H. Newman et al.]
Includes Brief Updates on Monitoring, the Digital Divide
and Advanced Technologies [*]
A World Network Overview (with 27 Appendices):
Status and Plans for the Next Few Years
of National and Regional Networks,
and Optical Network Initiatives
Monitoring Working Group Report
[L. Cottrell]
Digital Divide in Russia
[V. Ilyin]
[*] Also See the 2003 SCIC Reports of the Advanced
Technologies and Digital Divide Working Groups
ICFA Report: Networks for HENP
General Conclusions (1)
Bandwidth Usage Continues to Grow by 80-100% Per Year
Current generation of 2.5-10 Gbps backbones and major Int’l links
used by HENP arrived in the last 2 Years [US+Europe+Japan+Korea]
Capability Increased by a factor of ~4 to several hundred,
i.e. much faster than Moore’s Law
This is a direct result of the continued precipitous fall
of network prices for 2.5 or 10 Gbps links in these regions
Technological progress may drive BW higher, unit price lower
More wavelengths on a fiber; Cheap, widespread Gbit Ethernet
Grids may accelerate this growth, and the demand for
seamless high performance
Some regions are moving to owned or leased dark fiber
The rapid rate of progress is confined mostly to the US, Europe,
Japan, Korea, and the major Transatlantic and Pacific routes
This may worsen the problem of the Digital Divide
History of Bandwidth Usage – One Large
Network; One Large Research Site
ESnet Accepted Traffic 1/90 – 1/04
ESnet Monthly Accepted Traffic, 1/90-1/04: Exponential Growth Since ’92
Annual Rate Increased from 1.7X to 2.0X Per Year In the Last 5 Years
[Chart: accepted traffic in TByte/month, rising from near 0 (Jan 1990)
to ~300 TByte/month (late 2003)]
SLAC Traffic ~300 Mbps; ESnet Limit
Growth in Steps: ~ 10X/4 Years
Projected: ~2 Terabits/s by ~2014
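The ~2 Tbit/s projection follows from simple compound growth. A sketch, assuming the slide's figures (~300 TByte/month accepted in early 2004, roughly doubling each year) and decimal units; the slide's ~2 Tb/s presumably refers to peak rather than average rates:

```python
# Compound-growth projection of network traffic, using figures read
# off this slide (~2x/year; ~300 TByte/month in early 2004).
def project(value, annual_factor, years):
    """Compound growth: value * annual_factor**years."""
    return value * annual_factor ** years

tbytes_month_2004 = 300.0
traffic_2014 = project(tbytes_month_2004, 2.0, 10)  # ten more doublings
print(f"Projected 2014 traffic: {traffic_2014:.0f} TByte/month")

# Convert the monthly volume to an average rate in bits/s
avg_bps = traffic_2014 * 1e12 * 8 / (30 * 24 * 3600)
print(f"Average rate: {avg_bps / 1e12:.2f} Tbit/s")
```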
Internet Growth in the World At Large
Amsterdam Internet Exchange Point Example
75-100% Growth Per Year
[Chart: AMS-IX traffic, with 5-minute maxima reaching ~20 Gbps
and averages around ~10 Gbps]
[Table: derived throughputs in kbit/s for Aug 2003, from monitored
regions (N America, Australasia, Balkans, Europe, Baltics, Russia,
E Asia, M East, L America, S Asia, Caucasus, Central Asia, Africa)
to target domains (AU, CA, EDU, GOV, CH, DE, DK, HU, IT, UK, JP,
NET, ORG, RU, SU). N America and Europe average several hundred
kbit/s to ~1.5 Mbit/s; S Asia, Central Asia and Africa average only
tens of kbit/s]
Some Growth Spurts;
Typically In Summer-Fall
The Rate of HENP Network Usage Growth
(~100% Per Year) is Similar to the World at Large
Bandwidth Growth of Int’l HENP
Networks (US-CERN Example)
Rate of Progress >> Moore’s Law (US-CERN Example):
  9.6 kbps Analog       (1985)
  64-256 kbps Digital   (1989-1994)    [X 7 - 27]
  1.5 Mbps Shared       (1990-3; IBM)  [X 160]
  2-4 Mbps              (1996-1998)    [X 200-400]
  12-20 Mbps            (1999-2000)    [X 1.2k-2k]
  155-310 Mbps          (2001-2)       [X 16k - 32k]
  622 Mbps              (2002-3)       [X 65k]
  2.5 Gbps              (2003-4)       [X 250k]
  10 Gbps               (2005)         [X 1M]
A factor of ~1M over the period 1985-2005
(a factor of ~5k during 1995-2005)
HENP has become a leading applications driver,
and also a co-developer of global networks;
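The factors above check out arithmetically. A quick sketch, taking ~2 Mbps as the mid-1990s baseline (my reading of the 1996-1998 line) for the "~5k during 1995-2005" figure:

```python
# Checking the US-CERN growth factors quoted above.
total = 10e9 / 9.6e3          # 9.6 kbps (1985) -> 10 Gbps (2005): "~1M"
decade = 10e9 / 2e6           # ~2 Mbps (mid-90s) -> 10 Gbps: "~5k"
per_year = total ** (1 / 20)  # implied annual factor over 20 years
print(f"total ~{total:.2e}; last decade ~{decade:.0f}; ~{per_year:.2f}x/year")
```

An implied doubling every year, versus Moore's Law's doubling roughly every 18 months.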
Pan-European Multi-Gigabit Backbone (33 Countries)
February 2004
Note 10 Gbps
Connections
to Poland,
Czech
Republic,
Hungary
Planning Underway for “GEANT2” (GN2),
Multi-Lambda Backbone, to Start In 2005
[Chart, log scale from 100 Mbps to 10 Gbps: core capacity on Western
European NRENs, 2001-2003; source: TERENA Compendium, www.terena.nl]
15 European NRENs have made a step up to
1, 2.5 or 10 Gbps core capacity in the last 3 years
SuperSINET in JAPAN: Updated Map Oct. 2003
SuperSINET 10 Gbps
Int’l Circuit ~ 5-10 Gbps
Domestic Circuit 30 – 100 Mbps
SuperSINET
10 Gbps IP;
Tagged VPNs
Additional 1 GbE
Inter-University
Waves For HEP
4 X 2.5 Gb to NY;
10 GbE Peerings:
to Abilene, ESnet
and GEANT
SURFNet5 in the Netherlands
Fully Optical 10G IP Network
Fully dual stack:
IPv4 + IPv6
65% of Customer
Base Connects
with Gigabit
Ethernet
Germany: 2003, 2004, 2005
GWIN Connects 550 Universities, Labs, Other Institutions
GWIN: Q4/03
GWIN: Q4/04
Plan
XWIN: Q4/05
(Dark Fiber Option)
Abilene - Upgrade Completed!
PROGRESS in SE Europe (Sk, Pl, Cz, Hu, …)
1660 km of Dark
Fiber CWDM Links,
up to 112 km.
1 to 4 Gbps (GbE)
August 2002:
First NREN in
Europe to establish
Int’l GbE Dark Fiber
Link, to Austria
April 2003 to Czech
Republic.
Planning 10 Gbps
Backbone; dark
fiber link to Poland
this year.
Romania: Inter-city links improved from 34 to 155 Mbps in 2003;
GEANT-Bucharest Link improved from 155 to 622 Mbps.
Note: Inter-City Links Were Only 2 to 6 Mbps in 2002
RoEduNet 2004: Plans for an Intercity Dark Fiber Backbone
[Map: RoEduNet topology with its GEANT connection;
cities shown include Timişoara]
Australia (AARnet):
SXTransport Project in 2004
Connect Major Australian Universities to 10 Gbps Backbone
Two 10 Gbps Research Links to the US
Aarnet/USLIC Collaboration on Net R&D Starting Soon
Connect Telescopes in Australia and Hawaii (Mauna Kea)
GLORIAD: Global Optical Ring
(US-Russia-China)
“Little Gloriad” (OC3) Launched January 12; to OC192
Also Important for
Intra-Russia Connectivity;
Education and Outreach
ITER Distributed Ops.;
Fusion-HEP Cooperation
Transition beginning now to optical, multi-wavelength,
community-owned or leased fiber networks for R&E
National Lambda Rail (NLR)
NLR Coming Up Now; Initially 4 10G Wavelengths
Full Footprint by ~4Q04
To 40 10G Waves in Future
Internet2 HOPI Initiative (w/HEP)
[Map: NLR fiber routes and 15808 terminal, regen or OADM sites -
SEA, POR, SAC, SVL, FRE, LAX, PHO, SDG, OLG, WAL, DAL, JAC, ATL,
RAL, NAS, STR, KAN, OGD, DEN, CLE, PIT, WDC, NYC, BOS, CHI]
18 State Dark Fiber
Initiatives
In the U.S. (As of 3/04)
California (CALREN),
Colorado (FRGP/BRAN)
Connecticut Educ. Network,
Florida Lambda Rail,
Indiana (I-LIGHT),
Illinois (I-WIRE),
Md./DC/No. Virginia (MAX),
Michigan,
Minnesota,
NY + New England (NEREN),
N. Carolina (NC LambdaRail),
Ohio (Third Frontier Net)
Oregon,
Rhode Island (OSHEAN),
SURA Crossroads (SE U.S.),
Texas,
Utah,
Wisconsin
The Move to Dark Fiber
is Spreading
FiberCO
CA*net4 (Canada)
Two 10G Waves Vancouver-Halifax
[Map: interconnects at Vancouver, Calgary, Edmonton, Saskatoon,
Regina, Winnipeg, Toronto, Ottawa, Montreal, Fredericton,
Charlottetown, Halifax and St. John’s, with cross-border links
at Seattle, Chicago and New York]
Regional Nets at 10G Act as Parallel Discipline-Oriented Nets; 650 Sites
Connects to US Nets at Seattle, Chicago and NYC
Third Nat’l Lambda Later in 2004
User Controlled Light Path Software (UCLP) for “Lambda Grids”
SURFNet6 in the Netherlands
3000 km of Owned Dark Fiber
40M Euro Project
Scheduled
Start Mid-2005;
Support Hybrid Grids
HEP is Learning How to Use Gbps Networks Fully:
Factor of ~500 Gain in Max. Sustained TCP Throughput
in 4 Years, On Some US+Transoceanic Routes
9/01: 105 Mbps in 30 Streams SLAC-IN2P3; 102 Mbps in 1 Stream CIT-CERN
5/02: 450-600 Mbps SLAC-Manchester on OC12 with ~100 Streams
6/02: 290 Mbps Chicago-CERN in One Stream on OC12
9/02: 850, 1350, 1900 Mbps Chicago-CERN in 1, 2, 3 GbE Streams, 2.5G Link
11/02: [LSR] 930 Mbps in 1 Stream California-CERN and California-AMS;
  FAST TCP 9.4 Gbps in 10 Flows California-Chicago
2/03: [LSR] 2.38 Gbps in 1 Stream California-Geneva (99% Link Use)
5/03: [LSR] 0.94 Gbps IPv6 in 1 Stream Chicago-Geneva
TW & SC2003: [LSR] 5.65 Gbps (IPv4), 4.0 Gbps (IPv6) GVA-PHX (11 kkm)
3/04: [LSR] 6.25 Gbps (IPv4) in 8 Streams LA-CERN
Transatlantic Ultraspeed TCP Transfers
Throughput Achieved: X50 in 2 years
Terabyte Transfers by the Caltech-CERN Team:
Across Abilene (Internet2) Chicago-LA,
Sharing with normal network traffic
Oct 15: 5.64 Gbps IPv4 Palexpo-L.A. (10.9 kkm), in Peaceful
  Coexistence with a Joint Internet2-Telecom World VRVS Videoconference
Nov 18: 4.00 Gbps IPv6 Geneva-Phoenix (11.5 kkm)
Nov 19: 23+ Gbps TCP: Caltech, SLAC, CERN, LANL, UvA, Manchester
March 2004: 6.25 Gbps in 8 Streams (S2IO Interfaces)
Partners: Juniper, HP, Level(3), Telehouse
HENP Major Links: Bandwidth
Roadmap in Gbps
Year  Production      Experimental         Remarks
2001  0.155           0.622-2.5            SONET/SDH
2002  0.622           2.5                  SONET/SDH; DWDM; GigE Integ.
2003  2.5             10                   DWDM; 1 + 10 GigE Integration
2005  10              2-4 X 10             Switch; Provisioning
2007  2-4 X 10        ~10 X 10; 40 Gbps    1st Gen. Grids
2009  ~10 X 10        ~5 X 40 or           40 Gbps Switching;
      or 1-2 X 40     ~20-50 X 10          2nd Gen Grids
2011  ~5 X 40 or      ~25 X 40 or          Terabit Networks
      ~20 X 10        ~100 X 10
2013  ~Terabit        ~MultiTbps           ~Fill One Fiber
Continuing the Trend: ~1000 Times Bandwidth Growth Per Decade;
A new DOE Science Network Roadmap: Compatible
HENP Lambda Grids:
Fibers for Physics
Problem: Extract “Small” Data Subsets of 1 to 100 Terabytes
from 1 to 1000 Petabyte Data Stores
Survivability of the HENP Global Grid System, with
hundreds of such transactions per day (circa 2007-8)
requires that each transaction be completed in a
relatively short time.
Example: Take 800 seconds to complete the transaction. Then:
  Transaction Size (TB)    Net Throughput (Gbps)
          1                        10
         10                       100
        100                      1000  (Capacity of Fiber Today)
Summary: Providing Switching of 10 Gbps wavelengths
within ~2-4 years; and Terabit Switching within 5-8 years
would enable “10G Lambda Grids with Terabyte transactions”,
to fully realize the discovery potential of major HENP programs,
as well as other data-intensive research.
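The table above is straightforward arithmetic: moving N terabytes in 800 seconds requires an average of N x 8e12 bits / 800 s. A minimal sketch:

```python
# Required average throughput for the 800-second transactions above.
def required_gbps(terabytes, seconds=800):
    """Average throughput in Gbps to move `terabytes` in `seconds`."""
    return terabytes * 8e12 / seconds / 1e9

for tb in (1, 10, 100):
    print(f"{tb:>4} TB in 800 s -> {required_gbps(tb):>6.0f} Gbps")
```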
ICFA Report: Networks for HENP
General Conclusions (2)
Reliable high End-to-end Performance of networked applications such as
large file transfers and Data Grids is required. Achieving this requires:
Removing local, last mile, and nat’l and int’l bottlenecks
end-to-end, whether technical or political in origin.
While National and International backbones have reached 2.5 to 10 Gbps
speeds in many countries, the bandwidths across borders, the countryside
or the city may be much less.
This problem is very widespread in our community, with examples
stretching from China to South America to the Northeastern U.S.
Root causes for this vary, from lack of local infrastructure to
unfavorable pricing policies.
Upgrading campus infrastructures.
These are still not designed to support Gbps data transfers in most
HEP centers. One reason for the under-utilization of National and
International backbones is the lack of bandwidth to groups of
end-users inside the campus.
End-to-end monitoring extending to all regions serving our community.
A coherent approach to monitoring that allows physicists throughout
our community to extract clear, unambiguous and inclusive information
is a prerequisite for this.
ICFA Report: Networks for HENP
General Conclusions (3)
We must Remove Firewall Bottlenecks
[Also at some Major HEP Labs]
Firewall systems are so far behind the needs that they cannot
match the data flow of Grid applications. The maximum throughput
measured across available products is limited to a few x 100 Mbps!
It is urgent to address this issue by designing new architectures
that eliminate/alleviate the need for conventional firewalls. For
example, Point-to-point provisioned high-speed circuits as
proposed by emerging Light Path technologies could remove
the bottleneck.
With endpoint authentication [as in Grid AAA systems], for example,
the point-to-point paths are private, intrusion resistant circuits,
so they should be able to bypass firewalls if the endpoint sites
trust each other.
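The firewall-bypass idea above reduces to a mutual-trust check between endpoints. A minimal sketch, with hypothetical site names and a plain trust table standing in for Grid AAA / X.509 endpoint authentication (not any real system's API):

```python
# Hypothetical sketch: admit a point-to-point provisioned circuit
# (which may then bypass conventional firewalls) only when each
# endpoint site trusts the other. Site names are illustrative.
TRUSTED = {
    "caltech.edu": {"cern.ch", "fnal.gov"},
    "cern.ch": {"caltech.edu", "in2p3.fr"},
}

def may_provision(site_a, site_b):
    """True only if trust is mutual between the two endpoint sites."""
    return (site_b in TRUSTED.get(site_a, set())
            and site_a in TRUSTED.get(site_b, set()))

print(may_provision("caltech.edu", "cern.ch"))   # mutual trust
print(may_provision("caltech.edu", "in2p3.fr"))  # trust only one way
```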
HENP Data Grids, and Now
Services-Oriented Grids
The original Computational and Data Grid concepts are
largely stateless, open systems: known to be scalable
Analogous to the Web
The classical Grid architecture had a number of implicit
assumptions
The ability to locate and schedule suitable resources,
within a tolerably short time (i.e. resource richness)
Short transactions with relatively simple failure modes
HENP Grids are Data Intensive & Resource-Constrained
1000s of users competing for resources at dozens of sites
Resource usage governed by local and global policies
Long transactions; some long queues
HENP Stateful, End-to-end Monitored and Tracked Paradigm
Adopted in OGSA [Now WS Resource Framework]
Increased functionality,
standardization
The Move to OGSA and then
Managed Integration Systems
[Evolution diagram, over time: Custom solutions -> Web services
(X.509, LDAP, FTP, ...; de facto standards; GGF: GridFTP, GSI) ->
Open Grid Services Architecture (GGF: OGSI, ...; + OASIS, W3C;
multiple implementations, including Globus Toolkit) -> WS Resource
Framework (stateful; managed) -> app-specific services on
~integrated systems]
Managing Global Systems: Dynamic
Scalable Services Architecture
MonALISA: http://monalisa.cacr.caltech.edu
“Station Server”
Services-engines
at sites host many
“Dynamic Services”
Scales to thousands of service instances
Servers autodiscover
and interconnect
dynamically to form
a robust fabric
Autonomous agents
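The autodiscovery idea above can be sketched in a few lines: station servers register with a lookup service, then discover and connect to all current peers so the fabric stays fully interconnected. These classes are a hypothetical illustration, not the MonALISA API:

```python
# Minimal sketch of registration + peer discovery (hypothetical classes).
class LookupService:
    def __init__(self):
        self.registry = []

    def register(self, station):
        self.registry.append(station)

    def discover(self):
        return list(self.registry)

class StationServer:
    def __init__(self, site, lookup):
        self.site = site
        self.peers = set()
        lookup.register(self)
        # Interconnect with every previously registered station, both
        # ways, so the servers form a robust, fully meshed fabric.
        for peer in lookup.discover():
            if peer is not self:
                self.peers.add(peer.site)
                peer.peers.add(self.site)

lookup = LookupService()
stations = [StationServer(s, lookup) for s in ("CERN", "Caltech", "FNAL")]
print(sorted(stations[0].peers))  # ['Caltech', 'FNAL']
```

A real fabric would replace the central registry with periodic multicast or proxy-based discovery, but the register-then-mesh pattern is the same.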
Grid Analysis Environment
CLARENS: Web Services Architecture
[Architecture diagram: Analysis Clients connect via HTTP, SOAP and
XML-RPC to the Grid Services Web Server; behind it sit the Scheduler,
Catalogs (Metadata, Virtual Data, Replica), Fully-Abstract,
Partially-Abstract and Fully-Concrete Planners, Data Management,
Monitoring, Applications, the Execution Priority Manager and the
Grid-Wide Execution Service - Caltech GAE Team]
Analysis Clients talk standard protocols to the “Grid Services
Web Server”, a.k.a. the Clarens data/services portal, with a
simple Web service API
The secure Clarens portal hides the complexity of the Grid
Services from the client
Key features: Global Scheduler, Catalogs, Monitoring,
and Grid-wide Execution service
Clarens servers form a Global Peer-to-peer Network
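Since the portal speaks standard protocols, any XML-RPC client can talk to it. A self-contained sketch using Python's standard library, with a stand-in "portal" exposing one method; the method name and payload are illustrative, not the real Clarens API:

```python
# Stand-in XML-RPC "portal" plus a client call (stdlib only).
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def lookup_dataset(name):
    # Hypothetical catalog lookup behind the portal
    return {"dataset": name, "replicas": ["tier2.example.edu"]}

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lookup_dataset)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.lookup_dataset("Higgs-candidates-2004")
print(result["replicas"])
server.shutdown()
```

The real portal adds authentication and many services behind the same kind of simple, protocol-neutral API.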
UltraLight Collaboration:
http://ultralight.caltech.edu
Caltech, UF, FIU, UMich, SLAC, FNAL, MIT/Haystack, CERN, UERJ(Rio),
NLR, CENIC, UCAID, Translight, UKLight, Netherlight, UvA, UCLondon,
KEK, Taiwan; Cisco, Level(3)
Flagship Applications (HENP, VLBI, Oncology, …)
Integrated hybrid experimental network, leveraging Transatlantic
R&D network partnerships; packet-switched + dynamic optical paths
10 GbE across US and the Atlantic: NLR, DataTAG, TransLight,
NetherLight, UKLight, etc.; Extensions to Japan, Taiwan, Brazil
End-to-end monitoring; Realtime tracking and optimization;
Dynamic bandwidth provisioning
Agent-based services spanning all layers of the system,
from the optical cross-connects to the applications
[Diagram: layers - Application Frameworks; Grid Middleware;
Grid/Storage Management; Network Protocols; Bandwidth Management;
Network Fabric - with End-to-end Monitoring and Intelligent Agents
alongside, over the NLR footprint (SEA, POR, SAC, SVL, FRE, LAX,
PHO, SDG, OLG, WAL, DAL, JAC, ATL, RAL, NAS, STR, KAN, OGD, DEN,
CLE, PIT, WDC, NYC, CHI) with Distributed CPU & Storage]
GLIF: Global Lambda Integrated Facility
“GLIF is a World Scale
Lambda based Lab for
Application & Middleware
development, where Grid
applications ride on
dynamically configured
networks based on
optical wavelengths ...
GLIF will use the Lambda
network to support data
transport for the most
demanding e-Science
applications, concurrent
with the normal best
effort Internet for
commodity traffic.”
10 Gbps Wavelengths For R&E Network
Development Are Proliferating,
Across Continents and Oceans
SCIC Report 2004
The Digital Divide
As the pace of network advances continues to accelerate,
the gap between the economically “favored” regions and
the rest of the world is in danger of widening.
We must therefore work to Close the Digital Divide
To make Physicists from All World Regions Full Partners
in Their Experiments; and in the Process of Discovery
This is essential for the health of our global
experimental collaborations, our plans for future
projects, and our field.
SCIC Monitoring WG
PingER (Also IEPM-BW)
Measurements from
33 monitors in 12 countries
850 remote hosts in 100
Countries; 3700 monitor-remote site pairs
Measurements go back to ‘95
Reports on link reliability,
quality
Aggregation in
“affinity groups”
Affinity Groups (Countries monitored): Anglo America (2),
Latin America (14), Europe (24), S.E. Europe (9), Africa (21),
Mid East (7), Caucasus (3), Central Asia (8), Russia incl. Belarus
& Ukraine (3), S. Asia (7), China (1) and Australasia (2)
Countries monitored contain 78% of world population
and 99% of Internet users
SCIC Monitoring WG - Throughput
Improvements 1995-2004
Bandwidth of TCP < MSS/(RTT*Sqrt(Loss)) (1)
60% annual
improvement
Factor ~100/10 yr
Some Regions are ~5-10 Years Behind
SE Europe, Russia, Central Asia May be Catching Up (Slowly);
India Ever Farther Behind
Progress: but Digital Divide is Mostly Maintained
(1) Mathis et al., Computer Communication Review 27(3), July 1997
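The Mathis bound used in formula (1) is easy to evaluate. A sketch with illustrative numbers of my own choosing (a standard 1460-byte MSS on a ~180 ms transatlantic path with 1e-5 packet loss), not figures from the slide:

```python
# Mathis et al. upper bound on a single TCP stream:
#   BW < MSS / (RTT * sqrt(loss))
from math import sqrt

def mathis_bound_bps(mss_bytes, rtt_s, loss):
    """Loss-limited TCP throughput bound in bits per second."""
    return mss_bytes * 8 / (rtt_s * sqrt(loss))

bw = mathis_bound_bps(1460, 0.180, 1e-5)
print(f"~{bw / 1e6:.0f} Mbps")
```

Even at this modest loss rate, a single standard stream on a long path is limited to tens of Mbps, which is why the throughput records earlier in this talk needed many streams, large windows, or new TCP stacks.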
APAN Link Information
2004.1.05 [email protected]
Countries  Network           Bandwidth (Mbps)    AUP/Remark
AU-US      AARNet            310 to 2 X 10 Gbps  R&E + Commodity
CN-HK      CERNET/HARNET     2                   R&E
CN-HK      CSTNET            155                 R&E
CN-JP      CERNET            10                  R&E
CN-JP      CERNET            45                  Native IPv6
CN-UK      CERNET            45                  R&E
CN-US      CERNET            10                  Research
CN-US      CERNET            200                 R&E
CN-US      CSTNET            155                 R&E
HK-US      HARNET            45                  R&E
HK-TW      HARNET/TANET      10                  R&E
JP-ID      AI3(ITB)          2/1.5               R&E
JP-KR      APII              2 Gbps              R&E
JP-MY      AI3(USM)          1.5/0.5             R&E
JP-PH      AI3(ASTI)         1.5/0.5             R&E
JP-PH      MAFFIN            2                   Research
JP-SG      AI3(SICU)         1.5/0.5             R&E
JP-TH      AI3(AIT)          1.5/0.5             R&E
JP-TH      SINET(ThaiSarn)   2                   R&E
JP-US      TransPac          10 Gbps             R&E
JP-VN      AI3(IOIT)         1.5/0.5             R&E
Inhomogeneous Bandwidth Distribution in Latin America.
CAESAR Report (6/02)
Int’l Links: 4,236 Gbps Capacity to Latin America; 0.071 Gbps Used
Need to Pay Attention to End-point Connections
DAI: State of the World
Digital Access Index Top Ten + Pakistan
[Chart: index values shown for Pakistan: 0.03, 0.54, 0.41, 0.2,
0.01; overall DAI 0.24]
Work on the Digital Divide
from Several Perspectives
Work on Policies and/or Pricing: pk, in, br, cn, SE Europe, …
Share Information: Comparative Performance and BW Pricing
Exploit Model Cases: e.g. Poland, Slovakia, Czech Republic
Find Ways to work with vendors, NRENs, and/or Gov’ts
Inter-Regional Projects
GLORIAD, Russia-China-US Optical Ring
South America: CHEPREO (US-Brazil); EU ALICE Project
Virtual SILK Highway Project (DESY): FSU satellite links
Help with Modernizing the Infrastructure
Design, Commissioning, Development
Provide Tools for Effective Use: Monitoring, Collaboration
Participate in Standards Development; Open Tools
Advanced TCP stacks; Grid systems
Workshops and Tutorials/Training Sessions
Example: ICFA Digital Divide and HEPGrid Workshop in Rio
Raise General Awareness of the Problem; Approaches to Solutions
WSIS/RSIS; Trieste Workshop
Dai Davies SERENATE Workshop Feb. 2003
www.serenate.org
International Connectivity Costs in the
Different European Market Segments
Market segment                        Number of    Cost
                                      Countries    Range
Liberal Market with
  transparent pricing                     8        1-1.4
Liberal Market with less
  transparent pricing structure           7        1.8-3.3
Emerging Market without
  transparent pricing                     3        7.5-7.8
Traditional Monopolist market             9        18-39
Ratio rises to 114 if Turkey and Malta are included;
Correlated with the Number of Competing Vendors
Virtual Silk Highway
The SILK Highway Countries
in Central Asia & the Caucasus
Hub Earth Station at DESY with
access to the European NRENs
and the Internet via GEANT
Providing International
Internet access directly
National Earth Station at each
Partner site
Operated by DESY, providing
international access
Individual uplinks, common
down-link, using DVB
Currently 4 Mbps Up; 12
Down; for ~$ 350k/Year
Note: Satellite Links are a Boon to the Region,
but Unit Costs are Very High compared to Fiber.
There is a Continued Need for Fiber Infrastructure
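The "unit costs are very high" point follows directly from the figures above: a ~12 Mbps shared downlink for ~$350k/year. A quick sketch of the implied unit cost:

```python
# Unit cost of the shared satellite downlink quoted above.
cost_per_mbps_year = 350_000 / 12  # ~$29k per Mbps per year
print(f"~${cost_per_mbps_year:,.0f} per Mbps per year")
```

By comparison, fiber capacity in competitive European markets is orders of magnitude cheaper per Mbps, which is the argument for building out fiber infrastructure in the region.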
A Series of Strategic Studies into the Future of
Research and Education Networking in Europe
From Summary and Conclusions by D.O. Williams, CERN
A significant “divide” exists in Europe – the worst countries
[Macedonia, B-H, Albania, etc.] are 1000s of times worse off than
the best.
Also many of the 10 new EU members are ~5X worse off
than the 15 present members.
If there is one single technical lesson from SERENATE it is that
transmission is moving from the electrical domain to optical.
When there’s good competition users can still lease traditional
communications services (bandwidth) on an annual basis.
But: Without enough competition prices go through the roof.
The more you look at underlying costs the more you see the need
for users to get access to fibre.
Our best advice has to be:
“If you’re in a mess, you must get access to fibre”.
See www.serenate.org
Dark Fiber in Eastern Europe
Poland: PIONIER Network
2650 km Fiber
Connecting
16 MANs; 5200 km
and 21 MANs by 2005
Support for: Computational Grids; Domain-Specific Grids;
Digital Libraries; Interactive TV
Additional Fibers for e-Regional Initiatives
[Map: installed fiber, PIONIER nodes, and fibers/nodes planned in
2004, connecting MANs including Gdańsk, Koszalin, Szczecin, Olsztyn,
Białystok, Bydgoszcz, Toruń, Poznań, Warszawa, Gubin, Zielona Góra,
Łódź, Siedlce, Puławy, Wrocław, Radom, Częstochowa, Kielce, Opole,
Gliwice, Katowice, Kraków, Cieszyn, Bielsko-Biała, Rzeszów, Lublin]
CESNET Dark Fiber Case Study
(Czech Republic): Within Reach
1 x 2,5G
2513 km
Leased Fibers
(Since 1999)
Case Study Result
Wavelength Service
Vs. Fiber Lease:
Cost Savings of
50-70% Over 4 Years
for Long 2.5G
or 10G Links
For Example: 4 X 10G
Over 300 km
for 14k Euro/Month
                                      Leased wavelength   Leased fibre with own
                                      (EURO/Month)        equipment (EURO/Month)
1 x 2,5G
  ~150 km (e.g. Ústí n.L. - Liberec)       7,000            5,000 *
  ~300 km (e.g. Praha - Brno)              8,000            7,000 **
  *  2 x booster 18dBm
  ** 2 x booster 27dBm + 2 x preamplifier + 6 x DCF
4 x 2,5G
  ~150 km (e.g. Ústí n.L. - Liberec)      14,000            8,000 *
  ~300 km (e.g. Praha - Brno)             23,000           11,000 **
  *  2 x booster 24dBm, DWDM 2,5G
  ** 2 x (booster + in-line + preamplifier), 6 x DCF, DWDM 2,5G
1 x 10G
  ~150 km (e.g. Ústí n.L. - Liberec)      14,000            5,000 *
  ~300 km (e.g. Praha - Brno)             16,000            8,000 **
  *  2 x booster 21dBm, 2 x DCF
  ** 2 x (booster 21dBm + in-line + preamplifier) + 6 x DCF
4 x 10G
  ~150 km (e.g. Ústí n.L. - Liberec)      29,000           12,000 *
  ~300 km (e.g. Praha - Brno)             47,000           14,000 **
  *  2 x booster 24dBm, 2 x DCF, DWDM 10G
  ** 2 x (booster + in-line + preamplifier), 6 x DCF, DWDM 10G
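The quoted 50-70% savings can be read off the long-link (~300 km, Praha-Brno) rows of the case-study table above. A quick sketch of the arithmetic:

```python
# Monthly-cost savings for the ~300 km links in the CESNET case study:
# (leased wavelength, leased fibre with own equipment), in EUR/month.
rows = {
    "4 x 2,5G": (23_000, 11_000),
    "1 x 10G":  (16_000, 8_000),
    "4 x 10G":  (47_000, 14_000),
}
savings = {label: 1 - own / leased for label, (leased, own) in rows.items()}
for label, s in savings.items():
    print(f"{label}: {s:.0%} saving with own equipment")
```

This ignores equipment depreciation and staffing, which the case study's 4-year framing is meant to absorb; the short 1 x 2.5G link saves far less, hence "for Long 2.5G or 10G Links".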
HEPGRID and Digital Divide Workshop
UERJ, Rio de Janeiro, Feb. 16-20 2004
Theme: Global Collaborations, Grids and
Their Relationship to the Digital Divide
[Workshop web page menu: News; Bulletins One and Two; Welcome
Bulletin; General Information; Registration; Travel Information;
Hotel Registration; Participant List; How to Get to UERJ/Hotel;
Computer Accounts; Useful Phone Numbers; Program; Contact us:
Secretariat, Chairmen]
Tutorials: C++; Grid Technologies; Grid-Enabled Analysis;
Networks; Collaborative Systems
For the past three years the SCIC has
focused on understanding and seeking
the means of reducing or eliminating
the Digital Divide, and proposed to ICFA
that these issues, as they affect our
field of High Energy Physics, be brought
to our community for discussion.
This led to ICFA’s approval, in July
2003, of the Digital Divide and HEP Grid Workshop.
http://www.uerj.br/lishep2004
SPONSORS: CLAF, CNPQ, FAPERJ, UERJ
All Sessions and
Tutorials Available
Live Via VRVS
World Summit on the Information Society
(WSIS): Geneva 12/2003 and Tunis in 2005
The UN General Assembly adopted in 2001 a resolution
endorsing the organization of the World Summit on the
Information Society (WSIS), under UN Secretary-General,
Kofi Annan, with the ITU and host governments taking
the lead role in its preparation.
GOAL: To Create an Information Society:
A Common Definition was adopted
in the “Tokyo Declaration” of January 2003:
“… One in which highly developed ICT networks, equitable
and ubiquitous access to information, appropriate content
in accessible formats and effective communication can
help people achieve their potential”
Kofi Annan Challenged the Scientific Community to Help (3/03)
CERN and ICFA SCIC have been quite active in the WSIS in
Geneva (12/2003)
Role of Science in the Information
Society. Palexpo, Geneva 2004
CERN SIS Forum, and
CERN/Caltech Online
Stand
Visitors:
Kofi Annan, UN Sec’y General
John H. Marburger,
Science Adviser to US President
Ion Iliescu, President of Romania;
and Dan Nica, Minister of ICT
Jean-Paul Hubert, Ambassador
of Canada in Switzerland
Carlo Lamprecht, Pres. of
Economic Dept. of Canton
de Geneva
…
Role of Sciences in Information
Society. Palexpo, Geneva 2003
Demos at the CERN/Caltech
RSIS Online Stand
Distance diagnosis and surgery using
Robots with “haptic” feedback
(Geneva-Canada)
World Scale multisite multi-protocol
videoconference with VRVS
(Europe-US-Asia-South America)
Music Grid: live performance with
bands at St. John’s, Canada and the
Music Conservatory of Geneva on
stage
Advanced network and Grid-enabled
analysis demonstrations
Monitoring very large scale Grid farms
with MonALISA
VRVS (Version 3); VRVS on Windows
Meeting in 8 Time Zones
28k registered hosts; Users in 103 Countries
2-3X Growth/Year
Networks, Grids and HENP
Network backbones and major links used by HENP experiments
are advancing rapidly
To the 10 G range in < 2 years; much faster than Moore’s Law
Continuing a trend: a factor ~1000 improvement per decade;
a new HENP (and DOE) Roadmap
HENP is learning to use long distance 10 Gbps networks effectively
2003-2004 Developments: to 6 Gbps flows over 11 kkm
Transition to community-owned and operated R&E networks
is beginning (ca, nl, us, pl, cz, sk …) or considered (de, ro, …)
Removing Regional, Last Mile, Local Bottlenecks and
Compromises in Network Quality are now
On the critical path, in all world regions
Digital Divide: Network improvements are especially needed
in SE Europe, Much of Asia, So. America; and Africa
Work on These Issues in Concert with Internet2, Terena, APAN,
AMPATH; DataTAG, the Grid projects & the GGF
Some Extra Slides
Follow
Computing Model Progress
CMS Internal Review of Software and Computing
ICFA and International Networking
ICFA Statement on Communications in Int’l HEP
Collaborations of October 17, 1996
See http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
“ICFA urges that all countries and institutions wishing
to participate even more effectively and fully in
international HEP Collaborations should:
Review their operating methods to ensure they
are fully adapted to remote participation
Strive to provide the necessary communications
facilities and adequate international bandwidth”
NREN Core Network Size (Mbps-km):
http://www.terena.nl/compendium/2002
[Chart, logarithmic scale from 100 to 100M Mbps-km: NRENs grouped
as Leading, Advanced, In Transition and Lagging; countries shown
include It, Pl, Ch, Es, Fi, Nl, Cz, Hu, Gr, Ir, Ro and Ukr]
Network Readiness Index:
How Ready to Use Modern ICTs [*]?
[Diagram: the Network Readiness Index (FI) is composed of
Environment - Market (US), Political/Regulatory (SG),
Infrastructure (IC); Readiness - Individual Readiness (FI),
Business Readiness (US), Gov’t Readiness (SG); and Usage -
Individual Usage (KR), Business Usage (DE), Gov’t Usage (SG).
( ): Which Country is First]
From the 2002-2003 Global Information
Technology Report. See http://www.weforum.org
Throughput vs Net Readiness Index
NRI from Center for Int’l Development, Harvard
http://www.cid.harvard.edu/cr/pdf/gitrr2002_ch02.pdf
[Scatter plot: derived throughput vs NRI, with “A&R focus”
and “Internet for all focus” regions marked]
NRI Tops:
  Finland     5.92      Denmark     5.33
  US          5.79      Taiwan      5.31
  Singapore   5.74      Germany     5.29
  Sweden      5.58      Netherlnd   5.28
  Iceland     5.51      Israel      5.22
  Canada      5.44      Switz’land  5.18
  UK          5.35      Korea       5.10
Improved correlation (0.21 -> 0.41) by using derived throughput ~
MSS / (RTT * sqrt(loss)); fit an exponential
Interesting Outliers: Slovakia, Hungary, Portugal, Lithuania
UltraLight
http://ultralight.caltech.edu
Serving the major LHC experiments; developments
broadly applicable to other data-intensive programs
“Hybrid” packet-switched and circuit-switched,
dynamically managed optical network
Global services for system management
Trans-US wavelength riding on NLR: LA-SNV-CHI-JAX
Leveraging advanced research & production networks
USLIC/DataTAG, SURFnet/NLlight, UKLight,
Abilene, CA*net4
Dark fiber to CIT, SLAC, FNAL, UMich; Florida Light Rail
Intercont’l extensions: Rio de Janeiro, Tokyo, Taiwan
Flagship Applications with a diverse traffic mix
HENP: TByte to PByte “block” data transfers at 1-10+ Gbps
eVLBI: Real time data streams at 1 to several Gbps
User Requirements
In ALL countries and in ALL disciplines researchers are
eagerly anticipating improved networking tools. There is no
divide on the demand-side. Sciences, such as particle
physics, which make heavy use of advanced networking,
must help to break down any divide on the supply-side, or
else declare themselves elitist and irrelevant to researchers in
essentially all developing countries.
Connectivity pricing and competition
In some locations the price of connectivity is
unreasonably high
Linked, obviously, to how competitive the market is
Strong competition on routes between various key European
cities, and between major national centres
Less competition, effectively none, as you move to countries
with a de facto monopoly, or simply to parts of countries where
operators see little reason to invest
While some expensive routes are where you would expect,
others are much more surprising (at first sight), such as
Canterbury and Lancaster (UK) and parts of Brittany (FR)
Understanding transmission costs and DIY
solutions
Own trenching only makes sense in very special cases, say
1-30 km. Even then, look for partners
Maybe useful (as a threat) over longer distances in countries
with crazy pricing
Now possible to lease (short- or long-term) fibres on many
routes in Europe [0.5 to 2 kEuro/km typical]
Transmission costs jump at ~200 km [below which you can
operate with “Nothing In Line” (NIL), above which you need
amplifiers] and at ~800 km [above which you need signal
regenerators]
Possibly leading to some new approaches in GEANT-2
implementation
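The distance thresholds above can be folded into a toy estimator; the flat per-km lease rate and the function itself are illustrative assumptions, not a real pricing model:

```python
def fibre_lease_estimate(km, cost_per_km_keur=1.0):
    """Rough lease cost (kEuro, at an assumed flat per-km rate) plus the
    line-equipment tier implied by the ~200 km and ~800 km thresholds."""
    if km <= 200:
        tier = "Nothing In Line (NIL)"
    elif km <= 800:
        tier = "amplifiers needed"
    else:
        tier = "signal regenerators needed"
    return km * cost_per_km_keur, tier

print(fibre_lease_estimate(350))   # (350.0, 'amplifiers needed')
print(fibre_lease_estimate(1200))  # (1200.0, 'signal regenerators needed')
```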
Jensen, ICTP
Typ. 0-7 bps per person
Progress in Africa? Limited by many external systemic factors:
electricity; import duties; education; trade restrictions
Jensen, ICTP
DAI: State of the World
RENATER3 in France
2.5G Backbone (since 10/2002)
650 sites, most connect through MANs
< 50 directly connected
GEANT connection to 10G in March 2004
IN2P3 investigating dark fiber to CERN
Courtesy: Prof. Kilnam Chon (APAN Chair)
APAN
APAN Link Map
APAN Members
ASEM Members:
China, Japan, Korea, Malaysia, Philippines, Singapore, Thailand
Non-ASEM Members:
Australia, Taiwan, Hong Kong, Sri Lanka
Applying for APAN Membership:
Bangladesh, Nepal
[APAN link map: access and exchange points in AU, CN, HK, ID, JP, KR, LK, MY, PH, SG, TH, TW, and VN, with links to the USA, Europe, and RU; current status and 2004 plan]
AMPATH Miami-RNP Brazil
AMPATH Miami-REUNA (Chile)
Note: CMS-Tier1 (Brazil);
ALMA (Chile)
Brazil: RNP in Early 2004
NREN Core Network Size (Mbps-km):
Last Year
http://www.terena.nl/compendium/2002
[Chart, logarithmic scale (100 to 100M Mbps-km), grouping NRENs into Leading, Advanced, In Transition, and Lagging bands; countries shown include Gr, Ir, Ro, Ukr, It, Pl, Ch, Hu, Es, Fi, Nl, Cz]
Relative Cost of Connectivity Compared
with Number of Suppliers
[Scatter plot: relative cost (0-45) vs. number of suppliers (0-14), with trend line]
Multipliers for Differing Circuit Speeds
[Chart: price multiplier (0-1.2) vs. circuit speed in Mbps (0-3000)]
PingER derived throughput vs. the ITU
Digital Access Index (DAI) for PingER
countries monitored from the U.S.
NRNs’ Bandwidth in Latin America
Country     | Organization   | Existing REN?           | National connections                  | External capacity  | Connected sites | US Internet2
Argentina   | RETINA         | yes                     | 256 Kbps – 34 Mbps                    | 59 Mbps            | 56              | yes
Bolivia     | BOLnet         | yes                     | 64 – 128 Kbps                         | 1.5 Mbps           | 18              | no
Brazil      | RNP            | yes                     | 2 – 30 Mbps (backbone up to 622 Mbps) | 202 Mbps           |                 |
Chile       | REUNA          | yes                     | 155 Mbps                              |                    |                 |
Colombia    | RedCETCol      | Not known               | Not known                             | Not known          | Not known       |
Costa Rica  | CRNet          |                         | 32 – 512 Kbps                         | Not known          | 34              |
Cuba        | RedUniv        | University network      | 19.2 Kbps – 2 Mbps                    | Not known          |                 |
Ecuador     | FUNDACYT       | In planning             |                                       |                    |                 | no
El Salvador | CONACYT        | In planning             |                                       |                    |                 | no
Guatemala   | Not known      | Non-existent            |                                       |                    |                 | no
Honduras    | HONDUnet       | Not known               |                                       |                    |                 | no
Nicaragua   |                | yes                     |                                       |                    |                 |
Panama      | PANNET/SENACYT | University/Gov. network |                                       |                    |                 |
Peru        | CONCYTEC       |                         |                                       |                    |                 |
Uruguay     | RAU            | yes                     |                                       |                    |                 |
Venezuela   | REACCIUN       | yes                     | 256 – 512 Kbps                        | 45 Mbps via AMPATH |                 |
Additional cells in the source, row assignment unclear: Not known; no; 1.54 Mbps; 369; 18; 23; 11; In planning; yes; yes; Not known; no; no; no; no; 64 Kbps to 1 Mbps; 6 Mbps; 46; 45 Mbps via AMPATH; 78; no
January 2003. Source: CAESAR Review of Developments in Latin America
Undersea Optical Infrastructure
Submarine Fiber-Optic Cable System | Total Bandwidth Capacity (Gbps)
Americas 1                         |     0.560
Americas II                        |     2.5
South American Crossing            | 1,280
Columbus II                        |     0.560
Columbus III                       |     2.5
Telefonica’s Emergia               | 1,920
ARCOS                              |   960
Maya-1                             |    60
360 Americas                       |    10
The total aggregate bandwidth capacity of the Latin America and
Caribbean region is estimated at 4,236 Gbps
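As a quick arithmetic check, summing the per-cable capacities reproduces the quoted aggregate:

```python
# Capacities in Gbps, as listed in the table above
cables = {
    "Americas 1": 0.560,
    "Americas II": 2.5,
    "South American Crossing": 1280,
    "Columbus II": 0.560,
    "Columbus III": 2.5,
    "Telefonica's Emergia": 1920,
    "ARCOS": 960,
    "Maya-1": 60,
    "360 Americas": 10,
}
total_gbps = sum(cables.values())
print(round(total_gbps))  # -> 4236
```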
It is generally accepted that once a technology is
perceived as having broad utilitarian value, price as a
% of per capita income is the main driver of penetration
C. Casasus, CUDI (Mexico); CANARIE
Penetration of telecommunications in low
income countries is further inhibited by at least
3 factors:
Low income per capita
Less competition; higher prices
from monopolies
Fewer applications; no broad
utilitarian value
C. Casasus, CUDI (Mexico); CANARIE
Income vs. penetration given the
price of a technology
[Chart: % penetration (low to high) vs. per capita income (low to high), contrasting high-income and low-income countries]
C. Casasus, CUDI (Mexico); CANARIE
Telecom monopolies have even higher
prices in low income countries
Fewer entrants; less competition
No unbundling
Price cap regulation creates cross
subsidies between customer groups:
large customers (inelastic) subsidize
small customers (elastic)
High bandwidth services are very
expensive
Inefficient right-of-way (ROW) regulation
Inefficient spectrum policies
C. Casasus, CUDI (Mexico); CANARIE
Virtual Silk Highway Project
Managed by DESY and Partners
Virtual SILK Highway Project (from 11/01):
NATO ($ 2.5 M) and Partners ($ 1.1M)
Satellite Links to South Caucasus
and Central Asia (8 Countries)
In 2001-2 (pre-SILK) BW 64-512 kbps
Proposed VSAT to get 10-50x the BW
for the same cost
See www.silkproject.org
[*] Partners: CISCO, DESY, GEANT,
UNDP, US State Dept., World Bank,
UC London, Univ. Groningen
Note: NATO Science for Peace Program