DataTAG presentation (Bucharest)
Transcript
The DataTAG Project
Presented at the International Workshop on Grid and Distributed Computing, 20th April 2002, Bucharest
Olivier H. Martin
CERN - IT Division
Presentation outline
CERN networking
DataTAG project
Internet connectivity
CERN Internet Exchange Point (CIXP)
Grid networking requirements
Evolution of transatlantic circuit costs
Partners
Goals
Positioning
Grid networking issues
Concluding remarks
Appendix A: Detailed Workplan
Main Internet connections at CERN
[Diagram: CERN's external connectivity: the Swiss national research network (SWITCH), mission-oriented links and the World Health Organization (WHO), and general-purpose A&R and commodity Internet connections to Europe, the USA and the rest of the world, split between research and commercial paths]
CERN's Distributed Internet Exchange Point (CIXP)
[Diagram: ISPs peer at the distributed CIXP and reach CERN through the CERN firewall, over infrastructure provided by telecom operators and dark fibre providers]
Telecom operators & dark fibre providers:
Cablecom, COLT, France Telecom, Global
Crossing, GTS/EBONE, KPNQwest,
LDCom(*), Deutsche Telekom/Multilink,
MCI/Worldcom, SIG, Sunrise/diAx,
Swisscom (Switzerland), Swisscom (France),
SWITCH (**), Thermelec,
VTX/Smartphone.
Internet Service Providers include:
3GMobile (*), Infonet, AT&T Global Network
Services (formerly IBM), Cablecom, Callahan,
Carrier1, Colt, DFI, Deckpoint, Deutsche
Telekom, diAx (dplanet), Easynet, Ebone/GTS,
Eunet/KPNQwest, France Telecom/OpenTransit,
Global-One, InterNeXt, IS Internet Services
(ISION), IS-Productions, Nexlink, Net Work
Communications (NWC), PSI Networks (IProlink),
MCI/Worldcom, Petrel, Renater, Sita/Equant(*),
Sunrise, Swisscom IP-Plus, SWITCH, GEANT,
VTX, UUnet.
CERN Internal Network
Long term Data Grid networking requirements
A basic assumption of data-intensive Grids is that the underlying network is more or less invisible.
Is the hierarchical structure of European academic R&E networks and the pan-European interconnection backbone GEANT a sustainable long-term model to adequately support data-intensive Grids such as the LHC (Large Hadron Collider) Grid?
A prerequisite, therefore, is very fast links between Grid nodes.
Are lambda Grids feasible & affordable?
It is interesting to note that the original LHC computing model, which was itself hierarchical (Tier0, Tier1, etc.), appears to be evolving towards a somewhat more flexible model.
Evolution of LHC bandwidth requirements
LHC Bandwidth Requirements (1999): 622 Mbps between CERN and some (or all) LHC regional centers by 2005.
LHC Bandwidth Requirements (2001): 2.5 Gbps between CERN and some (or all) LHC regional centers by 2005.
LHC Bandwidth Requirements (2002): 10 Gbps between CERN and some (or all) LHC regional centers by 2006.
It is very likely that the first long haul 10 Gbps circuits will already appear at CERN in 2003/2004.
Evolution of circuit costs:
"In any case, a great deal of optimism is needed in order to reach the LHC target!"
"There seems to be no other way to reach the LHC target than to significantly increase the budget for external networking by a factor of 3 to 5, depending on when the bandwidth should be delivered."
What happened?
As a result of the EU-wide telecom deregulation that took place in 1998, there is an extraordinary situation today where circuit prices have fallen far below the most optimistic forecasts!
However, as a result, many telecom operators are having serious difficulties, and it is very hard to make predictions about the evolution of prices in the future.
Technology is still improving fast: 3 to 10 Tbps per fiber, with more colors/lambdas per fiber and faster lambdas.
Installation costs are increasing rather than decreasing.
Will the unavoidable consolidation result in stable, increasing or decreasing prices? Only time will tell!
Evolution of transatlantic circuit costs
Since 1995, we have been tracking the prices of transatlantic circuits in order to assess the budget needed to meet the LHC bandwidth targets.
The following scenarios have been considered:
conservative (-20% per year)
very plausible (-29% per year, i.e. prices halved every two years)
Moore's law (-37% per year, i.e. prices halved every 18 months)
optimistic (-50% per year)
N.B. Unlike raw circuits, where a price factor of 2 to 2.5 for 4 times the capacity is usually the norm, commodity Internet pricing is essentially linear (e.g. 150 CHF/Mbps).
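As a rough illustration of how these decline rates compound (a minimal sketch in Python; the 2,000 kCHF/year base price for a 622 Mbps circuit is a hypothetical placeholder, not a figure from the talk):

```python
# Illustrative projection of yearly circuit prices under the four scenarios above.
# The base price is a hypothetical placeholder, not an actual quote.

BASE_PRICE_KCHF = 2000.0        # assumed yearly price of a 622 Mbps circuit in 2000
SCENARIOS = {                   # annual price decline per scenario
    "conservative":   0.20,
    "very plausible": 0.29,     # (1 - 0.29)**2   ~= 0.50 -> halved every two years
    "Moore's law":    0.37,     # (1 - 0.37)**1.5 ~= 0.50 -> halved every 18 months
    "optimistic":     0.50,
}

def projected_price(year, decline, base=BASE_PRICE_KCHF, base_year=2000):
    """Yearly circuit price (kCHF) after compound annual decline since base_year."""
    return base * (1.0 - decline) ** (year - base_year)

if __name__ == "__main__":
    for name, d in SCENARIOS.items():
        prices = {y: round(projected_price(y, d)) for y in (2002, 2004, 2006)}
        print(f"{name:>14} (-{int(d * 100)}%/year): {prices}")
```

The same compounding, applied in reverse, is what turns a fixed annual budget into the growing bandwidth curves shown in the charts below.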
[Chart: Bandwidth vs. Budget (-20%/year): affordable bandwidth (Mbps) by year, 1996-2010, for annual budgets of 1.6, 2, 2.5 and 3 MCHF, with a linear trend line for the 2.5 MCHF/year case]
[Chart: Bandwidth vs. Budget (-29%/year): same budgets and time span]
[Chart: Bandwidth vs. Budget (-37%/year): same budgets and time span]
[Chart: Bandwidth vs. Budget (-50%/year): same budgets and time span]
[Chart: 622 Mbps scenarios: required budget (kCHF) by year, 2000-2008, under the -20%, -29%, -37% and -50% per year price-decline scenarios, with a linear trend line for the -29%/year case]
[Chart: 2.5 Gbps scenarios (based on 622 Mbps prices): same scenarios and time span]
The DataTAG Project
http://www.datatag.org
Funding agencies
Cooperating Networks
EU partners
Associated US partners
The project
European partners: INFN (IT), PPARC (UK),
University of Amsterdam (NL) and CERN, as
project coordinator.
INRIA (FR) will join in June/July 2002.
ESA/ESRIN (IT) will provide Earth Observation
demos together with NASA.
Budget: 3.98 MEUR
Start date: 1 January 2002
Duration: 2 years (aligned on DataGrid)
Funded manpower: ~ 15 persons/year
US Funding & collaborations
US NSF support through the existing collaborative
agreement with CERN (Eurolink award).
US DoE support through the CERN-USA line
consortium.
Significant contributions to the DataTAG workplan
have been made by Andy Adamson (University of
Michigan), Jason Leigh (EVL@University of Illinois),
Joel Mambretti (Northwestern University), Brian
Tierney (LBNL).
Strong collaborations already in place with ANL,
Caltech, FNAL, SLAC, University of Michigan, as
well as Internet2 and ESnet.
In a nutshell
Two main areas of focus:
Interoperability between European and US Grids (WP4)
A 2.5 Gbps transatlantic lambda between CERN (Geneva) and StarLight (Chicago) around July 2002 (WP1), supporting Grid-related network research (WP2, WP3):
dedicated to research (no production traffic),
a fairly unique multi-vendor testbed with layer 2 and layer 3 capabilities,
in principle open to other EU Grid projects as well as ESA for demonstrations.
Multi-vendor testbed with layer 3 as well as layer 2 capabilities
[Diagram: testbed topology linking CERN (Geneva), STARLIGHT (Chicago) and INFN (Bologna), with connections to GEANT, Abilene and ESnet; Cisco, Juniper and Alcatel equipment at the sites; the 2.5 Gbps transatlantic lambda plus 1.25 Gbps, 622 Mbps and Gigabit Ethernet links; M = layer 2 multiplexer]
Goals
End-to-end Gigabit Ethernet performance using innovative high performance transport protocols.
Assess & experiment with inter-domain QoS and bandwidth reservation techniques.
Interoperability between some major Grid projects in Europe and North America:
DataGrid, possibly other EU-funded Grid projects
PPDG, GriPhyN, TeraGrid, iVDGL (USA)
DataTAG project
[Map: Major 2.5 Gbps circuits between Europe & USA: CERN and GEANT, with SuperJANET4 (UK), GARR-B (IT) and SURFnet (NL), connected via New York to Abilene and ESnet, and to STAR-LIGHT, STAR-TAP and MREN in Chicago]
Project positioning
Why yet another 2.5 Gbps transatlantic circuit?
Most existing or planned 2.5 Gbps transatlantic circuits are for production, which makes them basically unsuitable for advanced networking experiments that require a great deal of operational flexibility in order to investigate new application-driven network services, e.g.:
deploying new equipment (routers, G-MPLS capable multiplexers),
activating new functionality (QoS, MPLS, distributed VLAN).
The only known exception to date is the SURFnet circuit between Amsterdam & Chicago (StarLight).
Concerns:
How far beyond StarLight can DataTAG extend?
How fast will the US research network infrastructure match that of Europe?
The STAR LIGHT
Next generation STAR TAP with the following
main distinguishing features:
Neutral location (Northwestern University)
1/10 Gigabit Ethernet based
Multiple local loop providers
Optical switches for advanced experiments
The STAR LIGHT will provide a 2 x 622 Mbps ATM connection to the STAR TAP
Started in July 2001
Also hosting other advanced networking projects
in Chicago & State of Illinois
N.B. Most European Internet Exchange Points have already been deployed along the same principles.
Major Grid networking issues
QoS (Quality of Service): deployment on a wide scale still largely unresolved because of complexity.
TCP/IP performance over high bandwidth, long distance networks:
The loss of a single packet will affect a 10 Gbps stream with 200 ms RTT (round trip time) for 5 hours. During that time the average throughput will be 7.5 Gbps.
On the 2.5 Gbps DataTAG circuit with 100 ms RTT, this translates to a 38 minute recovery time; during that time the average throughput will be 1.875 Gbps (see the sketch below).
Line error rates:
A 2.5 Gbps circuit can absorb 0.2 million packets/second.
A bit error rate of 10E-9 means one packet loss every 250 milliseconds.
A bit error rate of 10E-11 means one packet loss every 25 seconds.
End to end performance in the presence of firewalls:
There is a lack of high performance firewalls; can we rely on products becoming available, or should a new architecture be evolved?
Evolution of LAN infrastructure to 1 Gbps then 10 Gbps.
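The single-loss figures above follow from standard TCP congestion-control arithmetic: after a loss the congestion window is halved and then grows back by roughly one MSS per round trip. A minimal sketch of that arithmetic (assuming one stream previously running at full line rate and MSS = 1500 bytes; the slide's exact numbers may rest on slightly different assumptions):

```python
# Back-of-the-envelope TCP single-loss recovery under simple AIMD behaviour:
# the window halves on loss, then grows by one MSS per RTT.

def recovery_after_single_loss(rate_bps, rtt_s, mss_bytes=1500):
    """Return (recovery_time_s, avg_throughput_bps) after one lost packet."""
    window_pkts = rate_bps * rtt_s / (mss_bytes * 8)  # packets in flight at full rate
    halved_pkts = window_pkts / 2                     # window lost to the halving
    recovery_time = halved_pkts * rtt_s               # one MSS regained per RTT
    avg_throughput = 0.75 * rate_bps                  # linear ramp from rate/2 back to rate
    return recovery_time, avg_throughput

if __name__ == "__main__":
    t, avg = recovery_after_single_loss(10e9, 0.200)
    print(f"10 Gbps, 200 ms RTT: ~{t / 3600:.1f} h to recover, "
          f"~{avg / 1e9:.1f} Gbps average throughput meanwhile")
```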
Uniform end to end performance
Single stream vs. multiple streams: effect of a single packet loss (e.g. link error, buffer overflow)
[Chart: throughput vs. time after a single packet loss for 1, 2, 5 and 10 streams, with average throughputs during recovery of 7.5, 6.25, 4.375 and 3.75 Gbps for the different cases; T = 2.37 hours (RTT = 200 ms, MSS = 1500 B)]
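Splitting a transfer over several parallel streams limits the damage from any one loss: only the affected stream halves its rate, and its smaller per-stream window also recovers faster. A rough sketch under the same simple AIMD model as above (illustrative only; the averages plotted in the original chart depend on its own assumptions):

```python
# Effect of one packet loss when a transfer is striped over n parallel TCP
# streams sharing the same 10 Gbps path (same simple AIMD model as above).

def single_loss_with_streams(rate_bps, rtt_s, n_streams, mss_bytes=1500):
    """Return (recovery_time_s, avg_aggregate_bps) when one of n streams loses a packet."""
    per_stream = rate_bps / n_streams
    window_pkts = per_stream * rtt_s / (mss_bytes * 8)
    recovery_time = (window_pkts / 2) * rtt_s      # only the hit stream has to recover
    avg_aggregate = rate_bps - per_stream / 4      # hit stream averages 3/4 of its share
    return recovery_time, avg_aggregate

if __name__ == "__main__":
    for n in (1, 2, 5, 10):
        t, avg = single_loss_with_streams(10e9, 0.200, n)
        print(f"{n:>2} streams: ~{t / 3600:.2f} h recovery, "
              f"~{avg / 1e9:.2f} Gbps aggregate during recovery")
```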
Concluding remarks
The dream of abundant bandwidth has now become a hopefully lasting reality!
Major transport protocol issues still need to be resolved.
Large scale deployment of bandwidth-greedy applications still remains to be done; proof of concept has yet to be made.
Workplan (1)
WP1: Provisioning & Operations (P. Moroni/CERN)
Two main issues:
Procurement (largely done already as far as the circuit is concerned; equipment still to be decided).
Routing: how can the DataTAG partners access the DataTAG circuit across GEANT and their national networks?
Funded participants: CERN(1FTE), INFN (0.5FTE)
WP5: Information dissemination and exploitation (CERN)
Will be done in cooperation with DANTE & National Research
& Education Networks (NREN)
Funded participants: CERN(0.5FTE)
WP6: Project management (CERN)
Funded participants: CERN(2FTE)
Workplan (2)
WP2: High Performance Networking (Robin
Tasker/PPARC)
High performance transport:
TCP/IP performance over large bandwidth*delay networks
Alternative transport solutions using:
Modified TCP/IP stack
UDP-based transport conceptually similar to rate-based TCP (see the sketch after this slide)
End to end inter-domain QoS
Advance network resource reservation
Funded participants: PPARC (2FTE), INFN (2FTE), UvA (1FTE), CERN (1FTE)
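As an illustration of what a "UDP-based transport conceptually similar to rate-based TCP" can look like, here is a purely illustrative rate-paced UDP sender (a sketch only, not DataTAG's implementation; a real protocol would add sequence numbers, receiver feedback and retransmission):

```python
# Purely illustrative rate-paced UDP sender: datagrams are pushed at a
# configured rate instead of being clocked by TCP acknowledgements.
import socket
import time

def send_paced(data: bytes, dest: tuple, rate_bps: float, chunk: int = 1400) -> None:
    """Send data to dest in UDP datagrams, paced to roughly rate_bps."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = chunk * 8 / rate_bps            # seconds between datagrams at target rate
    next_send = time.monotonic()
    for off in range(0, len(data), chunk):
        now = time.monotonic()
        if now < next_send:                    # wait for the next pacing slot
            time.sleep(next_send - now)
        sock.sendto(data[off:off + chunk], dest)
        next_send += interval
    sock.close()

if __name__ == "__main__":
    # Hypothetical example: push 10 MB towards a local receiver at 100 Mbit/s.
    send_paced(b"\x00" * 10_000_000, ("127.0.0.1", 9000), 100e6)
```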
Workplan (3)
WP3: Bulk Data Transfer & Application performance monitoring (Cees de Laat/UvA)
Performance validation
End to end user performance
Validation
Monitoring
Optimization
Application performance
NetLogger
Funded participants: UvA (2FTE), CERN(0.6FTE)
WP4 Workplan
(Antonia Ghiselli & Cristina Vistoli / INFN)
Main subject:
Interoperability between EU and US Grid services from DataGrid, GriPhyN and PPDG, in collaboration with iVDGL, for the HEP applications.
Objectives:
Produce an assessment of interoperability solutions
Provide a test environment to LHC applications to extend existing use cases to test interoperability of the grid components
Provide input to a common LHC Grid architecture
Plan EU-US integrated grid deployment
Funded participants: INFN (6FTE), PPARC (1FTE), UvA
(1FTE)
WP4 Tasks
Assuming the same basic grid services (GRAM, GSI, GRIS) across the different grid projects, the main issues are:
4.1 Resource discovery, coord. C. Vistoli
4.2 Security/authorization, coord. R. Cecchini
4.3 Interoperability of collective services between EU-US grid domains, coord. F. Donno
4.4 Test applications, contact people from each application:
ATLAS / L. Perini, CMS / C. Grandi, ALICE / P. Cerello
DataTAG/WP4 framework and relationships
[Diagram: DataTAG/WP4 interoperability and integration work takes input from, and feeds back to, the Grid projects (DataGrid, PPDG, GriPhyN, LCG, Globus, Condor, ...), produces proposals for standardization activities (iVDGL, HICB/HIJTB, GGF, ...), and serves the applications (LHC experiments, CDF, BaBar, ESA, ...)]
WP4.1 - Resource Discovery
Objectives
Enabling an interoperable system that allows for the discovery and access of the Grid services available at participant sites of all Grid domains, in particular between EU and US Grids.
Compatibility of the Resource Discovery System with the existing components/services of the available Grid systems.
Task 4.1 Time Plan
Reference agreement document on resource discovery schema: by 31st of May 2002
"INTERGRID VO" MDS test: by 31st of July 2002
Evaluation of the interoperability of multiple Resource Discovery Systems (FTree, MDS, etc.): by 30th of September 2002
Network Element: by 31st of December 2002
Impact of the new Web Services technology: by 30th of June 2003
Identify missing components: by 30th of June 2003
Final deployment: by 31st of December 2003
WP4.2 - Objectives
Identify Authentication, Authorization and
Accounting (AAA) mechanisms allowing
interoperability between grids
Compatibility of the AAA mechanisms with
the existing components/services of the
available GRID systems.
Task 4.2 Time Plan
Reference document
Issues:
Minimum requirements for DataTAG CAs
Analysis of available authorization tools and policy languages and their suitability (in cooperation with the DataGrid Authorization WG)
Mapping of the policies of the VO domains
Information exchange protocol between the authorization systems
Feasibility study of an accounting system (in cooperation with DataGrid WP1)
First draft: 31 July 2002
Final version: 30 April 2003
Deployment
First: 30 September 2002
Final: 31 December 2003
WP4.3 / WP4.4 - Objectives
Identify grid elements in EU and US grid projects, identify common components in the testbeds used by the HEP experiments for semi-production activities in EU and US, and classify them in an architectural framework.
Plan and set up an "InterGrid VO" environment with common EU-US services.
Deployment of an EU-US VO domain in collaboration with iVDGL.
Task 4.3/4.4 Time Plan
The time plan will follow the schedule of each experiment.
Study of experiment layout and classification (first result): by 30th of June 2002
First deployment (already started): by 30th of September 2002
First report of integration and interoperability issues: by 30th of December 2002
First working version of a VO EU-US domain: by 30th of June 2003
Complete deployment: by 31st of December 2003