GRID - lishep 2002


PIONIER - Polish Optical Internet:
The eScience Enabler
Jarek Nabrzyski
Poznan Supercomputing and Networking Center
[email protected]
Digital Divide and HEPGRID Workshop
Poland
Population: 38 mln
Area: 312 000 km2
Since 01.05.04 part of EU
Temperature today: 0°C
PIONIER - an idea of „All Optical Network”, facts:
• 4Q1999 – proposal of program submitted to KBN
• 2Q2000 – PIONIER testbed (DWDM, TNC 2001)
• 3Q2000 – project accepted (tender for co-operation, negotiations with Telcos)
• 4Q2001 – I Phase: ~10 mln Euro
  – Contracts with Telbank and Szeptel (1434 km)
• 4Q2002 – II Phase: ~18.5 mln Euro
  – Contracts with Telbank and regional Power Grid Companies (1214 km)
  – Contract for equipment: 10GE&DWDM and IP router
• 2H2003 – installation of 10GE with DWDM rep./amp.
  – 16 MANs connected and 2648 km of fibers installed
  – Contracts with partners (Telbank and HAVE) (1426 km): I phase ~5 mln Euro
• 2004/2005 – 21 MANs connected with 5200 km of fiber
PIONIER - fibers deployment, 1Q2004.
[Map of Poland: installed fiber and PIONIER nodes, 10GE links and 10GE nodes, fibers started in 2003, and fibers/nodes planned for 2004 and 2004/2005, connecting the MANs in Gdańsk, Koszalin, Olsztyn, Szczecin, Bydgoszcz, Białystok, Toruń, Poznań, Gubin, Warszawa, Zielona Góra, Siedlce, Łódź, Puławy, Wrocław, Radom, Częstochowa, Lublin, Kielce, Opole, Gliwice, Katowice, Kraków, Cieszyn, Rzeszów and Bielsko-Biała.]
How we build fibers
• Co-investment with telco operators or self-investment (with right of way: power distribution, railways and public roads)
• Average of 16 fibers available (4xG.652 for national backbone, 8xG.652 for regional use, 4xG.655 for long haul transmission) (2001-2002)
• 2 pipes and one cable with 24 fibers available (2003)
• Average span length 60 km for national backbone (regeneration possible)
• Local loop construction is sometimes difficult (urban areas: average 6 months waiting time for permissions)
• Found on time ...
Link optimization
• a side effect of an urgent demand for a DCM module ;-)
• replacement of G.652 fiber with G.655 fiber (NZDS)
• similar cost of G.652 and G.655 fiber
• cost reduction via:
  • lower # of amp./regenerators,
  • lower # of DCM
  (a rough count is sketched below)
But:
• optimisation is valid for selected cases only and is wavelength/waveset/link dependent.
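To see where the savings come from, here is a minimal back-of-the-envelope sketch in Python. The route length (161.1 km, Poznań – Zielona Góra) comes from the diagrams that follow; the amplifier spacing and the "one DCM per span on G.652, none on G.655 at this length" rule are illustrative assumptions, not PIONIER's actual engineering figures.

```python
# Rough count of in-line equipment for one link, G.652 vs G.655 (NZDS) fiber.
# Assumptions (illustrative only): ~60 km amplifier spacing, one DCM per span
# on G.652, no dispersion compensation needed on G.655 at this link length.
import math

LINK_KM = 161.1  # Poznań - Zielona Góra route length quoted on the diagrams

def inline_equipment(fiber_type, link_km=LINK_KM, span_km=60):
    amps = math.ceil(link_km / span_km) - 1          # in-line EDFA sites
    dcms = math.ceil(link_km / span_km) if fiber_type == "G.652" else 0
    return amps, dcms

for fiber in ("G.652", "G.655"):
    amps, dcms = inline_equipment(fiber)
    print(f"{fiber}: {amps} in-line amplifiers, {dcms} DCM modules")
```

Dropping the DCMs also removes their insertion loss from the link budget, which on some links makes an amplifier stage redundant as well; as the slide stresses, whether that happens depends on the particular wavelengths and link.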
G.652 optical lines:
[Link diagram: Poznań MAN – Poznań TB – Wolsztyn – Grodzisk – Sulechów – Zielona Góra TB – Zielona Góra MAN, Lr = 161.1 km, with 10GE switches at both ends. Spans of 10, 57, 42, 26, 24 and 4 km with EDFA amplifiers, several DCM40 dispersion-compensation modules (partly on shared shelves), WCM transponders and 1310 nm client interfaces; per-stage gains, attenuations and power levels annotated in dB.]
Cost approximately 140 kEuro
G.655 optical lines:
[Link diagram: the same Poznań MAN – Zielona Góra MAN route (Lr = 161.1 km) built on G.655 fiber: no DCM40 modules and fewer EDFA stages are needed, with the same 10GE switches, WCM transponders and 1310 nm client interfaces.]
Cost approximately 90 kEuro, cost savings (equipment only): 35%
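The two link diagrams reduce to a simple comparison; a quick check of the quoted savings figure, using only the numbers from the diagrams above:

```python
# Equipment cost of the same 161.1 km link built on G.652 vs G.655 fiber,
# as quoted on the two diagrams above (approximate figures, kEuro).
g652_cost = 140
g655_cost = 90

savings = (g652_cost - g655_cost) / g652_cost
print(f"Equipment-only savings: {savings:.0%}")  # ~36%, rounded to 35% on the slide
```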
Community demands as a driving force
• Academic Internet
  – international connections:
    • GEANT 10 Gb/s
    • TELIA 2 Gb/s
    • GTS/SPRINT 2 Gb/s
  – national connections between MANs (10 Gb/s, 622 Mb/s leased lambdas)
  – near future – n x 10 Gb/s
• High Performance Computing Centers (FC, GE, 10GE)
  – Project PROGRESS, „Access environment to computational services performed by cluster of SUNs”: SUN cluster (3 sites x 1 Gb/s), results presented at SC 2002 and SC 2003
  – Project SGI, „HPC/HPV in Virtual Laboratory on SGI clusters”: SGI cluster (6 sites x 1 Gb/s)
  – Project CLUSTERIX, „National CLUSTER of LInuX Systems” (12 sites x 1 Gb/s)
  – Project in preparation: National Data Storage system (5 sites x 1 Gb/s)
Community demands as a driving force
• Dedicated Capacity for European Projects
  – ATRIUM (622 Mb/s)
  – 6NET (155-622 Mb/s)
  – VLBI (2x1 Gb/s dedicated)
  – CERN-ATLAS (>1 Gb/s dedicated per site)
  – near future – 6 FP IST
Intermediate stage - 10GE over fiber
[Map: Metropolitan Area Networks interconnected with 10 Gb/s links over PIONIER's own fibers where installed and with leased 622 Mb/s and 155 Mb/s channels elsewhere, plus a 10 Gb/s connection to GÉANT.]
PIONIER - the economy behind
Cost reduction via:
• simplified network architecture: IP / ATM / SDH / DWDM → IP / GE / DWDM
• lower investment, lower depreciation: ATM / SDH → GE
• simplified management
PIONIER - the economy behind...
Cost relation (connections between 21 MANs, per year):
• 622 Mb/s channels from telco (real cost): 4.8 MEuro
• 2.5 Gb/s channels from telco (estimate): 9.6 MEuro
• 10 Gb/s channels from telco (estimate): 19.2 MEuro
PIONIER costs (5200 km of fibers, 10GE): 55.0 MEuro
Annual PIONIER maintenance costs: 2.1 MEuro
Return of Investment in 3 years!
(calculations made only for 1 lambda used)
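A quick reconstruction of the return-on-investment claim from the figures above (all amounts in MEuro, single lambda, as the slide notes):

```python
# Payback time of the PIONIER build-out versus leasing equivalent capacity,
# using only the figures quoted on this slide (MEuro, per year where noted).
build_cost = 55.0           # 5200 km of fiber plus 10GE equipment
maintenance_per_year = 2.1  # annual PIONIER maintenance

leased_per_year = {
    "622 Mb/s channels (real cost)": 4.8,
    "2.5 Gb/s channels (estimate)": 9.6,
    "10 Gb/s channels (estimate)": 19.2,
}

for option, yearly_cost in leased_per_year.items():
    net_saving = yearly_cost - maintenance_per_year
    print(f"vs {option}: payback in {build_cost / net_saving:.1f} years")
```

Against leased 10 Gb/s channels, which is the capacity the 10GE backbone actually replaces, the payback works out to roughly three years, matching the claim above.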
PIONIER – e-Region
Two e-Regions already defined:
• Cottbus – Zielona Gora (D-PL)
• Ostrava – Bielsko Biala (CZ-PL)
e-Region objectives:
1. Creation of a rational base and the possibility of integrated work between institutions across the border, as defined by eEurope (...): education, medicine, natural disasters, information bases, protection of the environment.
2. Enhancing the ability to co-operate by developing a new generation of services and applications.
3. Promoting the region in Europe (as a micro scale of the eEurope concept).
PIONIER – „Porta Optica”
• „PORTA OPTICA” - a distributed optical gateway to the eastern neighbours of Poland (project proposal)
• A chance for close cooperation in scientific projects, by means of providing multichannel/multilambda Internet connections to the neighbouring countries.
• An easy way to extend GEANT to Eastern European countries
PIONIER – cooperation with neighbours
[Map: PIONIER nodes and cross-border connections: the proposed PORTA OPTICA gateway towards the eastern neighbours (Russia, Belarus), the e-Region with Germany around Zielona Góra, and the e-Region towards the Czech Republic/Slovakia around Bielsko-Biała.]
HPC and IST project
[Map: PIONIER sites involved in HPC and IST projects: the HPC network (5+3 sites), PROGRESS (3 sites), VLBI, ATLAS, and capacity open to other projects.]
CLUSTERIX National CLUSTER of LInuX Systems
launched in November 2003, duration 30 months
its realization is divided into two stages:
- research and development – first 18 months
- deployment – starting after the R&D stage and lasting 12 months
more than 50% funded by the consortium members
consortium: 12 universities and the Polish Academy of Sciences
CLUSTERIX goals
• to develop mechanisms and tools that allow the deployment of a production Grid environment, with the basic infrastructure comprising local PC-clusters based on 64-bit Linux machines located in geographically distant, independent centers connected by the fast backbone network provided by the Polish Optical Network PIONIER
• existing PC-clusters, as well as new clusters with both 32- and 64-bit architecture, will be dynamically connected to the basic infrastructure
• as a result, a distributed PC-cluster of a new generation, with dynamically changing size, fully operational and integrated with the existing services offered by other projects related to the PIONIER program, will be obtained
• results in the software infrastructure area will allow for increasing the portability and stability of the software and the performance of services and computations in the Grid-type structure
CLUSTERIX - objectives
• development of software capable of managing clusters with dynamically changing configuration, i.e. changing number of nodes, users and available services; one of the most important factors is reducing the management overhead
• new quality of services and applications based on the IPv6 protocols
• production Grid infrastructure available for the Polish research community
• integration and use of the existing services delivered as the outcome of other projects (data warehouse, remote visualization, computational resources of KKO)
• taking into consideration local policies of infrastructure administration and management within independent domains
• integrated end-user/administrator interface
• providing the required security in a heterogeneous distributed system
CLUSTERIX: Pilot installation
Architecture
CLUSTERIX: Technologies
the software developed will be based on the Globus Toolkit v.3, using the OGSA (Open Grid Services Architecture) concept
- this technology ensures software compatibility with other environments used for creating Grid systems, and makes the created services easier to reuse
- accepting OGSA as a standard will allow the services to co-operate with other meta-clusters and Grid systems
Open Source technology
- allows anybody to access the project source code, modify it and publish the changes
- makes the software more reliable and secure
- open software is easier to integrate with existing solutions and helps other technologies that use Open Source software to develop
Integration with existing software will be used extensively, e.g. the GridLab broker and the Virtual Users Account System
CLUSTERIX: R&D
• architecture design according to the specific requirements of users
• data management
• procedures for attaching a local PC cluster of any architecture
• design and implementation of the task/resource management system
• user account and virtual organization management
• security mechanisms in a PC cluster
• network resources management
• utilization of the IPv6 protocol family
• monitoring of cluster nodes and distributed applications
• design of a user/administrator interface
• design of tools for automated installation/reconfiguration of all nodes within the entire cluster
• dynamic load balancing and checkpointing mechanisms
• end-user applications
High Performance Computing and
Visualisation with the SGI Grid
for Virtual Laboratory Applications
Project No. 6 T11 0052 2002 C/05836
Project duration:
R&D – December 2002 .. November 2004
Deployment – 1 year
Partners:
• HPC centers: ACK CYFRONET AGH (Kraków), PSNC (Poznan), TASK (Gdansk), WCSS (Wroclaw)
• End users: IMWM (Warsaw), Institute of Bioorganic Chemistry PAS
• Industry: SGI, ATM S.A.
Funds: KBN, SGI
[Diagram: partner sites TASK, PSNC, WCSS, CYFRONET, IMWM and the University of Łódź (PŁ) interconnected, with end users and industry partners attached.]
Structure
[Layered diagram: applications (Virtual Laboratory, high performance visualisation apps, Emergency Computing Center) on top of Grid middleware, which sits on the HPC/HPV infrastructure and the advanced network infrastructure (PIONIER).]
Added value
• Real remote access to the national cluster (... GRID):
  • ASPs
  • HPC/HPV
  • Laboratory instruments
• Better usage of licences
  • Dedicated Application Servers
• Better usage of HPC resources
  • HTC
• Emergency Computing Site
  • IMWM
• Production Grid environment
  • Middleware we will work out
Virtual Laboratory
VERY limited access
Main reason - COSTS
Main GOAL - to make these facilities accessible in a common way
Added Value: virtual, remote (?)
The Goal
• Remote usage of expensive and unique facilities
• Better utilisation
• Joint venture and on-line co-operation of scientific teams
• Shorter deadlines, faster work
• eScience – closer
• Equal chances
• Tele-work, tele-science
Testbed infrastructure
Pilot installation of
NMR Spectroscopy
Optical network
HPC, HPV systems
Data Mining
... more than remote access
Remaining R&D activities
• Building a nation-wide HPC/HPV infrastructure:
  • Connecting the existing infrastructure with the new testbed.
• Dedicated Application Servers
• Resource Management
• Data access optimisation
  • tape subsystems
• Access to scientific libraries
• Checkpoint restart
  • kernel level
  • IA64 architecture
• Advanced visualization
  • Distributed
  • Remote visualization
• Programming environment supporting the end user
  • How to simplify the process of making parallel applications
PROGRESS (1)
• Duration: December 2001 – May 2003 (R&D)
• Budget: ~4,0 MEuro
• Project Partners
– SUN Microsystems Poland
– PSNC IBCh Poznań
– Cyfronet AMM, Kraków
– Technical University Łódź
• Co-funded by The State Committee for Scientific Research
(KBN) and SUN Microsystems Poland
PROGRESS (2)
• Deployment: June 2003 – ....
  – Grid constructors
  – Computational applications developers
  – Computing portals operators
• Enabling access to the global grid through deployment of PROGRESS open source packages
PROGRESS (3)
• Cluster of 80 processors
• Networked storage of 1.3 TB
• Software: ORACLE, HPC Cluster Tools, Sun ONE, Sun Grid Engine, Globus
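For readers unfamiliar with the software stack listed above, here is a small illustration of driving Sun Grid Engine programmatically through the DRMAA Python bindings. This is not part of the PROGRESS software itself (the Python binding postdates this talk); it only shows the kind of batch submission SGE handles on the cluster, assuming a working SGE cell with DRMAA enabled and the drmaa package installed.

```python
# Submit a trivial batch job to Sun Grid Engine via DRMAA and wait for it.
# Assumes an SGE installation with DRMAA enabled and the 'drmaa' Python package.
import drmaa

with drmaa.Session() as session:
    jt = session.createJobTemplate()
    jt.remoteCommand = "/bin/sleep"   # stand-in for a real computational job
    jt.args = ["30"]
    job_id = session.runJob(jt)
    print("submitted job", job_id)

    info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print("job finished with exit status", info.exitStatus)

    session.deleteJobTemplate(jt)
```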
PROGRESS GPE
http://progress.psnc.pl/
http://progress.psnc.pl/portal/
[email protected]
EU Projects: Progress and GridLab
What is CrossGrid ?
• 5th Framework Programme, funded by the EU
– Time frame: March 2002 – February 2005
• Structure of the project
– WP1 - CrossGrid Applications Development
– WP2 - Grid Application Programming Environment
– WP3 - New Grid Services and Tools
– WP4 - International Testbed Organisation
– WP5 - Project Management (including Architecture Team
and central Dissemination/ Exploitation)
Partners
• 21 partners
• 2 industry partners
• 11 countries
• The biggest testbed in Europe
Project structure
– WP1 - CrossGrid Applications Development: 28%
– WP2 - Grid Application Programming Environment: 20%
– WP3 - New Grid Services and Tools: 12%
– WP4 - International Testbed Organisation: 32%
– WP5 - Project Management: 8%
Middleware
[Layered diagram: mobile access, desktop and visualization clients on top; Grid middleware in the middle; supercomputers, PC clusters, data storage, sensors, experiments and the Internet/networks underneath.]
Applications
• Surgery planning & visualisation
• HEP data analysis
• Flooding control
• MIS
• Weather & pollution modelling
What's the best way to ‘travel’?
Migrating Desktop / Roaming Access (http://ras.man.poznan.pl): take your Grid environment anywhere and access it anyhow, from Microsoft Windows or Linux desktops.
GridLab
Enabling Applications on the Grid
www.gridlab.org
Jarek Nabrzyski, Project Coordinator
[email protected]
[email protected]
Poznan Supercomputing and
Networking Center
GridLab Project
• Funded by the EU (5+ M€), January 2002 – December 2004
• Application and Testbed oriented
  – Cactus Code, Triana Workflow, all the other applications that want to be Grid-enabled
• Main goal: to develop a Grid Application Toolkit (GAT) and a set of grid services and tools...:
  – resource management (GRMS),
  – data management,
  – monitoring,
  – adaptive components,
  – mobile user support,
  – security services,
  – portals,
  ... and test them on a real testbed with real applications
GridLab Members
• PSNC (Poznan) - coordination
• AEI (Potsdam)
• ZIB (Berlin)
• Univ. of Lecce
• Cardiff University
• Vrije Univ. (Amsterdam)
• SZTAKI (Budapest)
• Masaryk Univ. (Brno)
• NTUA (Athens)
• Sun Microsystems
• HP
• ANL (Chicago, I. Foster)
• ISI (LA, C. Kesselman)
• UoWisconsin (M. Livny)
collaborating with:
– Users!
• EU Astrophysics Network,
• DFN TiKSL/GriKSL
• NSF ASC Project
– other Grid projects
• Globus, Condor,
• GrADS,
• PROGRESS,
• GriPhyn/iVDGL,
• CrossGrid and all the other
European Grid Projects
(GRIDSTART)
• GWEN, HPC-Europa
GridLab Aims
• Get Computational Scientists using the “Grid” and
Grid services for real, everyday, production work (AEI
Relativists, EU Network, Grav Wave Data Analysis,
Cactus User Community), all the other potential grid
apps
• Make it easier for applications to make flexible,
efficient, robust, use of the resources available to
their virtual organizations
• Dream up, prototype, and test new application
scenarios which make adaptive, dynamic, wild, and
futuristic uses of resources.
What GridLab isn’t
• We are not developing low level Grid infrastructure,
• We do not want to repeat work which has already been done (want to incorporate and assimilate it …)
  – Globus APIs,
  – OGSA,
  – ASC Portal (GridSphere/Orbiter),
  – GPDK,
  – GridPort,
  – DataGrid,
  – GriPhyn,
  – ...
…need to make it easier to use
[Diagram: the application asks “Is there a better resource I could be using?” through a GAT call (GAT_FindResource()); the GAT answers on the application's behalf by talking to the Grid.]
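The real GAT is a C library and its API is not reproduced here; the sketch below is a hypothetical, Python-flavoured illustration of the pattern in the diagram: the application asks one high-level question and the GAT answers it by consulting whichever grid services are reachable, falling back gracefully when none are. Every class and function name in the sketch is illustrative; only GAT_FindResource, named on the slide, belongs to the actual toolkit.

```python
# Hypothetical sketch of the GAT pattern from the diagram above; the names here
# are illustrative, not the real GAT API (a C library with calls such as
# GAT_FindResource).  The application never talks to grid middleware directly.

class GAT:
    """Thin abstraction layer between the application and 'the Grid'."""

    def __init__(self, adaptors):
        # Adaptors wrap concrete services (a resource broker, a local scheduler, ...).
        self.adaptors = adaptors

    def find_resource(self, requirements):
        """Ask every reachable adaptor for a better resource; return the best match."""
        candidates = []
        for adaptor in self.adaptors:
            try:
                candidates.extend(adaptor.query(requirements))
            except ConnectionError:
                continue  # no network / firewall in the way: skip this adaptor
        return max(candidates, key=lambda r: r["score"], default=None)


class LocalAdaptor:
    """Trivial adaptor standing in for a real service binding."""
    def query(self, requirements):
        return [{"name": "localhost", "score": 1}]


def maybe_migrate(gat, job):
    better = gat.find_resource({"cpus": job["cpus"], "memory_gb": job["memory_gb"]})
    if better is not None:
        print(f"better resource found: {better['name']}")
    else:
        print("no better resource available, staying put")


maybe_migrate(GAT([LocalAdaptor()]), {"cpus": 4, "memory_gb": 8})
```

Because all grid-specific logic lives behind the GAT, the same application runs unchanged on a laptop with no network, on a supercomputer, or on the Grid, which is what the next diagram shows.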
GridLab Architecture
[Diagram: the same application, linked against the GAT, runs unchanged on a laptop (no network, firewall issues), on a supercomputer, and on the Grid.]
More info / summary
• www.GridLab.org
• [email protected], [email protected]
• [email protected]
• Bring your application and test it with the GAT and our services.