CrossGrid – Short Overview

Overview of the CrossGrid Project
Marian Bubak
Institute of Computer Science & ACC CYFRONET AGH, Kraków, Poland
and
Michał Turała
Institute of Nuclear Physics, Cracow, Poland
Cracow Grid Workshop, November 5-6, 2001
Towards the CrossGrid
– 1st meeting January 24, 2001, to join DataGrid
– CPA9 Call
– Extended collaboration meeting at GGF1 (March 7)
• 23 partners
• New type of applications
– Proposal submitted – April 22, 2001; 22 partners
– Comments of reviewers and PO
– Negotiations October 24, 2001; 21 partners
– ...
CrossGrid Collaboration
– Ireland: TCD Dublin
– Poland: Cyfronet & INP Cracow, PSNC Poznan, ICM & IPJ Warsaw
– Netherlands: UvA Amsterdam
– Germany: FZK Karlsruhe, TUM Munich, USTU Stuttgart
– Austria: U.Linz
– Portugal: LIP Lisbon
– Spain: CSIC Santander, Valencia & RedIris, UAB Barcelona, USC Santiago & CESGA
– Italy: DATAMAT
– Slovakia: II SAS Bratislava
– Greece: Algosystems, Demo Athens, AuTh Thessaloniki
– Cyprus: UCY Nikosia
Main Objectives
– New category of Grid enabled applications
• computing and data intensive
• distributed
• near real time response (a person in the loop)
• layered
– New programming tools
– Grid more user friendly, secure and efficient
– Interoperability with other Grids
– Implementation of standards
CrossGrid Architecture
[Architecture diagram] Layers, top to bottom:
– Interactive, Compute and Data Intensive Applications (WP1): interactive simulation and visualisation of a biomedical system; flooding crisis team support; distributed data analysis in HEP; weather forecast and air pollution modelling
– Grid Application Programming Environment (WP2): MPI code debugging and verification; metrics and benchmarks; interactive and semiautomatic performance evaluation tools
– New Grid Services and Tools (WP3): portals and roaming access; Grid resource management; Grid monitoring; optimisation of data access; ...
– Supporting elements shown alongside: HLA, Grid Visualisation Kernel, data mining, DataGrid services, GriPhyN
– Globus Middleware
– Fabric Infrastructure
Key functionalities of applications
– Data gathering
• Data generators and databases geographically distributed
• Selected on demand
– Processing
• Needs large processing capacity on demand
• Interactive
– Presentation
• Complex data require versatile 3D visualisation
• Support interaction and feedback to other components
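The three functionalities above form a gather, process, present, feedback cycle. The minimal C sketch below only illustrates that cycle; the functions gather_data, process, present and user_wants_more are hypothetical placeholders, not CrossGrid interfaces, standing in for distributed data sources, the simulation kernel, the visualisation component and the interactive user.

/*
 * Minimal sketch of the gather -> process -> present -> feedback loop.
 * All functions below are hypothetical stand-ins; a real application
 * would replace them with calls to distributed data sources, solvers
 * and visualisation services.
 */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stage 1: fetch a batch of input data on demand. */
static double gather_data(int step) {
    return (double)step * 0.5;           /* placeholder for a remote query */
}

/* Hypothetical stage 2: compute-intensive processing of the batch. */
static double process(double input) {
    return input * input;                /* placeholder for the simulation */
}

/* Hypothetical stage 3: push results to the visualisation component. */
static void present(int step, double result) {
    printf("step %d -> %f\n", step, result);
}

/* Hypothetical stage 4: read user feedback that steers the next step. */
static int user_wants_more(int step) {
    return step < 5;                     /* placeholder for interactive input */
}

int main(void) {
    int step = 0;
    while (user_wants_more(step)) {
        double d = gather_data(step);
        double r = process(d);
        present(step, r);
        step++;
    }
    return EXIT_SUCCESS;
}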
Why Interactive Computing?
– Goal: From Data, via Information to Knowledge => Planning and Management
– Complexity: Huge data-sets, complex processes
– Approach: Parametric exploration and sensitivity analyses:
• Combine raw (sensory) data with simulation
• Person in the loop:
• Sensory interaction
• Intelligent short-cuts
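As a toy illustration of the "parametric exploration and sensitivity analyses" approach, the C sketch below sweeps one parameter of a stand-in model and reports a finite-difference sensitivity. The function model() is a hypothetical placeholder for a real simulation run, not part of the project.

/* Toy parametric exploration: sweep one model parameter and report
 * how strongly the output reacts to it (a crude sensitivity measure).
 * model() is a hypothetical placeholder for a real simulation kernel. */
#include <stdio.h>

static double model(double p) {
    return p * p - 3.0 * p + 2.0;        /* stand-in for a simulation run */
}

int main(void) {
    const double h = 1e-3;               /* step for a finite-difference estimate */
    for (double p = 0.0; p <= 4.0; p += 0.5) {
        double y = model(p);
        double sensitivity = (model(p + h) - y) / h;
        printf("p=%.2f  output=%.3f  d(output)/dp=%.3f\n", p, y, sensitivity);
    }
    return 0;
}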
Common issues of applications
– Inherently distributed applications profit from grid
approach
– All tasks require high performance & MPI
• 1.1 and 1.2 - interactive, near-real time
• 1.3 and 1.4 - high throughput
– Data mining
• 1.3 and 1.4
– Data discovery
• 1.2 and 1.4
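For the shared "high performance & MPI" requirement, the sketch below shows the common pattern in its simplest form: each rank processes a slice of the problem and a reduction combines the partial results. It is a generic, illustrative MPI example, not taken from any CrossGrid task.

/* Minimal MPI sketch of the "distributed, high performance" pattern the
 * applications share. Compile and run with an MPI installation, e.g.
 *   mpicc mpi_sketch.c -o mpi_sketch && mpirun -np 4 ./mpi_sketch */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank computes a partial sum over its slice of 0..999. */
    double partial = 0.0;
    for (int i = rank; i < 1000; i += size)
        partial += (double)i;

    double total = 0.0;
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %.0f\n", size, total);

    MPI_Finalize();
    return 0;
}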
Example – medical application
Architecture
Distributed Data Analysis in HEP
Complementarity with the DataGrid HEP application package:
• CrossGrid will develop the interactive end-user application for physics analysis, making use of the products of the non-interactive simulation and data-processing stages that precede it in DataGrid
• Beyond the file-level service offered by DataGrid, CrossGrid will offer an object-level service to optimise the use of distributed databases (the object-level idea is sketched below); two possible implementations will be tested in running experiments:
– a three-tier model accessing an OODBMS or O/R DBMS
– a more HEP-specific solution such as ROOT
• User friendly due to specific portal tools
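The difference between file-level and object-level access can be pictured with the toy C sketch below: only the event objects passing a cut are copied out, instead of shipping a whole dataset. The Event struct, select_events and the energy cut are invented for illustration and are not DataGrid or CrossGrid interfaces.

/* Toy illustration of object-level access: the analysis requests only
 * the event objects that pass a cut, rather than whole files. */
#include <stdio.h>

typedef struct {
    long   id;       /* event identifier */
    double energy;   /* some reconstructed quantity, in GeV */
} Event;

/* Hypothetical "object-level" query: copy out only the events passing
 * the cut, instead of shipping the full dataset (file-level access). */
static int select_events(const Event *all, int n, double min_energy,
                         Event *out) {
    int kept = 0;
    for (int i = 0; i < n; i++)
        if (all[i].energy >= min_energy)
            out[kept++] = all[i];
    return kept;
}

int main(void) {
    Event store[] = { {1, 12.0}, {2, 48.5}, {3, 7.3}, {4, 101.2} };
    Event selected[4];
    int n = select_events(store, 4, 40.0, selected);
    for (int i = 0; i < n; i++)
        printf("event %ld  E=%.1f GeV\n", selected[i].id, selected[i].energy);
    return 0;
}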
Distributed Data Analysis in HEP
• Several challenging points:
– Access to large distributed databases in the Grid
– Development of distributed data-mining techniques
– Definition of a layered application structure
– Integration of user-friendly interactive access
• Focus on LHC experiments (ALICE, ATLAS, CMS and LHCb)
WP2 - Grid Application Programming Environments
Objectives
• specify
• develop
• integrate
• test
tools that facilitate the development and tuning of parallel distributed high-performance and high-throughput computing applications on Grid infrastructures
WP2 - Grid Application Programming Environments
Six Tasks in WP2
2.0 Co-ordination and Management
2.1 Tools requirement definition
2.2 MPI code debugging and verification
2.3 Metrics and benchmarks
2.4 Interactive and semiautomatic performance evaluation tools
2.5 Integration, testing and refinement
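As a flavour of what task 2.2 (MPI code debugging and verification) is about, the sketch below shows a message exchange written safely with MPI_Sendrecv; the unsafe variant, two ranks each issuing a blocking MPI_Send before their MPI_Recv, is exactly the kind of potential deadlock such verification tools aim to flag. The example is generic MPI, not a CrossGrid tool.

/* Sketch of the error class that MPI verification targets. If both
 * ranks issued a blocking MPI_Send before their MPI_Recv, the exchange
 * could deadlock for large messages; MPI_Sendrecv pairs the operations
 * safely. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2 && rank < 2) {
        int partner = 1 - rank;          /* ranks 0 and 1 exchange a value */
        int sendval = rank, recvval = -1;
        MPI_Sendrecv(&sendval, 1, MPI_INT, partner, 0,
                     &recvval, 1, MPI_INT, partner, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank %d received %d\n", rank, recvval);
    }

    MPI_Finalize();
    return 0;
}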
WP2 - Components and relations to other WPs
[Diagram] Components and relations shown: application source code; MPI verification (2.2); application (WP1) running on testbed (WP4); benchmarks (2.3); Grid monitoring (3.3); performance measurement; performance analysis (2.4) with automatic analysis, analytical model and visualization.
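To make the "performance measurement" box concrete, here is a minimal, generic example of the kind of metric involved: timing a code region with MPI_Wtime and reducing to the slowest rank. It illustrates the idea only and is not the interface of the WP2 tools.

/* Time a code region with MPI_Wtime and report the slowest rank,
 * i.e. the sort of raw metric benchmarks and performance analysis use. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    volatile double x = 0.0;             /* dummy workload */
    for (long i = 0; i < 10 * 1000 * 1000; i++)
        x += 1.0 / (double)(i + 1);
    double elapsed = MPI_Wtime() - t0;

    double slowest = 0.0;                /* the metric a benchmark would record */
    MPI_Reduce(&elapsed, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("slowest rank took %.3f s\n", slowest);

    MPI_Finalize();
    return 0;
}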
WP3 Objectives
• Tools for development of interactive compute- and data-intensive applications
• To address user-friendly Grid environments
• To simplify the applications and Grid access by
supporting the end user
• To achieve a reasonable trade-off between resource
usage efficiency and application speedup
• To support management issues while accessing
resources
WP3 – Components
[Diagram] Components shown: Portals (3.1); Roaming Access (3.1); Applications (WP1); End Users; Grid Resource Management (3.2); Grid Monitoring (3.3); Performance evaluation tools (2.4); Optimisation of Data Access (3.4); Tests and Integration (3.5); Testbed (WP4); WP1, WP2, WP5.
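A toy sketch of the resource-management decision behind task 3.2 follows: choose the least-loaded site from a table that, in a real broker, would be filled by Grid monitoring (task 3.3). Site names and load values are invented for illustration and are not CrossGrid components.

/* Toy resource-management decision: pick the site with the lowest
 * reported load. A real broker would query monitoring services
 * instead of a static table. */
#include <stdio.h>

typedef struct {
    const char *site;
    double      load;    /* e.g. fraction of CPUs busy, from monitoring */
} SiteStatus;

int main(void) {
    SiteStatus sites[] = {               /* illustrative values only */
        { "site-a", 0.85 },
        { "site-b", 0.40 },
        { "site-c", 0.65 },
    };
    int best = 0;
    for (int i = 1; i < 3; i++)
        if (sites[i].load < sites[best].load)
            best = i;
    printf("submit job to %s (load %.2f)\n", sites[best].site, sites[best].load);
    return 0;
}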
Testbed Organisation (WP4)
– Testbed setup and incremental evolution
• from several local testbeds to fully integrated one
– Integration with DataGrid
• common design, environment for HEP applications
– Infrastructure support
• flexible fabric management tools and network support
– Verification and quality control
• reliability of the middleware and network infrastructure
CrossGrid WP4 - International Testbed Organisation
Partners in WP4 (led by CSIC, Spain):
TCD Dublin, UvA Amsterdam, FZK Karlsruhe, CYFRONET Cracow, PSNC Poznan, ICM & IPJ Warsaw, II SAS Bratislava, CSIC Santander, CSIC Madrid, CSIC Valencia, UAB Barcelona, USC Santiago, LIP Lisbon, AuTh Thessaloniki, DEMO Athens, UCY Nikosia
WP4 - International Testbed Organisation
Tasks in WP4
4.0 Coordination and management (task leader: J. Marco, CSIC, Santander)
– Coordination with WP1, 2, 3
– Collaborative tools (web + videoconf + repository)
– Integration Team
4.1 Testbed setup & incremental evolution (task leader: R. Marco, CSIC, Santander)
– Define installation
– Deploy testbed releases
– Trace security issues
Testbed site responsibles:
– CYFRONET (Krakow): A. Ozieblo
– ICM (Warsaw): W. Wislicki
– IPJ (Warsaw): K. Nawrocki
– UvA (Amsterdam): D. van Albada
– FZK (Karlsruhe): M. Kunze
– IISAS (Bratislava): J. Astalos
– PSNC (Poznan): P. Wolniewicz
– UCY (Cyprus): M. Dikaiakos
– TCD (Dublin): B. Coghlan
– CSIC (Santander/Valencia): S. Gonzalez
– UAB (Barcelona): G. Merino
– USC (Santiago): A. Gomez
– UAM (Madrid): J. del Peso
– Demo (Athens): C. Markou
– AuTh (Thessaloniki): D. Sampsonidis
– LIP (Lisbon): J. Martins
WP4 - International Testbed Organisation
Tasks in WP4
4.2 Integration with DATAGRID (task leader: M. Kunze, FZK)
– Coordination of testbed setup
– Exchange knowledge
– Participate in WP meetings
4.3 Infrastructure Support (task leader: J. Salt, CSIC, Valencia)
– Fabric management
– HelpDesk
– Provide Installation Kit
– Network support
4.4 Verification & quality control (task leader: J. Gomes, LIP)
– Feedback
– Improve stability of the testbed
Technical Coordination
– Merging of requirements
– Specification and refinement of the CrossGrid architecture (protocols, APIs; HLA, CCA ...)
– Establishing standard operational procedures
• repository access procedures
• problem reporting mechanism
• change request handling mechanism
• release preparation procedure
– Specification of the structure of deliverables
– Approach: rapid prototyping and iterative engineering
Project Phases
M 1-3: requirements definition and merging
M 4-12: first development phase: design, 1st prototypes, refinement of requirements
M 13-24: second development phase: integration of components, 2nd prototypes
M 25-32: third development phase: complete integration, final code versions
M 33-36: final phase: demonstration and documentation
Clustering with Grid Projects
– Objective – exchange of
• information
• software components
– Our partners:
• DATAGRID
• DATATAG
• GRIDLAB
• EUROGRID and GRIP
– GRIDSTART
– Participation in GGF
Expected Results of the CrossGrid
– Grid enabled interactive applications
– Elaborated methodology
– Generic application architecture
– New programming tools
– New Grid services
– Extension of the Grid in Europe and to new virtual organisations
Dissemination & Exploitation
– Methods & software developed will be available to the scientific community
– Each collaboration partner
• topic conferences, GGF, national Grid initiatives
• MSc, PhD and lectures on Grid technology
– Centralised
• CrossGrid vortal
• workshops, seminars, user/focus groups
• newsletter, brochures
• industrial deployment
Overall Links between WPs and Tasks
[Diagram] Links between work packages and tasks:
– WP1: 1.0 Coordination; 1.1-1.4 Applications
– WP2: 2.0 Coordination; 2.1 Requirements; 2.2-2.4 Tools; 2.5 Tests
– WP3: 3.0 Coordination; 3.1-3.4 Services; 3.5 Tests
– WP4: 4.0 Coordination; 4.1, 4.3, 4.4 Testbeds; 4.2 Integration with DataGrid
– WP5: 5.1 Coordination & Management; 5.2 Architecture Team; 5.3 Dissemination & Exploitation
– External links: GGF, DataGrid, other Grid projects
Ready to start
January 1, 2002