SERVOGrid - Indiana University
iSERVO and SERVOGrid: (International) Solid Earth Research Virtual Observatory Grid/Web Services and Portals Supporting Earthquake Science
Jan 16, 2004, Los Angeles
Geoffrey Fox
Community Grids Lab, Pervasive Technologies Laboratories, Indiana University
The Solid Earth Research Virtual Observatory
A Web-based system for modeling multi-scale earthquake processes
Andrea Donnellan, John Rundle, Geoffrey Fox, Marlon Pierce, Dennis McLeod, Jay Parker, Robert Granat, Terry Tullis, Lisa Grant
Solid Earth Research Virtual Observatory (SERVO)
A Web-service- and portlet-based Problem Solving Environment (PSE)
Couples data with simulation, pattern recognition, and visualization software
Enables investigators to seamlessly merge multiple data sets and models, and to create new queries.
Data
• Space-based observational data
• Ground-based sensor data (GPS, seismicity)
• Simulation data
• Published/historical fault measurements
Analysis Software
• Earthquake fault
• Lithospheric modeling
• Pattern recognition software
International version: iSERVO
• Australia, China, and Japan
SERVOGrid Codes, Relationships (Workflow)
[Diagram: workflow relationships among the SERVOGrid codes: Elastic Dislocation Inversion, Viscoelastic FEM, Viscoelastic Layered BEM, Elastic Dislocation, Pattern Recognizers, and Fault Model BEM]
SERVOGrid Application Descriptions
Codes range from simple “rough estimate” codes to parallel, high-performance applications.
• Disloc: handles multiple arbitrarily dipping dislocations (faults) in an elastic half-space.
• Simplex: inverts surface geodetic displacements for fault parameters using simulated annealing downhill residual minimization.
• GeoFEST: Three-dimensional viscoelastic finite element model for calculating nodal displacements and tractions. Allows for realistic fault geometry and characteristics, material properties, and body forces.
• Virtual California: Program to simulate interactions between vertical strike-slip faults using an elastic layer over a viscoelastic half-space.
• RDAHMM: Time series analysis program based on Hidden Markov Modeling. Produces feature vectors and probabilities for transitioning from one class to another.
• PARK: Boundary element program to calculate fault slip velocity history based on fault frictional properties; a model for unstable slip on a single earthquake fault.
• PDPC: Phase Dynamics Probability Change
• Preprocessors, mesh generators
• Visualization tools: RIVA, GMT
Data Access and Sharing, Code Integration
Codes all use custom text formats for describing input and output.
Input and output data are often combined with code-specific information.
• Number of iterations, array sizes, etc.
Data files are often created by hand from journals and online repositories.
• Online repositories themselves use differing formats.
The challenges are to develop common data formats, access services, and client query tools.
We solve this by wrapping all codes as Web/Grid services.
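As a concrete illustration of that wrapping step, the sketch below shows a plain Java class that runs a legacy code on a text input and captures its output; a class like this can then be exposed as a Web/Grid service (for example with Apache Axis). The executable name ("disloc") and its command-line convention are placeholders, not the actual SERVOGrid wrapper.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Illustrative wrapper sketch: run a legacy earthquake code on an input
 * message and return its printed output as a string.  The "disloc" binary
 * name and argument convention are assumptions for this example.
 */
public class LegacyCodeWrapper {

    public String run(String inputText) throws IOException, InterruptedException {
        // Stage the incoming message as the text input file the code expects.
        Path workDir = Files.createTempDirectory("servo-run");
        Path inputFile = workDir.resolve("input.txt");
        Files.write(inputFile, inputText.getBytes("UTF-8"));

        // Invoke the legacy executable in its own working directory.
        Process p = new ProcessBuilder("disloc", inputFile.toString())
                .directory(workDir.toFile())
                .redirectErrorStream(true)
                .start();

        // Capture whatever the code prints; a real wrapper would also
        // collect output files and fold them into the reply message.
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        p.waitFor();
        return out.toString();
    }
}
```

Because the wrapper exchanges plain text messages rather than exposing the code's subroutines, the same pattern applies to any of the codes listed above, regardless of language or file format.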
Web Services for Data Access and Computing Service Invocation
Web services:
• WSDL: Interface definition language; describes your service
“GeoFEST may be invoked with these input types”
• SOAP: Transport envelope for remote procedure calls/messages
“Invoke GeoFEST with this set of input”
Critical feature: all I/O is message (not RPC) based
• WSDL is the message version of method calls
Together, WSDL and SOAP are useful for manipulating and returning XML data values
• So GML schemas act as our data models and return values
We do not distinguish between Web and Grid services
Note that OMII (the Open Middleware Infrastructure Institute) will develop e-Science core technology
• Currently only in the UK but likely to spread
Wrappers convert conventional file/parameter I/O to Web Service messages
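To make the message-based invocation concrete, here is a minimal client sketch using the standard SAAJ API (javax.xml.soap). The endpoint URL, the "RunGeoFEST" operation, and the payload element are assumptions for illustration; the real operation and message formats are defined by the service's WSDL.

```java
import java.net.URL;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBodyElement;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPMessage;

/**
 * Minimal SOAP invocation sketch (SAAJ).  Endpoint, operation name, and
 * payload element are invented for illustration only.
 */
public class GeoFestSoapClient {
    public static void main(String[] args) throws Exception {
        // Build the request message: one operation element carrying XML content.
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        SOAPEnvelope envelope = request.getSOAPPart().getEnvelope();
        SOAPBodyElement op = request.getSOAPBody().addBodyElement(
                envelope.createName("RunGeoFEST", "svo", "http://example.org/servo"));
        op.addChildElement("faultModel")
          .addTextNode("...fault geometry and material parameters as GML...");

        // Send it and print whatever XML message comes back.
        SOAPConnection connection = SOAPConnectionFactory.newInstance().createConnection();
        SOAPMessage response = connection.call(request, new URL("http://example.org/geofest"));
        response.writeTo(System.out);
        connection.close();
    }
}
```

The point of the slide is that the interaction is simply XML messages in and out, so GML documents can flow through unchanged as both inputs and return values.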
Building PSEs with the Rule of the Millisecond I
Typical Web Services are used in situations with interaction delays (network transit) of hundreds of milliseconds.
But the basic message-based interaction architecture incurs only a fraction of a millisecond of delay.
Thus use Web Services to build ALL PSE components
• Use messages and NOT method/subroutine calls or RPC
[Diagram: application “nuggets” (Nugget1 through Nugget4) linked by interaction and data messages]
Building PSEs with the Rule of the Millisecond II
Messaging has several advantages over scripting languages
• Collaboration is trivial: just share the messages
• Better software engineering due to greater modularity
• Web Services do/will have wonderful support
“Loose” application coupling uses workflow technologies
Find the characteristic interaction time (milliseconds for programs; microseconds for MPI and particles) and use the best supported architecture at that level
• Two levels: Web Service (Grid) and C/C++/C#/Fortran/Java/Python
The major difficulty with frameworks is NOT building them but supporting them
• IMHO the only hope is to always minimize life-cycle support risks
• Science is too small a field to support much!
Expect to use DIFFERENT technologies at each level, even though it is possible to do everything with one technology
• Trade off support versus performance/customization
(i)SERVO Web (Grid) Services
• Programs: All applications wrapped as Services using a proxy strategy
• Job Submission: supports remote batch and shell invocations
  – Used to execute simulation codes (VC suite, GeoFEST, etc.), mesh generation (Akira/Apollo), and visualization packages (RIVA, GMT).
• File management:
  – Uploading, downloading, backend crossloading (i.e., moving files between remote servers)
  – Remote copies, renames, etc.
• Job monitoring
• Workflow: Apache Ant-based remote service orchestration
  – For coupling related sequences of remote actions, such as RIVA movie generation.
• Database services: support SQL queries
• Data services: support interactions with XML-based fault and surface observation data.
  – For simulation-generated faults (e.g., from Simplex)
  – An XML data model is being adopted for common formats, with translation services to “legacy” formats.
  – Migrating to Geography Markup Language (GML) descriptions.
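The interfaces below are an illustrative sketch, not the actual SERVOGrid API, of the kinds of operations the job submission and file management services just listed expose once wrapped as Web Services; every name is invented.

```java
/** Illustrative job submission service: remote batch and shell invocations. */
public interface JobSubmissionService {
    /** Submit a command on a remote host; returns an identifier for monitoring. */
    String submitJob(String host, String command, String[] arguments);

    /** Poll the status of a submitted job ("queued", "running", "done", ...). */
    String queryStatus(String jobId);
}

/** Illustrative file management service: upload, download, and crossload. */
interface FileManagementService {
    void upload(String localPath, String remoteHost, String remotePath);
    void download(String remoteHost, String remotePath, String localPath);

    /** "Crossloading": move a file directly between two remote servers. */
    void crossload(String sourceHost, String sourcePath,
                   String destHost, String destPath);
}
```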
GML Schemas as Data Models for Services
Fault and GPS schemas are based on the GML Feature object.
The seismicity schema is based on the GML Observation object.
Working schemas are available from http://grids.ucs.indiana.edu/~gaydin/schemas/
This work is interfaced with the OpenGIS Consortium, which has a well developed set of GIS Web services.
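As a rough sketch of what a GML-Feature-based fault record might look like when assembled programmatically, the following uses the standard Java DOM API. The "Fault" and "dip" element names and the servo namespace URI are invented; the published schemas at the URL above define the real structure.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

/** Sketch of a GML-Feature-style fault record; names are illustrative only. */
public class FaultFeatureSketch {
    public static void main(String[] args) throws Exception {
        final String GML = "http://www.opengis.net/gml";
        final String SERVO = "http://example.org/servo";   // placeholder namespace

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();

        // A fault is modeled as a GML feature: a named object carrying properties.
        Element fault = doc.createElementNS(SERVO, "Fault");
        fault.setAttributeNS("http://www.w3.org/2000/xmlns/", "xmlns:gml", GML);
        doc.appendChild(fault);

        Element name = doc.createElementNS(GML, "gml:name");  // inherited from gml:Feature
        name.setTextContent("Northridge segment (illustrative)");
        fault.appendChild(name);

        Element dip = doc.createElementNS(SERVO, "dip");       // illustrative fault property
        dip.setTextContent("40.0");
        fault.appendChild(dip);

        // Serialize so the XML can be inspected or carried in a service message.
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.INDENT, "yes");
        t.transform(new DOMSource(doc), new StreamResult(System.out));
    }
}
```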
The Next Generation Grid Portal (http://www.ogce.org)
• Building on standard technologies
  – Portlet design (JSR-168): IBM, Oracle, Sun, BEA, Apache
  – Grid standards: Java CoG, Web/Grid Services
  – Portal server: Jetspeed (open source)
• User configurable, service oriented
  – Philosophy: the portal is a gateway to distributed Grid and Web Services
• With a common API, portlets can be exchanged and can interoperate
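For orientation, a minimal JSR-168 portlet has the shape sketched below; a real SERVOGrid portlet would call the remote Web/Grid services rather than emit static markup. The class name and output are illustrative.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

/** Minimal JSR-168 portlet sketch; content is a placeholder. */
public class HelloServoPortlet extends GenericPortlet {
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        // Each portlet renders its own HTML fragment; the portal container
        // (e.g. Jetspeed) aggregates fragments from many portlets into one page.
        out.println("<p>SERVOGrid portlet placeholder</p>");
    }
}
```

Because every portlet obeys the same JSR-168 contract, portlets written at different sites can be dropped into the same portal and exchanged, which is what the common-API bullet above is claiming.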
[Diagram: the portal server as a hub connecting event and logging services, a MyProxy server, metadata directory service(s), application factory services, messaging and group collaboration, and directory & index services]
OGCE Consortium
Collage of Portals
• Earthquakes – NASA
• Fusion – DoE
• Computing Info – DoD
• Publications – CGL
QuakeSim Portal Shots
Portal Architecture
[Architecture diagram: clients (pure HTML, Java applets, ...) talk to the portal’s aggregation and rendering layer; behind it sit Jetspeed portlets of various classes (WebForm, IFramePortlet, JspPortlet, VelocityPortlet) plus Jetspeed internal services; local portlets use libraries such as GridPort and the (Java) CoG Kit, while remote or proxy portlets go through a gateway (IU) to Web/Grid services fronting computing, data stores, and instruments; the arrangement is hierarchical, from portal to portlets to services to resources]
GeoFEST: Northridge Earthquake Deformation
Virtual California: 1000 Years of Simulated Earthquakes
Pattern Recognition: InSAR and Seismicity Anomalies
International iSERVO Resources
• USC, Indiana, and JPL are the current USA resources
• University of Queensland
  – Host resources: Web/compute server
  – Finite Element Application, “Finley”
  – Australian Fault Map data from Geoscience Australia
• University of Tokyo
  – Linux server for Web server hosting
  – Finite Element Application, “GeoFEM”, and related tools
iSERVO Example: Finley
• Finley is a finite element code being developed by the QUAKES group at the University of Queensland.
• Compatible with GeoFEST-style geometry models and mesh generation tools.
  – So we can reuse the services we wrapped for GeoFEST.
• The Finley application itself is a separate service and also has a separate (simple) visualization service.
Setting Up a Finley Simulation of Northridge
[Portal screenshots: select a fault from the USC database, view the selected fault components, run Finley, retrieve the results, and generate a movie]