GRID Computing

Single System Image: Infrastructure and Tools
Cluster Computer Architecture

[Layered architecture diagram: sequential and parallel applications run on top of
a parallel programming environment and the cluster middleware (the single system
image and availability infrastructure). The middleware spans multiple
PC/workstation nodes, each with its own communications software and network
interface hardware, all connected by a cluster interconnection network/switch.]
Major Issues in Cluster Design

• Enhanced Performance (performance at low cost)
• Enhanced Availability (failure management)
• Single System Image (look-and-feel of one system)
• Size Scalability (physical & application)
• Fast Communication (networks & protocols)
• Load Balancing (CPU, network, memory, disk)
• Security and Encryption (clusters of clusters)
• Distributed Environment (social issues)
• Manageability (administration and control)
• Programmability (simple API if required)
• Applicability (cluster-aware and non-aware applications)
A Typical Cluster Computing Environment

[Software stack, top to bottom:]
Applications
PVM / MPI / RSH
???
Hardware/OS
The missing link is provided by cluster middleware/underware

[Software stack, top to bottom:]
Applications
PVM / MPI / RSH
Middleware or Underware
Hardware/OS
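To make the role of this layer concrete, here is a minimal sketch using the standard MPI C API: the application addresses processes only by rank, and the middleware decides which physical node each rank runs on. The launch command in the follow-up note is the conventional one, not something prescribed by these slides.

/* Minimal MPI program (C): the middleware layer (here MPI) hides which
   physical node each process runs on -- the code is identical on all nodes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, len;
    char node[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my process id within the job */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total processes in the job   */
    MPI_Get_processor_name(node, &len);     /* which node this rank landed on */

    printf("process %d of %d on node %s\n", rank, size, node);
    MPI_Finalize();
    return 0;
}

Launched with, e.g., mpirun -np 4 ./hello, the same binary prints a different node name per rank without any node-specific code.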
Middleware Design Goals

• Complete Transparency (Manageability):
  • Lets the user see a single cluster system:
    single entry point, ftp, telnet, software loading...
• Scalable Performance:
  • Easy growth of the cluster:
    no change of API & automatic load distribution.
• Enhanced Availability:
  • Automatic recovery from failures:
    employ checkpointing & fault-tolerance technologies;
    handle consistency of data when replicated.
What is Single System Image (SSI)?

• SSI is the illusion, created by software or hardware, that presents a
  collection of computing resources as one, whole resource.
• SSI makes the cluster appear like a single machine to the user, to
  applications, and to the network.
Benefits of SSI

• Transparent use of system resources.
• Transparent process migration and load balancing across nodes.
• Improved reliability and higher availability.
• Improved system response time and performance.
• Simplified system management.
• Reduced risk of operator errors.
• No need to be aware of the underlying system architecture to use
  these machines effectively.
Desired SSI Services

• Single Entry Point: telnet cluster.my_institute.edu instead of
  telnet node1.cluster.my_institute.edu (one name for the whole cluster;
  see the sketch after this list).
• Single File Hierarchy: /proc, NFS, xFS, AFS, etc.
• Single Control Point: management GUI.
• Single Virtual Networking.
• Single Memory Space: Network RAM / DSM.
• Single Job Management: GLUnix, Codine, LSF.
• Single GUI: like a workstation/PC windowing environment; it may be
  Web technology.
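A single entry point is often approximated below the middleware by publishing one DNS name that can resolve to any available node. The C sketch below uses only standard socket APIs; the cluster hostname is the illustrative one from the slide, and round-robin DNS is just one possible mechanism, not the one any particular SSI system mandates.

/* Sketch: one cluster name, many nodes. With round-robin DNS, repeated
   lookups of "cluster.my_institute.edu" (illustrative name) can return
   different node addresses, so users always connect to the same name. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;          /* IPv4 for simplicity */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("cluster.my_institute.edu", "telnet", &hints, &res) != 0) {
        fprintf(stderr, "lookup failed\n");
        return 1;
    }
    /* print every node address the single entry name maps to */
    for (p = res; p != NULL; p = p->ai_next) {
        char ip[INET_ADDRSTRLEN];
        struct sockaddr_in *a = (struct sockaddr_in *)p->ai_addr;
        inet_ntop(AF_INET, &a->sin_addr, ip, sizeof ip);
        printf("entry point resolves to node %s\n", ip);
    }
    freeaddrinfo(res);
    return 0;
}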
Availability Support Functions

• Single I/O Space: any node can access any peripheral or disk device
  without knowledge of its physical location.
• Single Process Space: any process on any node can create processes
  with cluster-wide process IDs, and they communicate through signals,
  pipes, etc., as if they were on a single node.
• Checkpointing and Process Migration: saves process state and
  intermediate results, in memory or on disk, to support rollback
  recovery when a node fails; also used by the RMS for load balancing
  (a sketch follows this list).
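The rollback idea can be sketched even at the application level. The checkpoint file name and interval below are illustrative; kernel-level systems such as MOSIX checkpoint transparently, without the application cooperating like this.

/* Application-level checkpointing sketch: periodically persist the loop
   state; on restart after a node failure, roll back to the last state. */
#include <stdio.h>

#define CKPT_FILE "state.ckpt"    /* illustrative checkpoint file name */
#define TOTAL_STEPS 1000000L

int main(void) {
    long i = 0, sum = 0;
    FILE *f = fopen(CKPT_FILE, "r");
    if (f) {                       /* restart: roll back to last checkpoint */
        fscanf(f, "%ld %ld", &i, &sum);
        fclose(f);
    }
    for (; i < TOTAL_STEPS; i++) {
        sum += i;                  /* the "real" computation */
        if (i % 10000 == 0) {      /* checkpoint every 10000 steps */
            f = fopen(CKPT_FILE, "w");
            fprintf(f, "%ld %ld\n", i + 1, sum);  /* resume after step i */
            fclose(f);
        }
    }
    printf("done: sum=%ld\n", sum);
    return 0;
}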
SSI Levels

SSI levels of abstraction:
• Application and Subsystem Level
• Operating System Kernel Level
• Hardware Level
SSI at Application and Subsystem Levels

Level: Application
  Examples: batch systems and system management
  Boundary: an application
  Importance: what a user wants

Level: Subsystem
  Examples: distributed DB, OSF DME, Lotus Notes, MPI, PVM
  Boundary: a subsystem
  Importance: SSI for all applications of the subsystem

Level: File system
  Examples: Sun NFS, OSF DFS, NetWare, and so on
  Boundary: shared portion of the file system
  Importance: implicitly supports many applications and subsystems

Level: Toolkit
  Examples: OSF DCE, Sun ONC+, Apollo Domain
  Boundary: explicit toolkit facilities: user, service name, time
  Importance: best level of support for heterogeneous systems

(c) In Search of Clusters
SSI at OS Kernel Level

Level: Kernel/OS layer
  Examples: Solaris MC, UnixWare, MOSIX, Sprite, Amoeba/GLUnix
  Boundary: each name space: files, processes, pipes, devices, etc.
  Importance: kernel support for applications and administrative subsystems

Level: Kernel interfaces
  Examples: UNIX (Sun) vnode, Locus (IBM) vproc
  Boundary: type of kernel objects: files, processes, etc.
  Importance: modularises SSI code within the kernel

Level: Virtual memory
  Examples: none supporting the OS kernel
  Boundary: each distributed virtual memory space
  Importance: may simplify implementation of kernel objects

Level: Microkernel
  Examples: Mach, PARAS, Chorus, OSF/1 AD, Amoeba
  Boundary: each service outside the microkernel
  Importance: implicit SSI for all system services

(c) In Search of Clusters
SSI at Hardware Level

Level: Memory
  Examples: SCI, DASH
  Boundary: memory space
  Importance: better communication and synchronization

Level: Memory and I/O
  Examples: SCI, SMP techniques
  Boundary: memory and I/O device space
  Importance: lower-overhead cluster I/O

(c) In Search of Clusters
SSI Characteristics

• Every SSI has a boundary.
• Single-system support can exist at different levels within a system,
  one able to be built on another.
SSI Boundaries

[Figure: nested SSI boundaries within a cluster; e.g., a batch system
defines its own SSI boundary.]

(c) In Search of Clusters
Relationship Among Middleware Modules
SSI via the OS Path!

1. Build SSI as a layer on top of the existing OS:
   • Benefits: makes the system quickly portable, tracks vendor
     software upgrades, and reduces development time.
   • i.e., new systems can be built quickly by mapping new services
     onto the functionality provided by the layer beneath
     (e.g., GLUnix; see the sketch below).
2. Build SSI at the kernel level, a true cluster OS:
   • Good, but cannot leverage OS improvements from the vendor.
   • E.g., UnixWare, Solaris MC, and MOSIX.
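As an illustration of the layered approach, a user-level "global spawn" can be built directly on existing remote-execution facilities. The node names and the selection policy below are purely hypothetical, and a real system like GLUnix is far more sophisticated; the point is only that approach 1 needs no kernel changes.

/* Sketch of approach 1: build a cluster service in user space on top of
   the existing OS, here by layering remote process creation on rsh. */
#include <stdio.h>
#include <stdlib.h>

static const char *nodes[] = { "node1", "node2", "node3" };  /* hypothetical */

/* pick a node; a real system would consult load information */
static const char *pick_node(void) {
    static int next = 0;
    return nodes[next++ % 3];    /* simple round-robin */
}

/* run `cmd` somewhere in the cluster; the caller never names a node */
int global_spawn(const char *cmd) {
    char line[512];
    snprintf(line, sizeof line, "rsh %s %s", pick_node(), cmd);
    return system(line);
}

int main(void) {
    global_spawn("hostname");    /* lands on whichever node was picked */
    return 0;
}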
SSI Systems & Tools

• OS-level SSI: SCO NSC UnixWare; Solaris MC; MOSIX, ...
• Middleware-level SSI: PVM, TreadMarks (DSM), GLUnix, Condor,
  Codine, Nimrod, ...
• Application-level SSI: PARMON, Parallel Oracle, ...
SCO NonStop Cluster for UnixWare
http://www.sco.com/products/clustering/

[Architecture diagram: two or more UP or SMP nodes, each running users,
applications, and systems management over standard OS kernel calls on
standard SCO UnixWare with clustering hooks, augmented by modular kernel
extensions; each node accesses its own devices and connects to the other
nodes over ServerNet.]
How Does NonStop Clusters Work?

Modular extensions and hooks provide:
• single cluster-wide filesystem view;
• transparent cluster-wide device access;
• transparent swap-space sharing;
• transparent cluster-wide IPC;
• high-performance internode communications;
• transparent cluster-wide processes, migration, etc.;
• node-down cleanup and resource failover;
• transparent cluster-wide parallel TCP/IP networking;
• application availability;
• cluster-wide membership and cluster time sync;
• cluster system administration;
• load leveling.
Sun Solaris MC

Solaris MC: A High-Performance Operating System for Clusters
• A distributed OS for a multicomputer: a cluster of computing nodes
  connected by a high-speed interconnect.
• Provides a single system image, making the cluster appear like a
  single machine to the user, to applications, and to the network.
• Built as a globalization layer on top of the existing Solaris kernel.

Interesting features:
• extends the existing Solaris OS
• preserves the existing Solaris ABI/API compliance
• provides support for high availability
• uses C++, IDL, and CORBA in the kernel
• leverages Spring technology
Solaris MC: Solaris for MultiComputers

[Solaris MC architecture diagram: applications call into the standard
system call interface; beneath it, Solaris MC adds a C++ object framework
(with object invocations to other nodes) covering the network, file
system, and processes, layered on the existing Solaris 2.5 kernel. It
provides a global file system, globalized process management, and
globalized networking and I/O.]

http://www.sun.com/research/solaris-mc/
Solaris MC Components

[Same architecture diagram as above, annotated with the major components:]
• object and communication support
• high-availability support
• PXFS global distributed file system
• process management
• networking
MOSIX: Multicomputer OS for UNIX
http://www.mosix.cs.huji.ac.il/ || mosix.org

• An OS module (layer) that provides applications with the illusion of
  working on a single system.
• Remote operations are performed like local operations.
• Transparent to the application: the user interface is unchanged
  (see the fork sketch below).

[Software stack, top to bottom:]
Application
PVM / MPI / RSH
MOSIX
Hardware/OS
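To see what "unchanged user interface" means, consider the ordinary UNIX program below. The worker function is purely illustrative; the point is that there are no cluster-specific calls, yet under MOSIX the kernel may transparently migrate the forked workers to other nodes.

/* Ordinary UNIX fork/wait code with no cluster-specific calls. Under
   MOSIX, each child is a normal process that the kernel may preemptively
   migrate to a less-loaded node -- the source code does not change. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static long busy_work(int id) {       /* illustrative CPU-bound task */
    long s = 0;
    for (long i = 0; i < 100000000L; i++)
        s += i % (id + 2);
    return s;
}

int main(void) {
    for (int id = 0; id < 4; id++) {
        if (fork() == 0) {            /* child: a migratable worker */
            busy_work(id);
            _exit(0);
        }
    }
    while (wait(NULL) > 0)            /* parent: reap all workers */
        ;
    puts("all workers done");
    return 0;
}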
Main Tool

Preemptive process migration that can migrate any process, anywhere,
anytime.
• Supervised by distributed algorithms that respond on-line to global
  resource availability, transparently.
• Load balancing: migrates processes from overloaded to under-loaded
  nodes (a simplified sketch follows).
• Memory ushering: migrates processes from a node that has exhausted
  its memory, to prevent paging/swapping.
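The load-balancing decision can be caricatured as below. The threshold, load metric, and node count are invented for illustration and are far simpler than MOSIX's actual on-line, probabilistic algorithms.

/* Toy load-balancing decision: if this node's load exceeds the
   least-loaded node's by more than a threshold, pick that node as the
   migration target. All values here are illustrative only. */
#include <stdio.h>

#define NODES 4
#define IMBALANCE_THRESHOLD 0.5   /* hypothetical cutoff */

/* returns target node index, or -1 if no migration is worthwhile */
int pick_migration_target(int me, const double load[NODES]) {
    int best = me;
    for (int n = 0; n < NODES; n++)
        if (load[n] < load[best])
            best = n;
    if (best != me && load[me] - load[best] > IMBALANCE_THRESHOLD)
        return best;
    return -1;
}

int main(void) {
    double load[NODES] = { 2.4, 0.3, 1.1, 0.9 };   /* sample loads */
    int target = pick_migration_target(0, load);
    if (target >= 0)
        printf("node 0 should migrate a process to node %d\n", target);
    else
        printf("no migration needed\n");
    return 0;
}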
MOSIX for Linux at HUJI

A scalable cluster configuration:
• 50 Pentium II 300 MHz
• 38 Pentium Pro 200 MHz (some are SMPs)
• 16 Pentium II 400 MHz (some are SMPs)
• Over 12 GB cluster-wide RAM
• Connected by a 2.56 Gb/s Myrinet LAN
• Runs Red Hat 6.0, based on kernel 2.2.7
• Upgrades: hardware with Intel, software with Linux

Download MOSIX: http://www.mosix.cs.huji.ac.il/