User-Friendly Checkpointing and Stateful
Preemption in HPC Environments Using
Evergrid Availability Services
Keith D. Ball, PhD
Evergrid, Inc.
Oklahoma Supercomputing Symposium
October 3, 2007
Overview
• Challenges for Large HPC Systems
• Availability Services (AVS) Checkpointing
• AVS Performance
• Preemptive Scheduling
• AVS Integration with LSF on Topdawg
• Conclusions
• About Evergrid
Challenges for Large HPC Systems
Robustness & fault tolerance:
• Long runs + many nodes = increased likelihood of failure
• Need to ensure the real-time and compute-time “investment” in long computations
Scheduling: need a “stateful preemption” capability for efficient and optimal fair-share scheduling
Without stateful preemption:
• High-priority jobs terminate low-priority jobs, forcing them to restart from the beginning
• Increases average job turnaround time and decreases utilization rate
Maintenance: long time to quiesce the system; hard to do scheduled (or emergency) maintenance without killing jobs
Relentless Effect of Scale
For an N-node system in which each node has reliability R, the system MTBF falls off as

MTBF = 1 / (1 - R^N)

[Figure: system MTBF in hours versus system size N (2,048 to 131,072 nodes), plotted for per-node reliability R = 0.9999, 0.99999, and 0.999999.]
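As a rough worked example (reading R as the probability that a single node stays up for one hour, an interpretation the slide does not spell out): with R = 0.99999 and N = 8,192, R^N ≈ 0.92, so MTBF ≈ 1 / (1 - 0.92) ≈ 13 hours, the same order of magnitude as the real systems on the next slide.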
What Happens in the Real World?
System         #CPUs     Reliability
ASCI Q          8,192    MTBI: 6.5 hrs (114 unplanned outages/month)
ASCI White      8,192    MTBF: 5 hrs ('01), 40 hrs ('05)
PSC Lemieux     3,016    MTBI: 9.7 hrs
Google         15,000    20 reboots/day, 2-3% replaced/yr
Source: D. Reed, High-end computing: The challenge of scale, May 2004
Solution Requirements
How about a checkpoint/restart (CP/R) capability?
Need the following features to be useful in HPC systems:
• “Just works”: allows users to do their research (and not
more programming!)
• No recoding or recompiling: allows application developers
to focus on their domain (and not system programming)
• Transparent, standardized CP/R: restart and/or migrate applications
between machines without side effects
• CP/R must be automatic and integrate with existing use of
resource managers and schedulers to fully realize potential
Evergrid Availability Services (AVS)
• Implemented via dynamically-linked library libavs.so
– Uses the LD_PRELOAD env. variable, so no recompiling! (see the example below)
• Completely asynchronous and concurrent CP/R
• Incremental checkpointing
– Fixed upper bound on checkpoint file size
– Tunable “page” size
• Application/OS Transparent
– Migration capable
– Stateful preemption for both serial and parallel jobs
• Integrates with commercial and open-source queuing
systems: LSF, PBS, Torque/Maui, etc.
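A minimal sketch of the LD_PRELOAD mechanism in practice (the install path under /opt/evergrid and the mpirun command line are assumptions for illustration; only the library name libavs.so and the use of LD_PRELOAD come from the slide above):

# The dynamic linker loads libavs.so ahead of the application's own
# libraries, so AVS can interpose on library and system calls without
# recompiling or relinking the application.
export LD_PRELOAD=/opt/evergrid/lib/libavs.so    # path is an assumption
mpirun -np 64 ./my_application                   # unmodified binary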
Technology: OS Abstraction Layer
[Diagram: applications and their libraries run in user space on each node of the server/OS pool, on top of an AVS OS-abstraction layer that sits between the application libraries and the operating system, with nodes connected by the interconnect.]

OS Abstraction Layer key features:
• Decouples applications from the operating system
• Transparent fault tolerance for stateful applications
• Pre-emptive scheduling
• Distributed: N nodes running the same or different apps
• Transparent: no modifications to OS or application
• Performance: < 5% overhead
What Do We Checkpoint?
AVS virtualizes the following resources used by applications to ensure
transparent CP/R:
Memory
• Heap
• mmap()’d pages
• Stack
• Registers
• Selected shared libs

Files
• Open descriptors
• STDIO streams
• STDIN, STDOUT
• File contents (COW)
• Links, directories

Network
• BSD Sockets
• IP Addresses
• MVAPICH 0.9.8
• OFED, VAPI

Process
• Process ID
• Process group ID
• Thread ID
• fork(), Parent/Child
• Shared Memory
• Semaphores
Checkpoint Storage Modes
Shared-filesystem checkpointing
• Best for jobs using fewer than ~16 processors
• Works with NFS, Lustre, GPFS, SAN, …
Local-disk checkpointing
• More efficient for large distributed computations
• Provides for “mirrored checkpointing”
– Backup of the checkpoint in case checkpointing fails or corrupts the local copy
– Provides redundancy: checkpoint automatically
recovered from the mirror if local disk/machine fails
Local Disk Checkpointing & Mirroring
[Diagram: checkpoints are written to each node’s local disk and mirrored to another node, so they survive a local disk or machine failure.]
Interoperability
Application types:
• Parallel/distributed
• Serial
• Shared-memory (testing)
• Stock MPICH, MVAPICH (customized), OpenMPI (underway)
Interconnect fabrics:
• Infiniband, Ethernet (p4, “GigE”), 10GigE
• Potential Myrinet support via OpenMPI
Operating Systems:
• RHEL 4, 5 (+ CentOS, Fedora)
• SLES 9, 10
Architecture: 64-bit Linux (x86_64)
Supported platforms, apps, etc. are customer-driven
Tested Codes
QA-certified codes and compilers, with many more in the pipeline
Compilers
• Pathscale
• Intel Fortran
• Portland Group
• GNU Compilers
Benchmarks
• Linpack
• NAS
• STREAM
• IOzone
• TI-06 (DoD) apps
Academic Codes
• LAMMPS, Amber, VASP
• MPIBlast, ClustalW-MPI
• ARPS, WRF
• HYCOM
Commercial Codes
• LS-DYNA
• StarCD (CFD)
• Cadence and other EDA
apps underway
Runtime & Checkpoint Overhead
Virtualization and checkpoint overheads are negligible (< 5%) with most workloads:

[Table: runtime overhead for five test applications. With AVS alone: 0.2%, 1.4%, 0.9%, 1.2%, 0.5%. With AVS plus 1 checkpoint/hour: 2.6%, 3.3%, 1.3%, 2.1%, 3.5%.]
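A generic way to reproduce this kind of measurement for a serial application (a sketch, not an Evergrid procedure; the libavs.so path is an assumption) is to compare wall-clock times with and without the preloaded library:

# baseline run
/usr/bin/time -p ./app
# same run with AVS interposed via LD_PRELOAD (path is an assumption)
LD_PRELOAD=/opt/evergrid/lib/libavs.so /usr/bin/time -p ./app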
Memory Overhead
On a per-node basis, the RAM overhead is constant.
Preemptive Scheduling
[Diagram: jobs from a high-priority queue preempt running low-priority jobs; the preempted jobs are checkpointed and later resumed from the low-priority queue.]

Increases server utilization & job throughput by 10-50% based on priority mix
Integration with LSF: Topdawg @ OU
Topdawg cluster at OSCER
• 512 dual-core Xeon 3.20 GHz, 2MB cache, 4GB RAM
• RHEL 4.3, kernel 2.6.9-55.EL_lustre-1.4.11smp
• Using Platform LSF 6.1 as resource manager and scheduler
Objective: set up two queues with preemption (“lowpri” and “hipri”)
• lowpri: long/unlimited run time, but preemptable by hipri
• hipri: time-limited, but can preempt lowpri jobs
e.g., a long-running (24-hour) low-priority clustalw-mpi job can be preempted by 4-6-hour ARPS and WRF jobs
Integration with LSF: Topdawg @ OU
Checkpointing and preemption under LSF
• Uses echkpnt and erestart for checkpointing/preempting and
restarting
• Allows use of custom methods “echkpnt.xxx” and “erestart.xxx”
• Checkpoint method defined as environment variable, or in lsf.conf
• Checkpoint directory, interval, and preemption defined in lsb.queues (see the sketch after this list)
Evergrid integration of AVS into LSF
• Introduces methods echkpnt.evergrid and erestart.evergrid
to handle start, checkpointing, and restart under AVS
• Uses Topdawg variables MPI_INTERCONNECT, MPI_COMPILER to
determine parallel vs. serial, IB vs. p4, run-time compiler libs
• User sources only one standard Evergrid script from within bsub script!
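A hedged sketch of what the corresponding LSF configuration might look like (the queue names come from the slides; the priorities, run limit, checkpoint directory, and checkpoint period are illustrative assumptions; LSB_ECHKPNT_METHOD is the standard LSF hook for selecting custom echkpnt/erestart methods):

In lsf.conf:
LSB_ECHKPNT_METHOD=evergrid          # use echkpnt.evergrid / erestart.evergrid

In lsb.queues:
Begin Queue
QUEUE_NAME  = hipri
PRIORITY    = 70                     # illustrative value
RUNLIMIT    = 360                    # minutes; "time-limited"
PREEMPTION  = PREEMPTIVE[lowpri]     # may preempt lowpri jobs
DESCRIPTION = High-priority, time-limited queue
End Queue

Begin Queue
QUEUE_NAME  = lowpri
PRIORITY    = 30                     # illustrative value
PREEMPTION  = PREEMPTABLE[hipri]     # can be checkpointed and preempted by hipri
CHKPNT      = /scratch/chkpnt 60     # checkpoint dir and period in minutes (illustrative)
DESCRIPTION = Preemptable low-priority queue
End Queue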
Integration with LSF: Topdawg @ OU
In environment (/etc/bashrc, /etc/csh.cshrc):
export EVERGRID_BASEDIR=/opt/evergrid
Before starting job:
export MPI_COMPILER=gcc
export MPI_INTERCONNECT=infiniband
At the top of your bsub script:
## Load the Evergrid and LSF integration:
source $EVERGRID_BASEDIR/bsub/env/evergrid-avs-lsf.src
Submitting a long-running preemptable job:
bsub -q lowpri < clustalw-job.bsub
Submitting a high-priority job:
bsub -q hipri < arps-job.bsub
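Putting the pieces above together, a complete lowpri submission script might look like the following sketch (only the source line and the two exports come from the slides; the #BSUB options, slot count, and the clustalw-mpi command line are illustrative assumptions):

#!/bin/bash
#BSUB -q lowpri                # preemptable low-priority queue
#BSUB -n 16                    # number of slots (illustrative)
#BSUB -J clustalw              # job name (illustrative)
#BSUB -o clustalw.%J.out       # output file (illustrative)

## Load the Evergrid and LSF integration:
source $EVERGRID_BASEDIR/bsub/env/evergrid-avs-lsf.src

## Topdawg variables used by the integration script:
export MPI_COMPILER=gcc
export MPI_INTERCONNECT=infiniband

## Launch the MPI application (command line is a placeholder):
mpirun -np 16 clustalw-mpi -infile=seqs.fasta -outfile=seqs.aln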
What’s Underway
• Working with OpenMPI
- Support for Myrinet
• Growing list of supported applications
- EDA, simulation, …
• Configure LSF for completely transparent integration
Conclusions
Evergrid’s Availability Services provides:
• Transparent, scalable checkpointing for HPC
applications
• Compute time overhead of < 5% for most applications
• Bounded, nominal memory overhead
• Protection against hardware faults, ensuring jobs run to completion
• Seamless integration into resource managers and
schedulers for preemptive scheduling, maintenance
and job recovery
Reference
Ruscio, J.F., Heffner, M.A., and Varadarajan, S., “DejaVu: Transparent User-Level Checkpointing, Migration, and Recovery for Distributed Systems,” IEEE International Parallel and Distributed Processing Symposium (IPDPS) 2007, 26-30 March 2007, pp. 1-10.
About Evergrid
Vision: Build a vertically integrated management system that makes
multi-datacenter scale-out infrastructures behave as a single
managed entity
HPC: Cluster Availability Management Suite (CAMS):
Availability Services (AVS), Resource Manager (RSM)
Enterprise: DataCenter Management Suite: AVS, RSM Enterprise,
Live Migration, Load Manager, Provisioning
Founded: Feb. 2004 by Dr. Srinidhi Varadarajan (VA Tech, “System X”) and B. J. Arun
Team: 50+ employees: R&D in Blacksburg, VA and Pune, India.
HQ in Fremont, CA.
Patents: 1 patent pending, 6 patents filed, 2 in process
Acknowledgements
• NSF (funding for original research)
• OSCER (Henry Neeman, Brett Zimmerman, David Akin, Jim White)
Finding Out More
To find out more about Evergrid Software, contact:
Keith Ball
Sales:
Natalie Van Unen
617-784-8445
[email protected]
Partnering opportunities:
Mitchell Ratner
510-668-0500 ext. 5058
[email protected] [email protected]
http://www.evergrid.com
Note: Evergrid will be at booth 2715 at the SC07 conference in Reno, Nevada, Nov. 13-15. Come by for a demo and a presentation on other products.