High Performance Computing in Academia
Overview of High Performance Computing at KFUPM
Khawar Saeed Khan
ITC, KFUPM
Agenda
► KFUPM HPC Cluster Details
► Brief look at RHEL and Windows 2008 HPC Environments
► Dual Boot Configuration
► Job Scheduling
► Current and Soon to be available Software
► Expectations from users
Why Cluster Computing and Supercomputing?
► Some Problems Larger Than a Single Computer Can Process
► Memory Space (>> 4-8 GB)
► Computation Cost
► More Iterations and Large Data Sets
► Data Sources (Sensor Processing)
► National Pride
► Technology Migrates to Consumers
How Fast Are Supercomputers?
► The Top Machines Can Perform Tens of Trillions of Floating Point Operations per Second (TeraFLOPS)
► They Can Store Trillions of Data Items in RAM!
► Example: 1 km grid over the USA (worked through in the sketch below)
► 4000 x 2000 x 100 = 800 million grid points
► If each point has 10 values, and each value takes 10 ops to compute => 80 billion ops per iteration
► If we want 1-hour timesteps for 10 years, 87,600 iterations
► More than 7 Peta-ops total!
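The totals in the example above follow from straightforward multiplication. Below is a minimal back-of-the-envelope sketch in C that simply restates the slide's numbers (grid size, values per point, ops per value, hourly timesteps for 10 years) and multiplies them through.

/* Back-of-the-envelope estimate for the 1 km USA grid example above.
 * The grid size, values per point, ops per value, and timestep count
 * are taken directly from the slide; nothing else is assumed. */
#include <stdio.h>

int main(void) {
    double points   = 4000.0 * 2000.0 * 100.0;   /* ~800 million grid points  */
    double values   = 10.0;                      /* values stored per point   */
    double ops_val  = 10.0;                      /* ops to compute each value */
    double ops_iter = points * values * ops_val; /* ~80 billion ops/iteration */

    double iters = 10.0 * 365.0 * 24.0;          /* hourly steps for 10 years: 87,600 */
    double total = ops_iter * iters;             /* > 7e15 ops (7 peta-ops)   */

    printf("ops per iteration: %.3e\n", ops_iter);
    printf("iterations:        %.0f\n", iters);
    printf("total ops:         %.3e\n", total);
    return 0;
}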
Lies, Damn Lies, and Statistics
► Manufacturers Claim Ideal Performance
► 2 FP Units @ 3 GHz => 6 GFLOPS (see the sketch below)
► Dependencies mean we won't get that much!
► How Do We Know Real Performance?
► Top500.org Uses High-Performance LINPACK (HPL)
► http://www.netlib.org/benchmark/hpl
► Solves a Dense Set of Linear Equations
► Much Communication and Parallelism
► Not Necessarily Reflective of Target Apps
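For context, the sketch below shows where an ideal peak figure like "2 FP Units @ 3 GHz => 6 GFLOPS" comes from, and contrasts it with an illustrative sustained number of the kind an HPL run might report. The sustained fraction used here is an assumption for illustration only, not a measurement for any particular machine.

/* Minimal sketch of how a peak figure like the "2 FP units @ 3 GHz
 * => 6 GFLOPS" claim above is derived.  The 0.75 sustained fraction
 * is an illustrative assumption, not a measured number for any
 * particular machine. */
#include <stdio.h>

int main(void) {
    double fp_units_per_core = 2.0;   /* floating-point units per core */
    double clock_ghz         = 3.0;   /* clock frequency in GHz        */
    double cores             = 1.0;   /* single core, as in the slide  */

    double rpeak = fp_units_per_core * clock_ghz * cores;  /* GFLOPS */
    printf("Theoretical peak:                %.1f GFLOPS\n", rpeak);

    /* Real codes never reach the peak; HPL (the Top500 benchmark)
     * sustains only a fraction of it, and target applications often less. */
    double assumed_hpl_fraction = 0.75;   /* assumption for illustration */
    printf("Illustrative HPL-like sustained: %.1f GFLOPS\n",
           rpeak * assumed_hpl_fraction);
    return 0;
}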
HPC in Academic Institutions
► HPC cluster resources are no longer a research topic but a core part of the research infrastructure.
► Researchers are using HPC clusters and are dependent on them
► Increased competitiveness
► Faster time to research
► Prestige, to attract talent and grants
► Cost-effective infrastructure spending
Top Universities using HPC Clusters
► National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, United States
► Texas Advanced Computing Center / University of Texas at Austin, United States
► National Institute for Computational Sciences / University of Tennessee, United States
► Information Technology Center, The University of Tokyo, Japan
► Stony Brook / BNL, New York Center for Computational Sciences, United States
► GSIC Center, Tokyo Institute of Technology, Japan
► University of Southampton, UK
► University of Cambridge, UK
► Oklahoma State University, US
Top Research Institutes using HPC Clusters
► DOE/NNSA/LANL, United States
► Oak Ridge National Laboratory, United States
► NASA/Ames Research Center/NAS, United States
► Argonne National Laboratory, United States
► NERSC/LBNL, United States
► NNSA/Sandia National Laboratories, United States
► Shanghai Supercomputer Center, China
KFUPM HPC Environment
HPC @ KFUPM
► Planning & Survey started in early 2008
► Procured in October 2008
► Cluster Installation and Testing during Nov-Dec-Jan
► Applications like Gaussian with Linda, DL-POLY and ANSYS tested on the cluster setup
► Test problems were provided by professors of the Chemistry, Physics and Mechanical Engineering departments
► More applications will be installed on the cluster shortly, e.g., GAMESS-UK
KFUPM Cluster Hardware
HPC IBM Cluster 1350
► 128 nodes, 1024 cores
Master Nodes
► 3x Xeon E5405 Quad-Core, 8 GB RAM, 2x 500 GB HD (mirrored)
Compute Nodes
► 128 nodes (IBM 3550, rack-mounted). Each node has two Quad-Core Xeon E5405 processors (2 GHz) and 8 GB RAM; 64 TB total local storage (see the cross-check below).
► Interconnect: 10 Gb Ethernet. Uplink: 1000Base-T Gigabit.
Operating Systems for Compute Nodes (Dual Boot)
► Windows HPC Server 2008 and Red Hat Enterprise Linux 5.2
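As a quick cross-check of the figures on this slide, the sketch below recomputes the total core count from the node, socket and core counts, and the implied per-node local disk from the 64 TB total. The per-node disk size is derived from those totals, not stated explicitly on the slide.

/* Cross-check of the aggregate figures on this slide: 128 dual-socket
 * quad-core nodes giving 1024 cores, and 64 TB of total local storage
 * (about 500 GB per compute node).  The per-node disk size is derived
 * from the totals, not stated on the slide. */
#include <stdio.h>

int main(void) {
    int nodes            = 128;
    int sockets_per_node = 2;    /* dual-processor nodes         */
    int cores_per_socket = 4;    /* Quad-Core Xeon E5405 (2 GHz) */

    int total_cores = nodes * sockets_per_node * cores_per_socket;
    printf("Total cores: %d\n", total_cores);           /* 1024 */

    double total_local_tb = 64.0;
    printf("Local storage per node: %.0f GB\n",
           total_local_tb * 1000.0 / nodes);            /* ~500 GB */
    return 0;
}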
Dual Boot clusters
► Choosing the right operating system for an HPC cluster can be a very difficult decision.
► This choice usually has a big impact on the Total Cost of Ownership (TCO) of the cluster.
► Parameters such as multiple user needs, application environment requirements and security policies add to the complex human factors involved in training, maintenance and support planning, all of which carry risks for the final return on investment (ROI) of the whole HPC infrastructure.
► Dual Boot HPC clusters provide two environments (Linux and Windows in our case) for the price of one.
Key takeaways:
- Mixed clusters provide a low barrier to leveraging HPC-related hardware, software, storage and other infrastructure investments better: "Optimize flexibility of infrastructure"
- Maximize the utilization of the compute infrastructure by expanding the pool of users accessing the HPC cluster resources: "Ease of use and familiarity breeds usage"
Possibilities with HPC
► Computational fluid dynamics
► Simulation and Modeling
► Seismic tomography
► Nano Sciences
► Visualization
► Weather Forecasting
► Protein / Compound Synthesis
Available Software
► Gaussian with Linda
► ANSYS
► FLUENT
► Distributed MATLAB
► Mathematica
► DL_POLY
► MPICH (see the MPI example below)
► Microsoft MPI SDK
The following software will also be made available in the near future:
► Eclipse, GAMESS-UK, GAMESS-US, VASP and NW-CHEM
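Since MPICH and the Microsoft MPI SDK are both available, a minimal MPI program gives a feel for the parallel programming model supported on either operating system. This is only an illustrative sketch; the mpicc/mpiexec commands in the comment assume a standard MPICH setup, and the actual build and launch procedure on the KFUPM cluster may differ.

/* Minimal MPI example of the kind both MPICH (Linux) and MS-MPI
 * (Windows HPC Server 2008) can build and run.  With MPICH it would
 * typically be compiled as "mpicc hello_mpi.c -o hello_mpi" and
 * launched with "mpiexec -n 8 ./hello_mpi". */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total processes in job */
    MPI_Get_processor_name(name, &name_len);

    printf("Hello from rank %d of %d on node %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}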
Initial Results of Beta Testing
► A few applications, such as Gaussian, have been beta tested, and considerable speed-up in computing times has been reported
► MPI program runs tested on the cluster showed considerable speed-up compared to serial server runs (quantified in the sketch below)
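The usual way to quantify such a speed-up is the ratio of serial to parallel run time, with parallel efficiency as that ratio divided by the number of cores. The sketch below shows the calculation with placeholder timings; the numbers are hypothetical, not measured KFUPM results.

/* How a reported "speed up" is usually quantified.  The timings below
 * are placeholders, not measured KFUPM results. */
#include <stdio.h>

int main(void) {
    double t_serial   = 3600.0;  /* hypothetical serial run time (s)  */
    double t_parallel = 300.0;   /* hypothetical run time on 16 cores */
    int    cores      = 16;

    double speedup    = t_serial / t_parallel;
    double efficiency = speedup / cores;

    printf("Speedup:    %.1fx\n", speedup);           /* 12.0x */
    printf("Efficiency: %.0f%%\n", efficiency * 100); /* 75%   */
    return 0;
}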
HPC @ KFUPM
► Several Firsts
► Dual Boot Cluster
► Supports Red Hat Linux 5.2 and Windows HPC Server 2008
► Capability to support a variety of applications
► Parallel Programming Support
► Advanced Job Scheduling options
Expectations
► Own the system
► Respect others' jobs
► Assist the ITC HPC team by finding and sending complete installation, software procurement and licensing requirements
► Help other users by sharing your experience
► Use vBulletin at http://hpc.kfupm.edu.sa/