The Path to Extreme Supercomputing—LACSI Workshop
DARPA HPCS
David Koester, Ph.D.
DARPA HPCS Productivity Team
12 October 2004
Santa Fe, NM
This work is sponsored by the Department of Defense under Army Contract W15P7T-05-C-D001. Opinions, interpretations,
conclusions, and recommendations are those of the author and are not necessarily endorsed by the United States Government.
Outline
• Brief DARPA HPCS Overview
  – Impacts
  – Programmatics
  – HPCS Phase II Teams
  – Program Goals
  – Productivity Factors — Execution & Development Time
  – HPCS Productivity Team Benchmarking Working Group
• Panel Theme/Question
– How much?
– How fast?
High Productivity Computing Systems
Create a new generation of economically viable computing systems (2010) and a procurement methodology (2007-2010) for the security/industrial community

Impact:
• Performance (time-to-solution): speed up critical national security applications by a factor of 10X to 40X
• Programmability (idea-to-first-solution): reduce the cost and time of developing application solutions
• Portability (transparency): insulate research and operational application software from the system
• Robustness (reliability): apply all known techniques to protect against outside attacks, hardware faults, & programming errors

HPCS Program Focus Areas
Applications:
• Intelligence/surveillance, reconnaissance, cryptanalysis, weapons analysis, airborne contaminant modeling, and biotechnology

Fill the Critical Technology and Capability Gap
from Today (late 80's HPC technology) to the Future (Quantum/Bio Computing)
High Productivity Computing Systems
Program Overview
Create a new generation of economically viable computing systems (2010) and a procurement methodology (2007-2010) for the security/industrial community

[Program timeline figure: Phase 1 (Concept Study, New Evaluation Framework) leads to Phase 2 (2003-2005: Advanced Design & Prototypes, Test Evaluation Framework, with a Technology Assessment Review at the program's half-way point), which leads to Phase 3 (2006-2010: Full Scale Development, in which vendors deliver Petascale/s systems and a validated procurement evaluation methodology).]
HPCS Phase II Teams
[Team chart figure:
• Industry: PI: Elnozahy; PI: Mitchell; PI: Smith
• Mission Partners
• Productivity Team (Lincoln Lead): PIs Kepner, Lucas, Koester, Basili, Benson & Snavely, Dongarra; PIs: Vetter, Lusk, Post, Bailey; PIs: Gilbert, Edelman, Ahalt, Mitchell (institutions shown include MIT Lincoln Laboratory, MITRE, ISI, CSAIL, and Ohio State)]
HPCS Program Goals
Productivity Goals
• HPCS overall productivity goals:
  – Execution (sustained performance): 1 Petaflop/s (scalable to greater than 4 Petaflop/s)
    Reference: Production workflow
  – Development: 10X over today's systems
    Reference: Lone researcher and Enterprise workflows
[Workflow figure: the Production workflow is an Observe, Orient, Decide, Act loop (with Visualize, Simulation, and Experiment); the Lone Researcher and Enterprise workflows span Development (Design, Theory, Port Legacy Software) and Execution.]
10x improvement in time to first solution!
HPCS Program Goals
Productivity Framework
Productivity = Utility/Cost = U(T) / (C_S + C_O + C_M)
[Productivity framework figure: Activity & Purpose Benchmarks and System Parameters (examples: BW bytes/flop (balance), memory latency, memory size, processor flop/cycle, number of processors, clock frequency, bisection bandwidth, power/system, # of racks, code size, restart time, peak flop/s, ...) feed a model of the system that produces Execution Time or Development Time; combined with Actual Work Flows, these yield Productivity Metrics (Utility/Cost).]
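As a purely illustrative reading of this ratio (not part of the original slides), the sketch below assumes a hypothetical utility function that decays with time-to-solution and invented figures for the system (C_S), operating (C_O), and remaining (C_M) costs:

```python
# Illustrative sketch of the HPCS productivity ratio:
#   Productivity = U(T) / (C_S + C_O + C_M)
# The utility function and all cost figures below are hypothetical.

def utility(time_to_solution_days, peak_utility=1.0, half_life_days=30.0):
    """Assumed utility: halves every `half_life_days` (quicker is better)."""
    return peak_utility * 0.5 ** (time_to_solution_days / half_life_days)

def productivity(time_to_solution_days, c_s, c_o, c_m):
    """U(T) divided by total cost: system (C_S), operating (C_O), and C_M."""
    return utility(time_to_solution_days) / (c_s + c_o + c_m)

print(productivity(time_to_solution_days=45.0, c_s=1.0, c_o=0.5, c_m=0.2))
```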
HPCS Program Goals
Hardware Challenges
• General purpose architecture capable of:
  1) 2+ PF/s LINPACK
  2) 6.5 PB/sec data STREAM bandwidth
  3) 3.2 PB/sec bisection bandwidth
  4) 64,000 GUPS
[Figure relating the HPCS Program Goals and the HPCchallenge Benchmarks: subsystem performance indicators (HPL, PTRANS, STREAM, FFT, RandomAccess) and Mission Partner Applications plotted by spatial locality (low to high) versus temporal locality (low to high).]
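For context on the fourth target (not from the original slide): GUPS, giga-updates per second, is measured by the HPCchallenge RandomAccess kernel as random read-modify-write updates to a large in-memory table. Below is a toy, serial Python sketch of that style of measurement, with arbitrary table and update sizes; it is not the official benchmark code.

```python
# Simplified, serial illustration of a RandomAccess-style GUPS measurement.
# The real HPCchallenge benchmark uses a huge table, a defined pseudo-random
# update stream, and runs across the whole machine; sizes here are toys.
import random
import time

TABLE_BITS = 20
TABLE_SIZE = 1 << TABLE_BITS        # ~1M entries
NUM_UPDATES = 4 * TABLE_SIZE        # HPCchallenge performs 4x table-size updates

table = list(range(TABLE_SIZE))
start = time.perf_counter()
for _ in range(NUM_UPDATES):
    i = random.getrandbits(TABLE_BITS)   # random index into the table
    table[i] ^= i                        # read-modify-write update
elapsed = time.perf_counter() - start

gups = NUM_UPDATES / elapsed / 1e9       # giga-updates per second
print(f"{gups:.6f} GUPS (toy, single-core Python)")
# The HPCS target of 64,000 GUPS corresponds to 6.4e13 such updates per second.
```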
Productivity Factors
Execution Time & Development Time
Productivity = Utility/Cost = U(T) / (C_S + C_O + C_M)
• Utility and some costs are relative to:
  – Workflow (WkFlow)
  – Execution Time (ExecTime)
  – Development Time (DevTime)
  Utility = f(WkFlow, ExecTime, DevTime)
• Reductions in both Execution Time and Development Time contribute to positive increases in Utility
  – Utility generally is inversely related to time
  – Quicker is better
• Reductions in both Execution Time and Development Time contribute to positive decreases in operating costs
  – Reduction in programmer costs
  – More work performed over a period
• However, systems that will provide increased utility and decreased operating costs may have a higher initial procurement cost
  – Need productivity metrics to justify the higher initial cost
[Table in figure: with low ExecTime and low DevTime, Utility U(T) is high (good) rather than low (bad) and operating cost C_O is low (good) rather than high (bad), while the procurement cost C_S may remain high (bad) rather than low (good).]
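A small numeric sketch of this trade-off, with entirely hypothetical times and costs: the second system costs more to procure, yet its productivity is higher because development and execution times (and operating cost) fall.

```python
# Hypothetical comparison of two systems under Productivity = U(T) / (C_S + C_O + C_M).
# Utility is modeled as inversely proportional to total time-to-solution
# (DevTime + ExecTime); every number below is invented for illustration.

def productivity(dev_days, exec_days, c_s, c_o, c_m):
    utility = 1.0 / (dev_days + exec_days)   # "quicker is better"
    return utility / (c_s + c_o + c_m)

today     = productivity(dev_days=90, exec_days=30, c_s=1.0, c_o=0.8, c_m=0.2)
hpcs_like = productivity(dev_days=9,  exec_days=3,  c_s=2.5, c_o=0.4, c_m=0.2)

print(f"today: {today:.5f}   hpcs-like: {hpcs_like:.5f}")
# Despite 2.5x the procurement cost, the faster system's productivity is roughly 6x higher.
```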
HPCS Benchmark Spectrum
[Benchmark spectrum figure:
• 8 HPCchallenge Benchmarks (Local: DGEMM, STREAM, RandomAccess, 1D FFT; Global: Linpack, PTRANS, RandomAccess, 1D FFT)
• Many (~40) Micro & Kernel Benchmarks (discrete math, graph analysis, linear solvers, signal processing, simulation, I/O, others)
• Several (~10) Small Scale Applications, including 6 Scalable Compact Apps (pattern matching, graph analysis, simulation x3, signal processing)
• 9 Simulation Applications (Current: UM2000, GAMESS, OVERFLOW, LBMHD, RFCTH, HYCOM; Near-Future: NWChem, ALEGRA, CCSM)
The figure spans Existing, Emerging, and Future Applications and labels the benchmarks as purpose benchmarks, spanning kernels, execution bounds, system bounds, and execution/development indicators.]
• The spectrum of benchmarks provides different views of the system
  – HPCchallenge pushes spatial and temporal boundaries; sets performance bounds
  – Applications drive system issues; set legacy code performance bounds
• Kernels and Compact Apps for deeper analysis of execution and development time
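As a reminder of what the "local" HPCchallenge kernels compute, here is a toy NumPy sketch of three of them (STREAM triad, DGEMM, 1D FFT). This is illustrative only; the real benchmarks are tuned, timed C/MPI codes, and the global variants additionally stress the interconnect.

```python
# Minimal NumPy sketches of three "local" HPCchallenge kernels.
# Real HPCchallenge runs tuned C/MPI implementations at machine scale;
# the global variants (HPL, PTRANS, global RandomAccess/FFT) also stress
# the interconnect. Array sizes here are toy values.
import numpy as np

# STREAM triad: a = b + alpha*c (memory bandwidth)
n = 1 << 22
b, c = np.random.rand(n), np.random.rand(n)
a = b + 3.0 * c

# DGEMM: C = alpha*A@B + beta*C (floating-point throughput)
m = 512
A, B, C = (np.random.rand(m, m) for _ in range(3))
C = 1.0 * (A @ B) + 1.0 * C

# 1D FFT (mixed spatial/temporal locality)
x = np.random.rand(1 << 16) + 1j * np.random.rand(1 << 16)
X = np.fft.fft(x)

print(a[:2], C[0, 0], abs(X[0]))
```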
Panel Theme/Question
• "How much should we change supercomputing to enable the applications that are important to us, and how fast?"
• How much? — HPCS is intended to "Fill the Critical Technology and Capability Gap" from Today (late 80's HPC technology) to the Future (Quantum/Bio Computing)
• How fast?
  – Meaning when — HPCS SN-001 in 2010
  – Meaning performance — Petascale/s sustained
• I'm here to listen to you — the HPCS Mission Partners
  – Scope out emerging and future applications for 2010+ delivery
    (What applications will be important to you?)
  – Collect data for the HPCS Vendors on future applications, kernels, and application characterizations and models
Statements
• Moore's Law cannot go on forever
  – Proof: 2^x → ∞ as x → ∞
  – So what?
• Moore's Law doesn't matter as long as we need to invest the increase in transistors into machine state — i.e., overhead — instead of real use