Sonoma_2008_Tues_Open Fabrics Alliance - Tabor


Addison Snell, VP/GM, [email protected]
Interconnect Trends in High Productivity Computing
Actionable Market Intelligence for High Productivity Computing
April 2008
Tabor: “High Productivity Computing”
Tabor Research is studying how HPC transcends
price/performance and server sales.
• Productivity analysis studies: How do users define
and measure productivity?
• Site census studies: How do users configure their
systems over time?
• Budget maps: What are the components of TCO?
• New online Productivity Analysis Tool
Expanding Technology Contexts in HPC
• Technology differentiation pushed outside the server
– HPC systems are no longer “self-contained”
– Interconnects, processors, OS, storage all may
come from different vendors
• Only a third of HPC budget goes to the server
– A third goes to other products and services
– The remaining third goes to facilities and staffing; much of
the facilities spending is “NIB”
• Network spending is about 10% of hardware spending (not
including the bundled system interconnect); a worked budget
sketch follows this list
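To make the budget map concrete, here is a minimal arithmetic sketch in C that applies the rough splits above to a hypothetical total budget. The $3M total is an assumed figure for illustration only, not a number from the study.

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical annual HPC budget (assumed figure for illustration). */
    double total = 3.0e6;

    /* Rough thirds split reported by Tabor Research. */
    double servers   = total / 3.0;   /* server hardware             */
    double other     = total / 3.0;   /* other products and services */
    double fac_staff = total / 3.0;   /* facilities and staffing     */

    /* Network spending: ~10% of hardware, excluding the bundled
       system interconnect (assumes "hardware" ~= the server third). */
    double network = 0.10 * servers;

    printf("Servers:               $%.0f\n", servers);
    printf("Other products/svcs:   $%.0f\n", other);
    printf("Facilities + staffing: $%.0f\n", fac_staff);
    printf("Network (~10%% of hw): $%.0f\n", network);
    return 0;
}
```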
Expanding Usage Contexts in HPC
• Tabor Research is doing a study on “Edge HPC”
• HPC technology or application profiles outside the
traditional areas of engineering, science, analytics
– Complex event processing: Wide-scale, sensor-based, event-driven, real-time or near real-time
– Organization optimization: BI, data mining,
logistics, inventory / supply chain management
– Virtual environments: Online games, Second
Life, “augmented reality,” virtual economies
– Ultra-scale: Other usage of supercomputers
System (Node-to-Node) Interconnects
Data from Tabor Research HPC Site Census:
• Almost an even three-way split among Ethernet,
InfiniBand, and other high-end cluster interconnects
(Myrinet, Quadrics)
– Almost no 10GbE as a system fabric
– InfiniBand seems to have had more impact on
Ethernet than on the other high-end interconnects
• Average of 200 nodes per system (an SMP counts as 1 node)
• Data are skewed somewhat toward academic sites
LAN (Compute Room) Interconnects
Data from Tabor Research HPC Site Census:
• Ethernet dominates as LAN fabric
– Almost a third of Ethernet LANs have at least
some 10GbE
– 10GbE outnumbers InfiniBand by two to one
• Very little usage of anything other than Ethernet or
InfiniBand
– Some mentions of wireless, FC, Myrinet
– Here InfiniBand seems to have taken over from the
other fast non-Ethernet technologies
Storage Interconnects
Data from Tabor Comprehensive Research Study:
• Over one-third of HPC users implement IB as a
storage interconnect
– More common as storage infrastructure grows
– Native IB protocol most common, followed closely
by FC, with SATA a distant third
– IB is also the most common implementation of RDMA
(a device-enumeration sketch follows this list)
• On average, users place a 25%-30% premium on
doubling the storage bandwidth
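Since RDMA over IB comes up repeatedly in these results, here is a minimal sketch, assuming a host with the OpenFabrics libibverbs library installed, that enumerates the RDMA-capable devices a storage stack could build on. It is an illustration only, not part of the survey data.

```c
/* Build with: gcc list_ib.c -libverbs   (assumes libibverbs is installed) */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0) {
            /* Report a few capabilities relevant to RDMA storage targets. */
            printf("%s: %d physical port(s), up to %d memory regions\n",
                   ibv_get_device_name(devices[i]),
                   attr.phys_port_cnt, attr.max_mr);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```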
Other Thoughts
• Converged Fabric Strategies
– About a third are “likely” to implement
– Despite Ethernet's position in the LAN, users are much
more likely to consider a converged fabric on InfiniBand
• Impact of multi-core at low end
– In the near term, could create interest in SMP
– Is MPI equipped to handle the new level of parallelism
at the socket? (a hybrid sketch follows this list)
– Will changes in workload management or job
scheduling create new interconnect requirements?
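One common answer to the socket-level question is to keep MPI between nodes and use threads within the socket. Below is a minimal hybrid MPI + OpenMP sketch, an illustration assuming an MPI library and an OpenMP-capable compiler are available; it is not from the presentation.

```c
/* Build with: mpicc -fopenmp hybrid.c */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Request thread support so OpenMP threads can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One MPI rank per node (or per socket), threads fill the cores. */
    #pragma omp parallel
    {
        printf("MPI rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

Whether interconnects and job schedulers need to be aware of this rank-versus-thread split is exactly the open question raised in the bullet above.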
Interested in Tabor Research?
• Please help us in our research efforts!
• Join the HPC User Views Advisory Council
– Access to research
– Rewards for participation