
AN ASSESSMENT OF VIRTUALIZATION SYSTEMS
FOR HOSTING HPC APPLICATIONS IN CLOUD
COMPUTING
Ahmad Hassan
Centre for Advanced Computing and Emerging Technologies,
School of Systems Engineering,
The University of Reading, UK
29-MAR-10
Agenda
Personal Portfolio
Hypothesis
Motivation
Research concerns
Background
Experimental test-bed
Benchmark results
Conclusion
References
Appendix
Personal Portfolio

Worked as a researcher on the Large Hadron Collider (LHC)
experiment at the European Organization for Nuclear
Research (CERN), Switzerland
Erasmus Mundus MSc in Network and E-business
Centred Computing
University of Reading (UoR) – United Kingdom
Aristotle University of Thessaloniki (AUTh) – Greece
University Carlos III Madrid (UC3M) – Spain
Hypothesis

“Virtualization, and in particular cloud computing, are
viable technologies for executing HPC applications
with minimal performance degradation.”
Motivation

Leveraging the ability to run multiple virtual machines
concurrently on a single physical machine

Cloud Computing for running scientific applications

Virtualization features

Logging and monitoring of virtualization systems to support
Service Level Agreements (SLAs) in the cloud
Research Concerns

Do scientific applications run more efficiently on VM-based
systems than on bare hardware?
Does virtualization affect compute-intensive
applications?
Do logging and monitoring impact the
performance of virtual machines?
Application packaging and deployment on cloud-based
systems
Background
Virtualization
Cloud Computing
Virtualization Techniques
Related Research in HPC Benchmarking
Virtualization

Software abstraction layer for the underlying physical
hardware (CPU, disk, RAM and network)
The abstraction layer is known as the Virtual Machine Monitor
or hypervisor
Type-I Hypervisor
Hypervisor runs directly on top of the underlying physical
hardware, e.g. XEN
Type-II Hypervisor
Hypervisor lies one layer above the host operating system,
e.g. VMware Server
Cloud Computing (1/2)

Platform to deliver IT resources as a service:
compute, storage, collaboration tools, networking
and a range of applications
Software as a Service (SaaS)
A complete application is provided as a service, e.g. Google
applications
Platform as a Service (PaaS)
Provides a runtime application platform and web
development tools for running applications on clouds, e.g.
Windows Azure Platform and Google App Engine
Cloud Computing (2/2)

Infrastructure as a Service (IaaS)
A way to deliver IT resources (compute, storage) over the
Internet, for example Amazon Elastic Compute Cloud (EC2)
According to a study conducted by Gartner [7]:
“By 2011, early technology adopters will forgo capital
expenditures and instead purchase 40% of their IT
infrastructure as a service. ‘Cloud computing’ will
take off, thus untying applications from a specific
infrastructure.”
Virtualization benefits for Cloud computing

Server Consolidation
Consolidate individual workloads onto a single physical
machine
Migration
Isolating VMs from the physical hardware allows the
migration of VMs from one platform to another
Encapsulation
A virtual machine can act as a software appliance
Resource Sharing
Running many virtual machines over a single hypervisor
Isolation
Virtualization Techniques (1/2)

Full Virtualisation
The hypervisor acts as a mediator between the operating system
and the underlying hardware for transmitting hardware
instructions; the VMs are unaware of this underlying
procedure. For example, VMware Workstation
Hardware-Assisted Virtualisation
The underlying hardware supports virtualisation technologies
and provides assistance to enable virtualization, i.e. Intel VT
and AMD-V. For example, KVM
Virtualization Techniques (2/2)

Para-virtualization
OS-assisted virtualization, where the guest operating system
is modified to achieve better performance. For example, XEN
Related research in HPC Benchmarking (1/2)

Lamia [1] revealed that para-virtualization does not
impose significant performance overhead in high
performance computing
AMD claimed to achieve near-native performance
through virtualization
Anand et al. [2] found that XEN virtualization impacts
different applications in different ways
Tikotekar [4] tested the effect of virtualization on the
hyperspectral radiative transfer code Hydrolight [3] and
observed an 11% performance overhead
Related research in HPC Benchmarking (2/2)

Macdonell et al. [5] found that the performance overhead for
GROMACS is under 6%, and for I/O-intensive
applications such as BLAST and HMMer the overhead is
9.7%
Zhao et al. [6] tested VMs for distributed computation
within a grid framework. The study concluded that
the overheads of virtual machines are acceptable
when considering the variety of extra features that
virtualisation systems provide over bare metal
Limitations of Existing Research

Not all virtualization systems were considered

Most of the studies used XEN for benchmarking

The same HPC applications were not used for
benchmarking across different virtualization systems
A wide range of applications needs to be tested on a variety of
hardware resources before general conclusions can be drawn
Experimental Test-bed (1/3)

The virtualization systems considered for
benchmarking were:
XEN
Linux KVM
CITRIX Xenserver
VMware Server
Experimental Test-bed (2/3)

The following HPC applications were identified and
deployed to benchmark the virtualization systems:
PARallel Kernels and BENCHmarks (PARKBENCH)
High Performance Linpack (HPL) benchmark
DL_POLY molecular simulation package
ScaLAPACK (Scalable LAPACK)
PARKBench, HPL and ScaLAPACK are available on
netlib.org (Univ. of Tennessee)
Experimental Test-bed (3/3)

This research study used the following hardware
resources:
AMD64 cluster
IBM JS20 Blade server
Thamesblue (IBM JS21 blades)
Test-bed deployment

[Deployment diagram: a Linux box connects over the Internet, through a
firewall, and SSHes via a switch into the Hactar cluster master node and
worker nodes. Across the four AMD64 cluster nodes, the test-beds comprise
a Eucalyptus cloud, a Citrix XenServer test-bed, and two nodes each
running XEN, KVM and VMware Server test-beds.]
PARKBench Results (1/2)

Logging and Monitoring OFF
PARKBench Results (2/2)

Logging and Monitoring ON
High Performance Linpack (HPL) Results (1/2)

Logging and Monitoring OFF
High Performance Linpack (HPL) Results (2/2)

Logging and Monitoring ON
DLPOLY Molecular Simulation Results (1/2)

Logging and Monitoring OFF
DLPOLY Molecular Simulation Results (2/2)

Logging and Monitoring ON
ScaLAPACK Results (1/2)

Logging and Monitoring OFF
ScaLAPACK Results (2/2)

Logging and Monitoring ON
Observations

Taking the bare-metal performance as 1, the following values
show the multiplying factor (run time relative to bare metal)
for each virtualization system

Virtualization   PARKBench        HPL              DLPOLY              ScaLAPACK
system           (message size:   (problem size:   (sodium-potassium   (problem size:
                 10 Bytes)        2000)            disilicate glass)   4000)
XEN              2.587            1.0057           1.0037              1.0347
KVM              8.254            1.011            1.3117              1.0031
VMware Server    11.525           1.048            1.3192              1.1178
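A minimal sketch of how these multiplying factors are obtained, using the HPL
wall-clock times for problem size 2000 (logging and monitoring off) from the
appendix tables; the snippet simply restates the table above and is not part
of the benchmark tooling used in the study.

    # Illustrative only: recompute the multiplying factor from raw wall-clock
    # times, using the HPL appendix figures for problem size N = 2000
    # (logging and monitoring OFF).
    bare_metal = 8.69  # seconds to solve the linear system on bare metal
    virtualized = {"XEN": 8.74, "KVM": 8.79, "VMware Server": 9.11}

    for system, seconds in virtualized.items():
        factor = seconds / bare_metal  # 1.0 would mean native speed
        print(f"{system}: {factor:.4f} x bare metal")
    # Prints roughly 1.0058, 1.0115 and 1.0483, matching the table's
    # 1.0057, 1.011 and 1.048 up to rounding.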
Conclusion (1/2)

Virtualization imposes little performance overhead
Logging and monitoring cause some overhead, but not to a
large extent
Can we say that the hypothesis developed for this
study has been proved?
In some cases, near-native performance was observed
Wide range of virtualization features vs. little performance
degradation
Conclusion (2/2)

Further research is needed in the field, testing more
applications and identifying the parameters involved in
the performance degradation of HPC applications on
virtualization systems
Cloud review community: http://cloudreview.org
References
1) Lamia Youseff et al. Para-virtualisation for HPC.
URL: http://homepages.inf.ed.ac.uk/group/lssconf/files2008/youseff.pdf
2) Anand Tikotekar, Geoffroy Vallée, Thomas Naughton, Hong H. Ong,
Christian Engelmann, and Stephen L. Scott. An Analysis of HPC
Benchmarks in Virtual Machine Environments. In 3rd Workshop on
Virtualization in High-Performance Cluster and Grid Computing (VHPC),
2008.
3) Sequoia Scientific, Inc., Redmond, WA, USA. Hydrolight 4.2, 2000.
4) A. Tikotekar, G. Vallée, T. Naughton, H. Ong, C. Engelmann, S. L.
Scott, and A. M. Filippi. “Effects of virtualization on a scientific
application running a hyperspectral radiative transfer code on virtual
machines,” in HPCVirt ’08: Proceedings of the 2nd Workshop on
System-level Virtualization for High Performance Computing. ACM,
2008.
References
5) Macdonell, C. and Lu, P. Pragmatics of Virtual Machines for
High-Performance Computing: A Quantitative Study of Basic Overheads.
In 2007 High Performance Computing & Simulation Conference
(HPCS'07), Prague, Czech Republic, 2007.
6) Zhao, M., Zhang, J., and Figueiredo, R. (2004). Distributed File System
Support for Virtual Machines in Grid Computing. In HPDC ’04:
Proceedings of the 13th IEEE International Symposium on High
Performance Distributed Computing, pages 202–211, Washington, DC,
USA. IEEE Computer Society.
7) Gartner Press Release, “Gartner Highlights Key Predictions for IT
Organisations and Users in 2008 and Beyond”, 31 January 2008.
[Date accessed: 29-Oct-2009]
Thanks for your attention
Questions, please!
Ahmad Hassan
Contact: [email protected]
Web: http://cern.ch/ahmadh/portfolio
Appendix
Benchmark Designs

The benchmarks were designed to run on the
virtualisation systems and on bare metal

[Set-up diagram: a Linux box connects over the Internet, through a
firewall, and SSHes via a switch into the VMs and the Hactar cluster
master node. Two VMs, one on each cluster node running a virtualisation
system, communicate over MPI through bridged networks, with logging and
monitoring switched off.]
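To illustrate the "communication over MPI" path in the set-up above, here is
a minimal, hypothetical sketch of launching an MPI benchmark across the two
VMs. The host names, slot counts and binary path are placeholders, and an
Open MPI style mpirun is assumed; the actual commands used in the study are
not recorded in these slides.

    import subprocess

    # Hypothetical VM host names reachable over the bridged networks.
    with open("hosts.txt", "w") as f:
        f.write("vm-node1 slots=2\n")
        f.write("vm-node2 slots=2\n")

    # Open MPI style launcher: 4 MPI ranks spread across the two VMs,
    # running the HPL binary as an example benchmark.
    subprocess.run(
        ["mpirun", "--hostfile", "hosts.txt", "-np", "4", "./xhpl"],
        check=True,
    )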
State-of-the-art Technologies

XEN Para-virtualization system

Open source virtualization system started by the University of
Cambridge Computer Laboratory
Type-I hypervisor
Provides an abstraction of the underlying physical hardware
Undertakes scheduling
Gives the guest operating system direct access to the physical
hardware through a split driver model
Needs modified guest operating systems
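For illustration: Xen's classic xm tool reads guest (domU) definitions as
plain key = value assignments with Python syntax. The snippet below is a
hypothetical minimal para-virtualized guest configuration; the kernel, disk
and name values are placeholders rather than the configuration used on the
Hactar test-bed.

    # Hypothetical minimal Xen (xm) domU configuration, e.g. /etc/xen/hpc-guest1.cfg.
    # The file is parsed as Python-style assignments; all paths and names are placeholders.
    name    = "hpc-guest1"                             # domain name
    kernel  = "/boot/vmlinuz-2.6-xen"                  # para-virtualized guest kernel
    ramdisk = "/boot/initrd-2.6-xen.img"
    memory  = 1024                                     # MB of guest RAM
    vcpus   = 2                                        # virtual CPUs
    disk    = ["file:/var/xen/hpc-guest1.img,xvda,w"]  # file-backed disk mapped to xvda
    vif     = ["bridge=xenbr0"]                        # bridged network interface
    root    = "/dev/xvda ro"
    # The guest would then be started with:  xm create hpc-guest1.cfg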

XEN Architecture

[Architecture diagram: applications run on guest operating systems (e.g.
Linux or Windows) inside virtual machines (virtual CPU, RAM, disk and
network card), which sit on the XEN hypervisor (Dom0) running directly on
the physical hardware (CPU, RAM, disk, network and disk controllers).]

XEN Split Driver Model

[Diagram: the XEN back-end driver in the host (Dom0) kernel and the XEN
front-end driver in the guest kernel are connected through XenBus back-end
and front-end channels over an inter-domain connection.]

Linux KVM

Kernel module added to the existing open source Linux
distribution
Enables using standard Linux as a hypervisor
Hardware-assisted virtualization that uses processor
virtualization extensions, i.e. Intel VT and AMD-V
Provides para-virtualization for I/O devices through
para-virtualization drivers
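A minimal illustration of how a hardware-assisted KVM guest is started
through the QEMU user space, as described above. The image name and sizing
below are hypothetical placeholders, not the guests used in the benchmarks.

    import subprocess

    # QEMU is the user-space process that drives the in-kernel KVM module.
    cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",                          # use KVM (requires Intel VT or AMD-V)
        "-m", "2048",                           # 2 GB of guest RAM
        "-smp", "2",                            # 2 virtual CPUs
        "-drive", "file=guest.img,if=virtio",   # para-virtualized (virtio) block device
    ]
    subprocess.run(cmd, check=True)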

KVM Architecture

[Architecture diagram: guest OSes, each hosted by a QEMU process, run
alongside ordinary user processes on top of a standard Linux kernel that
contains the KVM module.]

KVM and QEMU relationship

[Diagram: the KVM kernel module is driven by the QEMU-KVM user-space tool,
while the KQEMU kernel module is used by plain QEMU.]

VMware Server

Supports both full virtualization and para-virtualization
solutions
Uses Virtual Machine Interface (VMI) technology for
para-virtualization
Volume Shadow Copy Service (VSS) to take snapshots
Virtual Machine Communication Interface (VMCI) for
communication between host and virtual machine
VMware Infrastructure Web Management Interface for
web-based management of virtual machines
VMware is retiring para-virtualisation support in favour of
hardware-assisted virtualization in future products
Experimental Test-bed (1/3)

Hardware platforms

AMD64 hardware-based cluster

Model                  Dual-Core AMD Opteron(tm) Processor 1212
CPUs                   2009.438 MHz
RAM                    4 Gigabytes
Network connectivity   Ethernet
Experimental Test-bed (2/3)

IBM JS20 Blade specification

Model                  IBM PowerPC JS20 Blade Server
CPU                    PPC970FX, 2194.624561 MHz
RAM                    2 Gigabytes
Network connectivity   Myrinet/Ethernet
Experimental Test-bed (3/3)

Thamesblue specification

Model                  IBM PowerPC Blade Centre JS21 cluster
CPUs                   2800 x 2.5 GHz processors
                       (20 TeraFlops of sustained performance)
RAM                    5.6 Terabytes
Network connectivity   Myrinet/Ethernet
Hypervisor Types (1/2)

Type-I Hypervisor
Hypervisor runs directly on top of the underlying physical
hardware, e.g. XEN

[Diagram: applications run on guest operating systems (e.g. Linux or
Windows) inside virtual machines (virtual CPU, RAM, disk and network card),
which sit on the Type-I hypervisor running directly on the physical
hardware (CPU, RAM, disk, network and disk controllers).]
Hypervisor Types (2/2)

Type-II Hypervisor
Hypervisor lies one layer above the host operating system,
e.g. VMware Server

[Diagram: applications run on a guest operating system (e.g. Linux or
Windows) inside a virtual machine (virtual CPU, RAM, disk and network
card), which sits on the Type-II hypervisor running alongside host OS
processes on the host operating system, above the physical hardware.]
PARKBench Results

Logging OFF

Virtualisation system   Run time relative to bare metal   Run time relative to bare metal
                        (message size: 10 Bytes)          (message size: 10 Megabytes)
XEN                     2.587                             1.066
KVM                     8.254                             2.978
VMware                  11.525                            4.439

Logging ON

Virtualisation system   Run time relative to bare metal   Run time relative to bare metal
                        (message size: 10 Bytes)          (message size: 10 Megabytes)
XEN                     4.14                              1.2395
KVM                     8.3098                            2.849
VMware                  11.916                            4.449
High Performance Linpack (HPL) Results

Logging & Monitoring OFF (time to solve linear system, sec)

Problem Size (N)   Bare Metal   XEN     KVM     VMware Server
1000               0.93         0.90    0.93    1.00
2000               8.69         8.74    8.79    9.11
3000               30.19        30.37   30.37   31.37
4000               72.24        72.78   72.38   73.79

Logging & Monitoring ON (time to solve linear system, sec)

Problem Size (N)   Bare Metal   XEN     KVM     VMware Server
1000               0.93         0.90    1.01    0.95
2000               8.69         8.75    9.54    9.23
3000               30.19        30.39   32.98   31.77
4000               72.24        72.83   78.59   75.91
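As a small companion to the tables above, the sketch below estimates how
much logging and monitoring add to the HPL run time for the largest problem
size (N = 4000), using the figures from the two tables; it is a restatement
of the reported data only.

    # HPL, problem size N = 4000: (logging OFF, logging ON) wall-clock times
    # in seconds, copied from the tables above.
    hpl_n4000 = {
        "Bare Metal":    (72.24, 72.24),
        "XEN":           (72.78, 72.83),
        "KVM":           (72.38, 78.59),
        "VMware Server": (73.79, 75.91),
    }

    for system, (off, on) in hpl_n4000.items():
        overhead = 100.0 * (on - off) / off
        print(f"{system}: logging and monitoring add {overhead:.1f}% to the run time")
    # Roughly 0.0% on bare metal, 0.1% on XEN, 8.6% on KVM and 2.9% on VMware Server.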
DLPOLY Molecular Simulation Results

Logging & Monitoring OFF (wall time, sec)

Benchmark                                           Bare Metal   XEN        KVM        VMware Server
Simulation of metallic aluminium at 300K            73.453       72.678     112.591    117.466
Simulation of 15 peptide in 1247 water molecules    1351.875     1369.464   1682.474   1696.316
Simulation of a sodium-potassium disilicate glass   823.104      826.217    1079.674   1085.856

Logging & Monitoring ON (wall time, sec)

Benchmark                                           Bare Metal   XEN        KVM        VMware Server
Simulation of metallic aluminium at 300K            73.453       72.954     121.608    118.020
Simulation of 15 peptide in 1247 water molecules    1351.875     1360.477   1797.065   1739.778
Simulation of a sodium-potassium disilicate glass   823.104      823.255    1164.599   1113.695
ScaLAPACK Results

Logging & Monitoring OFF (wall time, sec)

Problem Size (N)   Bare Metal   XEN       KVM       VMware Server
1000               3.621        3.680     3.620     4.080
2000               25.018       25.135    24.813    28.312
3000               80.816       83.422    79.594    81.454
4000               190.100      196.697   190.696   212.499

Logging & Monitoring ON (wall time, sec)

Problem Size (N)   Bare Metal   XEN       KVM       VMware Server
1000               3.621        3.716     4.029     4.226
2000               25.018       25.300    27.290    27.082
3000               80.816       83.664    89.198    86.578
4000               190.100      196.059   208.353   229.618
rBuilder for Application Packaging (1/2)

rBuilder for Application Packaging (2/2)