A study of introduction of the virtualization technology into operator consoles


A study of introduction of the virtualization technology into operator consoles
T. Ohata, M. Ishii / SPring-8
ICALEPCS 2005, October 10-14, 2005, Geneva, Switzerland
Contents
- Virtualization technology overview
- Categories of virtualization technologies
- Performance evaluation: how many virtual machines can run on a server
- Introduction into the control system: system setup
- Conclusion
What is virtualization technology?
Overview of virtualization technology
[Diagram: a host computer (mainframe) whose CPUs, network cards, memory, and disks are shared by many VMs]
- Originated from the IBM System/360 mainframe
- Enables many computers to be consolidated onto a small number of host computers
- Each virtual machine (VM) has independent resources (CPU, disks, MAC address, etc.), like a stand-alone computer
Why do we need virtualization technology?
Problems of the present control system
- Network-distributed computing is the standard method; it lets us construct an efficient control system
- We have over 200 computers in the beamline control system alone
- Computer proliferation:
  • Increasing maintenance tasks such as version upgrades, patching, etc.
  • Increasing hardware failures
- Only a few staff are available to maintain them
Virtualization technology has been revived
- We can reduce the number of computers by consolidating them onto general-purpose servers
- We can cut hardware costs and their maintenance costs drastically
Category of virtualization technology
- Three virtualization approaches and typical products:
  Resource multiplexing: Xen*, LPAR (IBM), nPartition (HP)
  Emulation: VMware*, VirtualPC, QEMU, Bochs, User-Mode-Linux*, coLinux
  Application shielding: Solaris container*, jail, chroot
  (* Evaluated products)
1. Resource multiplexing
[Diagram: a hypervisor layer multiplexing the hardware (CPU, memory, etc.) among several guest OSes and their software]
- Originated from the mainframe; major UNIX vendors have released several products
- A layer multiplexes the hardware resources; this layer is called a hypervisor or virtual machine monitor
- The guest OS needs a small kernel patch to suit the hypervisor interface
- Low overhead
2. Emulation
[Diagram: guest OSes and their software running on an emulation layer on top of the host operating system and hardware]
- Many emulators exist, e.g. for PC/AT, 68K, and game machines
- Suitable for development and debugging
- An unmodified OS can be used
- Some overhead from translating instructions
3. Application shielding
[Diagram: shielded application partitions running on a single operating system and hardware]
- Developed for web hosting at ISPs (Internet service providers) to obtain separate computing environments
- A partition makes its computing space invisible to the others
- No overhead
Performance evaluation
How many VMs can run on a server computer?
Evaluated products
Products                 Host OS          Guest OS         Comments
VMware Workstation 4.5   Linux-2.6.8      Linux-2.6.8      Commercial; supports many guest OSes
User-Mode-Linux (UML)    Linux-2.6.8      Linux-2.4.26um   Only Linux on x86
Solaris container        Solaris 10       -                Sparc and x86; FSS*, CPU pinning*
Xen 2.0.6                Linux-2.6-xen0   Linux-2.6-xenU   FSS, CPU pinning; live migration*
(* Explained on the next slide)
Special functions
◆ Fair Share Scheduler (FSS)
  ◆ A scheduling policy in which CPU usage is distributed equally among tasks
◆ CPU pinning
  ◆ Pins a VM to a specific CPU (effective in an SMP environment); a minimal sketch follows after this slide
  ◆ (Linux has an "affinity" function, but it can pin only a process)
◆ Live migration
  ◆ VMs migrate to another host dynamically
  ◆ VMs can keep running during the migration
[Diagram: VMs live-migrating from Host 1 to Host 2]
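As a hedged aside (not part of the original slides), the process-level "affinity" mechanism mentioned above can be exercised from Python through the standard os.sched_setaffinity call on Linux; the CPU number below is an arbitrary example:

    import os

    # Pin the calling process (pid 0 = the process itself) to CPU core 0 only.
    # This is the per-process analogue of pinning a whole VM to a CPU.
    os.sched_setaffinity(0, {0})

    # Read the mask back to confirm which cores the process may now run on.
    print(os.sched_getaffinity(0))   # e.g. {0}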
Measurement procedure
- The response time between a virtual machine and a VME computer is measured using a MADOCA application (a timing sketch follows below)
- The communication uses a message queue (SysV IPC) and the ONC-RPC (Remote Procedure Call) network protocol
- The message size is 350 bytes, including the RPC header and the Ethernet frame header
MADOCA: Message And Database Oriented Control Architecture
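For illustration only (this is not the authors' MADOCA code), a minimal Python sketch of this kind of round-trip timing, sending a 350-byte request to a hypothetical echo service on the VME side and recording the response time:

    import socket
    import time

    HOST, PORT = "vme-host.example", 5001   # hypothetical echo service
    PAYLOAD = b"x" * 350                    # 350-byte message, as in the measurement

    samples = []
    with socket.create_connection((HOST, PORT)) as sock:
        for _ in range(1000):
            t0 = time.perf_counter()
            sock.sendall(PAYLOAD)           # send the request
            sock.recv(4096)                 # wait for the reply
            samples.append(time.perf_counter() - t0)

    avg = sum(samples) / len(samples)
    print(f"average response time: {avg * 1e3:.3f} ms "
          f"(min {min(samples) * 1e3:.3f}, max {max(samples) * 1e3:.3f})")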
Measurement bench
[Diagram: MADOCA clients in 1 to 10 VMs communicating over the network with 1 to 10 MADOCA servers]
- 1 to 10 VMs run on a single server computer (dual Xeon 3.0 GHz)
- A MADOCA client runs on each VM
- 1 to 10 MADOCA servers are on the network
- The response time is measured
Dependence of the average response time on the number of VMs
[Plot: average response time [sec] vs. number of VMs, with the HP B2000 as reference]
- The HP B2000 is the present operator console
- VMware and UML become worse as the number of VMs grows
- With 5 to 6 VMs, Solaris containers and Xen are comparable to the HP workstation
Statistics of response time @ 10 VMs
[Bar chart: maximum, minimum, average, and standard deviation of the response time at 10 VMs for VMware and UML (scale up to 2,000 msec) and for Solaris container, Xen, and HP B2000 (scale up to 15 msec); lower is better]
Limit of hardware resources - CPU utilization
[Plot: CPU utilization (%) of the VM host vs. number of VMs, Solaris container]
- CPU utilization of the host of the VMs
- No idle time remains at 5 to 6 VMs
- Therefore 5 to 6 VMs are the optimum
Limit of hardware resources - network interface card (NIC) utilization
[Plot: NIC utilization (MB/s) vs. number of VMs, Solaris container]
- Traffic on the Gigabit Ethernet network interface card
- Utilization is only a few percent of the full bandwidth
- The saturation therefore comes from CPU overload, not from the network
Limit of hardware resources - page fault frequency
[Plot: page fault frequency vs. number of VMs (1 to 10), Solaris container]
- Page faults waste CPU time and degrade performance
- The saturation comes from TLB misses and swapping
How many VMs are optimum?
- 5 to 6 VMs are optimum on a dual Xeon 3.0 GHz server
- If you want to run more VMs:
  - A large page size on a large-address-space architecture is important: Physical Address Extension (PAE) or a 64-bit architecture (a small check sketch follows below)
  - Many-core CPUs are attractive: one CPU core is enough for 2 to 3 VMs
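As an aside not in the original slides, whether a Linux host reports PAE and 64-bit (long mode) support can be checked from the CPU flags in /proc/cpuinfo; a minimal sketch:

    # Check /proc/cpuinfo for the PAE and 64-bit (long mode, "lm") CPU flags on Linux.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break

    print("PAE supported:   ", "pae" in flags)
    print("64-bit supported:", "lm" in flags)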
Introduction into the control system
We installed virtualization technology into a beamline control system.
- We use Xen on Linux PC servers, replacing the HP operator consoles
- The control application programs were ported onto the VMs (Linux)
- We installed a pair of Xen hosts and an NFS server that keeps the VM image files (an illustrative guest configuration follows below)
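As an illustration only (assumed, not taken from the slides), a Xen 2.x guest configuration of the kind such a setup typically uses; the file names, VM name, and NFS mount point are hypothetical. Xen's xm configuration files use Python syntax:

    # Hypothetical /etc/xen/bl-console1 -- Xen domU configuration (Python syntax)
    kernel = "/boot/vmlinuz-2.6-xenU"           # paravirtualized guest kernel
    memory = 256                                # MB of RAM for this VM
    name   = "bl-console1"                      # operator-console VM name
    # VM disk image kept on the NFS server and mounted on both Xen hosts
    disk   = ["file:/nfs/vm-images/bl-console1.img,sda1,w"]
    root   = "/dev/sda1 ro"
    vif    = [""]                               # one network interface with defaults

Because the image file lives on storage reachable from both Xen hosts, either host can start or receive the same VM, which is what makes the live migration on the next slide possible.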
System setup and live migration
[Diagram: an X server (thin client) displays the control programs running in VMs on the primary Xen host; the VMs can migrate to the secondary Xen host; both hosts reach the VM image files on an NFS server over Gigabit Ethernet]
- Migrating the VMs to the secondary Xen host enables the primary host to be shut down (a sketch of the migration command follows below)
- The control programs can therefore be used continuously during maintenance
- The interruption during live migration is only a few hundred milliseconds
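A hedged sketch (not from the slides) of how such a migration is typically triggered with the Xen 2.x xm toolstack, wrapped in Python; the domain and host names are hypothetical:

    import subprocess

    DOMAIN = "bl-console1"            # hypothetical VM name (see the config sketch above)
    TARGET = "xen-secondary"          # hypothetical secondary Xen host

    # "xm migrate --live" moves the running VM to the target host while it keeps
    # executing; only a brief pause occurs at the final switch-over.
    subprocess.run(["xm", "migrate", "--live", DOMAIN, TARGET], check=True)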
Future plan - high availability cluster
- We are studying a high-availability Single System Image (SSI) cluster configuration with Xen
- The migration function of Xen is not effective when a host computer dies suddenly
[Diagram: structure of OpenSSI with Xen - a Single System Image cluster whose VMs span several Xen hypervisors, with the application software running on top]
Future plan (cont'd) - redundant storage
- We will introduce a redundant storage system such as SAN, iSCSI, or NAS, because the NFS server is a single point of failure
[Diagram: primary and secondary Xen hosts connected through FC switches and SAN fibres to SAN storage]
Cost estimation
- About 50 HP-UX workstations will be replaced by 8 PC-based servers plus redundant storage (6 VMs run on each PC server, i.e. 8 × 6 = 48 VMs)
- About 75% of the total cost can be saved (hardware only)
Conclusion
- We studied several virtualization technologies for introduction as operator consoles
- We measured the performance of several virtualization environments and verified that they are stable
- 5 to 6 VMs are the optimum for one server computer
- We introduced Xen, which has a live migration function, into the beamline control system
- We plan to apply Xen to more beamlines
Thank you for your attention.
Running on Xen primary host
Running on Xen secondary host