
Introducing XenServer 1/2
Monforte Salvatore
CCR – Virtualization Tutorial
Catania 1-3 December 2010
Introducing Citrix XenServer
• Citrix XenServer is a complete server virtualization platform,
optimized for both Windows and Linux virtual servers, with all the
capabilities required to create and manage a virtual infrastructure
• XenServer is a Bare-Metal Type1 Hypervisor
▫ runs directly on server hardware without requiring an additional
underlying operating system
 this results in a very efficient and scalable system
Introducing Citrix XenServer
• Major features in XenServer
▫ Role Based Access Control
 to enable you to control, at both broad and granular levels, which actions
different types of administrators can perform in your virtualized environment.
▫ Dynamic Memory Control
 allowing you to change the amount of physical memory assigned to a virtual
machine without rebooting it
▫ VM snapshot management
 including support for snapshots with or without memory, and snapshot rollback
for a quick backup and restore mechanism
▫ Access to VM consoles
 VNC for installation-time, Xvnc for graphical displays on Linux, and Remote
Desktop for Windows
▫ XenSearch
 searching, sorting, filtering, and grouping, using folders, tags and custom fields
▫ Complete resource pool management
▫ High availability configuration
 both for VMs and core Services
▫ Workload Balancing
 pool-wide VMs load balancing
▫ Workload Reports
 giving performance views over time and across the datacenter
▫ Performance metrics display
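Several of the features above can be exercised from the xe CLI as well. A minimal sketch, assuming XenServer 5.6 command names and placeholder VM identifiers:
 take a disk-only snapshot of a running VM
xe vm-snapshot vm=<vm name> new-name-label=<snapshot name>
 take a snapshot that also captures the VM's memory state
xe vm-checkpoint vm=<vm name> new-name-label=<checkpoint name>
 change the dynamic memory range of a VM without rebooting it
xe vm-memory-dynamic-range-set uuid=<vm uuid> min=512MiB max=1024MiB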
Introducing Citrix XenServer
XenServer Free
• data center consolidation
• back-up, restore, cloning
• failure recovery
• resource balancing
• automatic VM placement
• manual placement via scripting/programming (SDK)
XenServer Advanced
• VM High Availability
• Dynamic Memory Control
• Performance alerting/reporting
XenServer Enterprise
• Role-based administration
• Live memory snapshots
• Automated workload balancing
Introducing Citrix XenServer
Citrix XenServer Overview
• XenServer hosts and resource pools
▫ add and remove XenServer hosts from pools
▫ create shared storage and attach it to a pool
▫ start VMs on different XenServer hosts within a pool
▫ live migrate running VMs between XenServer hosts within a pool
• XenServer network configuration
▫ configure physical networking on XenServer hosts and resource pools
▫ create virtual network interfaces for VMs, and bridge these to physical
networks
▫ work with VLANs
• XenServer storage configuration
▫ create shared and local storage repositories on a variety of different
substrates (iSCSI, NFS)
▫ create virtual disk images within storage repositories as part of the
process of installing VMs
▫ manage and administer storage repositories
• XenServer VMs management
• XenServer Backups / Snapshots
• XenServer SDKs and scripting
Introducing Citrix XenServer
• System Requirements
▫ XenServer requires at least two separate physical x86 computers
 XenServer host
 dedicated entirely to the task of hosting VMs and is not used for other applications
 XenCenter application
 any general-purpose Windows computer
• XenServer host system requirements
▫ 64-bit x86 server-class machine
 XenServer can make use of up to 256 GB of RAM, up to 16 NICs and up to 64 logical
processors
• The following are the system requirements for the XenServer host:
Introducing Citrix XenServer
• Windows
▫ Windows Server 2008 64-bit & 32-bit
▫ Windows Server 2003 32-bit SP0, SP1, SP2, R2; 64-bit SP2
▫ Windows Small Business Server 2003 32-bit SP0, SP1, SP2, R2
▫ Windows XP 32-bit SP2, SP3
▫ Windows 2000 32-bit SP4
▫ Windows Vista 32-bit SP1
▫ Windows 7 32-bit/64-bit
• Linux
▫ Red Hat Enterprise Linux 32-bit 3.5, 3.6, 3.7, 4.1, 4.2, 4.3, 4.4, 4.5, 4.7, 5.0, 5.1,
5.2; 64-bit 5.0, 5.1, 5.2
▫ Novell SUSE Linux Enterprise Server 32-bit 9 SP2, SP3, SP4; 10 SP1; 64-bit 10
SP1, SP2
▫ CentOS 32-bit 4.1, 4.2, 4.3, 4.4, 4.5, 5.0, 5.1, 5.2; 64-bit 5.0, 5.1, 5.2
▫ Oracle Enterprise Linux 64-bit & 32-bit 5.0, 5.1
▫ Debian sarge (3.1), etch (4.0)
Introducing Citrix XenServer
Methods of administering XenServer
• Different methods of administering XenServer
▫ XenCenter
 You can administer XenServer remotely using its graphical,
Windows-based user interface
 manage servers, resource pools and shared storage, and deploy, manage, and monitor
virtual machines from your Windows desktop machine
▫ OpenXenManager
 Open source clone of Citrix's XenServer XenCenter
 it is written in Python using the XML-RPC-based XenServer API
▫ Command-line Interface (CLI)
 You can use the Linux-based xe commands, also known as the CLI,
on the host to administer XenServer
 xe commands are described in the XenServer Administrator's Guide
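To get a feel for the CLI, a few basic xe commands (the VM name is a placeholder; xe runs on the host console, over SSH, or remotely with the -s/-u/-pw options):
xe host-list
xe vm-list params=name-label,power-state
xe vm-start vm=<vm name>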
Installing Citrix XenServer
Installing the XenServer host
• The XenServer host must be installed on a dedicated 64-bit x86 server
▫ simplified text-based Linux installation
▫ upgrade choice if it is run on a server that already has a previous
version of XenServer on it
 follows the first-time installation process
 existing settings for networking configuration, system time setting and so on are retained
▫ Do not install any other operating system in a dual-boot
configuration with the XenServer host
• The major components of Citrix XenServer are packaged as ISO
images, which must be burned onto CDs before use
▫ The product installer includes XenServer, the XenCenter
administration console installer, and product documentation
 431 MB
▫ Linux Guest Support includes templates and other tools required for
support of Linux guests
 121 MB
Installing Citrix XenServer
• Once the system has been restarted at the end of the installation
process, XenServer boots into the xsconsole
▫ a text-based administration console which can be used to configure
the hypervisor from a console attached to the server
▫ for remote administration, xsconsole can be accessed through
SSH
 SSH is enabled for root login by default for XenServer, and once you are in an
SSH session, simply type "xsconsole".
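For example, from any machine with an SSH client (host address is a placeholder):
ssh root@<xenserver host>
xsconsole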
XenServer hosts and resource pools
Resource Pool
• comprises multiple XenServer host installations, bound together into a
single managed entity which can host Virtual Machines
▫ when combined with shared storage VMs can be started on any XenServer host
which has sufficient memory and then dynamically moved between XenServer
hosts while running
 If an individual XenServer host suffers a hardware failure then the administrator can
restart the failed VMs on another XenServer host in the same resource pool
 If high availability (HA) is enabled on the resource pool, VMs will automatically be
moved if their host fails
• A pool always has at least one physical node, known as the master
▫ other physical nodes that join an existing pool are described as members
• Only the master node exposes its administration interface and will forward
commands to individual members as necessary
▫ used by XenCenter and the CLI
[Diagram: a XenServer resource pool (XS Pool) with a master and member XS servers attached to shared storage]
XenServer hosts and resource pools
Requirements for creating resource pools
• XenServer hosts must be homogeneous:
▫ each CPU is from the same vendor
 in particular AMD-V and Intel VT CPUs cannot be mixed
▫ each CPU is the same model (except for stepping)
▫ each CPU has the same feature flags
▫ all hosts are running the same version of XenServer software
• In practice, it is often difficult to obtain multiple servers with the exact same CPUs, and so minor
variations are permitted
▫ If you are sure that it is acceptable in your environment for hosts with varying CPUs to be
part of the same resource pool, then the pool joining operation can be forced
• In addition to being homogeneous, an individual XenServer host can only join a resource pool if
▫ it has a static IP address
 either manually assigned or via DHCP
▫ it is not a member of an existing resource pool
▫ its clock is synchronized to the same time source as the pool master
 for example, via NTP
▫ it has no shared storage configured
▫ there are no running or suspended VMs on the XenServer host which is joining
▫ there are no active operations on the VMs in progress
 such as a VM shutting down
▫ the management NIC of the XenServer host which is joining is not part of a NIC bond
• XenServer hosts in resource pools may contain
▫ different numbers of physical network interfaces
▫ local storage repositories of varying size
XenServer hosts and resource pools
Creating a Resource Pool
• Resource pools can be created using either the XenCenter management
console or the CLI
• The joining host synchronizes its local database with the pool-wide one,
and inherits some settings:
▫ shared storage repositories in the pool
 SR configuration created so that the new host can access existing
shared storage automatically
▫ networking information is partially inherited
 the structural details of NICs, VLANs and bonded interfaces are all
inherited
 policy information is not (e.g. the IP addresses of management NICs)
XenServer hosts and resource pools
To join XenServer hosts host1 and host2 into a resource
pool via the CLI
• open a console on XenServer host host2
• command XenServer host host2 to join the pool on XenServer host host1 by
issuing the command:
xe pool-join master-address=<host1> master-username=<root> master-password=<password>
[--force ]
To remove a host b from a resource pool using the CLI
• open a console on any host in the pool
• find the UUID of the host b using the command
xe host-list
• eject the host from the pool:
xe pool-eject host-uuid=<uuid>
• The XenServer host will be ejected and left in a freshly-installed state
Warning
Do not eject a host from a resource pool if it contains important data stored on its
local disks. All of the data will be erased upon ejection from the pool.
If you wish to preserve this data, copy the VM to shared storage on the pool first using
XenCenter, or the xe vm-copy CLI command.
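A sketch of that preserving step from the CLI, assuming a shared SR is already attached to the pool (names and UUIDs are placeholders):
xe sr-list shared=true
xe vm-copy vm=<vm name> new-name-label=<vm name>-copy sr-uuid=<shared sr uuid>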
XenServer networking
Networking
• XenServer networking operates at Layer 2 of the OSI model
▫ types of server-side software objects which represent networking entities
 PIF, represents a physical network interface on a XenServer host
▫ name and description, a globally unique UUID, the parameters of the NIC
that they represent, and the network and server they are connected to
 VIF, represents a virtual interface on a Virtual Machine
▫ name and description, a globally unique UUID, and the network and VM
they are connected to
 Network, which is a virtual ethernet switch on a XenServer host
▫ name and description, a globally unique UUID, and the collection of VIFs
and PIFs connected to them
• Both XenCenter and the CLI allow
▫ configuration of networking options
▫ control over which NIC is used for management operations
▫ creation of advanced networking features
 VLANs, NIC bonds, QoS
• XenCenter hides much of the complexity of XenServer networking
XenServer networking
MAC Addressing
• Two important bits really matter when assigning a MAC address
▫ first and second least significant bits of the first leftmost byte in a MAC
address
 a user-specified Xen Network MAC address should at least be a
unicast MAC address, and probably locally administered
▫ basically the 2nd hex digit should be one of: 2, 6, A or E
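For instance, when creating a VIF from the CLI the MAC can be given explicitly; in the sketch below the leading 02 makes it a locally administered unicast address (UUIDs and the address itself are placeholders):
xe vif-create vm-uuid=<vm uuid> network-uuid=<network uuid> device=0 mac=02:16:3e:aa:bb:cc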
XenServer networking
Initial Network Configuration
• PIFs are created for each NIC in the host
▫ PIF of the NIC selected for use as the management interface is
configured with the IP addressing options specified during installation
▫ all other PIFs are left unconfigured
• a network is created for each PIF
▫ "network 0", "network 1", etc.
• each network is connected to one PIF
• Initial networking configuration allows connection to the XenServer host by
XenCenter, the xe CLI, and any other management software running on separate
machines via the IP address of the management interface
• External networking for VMs is achieved by bridging PIFs to VIFs via the
Network Object which acts as a virtual Ethernet switch
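The resulting objects can be inspected from the CLI, for example listing all PIFs and networks and identifying the management PIF:
xe pif-list
xe network-list
xe pif-list management=true params=uuid,device,IP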
XenServer networking
• XenServer host has one or more networks, which are virtual Ethernet
switches
▫ Networks without an association to a PIF
 considered internal,
▫ used to provide connectivity only between VMs on a given XenServer
host, with no connection to the outside world
▫ Networks with a PIF association
 considered external
▫ provide a bridge between VIFs and the PIF connected to the network,
enabling connectivity to resources available through the PIF's NIC
[Diagram: DOM U VMs with VIFs (vif0-vif3) attached either to an internal network or, via the external network bridge xenbr0 and pif0/nic0, to the outside world through the XenServer host's DOM 0]
XenServer networking
• In the case of creating a VLAN,
▫ every distinct VLAN will get its own bridge
▫ the bridge name will start with “xapi”
▫ the (pseudo) PIF will have a dot-separated name that includes the VLAN tag
number
 apart from that, everything else will be the same as normal external
network
• It is not possible to create an internal VLAN network.
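A minimal CLI sketch of the VLAN setup just described (UUIDs and the tag value are placeholders):
xe network-create name-label="VLAN 10 network"
xe vlan-create network-uuid=<new network uuid> pif-uuid=<physical pif uuid> vlan=10
VIFs attached to the new network then behave as on any other external network.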
[Diagram: VIFs on VLAN bridges xapi5 and xapi10, backed by pseudo PIFs pif0.5 and pif0.10 on top of pif0/nic0, alongside the normal external network xenbr0 in the XenServer host's DOM 0]
XenServer introducing Storage
XenServer Storage
• A key object in XenServer storage is a Storage Repository (SR)
▫ physical on-disk structure or format imposed on the available physical
storage by XenServer
▫ built-in support for
▫ IDE, SATA, SCSI and SAS drives locally connected
▫ iSCSI, NFS and Fibre Channel (HBA) remotely connected
• XenServer allocates Virtual Disk Images (VDI) on an SR
▫ this is what a VM sees as a physical disk
• Each XenServer host can use multiple SRs and different SR types
simultaneously
• SRs can be shared or dedicated between hosts
 If the SR is shared, VMs whose VDIs it contains can be started on any XenServer host in a
resource pool that can access that SR
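For example, a shared NFS SR can be created for the whole pool from the CLI (server, export path and host UUID are placeholders):
xe sr-create host-uuid=<pool master uuid> content-type=user type=nfs shared=true \
name-label="NFS VHD storage" device-config:server=<nfs server> device-config:serverpath=<export path>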
XenServer introducing Storage
• Types of server-side software objects which represent storage entities
▫ Storage Repositories (SRs)
 are storage targets containing homogeneous virtual disks (VDIs)
▫ Physical Block Devices (PBDs)
 represent the interface between a physical server and an attached SR.
PBDs are connector objects that allow a given SR to be mapped to a
XenServer host
▫ Virtual Disk Images (VDIs)
 are an on-disk representation of a virtual disk provided to a VM.
VDIs are the fundamental unit of virtualized storage in XenServer.
▫ Virtual Block Devices (VBDs)
 are a connector object that allows mappings between VDIs and VMs
[Diagram: a XenServer host connects to an SR through a PBD; the VDIs inside the SR are mapped to VMs through VBDs]
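These objects can also be created directly with the CLI, e.g. to hand an extra virtual disk to an existing VM; a sketch with placeholder sizes, device number and UUIDs:
xe vdi-create sr-uuid=<sr uuid> name-label="data disk" type=user virtual-size=10GiB
xe vbd-create vm-uuid=<vm uuid> vdi-uuid=<new vdi uuid> device=1 bootable=false mode=RW type=Disk
xe vbd-plug uuid=<new vbd uuid>
vbd-plug hot-attaches the disk to a running VM and requires the PV drivers in the guest.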
XenServer introducing Storage
• Storage Attributes
▫ Shared
 storage is based on a Storage Area Network (SAN) or NFS, and so can
inherently be shared by multiple XenServers in a pool
▫ Sparse allocation
 expansion of a VDI file is allocated as the VM writes data to it
 the VM is writing to what it thinks is a local drive
▫ the VM VDI files take up only as much space as is required
▫ Resizable
 the size of a detached VDI can be increased
▫ the VM operating system should have the capability to extend into the
new space
▫ Fast Cloning
 if a VM is cloned, the resulting VMs will share the common on-disk
data at the time of cloning
 each VM then proceeds to make its own changes in an isolated copy-on-write
(CoW) version of the VDI
▫ VM can be cloned almost instantaneously because the space is allocated
on an as-needed basis
XenServer introducing Storage
VDI types
• depending on the type of SR a VDI is contained in
▫ VHD
 essentially flat files stored either on an NFS SR or a local storage
external SR
▫ supports sparse allocation and fast cloning
▫ shareable when the containing SR is of type NFS
▫ supports snapshots without the need for support from the storage backend
▫ LVM
 used on a raw block device (either local or SAN-based), VDI ends up
being a Logical Volume (LV) in a Volume Group (VG)
▫ supports VDI resizing
▫ shareable when the storage originates from a SAN
• depending on the supported Storage Appliance
 plugins provided by third parties may be added to the
/opt/xensource/sm directory
▫ NetApp / Dell EqualLogic
 each VDI ends up as a LUN on the Storage Appliance volume
▫ supports sparse allocation, VDI resizing, fast cloning, and is shareable
XenServer introducing Storage
VDI types
• SR types and the storage substrate they use
▫ ext: VHD on Local Disk
▫ nfs: VHD on Network File System
▫ lvm: LVM on Local Disk
▫ lvmohba: LVM over Host Bus Adapter
▫ lvmoiscsi: LVM over iSCSI
▫ netapp: NetApp filer using Ontap
▫ equal: EqualLogic filer
XenServer creating VMs
• VMs running on XenServer may operate in one of the following modes
▫ Para-Virtualization (PV)
▫ Hardware-assisted Virtualization (HVM)
• Guest OS installation methods
▫ Para-virtualized Kernel support, including vendor installer
 PV mode, installs straight into PV mode from vendor media
▫ Para-virtualized Kernel support, without vendor installer
 HVM mode
 can be switched into PV mode by installing/enabling PV kernel mode
▫ Fully virtualized with Para-virtual Drivers
 HVM mode
 PV drivers available for post-install
▫ use for Windows guests
▫ Fully virtualized without Para-virtual Drivers
 HVM mode
 no PV option available; use the "Other install media" template
XenServer creating VMs
Creating VMs
• VMs are created from templates
▫ contain all the various configuration settings to instantiate a specific VM
• When you want to create a new VM, you must create the VM using a
template for the operating system you want to run on the VM
• The Linux templates create para-virtualized (PV) guests, as opposed to the
HVM guests created by the Windows and Other Install Media templates
▫ block devices are passed through as PV devices
 XenServer does not attempt to emulate SCSI or IDE
▫ provides access to PV drivers through xvd* devices
• Additionally, VMs can be created by:
▫ importing an existing, exported VM
▫ converting an existing VM to a template
XenServer creating VMs
Installing Linux VMs
• Installing a Linux VM requires using a process such as the following:
▫ create the VM for your target operating system using the New VM wizard or
the CLI
▫ install the operating system using vendor's installation media
▫ install the correct kernel version, if applicable.
▫ install the Linux Guest Agent so that information about the VM appears in
XenCenter and the CLI
• XenServer supports the installation of many Linux distributions as VMs
▫ there are four installation mechanisms:
 complete distributions provided as built-in templates
 using the vendor media in the server's physical DVD/CD drive
 using the vendor media to perform a network installation
 installing from an ISO library
• Installing Linux VMs requires the Linux Pack to be installed onto the XenServer
host
▫ the Linux Pack can also be installed after XenServer installation
 xe-install-supplemental-pack
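As a rough CLI counterpart of the XenCenter wizard shown in the next slides, a PV CentOS network install can be started like this (template name, mirror URL and UUIDs are placeholders):
xe vm-install template="CentOS 5.5 (32-bit)" new-name-label=centos55-test
xe vm-param-set uuid=<new vm uuid> other-config:install-repository=http://<mirror>/centos/5.5/os/i386
xe vif-create vm-uuid=<new vm uuid> network-uuid=<network uuid> device=0
xe vm-start vm=centos55-test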
XenServer creating VMs
• Create a new VM based on the "CentOS 5.5 (32-bit)" template
XenServer creating VMs
• Give the new VM a name.
XenServer creating VMs
• Select a network URL (FTP/NFS/HTTP) as the source for the CentOS
installation media
XenServer creating VMs
• Select automatic placement of home server for VM if possible.
XenServer creating VMs
• Select required CPU and Memory settings for VM
▫ Note that they can also be changed later on when needed.
XenServer creating VMs
• Add virtual disk storage
XenServer creating VMs
• Select the required virtual network interface
XenServer creating VMs
• Finish VM creation and ensure the VM is set to start automatically
▫ if you do not start it automatically, you can first alter certain settings if
required, for instance providing a kickstart file
 under Advanced OS Boot parameters
XenServer creating VMs
• Continue the wizard as any normal CentOS installation
XenServer creating VMs
• Once the VM has rebooted
▫ Install the Linux Guest Agent by inserting xs-tools.iso into the
virtual DVD drive of your VM
 then issue the following
#mount /dev/xvdd /media
#cd /media/Linux
#./install.sh
 or directly install
xe-guest-utilities-5.6.0-???.{i386,x86_64,amd64}.{rpm,deb}
xe-guest-utilities-xenstore-5.6.0-???.{i386,x86_64,amd64}.{rpm,deb}
 most suitable to the just installed Linux Distribution
XenServer: How To Extend VM disk size
How To Extend the Virtual Disk Size of a Xen Virtual Machine
• Shut down the corresponding virtual machine
▫ XenCenter
 click on Storage tab of the VM
 select the VDI to extend and open Properties dialog
 select Size and Location
 change the size of the VDI
▫ CLI
 find the VDI Universally Unique Identifier (UUID) by running the following command:
xe vm-disk-list vm=<vm name>
 run the following command to resize the virtual disk:
xe vdi-resize uuid=<vdi uuid> disk-size=<size (GiB,MiB)>
• After resizing the disk in XenServer, start the operating system
▫ add "single" to the VM boot options (single-user mode) if you want to resize the system volume
 execute the following (d, n and w are entered at the fdisk prompt):
#fdisk /dev/xvd<disk>
d (delete the partition)
n (recreate it over the enlarged disk)
w (write out the partition table)
• for an ext3 FS then execute the following and reboot:
#e2fsck -f /dev/xvd<disk><partition>
#resize2fs /dev/xvd<disk><partition>
• for an LVM-based FS then execute the following and reboot:
#pvresize /dev/xvd<disk><partition>
#lvextend -l +100%FREE /dev/mapper/<volume group>-<logical volume>
XenServer: How To Switch from HVM to PV mode
How To Switch from HVM to PV mode
• As a condition for ParaVirtualization to work, a kernel that supports the Xen hypervisor needs to
be installed and booted in the virtual machine
▫ boot the virtual machine and open the console and install the Linux Xen
kernel
yum install kernel-xen
▫ build a new initrd without the SCSI drivers and with the xen PV drivers
cd /boot
mkinitrd --omit-scsi-modules --with=xennet --with=xenblk --preload=xenblk \
initrd-$(uname -r)xen-no-scsi.img $(uname -r)xen
▫ modify the grub boot loader menu
 /boot/grub/menu.lst
▫ remove the kernel entry with ‘gz’ in the name
▫ rename the first “module” entry to “kernel”
▫ rename the second “module” entry to “initrd”
▫ correct the *.img pointer to the new initrd*.img
▫ edit the line “default=” to point to the modified xen kernel entry
▫ save the changes
▫ shutdown but do not reboot the machine
XenServer: How To Switch from HVM to PV mode
• Edit the VM record to convert it to PV boot mode
▫ clear the HVM boot mode
▫ set pygrub as boot loader
▫ find the UUID of VDI and set the bootable flag to the related VBD
xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""
xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub
xe vm-disk-list uuid=<vm uuid>
xe vbd-param-set uuid=<vbd uuid> bootable=true