XenServer 6.1 Advanced Training Q1 2013 v1.1x


Advanced XenServer Training
CCAT
Q1 - 2013
Coursework Agenda
• XenServer Overview
• XenServer Components Deep-Dive
ᵒ Lab 1: Navigating Dom0, Logs and the CLI
• XenServer Storage Primitives (Caveman-ish?)
ᵒ Lab 2: Playing with Storage
• XenServer Networking
ᵒ Lab 3: NICs, NICs and more NICs
• XenServer Troubleshooting
ᵒ Lab 4: How do I fix this stuff?
© 2012 Citrix | Confidential – Do Not Distribute
Hypervisor Overview
What is XenServer?
Topics Covered
• Comparisons to other hypervisors
ᵒ VMware vSphere 5.1
ᵒ Microsoft Hyper-V 2012
• Real world applications for XenServer
ᵒ XenApp
ᵒ XenDesktop
ᵒ PVS
© 2012 Citrix | Confidential – Do Not Distribute
Hypervisor Overview
XenServer vs. The Competition
Market Share and Growth Comparison
Company / Hypervisor Platform | 2011 Growth | Current Estimated Market Share
Microsoft Hyper-V | 62% | 25%
Citrix XenServer | 25% | 15 – 20%
VMware ESX | 21% | 50%
© 2012 Citrix | Confidential – Do Not Distribute
VMware vSphere
• Current Version – 5.1
• Consists of two components: VMware ESXi and vCenter Server
ᵒ ESXi is the hypervisor and installs on bare-metal hardware.
ᵒ vCenter Server provides centralized management and allows administrators to configure and monitor ESXi hosts, provision virtual machines, storage, networking, and much more.
• Management mechanism: vSphere Client
ᵒ Windows application that acts as a single pane of glass to manage either a standalone ESXi host directly or an entire datacenter through vCenter.
© 2012 Citrix | Confidential – Do Not Distribute
vSphere Architecture diagram
© 2012 Citrix | Confidential – Do Not Distribute
vSphere VM Architecture diagram
© 2012 Citrix | Confidential – Do Not Distribute
Hyper-V
• Current Version – Hyper-V 2012
• Originally released in Server 2008
• Bundled with Windows Server licenses
• Behind VMware, Hyper-V holds the next-largest market share in server virtualization
• Consists of single component: Windows Server with Hyper-V role
ᵒ Windows Server is the base hypervisor
• Management mechanism: SCVMM
ᵒ SCVMM (System Center Virtual Machine Manager) provides centralized management
and allows administrators to configure and monitor hosts, provision virtual machines,
storage, networking, etc.
ᵒ Can tie in directly with MOM/SCOM and Systems Management Server
© 2012 Citrix | Confidential – Do Not Distribute
Hyper-V Architecture diagram
© 2012 Citrix | Confidential – Do Not Distribute
Hyper-V VM Architecture diagram
© 2012 Citrix | Confidential – Do Not Distribute
Why is Hyper-V important?
• Bundled with Windows Server 2012, Hyper-V provides free virtualization and enterprise features at no additional cost
• Bridging the functionality gap with VMware
• Tight integration with Windows infrastructure
• Microsoft has had much success with midmarket companies (<1000 users), winning 30 – 40% of the time
© 2012 Citrix | Confidential – Do Not Distribute
XenServer
• Current Version – 6.1
• Consists of single component: XenServer Core
ᵒ CentOS-based hypervisor
• Management mechanism: XenCenter
ᵒ XenCenter, the Windows-only management console. It connects directly to the current pool master in order to send commands to the rest of the XenServer pool
ᵒ Has APIs for monitoring and management (to be discussed in more detail later).
© 2012 Citrix | Confidential – Do Not Distribute
XenServer VM Architecture diagram
© 2012 Citrix | Confidential – Do Not Distribute
Limitations Comparison
Parameter | Hyper-V 2012 | XenServer 6.1 | vSphere 5.1
Host: Cores | 320 | 160 | 160
Host: RAM | 4 TB | 1 TB | 2 TB
Host: VMs | 1024 | 150 (60 w/HA) | 512
Host: vCPUs | – | 900 | 2048 (25/core)
Cluster/Pool: VMs | 4000 | 1024 (600/SR) | 4000
Cluster/Pool: Hosts | 64 | 16 | 32
VM: CPUs | 64 | 16 | 64
VM: RAM | 1 TB | 128 GB | 1 TB
VM: NUMA | Yes | Host Only | Yes
© 2012 Citrix | Confidential – Do Not Distribute
Feature Comparison
Feature | Hyper-V 2012 | XenServer 6.1 | vSphere 5.1
Incremental Backups | Yes | Yes (VM Prot & Reco) | Yes
NIC Teaming | Yes | Yes | Yes
Integrated HA | Yes | Yes | Yes
OS App Monitoring | Yes | No | API-Only
Failover Prioritization | Yes | Yes | Yes
Affinity & Rules | Yes | API-Only (SCVMM) | Yes
Cluster-Aware Updating | Yes | 50/50 (Pool Upgrade is not very smart) | Yes
© 2012 Citrix | Confidential – Do Not Distribute
Hypervisor Overview
XenServer 6.1 - What’s new?
-Thanks to Ryan McClure for donating content
XenServer 6.1 – Key features
• Link Aggregation Control Protocol (LACP) support
• Storage XenMotion
• Integrated StorageLink (iSL) support for EMC VNX series arrays
© 2012 Citrix | Confidential – Do Not Distribute
XenServer 6.1 – Virtual Appliances
• Demo Linux Virtual Appliance
• Workload Balancing 6.1.0 Virtual Appliance
• vSwitch Controller Virtual Appliance
• Web Self Service 1.1.2 Virtual Appliance
• Citrix License Server VPX v11.10
• Citrix XenServer Conversion Manager
ᵒ Allows up to 500 VM conversions from VMware vSphere to XenServer
© 2012 Citrix | Confidential – Do Not Distribute
What’s New? – Management and Support
• Storage XenMotion
• Live VDI Migration
• Emergency Network Reset
• XenServer Conversion Manager
• Enhanced Guest OS Support
• Supported VM Count Increased to 150
© 2012 Citrix | Confidential – Do Not Distribute
What’s New? – Networking
• LACP Support
• VLAN Scalability Improvements
• IPv6 Guest Support
• Updated OvS
© 2012 Citrix | Confidential – Do Not Distribute
Storage XenMotion and VDI Migration
Storage XenMotion
© 2012 Citrix | Confidential – Do Not Distribute
Live VDI Migration
XenServer 6.1 – Features that didn’t get the limelight
• Performance Monitoring Enhancements Supplemental Pack
ᵒ Extended XenServer monitoring available (stored in RRD – Round Robin Databases)
Metrics (all newly available in 6.1):
• Average physical CPU usage over all CPUs (%)
• Time physical CPU spent in [C,P]-state (%)
• Host memory reclaimed by dynamic memory control (B)
• Maximum host memory that could be reclaimed by dynamic memory control (B)
• Host aggregate, physical network interface traffic received/sent (B/s)
• I/O throughput read/write/total per PBD (MiB/s)
• Average I/O latency (milliseconds)
• I/O average queue size
• Number of I/O requests currently in flight
• IOPS, read/write/total (requests/s)
• Time spent waiting for I/O (%)
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Roadmap
• Sarasota
ᵒ Dom0 Disaggregation
ᵒ Increased Scalability
ᵒ Fewer Reboots due to Hotfixes
ᵒ vGPU
• Jupiter
ᵒ Storage QoS
• Tallahassee – Private Release
ᵒ NetScaler SDX Update
ᵒ XenDesktop Interoperability and
Usability Enhancements
ᵒ Hardware Monitoring and Reporting
© 2012 Citrix | Confidential – Do Not Distribute
• Clearwater
ᵒ Windows 8 Guest Support
ᵒ UI Enhancements
• Naples
ᵒ User Experience and Third Party
Enhancements
ᵒ VHDX Disk Format Support
Additional Resources
• CTX118447 XenServer Administrator’s Guide
• CTX118448 XenServer Installation Guide
• CTX118449 XenServer Virtual Machine Installation Guide
• CTX118451 XenServer Release Notes
• CTX119472 XenConvert Guide
• CTX118450 XenServer Software Development Guide
© 2012 Citrix | Confidential – Do Not Distribute
Advanced XenServer Training
XenServer Under The Hood
Agenda
• Core Components
ᵒ Hardware
ᵒ Hypervisor
ᵒ Dom0
ᵒ DomU
• Beyond Single Server
ᵒ Resource Pools
ᵒ HA
ᵒ Meta Data
ᵒ XenMotion/Live Migration
• Workload Balancing
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Overview
Core Components
Terminology to Remember
• Host
• Resource Pool
• Pool Master
• Dom0
• XAPI (XENAPI)
• XenServer Tools
• Live VDI Migration
• Storage XenMotion
© 2012 Citrix | Confidential – Do Not Distribute
Behind The Scenes
• Native 64-bit bare metal hypervisor – Is this true?
• CentOS-based Dom0
• Based on the Linux 2.6.32 kernel
• Drivers from SUSE Linux Enterprise Server 11 SP1
• Xen hypervisor and Domain 0 manage physical server resources among virtual
machines
© 2012 Citrix | Confidential – Do Not Distribute
Legacy Virtualization Architecture
Diagram: a legacy hypervisor sits between the hardware and the guests; privileged instructions from guest user apps (HALT) are trapped and replaced with safe equivalents (SAFE HALT) through "binary translation", a slow, expensive software emulation layer.
© 2012 Citrix | Confidential – Do Not Distribute
Paravirtualization
• Relies on “enlightened” operating systems
• Kernel and I/O paths know they are being virtualized
• Cooperation provides best performance
• Paravirtualized guests make high-speed calls (HYPERCALLs) directly to the hypervisor
Diagram: XenServer running on VT/AMD-V hardware; guest HALT instructions become HYPERCALLs into Xen.
© 2012 Citrix | Confidential – Do Not Distribute
Hardware-Assisted Virtualization
• Hardware-assist allows high performance without emulation
• Other guests benefit from hardware-accelerated call translation
Diagram: XenServer running on VT/AMD-V hardware; guest HALT instructions are handled via HYPERCALLs and hardware-assisted translation.
© 2012 Citrix | Confidential – Do Not Distribute
Understanding Architectural Components
The Xen hypervisor and Domain 0 manage physical server resources
among virtual machines
© 2012 Citrix | Confidential – Do Not Distribute
Understanding the Hardware Component
Hardware layer contains the physical server components, including
memory, CPU and storage
© 2012 Citrix | Confidential – Do Not Distribute
Understanding the Hypervisor Component
Xen hypervisor is a thin layer of software that runs right on top of the
hardware
© 2012 Citrix | Confidential – Do Not Distribute
Understanding the Domain 0 Component
Domain 0, or the Control Domain, is a Linux VM that manages the
network and storage I/O of all guest VMs
© 2012 Citrix | Confidential – Do Not Distribute
Dom0 Resourcing
• Linux-based guests
• virtual network interfaces
• virtual disks
• virtual CPUs
• Windows-based guests
• virtual network interfaces
• virtual disks
• Resource Control
• Processor, network, disk I/O priority
• Machine Power-States
© 2012 Citrix | Confidential – Do Not Distribute
Why This Model Kinda Sucks
• Control Requests - XS Console / Direct CLI
ᵒ Queuing
ᵒ Confusion
• Dom0 Overburdening
• Distributed Domino Effect
• Limited space due to pre-allocation
• Log Bug
• Not built to Scale out of the Box
ᵒ Dom0 Memory
ᵒ Dom0 CPU
© 2012 Citrix | Confidential – Do Not Distribute
DOM0 Tuning
• Supports 4 vCPUs (Default 1 Prior to 6.0)
ᵒ More is better
ᵒ IRQBALANCE (http://irqbalance.org/)
• Memory Tweaking (Default 752 MB; a quick check is sketched below)
ᵒ 2940M Tested for 130/16.25 VMs
ᵒ Warning: Single Server!
• XenHeap (5.5 Only)
ᵒ Set xenheap_megabytes to (12 + max-vms/10)
ᵒ Xc.Error("creating domain failed: hypercall 36 fail: 12: Cannot allocate memory (ret 1)")
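A minimal sketch of checking the current Dom0 sizing before tweaking it (run from the Dom0 console; the boot-config path matches the one referenced elsewhere in this deck):
# Current Dom0 vCPU and memory allocation
xe vm-list is-control-domain=true params=uuid,name-label,VCPUs-max,memory-actual
free -m                               # memory actually visible inside Dom0
grep dom0_mem /boot/extlinux.conf     # boot-time Dom0 memory setting, if one is configured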
© 2012 Citrix | Confidential – Do Not Distribute
Understanding the DomU Linux VM Component
Linux VMs include paravirtualized kernels and drivers
© 2012 Citrix | Confidential – Do Not Distribute
Understanding the DomU Windows VM Component
Windows VMs use paravirtualized drivers to access storage and network
resources through Domain 0
© 2012 Citrix | Confidential – Do Not Distribute
Real-time Resource Adjustment
• Linux-based guests
ᵒ Add/remove virtual network interfaces
ᵒ Add virtual disks
ᵒ Add/remove virtual CPUs
• Windows-based guests
ᵒ Add/remove virtual network interfaces
ᵒ Add/remove virtual disks
• Resource QoS Control
ᵒ Processor, network, disk I/O priority
© 2012 Citrix | Confidential – Do Not Distribute
Why This is Cool
• Paravirtualization is key to performance
ᵒ Hypervisor and OS cooperation provides best performance
• Utilizes hardware-assisted virtualization
ᵒ Fully supports Intel VT and AMD-V enabled processors
ᵒ Hardware-assist allows high performance without emulation
• Future: Paravirtualization with Hardware-Assist
• Benchmarks vs running on native
ᵒ Linux can expect .5 - 4% overhead
ᵒ Windows can expect 2 - 7% overhead
• VMware (EULA prevents publishing)
ᵒ 5-10% Overhead seen on internal testing
© 2012 Citrix | Confidential – Do Not Distribute
Lessons Learned: The other DOM - DOMU
• We can only push around 350 Mb/s through a single Windows VM.
• We can push 1 to 1.1 Gb/s through a single Linux VM
• To push more traffic through a single Windows VM:
ᵒ Manually add additional netback processes
• xen-netback.netback_max_groups=<#Procs> under /boot/extlinux.conf in the section labeled xe-serial, just after the assignment console=hvc0
ᵒ Manually pin vCPUs to specific VMs (example below)
• xe vm-param-set uuid=<vm-uuid> VCPUs-params:mask=<CPUs to pin>
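For example, a hedged sketch of the pinning command (the VM name and CPU list are placeholders):
# Pin the vCPUs of one Windows VM to physical CPUs 4-7 (illustrative values)
VM_UUID=$(xe vm-list name-label=WIN7 --minimal)
xe vm-param-set uuid=$VM_UUID VCPUs-params:mask=4,5,6,7
# The new mask takes effect the next time the VM is started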
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Overview
Beyond Single Server
Management Architecture Comparison
VMware: Traditional Management Architecture
• Single backend management server
© 2012 Citrix | Confidential – Do Not Distribute
XenServer: Distributed Management Architecture
• Clustered management layer
Resource Pools
• Join multiple physical servers into one logical pool
• Enables shared configurations, including storage
• Required for Automated VM Placement and XenMotion
© 2012 Citrix | Confidential – Do Not Distribute
High Availability
Shared Storage
© 2012 Citrix | Confidential – Do Not Distribute
XenMotion - Live Migration
Shared Storage
© 2012 Citrix | Confidential – Do Not Distribute
XenMotion – Live VM Movement
• XenMotion allows minimal downtime movement of VMs between physical
systems
• Generally 150-200ms of actual “downtime”
• Most of the downtime is related to network switch moving IP traffic to new port
© 2012 Citrix | Confidential – Do Not Distribute
XenMotion CPU Requirements
• XenMotion requires that XenServer hosts have similar CPUs
• Must be the same manufacturer
• Must be the same type
• Can be different speed
• Example Xeon 51xx series chips
ᵒ Could have a mix of 5130 and 5120 chips
© 2012 Citrix | Confidential – Do Not Distribute
Pre-Copy Migration: Round 1
• Systems verify correct storage and network setup on destination server
• VM Resources Reserved on Destination Server
Source Virtual Machine
© 2012 Citrix | Confidential – Do Not Distribute
Destination
Pre-Copy Migration: Round 1
• While source VM is still running XenServer copies over memory image to destination server
• XenServer keeps track of any memory changes during this process
© 2012 Citrix | Confidential – Do Not Distribute
Pre-Copy Migration: Round 1
• After first pass most of the memory image is now copied to the destination server
• Any memory changes during initial memory copy are tracked
© 2012 Citrix | Confidential – Do Not Distribute
Pre-Copy Migration: Round 2
• XenServer now does another pass at copying over changed memory
© 2012 Citrix | Confidential – Do Not Distribute
Pre-Copy Migration: Round 2
• Xen still tracks any changes during the second memory copy
• Second copy moves much less data
• Also less time for memory changes to occur
© 2012 Citrix | Confidential – Do Not Distribute
Pre-Copy Migration
• Xen will keep doing successive memory copies until minimal differences
between source and destination
© 2012 Citrix | Confidential – Do Not Distribute
XenMotion: Final
• Source VM is paused and last bit of memory and machine state copied over
• Master unlocks storage from source system and locks to destination system
• Destination VM is unpaused and attached to storage and network resources
• Source VM resources cleared
© 2012 Citrix | Confidential – Do Not Distribute
The Concept of MetaData
• The metadata contains information about the VMs:
ᵒ Name, description, Universally Unique Identifier (UUID), VM configuration, and
information about the use of resources on the host or Resource Pool such as Virtual
Networks, Storage Repository, ISO Library.
• Most metadata is written when the VM, SR and network are created and is
updated if you make changes to the VM configuration.
• If the metadata is corrupted, the resource pool and/or VMs may become
unusable.
• Always back up metadata:
ᵒ During a resource pool and/or host upgrade
ᵒ Before patching
ᵒ Before migrating from one storage repository to another
© 2012 Citrix | Confidential – Do Not Distribute
Backing up the MetaData
• It is not necessary to perform daily exports of all the VM metadata.
• To export the VM metadata (a CLI-based alternative is sketched below):
1. Select Backup, Restore and Update from the menu.
2. Select Backup Virtual Machine Metadata.
3. If prompted, log on with root credentials.
4. Select the Storage Repository where the VMs you want to back up are stored.
5. After the metadata backup is done, verify the successful completion on the summary screen.
6. In XenCenter, on the Storage tab of the SR selected in step 4, a new VDI should be created named Pool Metadata Backup.
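As a complement from the CLI (a sketch; the file names are placeholders), the pool database that holds this metadata can also be dumped and restored with xe:
# Dump the pool database to a file on the master
xe pool-dump-database file-name=/root/pool-db-backup.dump
# If it is ever needed, test a restore first with dry-run
xe pool-restore-database file-name=/root/pool-db-backup.dump dry-run=true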
© 2012 Citrix | Confidential – Do Not Distribute
Restoring MetaData
• Prerequisites:
ᵒ Set up/re-attach the vDisk Storage Repository
ᵒ Virtual Networks are set up correctly by using the same names
• From the console menu:
1. Select Backup, Restore and Update from the menu.
2. Select Restore Virtual Machine Metadata.
3. If prompted, log on with root credentials.
4. Select the Storage Repository to restore from.
5. Select the Metadata Backup you want to restore.
6. Select whether you want to restore only VMs on this SR or all VMs in the pool.
7. After the metadata restore is done, check the summary screen for errors.
8. Your VMs are now available in XenCenter and can be started at the new site.
© 2012 Citrix | Confidential – Do Not Distribute
Data Replication for DR
© 2012 Citrix | Confidential – Do Not Distribute
Simplifying Disaster Recovery
Diagram (Production Site replicating to DR Site, each on its own shared storage):
1. Automated backup of VM metadata to the SR
2. Replication of the SR includes virtual disks and VM metadata
3. Attach the replicated SR at the DR site
4. Restore of the VM metadata will recreate the VMs
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Overview
Workload Balancing
Workload Balancing
• Highly granular, continuous guest and host performance profiling
• Policy-driven workload balancing across XenServer pools and hosts
• Historical analysis and trending for planning purposes
• Reports available in XenCenter
© 2012 Citrix | Confidential – Do Not Distribute
Workload Balancing – Critical Thresholds
• Components included in the WLB evaluation:
ᵒ CPU
ᵒ Memory
ᵒ Network Read
ᵒ Network Write
ᵒ Disk Read
ᵒ Disk Write
• An optimization recommendation is triggered when a threshold is reached
© 2012 Citrix | Confidential – Do Not Distribute
Workload Balancing – Weights
• Weights are all about the
recommended target host
• Simple interface to determine which
factors are most relevant for the VM
© 2012 Citrix | Confidential – Do Not Distribute
Workload Balancing
• Ideal placement recommendations at
VM start-up, resume and live
relocation
• Ideal VM relocation
recommendations for host
maintenance
© 2012 Citrix | Confidential – Do Not Distribute
Workload Balancing – Placement Strategies
• Maximize Performance
• Default setting
• Spread workload evenly across all physical
hosts in a resource pool
• The goal is to minimize CPU, memory, and
network pressure for all hosts
• Maximize Density
• Fit as many virtual machines as possible
onto a physical host
• The goal is to minimize the number of
physical hosts that must be online
© 2012 Citrix | Confidential – Do Not Distribute
Workload Balancing Server
• All components can run on one (virtual) machine
• Multiple server deployment recommended for larger, multi-pool environments
• Data store can be hosted on an existing DB platform
• Architecture look familiar?
Diagram: Data Collection Manager service, Analysis Engine service and Web Service Host, backed by the Data Store.
© 2012 Citrix | Confidential – Do Not Distribute
Workload Balancing - Components
• Workload Balancing Components:
ᵒ Data Collection Manager service
ᵒ Analysis Engine service
ᵒ Web Service Host
ᵒ Data Store
Diagram: XenServer resource pools feed metrics to the Data Collection Manager service; the Analysis Engine reads the Data Store and returns recommendations to XenCenter through the Web Service Host.
© 2012 Citrix | Confidential – Do Not Distribute
Hypervisor Overview
Lab 1 – Intro Lab
In this lab… 30 minutes
• Create a XenServer Resource Pool
• XenServer Resource Pool Management (switching pool master, Dom0 tweaks)
• Upgrading from XenServer 6.0.2 to 6.1
• Backing up and Restoring Metadata
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Storage
Deep Dive
Agenda
• Introduction to XenServer Storage
• SRs – Deep Dive
• Xen Storage – Command line
• Storage XenMotion Under The Hood
• Debugging Storage Issues
• Key Takeaways and References
© 2012 Citrix | Confidential – Do Not Distribute
Before we get started…
• Let's familiarize ourselves with the following terms:
ᵒ HBA
ᵒ LUN
ᵒ Fabric
ᵒ Storage Processor
ᵒ SR
ᵒ VDI
ᵒ PBD and VBD
ᵒ Multipathing
ᵒ Fast Clone, Linked Clone
ᵒ XenMotion and Storage XenMotion
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Storage
Introduction to Storage
Disk Types in common use
• SATA (Serial ATA) disks
ᵒ Size up to 3 TB (4TB + due for release)
ᵒ Spin speed 7200 RPM
ᵒ Transfer rate ~ 30-40 MB per second
ᵒ IOPS 70-80
ᵒ Pros: Inexpensive, large Storage. Cons: Relatively slow, long rebuild time
• SAS (Serial Attached SCSI) Disks
ᵒ Size up to 600 GB (900+ GB due for release)
ᵒ Spin speed 15000 RPM
ᵒ Transfer rate ~ 80-120 MB per second
ᵒ IOPS 160-180
ᵒ Pros: faster access time, redundancy. Cons: higher cost per GB
• SSD (Solid state Disk)
ᵒ Size 100GB upwards (enterprise class)
ᵒ Extremely fast access speeds
ᵒ Transfer rate ~ up to full bus speed
ᵒ IOPS: read > 40,000, write > 10,000
ᵒ Pros: extremely fast access times, low power. Cons: very expensive, limited life.
© 2012 Citrix | Confidential – Do Not Distribute
Disk Sizing
Disk Size | Marketed size (bytes) | Binary size (bytes) | Diff | Reported size in Windows
100 GB | 100,000,000,000 | 107,374,182,400 | 7.37% | 93.1 GB, 95,367 MB
1 TB | 1,000,000,000,000 | 1,099,511,627,776 | 9.95% | 931 GB, 953,674 MB
Latency Time
The faster the disk spins, the shorter the wait for the computer. A hard drive spinning at
7,200 rotations per minute has an average latency of 4.2 thousandths of a second. This is
approximately one-half the time it takes for the disk to turn one revolution.
Latency Consequences
While 4.2 thousandths of a second seems inconsequential, a computer might process
many thousands of data requests. The computer's CPU is fast enough to keep up with the
workload but the hard disk's rotational latency creates a bottleneck.
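The gap in the size table above is purely decimal vs. binary units; a quick sketch reproducing the 1 TB row:
# Drives are sold in decimal units; Windows reports binary units (1 MiB = 2^20 B, 1 GiB = 2^30 B)
awk 'BEGIN {
  sold = 1000^4                                   # "1 TB" as marketed, in bytes
  printf "Windows shows: %.1f GB (%.0f MB)\n", sold/1024^3, sold/1024^2
  printf "Shortfall vs. 1 TiB: %.2f%%\n", (1024^4 - sold)/sold*100
}'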
© 2012 Citrix | Confidential – Do Not Distribute
RAID (redundant array of independent disks)
• Way of presenting multiple disks so that they appear to the server as a single
larger disk.
• Can also provide for redundancy in the event of a single/multiple disk failure(s)
• Has multiple types each with its advantages and disadvantages.
© 2012 Citrix | Confidential – Do Not Distribute
Raid Sets
• Striping (RAID 0)
ᵒ Pros: cheapest
ᵒ Cons: a failure of any disk loses all of the data on the set
• Mirroring (RAID 1)
ᵒ Pros: a failure of any drive will still provide full access to data; only 1 additional write to secure data
ᵒ Cons: double the amount of disks needed
• Striping with parity (RAID 5)
ᵒ Pros: data is striped across multiple disks with parity; can survive a single disk failure without data loss
ᵒ Cons: when a disk fails, performance is reduced because the controller has to calculate the missing disk's data; full performance is not recovered until the failed disk is replaced and rebuilt; write-intensive applications can run slowly due to the overhead of calculating and writing the parity blocks
• Striping with double parity (RAID 6)
ᵒ Pros: data is striped across multiple disks with double parity; can survive two disk failures without data loss; a single disk failure does not affect performance
ᵒ Cons: write-intensive applications can run slower because of the overhead of calculating and writing two parity blocks; an additional 2 disks per RAID set are required to handle parity
© 2012 Citrix | Confidential – Do Not Distribute
Hybrid RAID types
• RAID 0+1
• RAID 10
• RAID 50
© 2012 Citrix | Confidential – Do Not Distribute
IOPS (Input/Output Operations per second)
IOPS are primarily a benchmarking figure used to find/compare the performance of storage solutions.
IOPS are affected by many factors – such as RAID type, stripe size, read/write ratios, random/sequential balance, and cache sizes on the controller and disk.
Quoted maximum IOPS from manufacturers seldom translate to real-world applications.
Measurement | Description
Total IOPS | Total number of I/O operations per second (when performing a mix of read and write tests)
Random Read IOPS | Average number of random read I/O operations per second
Random Write IOPS | Average number of random write I/O operations per second
Sequential Read IOPS | Average number of sequential read I/O operations per second
Sequential Write IOPS | Average number of sequential write I/O operations per second
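A back-of-the-envelope sketch of how RAID choice feeds into these numbers (the workload and per-disk figures are illustrative assumptions; a write penalty of 4 for RAID 5 is the usual rule of thumb):
# Disks needed for a 4000 IOPS workload at 70% read / 30% write on RAID 5,
# assuming ~170 IOPS per 15k SAS disk (see the SAS figures earlier)
awk 'BEGIN {
  total = 4000; reads = total * 0.70; writes = total * 0.30
  backend = reads + writes * 4      # each front-end write costs ~4 disk I/Os on RAID 5
  printf "Back-end IOPS: %.0f, disks needed: %d\n", backend, int(backend / 170) + 1
}'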
© 2012 Citrix | Confidential – Do Not Distribute
What is a SAN?
• SAN (Storage Area Network)
• Dedicated network that provides access to consolidated, block level data
storage.
• Can be a single device or Multiple Devices
• Does not provide any file-level functionality
• However, file systems that make use of storage on a SAN will allow for file locking, sharing, etc.
• Provides access to a pool of shared storage using either iSCSI, Fibre Channel, or FCIP (Fibre Channel over IP) for connectivity.
• Is (ideally) on a separate network from other traffic.
© 2012 Citrix | Confidential – Do Not Distribute
NAS (Network Attached Storage)
 A NAS differs from a SAN in that it offers remote access to a filesystem as if it were a fileserver, whereas a SAN offers remote access to block-level storage as if it were a hard disk controller.
 A NAS normally allows connection to its storage via NFS (Unix/Linux), SMB/CIFS (Windows), or AFP (Apple)
 Some NAS devices can be used for other protocols such as FTP, HTTP, HTTPS, NDMP, and various media streaming protocols.
 A NAS head (or filer) can have single or multiple nodes
 A NAS essentially replaces one or more fileservers
© 2012 Citrix | Confidential – Do Not Distribute
What are SAN/NAS used for
• Both provide access to SHARED storage.
• Both can provide high availability/fault tolerance by having
multiple Disk Controllers / NAS Heads
• A SAN is used to provide BLOCK level storage to multiple
Hosts.
• Connection to a SAN is normally through Fibre Channel or iSCSI (1 Gb/10 Gb) over Ethernet.
• A NAS is used to replace FILE servers by providing CIFS /
NFS/FTP etc. features.
• Connection to a NAS is normally via Ethernet.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Storage
SRs, SR types and more…
XenServer Storage Objects
• Describes a particular storage target in which Virtual Disk Images
(VDIs) are stored.
• Flexible—supports a wide variety of storage types.
• Centralized—easier to manage, more reliable with a XenServer pool.
• Must be accessible to each XenServer host.
© 2012 Citrix | Confidential – Do Not Distribute
Types of SRs supported in XenServer
• Block based – LVM
ᵒ Fibre Channel (FC)
ᵒ iSCSI
ᵒ Local LVM
• File based
ᵒ Ext (local)
ᵒ NFS
ᵒ CIFS (ISOs only)
© 2012 Citrix | Confidential – Do Not Distribute
Protocols
• NFS – inexpensive, easiest to implement, 1 or 10 Gb, overhead*
• iSCSI – fairly cheap and easy, HW and SW flavors, 1 or 10 Gb
• FCoE – pretty expensive and difficult, requires CNAs, 10 Gb only
• FC – most expensive and difficult to implement, 1/2/4/8/16** Gb
• FC is most common in the enterprise, but 10 Gb+ networking is making
NFS and iSCSI more and more common
• In terms of performance, typically you will find that 8 Gb FC will
outperform 10 Gb iSCSI or NFS
• Ask the customer if they have “storage tiers”
ᵒ Ex: 1. EMC 8 Gb FC 2. HP 10 Gb iSCSI 3. NetApp 1 Gb NFS
© 2012 Citrix | Confidential – Do Not Distribute
Comparisons to other Hypervisors
Facet | XenServer | VMware vSphere | Microsoft Hyper-V
VM Storage | LVM, File-level | File-level | CSV, SMB 3.0 (CIFS)
Storage plugins | StorageLink (integrated) | PSP, NMP, vendor supported | N/A
Multipathing | Yes | Yes | Yes
NFS Support | Yes | Yes | No
© 2012 Citrix | Confidential – Do Not Distribute
XenServer – under the hood
Architecture diagram: Dom0 runs xapi (XML/RPC over HTTP, with its database and storage/network SM scripts), the XenAPI and VM-console streaming services, qemu-dm, the xenstore interface, and the Linux storage and network drivers with netback and blktap/blkback. Guest domains (VM1, ...) exchange I/O with Dom0 over shared memory pages. Everything runs on the Xen hypervisor on top of the hardware devices.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer - under the hood
• I/O (Network and Storage) depends significantly on Dom0
ᵒ This changes in future releases of XenServer (disaggregation of Dom0)
• All VMs use para-virtualized drivers (installed with Xen tools) to communicate
ᵒ Blkfront, netfront components in VMs send I/O traffic to Dom0’s Blkback, netback
ᵒ Hence the SLOW performance and scalability limits with XenServer
© 2012 Citrix | Confidential – Do Not Distribute
The competition
• ESXi – file-based architecture
• VMFS – clustered filesystem
• VMFS 5 – can be shared between 32 hosts
• Custom plugins can be installed based on the shared filesystem
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Storage Objects
• XenServer Storage Objects
ᵒ SRs, VDIs, PBDs and VBDs
• Virtual Disk Data Formats
ᵒ File-based VHD, LVM and StorageLink
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Storage Objects
VDIs, PBDs, VBDs
• Virtual Disk Images are a storage abstraction that is presented to a
VM.
• Physical Block Devices represent the interface between a physical
server and an attached SR.
• Virtual Block Devices are connector objects that allow mappings
between VDIs and VMs.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Storage Objects
Diagram: one SR is attached to each XenServer host through a PBD; each VDI stored on the SR is connected to its virtual machine through a VBD.
© 2012 Citrix | Confidential – Do Not Distribute
Virtual Disk Data Formats
Logical Volume (LVM)-based VHDs
• The default XenServer block device-based storage inserts a Logical Volume
manager on a disk. VDIs are represented as volumes within the Volume
manager.
• Introduced LVHD in XenServer 5.5
ᵒ Enhances LVM for SRs
ᵒ Hosts VHD files directly on LVM volumes
ᵒ Adds advanced storage features like Fast Cloning and Snapshots
ᵒ Fast and simple upgrade
ᵒ Backwards compatible
© 2012 Citrix | Confidential – Do Not Distribute
Virtual Disk Data Formats
File-based VHD
• VM images are stored as thin-provisioned VHD format files on either a local
non-shared file system (EXT type SR) or a shared NFS target (NFS type SR).
• What is VHD?
ᵒ A Virtual Hard Disk (VHD) is a file formatted to be structurally identical to a physical
Hard Disk Drive.
ᵒ Image Format Specification was created by Microsoft in June, 2005.
© 2012 Citrix | Confidential – Do Not Distribute
Virtual Disk Data Formats
StorageLink (LUN per VDI)
• LUNs are directly mapped to VMs as VDIs by SR types that provide an
array-specific plug-in (NetApp, Equallogic or StorageLink type SRs).
The array storage abstraction therefore matches the VDI storage
abstraction for environments that manage storage provisioning at an
array level.
© 2012 Citrix | Confidential – Do Not Distribute
Virtual Disk Data Formats
StorageLink Architecture
ᵒ XenServer calls direct to Array API‘s to provision
and adjust storage on demand.
ᵒ Fully leverages array hardware capabilities.
ᵒ Virtual disk drives are individual LUNs.
ᵒ High performance storage model.
ᵒ Only the server running a VM connects to the
individual LUN(s) for that VM.
© 2012 Citrix | Confidential – Do Not Distribute
LVM vs. StorageLink
Diagram, left (XenServer 6.1 iSCSI / FC + integrated StorageLink): each VM virtual disk maps directly to its own LUN on the array.
Diagram, right (XenServer 6.1 iSCSI / FC): the Storage Repository is an LVM volume group on a single LUN; each VM virtual disk is a VHD (header + data) stored in an LVM logical volume.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Storage
Command line vs. XenCenter
Xen Storage – command line vs. XenCenter
• Can you trust XenCenter info?
ᵒ The answer is – NO
ᵒ XenCenter relies on the database integrity of the XenServer resource pool
ᵒ Each entity of XenServer (resource pool data, CPU, NIC and Storage) is stored in XenServer’s built-in database
ᵒ From a storage perspective, that means:
• PBD
• SR
• VDI
• VBD
ᵒ All the above entities are stored in each XenServer’s database
ᵒ And now…why should I remember this?
• If a VM fails to boot, the above information will come in handy during troubleshooting
• If an SR disappears from XenServer (oh yes, it happens!), you will need these commands to recreate the storage path
© 2012 Citrix | Confidential – Do Not Distribute
Xen Storage – Command line
• All Xen commands start with xe <Object>-<Action> <parameters>
ᵒ For example:
# xe vbd-list vm-name-label=WIN7
The above command will list all the disks attached to the VM
1. Object can be SR, VBD, VDI, PBD
2. Action(s) can be create, delete, list, probe, scan etc.
3. Parameters are specific to the command
• The entire list of xe commands can be found here; a walkthrough of the storage objects is sketched below
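A hedged walkthrough of chasing those objects for one VM from the CLI (the VM name mirrors the example above; UUIDs are placeholders):
# VM -> VBD -> VDI -> SR -> PBD
xe vm-list name-label=WIN7 --minimal                        # VM uuid
xe vbd-list vm-name-label=WIN7 params=uuid,vdi-uuid,device  # disks attached to the VM
xe vdi-list uuid=<vdi-uuid> params=name-label,sr-uuid       # which SR holds each disk
xe pbd-list sr-uuid=<sr-uuid> params=host-uuid,currently-attached   # is the SR plugged on each host?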
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Storage
Storage XenMotion
Storage XenMotion – under the hood
• New with XenServer 6.1
• Supported to migrate VMs across multiple resource pools
• Also supports Storage migration between local storage and shared storage
• Please keep in mind – it's version 1.0
© 2012 Citrix | Confidential – Do Not Distribute
Storage XenMotion and VDI Migration
Storage XenMotion
© 2012 Citrix | Confidential – Do Not Distribute
Live VDI Migration
Feature Overview: Use cases
1. Upgrade a storage array
2. Upgrade a pool with VMs on local storage
3. Provide tiered storage arrays
4. Rebalance VMs between XenServer pools, or CloudStack clusters
“The Cloud” was the major use case we had in mind when designing this.
© 2012 Citrix | Confidential – Do Not Distribute
Feature Overview: Bird’s-eye view
Cross-pool migration and VDI migration consist of the following:
1. Synchronously mirror VDIs between source and destination
2. Create new VM object on destination pool (new ref, same uuid)
3. When copy complete, migrate VM as usual
Note: VDI migrate implemented with “localhost” cross-pool migrate!
© 2012 Citrix | Confidential – Do Not Distribute
Feature Overview: Limitations
• No more than 6 VDIs, and no more than 1 VM snapshot
• No more than 3 concurrent ops per host
• No VMs with PCI pass-through enabled
• The following are untested and unsupported:
ᵒ HA and WLB on source or destination pool
ᵒ DVSC integration
ᵒ VDI-migrate between multiple local SRs
© 2012 Citrix | Confidential – Do Not Distribute
Feature Overview: Caveats
• Minimum network or storage throughput requirements? This is currently
unknown, but we’ll begin investigations soon.
• Can’t check whether destination SR has space available for incoming VDIs – if
you fill up an SR, your migration will fail to complete.
• Extra temporary storage is required on the source (and destination) SR, so you
must be careful when migrating VMs off of a full SR.
• IO performance inside guest will be reduced during migration because of
synchronous mirroring of VDIs.
© 2012 Citrix | Confidential – Do Not Distribute
Feature Overview: API walkthrough
Host.migrate_receive(host:ref, network:ref, options:map) -> receive_token
VM.migrate_send(vm:ref, receive_token, live:bool, vdi_sr:map, vif_network:map, options:map) -> None
VM.assert_can_migrate(vm:ref, dest:map, live:bool, vdi_sr:map, vif_network:map, options:map) -> None
VDI.pool_migrate(vdi:ref, sr:ref, options:map) -> vdi:ref
© 2012 Citrix | Confidential – Do Not Distribute
Feature Overview: CLI walkthrough
• xe vm-migrate
ᵒ New params: remote-address, remote-username, remote-password, remote-network, vif, vdi
• Extends the original vm-migrate command
• Bold params are required to enable cross-pool migration
• vif and vdi map VIFs to target networks and VDIs to target SRs
• remote-network specifies the network used for data transfer
• Can use host/host-uuid to specify the host on the pool to send the VM to
• xe vdi-pool-migrate
ᵒ Params: uuid, sr-uuid
• uuid of the target VDI
• sr-uuid of the destination SR (see the sketched invocations below)
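A hedged sketch of both commands (addresses, credentials and UUIDs are placeholders):
# Cross-pool Storage XenMotion of one VM
xe vm-migrate uuid=<vm-uuid> remote-address=<dest-master-ip> \
  remote-username=root remote-password=<password> remote-network=<dest-network-uuid> \
  vdi:<vdi-uuid>=<dest-sr-uuid> vif:<vif-uuid>=<dest-network-uuid> live=true
# Live-migrate a single virtual disk to another SR in the same pool
xe vdi-pool-migrate uuid=<vdi-uuid> sr-uuid=<dest-sr-uuid>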
© 2012 Citrix | Confidential – Do Not Distribute
Feature Overview: GUI walkthrough
© 2012 Citrix | Confidential – Do Not Distribute
Architecture: VDI operations
• For each VDI:
ᵒ Snapshot VDI and synchronously mirror all subsequent writes to destination SR
ᵒ Copy the snapshot to destination SR
ᵒ Finally, compose those writes onto the snapshot on the destination SR
• Continue to mirror all new writes
• Each of these operations occurs sequentially for each VDI, but each VDI mirror
continues until the VM migration is complete
• VM memory is copied only after final VDI compose is complete
Timeline: VDI 1 is snapshotted and mirroring starts, then its snapshot is copied; VDI 2 is snapshotted and mirroring starts, then its snapshot is copied; finally the VM memory is copied while all mirrors keep running.
© 2012 Citrix | Confidential – Do Not Distribute
VDI mirroring in pictures
Diagram: the source VDI (live, shown as a gradient; empty blocks uncolored) keeps receiving writes while it is mirrored to the destination; the VM's root disk on the source is paired with its mirror on the destination.
© 2012 Citrix | Confidential – Do Not Distribute
VMware Storage VMotion In Action
Diagram of the steps:
1. Copy the VM home to the new location
2. Start changed block tracking
3. Pre-copy the disk to the destination (multiple iterations)
4. Copy all remaining disk blocks, then "fast suspend/resume" the VM so it starts running on the new home and disks
5. Delete the original VM home and disks
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Storage
Debugging Storage Issues
Troubleshooting tips
• Check logs: /var/log/{xensource.log,SMlog,messages}
ᵒ Note: xensource.log and messages both implemented with syslog, so they now have
consistent timestamps!
• xn command: CLI to xenopsd
ᵒ Try ‘xn help’ for documentation.
• tap-ctl command
ᵒ Could be useful for diagnosing problems (can’t describe usage here)
© 2012 Citrix | Confidential – Do Not Distribute
Logging/Debugging
• All backend drivers use SMlog to record storage events
ᵒ /var/log/SMlog
• Logs are rotated, same as system message log, xensource.log files
• In the python module util.py there is a helper log function ->
util.SMlog(STRING)
• On retail edition, all python backend files are editable for logging/debugging
purposes
• Some things to look for in SMlog (tail -f /var/log/SMlog); a few quick checks are sketched below:
ᵒ Slow SR Probing: Check for breaks in the SMLog timestamps or for
SR_BACKEND_FAILURE_46, this is likely an issue with multipathing
ᵒ Storage Issue: Look for the word FAILURE within the log, this is likely an issue
communicating with the storage or a corrupted SR
ᵒ Metadata Corruption: Look for SR_BACKEND_FAILURE_18, this is likely an issue
with metadata being corrupted due to special characters being used in VM names
ᵒ VDI Unavailable: This is typically caused by faulty clones/snapshots but could also
point to misbehaving Storage.
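A minimal sketch of those checks from Dom0 (the grep patterns follow the failure strings named above):
tail -f /var/log/SMlog                        # watch storage events live
grep -i "FAILURE" /var/log/SMlog | tail -20   # most recent backend failures
grep "SR_BACKEND_FAILURE_46" /var/log/SMlog   # slow probing / multipathing suspects
grep "SR_BACKEND_FAILURE_18" /var/log/SMlog   # possible metadata corruption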
© 2012 Citrix | Confidential – Do Not Distribute
Debugging tips
• Use tail –f on any log file while actively debugging a system
• Correlate logs between xensource.log, SMLog and messages
• Verify IP settings, firewall config for any IP based storage connections
• Check status of dependent objects, such as PIFs, VIFs, PBDs etc...
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Storage
Lab 2 – Storage Lab
In this lab… 90 minutes
• Create a local SR
• Create a shared SR with Integrated Storagelink
• Storage XenMotion Lab
• (Advanced Lab) Create shared SR using NetApp SIM
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Networking
Foundational Concepts
Agenda
• Networking Overview
• XenServer Networking
• XenServer 6.1 – What’s New?
ᵒ Link Aggregation (LACP)
• Debugging Networking Issues
• Key Takeaways and References
© 2012 Citrix | Confidential – Do Not Distribute
Before we get started…
• Let's familiarize ourselves with the following terms:
Common Enterprise Networking terminologies:
ᵒ NIC
ᵒ Switch – Core
ᵒ Router, Hub
ᵒ Firewall
ᵒ DHCP, PXE, TFTP, TSB
XenServer:
ᵒ LACP
ᵒ VLAN
ᵒ VIF
ᵒ PIF
© 2012 Citrix | Confidential – Do Not Distribute
Example Enterprise Switching Environment
Server
Farm
Firewall
Backbone Switch
Distribution Switch
Access Switch
© 2012 Citrix | Confidential – Do Not Distribute
Internet
Example WAN Environment
© 2012 Citrix | Confidential – Do Not Distribute
Network Vendors
Vendor | Components
Cisco | Switches, ACE (load balancing), ASA (firewall), Routers
Juniper | Network security solutions
Brocade | Switches, Routers
Citrix NetScaler, F5, Riverbed | Access Gateway, WAN accelerators, load balancing
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Advanced Training
Introduction to Enterprise Networking
XenServer Networking Conceptual Model
Diagram: in the Control Domain (Dom0), the Linux driver for the physical network card is exposed as a PIF; a bridge links the PIF to the VIFs, and netback in Dom0 pairs with netfront (the vNIC driver) inside each virtual machine. All of this runs on the Xen hypervisor over the hardware.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Networking Configurations
Diagram: XenCenter and the command line talk to XAPI; XAPI stores the network configuration in the XenServer pool DB and writes the Linux config files consumed by the Linux NIC drivers that drive the network card.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Network Terminology
Diagram: Network 0 (xenbr0) is a virtual switch backed by PIF eth0 on a physical network card; virtual machines attach to it through VIFs. A private network (xapi1) is a virtual switch with VIFs but no PIF.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Network Terminology
Diagram: two virtual switches, Network 0 (xenbr0) on PIF eth0 and Network 1 (xenbr1) on PIF eth1, each backed by its own physical network card and each carrying VIFs from the virtual machines.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Network Terminology
Diagram: PIFs eth0 and eth1 (two physical network cards) are combined into Bond 0+1 (xapi2), represented by PIF bond0; virtual machine VIFs attach to the bonded network.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Network Terminology
Diagram: VLAN 25 and VLAN 55 are networks created on top of the PIFs of Network 0 and Network 1; virtual machine VIFs attach to the VLAN networks as if they were ordinary virtual switches.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Network Terminology
Diagram: VLAN 25 and VLAN 55 are layered on top of Bond 0+1 (two physical network cards); virtual machine VIFs attach to the VLAN networks carried over the bond.
© 2012 Citrix | Confidential – Do Not Distribute
Bonding Type (Balance SLB)
Diagram: with balance-slb, each virtual machine's VIF traffic is assigned to one of the two NICs in the bond (connected to stacked switches), and the assignment is periodically rebalanced (the timer in the diagram shows roughly 10-second intervals).
© 2012 Citrix | Confidential – Do Not Distribute
What about faster iSCSI/NFS?
Diagram: Dom0 runs the iSCSI or NFS client software, so storage traffic originates from Dom0 and shares the same bonded NICs (into stacked switches) with the virtual machines' traffic.
© 2012 Citrix | Confidential – Do Not Distribute
Advanced XenServer Training
Distributed vSwitch
Open Virtual Switch for XenServer
Visibility· Resource control · Isolation · Security
Diagram: VMs across multiple hypervisor hosts connected by Open vSwitches.
• Open Source Virtual Switch maintained at www.openvswitch.org
• Rich layer 2 feature set (in contrast to others on the market)
• Ships with XenServer 5.6 FP1 as a post-install configuration option
© 2012 Citrix | Confidential – Do Not Distribute
Distributed Virtual Switch Controller
Diagram: the DVS Controller is a XenServer Virtual Appliance that controls multiple Open vSwitches across the hypervisor hosts in the pool.
© 2012 Citrix | Confidential – Do Not Distribute
Distributed Virtual Switch
Built-in policy-based ACLs move with VMs
Diagram: policy-based ACLs are attached to each Virtual Interface (VIF) {MAC, IP} and move with the VM as it migrates between hypervisor hosts under the DVS, for example:
permit tcp 10.0.0.0 0.0.0.255 10.20.0.0 0.0.0.255 eq domain
permit tcp 192.168.0.0 0.0.0.255 10.20.0.0 0.0.0.255 eq domain
permit tcp 172.16.0.0 0.0.0.255 10.20.0.0 0.0.0.255 eq domain
permit udp 10.0.0.0 0.0.0.255 10.20.0.0 0.0.0.255 eq domain
permit udp 192.168.0.0 0.0.0.255 10.20.0.0 0.0.0.255 eq domain
permit udp 172.16.0.0 0.0.0.255 10.20.0.0 0.0.0.255 eq domain
permit tcp 10.0.0.0 0.0.0.255 10.20.0.0 0.0.0.255 eq 123
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Networking
LACP Support – Finally!!
Overview
• Existing bonding modes and related issues:
• active-active (balance-slb)
• ARP table confusion for some switches (due to MAC flapping)
• active-passive (active-backup)
• only one link used at a time
• LACP (802.3ad)
• not supported
• working only on Linux Bridge
• The modern way – LACP bonding mode:
• Link aggregation on both server and switch side.
• Sides communicate with LACP protocol.
• Better load balancing.
© 2012 Citrix | Confidential – Do Not Distribute
Diagram: VM 1 and VM 2 each attach through a virtual interface (VIF); their per-application traffic (App 1, App 2) is spread across NIC 1 and NIC 2, which form an LACP bond into stacked switches.
© 2012 Citrix | Confidential – Do Not Distribute
Architecture
• For given NICs, we bond the PIFs and set mode to LACP.
• Bond object is created on xapi level.
• Command bond-create executes vSwitch commands.
• LACP is configured on the switch ports.
• Switch and server exchange LACP frames.
• Switch chooses the active members.
• Switch and server balance their outgoing traffic independently.
© 2012 Citrix | Confidential – Do Not Distribute
Load balancing
• Hash-based traffic balancing:
ᵒ vSwitch computes flow number (0-255) for each packet
ᵒ Each flow (hash) is assigned to an active link
ᵒ Flows can be moved between links (re-balancing every 10s)
• Two hashing algorithms:
ᵒ tcpudp_ports: based on IP and port of source and destination
• default algorithm for LACP
• source MAC also taken into account
• VM traffic from different applications can use more than one link
ᵒ src_mac: based on source MAC
• vSwitch uses the same mechanism as for active-active
• traffic from one VIF will not be split
© 2012 Citrix | Confidential – Do Not Distribute
Configuration
• Two new bond modes in XenCenter wizard:
ᵒ LACP with load balancing based on IP and port of source and destination
ᵒ LACP with load balancing based on source MAC address
• CLI commands:
Create an LACP bond:
xe bond-create mode=lacp network-uuid=… pif-uuids=… properties:hashing-algorithm=src_mac
Specify the hashing algorithm on an existing bond:
xe bond-param-set uuid=… properties:hashing_algorithm=tcpudp_ports
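Putting it together, a hedged end-to-end sketch (the network name, NIC devices and host UUID are placeholders; the matching LAG must already be configured on the switch):
# 1. Create the network that the bond will back
NET=$(xe network-create name-label=lacp-bond-net)
# 2. Find the PIFs of the NICs to aggregate on this host (eth0/eth1 are examples)
PIFS=$(xe pif-list host-uuid=<host-uuid> device=eth0 --minimal),$(xe pif-list host-uuid=<host-uuid> device=eth1 --minimal)
# 3. Create the LACP bond over those PIFs
xe bond-create mode=lacp network-uuid=$NET pif-uuids=$PIFS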
© 2012 Citrix | Confidential – Do Not Distribute
Switch configuration
• Switch must support 802.3ad (LACP)
• Choose unused Link Aggregation Group (LAG)
• For each NIC being bonded:
ᵒ Find the switch port/unit connected to the NIC
ᵒ Set LAG membership for the port
ᵒ Enable LACP for the port
• If necessary, bring up the LAG interface
• If required, configure the VLAN settings on the LAG
© 2012 Citrix | Confidential – Do Not Distribute
Side note: Static LAG
• Static LAG:
ᵒ Ports are members of a Link Aggregation Group.
ᵒ Ports do not have LACP set.
ᵒ All links within the LAG are active.
• Static LAG is not LACP.
ᵒ Use balance-slb, not LACP bond mode for static LAG.
© 2012 Citrix | Confidential – Do Not Distribute
Limitations and changes
• LACP for Linux Bridge is not supported. If used anyway, note that it was renamed from 802.3ad to lacp.
• EtherChannel/Cisco Port Aggregation Protocol (PAgP) is not supported.
• Switches must be stacked.
• Up to four NICs per bond for all bond modes.
• Only 2 NICs per bond supported for Linux Bridge.
• Improvements for bonding planned in the near future:
ᵒ Switching off dynamic load balancing (preventing MAC flapping)
ᵒ Choice of active link for active-passive bonds
© 2012 Citrix | Confidential – Do Not Distribute
Troubleshooting
• Frequent surprises:
ᵒ Wiring misinformation – wrong ports might be aggregated
ᵒ The switch rules – and it can decide to use just one link as active
ᵒ No HCL – 802.3ad is a standard
ᵒ Longer set-up – more time might be required for LACP and DHCP
ᵒ One ‘fat’ flow still will not be split
ᵒ In ovs commands, ‘bond mode’ means ‘hashing algorithm’
• Mismatching settings
ᵒ LACP only on the server side – you effectively fall back to active-active mode
ᵒ LACP only on the switch side – you are at the mercy of the switch
© 2012 Citrix | Confidential – Do Not Distribute
Debugging
• CLI command:
xe bond-param-list
• vSwitch commands:
ᵒ ovs-vsctl list port
ᵒ ovs-appctl bond/show bond0 (lacp_negotiated: false/true)
ᵒ ovs-appctl lacp/show bond0 (actor and partner information, hashes)
• xensource.log – network daemon entries, including DHCP
• /var/log/messages – vSwitch entries, including hash shifting
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Networking
SR IOV and more
SR-IOV
• SR-IOV allows a VM to talk directly to a NIC rather than having to pass network traffic through Dom0
• SR-IOV virtualizes the network in hardware, rather than in software
• Benefits and disadvantages:
ᵒ Good: there is generally a significant speed improvement in per-VM bandwidth (and usually in latency) when using SR-IOV instead of Dom0-mediated networking
ᵒ Good: partitioning
ᵒ Good: avoiding the switch
ᵒ Bad: avoiding the switch
ᵒ Bad: migration issues
© 2012 Citrix | Confidential – Do Not Distribute
SR-IOV Pre-requisites
• SR-IOV capable network device (i.e. Intel 82599 10 Gb-E Controller)
• The SR-IOV NIC cannot be used as the management interface.
ᵒ A second physical NIC must be installed in the system
• iommu must be enabled on the XS host
ᵒ /opt/xensource/libexec/xen-cmdline --set-xen iommu=1
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Networking – without SR-IOV
Diagram: without SR-IOV, every VM's vNIC traffic flows through netfront in the guest to netback, the bridge, the PIF and the Linux driver in Dom0 before reaching the network card.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Networking – with SR-IOV
Diagram: with SR-IOV, the VM loads a VF driver and talks directly to a virtual function of the SR-IOV-aware network card, bypassing the Dom0 netback/bridge path (which remains available for other VMs).
© 2012 Citrix | Confidential – Do Not Distribute
Using Multi-Queue NICs
Diagram of the receive path with a multi-queue NIC: the NIC's hardware demux steers each incoming packet by destination MAC address into a per-guest RX queue; the physical driver in the driver domain posts buffers on the device queue and the data is DMA'd in; the backend driver maps the buffer, posts a grant on the I/O channel and raises an event/IRQ to the guest; the frontend driver in the guest unmaps the buffer and pushes it into the guest network stack. Advantages of multi-queue: one RX queue per guest, avoiding a data copy and avoiding the software bridge.
© 2012 Citrix | Confidential – Do Not Distribute
VLAN Overview
• A VLAN allows a network administrator to create groups of logically
networked devices that act as if they are on their own independent
network, even if they share a common infrastructure with other VLANs.
• Using VLANs, you can logically segment switched networks based on
functions, departments, or project teams.
• You can also use a VLAN to geographically structure your network to
support the growing reliance of companies on home-based workers.
• These VLANs allow the network administrator to implement access and
security policies to particular groups of users.
© 2012 Citrix | Confidential – Do Not Distribute
VLAN Overview
© 2012 Citrix | Confidential – Do Not Distribute
VLAN in details
• A VLAN is a logically separate IP subnetwork.
• VLANs allow multiple IP networks and subnets to exist on the same
switched network.
• For computers to communicate on the same VLAN, each must have an
IP address and a subnet mask that is consistent for that VLAN.
• The switch has to be configured with the VLAN and each port in the
VLAN must be assigned to the VLAN.
© 2012 Citrix | Confidential – Do Not Distribute
VLAN in details
• A switch port with a singular VLAN configured on it is called an access
port.
• Remember, just because two computers are physically connected to
the same switch does not mean that they can communicate.
• Devices on two separate networks and subnets must communicate via
a router (Layer 3), whether or not VLANs are used.
© 2012 Citrix | Confidential – Do Not Distribute
Dom0 – Limitations with VLAN
• Dom0 doesn’t support VLANs
ᵒ The management network needs to be on its own NIC
ᵒ XenMotion cannot be separated from the management NIC
ᵒ iSCSI and NFS can be on VLANs
ᵒ Dom0 supports NIC bonding
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Networking
Lab 3 – Networking Lab
In this lab… 30 minutes
• Create a LACP bond
• Create a distributed virtual switch
• Explore VIFs, PIFs and eth# interfaces
© 2012 Citrix | Confidential – Do Not Distribute
Resource Links
• XS Administrator’s Guide – revised section on bonding
• The standard:
http://standards.ieee.org/getieee802/download/802.1AX-2008.pdf
• Internal wiki page:
http://confluence.uk.xensource.com/display/engp/Bonding+and+LACP
• Feature lead:
Kasia Borońska ([email protected])
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Health
Monitoring & Troubleshooting
Agenda
• Managing & Monitoring this wonderful piece of software
• It’s broken, How do I fix it?
• Show me Your Commands!
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Monitoring &
Troubleshooting
Managing & Monitoring this wonderful piece of
Software
Using Console Folder View
• Manage XenServer environment in your own way
• Selection styles:
• SRs (Remote and Local)
• Snapshots
• Networks
• VMs
• Pools
• Servers
• Custom Folders
• Simple drag & drop functionality
© 2012 Citrix | Confidential – Do Not Distribute
Delegated Management (Web Self Service)
• Delegate VM level access to end-users
• View consolidated virtual machine guests from multiple resource pools
• Basic life cycle operations such as Start, Stop, Suspend and Reset on virtual
machine
• Remote login (VNC for Linux Guests and RDP for Windows Guests) to the
virtual machine guests
• Force Shutdown & Force Reboot (Only with XS Tools)
• Fully AD Integrated
© 2012 Citrix | Confidential – Do Not Distribute
Web Self Service Roles
XenServer Roles | WSS Roles
Pool Admin | WSS Admin
Pool Operator | WSS Operator
VM Admin, VM Operator, VM Power Admin, Read Only | WSS User
No Role (Default) | WSS User
© 2012 Citrix | Confidential – Do Not Distribute
Web Self Service Oddities
Sharing Through Tags? What?
© 2012 Citrix | Confidential – Do Not Distribute
Web Self Service Oddities
• Authentication Can’t be changed – Better Make up your mind
• Integrated Auth is pretty annoying
• Not Really AD, XS AD!
• Process is still manual for AD?
ᵒ Must Add every User unless auto-login is enabled
© 2012 Citrix | Confidential – Do Not Distribute
Legacy Performance Monitoring
• In Citrix XenServer 5.6+ you are no longer able to collect host and VM performance data via the “legacy” API calls. Instead, you are directed to use the web services via URL
© 2012 Citrix | Confidential – Do Not Distribute
RRDs (Round Robin Databases)
• RRDs are the OpenSource industry standard, high performance data logging
and graphing system for time series data.
• XenServer’s Performance monitoring file is actually an RRDTool File, go figure.
• The download format is (see the example below):
http://<username>:<password>@<host>/rrd_updates?start=<seconds since Jan 1, 1970>
ᵒ Use the &host=true suffix to get the RRD for a XenServer host
• The XML downloaded contains every VM's data. It is not possible to query a particular VM on its own or a particular parameter.
ᵒ To differentiate, the 'legend' field is prefixed with the VM's uuid.
• Free RRDTool available at: http://oss.oetiker.ch/rrdtool/
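For example, a minimal sketch of pulling the last few minutes of updates (host name and credentials are placeholders):
# Fetch host + VM metrics recorded over the last 5 minutes
START=$(( $(date +%s) - 300 ))
curl -s "http://root:[email protected]/rrd_updates?start=${START}&host=true" -o rrd_updates.xml
# Each legend entry is prefixed with the VM (or host) uuid, so metrics can be split per VM
grep -c "<entry>" rrd_updates.xml       # rough count of returned legend entries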
© 2012 Citrix | Confidential – Do Not Distribute
TAAS (Previously XenoScope)
• XenServer Metrics:
ᵒ Uses RRD Metrics
ᵒ Can be drilled down by domain, time frame, graph size
ᵒ Host memory usage pie chart
ᵒ Host Hardware
ᵒ Pool hosts Overview
ᵒ Pool Hardware Comparison
ᵒ Installed Hotfixes Comparison
ᵒ Running Virtual Machines Overview
ᵒ CPU Flags Overview and Comparison
ᵒ Recommendations
ᵒ Server Log review tools
© 2012 Citrix | Confidential – Do Not Distribute
TAAS (Previously XenoScope)
A Very Clean Interface (Also for XA/XD/NS)
© 2012 Citrix | Confidential – Do Not Distribute
DoubleRev’s Management Pack
(http://www.doublerev.com/ )
• Pros:
ᵒ Automatic and agent-less mass update of XenTools
ᵒ DVD Manager
ᵒ Storage Analyzer
ᵒ Mass VM-Copy
ᵒ Re-convert templates to virtual machines
ᵒ VM-Inventory Export
ᵒ Seamless XS Console Integration
© 2012 Citrix | Confidential – Do Not Distribute
DoubleRev’s Management Pack
(http://www.doublerev.com/ )
• Cons:
ᵒ CLI Commands
ᵒ Limited Monitoring
ᵒ Most of these features will probably come in a future XS Console
ᵒ Costs $$$
• Verdict:
ᵒ Should only be considered if you are extremely lazy, as most of these features are nothing terribly difficult to accomplish
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Monitoring &
Troubleshooting
It’s Broken, How do I fix it?
Methodology For Recovery Overview
© 2012 Citrix | Confidential – Do Not Distribute
Step 1 - Is the Pool Master down?
• Then the Master server cannot be contacted and is likely non-functional.
ᵒ If connection to the XenServer pool is not possible from XenCenter
ᵒ If issuing commands (xe host-list) at CLI of a surviving host returns “Cannot perform
operation as the host is running in emergency mode”
• Under some circumstances (such as power outages) there may have been
multiple XenServer host failures in the pool. In this case, the pool master
should be attempted to be recovered first.
© 2012 Citrix | Confidential – Do Not Distribute
Who’s the Pool Master Anyways?
• Determined through pool.conf, located under /etc/xensource
ᵒ Contains master on the Pool Master
ᵒ Contains slave:<pool master IP> on all other hosts
• Important commands when in emergency mode (see the sketch below):
ᵒ xe pool-emergency-reset-master
ᵒ xe pool-emergency-transition-to-master
ᵒ Follow up with: xe pool-recover-slaves
ᵒ Check that the default SR is valid: xe pool-param-list
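• A minimal recovery sketch, run in dom0 on the surviving member being promoted;
the pool UUID lookup and grep filter are conveniences added here, not from the slide:
  cat /etc/xensource/pool.conf                       # shows "master" or "slave:<pool master IP>"
  xe pool-emergency-transition-to-master             # promote this host to pool master
  xe pool-recover-slaves                             # repoint the remaining members at the new master
  xe pool-param-list uuid=$(xe pool-list --minimal) | grep default-SR    # confirm the default SR is valid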
© 2012 Citrix | Confidential – Do Not Distribute
Step 2 - Recover Pool operations
If recovery of the pool master is not possible within a short timeframe, it will be
necessary to promote a member server to a master to recover pool management
and restart any failed Virtual Machines.
1. Select any running XenServer within the pool to be promoted.
2. From that server’s command line, issue: xe pool-emergency-transition-to-master
3. Once the command has completed, recover connections to the other member
servers with: xe pool-recover-slaves
4. Verify that pool management has been restored by issuing a test command at the
CLI (xe host-list)
© 2012 Citrix | Confidential – Do Not Distribute
Step 3 - Verify which XenServer(s) failed
This step verifies which XenServer(s) in the pool have failed.
1. Issue the following command at the command line of a surviving pool
member: xe host-list params=uuid,name-label,host-metrics-live
2. Any servers listed as host-metrics-live = false have failed.
If it was necessary to recover the Pool Master in Step 2, the Master
will now show as true.
3. Note down the first few characters of the UUID of any failed servers (or the
UUID of the Master server if it originally failed).
© 2012 Citrix | Confidential – Do Not Distribute
Step 3 - Verify which XenServer(s) failed
ᵒ Result of xe host-list params=uuid,name-label,host-metrics-live
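• The original slide shows a screenshot of this output; purely as an illustration, the
format looks roughly like this (the UUID and host name are made up):
  uuid ( RO)                 : 8f2d1c3a-0b6e-4a7d-9c21-5e4f7a1b2c3d
           name-label ( RW)  : xenserver-host-02
   host-metrics-live ( RO)   : false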
© 2012 Citrix | Confidential – Do Not Distribute
Step 4 - Verify which Virtual Machines have failed
• Regardless of whether a Master or Slave server fails, virtual machines running
on other hosts continue to run.
1. Issue the following command at the CLI of a surviving server, using the UUIDs
from Step 3 (type the first few characters of the UUID and press [Tab] to complete):
xe vm-list is-control-domain=false resident-on=<UUID_of_failed_server>
2. Note that the power state of these VMs is still “running” even though the
server has failed. Step 5 will take care of this problem.
3. Repeat this step if necessary using the UUID of any other failed servers
(including the master).
© 2012 Citrix | Confidential – Do Not Distribute
Step 4 - Verify which Virtual Machines have failed
• Result of xe vm-list is-control-domain=false resident-on=<UUID_of_failed_server>
(an illustrative example of the output format follows)
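• The slide shows a screenshot; as an illustration only, the default xe vm-list output
format looks like this (UUID and name are made up):
  uuid ( RO)           : 1a2b3c4d-5e6f-4a1b-8c9d-0e1f2a3b4c5d
       name-label ( RW): Win7-Desktop-01
      power-state ( RO): running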
© 2012 Citrix | Confidential – Do Not Distribute
Step 5 - Reset power state on failed VMs
• To restart VMs after a host failure, it is necessary to reset their power state.
1. Issue the following command at the command line of a surviving server:
xe vm-reset-powerstate resident-on=<UUID_of_failed_server> --force --multiple
ᵒ Alternately, you can reset VMs individually (see the sketch below).
2. Verify that there are no VMs still listed as resident on the failed server by
repeating Step 4. The vm-list command should now return no results.
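• A minimal sketch combining the bulk reset, the single-VM alternative (using the
force=true form that appears later in this deck) and the re-check; all UUIDs are
placeholders:
  xe vm-reset-powerstate resident-on=<UUID_of_failed_server> --force --multiple   # reset every VM on the dead host
  xe vm-reset-powerstate uuid=<VM_UUID> force=true                                # ...or reset one VM at a time
  xe vm-list is-control-domain=false resident-on=<UUID_of_failed_server>          # should now return nothing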
© 2012 Citrix | Confidential – Do Not Distribute
Step 6 - Restart VMs on another XenServer
1. Load XenCenter and verify that each VM that was originally running on the
failed server is now marked as halted (red icon next to the VM)
Note: VMs which have a home server assigned will not appear in XenCenter – why?
ᵒ xe vm-param-set uuid=<uuid of vm to change> affinity=<uuid of new home server>
2. Restart each VM on a surviving pool member (see the sketch below)
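• A minimal sketch, assuming you want to re-home a stranded VM and then start it
on a specific surviving host; the UUIDs and host name are placeholders:
  xe vm-param-set uuid=<VM_UUID> affinity=<UUID_of_surviving_host>      # assign a new home server
  xe vm-start uuid=<VM_UUID> on=<name-label_of_surviving_host>          # start the VM on that host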
© 2012 Citrix | Confidential – Do Not Distribute
Collecting Crash Dumps (Regular Style)
1. Configure the system for a memory dump:
http://support.microsoft.com/kb/927069
2. Find the Universally Unique Identifier (UUID) of the VM from which you
would like to collect the memory dump: xe vm-list
3. Based on the UUID of the virtual machine, find its domain ID on the current
host (the domain ID changes every time the VM starts): list_domains
4. Use the domain ID (the first number listed) to crash the guest (see the
sketch below): /usr/lib/xen/bin/crash_guest <VM Domain ID>
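• A minimal end-to-end sketch run in dom0; the domain ID shown is illustrative:
  xe vm-list                               # note the UUID of the target VM
  list_domains                             # match that UUID to its current domain ID
  /usr/lib/xen/bin/crash_guest 5           # crash domain 5 to trigger the memory dump inside the guest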
© 2012 Citrix | Confidential – Do Not Distribute
Collecting Crash Dumps (Ninja Style)
• Crashdump redirection to dom0 filespace is useful:
ᵒ When streaming PVS guests, where you cannot save the crashdump to the pagefile
after a crash because the network is disabled automatically
ᵒ When the root drive doesn’t have enough space to store a pagefile as large as the
guest’s memory size
• Consider mounting an iSCSI LUN or an NFS/CIFS filesystem under /crashdumps to
store large dumps, since the root partition typically has less than 1.5 GB of free
space (see the sketch below)
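• A minimal sketch using an NFS export; the server name and export path are
placeholders:
  mkdir -p /crashdumps
  mount -t nfs nfsserver01:/export/crashdumps /crashdumps
  df -h /crashdumps                        # confirm there is room for a full memory dump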
© 2012 Citrix | Confidential – Do Not Distribute
Collecting Crash Dumps (Ninja Style)
1. Open /opt/xensource/libexec/qemu-dm-wrapper with vi
2. Search for "SDL_DISABLE_WAITMAPPED" and add the following lines
immediately after it:
ᵒ qemu_args.append("-priv")
ᵒ qemu_args.append("-dumpdir")
ᵒ qemu_args.append("/crashdumps")
ᵒ qemu_args.append("-dumpquota")
ᵒ qemu_args.append("1024")
3. Create the directory where the dumps will be stored: mkdir -p /crashdumps
© 2012 Citrix | Confidential – Do Not Distribute
Collecting Crash Dumps (Ninja Style)
4. Reboot the VM
5. Use the crash_guest utility to force the crashdump. If the condition leads to
a BSOD on its own, just wait for the streaming to finish and check the file
under /crashdumps. Example:
ᵒ Find the domain ID: xe vm-list name-label="Windows Server 2003" params=dom-id
ᵒ Result: dom-id ( RO) : 3
ᵒ Crash domain ID 3: /usr/lib/xen/bin/crash_guest 3
6. Copy the dumps off the host with a tool such as WinSCP.
© 2012 Citrix | Confidential – Do Not Distribute
VM Needs Imaginary Storage
• While working on-site with a client, XenServer decided to subscribe to the cult
of Jacob Marley and refused to turn on a VM, saying ‘This VM needs storage that
cannot be seen from that host’
© 2012 Citrix | Confidential – Do Not Distribute
VM Needs Imaginary Storage
• Caused by CIFS ISO libraries / DVD drives attached to the VM when the machine
where the ISOs/DVDs resided has failed
1. xe vm-cd-eject --multiple (gets rid of the faulty CD/DVD media)
© 2012 Citrix | Confidential – Do Not Distribute
File System on Control Domain Full
© 2012 Citrix | Confidential – Do Not Distribute
File System on Control Domain Full
• Result of the Dom0 Storage allocation becoming Full
• Typically caused by overgrown Log Files
ᵒ XenServer logrotate feature can break - it cannot parse the size parameter.
1. Check disk space to validate the issue: df -h
2. Check what is causing the problem (see the example below):
find {/path/to/directory/} -type f -size +{size-in-kb}k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
ᵒ Specify the path and the size in KB
3. In order to determine the root cause, lower the log rotations and enable
compression (/etc/logrotate.conf)
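• With the placeholders filled in as an illustration – list files under /var/log larger
than roughly 100 MB:
  find /var/log -type f -size +102400k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'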
© 2012 Citrix | Confidential – Do Not Distribute
File System on Control Domain Full
4. In case of total loss (XAPI died): xe-toolstack-restart
5. Typically this issue is due to TCP offload freaking out and causing unnecessary
entries in the log files.
6. Run a script to disable TCP/Checksum Offload (see the sketch below)
ᵒ Disabling TCP offload and checksumming means that the main CPU handles all the
load which was previously handled directly by the network card
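• A minimal sketch of what such a script might do – this is not the actual
Citrix-provided script, eth0 is a placeholder, and the change does not persist
across reboots:
  ethtool -K eth0 tx off rx off            # disable TX/RX checksum offload
  ethtool -K eth0 tso off gso off          # disable segmentation offload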
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Unable To See NICs
• The XenServers are experiencing problems detecting their network cards. The
network cards are registered in the XenServer OS, however, they do not
display in the XenServer console.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Unable To See NICs
• The issue is caused by disconnecting the pool master while it has attached slave
servers. To resolve (see the sketch below):
1. Log into the local XenServer, open the file pool.conf located under
/etc/xensource, delete the contents of this file and replace them with
the word master.
2. Set the management console’s NIC to a valid IP/subnet
3. Run xe-toolstack-restart
4. Edit pool.conf again to read slave:<pool master IP>
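• A minimal sketch of steps 1, 3 and 4 run in dom0 on the affected host (set the
management NIC as in step 2 before restarting the toolstack); the pool master IP is
a placeholder:
  echo master > /etc/xensource/pool.conf                  # temporarily act as a master
  xe-toolstack-restart                                    # restart XAPI so the NICs re-register
  echo 'slave:192.0.2.10' > /etc/xensource/pool.conf      # restore the pointer to the real pool master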
© 2012 Citrix | Confidential – Do Not Distribute
Frozen VM
• My VM(s) Froze and now XenCenter refuses to respond because it simply
does not care about how troubled my life already is
© 2012 Citrix | Confidential – Do Not Distribute
Frozen VM
• This is typically caused by conflicting commands being sent through the XenCenter
console, resulting in erroneous metadata
• To force-reboot the VM (Like a Boss):
1. Find the target VM UUID: xe vm-list (Hint: pipe through | more when there
are too many VMs)
2. Note the UUID of your VM(s)
3. Run the command: xe vm-reset-powerstate uuid=<Machine UUID>
force=true
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Installation Fails
• I started installing XenServer on a bunch of machines, went out to a super
relaxing lunch only to come back to my XenServer installation stuck at 49%
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Installation Fails
• Some versions of XenServer do not include the firmware for the QLogic
QLA2100, QLA2200, QLA2300 and QLA2322 series adapters
1. Boot the host from the XenServer installation media
2. At the boot prompt, type: shell
3. You will be presented with a command prompt. Type the following:
rmmod qla2xxx; rmmod qlge
4. Type: exit
5. Install XS normally
© 2012 Citrix | Confidential – Do Not Distribute
Commands CheatSheet
• XenServer Host and Dom0: xentop
• Host
ᵒ xe-toolstack-restart to restart XAPI
ᵒ eject to eject physical CD from server.
ᵒ cat /etc/xensource-inventory to see your host information.
• Network
ᵒ ifconfig to see which interfaces are up for networking.
ᵒ ethtool -p eth0 60 to make the NIC flash for 60 seconds for identification.
ᵒ ethtool eth0 to check the status of the interface.
ᵒ ethtool -i eth0 to check the driver type and version of the NIC.
© 2012 Citrix | Confidential – Do Not Distribute
Commands CheatSheet
• Disk
ᵒ fdisk -l to view local disk information.
ᵒ df -h to see how much space is left on the root disk.
• Multipath
ᵒ multipath -ll to view the current multipath topology as presented by the control domain.
• VM
ᵒ xe vm-reboot vm=<VM Name> force=true to hard-reboot a VM.
• Logs
ᵒ xen-bugtool --yestoall to gather the logs for support.
ᵒ tail -f /var/log/messages to view events in the messages log.
© 2012 Citrix | Confidential – Do Not Distribute
XenServer Monitoring &
Troubleshooting
Lab 4 - Show Me Your Commands!
In this lab… 60 minutes
• Install Performance monitoring pack (new with XenServer 6.1)
• Configure Performance VM and run sample workloads (Discuss results)
• TAAS.CITRIX.COM – Upload server status reports
• (Advanced) – Break/Fix lab
© 2012 Citrix | Confidential – Do Not Distribute
Advanced XenServer Training
Questions, Yes?
Work better. Live better.