Call and Learn: VMware
Knowledge Sharing Session
3/16/2012
© Copyright 2012 Avanade Inc. All Rights Reserved.
The Avanade name and logo are registered trademarks in the US and other countries.
Foundation
• Virtualization uncouples resources from the underlying physical hardware they run on.
– It is easier to move servers around when they are not physically attached to one particular piece of hardware.
– This uncoupling allows for easier migration of systems, since there is no tie to a physical piece of equipment residing in one facility.
• Virtualization for servers has been around for some time.
– VMware capitalized on this space in the x86 server market.
• Virtualization for storage, and more recently networking, is becoming popular as well.
– Many of these moves are driven by the cloud computing market, which requires a full decoupling of physical infrastructure from the instances and applications that run on it.
What is VMware?
• VMware is a $4 billion software company owned mostly by EMC.
– EMC holds 79.8% of total outstanding shares of common stock and 97.3% of voting power due to Class B common stock.
• They develop a number of products, mostly in the virtualization and cloud space.
• Their key competitors are Microsoft & Citrix.
– They have everything but a server-based general-purpose Operating System (though they almost owned SUSE Linux, and give it away free with new license purchases).
• Both the former CEO (Paul Maritz) and one of the Co-Presidents (Tod Nielsen) came from Microsoft.
– The new CEO does not come from Microsoft.
Product List (Server Virtualization)
• vSphere/ESX/ESXi:
– Their flagship enterprise virtualization product that allows you
to run multiple virtual machines on one physical server.
– It is a Type 1 Hypervisor and is an Operating System itself.
– The names vSphere and ESX/ESXi are often used interchangeably.
• Server:
– Also allows you to run multiple virtual machines on one server, but is a Type 2 Hypervisor that runs on top of another OS, such as Windows or Linux.
• vCenter Server:
– Management platform for vSphere/ESX/ESXi
• vCloud Director:
– Self-Service Infrastructure as a Service (IaaS) solution to
provision virtual machines and virtual apps (vApps).
• For the purposes of this presentation, we’ll focus on
vSphere/ESX/ESXi
Product List (Desktop Virtualization)
• Workstation:
– Is used by IT professionals and developers to create multiple
virtual machines for testing purposes on their workstation
• Fusion:
– Is used by Mac users to create Windows virtual machines that
allow Windows programs to run as if they were natively
running within OS X.
• View:
– Is their desktop virtualization offering that allows multiple virtual
desktops to run on one physical server and be managed in
pools
• ThinApp:
– ThinApp is comparable to App-V: it packages applications into single virtual applications that are isolated from each other and from the operating system.
Its "claim to fame" is the ability to run a virtualized IE6 on Windows 7.
Product List (Management)
• vCenter Operations Management Suite
– Automated operations management using analytics and other intelligent sensors
– Includes:
  vCenter Configuration Manager – Change and configuration management across virtual and physical environments.
  vCenter Infrastructure Navigator – Automated discovery and dependency mapping of applications and infrastructure.
  vCenter Chargeback Manager – Cost modeling for virtual infrastructure resource usage.
• vCenter Orchestrator
– Automation and Orchestration for vSphere.
• vCenter Site Recovery Manager
– Disaster Recovery management and automated testing for virtual environments.
• vCenter Lab Manager
– Automated management of transient environments such as development, QA, training to provide
self-service for lab owners.
• vCenter Service Manager
– Service Management (Service desk, service catalog, incident & problem management)
• vShield
– App: Basic traffic monitoring and policy management for virtual machines. App with Data Security also includes Sensitive Data Discovery within the virtual workloads.
– Edge: Provides edge security such as firewall, VPN, load balancing, and NAT for virtual environments.
– Endpoint: Offloads antivirus and anti-malware agent processing to a dedicated virtual appliance.
• vFabric Application Performance Manager
– Application level dashboard to provide real-time application and transaction level detail.
Product Comparison (current versions)
VMware | Microsoft
vSphere (Type 1 Hypervisor) | Hyper-V
Workstation (Desktop) | Windows Virtual PC
vCenter Server | Hyper-V
vCloud Director | VMM Self-Service Portal
View | -
ThinApp | App-V
Configuration Manager | SC Configuration Manager
Infrastructure Navigator | SC Operations Manager
Chargeback Manager | -
Orchestrator (Orchestration) | SC Orchestrator
Site Recovery Manager | -
Service Manager (Service Desk) | SC Service Manager
vShield | -
Application Performance Manager | AVICode
Gartner Magic Quadrant
ESX/ESXi
• VMware ESX is the flagship server virtualization product from
VMware.
– ESX was first released in 2001, though usage did not pick up until ESX 2.5 was released in November 2004.
– ESXi 5 was released in August 2011.
– ESXi 5.1 was released in September 2012.
• ESX is an OS itself, and is installed on a 'bare-metal' server, that is, a server not running any other OS.
• ESX was originally composed of two key components: the service console, a Linux environment based on Red Hat Enterprise Linux, and the vmkernel.
– While this can get confusing: the Linux kernel boots first, starts the vmkernel, and the vmkernel then makes the Linux kernel the first virtual machine it runs.
– ESXi removes the Service Console.
• The vmkernel itself is not a Linux OS, though some of the
modules within the vmkernel are derived from Linux modules.
ESX/ESXi
• The vmkernel is the ‘secret sauce’ that ESX brings to the
virtualization space, as it interfaces to the hardware and the
virtual machines, allowing each virtual machine to believe it
has access to the physical resources.
– This vmkernel also implements some of the more advanced
features including VMotion, SVMotion, DRS, HA, VMFS, and
others.
• This approach, where the physical server runs the ESX OS itself and virtual machines, or guests, run on top of ESX, reduces overhead compared to Type 2 Hypervisors, which require an underlying OS to be installed with virtual machines created on top of that.
ESX/ESXi
• ESXi is the standard going forward for VMware ESX.
• In the ESXi model, the Service Console has been removed completely, leaving just the vmkernel.
– Some new modules have been added to the vmkernel to help manage it, including a bare-bones menu system for initial configuration and a local command console for troubleshooting.
• This has reduced the number of patches needed and the security risks within ESX.
• The vmkernel itself is under 100 MB, allowing systems to run ESXi from a USB flash drive.
– This helps to drive the virtualization process forward even
more, as the physical servers themselves are just compute
engines that contain no critical data and can be stateless.
• This also reduces the overhead used by ESX itself, providing more resources to the virtual machines themselves.
• All new ESX releases after 4.1 are based on this model.
Comparison of ESX vs. ESXi
(Diagram: the ESX architecture, with its Service Console, compared to the slimmer ESXi architecture.)
ESX/ESXi
• When you purchase a new physical system to run ESXi,
the first step is to install ESXi.
– You can purchase systems direct from most manufacturers
already running ESXi on a USB stick
• The ESXi install itself takes about 6 minutes and requires you to answer a few questions:
– Which disk to install ESXi on.
– Root password.
– Once it's installed, there are some basic configuration steps to complete:
  Configure Management Network – the initial network needed to manage ESXi with a GUI later.
– Once this is complete, you're on the way to creating virtual machines, or guests (see the scripted-install sketch below).
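For unattended builds, those same few answers can be scripted. A minimal sketch of an ESXi 5.x kickstart file, assuming the standard scripted-install directives; the password and network values are placeholders:

    vmaccepteula
    # Install to the first detected disk, overwriting any existing VMFS volume
    install --firstdisk --overwritevmfs
    # Root password (placeholder)
    rootpw MySecretPassword
    # Management network on the first NIC; static addressing shown as an example
    network --bootproto=static --device=vmnic0 --ip=192.168.1.10 --netmask=255.255.255.0 --gateway=192.168.1.1 --hostname=esxi01.example.local
    reboot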
Virtual Machines
• Virtual machines are the virtual instances, guests, or VMs (depending on what you like to call them) that run on the server.
• Virtual machines are defined with some basic properties.
– There's the virtual machine name, which creates a folder on a datastore containing some files, including a .vmdk and a .vmx file.
  When a virtual machine is renamed, the underlying files are not renamed.
– Another key property of the virtual machine is the set of resources assigned. This includes memory, number of vCPUs (virtual CPUs), how many hard drives, how many network adapters, and some other options like a floppy drive or CD-ROM.
Virtual Machines
• For each of the properties of a VM, there are some options.
– Peripherals: floppy & CD-ROM.
  If the VM doesn't need access to a floppy or CD-ROM, do not add them to the VM, as they take a small amount of resources away.
• Network adapters: VMs can have one or more network cards with different settings, including the type of driver.
– The network adapter defines one virtual network card, the network it's connected to, and its MAC address.
– For systems that require more virtual network cards, you can keep adding network adapters like on any regular physical server.
• Each VM also needs a hard disk to store its information, like a physical server.
– When a new VM is created, it needs a place to store configuration information, the OS that will be installed, and any data volumes.
  When a new hard disk is created, you tell ESX where the data will go, which will be stored on disk somewhere at a particular file size.
  These disks are also commonly called "vmdks" (see the sample configuration below).
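To make this concrete, here is an illustrative fragment of a .vmx configuration file showing the properties discussed above; the names and values are made up for the example:

    displayName = "web01"
    guestOS = "windows7srv-64"
    memsize = "4096"
    numvcpus = "2"
    scsi0.present = "TRUE"
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "web01.vmdk"
    ethernet0.present = "TRUE"
    ethernet0.virtualDev = "vmxnet3"
    ethernet0.networkName = "VM Network"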
Virtual Machines
• CPU:
– CPU is a fairly simple choice: you select anywhere from 1 to 64 vCPUs for your virtual machine.
– You should be as conservative as possible with this configuration; if your VM only needs 1 processor, do not give it 2, as the scheduling across multiple processors can consume more processing power than it provides to the VM.
  This requires proper sizing, as just adding more processors to a system may actually cause performance issues rather than solve them.
– A general rule of thumb is 4:1 vCPUs to physical CPUs.
• Memory:
– Select any amount of memory that the system can use, up to 1TB per VM.
• For both memory and CPU, you can over-allocate resources, like an airline that oversells its seats.
– If your physical server has 32GB of memory, you can allocate 128GB (or more) to virtual machines.
  Whether you see performance issues or not depends on how much of that memory is used and how it's used.
– ESX does a good job of managing memory, and will share memory if VMs are using the same data set, or will compress memory if need be.
– The same applies to CPUs.
– In both cases, over-provisioning needs to be well managed or it will cause performance issues (see the arithmetic sketch below).
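A quick back-of-the-envelope check of the over-allocation described above, as a small Python sketch; the host and VM numbers are made-up examples:

    # Illustrative over-commitment arithmetic (all numbers are examples).
    physical_cores = 16        # pCPUs in the host
    physical_ram_gb = 32

    vms = [{"name": f"vm{i}", "vcpus": 2, "ram_gb": 8} for i in range(16)]

    total_vcpus = sum(vm["vcpus"] for vm in vms)
    total_vram = sum(vm["ram_gb"] for vm in vms)

    # Rule of thumb from this slide: keep vCPU:pCPU at or below 4:1.
    print(f"vCPU:pCPU ratio = {total_vcpus / physical_cores:.1f}:1")  # 2.0:1
    print(f"RAM over-commit = {total_vram / physical_ram_gb:.1f}x")   # 4.0x (128GB on a 32GB host)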
Storage
• All VM information is stored in two key files: one storing configuration information, the other storing data.
– The configuration file, the .vmx, stores the configuration of the VM, including the name, options, settings, etc.
  There are other files, including the .nvram, which stores BIOS options like boot order.
– The data file, the .vmdk, stores the VM's data.
– Together these files make up the VM itself, and they can be moved to another ESX host if need be.
  They can also be exported and converted into a number of different formats.
• These files are stored on a shared storage system, using technologies like Fibre Channel (FC), iSCSI, NFS, etc.
– This ensures multiple ESX hosts can access the running VMs, enabling features such as HA, vMotion, DRS, and more.
• These files are typically stored on volumes formatted with the "VMFS" file system. There are different versions of VMFS; the latest is 5.
– Basically, VMFS allows any one ESX host to lock an individual file, or the group of files that comprise a VM (see the sketch below).
  This differs from traditional file systems that lock an entire volume rather than an individual file.
– NFS volumes are not formatted as VMFS volumes.
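The locking behavior can be pictured with a small model. A hypothetical Python sketch (not VMware's implementation) of per-file locking, which is what lets many hosts share one datastore safely:

    # Hypothetical model: each host locks only the files of the VMs it runs.
    class Datastore:
        def __init__(self, name):
            self.name = name
            self.locks = {}                     # file path -> host holding the lock

        def acquire(self, host, vm_files):
            # Fail if any file of this VM is already locked by another host.
            for f in vm_files:
                owner = self.locks.get(f)
                if owner and owner != host:
                    raise RuntimeError(f"{f} is locked by {owner}")
            for f in vm_files:
                self.locks[f] = host

    ds = Datastore("datastore1")
    ds.acquire("esx01", ["web01/web01.vmx", "web01/web01.vmdk"])
    ds.acquire("esx02", ["db01/db01.vmx", "db01/db01.vmdk"])  # fine: different files
    # ds.acquire("esx02", ["web01/web01.vmx"])  # would raise: esx01 holds that lock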
Networking
• Traditional physical servers are connected to the specific network they need access to, whether it's an internal secure network, a DMZ, etc.
– The server is patched into a particular network port, which is configured to be on the appropriate network, and we're done.
• In the virtual world, one physical server may support many networks. Patching in one cable per network is not practical when there could be hundreds of possible networks.
• For virtualization we create network configurations that allow a physical ESX host to access many networks, using VLANs, port aggregation, and trunking.
– In a typical configuration, each network in a site is defined by a VLAN.
– Our physical ESX host will use physical connections that are set to trunk mode, which allows access to multiple VLANs.
– Considering that each physical host could support many VMs, a typical configuration bonds multiple physical network connections into one logical connection using aggregation protocols like PAgP and LACP.
– This configuration allows us to define, per VM, which VLAN it belongs to.
  The ESX kernel then tags all traffic that VM sends with its proper 802.1Q VLAN tag (see the sketch below).
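To show what "tagging" means at the frame level, here is a minimal Python sketch that inserts an 802.1Q header (TPID 0x8100 plus a 12-bit VLAN ID) into an Ethernet frame. This follows the general 802.1Q layout; it is not ESX code:

    import struct

    def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
        """Insert an 802.1Q tag after the destination and source MACs (first 12 bytes)."""
        assert 0 <= vlan_id < 4096, "VLAN ID is a 12-bit field"
        tpid = 0x8100                      # 802.1Q Tag Protocol Identifier
        tci = (priority << 13) | vlan_id   # 3-bit PCP, 1-bit DEI (0), 12-bit VID
        return frame[:12] + struct.pack("!HH", tpid, tci) + frame[12:]

    # A made-up frame: 6-byte dst MAC + 6-byte src MAC + EtherType + payload.
    untagged = bytes(12) + b"\x08\x00" + b"payload"
    tagged = tag_frame(untagged, vlan_id=100)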
Management
• vSphere/ESX/ESXi Management
– During the installation of ESX an IP address is defined to use for management.
– This allows use of a GUI, the vSphere Client, to manage this ESX host.
  If multiple hosts are being managed, you would also install server software called vCenter Server, which manages multiple hosts from one console.
  In 5.1 the primary client is a web client rather than an installed GUI.
– Within the vSphere Client a connection is made to the management IP of the ESX host, using the root username and password defined during the installation.
• This management console provides advanced configuration for networking, storage, VMs, and more.
• When you first log on you'll notice there are some configuration options specific to the ESX host, and some specific to VMs.
– It's always important to remember whether you're making changes to the entire host or to VMs.
• In an environment with multiple ESX hosts, other groupings are created to enable some advanced features and to more easily manage multiple hosts as one logical grouping (see the sketch below).
– Datacenters allow multiple ESX hosts to be grouped into one logical datacenter. Within that datacenter we can apply permissions, alerts, and other options.
– Within datacenters you can create clusters, which are sub-groupings that enable some special features within ESX such as HA, DRS, and DPM.
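The groupings described above form a simple containment hierarchy: a Datacenter contains Clusters, Clusters contain Hosts, and Hosts run VMs. A Python sketch of that model (illustrative only, not a VMware API):

    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        vms: list = field(default_factory=list)

    @dataclass
    class Cluster:                       # enables cluster-level features
        name: str
        ha_enabled: bool = True
        drs_enabled: bool = True
        hosts: list = field(default_factory=list)

    @dataclass
    class Datacenter:                    # permissions and alerts apply at this level
        name: str
        clusters: list = field(default_factory=list)

    dc = Datacenter("NYC", clusters=[
        Cluster("Prod", hosts=[Host("esx01"), Host("esx02")]),
    ])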
Key Features
• VMotion
– VMotion is a technology that allows a running VM to move from one physical ESX host to another in the same datacenter cluster with NO disruption.
– ESXi 5.1 supports moving a VM from any physical ESX host to any other ESX host, regardless of datacenter cluster or shared storage.
  If you were streaming the Super Bowl in HD from a VM and wanted to move it to another ESX host without disrupting it, VMotion is the technology you would use.
  In the ESX design, where the VM is really a few files plus what is in memory, VMotion copies the data in memory to the other ESX host, then quickly changes which ESX host has access to the .vmx and .vmdk files. That's it (see the sketch below).
• In practice, this can reduce outages for hardware maintenance to zero.
• Storage VMotion (SVMotion) is VMotion for storage.
– Traditionally, moving data from one storage array to another is a complex task requiring significant downtime.
– SVMotion allows running VMs to be moved from one storage system to another with no disruption.
  Just like VMotion, SVMotion copies the .vmdk file to another array, and then tells the ESX host to use the new file.
• This reduces to zero the downtime required to upgrade the storage supporting ESX.
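The memory copy behaves like an iterative "pre-copy": copy all pages while the VM keeps running, re-copy the pages dirtied in the meantime, and repeat until the remaining set is small enough for a very brief stun-and-switch. A simplified Python sketch of that loop (not VMware's actual algorithm):

    def precopy_migrate(total_pages=100_000, dirty_fraction=0.05, threshold=500):
        """Return the number of memory pages copied in each pre-copy pass."""
        passes = []
        remaining = total_pages                 # pass 1 copies all of memory
        while remaining > threshold:
            passes.append(remaining)
            # Pages dirtied while a pass runs scale with the pass length,
            # so each pass has less to send than the one before it.
            remaining = int(remaining * dirty_fraction)
        passes.append(remaining)                # final copy during the brief stun
        return passes

    print(precopy_migrate())  # [100000, 5000, 250] -> switch hosts after pass 3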
Key Features
• Building on VMotion are two other technologies: DRS and HA.
• HA, or High Availability, is a feature that keeps track of which ESX hosts are online.
– When an ESX host fails, all the VMs running on that host also fail.
– HA detects that the ESX host is down, and then restarts all the VMs that were running on it on other ESX hosts in the same cluster.
– This significantly reduces downtime for hardware-related issues that take down an ESX host in your cluster and its virtual machines.
• DRS is a feature that load balances virtual machines running within the ESX environment.
– Traditionally, with some number of ESX hosts in a cluster, any one of them could be overloaded with VMs.
– DRS looks at each ESX host in the cluster, then uses VMotion to move VMs around the cluster to balance out the performance of each host (see the sketch below).
– This can be configured in multiple ways, but a typical option is automatic recommendations and balancing.
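A toy version of the balancing idea: compare host loads and "vMotion" a VM from the busiest host to the least busy until the spread is acceptable. Purely illustrative; the real DRS algorithm weighs many more factors:

    def rebalance(hosts, max_spread=10, max_moves=20):
        """hosts: dict of host -> list of (vm, load%). Greedy and illustrative only."""
        load = lambda h: sum(pct for _, pct in hosts[h])
        for _ in range(max_moves):                        # cap to avoid ping-ponging
            busiest = max(hosts, key=load)
            idlest = min(hosts, key=load)
            if load(busiest) - load(idlest) <= max_spread:
                break
            vm = min(hosts[busiest], key=lambda p: p[1])  # move the smallest VM first
            hosts[busiest].remove(vm)                     # "vMotion" it
            hosts[idlest].append(vm)
        return hosts

    cluster = {"esx01": [("web", 40), ("db", 35)], "esx02": [("app", 15)]}
    print(rebalance(cluster))  # db moves to esx02; host loads become 40 vs 50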
vSphere Editions
Edition | Standard | Enterprise | Enterprise Plus
vCPU Limit | 8 | 32 | 64
vRAM Entitlement | N/A | N/A | N/A
Features licensed by edition (higher editions add more of these): vMotion/HA, Hot Add, vShield Zones, FT, VAAI, Storage vMotion, DRS/DPM, I/O Control, Distributed Switch, Host Profiles, Auto Deploy, Storage DRS, Profile-Driven Storage
Hypervisor Comparisons
• Comparison of the vSphere 5.1 Enterprise Plus edition to Hyper-V 2008 R2 SP1, plus information available on Hyper-V 3.0 coming in Windows Server 2012.
– Hyper-V 3.0 details are what is publicly available, and it is still early in the release cycle.
vSphere 5 vs Hyper-V Scalability
Maximum | vSphere 5.1 | Hyper-V 3.0 | Hyper-V 2.0
Max RAM per host | 2TB | 2TB | 1TB
Logical processors per host | 160 | 160 | 64
Max nodes per cluster | 32* | 63 | 16
Max guest vCPU | 64 | 64 | 4
Max vRAM per VM | 1TB | 1TB | 64GB
Max guest virtual disk size | 2TB | 64TB | 2TB
Max VMs per cluster | 4000 | 4000 | 1000
Max powered-on VMs per host | 512 | 1024 | 384
* vSphere 5 officially supports only 32 nodes per cluster, but can scale to 64.
vSphere 5 vs Hyper-V Memory Management
Feature | vSphere 5.1 | Hyper-V 3.0 | Hyper-V 2.0
RAM over-commit | Yes | Yes | Yes
Memory ballooning | Yes | Yes | Yes
Transparent page sharing | Yes | No | No
Memory compression | Yes | No | No
Guest memory resource shares | Yes | No | No
vSphere 5 vs Hyper-V Storage
Feature | vSphere 5.1 | Hyper-V 3.0 | Hyper-V 2.0
Block-level storage support | iSCSI/FC | iSCSI/FC | iSCSI/FC
File-level storage support | NFS | SMB | No
Storage QoS | Yes (SIOC) | No | No
Storage load balancing | Yes (Storage DRS) | No | No
Profile-driven storage | Yes | No | No
Storage migration | Yes | Yes | No
Offload data transfer to array | Yes (VAAI) | Yes (ODX) | No
In-guest fibre HBA/SCSI disk support | Yes | Yes | No
Thin-provisioned virtual disks | Yes | Yes | Yes
Changed block tracking | Yes | Yes | Yes
vSphere 5 vs Hyper-V Networking
Feature | vSphere 5.1 | Hyper-V 3.0 | Hyper-V 2.0
Native NIC teaming | Yes | Yes | No
3rd-party virtual switch | Yes (1000V) | Yes (1000V) | No
Distributed virtual switch (native) | Yes | No | No
Network QoS | Yes (NIOC) | Yes (min/max) | No
Network monitoring | Yes (vShield) | No | No
Networking offload | Yes | Yes | Yes
3rd-party network monitoring | Yes | Yes | No
vSphere 5 vs Hyper-V Availability
Capability | vSphere 5.1 | Hyper-V 3.0 | Hyper-V 2.0
Failover of VMs if a host fails | HA | Failover Clustering | Failover Clustering
Migration of running VMs | vMotion | Live Migration | Live Migration
Uninterrupted failover of a VM when its host fails | Yes (FT) | No | No
Disaster recovery | SRM/vMotion | Hyper-V Replica | No
Key Takeaways
• Memory Management
– One of the largest performance differentiators is the way vSphere 5 manages memory, allowing for a larger consolidation ratio (see the sketch below).
  Transparent Page Sharing (TPS) allows multiple running virtual machines to share the same set of physical memory pages. As an example, if 20 virtual machines load the same .dll or other data, vSphere keeps only one copy.
  Memory ballooning lets a driver inside each virtual machine "inflate" its memory usage and report back to the hypervisor that the memory pages in the balloon can be reclaimed. This is only used when the host is under memory pressure.
  Memory compression allows memory contents to be compressed using CPU cycles rather than paging memory out to disk. This only occurs when host memory is overcommitted.
  Hypervisor swapping is always a "bad" operation; however, vSphere 5 mitigates it by providing an option to swap to an SSD in each host to reduce the latency of paging.
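The page-sharing idea can be sketched in a few lines: hash each page's contents and keep one physical copy per unique hash. A simplified Python model, ignoring the copy-on-write and safety checks the real feature needs:

    import hashlib

    def shared_footprint(vm_pages):
        """vm_pages: one list of page contents (bytes) per VM.
        Returns (raw page count, resident pages after sharing)."""
        unique = set()
        raw = 0
        for pages in vm_pages:
            for page in pages:
                raw += 1
                unique.add(hashlib.sha1(page).digest())  # one copy per unique page
        return raw, len(unique)

    # Example: 20 VMs each load the same 100 "DLL" pages plus 10 private pages.
    dll_pages = [bytes([i]) * 4096 for i in range(100)]
    vms = [dll_pages + [f"vm{n}-private-{j}".encode() for j in range(10)] for n in range(20)]
    print(shared_footprint(vms))  # (2200, 300): 2200 raw pages collapse to 300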
• Resource Contention
– Another large performance differentiator is the set of I/O control technologies built into vSphere.
– During normal operation all systems have equal access to resources: network, compute, and memory.
– However, there are numerous ways to tune which virtual machines have priority, and to ensure no one virtual machine takes all the available resources when there is contention (see the sketch below).
  Storage I/O Control & Network I/O Control dynamically regulate the amount of storage or network bandwidth any one virtual machine can take, based on the number of shares assigned.
  Reservations guarantee the minimum resources a virtual machine needs.
  Shares allow certain virtual machines to receive more priority in case of contention.
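Under contention, shares split the remaining capacity proportionally after reservations are met. A hedged Python sketch of that arithmetic (not the actual vSphere scheduler):

    def allocate(capacity, vms):
        """vms: list of dicts with 'name', 'reservation', and 'shares'.
        Reservations are granted first; leftover capacity splits by shares."""
        reserved = sum(vm["reservation"] for vm in vms)
        assert reserved <= capacity, "reservations are guaranteed minimums"
        leftover = capacity - reserved
        total_shares = sum(vm["shares"] for vm in vms)
        return {vm["name"]: vm["reservation"] + leftover * vm["shares"] / total_shares
                for vm in vms}

    # Example: 1000 Mbps of storage bandwidth under contention.
    print(allocate(1000, [
        {"name": "prod-db", "reservation": 200, "shares": 2000},
        {"name": "test-vm", "reservation": 0,   "shares": 500},
    ]))  # {'prod-db': 840.0, 'test-vm': 160.0}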
SCVMM 2008 R2
SCVMM 2008 Features:
• Hypervisor Management – Hyper-V, cluster integration, VMware
• Host Configuration
• Library Management
• Virtual Machine Creation
• Conversions: P2V and V2V
• Delegation and Self Service
• Intelligent Placement
• Deployment and Storage
• Monitoring and Reporting
• Automation with PowerShell
• Performance and Resource Optimization (PRO)
New in SCVMM 2008 R2:
• Manage Windows Server 2008 R2 Hyper-V and the FREE Microsoft Hyper-V Server 2008 R2
• Live Migration
• Multiple VMs per LUN using CSV
• SAN-related enhancements, Quick Storage Migration
• Maintenance mode
• VDI integration
• Network optimizations
Benefits of SCVMM 2012 with vSphere
• Can create and manage virtual machines on ESX hosts
• Importing templates no longer requires copying the .vmdk file to the VMM library.
– Deploying ESX virtual machines from a template is much quicker than in VMM 2008 R2.
• VMM now supports distributed switch functionality.
– VMM no longer automatically creates port groups for network equivalency.
• No longer need to enable root SSH access.
– VMM now uses HTTPS for all file transfers.
• VMM command shell is common across all supported
hypervisors
• VMM supports vMotion and Storage vMotion
• ESX hosts can be placed in and out of maintenance mode using the VMM console.
• VMM supports hot add and removal of virtual hard disks on ESX
virtual machines
Limitations with SCVMM 2012
• No support for ESXi 5.0 yet (coming in SP1).
– Also no support for ESXi 5.1 (not currently in SP1).
• Advanced management, such as configuration of port groups, standard and distributed virtual switches, vMotion, Storage vMotion, and storage volumes, is done within vCenter Server.
• When templates are imported into the VMM Library they are converted to thick format.
• Update management is still done via vSphere Update Manager.
• Deployment of bare-metal computers to ESXi is not supported.
• Cluster creation is not supported.
Datacenter Design
Datacenter Network Design
• Traditional network design in a datacenter consists of a core, aggregation, and access layer. Each has its own requirements.
• Servers usually connect to the access layer.
• Some configurations physically separate the internal access layer from DMZ/secure access layers.
• This design also encompasses numerous virtual networks within the physical network construct.
• Taken together, this adds complexity when virtualizing systems, as one physical server may need to connect to many networks depending on which virtual machines it is currently running.
• Because virtual machines can move from one physical server to another, each physical server's network connectivity becomes extremely critical.
ESX/ESXi Requirements
• For a typical ESX deployment there are two different network requirements.
– First, there are management requirements, including connectivity for VMotion, vCenter Server, HA, DRS, and FT if in use.
– Second, there is connectivity for the virtual machines themselves.
• If you add up the requirements for each host, you realize how many network connections are needed.
– Making this more complicated is redundancy, which is typically N+1 for each connection to ensure the loss of one connection does not cause an outage.
– This becomes even more complicated in blade environments, as there are fewer physical connections on board.
– Some designs combine the VMotion and vCenter Server connections to reduce the requirements.
– Other designs provide one pool of physical networks for all network requirements, especially with larger 10Gb connections.
• The loss of a management connection will reduce functionality, but should not cause any outages.
– There are options in vCenter Server that can cause the loss of a management connection to lead to an outage.
• VM connectivity can be tricky in a typical environment with lots of internal networks.
– It's always important to work with network designers to ensure the correct configuration, as this can be very complex.
– The network connections used for the VMs themselves should always have redundancy. Those connections are usually configured to support all networks in the datacenter.
VLAN & Trunking
• In the network world, one of the best ways to split one physical network into multiple logical networks is to define "VLANs" (Virtual Local Area Networks).
– Each VLAN is a separate logical network that segregates traffic from other networks.
– A router is needed to route traffic from one VLAN to another, the same as if they were different physical networks.
• With traditional servers, the physical switch port they are attached to is configured for one particular VLAN.
– This allows the switch to "tag" all the data leaving the port with the proper VLAN tag, so the switches and routers know where to send the data.
• In the virtual world this doesn't work, since the physical server needs connectivity to every VLAN, or at least a large subset.
• In the networking world, a physical port that needs to accept traffic from multiple VLANs is called a "trunk" port.
– A trunk port is designed to allow the device attached to the physical port to "tag" data with the correct VLAN.
– ESX has a way to "tag" traffic for each VM based on which network the VM is connected to (see the sketch below).
• Depending on your network configuration you may need multiple trunks if there are security restrictions on which physical ports can connect to which networks.
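The access-vs-trunk distinction fits in a few lines. A toy Python model of the forwarding check a switch port applies (illustrative only):

    def port_accepts(port, frame_vlan):
        """port: {'mode': 'access'|'trunk', 'vlan': int, 'allowed': set of ints}.
        Access ports carry exactly one VLAN; trunk ports accept tagged
        frames for any VLAN on their allowed list."""
        if port["mode"] == "access":
            return frame_vlan == port["vlan"]
        return frame_vlan in port["allowed"]

    esx_uplink = {"mode": "trunk", "allowed": {10, 20, 30}}  # carries many VLANs
    app_server = {"mode": "access", "vlan": 10}              # pinned to one VLAN

    print(port_accepts(esx_uplink, 20))  # True: the trunk carries VLAN 20
    print(port_accepts(app_server, 20))  # False: this access port is VLAN 10 only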
Basic Virtual Switches
• After the Layer 1 physical connectivity configuration is complete, it's time to configure virtual switches.
o These are just like regular switches in that there are a number of options depending on the network design in place.
• The virtual switches are usually named vSwitch0 and up.
• In the virtual switch you add however many physical NICs are part of the configuration.
o If we're configuring an EtherChannel of three switch ports, then we should add those three physical connections (pNICs) to the vSwitch.
• Next on the list is defining the networks, or VLANs, that are allowed on this virtual switch.
o This is based on the VLANs that the trunk is carrying.
o When VLANs are defined on a virtual switch, the virtual switch can "tag" traffic with the different VLAN tags.
• There are other configuration options for each virtual switch, such as load balancing across multiple physical network cards and other advanced options (see the sketch below).
• A virtual switch is in many ways like a physical switch: traffic goes from the VM to the switch, which then figures out how to forward it and tags it with the right VLAN ID.
• There are also options for "shaping" the available bandwidth and some other fine tuning.
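Bonding several pNICs means each VM's traffic has to be pinned to one uplink at a time. A Python sketch of the "route based on originating virtual port ID" idea, one of the vSwitch load-balancing policies, reduced here to a simple modulo (illustrative):

    def pick_uplink(virtual_port_id, uplinks):
        """Map a VM's virtual switch port to one physical NIC.
        Deterministic, so a VM's traffic stays on one uplink unless it fails."""
        return uplinks[virtual_port_id % len(uplinks)]

    uplinks = ["vmnic0", "vmnic1", "vmnic2"]  # three pNICs in the vSwitch
    for port in range(5):
        print(port, "->", pick_uplink(port, uplinks))
    # 0 -> vmnic0, 1 -> vmnic1, 2 -> vmnic2, 3 -> vmnic0, 4 -> vmnic1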
Advanced Switching
• vDistributed Switch (vDSwitch)
– With the advent of ESX 4.x, this is a new virtual switch option.
– With standard virtual switches, every ESX host has its own collection of virtual switches.
  If these ESX hosts are in one cluster or other grouping, then each virtual switch has to have an identical configuration.
– Distributed switches allow you to define one switch that is distributed across multiple hosts.
– The configuration is a bit more complex initially, but ongoing maintenance is easier.
– A distributed switch has some basic configuration, along with a list of the ESX hosts that will be part of the distributed switch and which physical ports on those hosts it can use.
• Nexus 1000V
– Cisco has also introduced a "virtual" switch of their own, called the Nexus 1000V.
– The goal of this design is to create a virtual switch that has the same capabilities and CLI as any other Cisco physical switch.
– This also allows the networking configuration to be administered easily by existing networking staff, as the Nexus 1000V has the same administration capabilities as any other Nexus switch.
• There are other features available in the Nexus 1000V that are not available in the traditional vSwitch or vDSwitch.
Screenshots
Site Recovery Manager
• VMware offers an application for improving the disaster recovery process for virtual environments.
• Site Recovery Manager (SRM) is a platform that enables a more efficient, and more effective, DR solution to resolve the challenges we have in our environments.
• SRM creates run books that enable Production environments to automatically fail over to the DR site when a disaster is declared.
• SRM simplifies DR processes around each of the following components:
– Setup – The setup of any environment to be protected is completed within the virtual environment. This allows any virtual machine to easily be part of a DR plan with minimal incremental cost.
– Testing – The DR plan can be tested on a scheduled basis within an isolated environment at the recovery site without any impact to Production.
– Failover – Failover to the DR site for any protected application occurs literally by hitting the big red button.
– Automation – The entire failover process, including every step required to bring up the servers in the DR site, is put into an automated run book. When the disaster is declared, the run book runs through every step without any intervention. This single button can kick off recovery for every application (see the sketch below).
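A run book is essentially an ordered list of steps executed without intervention once the plan is initiated. A toy Python sketch of the priority-ordered recovery described above (the step and VM names are hypothetical):

    # Illustrative run-book executor; each step completes before the next begins.
    RECOVERY_PLAN = [
        ("attach replicated virtual disks", None),
        ("power on high-priority VMs", ["dc01", "sql01"]),
        ("power on normal-priority VMs", ["app01", "app02"]),
        ("power on low-priority VMs", ["test01"]),
    ]

    def run_plan(plan):
        for step, vms in plan:
            targets = f" ({', '.join(vms)})" if vms else ""
            print(f"[run book] {step}{targets}")

    run_plan(RECOVERY_PLAN)  # one button press drives every step in order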
The Current State of (Physical) DR
Tier | RPO | RTO | Cost
I | Immediate | Immediate | $$$
II | 24+ hrs. | 48+ hrs. | $$
III | 7+ days | 5+ days | $
• DR services are tiered according to business needs.
• Physical DR is challenging:
– Maintain identical hardware at both locations
– Apply upgrades and patches in parallel
– Little automation
– Error-prone and difficult to test
Advantages of Virtual Disaster Recovery
• Virtual machines are portable
• Virtual hardware can be automatically configured
• Test and failover can be automated (minimizes human
error)
• The need for idle hardware is reduced
• Costs are lowered, and the quality of service is raised
SRM Recovery Plan
Recovery plan flow (diagram): VM Shutdown → High Priority VM Shutdown → Attach Virtual Disks → High Priority VM Recovery → Normal Priority VM Recovery → Low Priority VM Recovery → Post-Test Cleanup → Virtual Disk Reset
SRM Recovery Plans:
– turn manual BC/DR run books into an automated process
– specify the steps of the recovery process in VirtualCenter
– provide a way to test your BC/DR plan in an isolated environment at the recovery site without impacting the protected VMs in the protected site
Testing a SRM Recovery Plan
• SRM enables you to 'Test' a recovery plan by simulating a failover of protected VMs with zero downtime to the protected VMs in the protected site.
Managing Virtual Machines
• Adding new ESX hosts to inventory
• Creating new virtual machines
• Deploying from templates
• Updating templates
• Managing snapshots
• Using the Customization Wizard
Certification
• For folks interested in getting certified, the base-level certification is called VMware Certified Professional (VCP).
– In order to become a VCP you have to both pass the VCP exam and take one of the qualifying courses:
  VMware vSphere: Install, Configure, Manage
  VMware vSphere: Fast Track
• There are now three certification tracks: desktop, datacenter virtualization, and cloud.
– Each track has a VCP exam, a VCAP exam, and an equivalent VCDX.
– VCDX is a certified design expert credential and is a board-level certification where you create and defend a design in front of a board.