vSphere 4.1 – What's New Technical Highlights
Confidential
© 2010 VMware Inc. All rights reserved
[Diagram by Jon Busshart: building a virtualization practice, layered from your business and your current practice (system, SAN, and LAN technologies), through the datacenter OS/hypervisor (your future investment in building a virtualization practice), up to provisioning, network services, and management (beyond server consolidation), plus additional opportunities: business continuity and disaster recovery, virtualization of the desktop, and green computing]
Key Solutions / Use Cases
• Server Consolidation and Containment – Eliminate server sprawl by deploying systems into virtual machines.
• Infrastructure Provisioning – Reduce the time to provision new infrastructure to minutes with sophisticated automation capabilities.
• Business Continuity – Reduce the cost and complexity of business continuity by encapsulating entire systems into files that can be replicated and restored onto any target server.
• Test and Development – Rapidly provision and re-provision test and development servers; store libraries of pre-configured test machines.
• Enterprise Desktop – Secure unmanaged PCs, or provide standardized enterprise desktop environments hosted on servers.
• Legacy Application Re-hosting – Migrate legacy operating systems and software applications to virtual machines running on new hardware for better reliability.
Agenda
Memory – Memory Compression
Network – Network I/O Control, Load-Based Teaming, IPv6, Performance
Storage – Storage I/O Control, VAAI, I/O Metrics, iSCSI Offload enhancements
ESXi – Deployment Methods, Tech Support Mode enhancements
HA & FT – HA Diagnostics & Reliability, FT enhancements, vMotion enhancements
DRS & DPM – DRS Host Affinity, DPM enhancements
Management – vCenter Server & vSphere Platform enhancements
Memory Compression
Description
A new hierarchy for VMware's memory-overcommit technology (a key VMware differentiator)
Benefits
• Optimized use of memory (freeing up space as needed)
• Safeguard for using the memory overcommit feature with confidence
• Reclaims performance
Beta Feedback
"Great for memory over-subscription."
Proof Point
1,000x faster than swap-in!
[Diagram: virtual memory (virtual machine) mapped to physical memory (physical host)]
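The deck states the hierarchy but shows no code; as a minimal sketch of the idea, a reclaimed page stays in RAM in compressed form only if it compresses well enough, and otherwise falls back to host-level swap. The 50% threshold and function names below are illustrative assumptions, not the product's internals:

```python
import os
import zlib

PAGE_SIZE = 4096             # bytes in one guest memory page
COMPRESSION_THRESHOLD = 0.5  # assumption: keep a page only if it shrinks to half size or less

def reclaim_page(page: bytes) -> str:
    """Decide whether a page can stay in the compression cache
    or must fall back to host-level swap."""
    if len(zlib.compress(page)) <= PAGE_SIZE * COMPRESSION_THRESHOLD:
        return "compressed"  # stays in RAM; restoring it is far faster than swap-in
    return "swapped"         # last resort: page goes to the host swap file on disk

print(reclaim_page(bytes(PAGE_SIZE)))       # zero-filled page -> "compressed"
print(reclaim_page(os.urandom(PAGE_SIZE)))  # random page -> "swapped"
```

The design point is the proof point above: decompressing a page in RAM is orders of magnitude faster than reading it back from a swap file on disk.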
Agenda
Network – Network I/O Control, Load-Based Teaming, IPv6, Performance
Network Traffic Management—Emergence of 10 GigE
[Diagram: with 1 GigE pNICs, each traffic type (vMotion, iSCSI, NFS, FT logging, TCP/IP) leaves the vSwitch on its own dedicated NIC; with 10 GigE pNICs, all traffic types converge onto two shared NICs]
1 GigE pNICs
• NICs dedicated to some traffic types, e.g. vMotion and IP storage
• Bandwidth assured by dedicated physical NICs
10 GigE pNICs
• Traffic typically converged onto two 10 GigE NICs
• Some traffic types and flows could dominate others through oversubscription
• Traffic types compete: who gets what share of the vmnic?
Network I/O Control—Configuration from vSphere Client
A vDS-only feature, configured per preconfigured traffic class:
• Limits – maximum bandwidth for a traffic class/type
• Shares – guaranteed minimum service level
e.g. VM traffic in this example:
• limited to a maximum of 500 Mbps (aggregate of all VMs)
• with a minimum of 50/400 of pNIC bandwidth (50 / (100+100+50+50+50+50))
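To make the shares arithmetic concrete, here is a minimal sketch. The 50-share VM class and the 400-share denominator come from the example above; which other traffic classes hold the remaining shares is an illustrative assumption:

```python
def netioc_bandwidth(shares: dict[str, int], active: set[str],
                     pnic_mbps: int = 10_000) -> dict[str, float]:
    """Each active traffic class is guaranteed shares/total_shares
    of the physical NIC's bandwidth under contention."""
    total = sum(shares[t] for t in active)
    return {t: pnic_mbps * shares[t] / total for t in active}

# Share values chosen to reproduce the slide's denominator of 400.
shares = {"vm": 50, "vmotion": 100, "iscsi": 100, "nfs": 50, "ft": 50, "mgmt": 50}
alloc = netioc_bandwidth(shares, active=set(shares))
print(f"VM traffic minimum: {alloc['vm']:.0f} Mbps")  # 50/400 of 10 Gbps = 1250 Mbps
```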
NIC Teaming Enhancements—Load Based Teaming (LBT)
[Screenshot: Distributed Virtual Port Group configuration panel]
Note: adjacent physical switch configuration is the same as for other teaming types (except IP-hash), i.e. the same L2 domain.
Load Based Teaming (LBT) – only available with vDS
• Existing vSphere 4.0 teaming modes use a static virtual port to pNIC allocation—the teaming algorithms do not account for load
• LBT goal: avoid congestion by balancing load across the team
• LBT is invoked if saturation is detected on Tx or Rx (>75% mean utilization over a 30s period)
• The long 30-second window avoids MAC address flapping issues with adjacent physical switches
• Note: the current implementation does not take NetIOC shares into account
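A minimal sketch of the trigger condition described above (sampling cadence and function names are assumptions; the real scheduler also decides *which* virtual ports to move, which is omitted here):

```python
from statistics import mean

SATURATION_PCT = 75  # LBT triggers above 75% mean utilization...
WINDOW_SECS = 30     # ...measured over a 30-second window

def lbt_should_rebalance(samples_pct: list[float]) -> bool:
    """True if a pNIC's mean Tx or Rx utilization over the last
    window exceeds the saturation threshold (1 sample per second)."""
    window = samples_pct[-WINDOW_SECS:]
    return len(window) == WINDOW_SECS and mean(window) > SATURATION_PCT

print(lbt_should_rebalance([80.0] * 30))                # True: sustained saturation
print(lbt_should_rebalance([95.0] * 10 + [40.0] * 20))  # False: a spike averages out
```

Averaging over 30 seconds, rather than reacting to instantaneous spikes, is what prevents ports from bouncing between pNICs and causing MAC flapping on the physical switch.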
vSphere 4.1—Network Feature Summary
Network Performance & Scale Improvements
• vmkernel TCP/IP stack—vMotion, NFS, FT logging performance gains
• UDP and intra-host VM to VM performance improvements
• vDS scaling to ~350 hosts/vDS (from current 64 hosts/vDS)—final number is TBD!
Traffic Management (*vDS only* features)
• NetIOC (Network I/O Control)
 Software scheduler to guarantee service levels for specified traffic types
• LBT (Load Based Teaming)
 Avoid congestion by dynamic adjustment to NIC team based upon pNIC load
IPv6—NIST Compliance
• Compliance with NIST “Host” Profile
Nexus 1000V Enhancements
• Additional Nexus 1000V features—Nexus 1000V V1.4 or 2.0
Agenda
Storage – Storage I/O Control, VAAI, I/O Metrics, iSCSI Offload enhancements
The I/O Sharing Problem
• A low-priority VM can limit I/O bandwidth for high-priority VMs
• Storage I/O allocation should be in line with VM priorities
[Diagram: "What you see" vs. "What you want to see" — the online store, Microsoft Exchange, and data mining VMs sharing one datastore, with data mining crowding out the others until I/O is prioritized]
Solution: Storage I/O Control (SIOC)
Per-VM shares now extend from CPU and memory to storage I/O:

VM                   CPU shares   Memory shares   I/O shares
online store         High         High            High
Microsoft Exchange   High         High            High
data mining          Low          Low             Low

[Diagram: all three VMs draw on the host's 32 GHz / 16 GB of compute resources and on Datastore A]
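A minimal model of proportional-share allocation under congestion. The Low/Normal/High preset values match vSphere's default disk-share numbers; the datastore throughput figure is illustrative:

```python
SHARE_VALUES = {"Low": 500, "Normal": 1000, "High": 2000}  # vSphere's default presets

def sioc_allocate(vms: dict[str, str], datastore_iops: int) -> dict[str, float]:
    """Divide a congested datastore's throughput in proportion to I/O shares."""
    weights = {vm: SHARE_VALUES[level] for vm, level in vms.items()}
    total = sum(weights.values())
    return {vm: datastore_iops * w / total for vm, w in weights.items()}

# The slide's scenario: data mining no longer crowds out the other VMs.
print(sioc_allocate({"online store": "High", "Microsoft Exchange": "High",
                     "data mining": "Low"}, datastore_iops=9000))
# {'online store': 4000.0, 'Microsoft Exchange': 4000.0, 'data mining': 1000.0}
```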
vStorage APIs for Array Integration (VAAI)
Improves performance by leveraging efficient array-based operations as an alternative to VMware host-based solutions.
Three primitives:
1. Full Copy – an XCOPY-like function that offloads copy work to the array
2. Write Same – speeds up zeroing out blocks or writing repeated content
3. Atomic Test and Set – an alternative to locking the entire LUN
Helps functions such as:
• Storage vMotion
• Provisioning virtual machines from template
• Thin-provisioned disk performance
• VMFS shared storage pool scalability
• Requires firmware from storage vendors (6 participating)
• Supports block-based storage only in the 4.1 release
For more details on VAAI
The vSphere 4.1 documentation also describes the use of these features in the ESX Configuration Guide, Chapter 9 (pages 124-125), listed in the TOC as "Storage Hardware Acceleration".
Three settings under Advanced Settings:
 DataMover.HardwareAcceleratedMove – Full Copy
 DataMover.HardwareAcceleratedInit – Write Same
 VMFS3.HardwareAcceleratedLocking – Atomic Test and Set
Additional collateral planned for release at GA:
 Frequently Asked Questions
 Datasheet or webpage content
Partners include: Dell/EqualLogic, EMC, HDS, HP, IBM, and NetApp
* Will only support block-based storage in 4.1
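The three toggles can also be read programmatically. A hedged pyVmomi sketch (host names and credentials are placeholders; in the vSphere Client the same keys appear under the host's Advanced Settings):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholders: vCenter/host names and credentials are illustrative only.
si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                  sslContext=ssl._create_unverified_context())  # lab use only
host = si.RetrieveContent().searchIndex.FindByDnsName(
    dnsName="esx01.example.com", vmSearch=False)

opt_mgr = host.configManager.advancedOption  # the host's OptionManager
for key in ("DataMover.HardwareAcceleratedMove",   # Full Copy
            "DataMover.HardwareAcceleratedInit",   # Write Same
            "VMFS3.HardwareAcceleratedLocking"):   # Atomic Test and Set
    print(key, "=", opt_mgr.QueryOptions(name=key)[0].value)  # 1 = offload enabled

Disconnect(si)
```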
NFS & HW iSCSI Support (Storage) in vSphere 4.1
Improved NFS performance
 Up to 15% reduction in CPU cost for both read & write
 Up to 15% improvement in throughput for both read & write
Broadcom iSCSI HW Offload Support
 89% improvement in CPU read cost!
 83% improvement in CPU write cost!
New Performance Monitoring Statistics
 Comprehensive host & VM storage
performance statistics enable proactive
monitoring to simplify troubleshooting
 Heterogeneous customer storage
environments supported (FC, iSCSI, NFS)
 Tools support varied usage scenarios
o GUI for trending and user-friendly
comparative analysis
o Command-line for scripting/ drill-down
at host
Features:
•Throughput and latency statistics for:
• Datastore per host
• Storage adapter & path per Host
• Datastore per VM
• VMDK per VM
•Realtime and historical trending (vCenter)
•Esxtop (for ESX) and resxtop (for ESXi)
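For the command-line path, a small sketch of collecting batch-mode output for offline drill-down (the host name is a placeholder; resxtop prompts for credentials; -b, -d, and -n are esxtop's standard batch-mode, delay, and iteration-count options):

```python
import subprocess

# Capture one minute of esxtop counters as CSV for later analysis.
with open("storage_stats.csv", "wb") as out:
    subprocess.run(
        ["resxtop", "--server", "esx01.example.com",  # placeholder host
         "-b",         # batch mode: CSV to stdout
         "-d", "5",    # sample every 5 seconds
         "-n", "12"],  # 12 iterations = one minute of data
        stdout=out, check=True)
```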
Agenda
ESXi – Deployment Methods, Tech Support Mode enhancements
Driving Customers to ESXi with vSphere 4.1
ESXi – Establishing the Gold Standard in Hypervisors

4.1 Enhancements for ESXi
 New Deployment Options
 Centralized updating of 3rd party code with Update Manager
 Improved Local Authentication, including Built-in Active Directory Service
 Easier CLI options for Troubleshooting
 Better Control over Local Activity

What Does This All Mean for You?
 Recommend that all vSphere 4.1 deployments use the ESXi hypervisor
 vSphere 4.1 is the last release with the ESX hypervisor (sometimes known as "ESX classic")
 See the ESX to ESXi Upgrade Center for more details (updated by 7/13 for launch)
Summary of new features in ESXi 4.1
Deployment Options
• Boot from SAN
• Scripted Installation (a la “Kickstart”)
Centralized updating of 3rd party code
• VMware Update Manager can deploy drivers, CIM providers, other modules
Improved Local Authentication
• Built-in Active Directory Service
• DCUI and Tech Support Mode access by any authorized user (not just root)
Easier CLI options for troubleshooting
• Full support of Tech Support Mode – both local and remote (via SSH)
• Additional commands in Tech Support Mode: vscsiStats, nc, tcpdump-uw, etc.
• Additional management options in vCLI: SCSI, VAAI, Network, VM
Better control over local activity
• DCUI and Tech Support Mode are configurable in vCenter Server
• Total host lockdown possible
• Activity in Tech Support Mode is sent to syslog
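As a sketch of the remote Tech Support Mode workflow, here is how the newly added vscsiStats tool could be driven over SSH, assuming paramiko on the admin side and remote TSM enabled on the host (host, credentials, and the world group ID are placeholders):

```python
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("esx01.example.com", username="root", password="secret")  # placeholders

# List running VM world group IDs, then start collecting I/O histograms
# for one of them (12345 is a placeholder world group ID).
for cmd in ("vscsiStats -l", "vscsiStats -s -w 12345"):
    _, stdout, _ = client.exec_command(cmd)
    print(stdout.read().decode())

client.close()
```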
Agenda
DRS & DPM – DRS Host Affinity, DPM enhancements
HA & FT – HA Diagnostics & Reliability, FT enhancements, vMotion enhancements
DRS Host Affinity

Description
Set granular policies that define only certain virtual machine movements.

Benefits
• Tune the environment according to availability, performance, and/or licensing requirements
• A cloud enabler (multi-tenancy)

Beta Feedback
"Awesome, we can separate VMs between data centers or blade enclosures with DRS host affinity rules"

Proof Point
Mandatory compliance enforcement for virtual machines: in a 4-host DRS/HA cluster, "A" VMs run on "Server A" hosts only and "B" VMs run on "Server B" hosts only.

[Diagram: five VMs tagged "A" or "B" pinned to the matching "Server A" / "Server B" hosts]
DRS Host Affinity
Rule enforcement: 2 options
• Required: DRS/HA will never violate the rule; an event is generated if the rule is violated manually. Only advised for enforcing host-based licensing of ISV apps.
• Preferential: DRS/HA will violate the rule if necessary for failover or for maintaining availability.

[Screenshot: required and preferential rule types in the cluster settings dialog]
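A minimal model of the two enforcement modes (function and field names are illustrative, and the real placement logic weighs far more factors): required rules always constrain placement, while preferential rules are relaxed when HA needs capacity for a failover.

```python
def allowed_hosts(vm: str, hosts: list[str], rules: list[dict],
                  failover: bool) -> list[str]:
    """Required rules always constrain placement; preferential
    rules are dropped when HA needs capacity for a failover."""
    for rule in rules:
        if vm in rule["vms"] and (rule["required"] or not failover):
            hosts = [h for h in hosts if h in rule["hosts"]]
    return hosts

rules = [{"vms": {"vmA"}, "hosts": {"server-a1", "server-a2"}, "required": False}]
print(allowed_hosts("vmA", ["server-a1", "server-a2", "server-b1"], rules, failover=False))
# ['server-a1', 'server-a2'] -- preferential rule honored in normal operation
print(allowed_hosts("vmA", ["server-a1", "server-a2", "server-b1"], rules, failover=True))
# ['server-a1', 'server-a2', 'server-b1'] -- relaxed to restart the VM
```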
DPM Enhancements
Scheduling Distributed Power Management
• Turning on/off DPM is now a scheduled task
• DPM can be turned off prior to business hours in anticipation of higher resource demands
Disabling DPM brings hosts out of standby
• Eliminates risk of ESX hosts being stuck in standby mode while DPM is disabled.
• Ensures that when DPM is disabled, all hosts are powered on and ready to
accommodate load increases.
vMotion Enhancements
• Significantly decreased the overall migration time (time will vary depending on
workload)
• Increased number of concurrent vMotions:
 ESX host:
- 4 on a 1 Gbps network and
- 8 on a 10 Gbps network
 Datastore: 128 (both VMFS and NFS)
• Elapsed time reduced by 5x on 10GbE tests
• Maintenance mode evacuation time is greatly decreased due to above
improvements
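A toy check of the new concurrency caps; the limit values come from the list above, while the function and its callers are illustrative only:

```python
# Caps taken from the slide; everything else is illustrative.
VMOTIONS_PER_HOST = {1: 4, 10: 8}  # keyed by vMotion network speed in Gbps
VMOTIONS_PER_DATASTORE = 128       # applies to both VMFS and NFS

def can_start_vmotion(active_on_host: int, nic_gbps: int,
                      active_on_datastore: int) -> bool:
    return (active_on_host < VMOTIONS_PER_HOST[nic_gbps]
            and active_on_datastore < VMOTIONS_PER_DATASTORE)

print(can_start_vmotion(7, 10, 90))  # True: room for an 8th vMotion on 10 GbE
print(can_start_vmotion(4, 1, 90))   # False: a 1 GbE host caps out at 4
```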
Fault Tolerance (FT) Enhancements

FT fully integrated with DRS
• DRS load balances FT Primary and Secondary VMs. EVC required.

Versioning control lifts requirement on ESX build consistency
• The Primary VM can run on a host with a different build # than the Secondary VM.

Events for Primary VM vs. Secondary VM differentiated
• Events are logged/stored differently.

[Diagram: DRS placing an FT Primary VM and its Secondary VM on different hosts within a resource pool]
HA Diagnostic and Reliability Improvements
HA Healthcheck Status
• HA provides an ongoing healthcheck facility to ensure that the required cluster
configuration is met at all times. Deviations result in an event or alarm on the
cluster.
HA Operational Status
• A new Cluster Operational Status
window displays more information
about the current HA operational
status, including the specific status
and errors for each host in the
HA cluster.
Improved HA-DRS interoperability during HA failover
• DRS will perform vMotion to free up contiguous resources (i.e. on one host) so
that HA can place a VM that needs to be restarted
HA app awareness – exposes APIs for 3rd-party app developers
VMware Data Recovery: New Capabilities

Backup and Recovery Appliance
- Support for up to 10 appliances per vCenter Server instance to allow protection of up to 1000 virtual machines
- File Level Restore client for Linux VMs
- Improved VSS support for Windows 2008 and Windows 7: application-level quiescing

Destination Storage
- Expanded support for DAS, NFS, iSCSI, or Fibre Channel storage, plus CIFS shares as destination
- Improved deduplication performance

vSphere Client Plug-In
- Ability to switch seamlessly between multiple backup appliances
- Improved usability and user experience

[Diagram: Data Recovery appliances registered with VMware vCenter Server in VMware vSphere 4.1]
Agenda
Management – vCenter Server & vSphere Platform enhancements
Enhanced vCenter Scalability – "Cloud Scale"

Metric                     vSphere 4   vSphere 4.1   Ratio
VMs per host               320         320           1x
Hosts per cluster          32          32            1x
VMs per cluster            1280        3000          3x
Hosts per VC               300         1000          3x
Registered VMs per VC      4500        15000         3x+
Powered-On VMs per VC      3000        10000         3x
Concurrent VI Clients      30          120           4x
Hosts per DC               100         500           5x
VMs per DC                 2500        5000          2x
Management – New Features Summary
vCenter Server MUST be hosted on a 64-bit Windows OS
• 32-bit to 64-bit data migration
• Enhanced Scalability
Update Manager
Host Profile Enhancements
Orchestrator
Active Directory Support (Host and vMA)
Management Assistant (vMA)
• Scale and readiness
Converter
• Hyper-V Import
Virtual Serial Port Concentrator (VSPC)
Thank You
© 2010 VMware Inc. All rights reserved