Storage - Sankethika


Storage
Storage Concepts
IP Storage: iSCSI and NAS/NFS
Fibre Channel SAN Storage
VMFS Datastores
Storage – virtual disks & VMFS
Storage Area Networks (SAN)
Understanding FC & iSCSI Storage
Why you need a SAN
Storage Terms you must know
What is in a Datastore?
ESX Server Storage Options
VMFS Specs and Maximums
Types of Storage
Local (SCSI/SATA/SAS/IDE)
SAN (Fibre Channel & iSCSI)
NAS (NFS & CIFS)
Why do you need storage?
For ESX to boot
For virtual machines
Centralized storage is required for advanced features of vSphere like
VMotion, VMHA, FT, and DRS
ESX / ESXi
If we choose to install ESX/ESXi Server, it can be installed on the local
disk of the physical machine or on the SAN (Boot from SAN - remote boot).
VMware ESX supports boot from SAN. Booting from SAN requires one
dedicated LUN per server.
VMware ESXi (4.1 only) may be booted from SAN. This is supported for
Fibre Channel SAN, as well as iSCSI and FCoE for certain storage
adapters that have been qualified for this capability. (Refer to the HCL
for supported storage adapters.)
ESXi comes in two versions:
Embedded: ESXi is embedded in the hardware on a Flash ROM; these servers
are mostly provided by vendors.
Installable/ISO: ESXi is also available as an Installable version or an ISO.
Space Requirements to Install ESX
Partition       Size
vmkcore         110 MB
/boot           1.1 GB
/ (root)        5 GB
/var/log        2 GB
swap            600 MB to 1.6 GB
The vmkcore partition is used to store the core dumps generated by the
VMkernel; after a reboot, these dumps are written into the /var/core
directory.
Optionally you can go for a scratch partition - a 4 GB vFAT partition.
ESXi can boot locally from a USB drive; we may not have 4 GB free on the
USB drive, and in that case we might not create a scratch partition.
When we don't create a scratch partition, the ESXi kernel will additionally
use 512 MB of memory for itself by default. That is, the VMkernel uses
154 MB of memory, plus 512 MB for itself if you don't have a scratch
partition.
The scratch partition is used as swap space for the VMkernel.
On ESX servers, swap is used for the Service Console and is sized at 1.5 to
2 times its memory. The Service Console uses a minimum of 300 MB of memory
by default, so the minimum swap partition is 600 MB. The maximum memory it
might use is 800 MB, so swap should be allocated at around 1.6 GB.
Space Requirements to Install ESX
So in all, a maximum of around 9.8 GB of disk space is what you need to
install ESX Server.
ESX Server supports up to 1 TB of memory.
ESXi, by contrast, has a small footprint: it can be installed in about
110-120 MB.
ESXi can be installed on SCSI, SATA, IDE and SAN (ESXi 4.1). ESXi
cannot boot from NFS or CIFS.
ESX can be booted from Fibre Channel SAN. The first boot device would be
the FC HBA through which you are booting; the path between this HBA and the
target storage processor should be an active path, not a passive one, and
the HBA should be able to recognize the boot LUN.
What are all the ways to provide storage to the virtual machines?
When creating a virtual machine, you have the options of creating a new
virtual disk, using an existing virtual disk, using an RDM, or no disk.
What is the difference between a virtual disk and a raw disk?
For an operating system, what is considered to be raw?
A disk without a file system - without a file system the operating system
can understand - is considered raw.
Block Size
When formatting a datastore, we have to define the block size. A block is
the minimum space that a file occupies, and it is defined while creating the
file system. For example, with a block size of 8 KB, a file of 1 KB will
occupy 8 KB. Similarly, a file of 18 KB will occupy 3 blocks (24 KB). It's
important to note that a block can only be occupied by a single file:
if a file fills only half a block, the remaining free space in that block
is not shared with another file.
Now if I format the file system with an 8 MB block size, I get a maximum
file size of 2 TB. This space will be used for creating virtual machines.
What is a Datastore?
A datastore is a logical storage unit that can use disk space on one physical device or
one disk partition, or span several physical devices.
Types of datastores:
1. VMFS
2. Network File System (NFS)
Datastores are used to hold virtual machines, templates, and ISO images. A VMFS
datastore can also hold a raw device mapping (RDM), which is used to access
raw data.
VMFS Datastore:
It can recognize only up to a 2 TB LUN
A VMFS datastore can extend across multiple LUNs, spanning a maximum of 32
LUNs, meaning a single datastore can be up to 64 TB (this is not a good
practice, though)
Allows concurrent access to shared storage
Can be dynamically expanded
Can use an 8 MB block size, good for storing large virtual disk files
Provides on-disk, block-level locking
You can format a local disk, SAN or iSCSI LUN to create a datastore
VMFS can be formatted with different block sizes, which are defined while
creating datastores. For example, for a 2 TB disk formatted with a
1 MB block size, the maximum file size will be 256 GB
2 MB block size, the maximum file size will be 512 GB
4 MB block size, the maximum file size will be 1 TB
8 MB block size, the maximum file size will be 2 TB
Even a small file, for example 1 KB, occupies a full block: with a block
size of, say, 1 MB it takes up 1 MB even though the data in the file is only
1 KB. You cannot store more than a single file in a block.
Block size and vmdk size limitation
Note: When you create a VMFS datastore on your VMware ESX servers, many
administrators select the default 1MB block size without knowing when or why
to change it. The block size determines the minimum amount of disk space
that any file will take up on VMFS datastores. So an 18KB log file will
actually take up 1MB of disk space (1 block) and a 1.3MB file will take up
2MB of disk space (2 blocks). But the block size also determines the maximum
size that any file can be: if you select a 1MB block size on your datastore,
the maximum file size is limited to 256GB. So when you create a VM you
cannot assign it a single virtual disk greater than 256GB.
There is also no way to change the block size after you set it without
deleting the datastore and re-creating it, which will wipe out any data on
the datastore.
Because of this you should choose your block size carefully when creating
VMFS datastores. VMFS datastores mainly contain larger virtual disk files,
so increasing the block size will not use all that much more disk space over
the default 1MB size.
Block size and performance
Besides having smaller files use slightly more disk space on your datastore,
there are no other downsides to using larger block sizes. There is no
noticeable I/O performance difference when using a larger block size. When
you create your datastore, make sure you choose your block size carefully.
1MB should be fine if you have a smaller datastore (less than 500GB) and
never plan on using virtual disks greater than 256GB. If you have a medium
(500GB – 1TB) datastore and there is a chance that you may need a VM with a
larger disk, then go with a 2MB or 4MB block size. For larger datastores
(1TB – 2TB) go with a 4MB or 8MB block size. In most cases you will not be
creating virtual disks equal to the maximum size of your datastore (2TB), so
you will usually not need an 8MB block size.
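Because the block size is fixed at creation time, it has to be chosen when
the datastore is formatted. A minimal sketch from the ESX service console;
the device path naa.600xxxx and the label bigvmfs are placeholders, not from
the original material:

    # Create a VMFS-3 datastore with an 8 MB block size (-b) and a
    # label (-S) on the first partition of a LUN visible to this host
    vmkfstools -C vmfs3 -b 8m -S bigvmfs /vmfs/devices/disks/naa.600xxxx:1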
RDM
RDM or Raw Device Mapping is a method of presenting a raw LUN to a virtual
machine.
RDMs provide a way for virtual machines to access disks directly, bypassing
the virtualization layer.
RDMs are used for cluster applications like MSCS (Microsoft Cluster
Service), for example when creating a cluster between a physical and a
virtual machine (see the sketch after this list).
RDM mappings are supported for the following devices:
SCSI
SATA
Fibre Channel
iSCSI
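An RDM is presented to the VM through a mapping file that itself lives on a
VMFS datastore. A hedged sketch of creating one with vmkfstools from the
service console; the device name and destination paths are placeholders:

    # Virtual compatibility mode (-r): the mapped disk can take snapshots
    vmkfstools -r /vmfs/devices/disks/naa.600xxxx /vmfs/volumes/ds1/vm1/rdm1.vmdk

    # Physical compatibility mode (-z): SCSI commands are passed through,
    # which is what physical-to-virtual MSCS clusters require
    vmkfstools -z /vmfs/devices/disks/naa.600xxxx /vmfs/volumes/ds1/vm1/rdm2.vmdk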
What Files Make Up a Virtual Machine?
VM Files
VMX file – the size of these files will be in KBs
Log files – cannot grow beyond a few MBs
vmsd file (snapshot description file), NVRAM file, vmdk and flat.vmdk
What is the difference between vmdk and flat.vmdk?
The .vmdk is the descriptor of the virtual disk, and the flat.vmdk is the
actual disk for that particular virtual machine. So it is a file which acts
as the disk for a virtual machine.
When we create or provide a virtual disk for a virtual machine, it has to be
kept on a VMFS volume/datastore. In conclusion, this means that when we
format a VMFS volume with a 1 MB block size, we can create a maximum virtual
disk of 256 GB for a virtual machine, and so on.
SAN (Storage Area Network)
[Diagram: two ESX/ESXi hosts, each with HBA ports 0 and 1, connected
through an FC switch to the SAN's storage processor (HBA) and disks]
Interconnecting multiple nodes using an FC switch is called a Fabric.
SAN (Storage Area Network)
Depending on the appropriate needs, the SAN administrator will create a
hardware RAID and then create a LUN.
LUNs are identified by their IDs, for example 0, 1, 2, etc.
LUN IDs can be generated dynamically or created statically.
From the ESX Server side, the HBAs are recognized using their WWN (World
Wide Name), similar to the MAC address of an Ethernet controller.
A WWN is a 64-bit hexadecimal value assigned to the HBA by the vendor.
The ESX admin needs to provide the WWNs to the storage administrator.
An ESX Server can recognize up to 8 HBAs and up to 16 paths per LUN,
but ESX supports a maximum of 1024 paths in total per server.
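To collect the WWNs for the storage administrator, you can read them off the
host itself. A sketch assuming the ESX 4.x service console; the exact output
format varies by release:

    # List the storage adapters; for an FC HBA the identifier column shows
    # the WWNN:WWPN pair, e.g. fc.20000000c9769a80:10000000c9769a80
    esxcfg-scsidevs -a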
SAN (Storage Area Network)
[Diagram: storage groups on the SAN map LUN 0 to WWN1/WWN2 and LUN 1 to
WWN3/WWN4; each ESX/ESXi host's HBA is identified across the FC switch by
its WWN]
Since WWNs are 64-bit hexadecimal numbers and are difficult to remember, the
storage admin creates an alias for each WWN, giving it a friendly name; for
example, for ESX1 he chooses ESX1 as the name, and ESX2 respectively.
LUN mapping and LUN masking are done at the storage end.
Depending on the storage make, a single LUN can be presented to up to 128
nodes.
Identifying HBAs
HBAs are identified as vmhba0, vmhba1, and so on.
Each and every HBA has a controller, which is always 0.
[Diagram: an ESX/ESXi host with vmhba0 and vmhba1 (each with controller 0)
reaching LUN 1 through two targets, T1 and T2, behind the FC switch]
When you can access a LUN using multiple paths, you have multipathing.
Multipathing provides continuous access to a LUN in case of a hardware
failure.
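To see which paths a host actually has to each LUN, the service console
provides esxcfg-mpath. A minimal sketch; the output columns differ between
ESX versions:

    # List all LUNs with their paths, path states and current policy
    esxcfg-mpath -l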
Multipathing Policies
ESX / ESXi supports multipathing policies.
NMP – the Native Multipathing Plugin, which consists of:
Fixed – provides only fault tolerance
Most Recently Used (MRU) – provides only fault tolerance
Round Robin – provides both fault tolerance and load balancing
When using fault tolerance with Fixed or MRU, one path becomes the active
path and the other becomes the passive path, which is used only in case of
a failure of the HBA currently in use, switching I/O to the 2nd HBA.
With the Fixed policy, when a failed HBA recovers from a failure it becomes
active again, changing the state of the secondary HBA from active to
passive.
But in the case of MRU, when a failed HBA recovers it goes into passive
mode, since the most recently used path is that of the secondary HBA.
In Round Robin both HBAs are in active-active mode.
Storage vendors might have their own multipathing policies which might not
be recognized by ESX Server, so kindly check with the vendor before buying
the storage. Storage vendors might provide multipathing as a plugin to be
installed.
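On vSphere 4.x the NMP policy can also be set per device from the command
line. A hedged sketch; the device identifier naa.600xxxx is a placeholder
for one of your LUNs:

    # Show the path selection policies (PSPs) the host knows about
    esxcli nmp psp list

    # Set Round Robin on one device; the other built-in PSPs are
    # VMW_PSP_FIXED and VMW_PSP_MRU
    esxcli nmp device setpolicy --device naa.600xxxx --psp VMW_PSP_RR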
Fibre Switch
For security, a storage admin can configure zones at the FC switch.
Zones are of 2 types: soft zones and hard zones.
[Diagram: an ESX/ESXi host zoned through the FC switch (targets T1 and T2)
to LUN 1 behind the storage processor; a zone can be defined as a soft zone
or a hard zone]
Zoning
Hard Zone
Is configured on the ports of the FC switch.
If a cable is unplugged from the zone port and attached to another port
outside the zone, the LUNs are lost; the new port needs to be reconfigured
into the hard zone in this case.
But if the HBA is changed and reconfigured with its WWN on the storage, no
changes need to be made on the FC switch.
Soft Zones
Are configured using WWNs.
Changing ports does not affect access to LUNs.
If an HBA is changed, the soft zones need to be reconfigured.
What is the difference between Fibre Channel and iSCSI?
Both are SANs.
• iSCSI uses an IP-based connection
• Fibre Channel uses a Fibre Channel HBA
For Fibre Channel, the storage is connected to an FC switch, which is
connected to an FC HBA on the ESX host.
For iSCSI, the storage is connected to an Ethernet network, and you can use
a hardware initiator.
What is an initiator?
An initiator is similar to an HBA. A hardware initiator is similar to a
Fibre Channel HBA: where an FC HBA has a Fibre Channel port, an iSCSI
initiator has an Ethernet port. A hardware initiator has a controller, whose
role is to encapsulate the SCSI protocol into the IP protocol, since
transport from one end to the other has to use IP. Alternatively, I can use
a plain NIC, which can also communicate using Ethernet technology. In this
case the NIC does not have the capability of encapsulating the SCSI protocol
into the IP protocol, so you have to use a software initiator. A software
initiator uses CPU cycles, since the NIC does not have a controller like a
hardware initiator does.
Understanding iSCSI Storage
iSCSI (Internet SCSI) is sending SCSI disk commands and data
over a TCP/IP network
Why use it?
1. Low cost
2. Use existing hardware - Ethernet NIC, switch, and OS features
3. Supports almost all vSphere features
Understanding iSCSI Storage
Downside – performance? and reliability?
iSCSI Terms:
• iSCSI hardware initiator - a special iSCSI NIC card
• iSCSI software initiator - use your own NIC card and OS iSCSI
software
• iSCSI Target - the server running iSCSI
[Diagram: an ESX/ESXi host with a NIC or iSCSI HBA connected over a TCP/IP
network through an Ethernet switch to the storage processor (SP) and disk
arrays]
iSCSI uses the TCP/IP protocol and the IQN (iSCSI Qualified Name) naming
convention.
Understanding iSCSI Storage
iSCSI uses IQNs (iSCSI qualified names) to identify iSCSI targets and
initiators.
An IQN is laid out in this format:
• date in year-month format
• reversed domain of the iSCSI provider, for example qlogic for a
hardware initiator; if it's software iSCSI then, for example, Microsoft
might have provided the iSCSI software
• a unique organization-assigned name (i.e. hostname)
• For example: iqn.2009-10.com.hpesindia:iscsi1
Understanding iSCSI Storage
Configuring iSCSI
Hardware Initiator (HBA)
Log in to the iSCSI storage and reboot.
Go to the BIOS of the system and then to the BIOS of the HBA to
configure the IP.
Software Initiator (only for ESX Server)
By default the daemon iscsid is disabled.
The iSCSI port 3260 is blocked by the firewall.
iSCSI uses a VMkernel connection type, and ESX by default does not
have a VMkernel port.
On ESXi everything is configured by default; all an admin has to do is
enable the software iSCSI initiator (see the sketch below for the ESX-side
commands).
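On classic ESX those three defaults have to be changed by hand. A hedged
sketch from the service console, assuming a vSwitch named vSwitch0 and a
placeholder IP on the storage network:

    # Add a VMkernel port group for iSCSI traffic and give it an IP
    esxcfg-vswitch -A iSCSI vSwitch0
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 iSCSI

    # Open the iSCSI client port (TCP 3260) in the ESX firewall
    esxcfg-firewall -e swISCSIClient

    # Enable the software iSCSI initiator (starts iscsid)
    esxcfg-swiscsi -e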
Understanding iSCSI Storage
Configuring iSCSI
iSCSI uses 2 types of discovery methods to connect to an iSCSI
storage:
Static – manually enter the IP address and the IQN, telling the ESX host
how to connect to the iSCSI storage
Send Targets (Dynamic)
Send Targets (Dynamic)
[Diagram: an ESX/ESXi host and a Windows 2000 name-server machine on a
TCP/IP network, connected through an Ethernet switch to the storage
processor (SP) and disk arrays]
For dynamic discovery you need an additional system running special software
such as an iSNS server (Internet Storage Name Service). iSNS resolves IQNs
just as a DNS server resolves host names to IP addresses. It contains a
database of all IQNs, and at the ESX Server end an admin needs to enter the
IP address of the iSNS server.
Send Targets (Dynamic)
LUN mapping and LUN masking can be done at the iSCSI storage system.
These are done in 2 ways:
on IP address, or
on IQN.
NFS
NFS is supported by ESX/ESXi.
CIFS is not supported at all.
By default, 8 NFS volumes can be mounted.
This figure can be changed, and up to 64 NFS volumes can be mounted on a
single ESX box (see the sketch below).
In this way, ESX supports 256 LUNs or disks plus 64 NFS volumes, which makes
a total of 320 datastores.
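The 8-volume default is an advanced VMkernel setting. As a hedged sketch,
it can be raised from the service console (the same option is reachable via
Advanced Settings in the vSphere Client):

    # Raise the NFS mount limit from the default 8 up to the maximum of 64
    esxcfg-advcfg -s 64 /NFS/MaxVolumes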
NFS
[Diagram: a Unix/Linux NFS server exporting /data (rw, no_root_squash),
mounted by the ESX/ESXi host as /mnt/nfs over a TCP/IP network through an
Ethernet switch]
NFS provides file-level access.
On the server it is configured in /etc/exports, after which you execute the
command exportfs.
On the ESX/ESXi side, the admin should know the IP address of the NFS server
and the mount point.
NFS also uses a VMkernel port.
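End to end, the setup looks roughly like this. A hedged sketch: the server
address 192.168.10.20, the export /data and the datastore label nfsdata are
placeholders:

    # On the Unix/Linux NFS server: export the directory in /etc/exports,
    #   /data  192.168.10.0/24(rw,no_root_squash)
    # then activate the export table
    exportfs -a

    # On the ESX/ESXi host: mount the export as an NFS datastore
    esxcfg-nas -a -o 192.168.10.20 -s /data nfsdata

    # Verify the mount
    esxcfg-nas -l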