VCP 5 – Objective 3.1 – Configure Shared Storage for vSphere

Identify storage adapters and devices

Supported storage adapters include the following: SCSI, iSCSI, RAID, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Ethernet.  These adapters are accessed directly through drivers in the VMkernel.  

The following details are available under the Storage Adapters section on the configuration tab of a host.

  • Model – Model of the adapter
  • Targets (FC and SCSI) – Number of targets accessed through the adapter
  • Connected Targets (iSCSI) – Number of connected targets.
  • WWN (FC) – World Wide Name formed according to Fibre Channel standards that uniquely identifies the adapter
  • iSCSI Name (iSCSI) – Unique iSCSI Name
  • iSCSI Alias (iSCSI) – The Friendly iSCSI Name
  • IP Address (independent iSCSI HW adapters) – Address assigned to the adapter
  • Devices – All devices or LUNs the adapter can access
  • Paths – All paths the adapter is using
  • Properties – Additional configuration (iSCSI and FCoE)
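
If you like to script these checks, here is a rough pyVmomi (Python) sketch that pulls the same adapter details from the API.  The vCenter name, host name and credentials below are placeholders for your own environment:

  from pyVim.connect import SmartConnect
  from pyVmomi import vim

  # Placeholder vCenter and host names/credentials -- replace with your own
  si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
  host = si.content.searchIndex.FindByDnsName(dnsName='esxi01.example.com', vmSearch=False)

  # Each HBA object corresponds to a row in the Storage Adapters view
  for hba in host.config.storageDevice.hostBusAdapter:
      print(hba.device, hba.model, hba.driver)
      if isinstance(hba, vim.host.InternetScsiHba):
          # iSCSI-specific details: unique iSCSI name and friendly alias
          print('  iSCSI name:', hba.iScsiName, 'alias:', hba.iScsiAlias)
      elif isinstance(hba, vim.host.FibreChannelHba):
          # FC-specific detail: the adapter's World Wide Name
          print('  WWN: %x' % hba.portWorldWideName)
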
Identify storage naming conventions

Each storage device, or LUN, is identified by several different names.  Depending on the type of storage device used, different algorithms are used to generate an identifier for that LUN.

SCSI INQUIRY identifiers are represented by one of the following formats:

  • naa.number
  • t10.number
  • eui.number

Path Based Identifiers.  When the device does not provide page 83 information, the host generates an mpx.path name, where path represents the path to the device, such as mpx.vmhba1:C0:T1:L3.  This example indicates adapter vmhba1, channel 0, target 1, and LUN 3.
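
Here is a quick pyVmomi sketch (connection details are placeholders) that prints the identifier the host has assigned to each device, so you can see the naa./t10./eui./mpx. formats for yourself:

  from pyVim.connect import SmartConnect

  # Placeholder connection details
  si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
  host = si.content.searchIndex.FindByDnsName(dnsName='esxi01.example.com', vmSearch=False)

  # canonicalName carries the naa./t10./eui./mpx. identifier discussed above
  for lun in host.config.storageDevice.scsiLun:
      print(lun.canonicalName, '-', lun.displayName)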

Identify hardware/dependent hardware/software iSCSI initiator requirements
 
Hardware iSCSI adapters.
 
Hardware iSCSI adapters are third-party adapters that offload iSCSI and network processing from your host to the adapter.  They are divided into two categories.
 
Dependent Hardware Adapters
  • Depends on VMware networking, and on iSCSI configuration and management interfaces provided by VMware.
  • Usually presents a standard NIC and iSCSI offload functionality on the same port.
  • Example: Broadcom 5709

Independent Hardware Adapters

  • Implements its own networking and management interfaces
  • All configuration, such as IP management, MAC addressing and other parameters, is handled completely separately from VMware.
  • Example: QLogic QLA4052

Software iSCSI Adapters

A software iSCSI adapter is VMware code running inside the VMkernel.  It allows you to connect to iSCSI targets using standard NICs, without any specialized hardware.

Compare and contrast array thin provisioning and virtual disk thin provisioning
 
Just a note: when using thin provisioning at either the virtual disk or array level, you must monitor your storage usage to avoid running out of physical space.  This technique, called storage over-subscription, allows you to present more virtual storage than you have real physical capacity, and can result in downtime if not monitored closely.
 
Virtual Disk Thin Provisioning
  • Virtual disks are created in a thin format, meaning the ESXi host provisions the disk's entire logical size up front, but only commits as much physical storage as is actually used inside the disk.
  • This is applied on a disk-by-disk basis within the VMs.
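
If you want to see what "thin" means at the API level, here is a rough pyVmomi sketch that adds a thin-provisioned disk to an existing VM.  The VM name, disk size, controller key and unit number are assumptions for illustration:

  from pyVim.connect import SmartConnect
  from pyVmomi import vim

  # Placeholder connection and VM details
  si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
  vm = si.content.searchIndex.FindByDnsName(dnsName='testvm01.example.com', vmSearch=True)

  # Build a new 20 GB virtual disk whose backing is flagged as thin provisioned
  disk = vim.vm.device.VirtualDisk()
  disk.capacityInKB = 20 * 1024 * 1024
  disk.controllerKey = 1000   # assumes the default SCSI controller key
  disk.unitNumber = 1         # assumes SCSI(0:1) is free on that controller
  disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
      diskMode='persistent',
      thinProvisioned=True)   # this flag is what makes the disk thin

  spec = vim.vm.ConfigSpec(deviceChange=[vim.vm.device.VirtualDeviceSpec(
      operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
      fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
      device=disk)])
  vm.ReconfigVM_Task(spec=spec)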

Array Thin Provisioning

  • Rather than individual disks being thin provisioned, the entire LUN is thin provisioned.
  • Performed at the array level.
  • vSphere has no knowledge of this logically sized LUN unless the array is VAAI capable.  These capabilities include the ability to monitor the use of space on thin-provisioned LUNs, as well as to inform the array of datastore space that has been freed when files are deleted or moved, so the array can reclaim those blocks.
  • Without the Storage APIs, a thin-provisioned 2TB LUN containing only 1TB of data will report to the ESXi host as being 2TB in size when in fact it is only consuming 1TB on the array.

Describe zoning and LUN masking practices

Zoning
  • Provides access control in the SAN topology.  Essentially defines which HBAs can connect to which targets.  Any devices outside of the defined zones are not visible to devices inside the zone.
  • Reduces the number of targets and LUNs that are presented to a host.
  • Controls and isolates paths
  • Prevents non-ESXi hosts from seeing a VMFS datastore.
  • Can be used to separate environments as well (test and production).

LUN Masking

  • Similar to zoning, but applied at the host-to-LUN level
  • Limits which LUNs a host can see.
  • A host may be zoned to see a specific LUN; however, that LUN could be masked from the host.

Scan/Rescan storage

For the most part, your host will initiate an automatic rescan of the storage adapters when you perform functions such as creating a VMFS datastore or RDM, adding an extent to an existing datastore, or expanding or deleting a VMFS datastore.  If the host is in a cluster, this rescan will occur on all hosts within the cluster.
 
Some cases require you to perform a manual rescan (right-click the cluster or datacenter and select 'Rescan for Datastores'), such as:
  • Zoning a new disk array on a SAN
  • Creating new LUNs on a SAN
  • Changing any path masking on a host
  • Reconnecting a cable.
  • Changing any CHAP settings (iSCSI)
  • Adding or Removing any discovery or static addresses (iSCSI)
  • Adding a single host to vCenter after you have edited or removed from vCenter a datastore that was shared by the host(s) you are adding.
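
A manual rescan can also be scripted.  A minimal pyVmomi sketch (placeholder connection details) looks something like this:

  from pyVim.connect import SmartConnect

  # Placeholder connection details
  si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
  host = si.content.searchIndex.FindByDnsName(dnsName='esxi01.example.com', vmSearch=False)

  ss = host.configManager.storageSystem
  ss.RescanAllHba()   # rescan every storage adapter for new devices/LUNs
  ss.RescanVmfs()     # look for new or updated VMFS volumes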

Identify use cases for FCoE

FCoE encapsulates Fibre Channel frames into Ethernet frames.  This means your host does not need a dedicated FC HBA to connect to FC storage; instead it can use FCoE adapters and 10Gbit lossless Ethernet to deliver FC traffic.  There are two types of FCoE adapters:
 
Hardware FCoE Adapters
  • Converged Network Adapters (CNAs) that are completely offloaded and contain both network and FC functionality on the same card.
  • vSphere will recognize this as both a standard network adapter (vmnic) and a FCoE adapter (vmhba).
 
Software FCoE Adapters
  • Uses the native FCoE stack in the host for the protocol processing. 
  • Used with a NIC that offers Data Center Bridging (DCB) and I/O offload capabilities.
  • Networking must be properly configured as well as the adapter needs to be activated.
  • Max of 4 software FCoE adapters per host.
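
Activating a software FCoE adapter can also be done through the API.  The sketch below is my best guess at the pyVmomi calls involved (DiscoverFcoeHbas on the host's storage system); the NIC name and the exact spec type name are assumptions, so treat it as a starting point only:

  from pyVim.connect import SmartConnect
  from pyVmomi import vim

  # Placeholder connection details; vmnic4 is assumed to be the DCB-capable NIC
  si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
  host = si.content.searchIndex.FindByDnsName(dnsName='esxi01.example.com', vmSearch=False)

  # Activate a software FCoE adapter on top of the chosen physical NIC
  spec = vim.host.FcoeConfig.FcoeSpecification(underlyingPnic='vmnic4')
  host.configManager.storageSystem.DiscoverFcoeHbas(fcoeSpec=spec)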

Use Cases for FCoE?

  • If you have existing Fibre Channel infrastructure and processes in place, you may want to use FCoE instead of exploring NFS or iSCSI.
  • You get a lossless, extremely low-latency transport model while still utilizing a form of 'network storage'.
  • By going with FCoE CNAs you still have the option of using them for NFS as well.

Create an NFS share for use with vSphere
 
vSphere fully supports NFSv3 over TCP.  NFS is not a block-based file system, so your datastore will not be formatted as VMFS; the file system actually resides on the NFS server.  By moving the file system from the host to the NFS server, you essentially do not need to perform any masking or zoning on the host itself, which makes it very easy to set up.  The process will vary depending on the NFS server you are using, but for the most part you just create a volume, create a folder on the volume, and assign a share name to it.  From there you need to allow the IPs of your hosts to have read/write access to the share.
 
Connect to a NAS device
 
As stated above, this is one of the easiest tasks to perform.  Essentially all you need to do is enter the IP address or DNS name of the NFS server along with the share name, and a name you want for the datastore.  This is done by selecting the 'Add Storage' link in the Storage section of the host's Configuration tab.
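
For reference, here is roughly what the same NFS mount looks like through pyVmomi.  The server, share and datastore names are placeholders, and I'm assuming the NAS volume spec data object maps to vim.host.NasVolume.Specification:

  from pyVim.connect import SmartConnect
  from pyVmomi import vim

  # Placeholder connection details
  si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
  host = si.content.searchIndex.FindByDnsName(dnsName='esxi01.example.com', vmSearch=False)

  # Mount an NFS export as a datastore; server, share and name are placeholders
  spec = vim.host.NasVolume.Specification(
      remoteHost='nas01.example.com',   # NFS server IP or DNS name
      remotePath='/vol/vsphere_ds01',   # exported share
      localPath='NFS_DS01',             # datastore name as it will appear in vSphere
      accessMode='readWrite')
  host.configManager.datastoreSystem.CreateNasDatastore(spec=spec)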
 
Enable/Configure/Disable vCenter Server storage filters
 
Storage filters are provided through vCenter Server to help you avoid storage device corruption or performance degradation that can be caused by an unsupported use of storage devices.  There are four types of storage filters.
 
VMFS Filter
  • Filters out storage devices (LUNs) that are already used by a VMFS datastore on any host managed by vCenter.
  • These LUNs will not show up as candidates to be formatted with VMFS or to be used as an RDM.
  • config.vpxd.filter.vmfsFilter

RDM Filter

  • Similar to the VMFS filter, but filters out RDMs
  • In order for VMs to use the same LUN, they must be set up to share it.
  • config.vpxd.filter.rdmFilter

Same Host and Transports Filter

  • Filters out LUNs ineligible for use as a VMFS datastore extent because of host or storage type incompatibility.
  • Prevents LUNs that are not exposed to all hosts sharing the original VMFS datastore from being used as an extent.
  • Also prevents LUNs that use a different storage type from the original VMFS datastore from being used as an extent.
  • config.vpxd.filter.SameHostAndTransportFilter

Host Rescan Filter

  • Automatically rescans and updates VMFS datastores after you perform datastore management operations
  • Helps to provide a consistent view of datastores across hosts.
  • If this filter is turned off, hosts will still perform a rescan each time a new LUN is presented to a host or cluster.
  • config.vpxd.filter.hostRescanFilter

All of these filters can be enabled, disabled, or configured by going to Home -> vCenter Server Settings, clicking Advanced Settings, and entering the corresponding key with a true/false value.
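
The same keys can be set programmatically against vCenter's advanced settings (the vpxd OptionManager).  A rough pyVmomi sketch, with placeholder vCenter details, might look like this:

  from pyVim.connect import SmartConnect
  from pyVmomi import vim

  # Connect to vCenter itself; the filter keys live in vpxd's advanced settings
  si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
  opt_mgr = si.content.setting   # vCenter Server's OptionManager

  # Disable the Host Rescan Filter (use 'True' to turn it back on)
  opt_mgr.UpdateOptions(changedValue=[
      vim.option.OptionValue(key='config.vpxd.filter.hostRescanFilter', value='False')])

  # Show any filter keys that have been explicitly set
  print([o.key for o in opt_mgr.setting if o.key.startswith('config.vpxd.filter')])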

Configure/Edit hardware/dependent hardware initiators
 
HW Independent Initiators
  1. Check whether the adapter needs to be licensed – vendor documentation
  2. Install the adapter – vendor documentation
  3. Verify the adapter is installed correctly – If it is installed correctly it will be listed in the Storage Adapter section of the Configuration tab of the host.
  4. Configure Discovery information – explained further down
  5. Configure CHAP Parameters – explained further down
HW Dependent Initiators
  1. View the dependent adapter – again, check the Storage Adapters section of the Configuration tab of the host.  If your adapter isn't listed, ensure that it has a valid license – vendor documentation.
  2. Determine the association between dependent HW adapters and physical NICs – Select the appropriate adapter and click 'Properties'.  From here, select the Network Configuration tab and click 'Add'.  Add the corresponding NIC to the adapter.
  3. Configure Networking for iSCSI – explained further down.
  4. Configure Discovery Information – explained further down
  5. Configure CHAP – explained further down
 
Enable/Disable software iSCSI initiator
 
By default the software iSCSI initiator is disabled and must be activated.  Only one software initiator can be activated per host.  To do this, find the software initiator in the Storage Adapters section of the host's Configuration tab.  Right-click it and select 'Properties'.  From there, click 'Configure' and check or uncheck the Enabled checkbox.
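
For scripting, the equivalent pyVmomi call is UpdateSoftwareInternetScsiEnabled on the host's storage system.  A minimal sketch with placeholder connection details:

  from pyVim.connect import SmartConnect

  # Placeholder connection details
  si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
  host = si.content.searchIndex.FindByDnsName(dnsName='esxi01.example.com', vmSearch=False)

  ss = host.configManager.storageSystem
  ss.UpdateSoftwareInternetScsiEnabled(enabled=True)   # False disables it (host reboot required)

  # The new software adapter (typically vmhba3x) will show up in the HBA list afterwards
  print(ss.storageDeviceInfo.softwareInternetScsiEnabled)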
 
Configure/Edit software iSCSI initiator settings
 
Right-click your storage adapter and select 'Properties'.  You should be presented with four tabs.
 
General Tab
  • By clicking 'Configure' you can change the status, the iSCSI name, and its alias.  ***Note*** disabling an iSCSI initiator requires a host reboot.
  • CHAP – allows you to setup various CHAP settings – explained further down.
  • Advanced – many advanced settings.

Network Configuration Tab

  • Allows you to configure port bindings and select the port group to be associated with the software iSCSI stack – explained further below.

Dynamic Discovery Tab

  • Also known as Send Targets.
  • Each time the initiator contacts a specified server, the initiator sends the SendTargets request to it.  The server will respond by supplying a list of available targets back.
  • The names and IPs of the targets will appear on the Static Discovery tab.  If you remove one of these entries from the Static Discovery tab, it will more than likely reappear the next time a rescan happens, the HBA is reset, or the host is rebooted.
  • To configure, click 'Add'.  Enter in the IP Address or DNS name of the storage system and click 'OK'.  Once the connection is established, the static discovery list will be updated.

Static Discovery Tab

  • No discovery is performed.
  • Need to manually input the target names and the associated IP Address.
  • Click 'Add' and specify the target server's name or IP, port, and associated target name (IQN).
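
Both discovery methods can be scripted as well.  A rough pyVmomi sketch, assuming a software adapter named vmhba33 and placeholder target addresses/IQN:

  from pyVim.connect import SmartConnect
  from pyVmomi import vim

  # Placeholder connection details; vmhba33, addresses and IQN are assumptions
  si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
  host = si.content.searchIndex.FindByDnsName(dnsName='esxi01.example.com', vmSearch=False)
  ss = host.configManager.storageSystem

  # Dynamic discovery: add a Send Targets server
  ss.AddInternetScsiSendTargets(
      iScsiHbaDevice='vmhba33',
      targets=[vim.host.InternetScsiHba.SendTarget(address='10.0.0.50', port=3260)])

  # Static discovery: add a specific target by IP, port and IQN
  ss.AddInternetScsiStaticTargets(
      iScsiHbaDevice='vmhba33',
      targets=[vim.host.InternetScsiHba.StaticTarget(
          address='10.0.0.51', port=3260,
          iScsiName='iqn.1992-08.com.example:target01')])

  ss.RescanHba(hbaDevice='vmhba33')   # pick up the newly discovered targets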

Configure iSCSI port binding

This is done through the Network Configuration tab described above.  Simply select the port group containing the VMkernel port that you wish to bind your software iSCSI stack to.  If you are using a dependent hardware initiator, only the VMkernel ports associated with the initiator's corresponding NIC will be available.
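
Port binding is also exposed through the host's iSCSI manager in the 5.x API.  A rough pyVmomi sketch, assuming vmhba33 and vmk1 are the adapter and VMkernel port in your environment:

  from pyVim.connect import SmartConnect

  # Placeholder connection details; vmhba33 and vmk1 are assumptions
  si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
  host = si.content.searchIndex.FindByDnsName(dnsName='esxi01.example.com', vmSearch=False)

  iscsi_mgr = host.configManager.iscsiManager
  iscsi_mgr.BindVnic(iScsiHbaName='vmhba33', vnicDevice='vmk1')   # bind the VMkernel port

  # List what is currently bound to the adapter
  for port in iscsi_mgr.QueryBoundVnics(iScsiHbaName='vmhba33'):
      print(port.vnicDevice)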
 
Enable/Configure/Disable iSCSI CHAP
 
ESXi supports one-way CHAP (the target authenticates the initiator) for all types of initiators, and mutual CHAP (the target authenticates the initiator and the initiator authenticates the target) for software and dependent hardware initiators.  CHAP is set at the initiator level, meaning all targets inherit the same CHAP name and secret; however, for software and dependent hardware adapters, per-target CHAP is also supported, allowing you to configure different credentials for each target.
 
CHAP Security Levels
  • Do not use CHAP – Pretty self-explanatory; no CHAP authentication will be used.  This is supported across all initiators.
  • Do not use CHAP unless required by target – The host will prefer a non-CHAP connection, but can use CHAP security if the target requires it.  Supported only on software and dependent hardware initiators.
  • Use CHAP unless prohibited by target – The host will prefer CHAP, but if the target does not support or use it, it can fall back to non-CHAP.  Supported across all initiators.
  • Use CHAP – The host requires a successful CHAP connection.  Supported only on software and dependent hardware initiators.
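
To set initiator-level CHAP from a script, the call is UpdateInternetScsiAuthenticationProperties.  A minimal pyVmomi sketch, with placeholder adapter name and credentials, for the 'Use CHAP unless prohibited by target' level:

  from pyVim.connect import SmartConnect
  from pyVmomi import vim

  # Placeholder connection details; adapter name, CHAP name and secret are assumptions
  si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='secret')
  host = si.content.searchIndex.FindByDnsName(dnsName='esxi01.example.com', vmSearch=False)

  # Initiator-level one-way CHAP; 'Use CHAP unless prohibited by target' = chapPreferred
  auth = vim.host.InternetScsiHba.AuthenticationProperties(
      chapAuthEnabled=True,
      chapAuthenticationType='chapPreferred',
      chapName='iscsi-initiator01',
      chapSecret='SuperSecretPhrase')

  host.configManager.storageSystem.UpdateInternetScsiAuthenticationProperties(
      iScsiHbaDevice='vmhba33', authenticationProperties=auth)
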
Determine use case for hardware/dependent hardware/software iSCSI initiator
 
Independent Hardware Initiator
  • You would certainly want to utilize a hardware initiator if you are running production storage through iSCSI that requires a lot of I/O.  Using hardware iSCSI will offload most of the work from vSphere to the initiator.
Dependent Hardware Initiator
  • You might have NICs that currently support this mode of iSCSI, in which it would make more sense to use this than a software initiator.
Software Initiator
  • Certainly keeps costs low as you can utilize your existing NICs.
 
Determine use case for and configure array thin provisioning
 
Setting up thin provisioning on your array is going to differ from SAN to SAN.  In most cases it's simply a checkbox.  As for use cases, it is certainly easier to configure thin provisioning on the array than to configure it on each virtual disk inside vCenter.  
 
 
