Supported storage adapters include the following: SCSI, iSCSI, RAID, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Ethernet. These adapters are accessed directly through drivers in the VMkernel.
The following details are available under the Storage Adapters section on the Configuration tab of a host.
- Model – Model of the adapter
- Targets (FC and SCSI) – Number of targets accessed through the adapter
- Connected Targets (iSCSI) – Number of connected targets.
- WWN (FC) – World Wide Name, formed according to Fibre Channel standards, that uniquely identifies the adapter
- iSCSI Name (iSCSI) – Unique iSCSI Name
- iSCSI Alias (iSCSI) – The Friendly iSCSI Name
- IP Address (independent iSCSI HW adapters) – Address assigned to the adapter
- Devices – All devices or LUNs the adapter can access
- Paths – All paths the adapter is using
- Properties – Additional configuration (iSCSI and FCoE)
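These same per-adapter details are exposed through the vSphere API. Below is a minimal sketch using the pyVmomi library, assuming a hypothetical vCenter at vcenter.example.com and a host named esxi01.example.com (credentials and names are placeholders):

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab connection; certificate checking disabled for brevity
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.com", False)

# Walk the host's storage adapters and print the per-adapter details
for hba in host.configManager.storageSystem.storageDeviceInfo.hostBusAdapter:
    print(hba.device, hba.model, hba.status)
    if isinstance(hba, vim.host.InternetScsiHba):   # iSCSI-specific fields
        print("  iSCSI name:", hba.iScsiName, "alias:", hba.iScsiAlias)
    if isinstance(hba, vim.host.FibreChannelHba):   # FC-specific fields
        print("  WWN: %x" % hba.portWorldWideName)
```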
Each storage device or LUN is identified by several different names. Depending on the type of storage used, different algorithms are used to generate an identifier for that LUN.
SCSI INQUIRY identifiers are represented by one of the following formats:
- naa.number
- t10.number
- eui.number
Path-Based Identifiers. When the device does not provide the page 83 information, the host generates an mpx.path name, where path represents the path to the device, such as mpx.vmhba1:C0:T1:L3. This example denotes adapter vmhba1, channel 0, target 1, and LUN 3.
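A quick way to internalize these naming schemes is a small classifier. This is purely illustrative Python, not anything from the vSphere SDK:

```python
def classify_lun_identifier(name: str) -> str:
    """Return the naming scheme implied by a device identifier's prefix."""
    schemes = {
        "naa.": "Network Address Authority identifier (SCSI INQUIRY page 83)",
        "t10.": "T10 vendor identifier (SCSI INQUIRY page 83)",
        "eui.": "IEEE Extended Unique Identifier (SCSI INQUIRY page 83)",
        "mpx.": "host-generated path-based name (no page 83 data available)",
    }
    for prefix, scheme in schemes.items():
        if name.startswith(prefix):
            return scheme
    return "unknown identifier format"

print(classify_lun_identifier("mpx.vmhba1:C0:T1:L3"))   # path-based name
print(classify_lun_identifier("naa.600508b4000139280000500000790000"))
```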
Dependent Hardware Adapters
- Depends on VMware networking, with configuration and management interfaces provided by VMware.
- Usually presents a standard NIC and iSCSI offload functionality on the same port.
- Example: Broadcom 5709
Independent Hardware Adapters
- Implements its own networking and management interfaces
- All configuration, such as IP management, MAC addressing, and other parameters, is completely separate from VMware.
- Example: QLogic QLA4052
Software iSCSI Adapters
A software iSCSI adapter is VMware code running inside the VMkernel. It allows you to connect to an iSCSI target using only a standard NIC, without any specialized hardware.
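Enabling the software iSCSI adapter is a one-call operation against the host's HostStorageSystem managed object. A minimal pyVmomi sketch, with the same hypothetical connection details as earlier:

```python
import ssl
from pyVim.connect import SmartConnect

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.com", False)

# Turn on the software iSCSI initiator; its vmhba appears after a refresh/rescan
host.configManager.storageSystem.UpdateSoftwareInternetScsiEnabled(True)
```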
Virtual Disk Thin Provisioning
- Virtual disks can be created in a thin format, meaning the ESXi host provisions the entire logical space the disk requires, but commits only as much storage space as is actually used inside the disk.
- This is applied on a disk-by-disk basis within the VMs.
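You can see the effect of thin virtual disks by comparing a VM's committed space (blocks actually written) against its total provisioned size, which vSphere reports as committed plus uncommitted space. A sketch, assuming a VM reachable by the hypothetical DNS name vm01.example.com:

```python
import ssl
from pyVim.connect import SmartConnect

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
vm = si.content.searchIndex.FindByDnsName(None, "vm01.example.com", True)

s = vm.summary.storage
print("Committed:   %.1f GB" % (s.committed / 2**30))                  # space in use
print("Provisioned: %.1f GB" % ((s.committed + s.uncommitted) / 2**30))
```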
Array Thin Provisioning
- Rather than individual disks being thin provisioned, the entire LUN is thin provisioned.
- Performed on the array level.
- vSphere is unaware of this logically sized LUN unless the array is VAAI capable. The VAAI capabilities include monitoring the use of space on thin-provisioned LUNs, as well as informing the array of datastore space freed when files are deleted or moved, so the array can reclaim those blocks.
- Without the Storage APIs, a thin-provisioned 2TB LUN containing only 1TB of data reports to the ESXi host as being 2TB in size, when in fact it is only consuming 1TB on the array.
Describe zoning and LUN masking practices
Zoning
- Provides access control in the SAN topology. Essentially defines which HBAs can connect to which targets. Any devices outside the defined zones are not visible to the devices inside the zone.
- Reduces the number of targets and LUNs that are presented to a host.
- Controls and isolates paths
- Prevents non-ESXi hosts from seeing a VMFS datastore.
- Can be used to separate environments as well (test and production).
LUN Masking
- Similar to zoning, but applied at the host-to-LUN level.
- Limits which LUNs a host can see.
- A host may be zoned to see a specific LUN, yet that LUN can still be masked away from that host.
Scan/Rescan storage
Perform a rescan each time you do one of the following:
- Zoning a new disk array on a SAN
- Creating new LUNS on a SAN
- Changing any path masking on a host
- Reconnecting a cable.
- Changing any CHAP settings (iSCSI)
- Adding or Removing any discovery or static addresses (iSCSI)
- Adding a single host to vCenter after you have edited, or removed from vCenter, a datastore that is shared by the host(s) you are adding.
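Any of the changes above can be followed by a rescan driven through the API rather than clicking through the client. A minimal pyVmomi sketch (connection details hypothetical, as before):

```python
import ssl
from pyVim.connect import SmartConnect

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.com", False)

ss = host.configManager.storageSystem
ss.RescanAllHba()   # rescan every storage adapter for new devices/paths
ss.RescanVmfs()     # look for new or updated VMFS volumes
```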
Identify use cases for FCoE
Hardware FCoE Adapters
- Converged Network Adapters (CNAs) that are completely offloaded and contain both network and FC functionality on the same card.
- vSphere will recognize the card as both a standard network adapter (vmnic) and an FCoE adapter (vmhba).
Software FCoE Adapters
- Uses the native FCoE stack in the host for the protocol processing.
- Used with a NIC that offers Data Center Bridging (DCB) and I/O offload capabilities.
- Networking must be properly configured, and the adapter must be activated.
- Maximum of four software FCoE adapters per host.
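Activating a software FCoE adapter amounts to pointing the host at the DCB-capable uplink. A sketch using the HostStorageSystem.DiscoverFcoeHbas call; the vmnic name here is a placeholder:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.com", False)

# Discover/activate a software FCoE adapter on top of a DCB-capable NIC
spec = vim.host.FcoeConfig.FcoeSpecification(underlyingPnic="vmnic2")
host.configManager.storageSystem.DiscoverFcoeHbas(fcoeSpec=spec)
```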
Use Cases for FCoE
- If you have an existing Fibre Channel infrastructure and processes in place, you may want to use FCoE instead of exploring NFS or iSCSI.
- You get a lossless, extremely low-latency transport model while still utilizing a form of 'network storage'.
- By going with FCoE CNAs, you still get the option of using them for NFS as well.
VMFS Filter
- Filters out storage devices (LUNs) that are already used by a VMFS datastore on any host managed by vCenter.
- These LUNs will not show up as candidates to be formatted with VMFS or to be used by an RDM.
- config.vpxd.filter.vmfsFilter
RDM Filter
- Similar to the VMFS filter, but filters out LUNs already used by an RDM.
- In order for VMs to use the same LUN, they must be set up to share it.
- config.vpxd.filter.rdmFilter
Same Host and Transports Filter
- Filters out LUNs ineligible for use as a VMFS datastore extent because of host or storage type incompatibility.
- Prevents LUNs not exposed to all hosts that share the original VMFS datastore from being used as an extent.
- Also prevents LUNs that use a different storage type from the original VMFS datastore from being used as an extent.
- config.vpxd.filter.SameHostAndTransportsFilter
Host Rescan Filter
- Automatically rescans and updates VMFS datastores after you perform datastore management operations
- Helps to provide a consistent view of datastores across hosts.
- If this filter is turned off, it does not affect presenting a new LUN to a host or cluster; that rescan will still occur.
- config.vpxd.filter.hostRescanFilter
All of these filters can be enabled or disabled by going to Home -> vCenter Server Settings, clicking Advanced Settings, and entering the corresponding key with a value of true or false.
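The same keys can be set programmatically through vCenter's OptionManager. A sketch that disables the Host Rescan Filter, using the key and string value listed above (connection details hypothetical):

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())

# vCenter advanced settings live in the ServiceInstance's OptionManager
si.content.setting.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="config.vpxd.filter.hostRescanFilter", value="False")
])
```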
Setting up Independent Hardware iSCSI Adapters
- Check whether the adapter needs to be licensed – vendor documentation
- Install the adapter – vendor documentation
- Verify the adapter is installed correctly – If it is installed correctly it will be listed in the Storage Adapter section of the Configuration tab of the host.
- Configure Discovery information – explained further down
- Configure CHAP Parameters – explained further down
Setting up Dependent Hardware iSCSI Adapters
- View the dependent adapter – again, check the Storage Adapters section of the Configuration tab of the host. If your adapter isn't listed, ensure that it has a valid license – vendor documentation.
- Determine the association between dependent HW adapters and physical NICs – Select the appropriate adapter and click 'Properties'. From there, select the Network Configuration tab and click 'Add'. Add the corresponding NIC to the adapter.
- Configure Networking for iSCSI – explained further down.
- Configure Discovery Information – explained further down
- Configure CHAP – explained further down
General Tab
- By clicking 'Configure' you can change the adapter's status, iSCSI name, and alias. ***Note*** disabling an iSCSI initiator requires a host reboot.
- CHAP – allows you to set up the various CHAP settings – explained further down.
- Advanced – many advanced settings.
Network Configuration Tab
- Allows you to configure port bindings and select the port group to be associated with the software iSCSI stack – explained further below.
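Port binding can also be driven through the host's IscsiManager. A sketch that binds a VMkernel port to the software iSCSI adapter, where the vmhba and vmk names are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.com", False)

im = host.configManager.iscsiManager
im.BindVnic(iScsiHbaName="vmhba33", vnicDevice="vmk1")   # 1:1 vmknic-to-HBA binding
print([v.vnicDevice for v in im.QueryBoundVnics(iScsiHbaName="vmhba33")])
```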
Dynamic Discovery Tab
- Also known as Send Targets.
- Each time the initiator contacts the specified server, it sends a SendTargets request; the server responds by supplying a list of available targets.
- The names and IP addresses of those targets will appear on the Static Discovery tab. If you remove one of them from the Static Discovery tab, it will more than likely reappear the next time a rescan happens, the HBA is reset, or the host is rebooted.
- To configure, click 'Add'. Enter in the IP Address or DNS name of the storage system and click 'OK'. Once the connection is established, the static discovery list will be updated.
Static Discovery Tab
- No discovery is performed.
- Need to manually input the target names and the associated IP Address.
- Click 'Add' and specify target server name or IP, port, and associated target name (IQN).
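Both discovery modes map to HostStorageSystem calls. A sketch that adds one SendTargets (dynamic) server and one static target; the adapter name, addresses, and IQN are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.com", False)

ss = host.configManager.storageSystem
hba = "vmhba33"  # placeholder software iSCSI adapter name

# Dynamic discovery: register a SendTargets server
ss.AddInternetScsiSendTargets(iScsiHbaDevice=hba, targets=[
    vim.host.InternetScsiHba.SendTarget(address="192.168.1.50", port=3260)])

# Static discovery: pin one specific target by IQN
ss.AddInternetScsiStaticTargets(iScsiHbaDevice=hba, targets=[
    vim.host.InternetScsiHba.StaticTarget(address="192.168.1.50", port=3260,
        iScsiName="iqn.1998-01.com.example:storage.target01")])

ss.RescanAllHba()  # pick up the newly discovered targets
```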
Configure iSCSI port binding
Port binding is performed from the Network Configuration tab described above: create a VMkernel port for each physical NIC you want to use for iSCSI, then bind those VMkernel ports to the software or dependent hardware iSCSI adapter.
Enable/Configure/Disable iSCSI CHAP
- Do not use CHAP – Pretty self-explanatory; no CHAP authentication will be used. This is supported across all initiators.
- Do not use CHAP unless required by target – The host will prefer a non-CHAP connection, but can use CHAP if the target requires it. Supported only on software and dependent hardware initiators.
- Use CHAP unless prohibited by target – The host will prefer CHAP, but if the target does not support or use it, the host can fall back to non-CHAP. Supported across all initiators.
- Use CHAP – The host requires a successful CHAP connection. Supported only on software and dependent hardware initiators.
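These four security levels correspond to the chapAuthenticationType values in the API: "chapProhibited", "chapDiscouraged", "chapPreferred", and "chapRequired" respectively. A sketch that sets 'Use CHAP unless prohibited by target' on a placeholder adapter, with placeholder credentials:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
host = si.content.searchIndex.FindByDnsName(None, "esxi01.example.com", False)

auth = vim.host.InternetScsiHba.AuthenticationProperties(
    chapAuthEnabled=True,
    chapName="esxi01",                       # placeholder CHAP user name
    chapSecret="secret",                     # placeholder CHAP secret
    chapAuthenticationType="chapPreferred",  # use CHAP unless prohibited by target
)
host.configManager.storageSystem.UpdateInternetScsiAuthenticationProperties(
    iScsiHbaDevice="vmhba33", authenticationProperties=auth)
```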
Determine use cases for hardware, dependent hardware, and software iSCSI initiators
- Hardware (independent) initiator – You would certainly want to utilize one if you are running production storage through iSCSI that requires a lot of I/O; hardware iSCSI offloads most of the work from vSphere to the initiator.
- Dependent hardware initiator – You might already have NICs that support this mode of iSCSI, in which case it makes more sense to use them than a software initiator.
- Software initiator – Certainly keeps costs low, as you can utilize your existing standard NICs.