VCP 5 – Objective 3.2 – Configure the Storage Virtual Appliance for vSphere

Define Storage Virtual Appliance (SVA) architecture

Cluster Architecture
  • 2 or 3 physical hosts running ESXi 5 with local storage
  • The vSphere Storage Appliance VMs run on top of the hosts, providing clustering services that create volumes exported as the VSA datastores
  • If only using a 2 node cluster, an additional service called the VSA cluster service will run on the vCenter Server machine.  This service participates as a member in the cluster, but doesn't provide storage.
  • In order to remain online, the cluster requires a majority of its nodes to be online.  Thus, in a 2 node cluster, if one node fails, the vCenter Server (which runs the VSA cluster service) must remain online in order to keep the VSA online (see the sketch below).
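
A minimal Python sketch of that majority rule (the function and counts are illustrative, not part of the product):

  def cluster_stays_online(online_voters, total_voters):
      # The VSA cluster stays online only while a majority of voters is up.
      return online_voters > total_voters / 2

  # 2-node cluster + VSA cluster service on vCenter = 3 voters
  print(cluster_stays_online(2, 3))  # True: one ESXi node down, vCenter still up
  print(cluster_stays_online(1, 3))  # False: an ESXi node and vCenter are down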

Network Architecture

  • All hosts within the cluster must have at least 4 NICs (either 2 dual-port or 4 single-port), all at least 1Gb.
  • The VSA network traffic is divided into front end and back end traffic.
  • Front End Traffic handles
    • Communication between each VSA node and the VSA Manager
    • Communication between ESXi and the VSA Volumes
    • Communication between each VSA cluster member and the VSA Cluster Service
    • vMotion traffic between hosts
  • Back End Traffic handles
    • Replication between a volume and its replica
    • Clustering communication between all VSA Members
  • Each VSA has two virtual NICs: one to handle front-end and one to handle back-end traffic.  The back-end vNIC has an IP address from a private subnet, whereas the front-end vNIC can have up to three IP addresses (one for the VSA management network, one for the exported NFS volume, and one for the VSA cluster).  See the layout sketch after this list.
  • The VSA cluster IP can move between nodes because it is assigned to the cluster leader; if the current leader fails, the IP migrates to whichever member is elected as the new leader.
  • VSA cluster installation creates two standard switches on each ESXi host to isolate front-end and back-end traffic.
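
To make the IP layout concrete, here is an illustrative sketch of one member's vNICs as a Python dict (all addresses are placeholders, not product defaults):

  vsa_member = {
      "front_end_vnic": {
          "management_ip": "10.0.0.60",   # VSA management network
          "datastore_ip":  "10.0.0.61",   # exported NFS volume
          "cluster_ip":    "10.0.0.50",   # held only by the current cluster leader
      },
      "back_end_vnic": {
          "backend_ip": "192.168.10.60",  # private subnet: replication and clustering
      },
  }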

How VSA Cluster Handles Failures 

  • Each VSA datastore is backed by two volumes: one cluster member exports the main volume as the VSA datastore, while another VSA member maintains the second volume as a replica.
  • If a failure occurs to the main member, the secondary member will take over that datastore and activate its replica.
  • After the main member comes back online, it synchronizes itself with the replica to provide protection against further failures.
  • A VSA cluster can provide automatic failover after the failure of a single physical NIC, a single physical switch, a single physical host, or a single VSA cluster member (see the sketch below).
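
The following Python sketch illustrates the replica takeover logic; the ring placement below is an assumption for the example, not the product's documented placement algorithm:

  members = ["VSA-0", "VSA-1", "VSA-2"]
  placement = {m: {"exports": f"VSA-DS-{i}",
                   "replica_of": f"VSA-DS-{(i - 1) % len(members)}"}
               for i, m in enumerate(members)}

  def failover(failed_member):
      # Find the surviving member that holds the replica of the failed
      # member's datastore; it activates the replica and takes over.
      lost_ds = placement[failed_member]["exports"]
      for member, roles in placement.items():
          if member != failed_member and roles["replica_of"] == lost_ds:
              return member, lost_ds

  print(failover("VSA-1"))  # ('VSA-2', 'VSA-DS-1')
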
Configure ESXi hosts as SVA hosts
 
HW Requirements for ESXi Hosts in a VSA Cluster
  • All ESXi hosts in the cluster need to have the same hardware configuration
  • 64-bit CPUs (obviously), at least 2 GHz per core.
  • 6 GB Minimum Memory, 24 GB Recommended, 72 GB Maximum supported and tested.
  • 4 Gigabit NIC ports per host.
  • 4, 6, or 8 hard disks of the same model and capacity.
  • 2 TB max capacity per disk, 180 GB minimum total disk capacity per host.
  • All disks must be the same type (all SAS or all SATA).
  • RAID 10 is required! (See the capacity sketch after this list.)
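
The RAID 10 requirement plus the one-replica-per-volume design means usable datastore capacity works out to roughly a quarter of raw disk space (before VSA management overhead). A quick back-of-the-envelope in Python:

  def usable_vsa_capacity_gb(disks_per_host, disk_gb):
      raw = disks_per_host * disk_gb
      after_raid10 = raw / 2            # RAID 10 mirrors every disk
      after_replica = after_raid10 / 2  # every volume keeps a full replica
      return after_replica

  # e.g. 8 x 2 TB (2000 GB) disks -> roughly 4000 GB of datastore capacity
  print(usable_vsa_capacity_gb(8, 2000))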

SW Requirements

  • Must be running ESXi 5
  • Must be licensed with Essentials Plus or higher if using a licensed VSA.  If using the trial VSA, you can use trial ESXi licenses.
  • ESXi hosts cannot participate in any other cluster
  • Each host needs the standard vSwitch and port groups that are created by default.  Do not create additional switches.
  • Must have a static IP address in the same subnet as vCenter Server.
  • No VMs residing on the hosts.

Network Requirements

  • Must have at least one Gigabit Ethernet switch that supports VLAN tagging.

VSA Manager Requirements

  • Same HW requirements as vCenter Server
  • 4.7 GB Free Disk Space
  • Open ports 2181 (VSA Client Port), 2888 (VSA Server Port), 3888 (VSA Election Port), and 2375 (VSA Java Remote Method Invocation Port); see the reachability check below.
  • Installation needs to be run under a local administrator account.
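
A quick Python sketch to check that those ports are reachable on the vCenter machine (the host name is a placeholder):

  import socket

  VSA_PORTS = {2181: "client", 2888: "server", 3888: "election", 2375: "Java RMI"}

  for port, role in VSA_PORTS.items():
      with socket.socket() as s:
          s.settimeout(2)
          reachable = s.connect_ex(("vcenter.example.com", port)) == 0
          print(f"{port} ({role}): {'open' if reachable else 'closed/filtered'}")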

Process to add hosts to a VSA Cluster

  1. Install ESXi on the hosts using the HW and SW requirements above.
  2. Install an instance of vCenter Server.
  3. Create a new Datacenter.
  4. Add the hosts to the Datacenter (steps 3 and 4 can be skipped if using the automated VSA installer, or scripted; see the sketch after this list).
  5. Install VSA Manager on the vCenter Server
  6. The installation registers the VSA Manager plug-in with vCenter (you may need to enable the plug-in afterward) and also installs the VSA cluster service.
  7. Next time you connect and select a datacenter you should see the VSA Manager tab.
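
Steps 3 and 4 can also be done programmatically. A minimal sketch using pyVmomi (names and credentials are placeholders; error handling is omitted):

  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  si = SmartConnect(host="vcenter.example.com", user="administrator",
                    pwd="password", sslContext=ssl._create_unverified_context())
  try:
      content = si.RetrieveContent()
      dc = content.rootFolder.CreateDatacenter(name="VSA-Datacenter")  # step 3
      spec = vim.host.ConnectSpec(hostName="esxi01.example.com",       # step 4
                                  userName="root", password="password",
                                  force=True)  # you may still need to supply
                                               # the host's SSL thumbprint
      dc.hostFolder.AddStandaloneHost_Task(spec=spec, addConnected=True)
  finally:
      Disconnect(si)
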
Configure the storage network for the SVA / Deploy / Configure the SVA Manager (yeah, I bundled these ones up).
 
From within the vSphere Client, select the datacenter containing the hosts that you would like to cluster.  Click the VSA Manager tab.  This should open the VSA Installer wizard.
 
Follow the steps in the wizard:
  1. Select the datacenter for the VSA cluster.
  2. Select the hosts that you want to participate (these are categorized by CPU family, and you can only select hosts from the same family).
  3. Configure the networking by assigning IP addresses and configuration for the following
    • VSA Cluster IP Address (A static IP for the VSA cluster.  This will be assigned to the cluster member that is elected as the leader.  Do not use an IP from a 192.168 private network).
    • VSA Cluster Service IP (the IP for the cluster service, which runs on the vCenter Server machine when only using 2 nodes.  Do not use a 192.168 private network).
    • For each ESXi host assign
      • A Management IP Address (this is used for the management of the VSA cluster.  Do not use a 192.168 private network)
      • A Datastore IP Address (this will be the IP used for the NFS volume that will be exported as a VSA datastore.  Do not use a 192.168 private address)
      • vSphere Feature IP (Can either be set static or you can use DHCP)
      • Back-end IP Address (this will be used for the back-end network of the VSA cluster.  This address must reside in a 192.168 private network; a quick validation sketch follows this list).
  4. Select when to format your disks.  First access means disks are formatted after installation, on the first read/write.  Immediately formats and zeroes the disks during installation, which takes extra time.
  5. Review the config and click Install and confirm.
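
Since the wizard enforces the 192.168 rules above, it can help to sanity-check your IP plan first. A small Python sketch (roles and addresses are placeholders):

  import ipaddress

  PRIVATE_192 = ipaddress.ip_network("192.168.0.0/16")

  def check_vsa_ip_plan(plan):
      # Back-end must be in 192.168.x.x; every other role must not be.
      for role, ip in plan.items():
          in_192 = ipaddress.ip_address(ip) in PRIVATE_192
          ok = in_192 if role == "back-end" else not in_192
          print(f"{role:12} {ip:16} {'OK' if ok else 'VIOLATES RULE'}")

  check_vsa_ip_plan({
      "cluster":    "10.0.0.50",
      "service":    "10.0.0.51",
      "management": "10.0.0.60",
      "datastore":  "10.0.0.61",
      "back-end":   "192.168.10.60",
  })
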
Administer SVA storage resources
 
All VSA resources can be managed by selecting the VSA Manager tab while on the Datacenter object from within the vSphere Client.  There are several resources that you can manage from within this tab, as well as a few other notables outlined below.
 
Memory Over Commitment
  • Memory overcommitment is not supported with the VSA, since swapping to VSA datastores can make the cluster unstable.
  • To prevent this, it is recommended not to overcommit memory, by doing the following (see the sketch after this list)
    • Set a memory reservation on each virtual machine for the full amount of memory it is allocated.
    • Prevent each VM from swapping to the VSA datastores.
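
A hedged pyVmomi sketch of the reservation step, assuming vm is a vim.VirtualMachine object you already hold:

  from pyVmomi import vim

  def reserve_all_memory(vm):
      # Reserve the VM's full configured memory so it never needs to swap
      # to a VSA datastore; returns a Task you can wait on.
      spec = vim.vm.ConfigSpec()
      spec.memoryAllocation = vim.ResourceAllocationInfo(
          reservation=vm.config.hardware.memoryMB)  # reservation is in MB
      return vm.ReconfigVM_Task(spec=spec)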

Performing Maintenance Tasks on a VSA Cluster or VSA Cluster Member

  • To put the entire cluster into maintenance mode select the VSA Cluster Maintenance Mode link.  This will set the status of the cluster to Maintenance and take all of the VSA datastores offline.
  • To put a single node into maintenance mode, go into the Appliances view, select the cluster member, and select Appliance Maintenance Mode.  This will take the cluster member offline; however, the datastore that was exported by this member will still be available through its replica on another host.  The status of the datastore will change to Degraded.

Replacing a Cluster Member

  • Power off and remove the failed host, and add a replacement ESXi host to vCenter.
  • In the Appliances view, right-click the member that is offline (failed) and click 'Replace Appliance'.
  • Select the vSphere Storage Appliance whose status is offline, then select the newly installed ESXi host.
  • Choose your formatting method again, then click Install and confirm.

Changing the VSA Password

  • Simply click the 'Change Password' link.  *** Note *** The username for the cluster is svaadmin and the default password is svapass.

Reconfigure the VSA Network

  1. Put cluster in Reconfigure Network Mode (Pretty easy, select 'Enter Reconfigure Network Mode')
  2. Reconfigure vCenter Network Settings.
  3. Reconfigure the networking on all the ESXi hosts.
  4. Remove all Feature Port Groups from the hosts (VSA-VMotion).
  5. Reconnect all ESXi hosts to vCenter Server.
  6. Enable the VSA Manager Plug-In.
  7. Reconfigure the VSA Network (selecting 'Reconfigure Network') 

Monitoring a VSA Cluster

  • Cluster Information Displayed includes
    • Name and status
    • IP addresses for members as well as Cluster management IP.
    • Capacity of the cluster: Physical Capacity (total capacity of all the hard disks across all ESXi hosts) and Storage Capacity (total capacity of the VSA datastores that you can store VMs on).
  • Datastore Information (Name, Status, Capacity, Free, Used, Exported By, Datastore Address, Datastore Network); see the listing sketch below.
  • Cluster Member information (Name, Status, Capacity, Management Address, Back-End Address, Exported Datastores, Hosted Replica, Host).
  • Graphical Map of Cluster
    • Datastore to Replica
    • Datastore to vSphere Storage Appliance
    • Replica to vSphere Storage Appliance
    • vSphere Storage Appliance to host.
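
Beyond the VSA Manager tab, the same capacity columns can be pulled programmatically. A minimal pyVmomi sketch that lists NFS datastores (VSA datastores are exported over NFS; names and credentials are placeholders):

  import ssl
  from pyVim.connect import SmartConnect, Disconnect
  from pyVmomi import vim

  si = SmartConnect(host="vcenter.example.com", user="administrator",
                    pwd="password", sslContext=ssl._create_unverified_context())
  try:
      content = si.RetrieveContent()
      view = content.viewManager.CreateContainerView(
          content.rootFolder, [vim.Datastore], True)
      gb = 1024 ** 3
      for ds in view.view:
          if ds.summary.type == "NFS":
              print(f"{ds.name}: {ds.summary.capacity // gb} GB total, "
                    f"{ds.summary.freeSpace // gb} GB free")
  finally:
      Disconnect(si)
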
Determine use case for deploying the SVA
 
The VSA is certainly targeted towards the SMB market, or those businesses looking to save money by not purchasing a full-fledged SAN for shared storage.  It also allows companies to repurpose some old physical servers that might be chock-full of drives, and saves rack space by utilizing your ESXi hosts as your SAN.
 
Determine appropriate ESXi host resources for the SVA
 
Host resources will always depend on the environment and the VM workloads that you are running.  Keep in mind you are limited to 8 drives per host (2 TB each) as well as 3 nodes.  Also, memory overcommitment is not supported, so you may require more physical memory than normal.  You need 4 NICs per host.  The host is your SAN as well as your hypervisor!
