Tag Archives: Storage

Setting up VVOLs on HP 3PAR

As I’ve recently brought an HPE 3PAR 7200 into production with an ESXi 6.0 U2 cluster, I thought there was no better time than now to check out just how VVOLs are implemented by HPE.
Although the tasks themselves aren’t difficult by any means, I find the documentation is a bit scattered across different KBs and documents between VMware and HPE, especially if you have upgraded to the latest firmware (3.2.2 MU2).

Pre-reqs

As far as prerequisites go there really aren’t many other than ensuring you are up to date on both your 3PAR firmware and ESXi versions.  For the 3PAR you will need to ensure you are running at the very least 3.2.1.  In terms of vSphere – 6.0 or higher.  Also don’t forget to check your HBAs on the VMware HCL and ensure that they are actually supported as well, and note the proper firmware/driver combinations recommended by VMware.
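
If you want to sanity-check those versions from the command line before going any further, something like the following should do it – I’m assuming standard CLI access on the 3PAR and SSH access to the hosts:

showversion                  # on the 3PAR – confirm you are on 3.2.1 or later
vmware -vl                   # on each ESXi host – confirm the ESXi version and build
esxcli storage san fc list   # on each ESXi host – list the FC HBAs to check against the HCL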

After spending the day(s) updating firmware (ugh!) it’s finally time to get going.

Step 1 – Time

NTP is your friend for this step.  Before proceeding any further you need to ensure that all of your hosts, your vCenter Server and the 3PAR are synced in terms of time.  If you have NTP set up and running then you are laughing here, but if you don’t, stop looking at VVOLs and set it up now!  It should be noted that the 3PAR and the VMware infrastructure can be set to different time zones, however they must still be synced in terms of time!
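
A quick and dirty way to eyeball the sync is simply to compare clocks on both sides – nothing fancy here, and I’m assuming ntpq is present on your hosts (it was on mine):

showdate     # on the 3PAR – shows the current date/time on the nodes
date -u      # on each ESXi host – current UTC time
ntpq -p      # on each ESXi host – confirm ntpd is actually talking to your NTP servers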

Step 2 – Can we see the protocol endpoint?

At this stage we should actually check our ESXi hosts to ensure we can see the protocol endpoint on the 3PAR.  To do so we will need to ensure that we see the same WWN after running a couple of different commands.  First, as shown below, the ‘showport’ command on our 3PAR.  Circled is the WWN of our 3PAR array.  Make note of this!

showport

With the WWN of our storage array in memory we can now head over to our ESXi hosts.  SSH in and run the ‘esxcli storage core device list --pe-only’ command.  This command will return any Protocol Endpoints visible from the ESXi host.  If all goes well we should see the same WWN that we did with showport, and the ‘Is VVOL PE’ flag set to true – as shown below.

pe-only

As you can see, we have a match so at least we have some visibility from our hosts!

Step 3 – VASA

showvasa

As we all know, the whole concept of VVOLs requires the array to support VASA 2.0 and act as a storage provider for vCenter – this is what allows us to create our VM profiles and have the array automatically provision VVOLs depending on which profile is selected.  On the 3PAR we can check the status of VASA by simply running the ‘showvasa’ command.  In the case shown we can see that it is already enabled and functioning properly, however this wasn’t always the case for me.  To enable the service I first tried the ‘startvasa’ command, however it complained about not having a certificate.  The easiest way to generate one, if you plan on using self-signed certificates, is to simply run the ‘setvasa -reset’ command.  This will reset your VASA configuration and generate a self-signed cert.  After this you can simply run ‘startvasa’ to get everything up and running…
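
Just to recap the order of operations on the 3PAR side – only run the reset if you are happy with a self-signed certificate, as it wipes any existing VASA certificate configuration:

showvasa         # check the current VASA status and note the VASA_API2_URL
setvasa -reset   # only if startvasa complains about a missing certificate – generates a self-signed cert
startvasa        # start the VASA provider service
showvasa         # confirm it now shows as enabled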

Step 4 – Create the storage container

Now if you are following the HPE VVOL integration guide you won’t see this step, mainly because that guide was written around the 3.2.1 firmware, which came with a single, default storage container already created for you.  If you are running 3.2.2 you have the option to create more than one storage container, but by default it comes with, well, no storage containers at all.  So before we go and register our vCenter with the VASA provider we first need to create a storage container to host our VVOL datastore.  First, create a new Virtual Volume set with the following command

createvvset myvvolsetname

Then, let’s create our storage container and assign our newly created set to it

setvvolsc -create set:myvvolsetname

Again, these commands wouldn’t be required in 3.2.1 as far as I know, but they are in 3.2.2.
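
If you want to double-check your work before heading over to vCenter, the following should confirm the set and the container exist – showvvset is a standard command, while showvvolsc should be available on 3.2.2, though I’m going from memory on that one so treat it as an assumption:

showvvset myvvolsetname   # confirm the virtual volume set exists
showvvolsc                # list the VVOL storage container(s)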

Step 5 – Register our VASA within vCenter

Now it’s time to head over to the familiar, lightning-fast interface we call the vSphere Web Client and register the 3PAR’s VASA implementation as a storage provider.  Make note of the ‘VASA_API2_URL’ shown in step 3 – you will need this when registering.  With your vCenter Server context selected, navigate to Manage->Storage Providers and click the plus sign to add a new storage provider.

registerprovider

Enter your VASA URL from step 3, along with a name, username, and password and click ‘OK’.  For this instance I’ve used 3paradm, but you may be better off creating a dedicated account with just the ‘service’ role within the 3PAR.  Either way, get your new storage provider registered in vCenter and wait for the Status to show as Online and active.

Step 6 – The VVOL datastore

We are almost there, I promise!  Before we can deploy VMs within a VVOL or assign storage profiles that map to certain CPGs within the 3PAR, we need to have our VVOL datastore set up within vCenter.  I found the best spot to create this datastore is by right-clicking the Cluster or ESXi host we want to have access to VVOLs and selecting Storage->New Datastore.  Instead of selecting VMFS or NFS as we normally would, select VVOL as the type, as shown below

vvoltype

On the next screen simply give your datastore a name and select the storage container (this is what we made available in step 4).  Then, simply select the hosts you wish to have access to deploy VVOLs to and away you go!
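
If you want to confirm from the host side that everything lined up, ESXi 6.0 has a vvol namespace under esxcli that should show both the provider and the container (exact output will vary a bit between builds):

esxcli storage vvol vasaprovider list      # should list the 3PAR VASA provider as online
esxcli storage vvol storagecontainer list  # should show the container backing our new VVOL datastore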

Step 7 – Storage Profiles

At this point you could simply deploy VMs into your newly created VVOL datastore – the 3PAR will intelligently choose the best CPG to create the VVOL in – but the real power comes from being able to assign certain VM storage profiles to our disks, and having the VVOL land in the proper CPG depending on the array capabilities.  Storage Profiles are created by clicking on the Home icon and navigating to Policies and Profiles within the web client.  In the VM Storage Profiles section simply click the ‘Create new storage profile’ button.  Give your new profile a name and continue on to the Rule-Set section.

profile

The Rule sets of my “Silver” VM storage profile are shown above.  As you can see, I’ve specified that I want this storage profile to place VM disks within my FastClass RAID 5 CPG, and place their subsequent snapshots in the SSD tier CPG.  When you click next you will be presented with a list of the compatible and incompatible storage.  Select your compatible storage and click next.  Once we have all of the profiles we need we can simply assign them to our VM’s disks as shown below…

vmprofile

As you can see I’ve selected our newly created “Silver” policy for our new VM.  What this states is that when this VM is created, a new VVOL will be created on our FastClass disks on the 3PAR to house the VM.

Step 8 – VVOL visibility

Although we are technically done with deploying VVOLs at this point, I wanted to highlight the showvvolvm command that we can utilize on the 3PAR in order to gain visibility into our VVOLs.  The first use is simply listing out all of our VMs that reside on VVOLs within the 3PAR.

showvvolvm -sc

showvvol1

As you can see by the Num_vv column we have 3 VVOLs associated with our VM (MyNewVM).  But how do we get the information on those VVOLs individually?  We can use the same command, just with the -vv flag

showvvolvm -sc -vv

showvvol2

So now we can see that we have 1 VVOL dedicated for the config, 1 VVOL dedicated for the actual disk of the VM, and finally 1 VVOL hosting a snapshot that we have taken on the VM.
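
And since, under the covers, each of those VVOLs is just a virtual volume living in the storage container on the array, a plain showvv should list them alongside your traditional volumes – just don’t expect friendly names, as they are auto-generated (that’s my understanding of it anyway):

showvv   # the config, data and snapshot VVOLs show up here as regular virtual volumes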

Anyways, that’s all I have for now – although I haven’t gone too deep into each step I hope this post helps someone along the way get their VVOLs deployed as I had a hard time finding all of this information in one spot.  For now I like what I see between HP and VMware concerning VVOLs – certainly they have a long road ahead of them in terms of adoption – we are still dealing with a 1.0 product from VMware here and there are a lot of things that need to be worked out concerning array based replication, VASA high availability, functionality without VASA, GUI integration, etc – but that will come with time.  Certainly VVOLs will change the way we manage our virtualized storage and I’m excited to see what happens – for now, it’s just fun to play with 🙂  Thanks for reading!

Quick Fix – Making your inactive NFS datastore active again!

Sticking to my rule of “If it happens more than once I’m blogging about it” I’m bringing you this quick post around an issue I’ve seen a few times in a certain environment.  Although this is solved by only a few esxcli commands I always find it easier for me to remember (and find) if I post it here… 🙂

Anyways, as it is I have a couple of NFS datastores that sometimes “act up” a bit in terms of their connections.  For the most part they are fine and dandy – however every now and then they show up within the vSphere client as “inactive” and ghosted.  After checking the network (I always try and pin things on the network) it appears that all the connections are fine – Host communicates with storage, storage with host – the same datastores are even functioning fine on other hosts.  Logically my next step is to remount them on the host in question but when trying to unmount and/or remount them through the vSphere client I usually end up with a “Filesystem busy” error.

Thankfully it doesn’t take a lot to fix this issue, but could certainly become tedious if you have many NFS datastores which you need to perform these commands on…

First up, list the NFS datastores you have mounted on the host with the following

esxcli storage nfs list

You should see that the ‘inactive’ datastores are indeed showing up with false under the accessible column.  Make note of the Volume Name, Share Name and Host as we will need this information for the next couple of commands….

Before we can add our datastore back we need to first get rid of it.  The following command takes care of that…

esxcli storage nfs remove -v DATASTORE_NAME

Depending on whether or not you have any VMs registered on the datastore and host you may get an error, you may not – I’ve found it varies 🙂  Anyways, lastly we simply need to mount the datastore back to our host using the following command…  Be sure to use the exact same values you gathered from the ‘nfs list’ command.

esxcli storage nfs add -H HOST -s ShareName/MountPoint -v DATASTORE_NAME

There you go!  You should now have a happy healthy baby NFS datastore back into your storage pool.  On a side note I’d love to see some sort of esxcli storage nfs remount -v DATASTORE_NAME command go into the command line in order to skip some of these steps – but, hey, for now I’ll just use three commands.
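
Just so it’s all in one spot, here’s the full sequence with made-up values – substitute the Volume Name, Host and Share exactly as they appear in your ‘nfs list’ output:

esxcli storage nfs list                                                           # note the Volume Name, Host and Share of the inactive datastore
esxcli storage nfs remove -v NFS-Datastore01                                      # remove the inactive mount
esxcli storage nfs add -H 10.0.0.50 -s /vol/nfs_datastore01 -v NFS-Datastore01    # re-add it with the exact same values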

Learning 3PAR – Part 2 – Moar Chunklets

In Part 1 we went through some of the common terminology within the HP 3PAR array and now we will go into a bit more detail about one of them – the Chunklet!  A Chunklet is a key player in how the 3PAR aims to utilize all of the disks within the array, and in turn maximize the performance and protection that it can get out of the array!  With that said, I mentioned that during the initialization of a physical disk it is divided up into 1GB Chunklets, but what I didn’t mention is that there are a few different types of Chunklets within the HP 3PAR.  Now, these may not be “official” HP names as I kind of named them myself during my reading.  And for some reason I’m now craving gum 🙂

Normal Used Chunklets

These are the Chunklets that are utilized by Logical Disks.  They are strung together within different RAID sets across different physical disks in order to provide capacity to a CPG, which in turn passes it along to a Virtual Volume (this is essentially our datastore when it’s all said and done).  These chunklets hold our production data.

Normal Reserved Chunklets (Logging Chunklets)

I don’t know if these really exist but this is what I’m going to call them.  They are pretty much the same as Normal Used Chunklets, however they have been pre-configured into reserved Logical Disks which are created by the system.  We normally see a reserved Logical Disk for logging (used for disk failures/rebuilds), admin (used to store event logs and administration information) and srdata (used to store historical stats and information).  We will often see these logical disks containing chunklets closer to the end of the spindles as well.

Normal Unused (Free) Chunklets

These Chunklets are exactly how they are described – they are Chunklets that are provisioned, and are NOT spares, but have not yet been claimed by any Logical Disk.  It’s pretty safe to say that during installation all chunklets (except designated spares and reserved) are essentially free chunklets until you start provisioning LUNs.

Spare Chunklets

Some Chunklets will be designated as spares during the initialization of the 3PAR.  Meaning, not all 1GB Chunklets are available to be used within a Logical Disk.  Spare Chunklets are essentially placeholders which are utilized when we have a physical disk failure and the Logical Disk RAID set needs to be rebuilt.  An intelligent note here – the system automagically selects which Chunklets are to be assigned as spares, however it does it in a way that most of the spare chunklets are located as close to the end of the physical disk’s block space as possible, leaving the closer blocks for production.
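
If you’re curious how many chunklets your own array has set aside, a couple of CLI commands will show you – these are standard show commands as far as I know, but consider the flag my assumption:

showspare   # list the chunklets designated as spares and where they live
showpd -c   # show per-physical-disk chunklet usage – normal, spare and free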

Chunklet Relationships

Everything just seems silly with the word chunklet in front of it 🙂  Either way, there are a few terms that are used to describe the relationships between our Normal Used Chunklets and all other chunklets within the system.

  • Local Spare Chunklets –  This would be a chunklet designated as a spare, whose primary path is connected to the same node that owns the source logical disk containing the used chunklet.
  • Local Free Chunklet – An Unused/Free chunklet whose primary path is connected to the same node that owns the source logical disk containing the used chunklet.
  • Remote Spare Chunklet – A spare chunklet whose primary path is connected to a node different than the node owning the source logical disk containing the used chunklet.
  • Remote Free Chunklet – A free/unused chunklet whose primary path is connected to a node different than the node owning the source logical disk containing the used chunklet.

We have mentioned failing physical disks a couple of times now, so I think this would be a good time to discuss what exactly happens during a disk failure and how it affects our Chunklets…

  • When a connection is lost or a failure of a physical disk occurs, the system immediately forwards all cached writes destined for the failed chunklets to chunklets contained in the reserved Logging Logical Disk.  This occurs until the failed physical disk/chunklets come back online, until the Logging LD becomes full, or until the rebuild process has been completed.
  • The rebuild process occurs concurrently with the above step, where the system begins to reconstruct lost data utilizing the remaining chunklets and RAID levels provided.
    • There is some logic that happens during this rebuild/relocation phase as well – the system first looks to select a local spare chunklet; if none are to be found it moves on to a local free chunklet, then a remote spare chunklet, and finally a remote free chunklet.  All the while it tries to maintain consistency in terms of the characteristics between the failed and target chunklets (speed, drive type, etc.).
  • Once the rebuild has completed, the logging disks are replayed and data flushed back down to the newly constructed volume.

So in the end these little tiny 1GB chunks of contiguous space are a key player in the 3PAR array.   To help understand them I tend to try to remove the fact that they are on individual drives, and think of them somewhat as really small, granular 1GB drives, some marked as spares, some in different logical drives with different raid sets, and some set aside to provide functionality for the array.  All that said though they are not separate drives, different chunklets live on the same drive, leaving us with the ability to provide different RAID levels on the same drive, mix and match different sized drives without wasting capacity, and stripe our logical disks across multiple shelves, and in some cases, even provide shelf-level protection.   Plus, they make for a nice little visualization of coloured blocks within the 3PAR Management Console 🙂

Learning 3PAR – Part 1 – Chunklets, Logical Disk, CPGs, and Virtual Volumes

As I’m currently in the beginning phases of an HP 3PAR deployment I thought it might be a good idea to write a few posts centered on some of the concepts built around the 3PAR architecture.  For the most part I can relate the different terminology to other storage arrays I’ve used in the past, but some of it is somewhat new to me as well.  Either way I’m no expert and am still learning myself so ease up on me if I make a mistake eh!  Anyways, for the first part of this series I’ll concentrate simply on some of the terminology and layers that exist within the 3PAR StoreServ and try to explain them the best I can – remember, I’m explaining them to me as well!

5 Layers to the hosts

As with any array, the path that data takes to get from our hosts to its final destination on disk is a complex one – but thankfully we don’t have to worry about all of the bumps in the road along the way.  That said, it’s always nice to understand the road as best we can in order to determine how best practices and configuration changes will apply to our environment.  With the 3PAR that path contains 5 essential layers: Virtual Volumes, Common Provisioning Groups, Logical Disks, Chunklets, and Physical Disks.

3pardisk

 

We can somewhat see from the diagram the relationship between each layer, but before taking a holistic view let’s first discuss each layer…

Physical Disks

This is an easy one right?  A physical disk is just that, a physical disk located inside of your 3PAR array, encompassing all types of disk within the array.

Chunklets

The first thing a 3PAR does when it is discovering its storage is break down all of the capacity on your physical disks into chunklets.  Each chunklet is 1GB in size and occupies contiguous space on a physical disk.  Chunklets are local to that physical disk only and cannot span to others.

Logical Disks

Logical disks are essentially a grouping of chunklets which are arranged as rows of like RAID sets.  LDs will ensure that each chunklet within a given RAID set is physically located on a different physical disk.  We don’t directly create LDs on the 3PAR – they are generated during the creation of a CPG (explained next), or more precisely, when a Virtual Volume is created on a CPG.  All of the metadata, however – RAID type, allocation, growth of an LD – is defined when creating the CPG itself.

Common Provisioning Groups (CPG)

A CPG is simply a pool of Logical Disks that provides the means for a Virtual Volume (explained next) to consume space.  When we deploy a CPG we do not actually use any of the space in our pooled logical disks until a virtual volume is created – meaning a 2TB CPG with no virtual volumes consumes no space at all.  We can think of a CPG as similar to an EVA disk group, but feeding on logical disks instead of physical disks.

Virtual Volumes

No, these aren’t the VVOLs you’re looking for – this is simply the terminology that 3PAR uses to define the LUNs that are presented to the hosts – they are not the VVOLs which we have all seen come supported in vSphere 6.  Either way, a Virtual Volume is a LUN that draws its capacity from a CPG – one CPG can provide space to many virtual volumes.  A virtual volume is the LUN that is exported out to your ESXi hosts, and eventually hosts your datastores.  Just like on most arrays, Virtual Volumes can be provisioned either thick or thin – with a thin provisioned Virtual Volume only instructing its associated CPG to draw space from the logical disks as space is needed.  CPGs have the ability to create logical disks as needed to handle the increased demand for capacity, up until the user-defined size limit of the CPG is reached.

So working backwards we can come to somewhat of the following (with a rough CLI example tacked on after the list)

  • A datastore is located on a Virtual Volume
  • A Virtual Volume draws its space from a Common Provisioning Group (CPG).
  • A Common Provisioning Group is any given number of Logical Disks joined together to form some sort of contiguous space.
  • A Logical Disk is simply a collection of chunklets which are joined together in rows in order to produce a certain RAID set (1, 5, 6, etc.).
  • A Chunklet is a 1GB piece (chunk) of any given physical disk within the array.  It’s also a very funny word.
  • A physical disk is…well, a physical disk.
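
And if it helps to see it in CLI terms, here’s a rough sketch of building that stack from the 3PAR command line – the command names are real, but treat the exact flags, names and sizes as my assumptions rather than gospel:

createcpg -t r5 -ha mag FC_r5_cpg           # a CPG whose logical disks will use RAID 5 sets with magazine-level availability
createvv -tpvv FC_r5_cpg datastore01 500G   # a thin-provisioned 500GB virtual volume drawing from that CPG
createvlun datastore01 auto esxhost01       # export the virtual volume to a host – this is the LUN your datastore sits on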

So there we have it – it being the very very very basic understanding of some of the terminology within the HP 3PAR.  Certainly we can dive deeper into some of these terms and we will in later posts – I mean, there are many different types of Chunklets, some reserved, some spare, but we will save those and some other terms such as Adaptive Optimization for another post (mainly because I have no idea quite yet 🙂).

Resizing the root partition of the vCenter Server Appliance (VCSA)

There are many side effects of a root file system filling up – server halts, unexpected application crashes, slowness, midnight wake up calls, etc.  And the root file system on the VCSA is no exception – in fact, I found this out while trying to deploy a VM from a template into my environment – I kept getting the dreaded 503 error that stated nothing useful to help with the resolution!  But, after a little bit of investigative work it appeared to me that the root file system on my VCSA was nearly full!  Now keep in mind this was in my lab, and in all honesty you should probably investigate just why your file system is taking up so much space in the first place – but due to my impatience in getting my template deployed I decided to simply grant a little more space to the root partition so it had a little room to breathe!  Below is the process I followed – may be right, may be wrong – but it worked!

outofspace

Step 1 – Make the disk bigger through the vSphere Client!

This is a no-brainer – we need to expand the disk belonging to the VCSA that hosts the root partition before we can expand the root partition into that space!  So go ahead and log in to vCenter (or better yet the host on which your VCSA runs) and expand its underlying disk

biggerdisk

Once you have done this you may need to reboot your VCSA in order to get the newly expanded disk to show as expanded – I for one couldn’t find any solution that would rescan the disk within the VCSA to show the new space, but if you know, by all means let me know in the comments!!!
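
For what it’s worth, on most Linux guests you can ask the kernel to re-read a disk’s size without a reboot by poking sysfs – I haven’t gone back and verified this on the VCSA specifically, so treat it as a suggestion rather than a guarantee:

echo 1 > /sys/block/sda/device/rescan   # ask the SCSI layer to re-read the capacity of sda
fdisk -l /dev/sda                       # confirm the new size shows up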

Step 2 – Rewrite the partition table

Things are about to get dicey here!  We are going to use fdisk in order to recreate the partition tables for the root filesystem – so relax, be careful and take your time!!!

First up, let’s have a look at our disk by running “fdisk -l /dev/sda”.  As shown below we can see that it is now reporting as 25GB in size.

fdiskl

Next, we need to find the partition that our root filesystem resides on.  The picture of the “df -h” output at the beginning of this post confirms we are running on /dev/sda3 – this is the filesystem we will be working with…

So listed below is a slew of fdisk commands and options that we need to run – also, you can see my complete output shown below…

First up, delete partition number 3 using the d option.

fdisk /dev/sda
d (for delete)
3 (for partition 3)

Now, let’s recreate the same partition with a new last sector – thankfully we don’t have to figure this out and should be fine utilizing all the defaults that fdisk provides…this time selecting the n option, p for partition, 3 for our partition number and accepting all of the defaults

n (for new)
p (for primary partition)
3 (for partition number 3)

After accepting all the defaults we need to make this partition bootable – again done inside of fdisk by using ‘a’ and then ‘3’ for our partition number.  Finally, write the new partition table and exit with ‘w’.

a (to toggle bootable flag)
3 (for partition number 3)
w (to write the new partition table and exit)

wholeprocess

As you can see in the message pictured above, we need to perform a reboot in order for the newly created partition table to take effect – so go ahead and reboot the VCSA.

Step 3 – Extend the filesystem

Well, the hard part is over and all we have left to do is resize the filesystem.  This is a relatively easy step executed using the resize2fs command shown below

resize2fs /dev/sda3

After this has completed, a simple “df -h” should show that we now have the newly added space inside our root partition.

done

There may be other and better ways of doing this but this is the way I’ve chosen to go – honestly, it worked for me and I could now deploy my template so I’m happy!  Anytime you are using fdisk be very careful to not “mess” things up – take one of those VMware snapshotty thingies before cowboying around 🙂  Thanks for reading!

#VFD5 Preview – NexGen

Alright, here’s another company presenting at VFD5 in Boston that I recognize, but know very little about!  Thankfully the Stanley Cup playoffs are done and I now have a little extra room in my brain to take in all the info that will be thrown at us.  Anyways, I started to do a little digging on NexGen and oh boy, what a story they have!  Stephen Foskett has a great article on his blog in regards to the journey NexGen has taken – it’s pretty crazy!  Certainly read Stephen’s article but I’ll try to summarize the craziness as best I can…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Basically, a couple of the LeftHand founders got together and founded NexGen – ok, this story doesn’t seem all that crazy so far.  Well, after a few years Fusion-io came in with their wallets open and acquired NexGen – again, not a real crazy spin on a story!  Moving on, we all know that SanDisk walked in and acquired Fusion-io, getting NexGen in the process.  Then, the next thing you know, SanDisk spun NexGen out on their own, putting them right back to where they started!  This just all seems wild to me!

So where do they stand today?

NexGen is a storage company – a storage company offering a hybrid flash array with software that helps their customers align their business practices with their storage by prioritizing the data they store.  So what does that really mean?  Basically it comes down to QoS and service levels.  NexGen customers can use these two concepts to define the performance, availability, and protection of their data by specifying the IOPS, throughput and latency that they need for each and every application.  Depending on the service levels assigned to a workload, NexGen can borrow IOPS from a lower-tiered service in order to meet the QoS defined on a business-critical application.

Another unique feature of NexGen Storage is in the way they use flash and SSD.  Most arrays will place their flash behind some sort of a RAID controller, whereas NexGen utilizes the PCIe bus to access their flash, providing a redundant, high-speed, low latency caching mechanism for both reads and writes.

There are certainly a lot more bells and whistles within the NexGen arrays and a much bigger story to be told here.  The way NexGen is utilizing flash within the array is definitely piquing my interest, but honestly, I’m more interested in the story of the company and how all those acquisitions and spin-offs have helped them.  I’m sure they will address both at VFD5 and believe me there will be more posts around NexGen and their offerings.  If you want to follow along during the VFD5 presentations you can see them live both on the official VFD5 event page, as well as my VFD5 event page where all my content will be posted.