Tag Archives: Storage

Is there still a need for specialized administrators?

We have been hearing the clichés for quite some time now within the technology industry.  Sayings like "Breaking down silos" and "Jack of all trades, master of none" have been floating around IT offices for the past 5 years – and while I believe these sayings carry some weight, I still have my doubts about this new "Generalized IT Admin".  Honestly, with the changing landscape of technology and the fast-paced change being introduced into our infrastructure, we absolutely need to know (or know how to quickly learn) a lot – A LOT.  And while this generalized, broad skill set approach may be perfect for the day-to-day management of our environments, the fact is that when the sky clouds over and the storm rolls in, taking with it certain pieces of our data centers, we will want those storage specialists, that crazy-smart network person, or the flip-flop-wearing virtualization dude who knows things inside and out available to troubleshoot and do root cause analysis so we can get our environments back up and running as quickly as possible!

Just a disclaimer of sorts – every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. were all paid for by Gestalt IT, however I'm not required or obliged to return the favor in any way other than my presence 🙂 – which still feels weird to say 🙂 Well, my presence and possibly a little bit of maple syrup.

Now, all that said, these problem situations don't come up (I hope) that often, and coupled with the fact that we are seeing more and more "converged" support solutions, organizations can leverage their "one throat to choke" support call and get the specialists they need over the phone – all of which brings them one step closer to being able to employ these "Jack of all trades, master of none" personnel in their IT departments.  But perhaps the biggest stepping stone toward eliminating these specialized roles is the new rage being set forth by IT vendors implementing a little concept called "Policy Based Management".

Enter NetApp SolidFire

Andy Banta from NetApp SolidFire spoke at Storage Field Day 13 about how they are utilizing policy based management to make it easier and more efficient for everyday administrators to consume and manage their storage environments.  I got the chance to sit as a delegate at SFD13 and watch his presentation, cleverly titled "The death of the IT Storage Admin" – and if you fancy, you can watch the complete recorded presentations here.

NetApp SolidFire is doing a lot of things right in terms of introducing efficiency into our environments and eliminating a lot of those difficult, mundane storage tasks that we used to see dedicated teams of specialized administrators perform.  With that said, let's take a look at a few of those tasks and explore how NetApp SolidFire, coupled with VMware's VVOL integration, is providing policy-based automation around them.

Storage Provisioning

In the olden days (and I mean like 5 years ago) the way we went about provisioning storage to our VMware environments could be, how do I say this, a little bit inefficient.  Traditionally, we as "Generalized VMware administrators" would determine that we needed more storage.  From there, we'd put a request out to the storage team to provision us a LUN.  Normally, this storage team would come back with all sorts of questions – things like "How much performance do you need?", "How much capacity do you need?", "What transport mechanism would you like this storage delivered over?", "What type of availability are you looking for?".  After answering (or sometimes lying) our way through these conversations, the storage team would FINALLY provision the LUN and zone it out to our hosts.  We would then create our datastore, present it to our ESXi hosts and away we go filling it up – only to come back to the storage team with the same request the very next month.  It's not a fun experience and it's highly inefficient.

VMware's VVOLs are a foundation to help change this, and NetApp SolidFire has complete integration points into them.  So, in true VVOLs fashion, we have our storage container, which consumes space from our SolidFire cluster on a per-VM/per-disk basis.  What this means is that as administrators we simply assign a policy to our VM, or to a VM disk, and our vmdk is provisioned automatically on the SolidFire cluster – no LUNs, no storage team conversations – all performed by our "generalized admin".

Storage Performance/Quality of Service

Now, as far as VVOL capacity provisioning goes there isn't a whole lot that is different between SolidFire and other storage vendors – but when we get into QoS I think we can all agree that SolidFire takes a step above the crowd.  SolidFire has always held that application performance and quality of service are the most important pieces of their storage – and with their VVOL implementation this is still true.

When setting up our policies within vSphere SPBM, NetApp SolidFire exposes a number of metrics and configuration options pertaining to QoS in our rule setup.  We can configure settings allowing us to set minimum, maximum and burst IOPS on both our data VVOLs (the vmdks) as well as our configuration VVOLs (vmx, etc.).  Once set up, we simply apply these policies to our VMs and immediately we have assurance that certain VMs will always get the performance they need – or, on the flip side, that certain VMs will not be able to flood our storage, consuming IOPS and affecting their neighboring workloads.  This is a really cool feature IMO – while I see a lot of vendors allowing certain disk-type placement for our VVOLs (placing a vmdk on SSD, SAS, etc.) I've not seen many that go as deep as SolidFire, allowing us to guarantee and limit IOPS.

This essentially removes the complexity of troubleshooting storage performance needs and constraints on our workloads – the setup is all completed within the familiar vSphere Web Client (complete with a NetApp SolidFire plug-in) and is applied the same way you have always edited a VM's settings.

So – is the storage admin dead?

NetApp SolidFire has definitely taken a good chunk of the storage admin's duties away and put them into the laps of our generalized admins!   Even though I haven't mentioned it, even the scaling of the NetApp SolidFire cluster, as well as VASA provider failover, is automated in some way within their product.  So, yeah, I think they are on the right track – they have taken some very difficult and complex tasks and turned them into a simple policy.  Now I wouldn't jump to conclusions and say that the storage admin is 100% dead (there are still a lot of storage complexities and a lot of storage-related tasks to do within the datacenter), but NetApp SolidFire has, how do I put this – maybe just put them into a pretty good coma and left them lying in a hospital bed!   If you have made it this far I'd love to hear your take on things – leave a comment, hit me up on Twitter, whatever…  Take a look at the NetApp SolidFire videos from SFD13 and let me know – do you think the storage admin is dead?  Thanks for reading!

Setting up VVOLs on HP 3PAR

As I've recently brought an HPE 3PAR 7200 into production with an ESXi 6.0 U2 cluster I thought, what better time than now to check out just how VVOLs are implemented by HPE.
Although the tasks involved aren't difficult by any means, I find the documentation around them is a bit scattered across different KBs and documents between VMware and HPE, especially if you have upgraded to the latest firmware (3.2.2 MU2).

Pre-reqs

As far as prerequisites go there really aren't many, other than ensuring you are up to date on both your 3PAR firmware and ESXi versions.  For the 3PAR you will need to be running at the very least 3.2.1.  In terms of vSphere – 6.0 or higher.  Also, don't forget to check your HBAs against the VMware HCL to ensure that they are actually supported as well, and note the proper firmware/driver combinations recommended by VMware.

After spending the day(s) updating firmware (ugh!) it’s finally time to get going.

Step 1 – Time

NTP is your friend for this step.  Before proceeding any further you need to ensure that your hosts, vCenter Server and 3PAR are all synced in terms of time.  If you have NTP set up and running then you are laughing here, but if you don't, stop looking at VVOLs and set it up now!  It should be noted that the 3PAR and the VMware infrastructure can be set to different time zones, however they must still be synced in terms of time!
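
For a quick sanity check you can eyeball the clocks from both sides – a minimal sketch, and these are from memory, so verify against your own versions:

date (on the ESXi host via SSH – the current system time)
esxcli hardware clock get (on the ESXi host – the hardware clock)
showdate (on the 3PAR CLI – the date/time reported by each node)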

Step 2 – Can we see the protocol endpoint?

At this stage we should check our ESXi hosts to ensure we can see the protocol endpoint on the 3PAR.  To do so we will need to confirm that we see the same WWN after running a couple of different commands.  First, as shown below, the 'showport' command on our 3PAR.  Circled is the WWN of our 3PAR array.  Make note of this!

[Screenshot: showport output on the 3PAR, with the array WWN circled]

With the WWN of our storage array in memory we can now head over to our ESXi hosts.  SSH in and run the 'esxcli storage core device list --pe-only' command.  This command will return any Protocol Endpoints visible from the ESXi host.  If all goes well we should see the same WWN that we did with showport, and the 'Is VVOL PE' flag set to true – as shown below

[Screenshot: 'esxcli storage core device list --pe-only' output showing the matching WWN with Is VVOL PE set to true]

As you can see, we have a match so at least we have some visibility from our hosts!

Step 3 – VASA

As we all know, the whole concept of VVOLs requires the array to support VASA 2.0 and act as a storage provider for vCenter – this is what allows us to create our VM profiles and have the array automatically provision VVOLs depending on which profile is selected.  On the 3PAR we can check the status of VASA by simply running the 'showvasa' command.  In the case shown we can see that it is already enabled and functioning properly, however this wasn't always the case for me.  To enable the service I first tried the 'startvasa' command, however it complained about not having a certificate.  The easiest way to generate one, if you plan on using self-signed certificates, is to simply run the 'setvasa -reset' command.  This will reset your VASA configuration and generate a self-signed cert.  After that you can simply run 'startvasa' to get everything up and running…
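
To recap the sequence that worked for me (your mileage may vary depending on firmware):

showvasa (check the VASA server status and note the VASA_API2_URL)
setvasa -reset (only if startvasa complains about a missing certificate – this generates a self-signed one)
startvasa (start the VASA service)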

Step 4 – Create the storage container

Now, if you are following the HPE VVOL integration guide you won't see this step, mainly because the guide was written around the 3.2.1 firmware, which shipped with a single default storage container already created for you.  If you are running 3.2.2, though, you have the option to create more than one storage container – and by default it comes with, well, no storage containers.  So before we go and register our vCenter with the VASA provider we first need to create a storage container to host our VVOL datastore.  First, create a new Virtual Volume set with the following command

createvvset myvvolsetname

Then, let’s create our storage container and assign our newly created set to it

setvvolsc -create set:myvvolsetname

Again, these commands wouldn't be required in 3.2.1 as far as I know, but they are in 3.2.2.
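
If my memory of the 3.2.2 CLI serves, you can also confirm things took before moving on – treat the commands below as assumptions and check your CLI reference if they aren't there:

showvvset myvvolsetname (confirm the Virtual Volume set exists)
showvvolsc (list the storage containers on the array)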

Step 5 – Register our VASA within vCenter

Now it's time to head over to the familiar, lightning-fast interface we call the vSphere Web Client and register the 3PAR's VASA implementation as a storage provider.  Make note of the 'VASA_API2_URL' shown in step 3 – you will need it when registering.  With your vCenter Server context selected, navigate to Manage->Storage Providers and click the plus sign to add a new storage provider.

[Screenshot: New Storage Provider dialog in the vSphere Web Client]

Enter your VASA URL from step 3, along with a name, username, and password and click 'OK'.  For this instance I've used 3paradm, but you may be better off creating a dedicated account with just the 'service' role within the 3PAR.  Either way, get your new storage provider registered in vCenter and wait for the Status to show as Online and active.

Step 6 – The VVOL datastore

We are almost there, I promise!  Before we can deploy VMs within a VVOL or assign storage profiles to match certain CPGs within the 3PAR we need to have our VVOL datastore set up within vCenter.  I found the best spot to create this datastore is by right-clicking the Cluster or ESXi host we want to have access to VVOLs and selecting Storage->New Datastore.  Instead of selecting VMFS or NFS as we normally would, select VVOL as the type as shown below

[Screenshot: New Datastore wizard with VVOL selected as the type]

On the next screen give your datastore a name and select the storage container (this is what we made available in step 4).  Then simply select the hosts you wish to have access to deploy VVOLs and away you go!
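
If you want to double-check things from the host side, ESXi 6.0 also ships an esxcli namespace for VVOLs – a quick sketch, assuming these sub-commands behave on your build as they did on mine:

esxcli storage vvol vasaprovider list (the 3PAR VASA provider should show as online)
esxcli storage vvol storagecontainer list (the container backing your new VVOL datastore should be listed)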

Step 7 – Storage Profiles

At this point you could simply deploy VMs into your newly created VVOL datastore – the 3PAR will intelligently choose the best CPG to create the VVOL in – but the real power comes from being able to assign certain VM storage profiles to our disks, and having the VVOL land in the proper CPG depending on the array capabilities. Storage Profiles are created by clicking on the Home icon and navigating to Policies and Profiles within the web client.  In the VM Storage Profiles section simply click the 'Create new storage profile' button.  Give your new profile a name and continue on to the Rule-Set section.

[Screenshot: Rule-Set page of the "Silver" VM storage profile]

The rule sets of my "Silver" VM storage profile are shown above.  As you can see, I've specified that I want this storage profile to place VM disks within my FastClass RAID 5 CPG, and place their subsequent snapshots in the SSD tier CPG.  When you click next you will be presented with a list of the compatible and incompatible storage.  Select your compatible storage and click next.  Once we have all of the profiles we need we can simply assign them to our VM's disks as shown below…

[Screenshot: assigning the "Silver" storage policy to a new VM's disk]

As you can see, I've selected our newly created "Silver" policy for our new VM.  What this states is that when this VM is created, the VVOL housing its disk will be created on the FastClass CPG on the 3PAR.

Step 8 – VVOL visibility

Although we are technically done with deploying VVOLs at this point, I wanted to highlight the showvvolvm command that we can utilize on the 3PAR to gain visibility into our VVOLs.  The first use is simply listing out all of the VMs that reside on VVOLs within the 3PAR.

showvvolvm -sc

[Screenshot: showvvolvm -sc output]

As you can see by the Num_vv column, we have 3 VVOLs associated with our VM (MyNewVM).  But how do we get the information on those VVOLs individually?  We can use the same command, just with the -vv flag

showvvolvm -sc -vv

[Screenshot: showvvolvm -sc -vv output]

So now we can see that we have 1 VVOL dedicated for the config, 1 VVOL dedicated for the actual disk of the VM, and finally 1 VVOL hosting a snapshot that we have taken on the VM.

Anyways, that's all I have for now – although I haven't gone too deep into each step, I hope this post helps someone along the way get their VVOLs deployed, as I had a hard time finding all of this information in one spot.  For now I like what I see between HP and VMware concerning VVOLs – certainly they have a long road ahead of them in terms of adoption – we are still dealing with a 1.0 product from VMware here and there are a lot of things that need to be worked out concerning array-based replication, VASA high availability, functionality without VASA, GUI integration, etc. – but that will come with time.  Certainly VVOLs will change the way we manage our virtualized storage and I'm excited to see what happens – for now, it's just fun to play with 🙂  Thanks for reading!

Quick Fix – Making your inactive NFS datastore active again!

Sticking to my rule of “If it happens more than once I’m blogging about it” I’m bringing you this quick post around an issue I’ve seen a few times in a certain environment.  Although this is solved by only a few esxcli commands I always find it easier for me to remember (and find) if I post it here… 🙂

Anyways, as it is I have a couple of NFS datastores that sometimes "act up" a bit in terms of their connections.  For the most part they are fine and dandy – however every now and then they show up within the vSphere client as "inactive" and ghosted.  After checking the network (I always try and pin things on the network) it appears that all the connections are fine – host communicates with storage, storage with host – the same datastores are even functioning fine on other hosts.  Logically my next step is to remount them on the host in question, but when trying to unmount and/or remount them through the vSphere client I usually end up with a "Filesystem busy" error.

Thankfully it doesn't take a lot to fix this issue, but it could certainly become tedious if you have many NFS datastores to run these commands against…

First up, list the NFS datastores you have mounted on the host with the following

esxcli storage nfs list

You should see that the 'inactive' datastores are indeed showing up with 'false' under the Accessible column.  Make note of the Volume Name, Share Name and Host as we will need this information for the next couple of commands….

Before we can add our datastore back we need to first get rid of it.  The following command takes care of that…

esxcli storage nfs remove -v DATASTORE_NAME

Depending on whether or not you have any VMs registered on the datastore and host you may get an error, you may not – I’ve found it varies 🙂  Anyways, lastly we simply need to mount the datastore back to our host using the following command…  Be sure to use the exact same values you gathered from the ‘nfs list’ command.

esxcli storage nfs add -H HOST -s ShareName/MountPoint -v DATASTORE_NAME

There you go!  You should now have a happy healthy baby NFS datastore back into your storage pool.  On a side note I’d love to see some sort of esxcli storage nfs remount -v DATASTORE_NAME command go into the command line in order to skip some of these steps – but, hey, for now I’ll just use three commands.
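
And if you do end up with a pile of these to bounce, wrapping the remove/add pair in a tiny script saves some typing.  A minimal sketch with hypothetical values – substitute whatever 'esxcli storage nfs list' gave you:

DS="MyNFSDatastore"        # hypothetical Volume Name
NFSHOST="192.168.1.50"     # hypothetical Host value
SHARE="/vol/myexport"      # hypothetical Share Name
esxcli storage nfs remove -v "$DS"
esxcli storage nfs add -H "$NFSHOST" -s "$SHARE" -v "$DS"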

Learning 3PAR – Part 2 – Moar Chunklets

In Part 1 we went through some of the common terminology within the HP 3PAR array, and now we will go into a bit more detail about one of them – the Chunklet!  A Chunklet is a key player in how the 3PAR aims to utilize all of the disks within the array, and in turn maximize the performance and protection it can get out of them!  I mentioned that during the initialization of a physical disk it is divided up into 1GB Chunklets, but what I didn't mention is that there are a few different types of Chunklets within the HP 3PAR – now, these may not be "official" HP names, as I kind of named them myself during my reading.  And for some reason I'm now craving gum 🙂

Normal Used Chunklets

These are the Chunklets that are utilized by Logical Disks.  They are strung together within different RAID sets across different physical disks in order to provide capacity to a CPG, which in turn passes it along to a Virtual Volume (this is essentially our datastore when it's all said and done).  These chunklets hold our production data.

Normal Reserved Chunklets (Logging Chunklets)

I don't know if these really exist but this is what I'm going to call them.  They are pretty much the same as Normal Used Chunklets, however they have been pre-configured into reserved Logical Disks which are created by the system.  We normally see a reserved Logical Disk for logging (used for disk failures/rebuilds), admin (used to store event logs and administration information) and srdata (used to store historical stats and information).  We will often see these logical disks containing chunklets closer to the end of the spindles as well.

Normal Unused (Free) Chunklets

These Chunklets are exactly how they are described – they are Chunklets that are provisioned, and are NOT spares, but have not yet been claimed by any Logical Disk.  It’s pretty safe to say that during installation all chunklets (except designated spares and reserved) are essentially free chunklets until you start provisioning LUNs.

Spare Chunklets

Some Chunklets will be designated as spares during the initialization of the 3PAR.  Meaning, not all 1GB Chunklets are available to be used within a Logical Disk.  Spare Chunklets are essentially placeholders which are utilized when we have a physical disk failure and the Logical Disk RAID set needs to be rebuilt.  An intelligent note here – the system automagically selects which Chunklets are to be assigned as spares, however it does so in a way that most of the spare chunklets are located as close to the end of the physical disk's block space as possible, leaving the closer blocks for production.

Chunklet Relationships

Everything just seems silly with the word chunklet in front of it 🙂  Either way, there are a few terms that are used to describe the relationships between our Normal Used Chunklets and all other chunklets within the system.

  • Local Spare Chunklets –  This would be a chunklet designated as a spare, whose primary path is connected to the same node that owns the source logical disk containing the used chunklet.
  • Local Free Chunklet – An Unused/Free chunklet whose primary path is connected to the same node that owns the source logical disk containing the used chunklet.
  • Remote Spare Chunklet – A spare chunklet whose primary path is connected to a node different from the node owning the source logical disk containing the used chunklet.
  • Remote Free Chunklet – A free/unused chunklet whose primary path is connected to a node different from the node owning the source logical disk containing the used chunklet.

We have mentioned failing physical disks a couple of times, so I think now would be a good time to discuss what exactly happens during a disk failure and how it affects our Chunklets…

  • When a connection is lost or a physical disk fails, the system immediately forwards all cached writes destined for the failed chunklets to chunklets contained in the reserved Logging Logical Disk.  This occurs until the failed physical disk/chunklets come back online, until the Logging LD becomes full, or until the rebuild process has been completed.
  • The rebuild process occurs concurrently with the above step, where the system begins to reconstruct lost data utilizing the remaining chunklets and RAID levels provided.
    • There is some logic that happens during this rebuild/relocation phase as well – the system first looks to select a local spare chunklet; if none are to be found it moves on to a local free chunklet, then a remote spare chunklet, and finally a remote free chunklet.  All the while it tries to maintain consistency in terms of the characteristics between the failed and target chunklets (speed, drive type, etc.).
  • Once the rebuild has completed, the logging disks are replayed and data flushed back down to the newly constructed volume.

So in the end these tiny 1GB chunks of contiguous space are a key player in the 3PAR array.   To help understand them I tend to ignore the fact that they live on individual drives, and think of them somewhat as really small, granular 1GB drives – some marked as spares, some in different logical drives with different RAID sets, and some set aside to provide functionality for the array.  All that said, they are not separate drives; different chunklets live on the same drive, leaving us with the ability to provide different RAID levels on the same drive, mix and match different sized drives without wasting capacity, and stripe our logical disks across multiple shelves – in some cases even providing shelf-level protection.   Plus, they make for a nice little visualization of coloured blocks within the 3PAR Management Console 🙂
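
And if you ever want a CLI view instead of the coloured blocks, a few commands will show you much the same information – a hedged sketch, as the exact options can vary between 3PAR OS versions:

showpd -c (chunklet usage – used, spare and free – per physical disk)
showspare (the chunklets currently designated as spares)
showld (the logical disks built from those chunklets, including the admin, log and srdata reserved LDs)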

Learning 3PAR – Part 1 – Chunklets, Logical Disk, CPGs, and Virtual Volumes

As I'm currently in the beginning phases of an HP 3PAR deployment I thought it might be a good idea to write a few posts centering around some of the concepts built into the 3PAR architecture.  For the most part I can relate the different terminology to other storage arrays I've used in the past, but some of it is somewhat new to me as well.  Either way, I'm no expert and am still learning myself, so ease up on me if I make a mistake eh!  Anyways, for the first part of this series I'll concentrate simply on some of the terminology and layers that exist within the 3PAR StoreServ and try to explain them the best I can – remember, I'm explaining them to me as well!

5 Layers to the hosts

As with any array, the path that data takes to get from our hosts to its final destination on disk is a complex one – but thankfully we don't have to worry about all of the bumps in the road along the way.  That said, it's always nice to understand the road as best we can in order to determine how best practices and configuration changes will apply to our environment.  With the 3PAR that path contains 5 essential layers: Virtual Volumes, Common Provisioning Groups, Logical Disks, Chunklets, and Physical Disks.

[Diagram: the five layers – physical disks, chunklets, logical disks, CPGs, and virtual volumes]

 

We can somewhat see from the diagram the relationship between each layer, but before taking a holistic view let's first discuss each layer…

Physical Disks

This is an easy one right?  A physical disk is just that, a physical disk located inside of your 3PAR array, encompassing all types of disk within the array.

Chunklets

The first thing a 3PAR does when it is discovering its storage is break down all of the capacity on your physical disks into chunklets.  Each chunklet is 1GB in size and occupies contiguous space on a physical disk.  Chunklets are local to that physical disk only and cannot span to others.

Logical Disks

Logical disks are essentially a grouping of chunklets which are arranged as rows of like RAID sets. LDs will ensure that each chunklet that resides in a RAID set is physically located on a different physical disk.  We don't directly create LDs on the 3PAR – they are generated during the creation of a CPG (explained next), or more precisely, when a Virtual Volume is created on a CPG.   All of the metadata, however – RAID type, allocation, growth of an LD – is defined when creating the CPG itself.

Common Provisioning Groups (CPG)

A CPG is simply a pool of Logical Disks that provides the means for a Virtual Volume (explained next) to consume space.  When we deploy a CPG we do not actually use any of the space in our pooled logical disks until a virtual volume is created – meaning a 2TB CPG with no virtual volumes consumes no space at all.  We can think of a CPG as similar to an EVA disk group, but feeding on logical disks instead of physical disks.

Virtual Volumes

No, these aren't the VVOLs you're looking for – this is simply the terminology that 3PAR uses to define the LUNs that are presented to the hosts – they are not the VVOLs which we have all seen come supported in vSphere 6.  Either way, a Virtual Volume is a LUN that draws its capacity from a CPG – one CPG can provide space to many virtual volumes.  A virtual volume is the LUN that is exported out to your ESXi hosts, and eventually hosts your datastores.  Just like most arrays, Virtual Volumes can be provisioned either thick or thin – with a thin provisioned Virtual Volume only instructing its associated CPG to draw space from the logical disks as space is needed.  CPGs have the ability to create logical disks as needed to handle the increased demand for capacity, up until the user-defined size limit of the CPG is reached.

So, working backwards, we come to something like the following

  • A datastore is located on a Virtual Volume
  • A Virtual Volume draws its space from a Common Provisioning Group (CPG).
  • A Common Provisioning Group is any given number of Logical Disks joined together to form some sort of contiguous space.
  • A Logical Disk is simply a collection of chunklets which are joined together in rows in order to produce a certain RAID set (1,5,6,etc).
  • A Chunklet is a 1GB piece (chunk) of any given physical disk within the array.  It’s also a very funny word.
  • A physical disk is…well, a physical disk.
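
To make the layering a little more concrete, here is roughly what carving out a new volume looks like from the CLI – a hedged sketch with made-up names, and the flags are from memory, so double-check them against your firmware's CLI reference:

createcpg -t r5 -ha mag FC_r5_CPG (a RAID 5 CPG with magazine-level availability)
createvv -tpvv FC_r5_CPG myvol01 500g (a 500GB thin provisioned Virtual Volume drawing from that CPG)
createvlun myvol01 1 esxhost01 (export the Virtual Volume to a host as LUN 1)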

So there we have it – it being the very very very basic understanding of some of the terminology within the HP 3PAR.  Certainly we can dive deeper into some of these terms here and we will in later posts – I mean, there are many different types of Chunklets, some reserved, some spare, but we will save those and some other terms such as Adaptive Optimization for another post (mainly because I have no idea quite yet 🙂).

Resizing the root partition of the vCenter Server Appliance (VCSA)

There are many side effects of a root file system filling up – server halts, unexpected application crashes, slowness, midnight wake up calls, etc.   And the root file system on the VCSA is no exception – in fact, I found this out while trying to deploy a VM from a template into my environment – I kept getting the dreaded 503 error, which stated nothing useful to help with the resolution!  But after a little bit of investigative work it appeared that the root file system on my VCSA was nearly full!  Now keep in mind this was in my lab, and in all honesty you should probably investigate just why your file system is taking up so much space in the first place – but due to my impatience in getting my template deployed I decided to simply grant a little more space to the root partition so it had some room to breathe!  Below is the process I followed – may be right, may be wrong – but it worked!

[Screenshot: df -h output showing the VCSA root filesystem nearly full]

Step 1 – Make the disk bigger through the vSphere Client!

This is a no-brainer – we can't expand the root partition into space that doesn't exist, so we first need to expand the disk belonging to the VCSA that hosts the root partition!  So go ahead and log in to vCenter (or better yet the host on which your VCSA runs) and expand its underlying disk

[Screenshot: expanding the VCSA's disk in the vSphere client]

Once you have done this you may need to reboot your VCSA in order to get the newly expanded disk to show as expanded – I for one couldn’t find any solution that would rescan the disk within the VCSA to show the new space, but if you know, by all means let me know in the comments!!!
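
One thing that may be worth trying before resorting to a reboot – I haven't verified it on the VCSA specifically, but it works on most Linux guests – is asking the kernel to rescan the device and then checking whether the new size shows up:

echo 1 > /sys/class/block/sda/device/rescan
fdisk -l /dev/sda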

Step 2 – Rewrite the partition table

Things are about to get dicey here!  We are going to use fdisk to recreate the partition table entry for the root filesystem – so relax, be careful and take your time!!!

First up, let's have a look at our disk by running "fdisk -l /dev/sda".  As shown below we can see that it is now reporting at 25GB in size.

[Screenshot: fdisk -l /dev/sda output showing the disk now at 25GB]

Next, we need to find the partition that our root filesystem resides on.  The picture of the "df -h" output at the beginning of this post confirms we are running on /dev/sda3 – this is the partition we will be working with…

So listed below is the slew of fdisk commands and options that we need to run – you can also see my complete output shown below….

First up, delete partition number 3 using the d option.

fdisk /dev/sda
d (for delete)
3 (for partition 3)

Now, let's recreate the same partition with a new last sector – thankfully we don't have to figure this out ourselves and should be fine utilizing all the defaults that fdisk provides… this time selecting the n option, p for a primary partition, 3 for our partition number and accepting all of the defaults

n (for new)
p (for primary partition)
3 (for partition number 3)

After accepting all the defaults we need to make this partition bootable – again, done inside of fdisk by using 'a' and then '3' for our partition number – and finally write the changes out with 'w'

a (to toggle the bootable flag)
3 (for partition number 3)
w (to write the new partition table and exit)

[Screenshot: the complete fdisk session]

As you can see in the message pictured above, we need to perform a reboot in order for the newly created partition table to take effect – so go ahead and reboot the VCSA.

Step 3 – Extend the filesystem

Well, the hard part is over and all we have left to do is resize the filesystem.  This is a relatively easy step executed using the resize2fs command shown below

resize2fs /dev/sda3

After this has completed, a simple "df -h" should show that we now have the newly added space inside our root partition.

[Screenshot: df -h showing the expanded root partition]

There may be other and better ways of doing this but this is the way I've chosen to go – honestly, it worked for me and I could now deploy my template so I'm happy!  Anytime you are using fdisk be very careful to not "mess" things up – take one of those VMware snapshotty thingies before cowboying around 🙂  Thanks for reading!

#VFD5 Preview – NexGen

Alright, here's another company presenting at VFD5 in Boston that I recognize, but know very little about!  Thankfully the Stanley Cup playoffs are done and I now have a little extra room in my brain to take in all the info that will be thrown at us.  Anyways, I started to do a little digging on NexGen and oh boy, what a story they have!  Stephen Foskett has a great article on his blog in regards to the journey NexGen has taken – it's pretty crazy!  Certainly read Stephen's article, but I'll try to summarize the craziness as best I can…

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

Basically, a couple of the LeftHand founders got together and founded NexGen – ok, this story doesn't seem all that crazy so far.  Well, after a few years Fusion-io came in with their wallets open and acquired NexGen – again, not a real crazy spin on a story!  Moving on, we all know that SanDisk walked in and acquired Fusion-io, getting NexGen in the process.   Then, the next thing you know, SanDisk spun NexGen out on their own, putting them right back where they started!  This all just seems wild to me!

So where do they stand today!

NexGen is a storage company, a storage company offering a hybrid flash array with software that helps their customers align their business practices with their storage by prioritizing the data they store. So what does that really mean?  Basically it comes down to QoS and service levels.  NexGen customers can use these two concepts to define the performance, availability, and protection of their data by specifying the IOPS, throughput and latency that they need for each and every application.  Depending on the service levels assigned to a workload, NexGen can borrow IOPS from a lower-tiered service in order to meet the QoS defined on a business-critical application.

Another unique feature of NexGen Storage is in the way they use flash and SSD.  Most arrays will place their flash behind some sort of a RAID controller, whereas NexGen utilizes the PCIe bus to access their flash, providing a redundant, high-speed, low latency caching mechanism for both reads and writes.

There are certainly a lot more bells and whistles within the NexGen arrays and a much bigger story to be told here.  The way NexGen is utilizing flash within the array is definitely piquing my interest, but honestly, I'm more interested in the story of the company and how all those acquisitions and spin-offs have shaped them.  I'm sure they will address both at VFD5, and believe me, there will be more posts around NexGen and their offerings.  If you want to follow along during the VFD5 presentations you can see them live both on the official VFD5 event page, as well as my VFD5 event page where all my content will be posted.

#VFD5 Preview – DataGravity

Let's set the stage here!  We've got Paula Long – yes, the same Paula Long that co-founded EqualLogic – yes, the same EqualLogic that Dell purchased in 2008 for $1.4 billion.  We have John Joseph – another long-time (as long as you can get in startups) EqualLogic member!  These two get together to execute on an idea, hire David Siles, a long-term member of the senior leadership team at Veeam, to be their CTO and then, on Tuesday, August 19th, 2014 at approximately 12:01 am, weighing in at 85 lbs and 26.75" tall, DataGravity was born.

Disclaimer: As a Virtualization Field Day 5 delegate all of my flight, travel, accommodations, eats, and drinks are paid for. However I do not receive any compensation nor am I required to write anything in regards to the event or the sponsors. This is done at my own discretion.

DataGravity will present at Virtualization Field Day 5 in Boston on June 25th and I could not be more excited to hear what they have to say.  I've spoken with them before, briefly, at the craziness that is VMworld – and honestly, the booth was so busy with people wanting to get in to see the new baby that I couldn't stay long – so having a couple of hours with them will be long overdue.

Just another storage startup?

Technically yes and technically no!  In terms of "technically yes", what I mean is DataGravity is a storage array!  They are your primary storage!  They can provide storage to your ESXi hosts not only through the traditional NFS mounts and iSCSI targets, but also through a built-in VM-aware storage provider – allowing you to skip the whole LUN provisioning exercise and treat your VMs as first-class citizens in terms of living on the array!  VM-awareness of course makes it easier for us to perform things like monitoring, data protection and provisioning.  That said, haven't we seen all this before?  Isn't the market full of this?

Those questions lead me to the “technically no” part of my answer!  Sure, they do the primary storage, they have their flash piece!  If this blog post ended here then they would certainly be just another storage startup – but it doesn’t!  DataGravity’s differentiator in my opinion is the way they split their nodes of storage, and the unique functionality those nodes provide!

Not just another storage startup!

I’m not going to go too deep into how DataGravity works, partly because they are going to jam 2 hours of awesomeness into my brain at the end of the month so I’ll save it for then, and partly because I don’t really know how it all works…yet.

The main thing I get is that they "optimize, protect, track, and analyze data as it's stored" – their words.  My words – it does more than just primary storage, with the sweet spot being the "analysis".   Basically the primary storage is just that, primary storage – but as data comes in it's also stored on a secondary node – this node can be used for the obvious, data protection, but also for analysis.  So think of it this way – it's easy now to see who created a certain file, but do we have visibility into who has modified that file over time, who else has read that file, where else that file might be stored, or what other files this person has created?  DataGravity gives us this functionality – and not just on a per-VM level, but on a complete array level!   And all of this analysis and querying is run on a secondary storage node, leaving production to do production-like things.   Essentially it's like Google for your storage array!

For now that's all I have to give you, but expect a bit of a deeper post on DataGravity at the end of June or early July as I hear what they have to say at VFD5.  Don't forget, if you want to join in on the Virtualization Field Day 5 action you can do so by watching the live stream and following along with the #VFD5 hashtag on Twitter!  And just a reminder – I'll try to have the live stream and any event-related content on my VFD5 landing page here as well!


Dell VRTX upgraded to the supported version of ESXi 5.5 – Once again, No Storage!

Last month I published a post in regards to the Dell VRTX, ESXi 5.5, and storage – or the lack thereof.  Well, shortly after publishing that article Dell announced full support for ESXi 5.5 and released an ESXi 5.5 image on their website for those looking to upgrade or install.  On the chance that my little driver workaround might affect support I decided I'd better pull the image down and get it installed on the few VRTXs I had already deployed.

That said, looking at the build numbers and attempting to not have to redo all of the configuration I had already applied, I decided to take the upgrade route – even though the only difference was most likely the storage driver.  The upgrade process itself went smooth – no issues, no problems.  But after it was complete guess what was missing?  Yup, the datastore was gone again!

Where’s Waldo?

Now this wasn't the same issue of missing storage that I described in the last post.  Previously I couldn't see the storage at all; this time, when looking at my Storage Adapters, I could actually see the device that hosted my datastore listed.  So it was off to the CLI to see if I could get a little more information about what was going on.

To the CLI Batman!

After doing some poking around I discovered that the volume was being detected as a snapshot/replica.  Why did this happen?  I have no idea – maybe it's the fact that I was messing around with the storage drivers 🙂  I guess that's why they say things are supported and unsupported 🙂  Either way, how I found this out was with the following command.

esxcli storage vmfs snapshot list

[Screenshot: esxcli storage vmfs snapshot list output]

This command actually displayed the volume that I was looking for, and more specifically showed that it was mountable.  So my next step was to actually mount that volume again.  Take caution here if you are doing the same.  I know for sure that this volume is the actual volume I'm looking for – but if you have an environment with lots of LUN snapshots/replicas you will want to ensure that you never mount duplicate volumes with the same signature – strange things can happen.  Anyways, to mount the volume, take note of the VMFS UUID and use the following command.

esxcli storage vmfs snapshot mount -u 5315e865-0263a58f-413a-18a99b8c1ace

And with that you should now have your Dell VRTX storage back online – everyone is happy and getting along once again – Thanks for reading!

ESXi 5.5, Dell VRTX and Storage – or the lack thereof!

Over the past couple of months I've been working on some vCO workflows to set up and configure a Dell VRTX, as we are on the verge of deploying a handful of them.  Now this isn't going to be a big post about what the VRTX can do, nor is it about how to use vCO to set it up – I'll save those for later – this is simply one small quirk that I've found when installing ESXi 5.5 onto the blades inside of the VRTX.

 

Small is the new big.

I shouldn't have said small quirk – it's somewhat of a showstopper.  If you simply throw on the vanilla ESXi 5.5 image, or even the Dell-released image of ESXi 5.5, you will quickly notice that you have absolutely no storage available to you from the shared PERC controller that sits inside the VRTX.  Kind of hard to use the box with no storage 🙂

Before I go into my Dell rant – if you are just looking for the solution, scroll down to the "Driver thang" section of this post.  For the rest of us…

Since writing this, Dell has released a supported version of ESXi 5.5 for the Dell VRTX blades.  Head over to Dell.com and punch in your service tags to get the image.  I've used and tested this and it works flawlessly 🙂  Thanks Dell!

Start-Rant -Type ‘Mini’

ESXi 5.5 is not certified on a Dell VRTX, so ultimately, you could say, it isn't supported – not by Dell and not by VMware.  What I don't understand here is how Dell can release a "converged" solution, promote the crap out of it stating how great it is to run VMware on, and not support the latest release of ESXi!?!?!  I mean, this thing was released in the summer of 2014.  ESXi 5.5 was announced at VMworld in August 2013!  You would think that Dell would have the drive to hit the market with this thing supporting the latest and greatest software – but no – either way, I'm sure it will all be updated soon, and I'm sure they have their reasons – but for the meantime, here's how to get it going…

Stop-Rant

Ain’t nuttin but a driver thang.

The fact that you don't see storage isn't the result of any major issue or complex problem.  It's simply a driver.  The driver for the shared PERC that is included with the ESXi 5.5 image is just too new (?!?!?!).   The version you need, or at least the version that I've found to work, is labelled megaraid_sas version 06.801.52.00.  What the difference is between these two versions I have no idea, I just know you need 06.801.52.00 to make it work.  You can grab that here.

Once you have the file you are just a VIB install away from VRTXing all night long.  Pick your poison when it comes to VIB installs: Update Manager, the vMA or esxcli – for the sake of not having to put too much effort into anything I'll go over the esxcli way of installing the VIB.  First things first, upload that VIB to one of your datastores or an accessible area on the host.  From there, SSH in and install the VIB using the following command.

esxcli software vib install -d /tmp/megaraid/megaraid_sas-06.801.52.00-offline_bundle.zip

The only thing that stands in between you and your VRTX storage now is a reboot, so go ahead and do that.
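
Once the host is back up it doesn't hurt to sanity-check that the older driver is actually the one in place – a quick sketch, nothing fancy:

esxcli software vib list | grep -i megaraid (should report 06.801.52.00)
esxcli storage core adapter list (the shared PERC should now show up with its adapter and driver)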

There you have it – storage!

This is just the way I've found to make storage work with the VRTX and 5.5 and hey, I could be crazy for doing all of this – so if you have any other suggestions, concerns, or comments I encourage them below, or send me a message on Twitter.  Like I said, I have a handful of these to configure so I'd rather not roll them out in some kind of crazy state 🙂

New StarWind Whitepaper – Why Is 3-Way Synchronous Mirroring Better Than 2-Way?

StarWind Software, a global leader in storage management and SAN software for small and midsize companies (or in community terms – the company with the iSCSI SAN), has released a new whitepaper titled 'Why Is 3-Way Synchronous Mirroring Better Than 2-Way?'

This white paper describes the difference between 3- and 2-way synchronous mirroring – capabilities that are both available in the StarWind iSCSI SAN & NAS solution. The document outlines the advantages of a 3-node high availability storage cluster, which provides cost efficiency, increased reliability, and higher performance compared to 2-node HA.  You can go and grab a copy of it here.  While you're there, you might as well grab yourself a copy of their FREE iSCSI SAN and try it out to see if it has a fit for you.