We have been hearing the clichés for quite some time now within the technology industry. Sayings like “breaking down silos” and “jack of all trades, master of none” have been floating around IT offices for the past five years, and while I believe these sayings certainly hold some clout, I still have my doubts about this new “generalized IT admin”. Honestly, with the changing landscape of technology and the fast-paced change being introduced into our infrastructure, we need to know (or know how to quickly learn) a lot. A LOT. And while this generalized, broad skill set may be perfect for the day-to-day management of our environments, the fact is that when the sky clouds over and the storm rolls in, taking certain pieces of our data centers with it, we will want that storage specialist, that crazy-smart network person, or the flip-flop-wearing virtualization dude who knows things inside and out available to troubleshoot and perform root-cause analysis so we can get our environments back up and running as quickly as possible!
[symple_box color="yellow" fade_in="false" float="center" text_align="left" width=""]Just a disclaimer of sorts: every article, comment, rant or mention of SFD13 that you find here has been completed on my own merits. My travel, flights, accommodations, meals, drinks, gum, etc. were all paid for by Gestalt IT; however, I’m not required or obliged to return the favor in any way other than my presence 🙂 (which still feels weird to say 🙂). Well, my presence and possibly a little bit of maple syrup.[/symple_box]
Now, all that said, these problem situations don’t come up (I hope) that often, and coupled with the fact that we are seeing more and more “converged” support solutions, organizations can leverage their “one throat to choke” support call and get the specialists they need over the phone. This all takes them one step closer to being able to employ these “jack of all trades, master of none” personnel in their IT departments. But perhaps the biggest stepping stone toward eliminating these specialized roles is the new rage being pushed by IT vendors: a little concept called “Policy-Based Management”.
Enter NetApp SolidFire
Andy Banta from NetApp SolidFire spoke at Storage Field Day 13 about how they are utilizing policy-based management to make it easier and more efficient for everyday administrators to consume and manage their storage environments. I got the chance to sit as a delegate at SFD13 and watch his presentation, cleverly titled “The Death of the IT Storage Admin”, and if you fancy, you can see the complete recorded presentations here.
NetApp SolidFire is doing a lot of things right in terms of introducing efficiency into our environments and eliminating a lot of those difficult, mundane storage tasks that we used to see dedicated teams of specialized administrators perform. With that said, let’s take a look at a few of those tasks and explore how NetApp SolidFire, coupled with VMware’s VVOL integration, is providing policy-based automation around them.
Storage Provisioning
In the olden days (and I mean like five years ago) the way we went about provisioning storage to our VMware environments could be, how do I say this, a little bit inefficient. Traditionally, we as “generalized VMware administrators” would determine that we needed more storage. From there, we’d put a request out to the storage team to provision us a LUN. Normally, that storage team would come back with all sorts of questions: “How much performance do you need?”, “How much capacity do you need?”, “What type of transport mechanism would you like this storage delivered over?”, “What type of availability are you looking for?”. After answering (or sometimes lying) our way through these conversations, the storage team would FINALLY provision the LUN and zone it out to our hosts. We would then create our datastore, present it to our ESXi hosts, and away we’d go filling it up, only to come back to the storage team with the same request the very next month. It was not a fun experience, and it was highly inefficient.
VMware’s VVOLs are a foundation to help change this, and NetApp SolidFire has complete integration points into them. So, in true VVOLs fashion, we have our storage container, which consumes space from our SolidFire cluster on a per-VM/per-disk basis. What this means is that, as administrators, we simply assign a policy to our VM or to our VM disk, and from there our vmdk is provisioned automatically on the SolidFire cluster. No LUNs, no storage team conversations; all performed by our “generalized admin”, as the sketch below shows.
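To make that concrete, here is a rough, hypothetical sketch of what policy-driven provisioning can look like programmatically using pyVmomi. The vCenter hostname, credentials, SPBM profile ID and datastore name are all placeholders I’ve made up for illustration, not anything SolidFire-specific:

```python
# Hypothetical sketch: provisioning a VM against a VVOL container with an
# SPBM policy attached, using pyVmomi. Host, credentials, profile ID and
# datastore name below are placeholders, not real SolidFire values.
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")
datacenter = si.content.rootFolder.childEntity[0]
pool = datacenter.hostFolder.childEntity[0].resourcePool

# The profile ID would normally be looked up from the SPBM (pbm) endpoint;
# it identifies the policy (say, "Gold-QoS") built in the Web Client.
policy = vim.vm.DefinedProfileSpec(profileId="aa6d5a82-example-profile-id")

config = vim.vm.ConfigSpec(
    name="demo-vm",
    numCPUs=2,
    memoryMB=4096,
    guestId="otherGuest64",
    vmProfile=[policy],  # the policy drives placement on the VVOL container
    files=vim.vm.FileInfo(vmPathName="[solidfire-vvol-container]"),
)

# The VASA provider carves out the config and data VVOLs on the storage
# side automatically -- no LUN request, no zoning.
task = datacenter.vmFolder.CreateVM_Task(config=config, pool=pool)
```

The point being: the only storage decision the admin makes here is which policy to reference.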
Storage Performance/Quality of Service
Now, as far as VVOL capacity provisioning goes, there isn’t a whole lot that differs between SolidFire and other IT storage vendors, but when we get into QoS I think we can all agree that SolidFire takes a step above the crowd. SolidFire has always emphasized that application performance and quality of service are the single most important pieces of their storage, and with their VVOL implementation this is still true.
When setting up our policies within vSphere SPBM, NetApp SolidFire exposes a number of metrics and configuration options pertaining to QoS in our rule setup. We can configure settings allowing us to set minimum, maximum and burst IOPS on both our data VVOLs (the vmdks) as well as our configuration VVOLs (vmx, etc.). Once set up, we simply apply these policies to our VMs, and immediately we have assurance that certain VMs will always get the performance they need, or, on the flip side, that certain VMs will not be able to flood our storage, consuming IOPS and affecting their neighboring workloads. This is a really cool feature IMO; while I see a lot of vendors allowing certain disk-type placement for our VVOLs (placing a vmdk on SSD, SAS, etc.), I’ve not seen many that go as deep as SolidFire, allowing us to both guarantee and limit IOPS (see the sketch after this paragraph).
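For the curious, the knobs those SPBM rules ultimately drive are the same min/max/burst triple exposed by the SolidFire Element API. Here’s a minimal sketch using the open-source solidfire-sdk-python; the MVIP, credentials and volume ID are placeholders, and under VVOLs the VASA provider would normally be applying these values for you based on the policy:

```python
# Sketch of the QoS triple itself via the SolidFire Element API, using the
# open-source solidfire-sdk-python. The MVIP, credentials and volume ID are
# placeholders; with VVOLs, the VASA provider applies these values for you
# based on the SPBM policy assigned to the VM or disk.
from solidfire.factory import ElementFactory
from solidfire.models import QoS

sfe = ElementFactory.create("mvip.example.com", "admin", "secret")

# Guarantee 1,000 IOPS, cap sustained load at 5,000, allow bursts to 8,000.
qos = QoS(min_iops=1000, max_iops=5000, burst_iops=8000)

# Apply the new QoS settings to an existing volume.
sfe.modify_volume(volume_id=42, qos=qos)
```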
This essentially removes the complexity of troubleshooting the storage performance needs of, and constraints on, our workloads. The setup is all completed within the familiar vSphere Web Client (complete with a NetApp SolidFire plug-in) and is applied the same way you have always edited a VM’s settings.
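And because a policy is just another VM property, re-applying one to an existing disk can be scripted the same way you’d script any other settings change. Another hedged pyVmomi sketch, assuming `vm` is a VirtualMachine object you’ve already looked up, with a made-up profile ID:

```python
# Hedged sketch: re-applying a policy to one disk of an existing VM, the
# programmatic equivalent of editing the VM's settings in the Web Client.
# Assumes `vm` is a vim.VirtualMachine already retrieved via pyVmomi; the
# profile ID is a made-up placeholder.
from pyVmomi import vim

# Grab the first virtual disk on the VM.
disk = next(dev for dev in vm.config.hardware.device
            if isinstance(dev, vim.vm.device.VirtualDisk))

disk_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
    device=disk,
    profile=[vim.vm.DefinedProfileSpec(profileId="bb7e6b93-example-id")],
)

# One reconfigure task later, the disk's VVOL picks up the new policy.
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec]))
```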
So, is the storage admin dead?
NetApp SolidFire has definitely taken a good chunk of the storage admin’s duties away and put them in the laps of our generalized admins! And even though I haven’t covered it here, the scaling of a NetApp SolidFire cluster, as well as VASA provider failover, is also automated in some way within their product. So, yeah, I think they are on the right track: they have taken some very difficult and complex tasks and turned them into a simple policy. Now, I wouldn’t jump to conclusions and say that the storage admin is 100% dead (there are still a lot of storage complexities and a lot of storage-related tasks to do within the datacenter), but NetApp SolidFire has, how do I put this, maybe just put them into a pretty good coma and has them lying in a hospital bed! If you have made it this far I’d love to hear your take on things: leave a comment, hit me up on Twitter, whatever. Take a look at the NetApp SolidFire videos from SFD13 and let me know: do you think the storage admin is dead? Thanks for reading!